Monthly Archives: September 2009
As I alluded to in a previous post, I posit that it may not be long before our relationship with technology reaches a threshold. Whenever I spout off about this topic I tend to get funny looks, but I encourage everyone to bear with me.
Think about our relationship with technology in just the past fifty years. If I may be poetic, we’ve progressed from computers that filled buildings yet could run only the most basic operations to carrying around smartphones with processors many orders of magnitude more capable than the Apollo 11 Command Module’s on-board computer. We’ve witnessed the rise of the ARPANET, the Internet, Web 2.0 (a term I still kinda dislike), and real-time photo-realistic graphics rendering. More information streams across global distances faster each day. And per Moore’s Law, the transistor density driving this capacity doubles roughly every 18 months to two years.
As far as our behavior goes, we are the most digitally and socially integrated group of humans ever, and that trend shows no sign of abating. We constantly devise new ways to use technology to keep in contact with our circle of friends, monitor our world, and create new digital worlds to escape into. So I ask – absolutely, 100%, I kid you not – seriously: how long until we become a race of cyborgs?
Just to be absolutely clear: I do NOT mean this.
It’s not a big leap. In fact, with the above trends factored in, what used to be a chasm is rapidly shrinking into a mere crack in the sidewalk. If we keep carrying ever more powerful and capable technology with us, keep increasing our reliance on digital maintenance of our knowledge base, and keep pushing nanotechnology as far as it will go, it can’t be long. Our generation might be cautious but convincible, and older generations probably won’t adopt, but what about those yet to come? There will come a point when I won’t be content to physically interact with a device when I know a more accurate, more efficient route exists directly through my brain. It started with increasingly portable yet capable laptops and phones. The next step will be wearable but separate tech that makes these functions more permanently accessible. After that will come technology that passively observes our nervous system to translate our will into physical action. (Actually, this device might already do that.) Once we’ve contented ourselves with using trained subconscious eye movements to call up information on a HUD before our eyes, what else could the next step be?
Integration isn’t going to happen overnight. In fact, many of these steps have already been taken, in crude ways, to help those with injuries or disabilities manage some life functions through alternative means. As we continue to refine these technologies and guide them into more effective use, they will pass the mark of parity with the biological and move on to technological superiority. At that point, elective adoption of these technologies will increase, and humans will gradually alter themselves for the simple, objective benefits afforded.
I won’t commit the classic folly of assigning a specific timeframe for when I think this will come to pass, but I will offer that I sincerely believe it will be well within my lifetime. When we reach this threshold, what reason will we have as a culture to ignore it and say we will go no further?
In response to Tuesday’s class discussion and Dr. Schirmer’s article, “The Personal as Public: Identity Construction/Fragmentation Online,” I’d like to propose another way of looking at how we present ourselves online: facets.
Regarding the Prensky article, “Digital Natives, Digital Immigrants,” I’m going to explore some things I didn’t get to on other classmates’ blogs.
(CRAP! I’ve had this sitting as a draft since Tuesday.)
My response to Zappen’s “Digital Rhetoric: Toward an Integrated Theory” was lukewarm. Maybe I’ve completely missed the point – and perhaps I can be forgiven this, because the point seems so buried – but it seems to me the takeaway is that new digital forms of expression and rhetoric are only superficially different from the traditional notions. Assuming I’m on target, I’ll proceed.
Well, duh. I don’t really see how this manner of communication could differ in essence based on the medium. The point, the essence, is the exchange. Be it 140 characters at a time tapped out on the subway or at length in peer-reviewed journals, discourse is discourse is discourse. The legitimacy bestowed on it may make it more professional or more suitable for specific applications, but it is still discourse, and it is still valuable.
It seems one point being made is that an aspect of so-called “New Media” is a scientific reproduction or affectation of traditional media – a mathematical breakdown of something like a video or image. That strikes me as odd. In reality, EVERYTHING we’ve ever done could potentially fall under this definition if the parameters remain this loose and computing power continues its unabated expansion. Just as a mathematical equation can reproduce the display of an image by orienting millions of small dots of color, so too could the more physical characteristics be included. A painting is not just the colors used and their relative orientation – it’s the way light plays off the inconsistencies in the pigment, or how different canvases interact with different paints. There are mathematical ways to describe all of this. So, given time, wouldn’t a) the processing power needed to render a three-dimensional digital expression of all this data and b) continuing advances in physical reproduction (such as 3-D printers) eventually produce a 100% flawless copy of an analog original? No doubt reaching this level of sophistication would take time, but since it couldn’t be achieved by a human hand, wouldn’t this also be New Media?
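The claim that an image reduces to mathematics is easy to make concrete. Here is a minimal sketch (Python, purely illustrative – the numbers and names are my own, not anything from Zappen): a tiny “painting” is just a grid of red/green/blue values, and an “edit” is nothing more than arithmetic on those values.

```python
# A digital image is literally a grid of numbers: each pixel is a
# (red, green, blue) triple. Here is a toy 2x2 "image" as nested lists.
image = [
    [(255, 0, 0), (0, 255, 0)],      # row 0: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # row 1: a blue pixel, a white pixel
]

# "Editing" the image is just arithmetic on those numbers; for example,
# darkening it by half means halving every color channel of every pixel.
darkened = [
    [tuple(channel // 2 for channel in pixel) for pixel in row]
    for row in image
]

print(darkened[0][0])  # the red pixel at half brightness: (127, 0, 0)
```

Scale this grid up to millions of pixels and you have exactly the “mathematical breakdown” of an image the article describes – which is why, in principle, any measurable property of a painting could be encoded the same way.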
Furthermore, there is a reference to Alan Turing, but nothing said of the famous Turing test. For those who may not know, the Turing test holds that a sufficiently complex and well-programmed computer should be able to carry on a conversation with a human as convincingly as another human could. No machine can claim that honor yet, but this, too, must be only a matter of time. Then media will be capable of expanding into yet another realm it has been shut out of thus far: human-like, human-to-human communication.