Monthly Archives: October 2009

For Good Measure: 01001001 01101110 01110000 01110101 01110100 00101111 01001111 01110101 01110100 01110000 01110101 01110100

The video below shows a promising early step in bridging the gap between the mind and technology (though not as “OMG brain-controlled stuff WOOT!” as the referring site would have you believe).

While the researchers draw the conclusion that this is revolutionary direct brain-to-brain communication, I’m not quite on board. First, all this demonstration really shows is a kludged system that let someone transmit a rudimentary thought to another human through a mediator (not directly) – in this case, the Internet. Here’s the gist of the research:

Researchers attach EEG sensors to subject 1, designed to interpret subject 1’s imagined raising of either the left or right arm as binary code – a 1 or a 0.

Meanwhile, subject 2 sits in a different location (covered by Mr. Shakey Cam), hooked up to EEG amplifiers while being flashed by an LED that subtly differentiates between the 1 and the 0.

Somehow, the data encoded in the flashed 1s and 0s is interpretable by subject 2’s brain. Subject 2 correctly interprets a series as transmitted by subject 1. 1-0-1-1 is transmitted, received, and decoded.
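The steps above amount to a tiny protocol: encode an imagined movement as a bit, send it over a network, and decode it on the far end. Here’s a toy sketch of that data path; the function names and the right-arm-equals-1 mapping are my own assumptions, not anything from the research itself:

```python
# Hypothetical sketch of the experiment's data path: imagined arm movements
# become bits, travel over a stand-in "Internet" link, and are decoded.

def encode_movement(imagined_arm: str) -> int:
    """Map an imagined arm movement to a bit (assumption: right = 1, left = 0)."""
    return 1 if imagined_arm == "right" else 0

def transmit(bits):
    """Stand-in for the Internet link; in the real rig, each bit drives an LED flash."""
    return list(bits)

def decode(bits) -> str:
    """Subject 2's side: recover the bit sequence from the received flashes."""
    return "".join(str(b) for b in bits)

# Subject 1 imagines a sequence of arm movements...
movements = ["right", "left", "right", "right"]
bits = [encode_movement(m) for m in movements]

# ...and subject 2's end recovers the series described above.
received = decode(transmit(bits))
print(received)  # -> 1011
```

The interesting (and entirely unexplained) part of the real experiment is, of course, everything this sketch hides inside `encode_movement` and `decode`.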

What I see in this is actually a bit more promising than brain-to-brain communication (which this isn’t), but it goes overlooked: this is brain-computer communication! Granted, the data transfer rate makes a 1200 baud modem seem like light speed, but it’s a start. If you count our impending transformation into a race of silicon-augmented humans as a good thing, this is a HUUUGE step. Plenty of research shows our brains are broadly sensitive to reading and writing by external parties. But to show that we can encode a specific thought process from which a computer can correctly extract instruction and meaning? That’s the encouraging thing to me.

I know I probably seem like I await the Singularity with at least unstable glee, and probably to some observers with unhealthy obsession. The reality (or at least my impression of reality) is that the Singularity isn’t a scary thing for me. I don’t think we’re looking at our impending doom, but instead at the next great step in civilization. Once thought merges so seamlessly with technology, it’ll be the next best thing to being everywhere at once. I’d have to think that such potential would be a transformative power in humanity.

But at a generous 3 seconds per 1 or 0, as seen in the video, it’s going to be a long time until I can transmit 01001101 01110010 00101110 00100000 01010111 01100001 01110100 01110011 01101111 01101110 00101100 00100000 01100011 01101111 01101101 01100101 00100000 01101000 01100101 01110010 01100101 00101110 (a message that would take just under 10 minutes) without breaking a sweat. Decode it yourself if you’d like to know what it says.
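If you’d rather let a machine do the decoding, a few lines of standard-library Python will translate the 8-bit ASCII above and check the back-of-envelope timing (the message string is copied verbatim from the post):

```python
# Decode the 8-bit ASCII message from the post and verify the timing estimate.
message = (
    "01001101 01110010 00101110 00100000 01010111 01100001 01110100 "
    "01110011 01101111 01101110 00101100 00100000 01100011 01101111 "
    "01101101 01100101 00100000 01101000 01100101 01110010 01100101 "
    "00101110"
)

# Each space-separated group is one byte; int(b, 2) parses it as binary.
groups = message.split()
decoded = "".join(chr(int(b, 2)) for b in groups)
print(decoded)

# At a generous 3 seconds per bit:
seconds = len(groups) * 8 * 3
print(f"{seconds / 60:.1f} minutes")  # just under 10, as promised
```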

Just think how long it would take to convince a murderous robot to spare your life.


Who makes Steve Guttenberg a star?

After reading other conversations about what Linked In means to classmates, I have another shot to take at the topic: Social media as modern-day secret handshake.

(Apologies for this being the best Simpsons Stonecutters video I can find)

In the classic episode of The Simpsons, Homer is unhappy to learn that he’s apparently the only man in Springfield who isn’t part of the Stonecutters, a secret order that is apparently responsible for everything that goes on in the world. Homer, at first on the outside, struggles with getting a local plumber to fix a flooded basement. He is only able to secure timely, competent help from the plumber after revealing he is a newly-minted Stonecutter, thereby entitling him to the benefits usually reserved for the elite. Is this what social media, especially Linked In, will do to the professional landscape of the next few years?

Taking a look at this from two points of view, I’ll start first with the apologist’s, which, in full disclosure, probably most closely resembles my own views. I see these new tools of social media/Web 2.0 as just that: tools. If you believe that genies do not go back into bottles, what’s done is done and we will do well to go with this flow. I have access to the same technology that anyone else does. As the apologist, I use it without further thought to my colleagues’ compatibilities or any concern for voluntary fair play. I understand that there are those who will shy away from this frontier of professional communication, but that’s not really my problem, is it? Ten years ago, access to computers and the Internet was something of a small club; computers were expensive, their usability didn’t encourage adoption, and the reward returned for investment of time honestly didn’t amount to much more than a few computer geek friends talking ad nauseam about last night’s hackfest on Diablo. Today, that’s not the case. A perfectly capable nettop computer can be had for under $400, less than some of the technologically-averse would drop on a month’s car payment. Operating systems are the most accessible they’ve ever been, and the robust offering of simple yet effective applications means no one can reasonably be left out in the cold. So throw off your inhibitions about Twitter, Facebook, or Linked In and join the rest of the online world. The connections you can make and maintain will do more than just kill time; they might help put you on an inside track at work. If you’re willing to put in quality e-suckup time, then why shouldn’t you benefit from your technological aptitude?

On the other side of the coin, we have the inclusionists: Not everyone is capable of or willing to engage in the same level of immersion, and since it doesn’t really reflect on professional aptitude one way or another, isn’t it just another type of Boys’ Only Club? If I’m a (stereotype alert!) twenty-year veteran of my office who just doesn’t give a damn about all that online stuff, isn’t it a type of discrimination to penalize me for not participating? That is what you’re doing, in effect, if I can’t connect on Facebook etc. with the others in the office who care about that kind of thing.

Taking from both sides, I can see validity in both points. As admitted before, my probable allegiance would be to the apologists, judging by how easily I was able to place myself in that position. But it’s always easier for the previously-initiated (Digital Native/Naturalized Citizen?) to call the newcomers out for not being willing to adapt. It’s also reasonable for the newcomers to expect equal reward for equal work. But is it really like that? Has it ever not been this way? Precedent doesn’t equate to acceptability, but it does make change more difficult. What would, or should, have to happen to level this playing field? Or is it a false assumption that leveling is desirable?

Oh, and because a shameless Simpsons quote is applicable here if no other time: Carl: “Oh and don’t bother calling 9-1-1 anymore. Here’s the real number.” (Homer is handed a slip of paper reading ‘9-1-2’)

For Good Measure: When the Singularity comes, plz to has mai life?

Just as a quick note for those of you who don’t know what the Singularity is – it’s the theoretical point when artificial intelligence will achieve parity in complexity and capability with human intelligence. It’s perpetually just over the hill, but it is pretty much an inevitability.

A lot of what you hear in the next breath after the term Singularity is Matrix- or Terminator-style lamentation that humans cannot coexist with a self-aware race of computers. I don’t agree. One of the great differences between us and any sort of AI on the foreseeable horizon is our predilection for emotion-based overreaction. I don’t mean to be glib, but as long as our existence doesn’t logically contradict existence for such AI, we don’t have much to worry about. Additionally, the popular cautionary tales we see in literature and film ensure that we’re designing AI with any variety of kill switches in mind.

So the more likely reality is that sometime soon(ish), we’re going to live in a world where intelligent computers are a reality. And while they await the completion of their ablative-armored, six-foot-tall, red-eyed, Austrian-accented physical bodies, our first interactions with them will probably be more like what we saw in class on Tuesday in Apple’s World of Tomorrow style future tech flick. They may even be cloud-residing digital dwellers who are simply our contemporaries from a universe we can’t comprehend. And we’ll be visitors in their native landscape. What kind of reaction will they have to us in this environment?

What implications might this have for our digital universe? Is that where the battle, if there is one, might be waged? We exist in two places at once right now: the digital and the analog. Even if our newly aware digital colleague has a physical body of sorts, it’s not designed to interact with its physical world. And aside from the occasional in-person input they may receive, they will have only our digital presences from which to parse out human behavior. They’ll have our postings on discussion forums to understand our attitudes towards interpersonal relations. They’ll have “Shit My Dad Says” to understand how we see the world. They’ll have millions of unflattering pictures of felines engaged in grammatically-challenged quests for self-discovery and cheezburgerz to understand our humor. That will be their only context for understanding us.

Well, on second thought, we’re all screwed.

Information on the Singularity:

Singularity Summit 2009