
Remote Controlled Mice Today, Remote Controlled Humans Tomorrow
By BARRACUDA (Reporter)

You may not have to wait until the year 2154 for your own remote-controlled body. In this week's excerpt, Mark Stephen Meadows discusses wetware technology and how the science fiction of Avatar is quickly becoming science fact.

During a radio interview in December of 2009, I was asked, "Do you think the vision of Avatar is something we'll see in the future?" I paused for a second and made my Jake Sully wish list. What do we need to make Avatar happen, roughly?
First is data transfer; you have to be able to drive the system at a distance. Myoelectrics and BMIs (brain-machine interfaces) can work locally, and we've seen that they can also work at a distance. So, remote control: we've seen the U.S. Army driving UAVs this way. Check.
Second is output. You have to think to affect the interface. You'll be lying down in a tank and you'll be rigged up to some kind of myoelectric or BMI (or combination thereof) interface. We've seen Cyberdyne and Honda both driving robots this way. Check.
Third is input. Pumping the arms and legs is one thing, but there's a bigger trick of moving sensory data into your head. Moving data into your little vampire-coffin isn't the problem, but getting visual data into your eye could be. Retinal implants and cochlear implants are already functioning today, so it seems that visual or auditory information can be converted between analog and digital and sent into and out of the brain. Now, whether we end up having to break the skin to get it there is another question, but with that magic 144 years of future stirred in, let's call it a check. So those are the outlines for a remote neuroprosthetic.
Fourth is the system: the avatar itself. (A rough sketch of how these four pieces might fit together follows below.)
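Below is a conceptual sketch, in Python, of how those four pieces might be wired into a single loop. Every class, function, and field name in it is invented here for illustration; none of it refers to a real BMI or robotics API, and it is not drawn from Meadows's book.

from dataclasses import dataclass

@dataclass
class MotorCommand:
    joint_targets: list[float]   # what the decoded myoelectric/BMI signals ask the remote body to do

@dataclass
class SensoryFrame:
    video: bytes   # would ultimately feed a display or, someday, a retinal implant
    audio: bytes   # likewise a speaker or a cochlear implant

class RemoteAvatarLink:
    """Item one: the data-transfer channel between operator and remote body."""
    def send(self, cmd: MotorCommand) -> None:
        raise NotImplementedError   # radio, network, whatever carries the command out

    def receive(self) -> SensoryFrame:
        raise NotImplementedError   # the avatar's senses coming back the other way

def operator_loop(decode_bmi, present_to_operator, link: RemoteAvatarLink) -> None:
    """One pass per tick: thought goes out, sensation comes back."""
    while True:
        cmd = decode_bmi()           # item two: output, a thought becomes a command
        link.send(cmd)               # item one: drive the system at a distance
        frame = link.receive()       # item four: the avatar itself acts and reports back
        present_to_operator(frame)   # item three: input, sensory data into the head

The hard engineering lives inside decode_bmi and present_to_operator; the loop itself is just the data transfer of item one, repeated quickly enough to feel like being there.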
***

I have to pause for a moment and tell you about one of the weirdest things I've come across in my travels, which is the notion of exactly what is needed for item number four. It is called a hybrot.
In the early 1990s, a number of scientists managed to establish a dialogue between a computer simulation and a wad of neurons in a Petri dish. The technique is literally called "dynamic clamping," and it works by taking a cluster of brain cells and soaking them in chemicals to tease them apart. Then, by chemically welding them to an electrical circuit board, you can measure the input membrane potential from one neuron and inject the output (the current from that neuron) into another. Hijacking the current, you can then interface it with a standard computer. It's a simple idea that presents a pretty reductionist view of the brain as a linking of inputs and outputs. The dynamic clamp method can be extended from the cellular level to the systems level, artificially monitoring and constraining the relationship between the neural system, the computer, and the behavior. It's wetware hacking.
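To make the idea concrete, here is a minimal sketch of a dynamic-clamp loop of that kind, written in Python under stated assumptions: the daq_read_voltage and daq_write_current helpers are hypothetical stand-ins for whatever acquisition hardware a real rig uses, and the conductance, reversal potential, and threshold values are placeholders rather than numbers from Meadows's text.

import time

# All names and numbers below are illustrative placeholders, not a real lab API.
G_SYN = 5e-9       # artificial synaptic conductance, in siemens (assumed)
E_SYN = 0.0        # synaptic reversal potential, in volts (assumed)
V_THRESH = -0.020  # presynaptic voltage that "opens" the artificial synapse, in volts
DT = 1e-4          # loop interval; real dynamic clamps update at roughly 10 kHz or faster

def daq_read_voltage(channel: int) -> float:
    """Placeholder for reading a membrane potential from the amplifier."""
    raise NotImplementedError

def daq_write_current(channel: int, amps: float) -> None:
    """Placeholder for commanding a current injection into a cell."""
    raise NotImplementedError

def artificial_synapse_loop() -> None:
    """Measure neuron A, compute a synaptic current, inject it into neuron B."""
    while True:
        v_pre = daq_read_voltage(channel=0)    # neuron A's membrane potential
        v_post = daq_read_voltage(channel=1)   # neuron B's membrane potential
        # The artificial synapse conducts only while neuron A is depolarized.
        s = 1.0 if v_pre > V_THRESH else 0.0
        # Ohmic synapse model: the injected current depends on neuron B's own voltage.
        i_syn = G_SYN * s * (E_SYN - v_post)
        daq_write_current(channel=1, amps=i_syn)
        time.sleep(DT)   # a real rig would use a hard real-time loop, not sleep()

The point is the same reductionist one Meadows makes: the computer only ever sees voltages coming in and currents going out, which is exactly what lets living neurons and a simulation be treated as interchangeable parts of one circuit.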

READ MORE HERE

Full credit and thanks to the original sources:
http://www.b4in.mobi/story/
http://gizmodo.com/
