Although you are correct that all humans will have different layouts at the neuron level, the brain is still quite similar between individuals. The brain, as far as I know at least, has localized sets of neurons that perform certain functions; these "sections" of the brain are similar in both size and location between individuals (and feel free to correct me if I'm wrong, guys).
That’s correct. It makes a lot of intuitive sense when you consider that the gross (larger-scale) structure of the brain is controlled by our genes, and only tiny differences in these genes exist between individual humans, and only small differences between humans and many other animals. Our brains are not very dissimilar to those of chimpanzees (differing mostly in size), or even house cats (differing mostly in the relative size of various regions).
On a sufficiently small scale, the connections among our roughly 100,000,000,000 neurons, and the molecules that bias their excitatory and inhibitory sensitivity to their neighbors, differ very significantly, because this is how our conscious and unconscious memories, personality traits, etc., are stored. Structure on this scale forms much more rapidly, and is due much more to experience delivered by our senses than to our genes.
But like I have said many times before, the problem with this technology lies not with the actual means of application or the concept itself, but in the calibration of said systems to function as an extension of the user.
I disagree. I think the actual means by which a brain-machine interface is accomplished is the most critical and immediate part of the problem, because until it’s sufficiently solved, no input or output data exists to calibrate. Most non-neurologists, I think, greatly underappreciate how difficult this problem is.
I read an article a few weeks back about how the Massachusetts Institute of Technology (MIT) had successfully designed a system of very small "tubes" that could be inserted into the brain to deliver medicine, but they also said it could house wires instead. Assuming that this technology is past the concept stage and has entered or will soon enter trials, we may see this technology emerge sooner than we thought.
I think you’re referring to recent work by Polina Anikeeva and colleagues, described in articles such as this 19 Jan 2015 MIT news article, and her 31 Jul 2015 TED talk.
A big thank you for leading me to this, NotBrad!
For a BCI fan like me (and, I assume, most of the people reading or posting in this forum) this is wonderful and exciting stuff.
From what I’ve read and watched, the thrust of Anikeeva’s approach to date involves a combination of optogenetics (which inserts genes into neurons to make them sensitive to, and able to emit, visible light, though I believe Anikeeva’s work involves only making neurons sensitive to light, reading their activity electrically) and fine (0.00001 to 0.0001 m, about the diameter of a hair) polymer fibers that can act as light guides, electrodes, and drug-delivering tubes. Much finer than the common commercial chronic electrodes used to treat diseases like Parkinson’s (which are about 0.001 m), these fibers are also “softer” and more flexible, similar to small brain blood vessels, so they damage the brain less – though it’s still necessary to surgically penetrate the skull to implant them.
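To put those diameters in perspective, here’s a quick back-of-the-envelope comparison. The 0.00005 m fiber diameter is just an assumed mid-range value from the 0.00001–0.0001 m figures above, not a number from Anikeeva’s papers:

```python
import math

def cross_section_mm2(diameter_m):
    """Cross-sectional area of a cylindrical probe, converted to mm^2."""
    radius = diameter_m / 2
    return math.pi * radius ** 2 * 1e6  # m^2 -> mm^2

fiber = cross_section_mm2(50e-6)  # mid-range polymer fiber, ~0.00005 m
dbs = cross_section_mm2(1e-3)     # conventional DBS electrode, ~0.001 m
print(round(dbs / fiber))         # -> 400
```

So a mid-range fiber displaces roughly 1/400 the tissue of a conventional DBS electrode of the same length, which is why the damage argument is about area, not just diameter.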
At about 11:30 in her TED video, Anikeeva talks about an entirely different, “wireless” approach. This involves injecting a solution of magnetic nanoparticles near specific neurons, then heating them with a changing magnetic field, which can produce an effect similar to the natural excitation of neurons. While interesting and, I gather from the video, actually achieved, I don’t see how this could be used to create a BCI like the NerveGear, and while it’s “wireless”, brain injections are far from nonintrusive.
Some more specific education/career advice: go to MIT and get on Anikeeva’s team. She’s under 35 years old, so she will likely be in the field for many years to come.
But the body is an incredible machine, and is capable of adapting to things like bionic limbs. These bionic limbs are not a "plug and play" system like what is seen in Sword Art Online, but they certainly are a proof of concept. I imagine that when this technology becomes a reality, a lot of work will have to go into learning how to use the device. Because let's face it, unless it is a system that works like a limb and is learned by the user, there will always be the potential for hacking into said device to harm the user.
Any device that can raise substantial voltages across neuron membranes, including present-day DBS systems and experimental systems like the Dobelle Eye, can be harmful to the user if it malfunctions, accidentally or maliciously, causing seizures and other brain dysfunction. Present-day DBS systems have potentially serious psychiatric side effects, including such things as compulsive gambling (which Anikeeva mentions in her TED talk) and hypersexuality. These problems with present-day systems are due to a lack of understanding of how they work and to their coarse spatial resolution, but even with improvements in these areas, the potential for short- or long-term adverse effects is innate to a system with the ability to directly affect brain neurons.
It is, of course, possible to avoid this by not directly affecting the brain, but rather stimulating peripheral nerves. I don’t see much advantage in doing this other than through the already well-tuned sensory organs – eyes, ears, touch and heat receptors, etc.
As for reading and stimulating the wrong nerves, and the user training themselves to compensate, I don’t see much utility in this, other than for controlling prosthetic limbs, where the right nerves have been lost or are not available due to a lack of surgical technology. If you are going to train surrogate nerves to control and sense unusual real or virtual body parts, I think you’d do best to use those most capable – the fingers and hands. That is essentially what present-day force-feedback video game and remote-control system controllers – the present-day epitome of which are many-axis haptic control systems like the ones I mention in this post – do. After a short period of training, the user of a typical video game controller feels fairly immersed in the game. I imagine the user of a more advanced system like the CyberGlove Haptic Workstation would feel even more immersed, and be capable of controlling a virtual-world avatar with much greater precision and realism than in ordinary present-day video games.
After reading your comment CraigD, I think that the device concept is physically possible...
What I meant by “such a device” in “it’s still uncertain if any such device is physically possible” is an entirely non-intrusive system – one that doesn’t require sticking any physical object, even a micro- or nanoscopic one, into the user’s brain.
...but the sheer size of said device would be closer to this
Or maybe this: an actual image of arguably the best truly non-intrusive neural-activity imaging device presently available, a MEG.
In the case of MEG, the technology can likely be greatly miniaturized by using SERF magnetometers rather than the usual SQUIDs. Most of the size of a MEG machine is due to the cooling system needed for its SQUIDs. This link from the Wikipedia article suggests that a complete SERF unit could be as small as 1 mm³, no larger than a typical EEG electrode.
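For a rough sense of the gain, here’s the arithmetic. The 300-channel figure is my assumption (commercial whole-head MEG systems run roughly 100–300 channels); the 1 mm³ per complete sensor unit is the figure from the link above:

```python
# Back-of-the-envelope: total sensor volume for a hypothetical wearable
# MEG built from 1 mm^3 magnetometer units. The channel count is an
# assumption, not a spec of any real system.
n_channels = 300
unit_volume_mm3 = 1.0
total_mm3 = n_channels * unit_volume_mm3
print(total_mm3)  # -> 300.0 mm^3, under a third of a cubic centimeter
```

In other words, the sensing itself could fit in a cap; the room-sized bulk of today’s machines is cooling, not sensing.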
Plus the radiation required to constantly manipulate the brain would likely cause cancer within a few months of use, if not a few days.
As long as it’s not ionizing, radiation won’t itself cause cancer or other disease.
The only radiation to which any nerve cells (such as naturally occurring retinal cells in the eye, or cells artificially altered via optogenetic techniques) are sensitive is in the visible spectrum, so EM radiation used to manipulate the brain won’t directly harm it.
This isn’t true of radiation used to image the brain, such as the X-rays used in CAT scanners, which is ionizing.
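The ionizing/non-ionizing distinction comes straight from photon energy, E = hc/λ: ionizing biological molecules takes very roughly 10 eV or more (the exact threshold varies by molecule, so treat that cutoff as a rule of thumb), while visible photons carry only a couple of eV. A quick check:

```python
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of one photon of the given wavelength, in electronvolts."""
    return H * C / wavelength_m / EV

visible = photon_energy_ev(550e-9)  # green light: ~2.3 eV, non-ionizing
xray = photon_energy_ev(0.1e-9)     # a 0.1 nm X-ray: ~12,400 eV, ionizing
print(visible, xray)
```

So the optogenetic stimulation light is about four orders of magnitude below X-ray photon energies, which is why it can drive neurons without the cancer risk the quoted post worries about.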