Science Forums

Almost Realtime “Perfect” Nervous System Reading Of A Tiny Worm


CraigD


Following up on this post I wrote challenging JimSolo’s claim that “intercepting neural pathways, or even tapping off those signals, is a completely new class of research,” I came across the work of Robert Prevedel and Young-Gyu Yoon, who in 2014 captured the activity of all of the neurons in C. elegans worms at a rate of up to 50 samples per second. Their paper, “Simultaneous whole-animal 3D-imaging of neuronal activity using light-field microscopy”, can be read free here. This MIT news article has a good summary of the work.

 

Here’re the essentials of how their system works:

  • The worms are infected with a virus that inserts a gene into their cells, which expresses a protein that fluoresces in the presence of the calcium ions involved in neuron firing.
  • Each of these flashes is detected by an array of light sensors, a technique known as light-field microscopy.
  • The data from these sensors are processed by a computer program to give a 3-dimensional moving image of the firing of all the worm’s neurons.
The computer in this system wasn’t fast enough to give a real-time moving image – it was off by about a factor of 1000, taking several minutes to process 1 second of data – but I believe it could be. Since the experiment was “read only”, real-time wasn’t necessary, so I imagine it wasn’t worth the additional cost and effort.
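As a rough illustration of what that processing step is doing, here’s a toy linear-inverse sketch – my own stand-in, not the authors’ actual 3-D deconvolution algorithm. The sensor array records a linear projection of the 3-D volume of neuron brightnesses, and reconstruction means inverting that projection. All sizes below are made up.

```python
import numpy as np

# Toy stand-in for the light-field reconstruction step: the sensor array
# measures a linear projection m = A @ v of the (flattened) 3-D volume of
# neuron brightnesses v, and the computer recovers v by solving the
# inverse problem. The real pipeline uses 3-D deconvolution; this
# least-squares sketch just shows the linear-inverse structure.
rng = np.random.default_rng(0)

n_voxels = 50          # hypothetical flattened 3-D volume
n_sensors = 200        # more sensor elements than voxels -> overdetermined

A = rng.standard_normal((n_sensors, n_voxels))   # stand-in optics model
v_true = rng.random(n_voxels)                    # true neuron brightnesses
m = A @ v_true                                   # simulated sensor readout

v_hat, *_ = np.linalg.lstsq(A, m, rcond=None)    # reconstruction
print(np.allclose(v_hat, v_true, atol=1e-6))     # True
```

In the real system this solve runs once per captured volume, up to 50 times per second of recording – which is where the factor-of-1000 compute gap comes from.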

 

The system has a resolution of about 2 μm (0.000002 m), high enough to image each individual neuron. It’s fast enough, at up to 50 Hz, to detect each individual firing (C. elegans’s neurons are slower than some human ones, about 0.02 s vs up to 0.002 s). So I’d describe it as “perfect” in spatial and temporal resolution.
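A quick sanity check on those numbers, using only the rates and timescales quoted above:

```python
# At 50 Hz the interval between volume captures is 0.02 s, matching the
# ~0.02 s timescale of C. elegans neural events quoted above. The fastest
# human spikes (~0.002 s) would need a correspondingly faster capture rate.
sample_rate_hz = 50
worm_event_s = 0.02
human_event_s = 0.002

sample_interval_s = 1 / sample_rate_hz
print(sample_interval_s <= worm_event_s)   # True: one capture per worm event
print(round(1 / human_event_s))            # 500 -- Hz needed for human-speed spikes
```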

 

The news article mentions that the researchers plan to combine this “reading” technique with optogenetic techniques to allow “writing” – that is, causing individual neurons to fire on command – but the Google Scholar pages for Prevedel and Yoon don’t show anything for this yet.

 

So I wonder if realtime LFM like this could be an enabling technology for something like the fictional NerveGear?

 

A C. elegans has exactly 302 neurons, compared to a variable number around 100,000,000,000 in a human brain, so at the least, such a system would have to be scaled up. A bigger problem is that C. elegans is essentially transparent and very slender, while the human brain and braincase are thick and opaque to the visible light emitted by fluorescent proteins like the NLS-GCaMP5K Prevedel and Yoon used. I can’t imagine engineering a protein that could emit light that could penetrate the skull – such light would have to be ionizing radiation (eg: x-rays), which you wouldn’t want – so once again, I’m smacked with the need to stick stuff through the skull and into the brain. There just doesn’t seem to be any way around this.
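Just to put a number on that scale-up, using the two neuron counts above:

```python
# Rough scale of the worm-to-human jump, using the figures quoted above.
worm_neurons = 302
human_neurons = 100_000_000_000   # ~1e11

scale_factor = human_neurons / worm_neurons
print(f"{scale_factor:.1e}")      # 3.3e+08 -- hundreds of millions of times more neurons
```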

 

Compared to schemes that involve using huge numbers of electrically conductive electrodes (like the one described here), though, LFM is attractive. Though the array Prevedel and Yoon used had more sensor elements than C. elegans has neurons, I think in principle an array of a few hundred sensors could realtime-image millions of neurons, so the system could be much smaller.
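One hedged way to make that “few hundred sensors, millions of neurons” intuition concrete: if only a small fraction of neurons fire in any one frame, compressed-sensing theory says roughly k·log(n/k) linear measurements suffice to recover a k-sparse signal of length n. The neuron count and sparsity level below are purely illustrative, and this is a rule of thumb, not a claim about the actual optics.

```python
import math

# Compressed-sensing rule of thumb: a k-sparse signal of length n can be
# recovered from on the order of m ~ k * log(n / k) linear measurements.
# Illustrative numbers only.
n = 1_000_000          # hypothetical neuron count
k = 50                 # hypothetical neurons active in any one frame

m = k * math.log(n / k)
print(round(m))        # 495 -- i.e. a few hundred sensors
```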


This is amazing. Both the scientific achievement and the fact that there is some actual scientific information on the FullDive forum. Not that I'm bashing anyone else, but most other posts can be very easily misconstrued as shams, scams or hoaxes due to a seeming abundance of teenagers and lack of science.

 

I can’t imagine engineering a protein that could emit light that could penetrate the skull – such light would have to be ionizing radiation (eg: x-rays), which you wouldn’t want – so once again, I’m smacked with the need to stick stuff through the skull and into the brain. There just doesn’t seem to be any way around this.

 

Concerning this, I think a lot of research is going into doing just that, though from the other direction, that is: writing to the brain. The United States' DARPA has invested what seems to be a lot in doing this using optogenetics, which entails adding a gene that expresses a light-sensitive protein, causing the associated neuron to fire when exposed to a certain wavelength of light. Granted, DARPA plans on inserting a so-called "cortical modem" in the brain to handle the lighting of neurons based on wireless signals, but the idea is apparently sound. I think this also partially addresses the radiation concern.
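Here’s a toy model of the optogenetic “write” path that the cortical-modem idea implies: an incoming bit decides whether a light pulse is delivered, and an opsin-expressing neuron fires only when illuminated at its sensitive wavelength. The wavelength and function names are mine for illustration, not anything from DARPA’s design.

```python
from typing import Optional

# Toy optogenetic write path: bit -> light pulse (or none) -> neuron
# fires (or doesn't). Channelrhodopsin-2, a commonly used opsin, is
# sensitive to blue light; everything else here is illustrative.
OPSIN_WAVELENGTH_NM = 470

def neuron_fires(has_opsin: bool, light_nm: Optional[int]) -> bool:
    """An opsin-expressing neuron fires iff lit at its sensitive wavelength."""
    return has_opsin and light_nm == OPSIN_WAVELENGTH_NM

def write_bit(bit: int) -> bool:
    """Modem side: a 1-bit triggers a blue pulse; a 0-bit sends no light."""
    pulse = OPSIN_WAVELENGTH_NM if bit else None
    return neuron_fires(has_opsin=True, light_nm=pulse)

print([write_bit(b) for b in (1, 0, 1)])   # [True, False, True]
```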

 

Early last month, a doctor at the Retina Foundation of the Southwest ran an optogenetic clinical trial on 15 nearly blind people – the first human trials in the field. The gene was targeted at the nerves near the eyes, not the brain itself, and no results have been published (or are expected for the next few months), but it would seem we're seeing progress on the writing front of FullDive tech.

 

I'm in full agreement that at the moment, any form of reading and writing would require, as you poetically put it, "the need to stick stuff through the skull and into the brain."


I'm in full agreement that at the moment, any form of reading and writing would require, as you poetically put it, "the need to stick stuff through the skull and into the brain."

I think I was thinking too narrowly when I wrote that. Though I think you must get “stuff” – electrodes, or if optogenetics are used, fiber-optic light pipes – into the brain, present-day neurosurgeons are already doing this while going around the skull, by threading fine insulated wires connected to “stentrodes” – an amalgamation of “stent” and “electrode” – through blood vessels into the brain. According to this 2/9/16 MIT Technology Review article and this 4/18/15 Nature article (subscription required), one of these was used to get 180 days of “high fidelity” data from a freely moving sheep.

 

My favorite idea for “wiring the brain” at the resolution needed for an effectively perfect read/write brain-computer interface is to use as-yet-nonexistent, self-implanting nanoscopic electrodes, as I’ve described in this, this, and this post. A key assumption in these imaginings of mine is that the best approach is to minimize the length of each “nanofiber” by having them pass from a helmet-shaped device directly through the scalp and skull to the target neuron. This may be a bad assumption – perhaps the nanofibers would do better to thread the brain’s vascular system, starting from a collar-shaped device.

 

The key enabling technology I’m imagining here is something I call “2-dimensional Drexlerian nanotech”, meaning the machines involved are nanoscopic (~10⁻⁸ m) in 2-D cross section but macroscopic (~0.1 m) in 1-D length. The idea is media-neutral – I’m imagining atom-by-atom building of a motile fiber able to steer itself while having force supplied along its macroscopic long axis, sort of a “nanoscopic nail”.
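Putting a number on just how extreme that geometry is, using the two figures above:

```python
# Aspect ratio implied by "2-D Drexlerian nanotech": nanoscopic in
# cross-section, macroscopic in length (figures quoted above).
cross_section_m = 1e-8   # ~10 nm across
length_m = 0.1           # ~10 cm long

aspect_ratio = length_m / cross_section_m
print(f"{aspect_ratio:.0e}")   # 1e+07 -- ten million times longer than wide
```

For comparison, a human hair (~1e-4 m across, ~0.1 m long) has an aspect ratio of only about a thousand.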


My favorite idea for “wiring the brain” at the resolution needed for an effectively perfect read/write brain-computer interface is to use as-yet-nonexistent, self-implanting nanoscopic electrodes, as I’ve described in this, this, and this post. A key assumption in these imaginings of mine is that the best approach is to minimize the length of each “nanofiber” by having them pass from a helmet-shaped device directly through the scalp and skull to the target neuron. This may be a bad assumption – perhaps the nanofibers would do better to thread the brain’s vascular system, starting from a collar-shaped device.

 

Yes, I've read several of your posts concerning hybrid nano-macro electrodes and I think the idea is inspired. I do, however, believe that the effort required to develop such technology would only come to fruition after a Sword Art Online-type FullDive device like the NerveGear, although that's just speculation on my part. It just intuitively seems easier to me to develop higher-fidelity wireless reading/writing devices than to have something semi-autonomously invade the brain, not to mention safer.

 

That said, there has been research into using ultrasound as a more precise alternative to EEG for non-invasive brain scanning, though I can't say I fully understand it myself. All I can really say with surety is that it's more precise but a lot slower than typical brain-reading methods. The point is, I think non-invasive measures are still the next step on this road, with your suggestion maybe coming in later down the line. Though I may be biased by the SAO and Accel World continuity.

 

EDIT: I recently discovered that there is an ongoing project to combine ultrasound with NIRS "to achieve greater speed, accuracy, and detail in monitoring brain activity than either approach could provide alone."

Edited by FullDiver

Yes, I've read several of your posts concerning hybrid nano-macro electrodes and I think the idea is inspired. I do, however, believe that the effort required to develop such technology would only come to fruition after a Sword Art Online-type FullDive device like the NerveGear, although that's just speculation on my part. It just intuitively seems easier to me to develop higher-fidelity wireless reading/writing devices than to have something semi-autonomously invade the brain, not to mention safer.

Hi-fi wireless BCIs would be wonderful. It’s clearly what Kawahara imagined when he wrote the SAO light novels, which became the anime that inspired most of the readers of this forum. However, I’m skeptical that such a scheme using any sort of near-future technology will be able to much improve on systems like the famous (at least with BCI enthusiasts ;) ) “brain-to-brain communication” system demonstrated in 2014 (see “Conscious brain-to-brain communication in humans using non-invasive technologies” (19 Aug 2014) by Grau C, Ginhoux R, Riera A, Nguyen TL, Chauvat H, Berg M, Amengual JL, Pascual-Leone A, and Ruffini G, or this IO9 story). Grau and his colleagues used off-the-shelf EEG hardware and a computer program to detect whether a “sender” subject was thinking intently about the word “hola” or “ciao”, and accordingly activated or didn’t activate an off-the-shelf TMS device next to the head of a “receiver” subject, who then experienced or didn’t experience a phosphene, allowing them to know which word the sender was sending.
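The channel Grau’s group built reduces to something like the toy below: map each word to a bit string, deliver each bit as phosphene (1) or no phosphene (0), and decode on the receiving end. The 2-bit codebook here is my own illustration; the paper used its own binary coding of the words.

```python
# Toy version of the EEG->TMS "brain-to-brain" channel: words become
# bit strings, each bit is felt (phosphene) or not felt by the receiver,
# and the bits decode back to the word. Codebook is hypothetical.
CODEBOOK = {"hola": (1, 0), "ciao": (0, 1)}
DECODE = {bits: word for word, bits in CODEBOOK.items()}

def transmit(word: str) -> tuple:
    """Sender side: EEG detects intent; each bit drives one TMS pulse decision."""
    return CODEBOOK[word]

def receive(phosphenes: tuple) -> str:
    """Receiver side: felt/not-felt phosphenes decode back to the word."""
    return DECODE[phosphenes]

print(receive(transmit("hola")))   # hola
```

Note what this implies about bandwidth: one conscious phosphene judgment per bit, which is exactly why scaling it to anything game-like looks so hard.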

 

Being a true read+write system, this is a major improvement over existing, read-only commercial EEG-based systems like the EPOC (or its cheaper relatives, the Mindflex toy, Neurosky, etc.), and it’s already proving valuable in improving the quality of life of people crippled by brain and spine injuries. Still, I think hard physics limits on the resolution of nonintrusive technologies like EEG and TMS will never permit their bandwidth to be scaled up and their resolution scaled down enough to be used for a game interface like SAO’s NerveGear.
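A back-of-envelope sketch of why the scaling looks hopeless to me. Every number below is a labeled assumption of mine, chosen generously in EEG’s favor:

```python
# Compare an optimistic EEG information rate against per-neuron
# read/write of even a tiny fraction of the brain. All numbers are
# rough, illustrative assumptions.
eeg_channels = 64                      # a high-end consumer/research cap
eeg_rate_hz = 256                      # optimistic usable samples/s/channel
eeg_bits_s = eeg_channels * eeg_rate_hz

neurons_addressed = 1_000_000          # hypothetical: ~1e-5 of the brain
spike_rate_hz = 10                     # order-of-magnitude events/s/neuron
neural_bits_s = neurons_addressed * spike_rate_hz

print(neural_bits_s // eeg_bits_s)     # 610 -- shortfall factor, even on these terms
```

And that flatters EEG badly, since its channels don’t resolve individual neurons at all.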

 

A truly non-intrusive system will, I think, require a technology far beyond our current physics, if it’s even possible in principle.

 

So my argument for the feasibility of something like the “nanofiber” scheme I’ve described is that having your brain skewered with many thousands or more of nanoscopic electrodes, despite how intrusive it sounds, could actually be considered “non-intrusive”, because it could be painless and practically non-detectable – more so than a technically non-intrusive but actually dangerous system like an x-ray camera.

 

In the near term – the next 10 years or so – I expect that VR systems like the one recently demonstrated at the “Sword Art Online – The Beginning” alpha-test events will be state-of-the-art, and may be successfully commoditized into devices ordinary people can buy and play, possibly with the addition of detailed haptic feedback via a body-fitting suit.

 

Consumer reaction is critical. As I’ve stressed before, there’s good evidence that consumers may prefer less immersive systems to more immersive ones, as witnessed by the persistent popularity of keyboard/controller-and-screen game interfaces despite 20+ years of availability of far more immersive VR systems. So even modest VR systems like the Oculus Rift may not be commercially successful, in which case state-of-the-art ones like the SAO–TB alpha-test system likely will never see the commercial market.

 

Depending on how well it’s made, I think the movie adaptation of Ready Player One, due in 2017, may have a strong influence on consumer enthusiasm for VR. Since, unlike SAO, it describes a physically possible, highly immersive VR system, this influence could be much more impactful than the SAO anime’s.

