Science Forums

How Long Until We Could Make A Real Sword Art Online (SAO) Nerve Gear Type Device?


jacob2mitts

Recommended Posts

Moderation note: This is the original post that led to the creation of the “FullDive Technology” subforum. Posts have been moved from it into other threads, according to subject, to make them shorter and more readable.

This thread is for discussion of how to make an actual brain-computer interface like the NerveGear shown in the Sword Art Online anime.

 

If you have seen the hit anime Sword Art Online (or SAO for short), you most likely know what the NerveGear is and what it does. Going into this, though, I will explain exactly what it does as if no one has ever heard of it. The NerveGear is a VR (virtual reality) device. I have posted pictures of what it looks like (see the attached images). Through the device, every sense is engaged. When you are in the game you smell the things around you, and you can taste everything you eat. You can see and hear everything around you in a photorealistic environment. You can feel everything as if it were in the real world right in front of you. (Please note that the sensation of touch is not at the level of the real world, though.) When I say you can feel things, I mean every type of feeling: mechanoreception, the feeling of contact; thermoreception, the feeling of hot and cold; stretch reception, the feeling of muscle compression; kinesthesia, the sensing of body movements; proprioception, the sensation of the body's position; and equilibrioception, the sensation of balance.

So I will break down what we have, and what we will most likely have in the near future if everything goes right, for each of these five senses. Photorealistic graphics are expected to be around by the year 2020, as predicted by the scientist Michio Kaku (that's two years before SAO is even released in the anime). Headphones are already great and will only get better with time. With taste and smell, solving one will enormously help with the other, as the two senses are so closely related. Touch is the big one here: there are many devices to simulate each type of touch I described, but they would all need to be combined into one and fit within the helmet, as that is the only piece of equipment used in the show (though it is connected to a strong PC). The device would only need to touch your head and still give you feeling anywhere on your body.

As for the controls, they are completely handled through the brain. You have the full range of movement in the game that you have in the real world (without ever actually moving in real life). I would imagine this would be done through an EEG (electroencephalogram) that could take the brain's electrical signals used for movement and redirect them into a computer that would use them as movement commands. By the way, none of this is invasive, so nothing has to connect into your body.

So give your ideas on how far along we are toward doing this, and on new tech that would make it possible. I personally plan on going to college to create such a device, so please let's start a discussion about this. I believe this is all possible, because if humans put their minds to it, anything can be created. An example of this is the atom bomb: everyone thought it was impossible to split an atom, but we did it, and now we have nuclear generators and reactors.
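To make the EEG idea above concrete, here is a minimal sketch of the kind of processing involved, in Python, with entirely synthetic data standing in for a real headset. It computes power in the 8-13 Hz "mu" band, which weakens during real or imagined movement, and turns a large drop into a crude movement command. All numbers and thresholds are illustrative assumptions, not taken from any real device.

import numpy as np

FS = 256        # sample rate in Hz (typical for consumer EEG)
WINDOW_S = 1.0  # analysis window length in seconds

def mu_band_power(samples, fs=FS):
    """Power in the 8-13 Hz 'mu' band, which desynchronizes
    (drops) during real or imagined movement."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    band = (freqs >= 8) & (freqs <= 13)
    return spectrum[band].mean()

def decode_command(samples, rest_power):
    """Crude decoder: a large drop in mu power is read as 'move'."""
    return "move" if mu_band_power(samples) < 0.5 * rest_power else "idle"

# Demo: 'rest' has a strong 10 Hz rhythm; 'imagined movement' suppresses it.
t = np.arange(0, WINDOW_S, 1.0 / FS)
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
moving = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

baseline = mu_band_power(rest)
print(decode_command(rest, baseline))    # idle
print(decode_command(moving, baseline))  # move

A real system would need many channels, per-user calibration, and far more robust decoding, but the overall shape – filter the signal, extract a feature, map it to a command – is the same.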


Welcome to hypography jacob2mitts! Please feel free to start a topic in the introductions forum to tell us something about yourself.

 

How Long Until We Could Make A Real Sword Art Online (SAO) Nerve Gear Type Device?

I’ll approach this question by breaking down what the Nerve Gear helmet and the SAO MMORPG computer program are shown doing in the anime:

  • Simulating a realistic world for many (on the order of 10,000) simultaneous users
  • Reading brain states with sufficiently high spatial and temporal resolution
  • Analyzing the read brain states in real time to create input data for #1
  • Writing brain states with sufficiently high spatial and temporal resolution
IMHO, #1 could be done now. Note that the simulated world doesn’t need to be a truly accurate simulation, modeling very complicated things like weather and biology, only accurate enough to give the appearance of a real world. Such simulations parallelize well, so the computer hardware necessary to run them scales well – the size of the simulated world and the number of users can be increased, within reasonable limits, by adding hardware.
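To illustrate why such simulations scale by adding hardware, here is a toy Python sketch – not any real game server, and the zone names and tick rate are made up. The world is partitioned into zones that update mostly independently, so each zone can be assigned to its own machine, and capacity grows with the number of zones.

from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    players: list = field(default_factory=list)

    def tick(self, dt):
        # Physics, NPCs, and combat for this zone's players would go
        # here; for the common case, no other zone's state is needed.
        pass

class World:
    def __init__(self, zone_names):
        self.zones = {name: Zone(name) for name in zone_names}

    def tick(self, dt):
        # In production, each zone.tick() could run on a separate
        # machine; adding zones (and hardware) adds capacity.
        for zone in self.zones.values():
            zone.tick(dt)

world = World([f"floor-{i}" for i in range(1, 11)])
world.tick(1 / 30)  # one step of a 30 Hz simulation

Cross-zone interactions (a player walking from one zone to another, say) need coordination between machines, which is why the scaling works only “within reasonable limits”.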

 

#2 and #4 are harder questions, both to state precisely and to answer. Notice my use of the wiggle-word “sufficiently”. To the best of my knowledge, we don’t at present have a good guess as to how high a spatial resolution is necessary to capture the data needed for #3. However, from well-known brain data, we can estimate a resolution that must be sufficient, though the actual needed resolution may not need to be so great.

 

The number of neurons in the human brain is about 10^11, its volume about 0.0012 m^3 (source: this 2013 Nature Article). From this, you can calculate the spatial resolution necessary to image individual neurons: about 0.00001 m (10 microns).
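A quick back-of-envelope check of that figure, assuming uniform neuron packing and cubic voxels (both simplifications):

neurons = 1e11      # approximate number of neurons in a human brain
volume_m3 = 0.0012  # approximate brain volume in cubic meters

per_neuron_m3 = volume_m3 / neurons   # ~1.2e-14 m^3 of tissue per neuron
spacing_m = per_neuron_m3 ** (1 / 3)  # cube root -> linear spacing
print(f"{spacing_m:.1e} m")           # ~2.3e-05 m, i.e. tens of microns

So the needed voxel size comes out on the order of 10 microns, as stated above.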

 

For temporal resolution, we know that the fastest changes in nerves – their “action potentials” – are on the order of 0.001 seconds (1 millisecond).

 

Present-day fMRI brain imaging machines have a maximum spatial resolution of about 0.002 m, and a temporal resolution (how long they take to capture a full brain image) of about 1 second. To the best of my knowledge, the temporal resolution of fMRI could be increased by a factor of 1000 without undue difficulty – there’s no great drive to do so, because its current performance is good enough for its primary use in medicine. The spatial resolution, however, I understand has a theoretical maximum of about 0.0001 m, 10 times too coarse to image individual neurons. (source: this 2000 conference paper draft)

 

The conclusion I draw from this is that, unless the actual needed resolution for #2 is much coarser than individual neurons, MRI technology isn’t feasible. MRI has the highest resolution of present-day non-intrusive brain imaging technologies, so either an entirely new approach is needed, or an intrusive one.

 

My best guess is that an intrusive technology is needed – many fine insulated wires inserted into the brain. So, rather than the neat helmet in SAO, something like the wiring implied by the “brain plugs” shown in the 1999 film The Matrix,

though I imagine the connectors could be made nearly invisible, rather than the ugly, industrial-looking spike and socket shown in the movie. It’s possible that hardware could be implanted in the brain along with the wires, allowing a magnetic or radio connection to the outside, rather than a hard-wired one.

 

Once function #2 has gotten the brain data, it has to transfer it to a computer for function #3, analyzing the data. Taking the figures above, this requires a transfer rate of at most 10^14 bits/second. This rate, though about 1,000,000 times higher than in commonplace hardware, has been achieved. The actual rate needed would be much less, because not all of the neurons in the brain fire in the same millisecond.
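The arithmetic behind that worst-case figure – one bit per neuron per millisecond, an upper bound, since real firing is sparse:

neurons = 1e11       # neurons in the brain
samples_per_s = 1e3  # one sample per millisecond (action-potential timescale)
bits_per_sample = 1  # firing / not firing

peak_rate = neurons * samples_per_s * bits_per_sample
print(f"{peak_rate:.0e} bits/s")  # 1e+14 -- the sustained rate would be
                                  # far lower, since most neurons are
                                  # silent in any given millisecond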

 

Compared to #2, imaging the brain in sufficient resolution, I don’t think #3, analyzing the brain data, would be terribly difficult. For a single user, though, I expect it would require more computing power than running the MMORPG, so with present-day technology, a top-of-the-line supercomputer would be needed for each user.

 

#4, writing to the brain, is so far beyond present-day technology that it’s hard to speculate about it sensibly. There are some present-day devices that stimulate the brain non-intrusively, using magnetic fields, but they are very low-resolution, affecting whole brain areas on the order of 0.01 m in diameter. Though useful in medicine for treating some brain diseases, they can only “write thoughts” in a very crude manner, such as causing a temporary reduction in thinking or memorizing ability, or the perception of vague flashes of light.

 

Again, this leads me to conclude that a true “read-write” brain interface like the one suggested in SAO would need some sort of wired brain connection.

 

Function #2 – reading brain states – is also needed to “upload your mind” into a computer, a dear goal of extropians, transhumanists, and others of that ilk. Nine years ago, we had the thread Upload your mind into a computer by 2050?, discussing futurologist Ian Pearson’s prediction that this would be possible by 2050. This seems a not-unreasonable guess for when at least the “read” part of the Nerve Gear from SAO might be possible – more likely, I think, than 2022, the prediction by Reki Kawahara (the novelist who wrote SAO, starting with the original web novel in 2002) for a complete read-write one.

 

An important question to ask, before going too far in trying to duplicate a fictional nonintrusive, direct brain read-write device, is whether this is really the best approach. My hunch is that it’s not, and that immersive VR which “writes” to our existing sense organs – eyes, ears, skin, etc. – while “reading” our motor nerves, is.

 

Another good question to ask is: given that pretty good virtual reality systems (e.g. Virtuality, a line of home and arcade systems) existed in the early 1990s, why weren’t they popular, and why is so little VR around now? Given that consumer electronics technology is driven in large part by what proves popular, we have decades of evidence that gamers don’t want VR, preferring systems with keyboards, handheld controllers, 2-D displays, and speakers or headphones.

 

I’m looking forward to seeing how the latest, and perhaps best-funded forays into VR, the Oculus Rift and Sony’s Project Morpheus, do commercially. Recent reports are that some very good games for these systems will be available late this year or early 2015. If these games are very good, but are not popular, my suspicion that VR itself is not popular will be bolstered.


  • 2 months later...

This is beyond my scope, but this is a very interesting subject to me, for various reasons. I read the links you gave, Craig, including the 2000 conference paper; in that paper was another link, just as interesting, about nanotechnologies and their use in helping to image the brain beyond fMRI: http://www.foresight.org/conference/MNT8/Papers/Flitman/index.html

Edited by CraigD
Replaced missing foresight.org with archive.org capture

  • 4 weeks later...

Reading motions from your brain is not the real problem, I think.

 

The problem is "writing" them back into the user's brain without physical contact. We have no technology to do this yet, but in my opinion we don't have to write into the brain directly. What about manipulating the main nervous system in the neck? If we bring the user into REM sleep, his or her body isn't able to move anymore, but the organs are still working.

If we are able to build this technology, we are able to build something like the NerveGear from SAO.

 

I'll talk to some neurologists at my university soon, and after that I'll update this post :)

 

@KiritoAsuna: working together with you would be great for me; I hope you are still working on this project.


  • 2 weeks later...

I’m delighted by the enthusiasm expressed by jacob2mitts, KiritoAsuna, deadlydoener, and darkfightermick for developing computer-brain interfaces, and wish them success in whatever direction their enthusiasm draws them. :thumbs_up I wish they’d post their thoughts on the subject, though – I can’t recall many threads where 4 new members stopped after a single post. :(

 

I made a quick survey of the current state of the general consumer virtual reality business, and was surprised to find that it appears to be driven by a few young people, such as Palmer Luckey, inventor of the Oculus Rift VR headset display, and Nathan Burba and James Iliff, cofounders of the VR game company Survios (the successor to “Project Holodeck”), which, according to this May 2014 story (which also gives some background on the caution among business people about not repeating the collapse of the 1990s VR business), hopes to complete its first commercial game in 2015.

 

Though these people and companies are working on immersive VR using the ordinary senses of sight and sound, with input from body movement and handheld position- and orientation-sensitive controllers – not direct brain interfaces like those imagined in the SAO multimedia fiction – they appear to me to be doing the important preliminary work of actually designing and programming user interaction with immersive VR environments. I think this work is as important as, or more important than, work on direct brain-computer interfaces – which, as I mentioned upthread, may never be technically successful or commercially popular.

 

This work involves some likely critical decisions about “game mechanics” – how to do things we take for granted in the physical world, such as putting things into and removing them from our pockets, holsters, packs, and other containers. Nearly all computer games to date do this in a very immersion-breaking way, typically by popping up menus and selection lists. In their talks and blogs, folks like Iliff discuss alternatives afforded by VR headsets and position-sensing handheld controllers that let you see in-game “virtual hands”, and actually see and reach into worn and other virtual containers.
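As a toy illustration of that menu-free style – just a Python sketch with made-up names and numbers, not anything from Survios – grabbing an item from a worn pouch becomes a physical reach test rather than a menu selection:

from dataclasses import dataclass

@dataclass
class TrackedHand:
    x: float
    y: float
    z: float

@dataclass
class BeltPouch:
    x: float
    y: float
    z: float
    items: list

GRAB_RADIUS = 0.15  # meters; a designer would tune this per container

def try_grab(hand, pouch):
    """Return an item if the hand is physically within reach of the
    pouch - no pop-up menu involved - otherwise None."""
    dist = ((hand.x - pouch.x) ** 2 + (hand.y - pouch.y) ** 2
            + (hand.z - pouch.z) ** 2) ** 0.5
    if dist <= GRAB_RADIUS and pouch.items:
        return pouch.items.pop()
    return None

pouch = BeltPouch(0.1, 0.9, 0.0, items=["health potion"])
hand = TrackedHand(0.12, 0.88, 0.02)
print(try_grab(hand, pouch))  # 'health potion', grabbed by reaching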

 

I found it interesting that, despite its depiction of a brain-computer immersive VR interface nearly indistinguishable from real-world experience, the SAO anime series shows in-game characters using pop-up menus similar to those of computer games of the past 25 years. Though they’ve clearly put some thought into VR game design, I don’t think the creators of SAO have put as much or as high-quality thought into it as folks like Iliff.


I mean, how about this: the NerveGear acts as a second body to the brain and a second brain to the body.

The former is for the input/output system, and the latter is to put the body into a sleeping state. Why don't we scan for what kind of signal is being sent to and from the brain, and then store the data for use in-game? Take sight: the eyes detect light and convert it into electrochemical impulses in neurons. Why don't we scan that, and then write the information we want to give to the player based on the information we get? Of course there would be a lot of calibration for each person, but I think people are willing to do that to get to the "game world". This may also be useful for medical purposes.
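To make the per-person calibration idea concrete, here is a minimal Python sketch – entirely illustrative; a real device would calibrate per channel and per task. Each player's baseline signal statistics are recorded once, then live signals are normalized against them so the same decoder can be reused across people.

import numpy as np

class UserCalibration:
    """Stores one player's baseline statistics from a calibration run."""

    def __init__(self, baseline):
        self.mean = baseline.mean()
        self.std = baseline.std()

    def normalize(self, live):
        # z-score the live signal relative to this user's own baseline
        return (live - self.mean) / self.std

rng = np.random.default_rng(1)
baseline = rng.normal(5.0, 2.0, 1000)  # recorded during calibration
profile = UserCalibration(baseline)    # stored with the player's account

live = rng.normal(5.0, 2.0, 10)        # signals arriving during play
print(profile.normalize(live))         # roughly zero-mean, unit variance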


After talking to some neurologists, I now know the following:

 

1. Writing directly into the human brain would be really dangerous: if you do something wrong, the user would be instantly dead. Writing into the nerves in the neck is the better idea, because you can't kill people with that technology (unless you want to).

 

2. If you look into the human medulla between C2 and C3 (the 2nd and 3rd cervical vertebrae), you can see the following:

=> you can see nothing useful :D

What I want to show is that we don't have to stimulate single axons. If you look at this, you can see there are areas for every part of the body. We can't write to single axons at the moment, but we don't need to! If we want to simulate an injury on a finger, we can stimulate the whole nerve for that finger at once!
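A crude way to picture this whole-nerve approach in code – purely illustrative Python; the body-part-to-channel table and the channel numbers are invented, not real anatomy:

NERVE_CHANNEL = {
    "right_index_finger": 3,  # e.g. one coarse channel per body area
    "right_thumb": 4,
    "left_foot": 11,
}

def stimulate(body_part, intensity):
    """Map a game event on a body part to a (channel, intensity)
    command for the stimulator; intensity is clamped to [0, 1]."""
    channel = NERVE_CHANNEL[body_part]
    return channel, max(0.0, min(intensity, 1.0))

print(stimulate("right_index_finger", 0.7))  # (3, 0.7)

The point of the sketch: with whole-nerve writing, the lookup table only needs one entry per body area, not one per axon.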

 

3. Following from #1, writing into the medulla is better than writing directly into the brain. But if we write into the medulla, we are only able to simulate a second body without a face. We could solve this problem by using two writing units, one writing into the medulla and one writing into the following cranial nerves (picture from AzoraK's post):

 

Olfactory (I)

Optic (II)

Trochlear (IV)

Trigeminal (V)

Vestibulocochlear (VIII)

Hypoglossal (XII)

 

EDIT:

If you want to know the meaning of the words in the picture, just ask; I'm able to speak Latin ;)

 

EDIT No2:

I have searched for technologies for reading brain activity and found NIST's mini-sensor technology (http://www.nist.gov/pml/div688/brain-041912.cfm). This technology could read out motions from the brain easily and quickly. The only problem is: if the sensors break, they'll explode, because there is rubidium in them, and we know what rubidium does in contact with water... =@ :hal_skeleton:

Edited by deadlydoener

Heck of a conversation!

Again I'll ask for a little patience with my slow comprehension of such an intense study of the brain's functions and corresponding neurons. So far, what I'm beginning to get is that in order to “create,” in one sense, the interface between a user and virtual reality gaming, one would need “maps” of the paths of, at the least, the nervous system. According to several citations, this seems to be available already – to a degree.

In order for the user to “feel” the sensations elicited in a particular game – such as Halo or another – and again, only to a degree, one would need the “neuron mapping,” and, for want of a better word, an explanation of why or how we feel such sensations. Is that too far off the mark?

 

In other words, why do we feel pain? What is pain? Is this consciousness?

Now, I don’t know how far virtual reality gamers want to go, but I can imagine wanting to feel some of the sensations as you play: jumping from one precipice to another, or climbing into some kind of aircraft and flying at supersonic speeds, which to my way of thinking is merely psychological. Which, again, is coupled to consciousness.

Are these particular sensations ‘psychological’ – or are they merely perceptions of a reality? I can say firsthand that an airplane that stalls is physically felt; how would we go about causing the user of a virtual reality system to “feel” that sudden drop in altitude? Because for me, that is what virtual reality is all about – ‘feeling’ the sensations without the subsequent possible damage to our physical being.

Creating an AI with “consciousness” is impossible, I’d believe.

Edited by zazz54

After talking to some neurologists ...

Good to see you seriously researching, Deadlydoener. I envy your access to neurologists – I talk to MDs regularly, but only professionally, not about subjects like this thread’s.

 

1. Writing directly into the human brain would be really dangerous: if you do something wrong, the user would be instantly dead. Writing into the nerves in the neck is the better idea, because you can't kill people with that technology (unless you want to).

I don’t think this is true.

 

The medulla, both the oblongata region inside the skull and the spinal region inside the spine (also called the spinal cord), controls important body functions. Interfering with the nerve signals in the C2-C3 region of the spinal cord could potentially be as bad as severing the spine there, which typically destroys a person’s ability to breathe.

 

Other areas of the brain have been “written to” for several decades, as medical therapies, the main goals being treating diseases such as epilepsy and depression, and giving vision to blind people. For some details, see the Wikipedia article “Deep brain stimulation” and the 2002 Wired magazine article “Vision Quest”.

 

Still, I think it makes a lot of sense to consider placing computer interfaces near the spinal cord, because it’s closer to the surface than the brain. Though helmet- and headband-like devices are popular in movies and TV shows, I think this is due more to visual-art considerations than to neuroscience.


  • 2 weeks later...

I also found some web pages that might interest most of you:

 

http://www.technologyreview.com/featuredstory/526501/brain-mapping/

 

http://www.technologyreview.com/featuredstory/526506/neuromorphic-chips/

Says:

'These “neuromorphic” chips[...]will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed.'


I just finished the SAO series, and the idea of the NerveGear seems very good to me. One of the things that pinged really hard with me, looking through the facts the series gives us, is that the signals were read as brain waves. The storyline says that the NerveGear intercepted the commands to the human body around the connection between the spinal cord and the brain. This would be quite impractical, because then we would not be able to intercept some of the commands to the neck. The real place where we would need to intercept most of the commands would be the cerebellum (or cerebrum, I get those two confused), the reasoning being that the stimulus to the entire body could be intercepted there.


Hi everyone, I just joined this site due to my enthusiasm about wanting to see a NerveGear-like technology come true. After going through a few pages on the internet, I came across this: http://emotiv.com/ . It seems that they've managed to read brain waves through EEG (electroencephalography), like what Jacob mentioned when he started this thread. I hope it helps in advancing any research any of you has been working on; I'm not sure if you've seen this, but I thought I'd share it with everyone interested.

 

Edit: Here's a video link to see it in action.

 

Edit #2: Here's another video. It's about computer graphics and its potential for portraying realism, as compared to current computer graphics using polygons.

Edited by TanglingTreats

Hello everyone, 

 

First of all, that's very kind of you to say, zazz; however, most people my age aren't busy with this at all. That is pretty obvious, though.

Anyway, this just popped up in my Facebook feed: http://www.iflscience.com/technology/worms-mind-robot-body

 

Might be interesting for you guys.

Well, Computer1up, if they aren't, they ought to be. Interesting report on the worm. It seems mapping the neurons is developing at a rapid pace. Next we'll be having a robot that purrs like a cat.


  • 2 weeks later...

Hi all, I have been reading this thread and find the ideas amazing. I think that if we could differentiate the electrical pulses that come from our brains and codify them as different actions in the game – just like the way a computer works when you press the keyboard and words appear on the screen – then half of the work would be done. But I'm not sure how this can be done, because:

 

I don't really know if the electrical pulse your brain sends to move your right hand is the same one it sends to move your left foot, just in a different nerve. If that is the case, then we would have to place specific receptors on all the nerves that come from the brain, which would be much more difficult than the first case.

 

If we can differentiate the electrical pulses, we should also inhibit them, to prevent the movement we want to do in the game from being made in reality. But we cannot inhibit the ones used for vital functions such as breathing and the other organs, which must be kept working. And if we don't transcribe the breathing pulses into the game, we would not be able to breathe inside the game (not really a problem, but maybe it results in less immersion). So it's not only about inhibiting pulses and recreating them in the virtual world; in this case we would have these actions replicated and performed simultaneously in both the real and the virtual world.
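Here is that routing rule as a tiny Python sketch – the signal names and the vital/voluntary split are illustrative assumptions, not real physiology. Vital signals pass through to the body and are mirrored in-game; voluntary motor signals are inhibited in the body and executed only in the game:

VITAL = {"breathe", "heartbeat"}

def route(signal):
    """Decide where a decoded nerve signal should go."""
    vital = signal in VITAL
    return {
        "to_body": vital,        # never inhibit vital functions
        "to_game": True,         # everything is replicated in-game
        "inhibited": not vital,  # voluntary movement stays virtual only
    }

print(route("breathe"))      # passes through and is mirrored in-game
print(route("swing_sword"))  # inhibited in the body, executed in-game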

 

I'm sorry for my English; it's not my first language, but I hope you can understand what I mean even if I use the wrong vocabulary sometimes.

Edited by gonnn

This topic is now closed to further replies.