Science Forums

Alternatives To Nervegear-Type Devices

Recommended Posts

Moderation note: This post was split from How Long Until We Could Make A Real Sword Art Online (sao) Nerve Gear Type Device, because that long thread is being broken up into threads by subject to make them shorter and more readable.


This thread is for discussion of alternatives to directly using brain-computer interfaces, like the NerveGear device depicted in Sword Art Online, for achieving more immersive virtual reality.


In other words, why do we feel pain? What is pain? Is this consciousness?

I think it’s wise, in this thread, to avoid the “c word” and the complicated philosophy that attends it, like the “hard problem of consciousness” and the question of whether “consciousness” can be simulated by computer programs, which has long vexed philosophers, physiologists, and, more recently, computer programmers. Though the show depicts a few human-like AIs, the technology depicted in SAO, and the focus of this thread, is simply a “better screen and controller” that allows game players to have a more deeply immersive virtual reality game experience than the typical 2-D screen and controller/keyboard/mouse scheme.


… how would we go about causing the user of a virtual reality situation to “feel” that sudden drop in altitude?

The best present-day systems do this by actually, physically dropping the user, using large hydraulic or pneumatic jacks. This kind of “virtual reality” has been achieved fairly well for several decades, in the form of flight simulators. For some aircraft, such as airliners, present-day simulators are nearly indistinguishable from actually piloting the aircraft.


Similar systems for entertainment can be found in many amusement parks, and are increasingly convincing and immersive. If you’re in the Orlando, Florida area, I recommend Transformers: The Ride 3D, completed in 2012. It’s amazing how, with a combination of physical movement, 3-D glasses, and projected images, this ride gives the impression of having traveled thousands of meters horizontally and hundreds vertically, all within a 60,000-square-foot, 60-foot-tall building.


These systems, however, are large and expensive. To achieve comparable feelings of movement in a consumer product would require, I think, a very different approach.


Fortunately, a lot is understood about the perception of movement, or, more accurately, the perception of acceleration, since we can’t actually perceive uniform motion, only acceleration. It’s done primarily via fine, hair-like structures (cilia) that detect the movement of fluid in the 3 semicircular canals located inside each ear. The signals from the cilia are carried from the ears to the brain by the vestibular nerve.


A system that could either artificially move this fluid, or stimulate the vestibular nerve almost exactly as the cilia do for a particular rotational or linear acceleration should be able to almost perfectly give the sensation of that acceleration, without the user actually being moved.
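
The "we sense acceleration, not motion" point can be sketched numerically. The snippet below is a rough illustration, not a physiological model: it treats the canal/cupula system as a simple first-order high-pass filter on angular velocity, with an assumed time constant of a few seconds. The onset of a spin (an angular acceleration) produces a strong response, while a sustained, constant spin fades from perception.

```python
# Illustrative sketch (Python), not a physiological model: treat the
# semicircular canal / cupula system as a first-order high-pass filter
# on head angular velocity. The time constant below is an assumed,
# order-of-magnitude value, not a measured spec.

TAU = 6.0   # assumed canal time constant, seconds
DT = 0.01   # simulation step, seconds

def cupula_deflection(angular_velocity):
    """Return the modeled cupula deflection for each velocity sample.

    A step up in velocity (an angular acceleration) produces a strong
    deflection; during constant rotation the deflection decays back
    toward zero, so the rotation stops being sensed."""
    x, prev, out = 0.0, 0.0, []
    for w in angular_velocity:
        x += (w - prev) - DT * x / TAU   # high-pass update
        prev = w
        out.append(x)
    return out

# One second at rest, then 30 s of constant 1 rad/s rotation:
spin = [0.0] * 100 + [1.0] * 3000
response = cupula_deflection(spin)
# Strong response at rotation onset, near zero after 30 s of spinning.
```

This is also why a device that stimulated the vestibular nerve would only need to supply the transient, acceleration-like part of a movement, not a continuous "motion" signal.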

Link to post
Share on other sites
  • 4 weeks later...

Welcome to hypography, gonnn!


If we can differentiate the electrical pulses, we should also inhibit them, to prevent the movements we intend in the game from being carried out in reality; but we cannot inhibit the ones that are used for vital functions, such as breathing and all the other organs, which must keep working.

You raise a good question: how to keep the user of a NerveGear-like device from running and jumping around, without killing or permanently paralyzing them. As most people are focused on the key problems of “reading and writing” to and from the brain, this question is often overlooked.


One solution is not to. This is how a couple of excellent novels on the subject of immersive virtual reality, Neal Stephenson’s 1992 Snow Crash and Ernest Cline’s 2011 Ready Player One, depict it:


In Snow Crash, people sword-fight and do other fun stuff in VR by actually finding a wide-open space and a sword, and going through the real motions while a computer tracks them. They wear lightweight stereo-optical glasses and tight-sealing earphones, but otherwise nothing but their usual clothes; the computer does all its magic with tracking and communication lasers.


In Ready Player One, people on the low end use just stereo-optical glasses, earphones, and hand-held controllers. On the high end, they suspend themselves from harnesses while wearing haptic (touch-sensing and feedback-providing) body suits in addition to the headgear.


In neither of these stories are direct nerve-to-computer interfaces used. Nonetheless, the fictional experience is described as deeply immersive. Though it runs counter to the main idea of this thread, I have a hunch these imaginings are right, and that the best technology for immersive VR may already be built into our brains, in the form of our amazing imaginative senses. Brief sessions with an early (ca. 1998) version of the CAVE VR system, in which I believe I really became unaware that I wasn’t physically in the simulated world, led me to conclude that as long as VR is pleasant and comfortable (in particular, doesn’t give the user motion sickness, the dreaded “ralph effect”), it might not need to use technology more advanced than what’s available now.


Setting aside the “don’t” solution, it merits pointing out that we humans (and many biologically similar animals) already have a built-in system that suppresses our motor functions while allowing us to experience an immersive VR-like simulation: our dream state, REM sleep. Though it is not fully understood, we know from experiments going back more than 65 years that a well-defined brainstem structure, the pontine tegmentum, is responsible for inhibiting our muscles during sleep. If this structure is physically damaged or chemically suppressed, a dreaming animal physically acts out its dreams, walking, jumping, etc. This video is from one such experiment, I think by Michel Jouvet ca. 1960. Malfunctions of this system are suspected to cause disorders such as sleepwalking and sleep paralysis.


A system sophisticated enough to read and write detailed motor and sensory data to and from the brain would, I expect, be able to stimulate the pontine tegmentum, activating this system.


Thinking about similarities between dreaming and computer generated VR leads me to wonder if a system like SAO’s might be achieved by, rather than generating a detailed, shared 3D graphical world in a computer and interfacing users with it, injecting “synchronizing” and communicating signals into people’s dreams – that is, rather than a shared computer simulated world, whether an artificially “shared dream” might be the model.


With a bit of reimagining, might it make more sense to imagine Kirito, Asuna, and the other SAO characters to be in a shared dream, with most of the processing being done locally by their brains, rather than in an MMORPG more advanced than, but architecturally not much different from, WoW?


I find this idea a little spooky, though, because dreams appear to me (it’s important to be cautious in our assumptions about what really, neurologically, happens to us when dreaming, and how closely it matches our waking memories of it) to involve not just sensations but sometimes deep emotions, or even beliefs. Hacking dreams strikes me as a potentially unprecedentedly powerful technology.

Link to post
Share on other sites

Welcome to hypography, gonnn!

Thank you Craig. I have been reading the forums for a while, but when I saw this thread, I decided to create an account, as this is one of the projects I would like to work on in the future.



With a bit of reimagining, might it make more sense to imagine Kirito, Asuna, and the other SAO characters to be in a shared dream, with most of the processing being done locally by their brains, rather than in an MMORPG more advanced than, but architecturally not much different from, WoW?


I find this idea a little spooky, though, because dreams appear to me (it’s important to be cautious in our assumptions about what really, neurologically, happens to us when dreaming, and how closely it matches our waking memories of it) to involve not just sensations but sometimes deep emotions, or even beliefs. Hacking dreams strikes me as a potentially unprecedentedly powerful technology.

I don't think I understand the shared dream concept. As we all dream different things, we would need to create a common place for both dreamers, which would in fact be the same as a completely immersive virtual reality, but forcing the players to be sleeping while playing, which would reduce its potential as a casual game that is accessible to everyone.


I find the idea of using the pontine tegmentum to stop movement a great solution. This could keep all the vital functions of the body working and put away the idea of inhibitors, which would make the development process much more difficult.


I'm not sure if all the pulses that come from our brain are the same. But if we can differentiate each one of them, copy even the slightest movement, and use the pontine tegmentum to create a dream-like state in our body, maybe the idea of a virtual world could stop being science fiction in 20 or 25 years. And since the only thing we would need is to read the electrical pulses, a device like the one seen in SAO may be the solution, letting us put away the idea of intrusive surgery.

Link to post
Share on other sites

I find it a little hard to imagine the scenario about the "shared dream" concept though, it'll be awesome if you could elaborate more on it.

What I’m suggesting is that, with artificial controls, it might be possible to produce in a person a dream experience practically indistinguishable from the virtual reality experience depicted in Sword Art Online, and perhaps one even indistinguishable from waking experience.


For most people, dreams are so convincingly real that they are not aware they aren’t awake and experiencing external reality. Dreamers feel real happiness, excitement, and fear in dreams. Even when the dreamer is aware they’re dreaming (lucid dreaming), the dream remains realistic and vivid. So one of the primary goals of immersive virtual reality – its immersiveness and convincingness – is already present in dreams.


I don't think I understand the shared dream concept, as we all dream different things we would need to create a common place for both dreamers …

This is the big challenge with my “shared dream” proposal (let’s call it a Shared Dream Virtual Reality Game, or SDVRG): dreams, while immersive and convincing, are entirely “one-player” and un-shared. I’m suggesting that some sort of “writing to the brain” computer-brain interface could change that, and further, that it might be easier to do this with a CBI than to produce an immersive, compelling artificial hallucination, as depicted in SAO.


A key reason that two people almost always (except in the case of vanishingly rare coincidence) dream different things is that no information is shared between them while they are dreaming. Yet we know, from scientific research and personal experience, that dreams are strongly products of suggestion. We tend to dream about our recent experiences, especially experiences that strongly captured our attention, such as watching movies and playing video games (I’ve had many dreams I’d describe as “continuations” of video games I played ‘til I was exhausted and ready to sleep, and I doubt I’m unusual in having this experience). An SDVRG would provide these suggestions during sleep, not just before it. Rather than replacing a person’s sensory perception, reading their motor nerve activity (or its precursors in the brain), and maintaining a detailed external model of the VR world, it would insert small sense-like “suggestions”, shaping the dream into an approximate match for the externally modeled world.


Note that the dream need only approximately match the computer-simulated world, because, regardless of its source (reality or a computer simulation), the internal model of reality we construct from our senses is only an approximation of the real or simulated external world. Two people who are near each other believe they experience the same external reality not usually because they agree on all the remembered details of the experience, but because they believe they have both experienced, from slightly different points of view, a single, objective reality. Scientific experiments have shown that, if questioned in detail, people don’t agree on small, less important details, only on large, important ones. We feel as if our memories are very accurate, similar to camcorder recordings, but they aren’t.


So, it’s possible that two people would agree as much about their memories of two artificially influenced (via an SDVRG system) dreams as they would about their memories of a real-world experience, or of a computer simulation perceived with their usual senses.

Link to post
Share on other sites

Interesting, Craig. However, I'm not sure how a "dream-world virtual-reality game" could ever be shared. The perceptions the "users" have or receive will be different. You wrote "approximate matches...", but I'm not so sure it could ever be matched, despite the suggestions otherwise.

I like the idea of a single user who might be immersed in the virtual world through dream-state stimulation, actually making up his own game as he went along. You might have the initial, suggested world and start from there; maybe a category of initial worlds to choose from, then on into the "VR sleep world."

Edited by zazz54
Link to post
Share on other sites

I made this account specially for this topic ^^
I cannot help that much, because I'm only 15 years old and I don't know as much as you do.
I would love to be a part of this project.


In Inception, there was a machine which connected all the dreamers in one shared dream (it could be a nice base for an SDVRG, but much bigger).


I used to be a lucid dreamer, so I'll give you some useful links to a forum.
It's called ld4all.com. You need to log in to read the forum.
Once you can browse the forum normally, go to this topic:



This is a topic about the "Chroniclers", a group of people who describe their experiences with Shared Lucid Dreaming (this might be helpful).


The second link is about the shared-dream concept (how we could achieve it); it has 3 parts, and everything is written in the topic:



And the third one: as someone mentioned before, we need a place to meet up (in the concept of an SDVRG, we need to have something in common, like a place).


http://www.lucidcrossroads.net/ (more information on site)


I think our dreams are the best base for making infinite worlds, etc. In the anime, the NerveGear puts you into a dream-like state, so I assume it's the best option (and the hardest one :P).

Sorry for bad English, not my first language :P

Edited by Beebey
Link to post
Share on other sites

As Bach-y-Rita is quoted in the book The Brain That Changes Itself by Norman Doidge, "We see with our brains, not with our eyes". Our eyes are just receptors for detecting changes in light; it's our brain that perceives and "sees". He invented a device that allows the blind to see: a strip of plastic covered with electrodes is placed on the tongue, and information from a camera is transmitted through it to the brain via a computer. He also stated that our tongues are the ideal "brain-machine interface" because they lack an insensitive layer of dead skin. This is basically the concept of sensory substitution.
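
As a toy illustration of that sensory-substitution pipeline, one could imagine collapsing each camera frame into a coarse grid of per-electrode stimulation intensities. This is a hedged sketch: the grid size, frame format, and mapping are invented for illustration and don't describe the actual tongue-display hardware.

```python
# Toy sketch of sensory substitution: average a grayscale camera frame
# down to one intensity per tongue electrode. The 20x20 grid size and
# 0-255 pixel format are assumptions for illustration only; the real
# tongue-display hardware is not specified here.

GRID = 20  # hypothetical electrode array dimension

def frame_to_electrodes(frame):
    """frame: 2D list (rows of equal length) of grayscale pixels 0-255.
    Returns a GRID x GRID grid of stimulation intensities in 0.0-1.0."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * GRID for _ in range(GRID)]
    for gy in range(GRID):
        for gx in range(GRID):
            ys = range(gy * h // GRID, (gy + 1) * h // GRID)
            xs = range(gx * w // GRID, (gx + 1) * w // GRID)
            total = sum(frame[y][x] for y in ys for x in xs)
            out[gy][gx] = total / (len(ys) * len(xs) * 255)
    return out

# A frame that is bright on top and dark on the bottom maps to strong
# stimulation on the top rows of the grid and none on the bottom:
frame = [[255] * 100 for _ in range(50)] + [[0] * 100 for _ in range(50)]
grid = frame_to_electrodes(frame)
```

The brain then does the hard part, learning to interpret that coarse pattern as "vision", which is the whole point of sensory substitution.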


For the most primitive version of the NerveGear, we could transmit visual information (and any other senses if necessary) the same way, while at the same time researching another method that doesn't require electrodes (imagine having a shocking feeling on your tongue the whole time you're using the primitive NerveGear; I don't think it'll be good). Reading brain waves/activity is another issue, however. As for the link which Redicicle V2 shared: for the most part they are working on the game, but they would need a NerveGear to test the game out properly, so they came up with an idea of how the NerveGear should function.

Edited by TanglingTreats
Link to post
Share on other sites
  • 2 weeks later...

Hi. I am another one who made an account specifically for this thread, after watching SAO, ALO and having started watching GGO.

I have been an enthusiastic supporter of the BCI ever since Lawnmower Man and Tron, but I don't have too much of a scientific mind. I am good at picking up concepts, but details and numbers are all Greek to me (especially concepts of human biology and medicine, despite having been raised in a medical family). Along with the admission that I am neither a neuroscientist nor a computer engineer: I am just a craft journeyman in the field of glassblowing, and an enthusiastic roleplayer, computer gamer, and escapist (part otaku, part hikikomori).


Anyways, introductions behind us, here's my tuppence:


I don't think it would be necessary to scan or manipulate the entire brain. As many here have said, you only need to read motor impulses and write sensory impulses, and stay away from glandular control (emotions) and cerebral/cognitive functions; those would only open the door to brainwashing and other forms of behavior control/personality manipulation (I don't know how the woman in the Emotiv video can act as if brain-scan-based targeted advertising could ever be a good thing...). However, there are some senses that are a bit more obscure and so far have not been mentioned here, and, not being a neuroscientist, I don't know where in the brain they are handled:

Sense of balance

Sense of body position (proprioception: allows you to touch your nose with your pinky while your eyes are closed and arms crossed)

Sense of time (the passing of time, as well as the chronological order and distance of memories)


If anybody can pinpoint where those senses are and how to manipulate them, it would be good to add to this thread (balance is in the inner ear...duh...).


I think the most feasible way to do it is a hybridised, pseudo-non-intrusive method.

Use the microfibers and nanotech sensors to bring the signals of the brain closer to the skull's surface, where they can then be picked up and translated by the helmet. If I understand the previous post correctly, those microfibers are small enough to even pass through bone without causing any damage or being noticed. For an intrusive method, that sounds pretty non-intrusive to me.

The helmet (or hood or balaclava or whatever) translates the neural pulses from the motor microfiber sensors into digital streams and sends them to the computer, and translates the digital input from the computer into neural-pulse format and passes that on to the sensory microfiber sensors.

The computer (and internet servers) provides the game environment in a data format designed for translation into neuro-pulses.


That Emotiv video mentioned training requirements: teaching the computer how to interpret your brain waves, and teaching the user how to think right. A calibration of the system could be achieved by requiring the user to wear a monitor-only device (not as noticeable as a full helmet...maybe a collar ring like in Accel World) to record brainwave activity through a normal day or two before using the main unit. This would only be to calibrate the unit to the individual neural pattern of the user; it will already need to be able to interpret neural signals when it comes from the factory, and calibration is just for individual detail and fine tuning. This also adds to the write-up about the SAO internal lore that was not mentioned in the previous (I think DHenry's) post: it was mentioned in the first episode already that they had to touch themselves on their elbows, knees, etc., to calibrate the system to their body measurements.
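
The calibration idea above can be sketched as a simple baseline-normalization step. This is a speculative illustration, not any real headset's procedure: the channel name and the statistics used are assumptions; the point is only that live readings are re-expressed relative to the individual user's own recorded baseline.

```python
# Sketch of per-user calibration, as described above: record a baseline
# session with the monitor-only device, then express live readings as
# z-scores against that user's own mean and spread. The channel name is
# hypothetical; this is not any real headset's API.

import statistics

class BaselineCalibrator:
    def __init__(self):
        self.mean = {}
        self.spread = {}

    def fit(self, baseline):
        """baseline: dict of channel -> list of raw samples recorded
        during the user's normal day of wearing the monitor."""
        for channel, samples in baseline.items():
            self.mean[channel] = statistics.fmean(samples)
            self.spread[channel] = statistics.pstdev(samples) or 1.0

    def normalize(self, reading):
        """Map raw channel readings to z-scores, so the same downstream
        interpreter can serve users with different resting activity."""
        return {ch: (v - self.mean[ch]) / self.spread[ch]
                for ch, v in reading.items()}

cal = BaselineCalibrator()
cal.fit({"sensor_a": [10.0, 12.0, 14.0]})  # hypothetical channel
z = cal.normalize({"sensor_a": 12.0})      # 12.0 is this user's mean
```

Factory-default interpretation would then operate on the normalized values, with per-user calibration reduced to collecting the baseline.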


As a completely non-intrusive alternative to pseudo-intrusive microfibers, you could use synesthesia (as mentioned above), entering input either (as mentioned) through the tongue or through the eyes (Armitage III and Ghost in the Shell.....Warning: risk of epileptic fits). (Synesthesia is when sensory information is processed and interpreted by the wrong sensory center, causing, amongst other things, sounds to have colour or an object's surface texture to smell funny....think acid trip...)


I don't know why everybody thinks only of the Matrix when mentioning intrusive methods. The Matrix is obviously using a very crude coaxial connection, way too crude to actually have the resolution to interpret or stimulate neural activity accurately; also, it is implanted in many parts of the body, with the main connector in the back of the skull. The old Cyberpunk and Shadowrun pen-and-paper games had jacks and 5-pin connectors on the side of the skull, much smaller and more cult than the Matrix ever was or ever will be. Ghost in the Shell uses an array of 4 optic-fiber ports in the neck, just under the edge of the skull. Johnny Mnemonic uses a single optic-fiber port (I can't remember right now if it was in the back of the neck or behind the ear; it was a long, long time ago). I think I have also seen USB port variations somewhere. All of those have the downside that, even if you use microfiber for the cyberneural network, you still need processing/translation hardware inside the skull (which, with optic fiber, also means having an internal power source or additional power connectors).



Legal issues have been mentioned here before. I don't think the makers of SAO can sue for anything other than maybe making you change the colour scheme or visor style on the helmet. All they created was the fantasy of a pseudo-science. Whoever is the first to turn that fiction into real science and actual technology is the one with the patent rights. Or do you think Jules Verne and the writers of the Perry Rhodan books could have sued NASA for building a rocket that can transport a man to the moon? The only legal dangers there lie in liabilities. If the helmet is physically unable to access the parts of the brain that control thought, emotion, and personality, and if the input sensors are limited in their maximum power output to within safe levels, and all epilepsy warnings are written in bold letters on the box, helmet, and games, there should be no problem. Any "accident" would be due to somebody tampering with the system, which voids the warranty and frees the original creator from all liability. (That, at least, will be the standpoint of the PR and legal departments.)



I know, we all are in it for the game, for the love of SAO and MMORPG and the fantasy of BCIs and VR.

But there is so much more that can be done with it. It can save a truckload of taxpayers' money, it can revolutionise medical prosthetics, it can revolutionise warfare, it can bring school classrooms into kids' bedrooms, it can centralise the directorship of international companies, it can allow a surgeon in Chicago to perform corneal surgery in London; the applications are endless....

Imagine a war where nobody dies and no buildings get damaged: terms and consequences of victory/defeat are negotiated beforehand, then both countries send their soldiers to a GGO-type battlefield to duke it out and still be home in time for dinner, because they never even left home. No more letters to the widows of fallen soldiers. And not to mention the reduction in cost for bullets and fuel for training: no more rebuilding the training scenarios that your tank cadets just blew to bits; send them to virtual training scenarios, and training accidents and casualties are a thing of the past.

Imagine members of parliament staying in their constituencies to represent their voters and still being able to attend parliament, without having to spend thousands in taxpayers' money every year on commute costs, limousine rental, and owning a second home in the capital. Video-conferencing made it possible...NerveGear will make it preferable.

Why spend hours every day in the San Francisco super-traffic-jam to get to the office if you can hold your board meetings at home between the football and the news? Who would not like to spend more time with their family, watch their children grow up, have early nights with the wife, etc...? Do we really have to spend a fifth of our time in traffic or public transport?

With present state-of-the-art limb replacement, the patient has to focus on the movements, and it feels alien. NerveGear will allow a prosthesis to carry a full range of tactile sensation and be moved as easily and precisely as if it were flesh and blood.


How about crime prevention? Specifically, criminal compulsive behavior such as, for example, sociopathic violence and pedophilia. Suppressing them only works for short times before the compulsive need becomes too strong to contain; the only sure way to cure such disorders would be chemical or surgical lobotomy. Watching reruns of Die Hard movies and playing Rainbow Six will only temporarily pacify, not satisfy, the need. Sooner or later such virtual stimulation will not be enough and the real thing is needed. The level of realism deep-dive VR can offer, even in the creation of AI, could give real satisfaction, and therefore release and defusion, to criminal compulsions without anybody getting hurt, abused, killed, taken advantage of, or otherwise harmed.


With the above examples, funding is easy...there are millions to be gotten from the medical sector, and billions more from the ministry of defense. Don't worry about that delaying or preventing the game. Console-based videogames are where the next generation of computer technology is beta tested. SAO will have the NerveGear before anybody else, even before GI Joe and the NSA.


I am certainly no spiel master and definitely not a lawyer (I have been told by many people that I should be a lawyer (amongst others, by a lawyer, a magistrate, a council official, and two police officers), due to the way I can (ab)use words, but I do have a sense of justice, integrity, and self-respect...so...no way), but if your PR department ever needs tips on hair-splitting, pedantics, or legal puns, give me a call. I will also volunteer as a guinea pig in exchange for lifelong, free, guaranteed (beta) access to all full-dive VRMMOs....and if my newest Del Boy Trotter scheme works out, I would be willing to pour funds onto that fire....but I still need to find out if my newest cash cow gives milk before I promise anything on that subject.


Ganbatte kudasai.



Link to post
Share on other sites

Hello, I've just made this account for this thread specifically. However, I may have an idea for a "demo", if you please: combining the Oculus Rift, a motion tracker, and anything else, we could create a type of early prototype, and then combine the technologies into one headset, thus creating a NerveGear, in a sense.

Link to post
Share on other sites

Welcome to Hypography, Shodan! :)

Ok, once again I got the time to think on this, and came up with a few ideas:

First of all: let’s break down the project into smaller, tangible parts.

I like your approach. :thumbs_up Lists are our friends.


One correction:

… Using Pontine Tegmentum (how to deliver it to the corresponding part of the brain?)

... Non-chemical activation of brain region normally activated by Pontine Tegmentum

The pontine tegmentum is a part of the brain, not a chemical.


The precise mechanism by which the pontine tegmentum initiates the REM-sleep paralysis of the skeletal muscles isn’t known. What’s known is that if the pontine tegmentum is disabled, either surgically or by injecting a chemical such as potassium, the paralysis doesn’t occur, and a sleeper will move around, acting out their dreams. Also, if it is activated, such as with an injection of a chemical like carbachol, the subject will enter REM sleep.


So, by activating the pontine tegmentum, you can immobilize a person, though a side effect is that it puts their brain into a REM sleep state.


Until recently, how REM sleep paralyzes the skeletal muscles wasn’t very well understood, but as of 2012, it appears to be due to a combination of 2 neurochemicals, GABA and glycine, acting on the cells in the motor cortex region of the brain. (see this 2012 SfN news release for details)


So by deactivating parts of the motor cortex, you can paralyze the skeletal muscles.


In an earlier post, I suggested:

A system sophisticated enough to read and write detailed motor and sensory data to and from the brain would, I expect, be able to stimulate the pontine tegmentum, activating this system.

If the side effects of activating the pontine tegmentum aren’t conducive to a “deep-dive” computer-brain interface, an alternative appears to be deactivating the parts of the motor cortex it targets. A drawback to this approach is that we’ve long known how to activate the pontine tegmentum, while at present we don’t know much about deactivating the motor cortex; but we’re assuming we have “a system sophisticated enough to read and write detailed motor and sensory data”, so we can assume we will in the future.


Listing the challenges involved in essentially disconnecting the brain from the greater body, then simulating a body in addition to a world beyond it, brings me back to questioning the assumption that this will truly be necessary, except for people with severely disabled bodies, such as blind and quadriplegic people.


A system that suspends the user’s body in such a way that it can move in any way without hitting anything, while precisely measuring the position of key bones, applying pressure and other tactile sensations at any point, and physically moving the entire body short distances, combined with a video display screen for each eye and an audio speaker for each ear, could, I think, provide a convincing enough sensory imitation of any physical activity that can be performed on Earth that a person could allow themselves to be fooled into perceiving it as real.


Let’s call this a “deep dive waldo” (DDW), combining terms from the SAO manga with a 1942 Heinlein story.


Such a system would need to fully “grasp” each finger and toe, the head, neck, torso, and all of the limbs, similarly to, but more completely than, powered exoskeleton systems like the 1965 GE Hardiman or the 2010 Raytheon XOS 2. Where those systems control a robot surrounding the user, the DDW would control an in-game avatar.


Here’s Shodan’s list with the DDW’s specifications:

  • Sensory input
    • Haptic feedback – small electromechanical actuators provide detailed tactile sensations to the hands, face, etc, while large ones provide large, continuous pressure to the soles of the feet (which are never in contact with the ground), etc. Abrupt pressure simulates the sensation of impact, while force oriented along the skeleton provides continuous force simulating weight and resistance to movement.
    • Acceleration feedback (our biological G-sensor) – the entire body is either actually accelerated, or oriented to provide a 1 g “down” sensation in the direction called for by the simulation. Algorithms such as those in present-day motion rides allow the user to perceive much larger drops and other movements than the body actually makes. (see this post for more)
    • Proprioceptive nerve system. – no special system needed. Actual body position matches simulated position.
  • Control output

    The DDW inherently measures all body output.

    A conventional eye tracking system provides eye position data for the in-game avatar. It has no other purpose.

  • Ways to block motor signals from reaching muscles

    The DDW doesn’t, and doesn’t need to, do this.
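
The acceleration-feedback item above leans on motion-cueing ("washout") algorithms like those used in flight simulators and motion rides. Below is a heavily simplified sketch of the idea, with made-up constants: reproduce only the transient part of the simulated acceleration with actual platform motion, and render the sustained part as a slow tilt so that gravity supplies the steady force.

```python
# Simplified motion-cueing ("washout") sketch; constants are illustrative,
# not tuned values from any real simulator. The transient part of the
# commanded acceleration drives the platform, which then "washes out"
# back to center; the sustained remainder becomes a gravity-aligned tilt.

import math

DT = 0.01        # control-loop step, seconds
TAU = 2.0        # washout time constant, seconds (assumed)
G = 9.81         # gravity, m/s^2
MAX_TILT = 0.3   # tilt clamp, radians (assumed comfort limit)

class WashoutFilter:
    def __init__(self):
        self.platform = 0.0  # acceleration currently rendered by motion
        self.prev = 0.0

    def step(self, accel):
        """accel: simulated forward acceleration (m/s^2).
        Returns (platform_accel, tilt_rad)."""
        # High-pass: pass transients through, decay sustained motion to zero.
        self.platform += (accel - self.prev) - DT * self.platform / TAU
        self.prev = accel
        # Whatever washed out is faked with a clamped small-angle tilt.
        sustained = accel - self.platform
        tilt = math.asin(max(-1.0, min(1.0, sustained / G)))
        tilt = max(-MAX_TILT, min(MAX_TILT, tilt))
        return self.platform, tilt

# Sustained 2 m/s^2 "thrust": the platform surges, then returns to center,
# while the tilt settles to supply the steady force via gravity.
wf = WashoutFilter()
for _ in range(1000):  # 10 simulated seconds
    platform, tilt = wf.step(2.0)
```

Combined with the vestibular quirk discussed earlier in the thread (we sense acceleration, not steady motion), this is how a platform with only a few meters of travel can convincingly fake large drops and long accelerations.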

The most obvious disadvantage of a fully realized DDW vs. a brain-computer-interface-based one is that it takes a lot of space and machinery, and would likely be too expensive for most people to own.


Its main advantage is that it’s completely noninvasive, and requires no new basic technology. Every part of the technology has been done, in some cases decades ago. It has just not yet been put together into a single VR system.


The DDW as described above doesn’t simulate taste or smell, though such a system could be added. A minimally invasive “food simulator” that simulates the full eating and smelling experience (liquid flavors and scents, a device inserted in the mouth for texture and chewing resistance, and sound via audio output) was made by the University of Tsukuba’s Hiroo Iwata around 2003, so this technology also exists.

(Source: several, including this Ars Electronica article)


The DDW wouldn’t be innately safe. Because it must have the mechanical power to simulate strong forces on the limbs and cause strong accelerations, it could injure the user, either due to a hardware or simulation software failure, or if the simulation were allowed to too accurately simulate a violent event.


What if we didn't give the DDW the ability or command to cause such an incident? For example, in the show, microwaves could destroy your brain. One could assume that the creator of the NerveGear put that command in and allowed the machine to generate microwaves to do that. We could just not allow any part of the system to proceed past anything remotely harmful, while still providing the effects you listed to ensure immersion. Correct?


@Shodan and CraigD...


Every time I come on here and read you guys' posts, I feel like I have to hibernate for a week so my brain can rest, considering you guys are incredibly bright! No sarcasm intended. I will look into haptic feedback and see what I can find. I have a general idea of where to begin, and I'll hopefully have some credible information or concepts to pass on to you guys.


Lol honestly it feels like you guys are over on one side of the lab building a rocket with toothpicks and I'm in the corner beating 2 rocks together to make ice!


But I'll keep it brief and just end it with "I'll do my best."


The DDW wouldn’t be innately safe. Because it must have the mechanical power to simulate strong forces on the limbs and cause strong accelerations, it could injure the user, either due to a hardware or simulation software failure, or if the simulation were allowed to too accurately simulate a violent event.

What if we didn't give the DDW the ability or command to cause such an incident? For example, in the show, microwaves could destroy your brain. One could assume that the creator of the NerveGear put that command in and allowed the machine to generate microwaves to do that. We could just not allow any part of the system to proceed past anything remotely harmful, while still providing the effects you listed to ensure immersion. Correct?

The problem is one of realism vs. safety.


For the DDW to be realistic, it must be able to subject its user’s body to forces nearly identical to those experienced in real life. Forces experienced in real life can injure you. So to be realistic, the DDW must be capable of injuring the user.


We would, of course, write software as failure-proof as possible to prevent the DDW from injuring its user, by “fading out” forces that, were the simulation perfectly accurate, would injure them. If this software failed, however, the user could be injured.
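A sketch of that “fading out”, with illustrative limits (these are not real safety thresholds): forces below a safe level pass through unchanged, and anything above is compressed smoothly so the output can never exceed a hard ceiling.

```python
import math

def fade_out(force, safe_limit, hard_limit):
    """Pass forces under safe_limit through unchanged; compress anything
    above it into the headroom below hard_limit, so the output magnitude
    asymptotically approaches but never exceeds hard_limit."""
    mag = abs(force)
    if mag <= safe_limit:
        return force
    headroom = hard_limit - safe_limit
    excess = mag - safe_limit
    faded = safe_limit + headroom * (1.0 - math.exp(-excess / headroom))
    return math.copysign(faded, force)
```

For example, with a 200 N safe limit and a 400 N ceiling, a commanded 150 N passes unchanged, while a simulation bug commanding 10,000 N delivers no more than the 400 N ceiling.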


Alternately, we could build the system with the assumption that it not be truly realistic, but rather “dialed down” to be merely representational. For example, all of its skeletal muscle forces could be 1/10th their actual magnitude, effectively making the user feel either 1/10th their usual mass, or 10 times as strong, or some combination of the two. This might be desirable for game reasons, as people might enjoy the experience of being supernaturally strong and/or light.


A complication is that injuries don’t always require great force. Some injuries are caused by relatively small forces for which a person is unprepared, or small forces applied with poor physical posture or technique.


A fundamental problem is that, to provide realism, the system must be able to produce high, brief forces as well as low, prolonged ones. If a single mechanical actuator is used to produce both kinds of forces, a software or hardware malfunction could cause it to produce a high, prolonged force. A possible solution would be to use multiple actuators, some of which are mechanically capable only of short-duration forces, others capable of long-duration, but low, forces.
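One way to model that short-duration actuator (hypothetical numbers; a real design would enforce the limit mechanically, e.g. with thermal mass or a pneumatic reservoir, so no software bug could override it): give the high-force channel an energy budget that drains fast under load and recharges slowly.

```python
class ShortBurstActuator:
    """High-force actuator limited to brief impulses: its force-time budget
    drains while it pushes and recharges only slowly, so a stuck 'full
    force' command sags to a harmless trickle. Illustrative numbers."""

    def __init__(self, max_force=500.0, budget=100.0, recharge_rate=10.0):
        self.max_force = max_force          # N
        self.budget_max = budget            # newton-seconds available at once
        self.budget = budget
        self.recharge_rate = recharge_rate  # newton-seconds regained per second

    def command(self, force, dt):
        """Request `force` (N) for `dt` seconds; return what is delivered."""
        force = min(force, self.max_force)
        if force * dt > self.budget:        # budget exhausted: output sags
            force = self.budget / dt
        self.budget = min(self.budget_max,
                          self.budget - force * dt + self.recharge_rate * dt)
        return force

# A malfunction commands maximum force forever; the actuator briefly
# delivers 500 N, then collapses to the trickle its recharge rate allows.
act = ShortBurstActuator()
delivered = [act.command(500.0, 0.01) for _ in range(1000)]
```

The burst channel handles impacts, while a separate weak actuator (not shown) supplies the continuous forces for weight and movement resistance; neither can produce a high, prolonged force even when commanded to.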


As much as I appreciate your commitment to the original design, I can see CraigD's idea for a DDW being a good way to start the process. It's robust, dependable, non-lethal, non-invasive, and doesn't put the player in immediate danger when in use. But as an actual NerveGear supplement, it won't have quite the same effect. However, it's better to start on concrete footing and build up to the sky than to keep your head in the clouds and believe you're walking on sunshine.


We have to remember that, at present, we don't have all the information needed to deem this research doable. There's the major issue of the read/write process to the brain, and of running it finely enough to isolate the brain without stopping the rest of the body, hence my idea of a virtual brain: a copy of your neural responses to sensitivity tests, eye response, hearing, and whatnot. It would also be possible to link the NerveGear to the player through neural readings or retina ID from the eye shield on the helmet.

But with a virtual brain we can guarantee an extra layer of player safety, since it's not really their brains...


Besides, if you'll recall from Alfheim, there was a room full of virtual copies of the 300 or so trapped players' brains. I doubt they went out of their way to render the brains for experimental testing because 'it looks cool' :p


I also agree with CraigD's idea of creating the DDW, which may already be a good basis from which to begin; in fact it is very tempting.

That said, I have some doubts: keeping multiple VR devices connected, and running them all correctly and simultaneously. Also, in my opinion, players who use such a device will need a good gaming computer to connect to and correctly operate this kind of NerveGear. This leads me to think it will take quite a bit of money to play a VRMMORPG, which seems to me an obstacle to commercialising the product. I would like to be proved wrong on this, though for me it is a secondary issue.

A marginally interesting DDW device, the way CraigD described it, would cost more than a good car, and it would not give an experience anywhere near the NerveGear's. It's a totally different project. Waldo devices have been around for decades, but they are not suitable for anything like a VRMMORPG experience. Controlling combat exoskeletons, yes; other robots, sure. But not an immersive VR experience.

Link to post
Share on other sites
