Science Forums


Hi guys...

Even as a child, I always dreamed of entering a complete virtual-reality world where each of us could have the world we want and live where we want, simply by choosing the game or server that suits us. After watching Sword Art Online I began to want this technology, and I think that with current technological development we could already start working on this project; in a few years we might already have our own NerveGear. In fact, I'm convinced that we could have this technology within ten years at most. Now to the point: I created this thread because I want anyone who has any idea for developing this technology to put it into action. I want us to start doing concrete things. Do you have ideas? Good: start working on them, start developing them, put together a team, and work so that your ideal, or mine, or that of many others, is realized. But I repeat, we have to start working on this in a concrete way; just talking about it, or doing research without starting to develop our ideas, is useless.

Well, I hope that despite my bad English we understand each other. I would be very happy if you began to create real development teams. I'd like to be helpful, but I'm still 17 and finishing a diploma in economics; maybe I can put you in touch with someone to create a website or something else. In short, I could still make myself useful, and I'm thinking of attending university for mechanical engineering or something similar.

And with this I salute you, and remember: if you want to start developing the NerveGear we've dreamed of so much, let's get to work.

I expect many answers.

 

PS: Yes I'm Italian :D

 

ITALIAN VERSION / VERSIONE ITALIANA

Salve ragazzi... Fin da bambino ho sempre sognato di entrare in un mondo di realtà virtuale completo dove ognuno di noi potrebbe avere il mondo che desidera e in cui vorrebbe vivere, gli basterebbe solo scegliere il gioco o server adatto a lui, poi dopo aver visto Sword Art Online ho iniziato a desiderare questa tecnologia e penso che con lo sviluppo tecnologico attuale potremmo già iniziare a lavorare su questo progetto e che in qualche anno potremmo già avere il nostro nervegear, anzi sono convinto che entro massimo 10 anni avremmo questa tecnologia, ora arrivo al punto ho creato questa discussione perchè voglio che chiunque di voi abbia qualche idea per sviluppare questa tecnologia la metta in atto voglio che iniziamo a fare delle cose concrete. Avete delle idee? bene iniziate a lavorarci su iniziate a svilluparle mettete insieme una squadra e lavorate fate in modo che il vostro ideale o il mio o quello di tanti altri si realizzi. Ma ripeto dobbiamo iniziare a lavorarci su in modo concreto parlarne solo o fare ricerche senza iniziare a sviluppare le nostre idee è inutile. Bene spero che nonostante il mio pessimo inglese ci siamo capiti, sarei molto felice se iniziaste a creare dei veri team di sviluppo, mi piacerebbe esservi utile ma ho ancora 17 anni e mi sto diplomando in economia, ma magari posso mettervi in contatto con qualcuno per creare un sito web o altro, insomma potrei comunque rendermi utile e sto pensando di frequentare un universita di meccanica o ingegneria ecc.

E con questo vi saluto e mi raccomando se volete iniziare a sviluppare il nervegear tanto sognato iniziamo a metterci a lavoro.

Aspetto tante risposte.

 

Link to comment
Share on other sites

Welcome to hypography and its Fulldive Tech forum, Kiriel! :)

 

I’m impressed that you appear to have gotten past the language barrier using Google Translate or something similar! :thumbs_up If it hadn’t failed to translate “bene iniziate a lavorarci su iniziate a svilluparle”, and you hadn’t included the Italian version of your post, I doubt I’d have noticed.

 

... now we get to the point I created this thread because I want any of you have any ideas to develop this technology to put in place I want you to begin to make concrete things.

There are a lot of related topics in this forum, so let’s be clear what specific technology you’re referring to: a brain-computer interface so perfect that its user is unable to distinguish between the computer-generated virtual reality it provides and the “actual reality” we’re currently experiencing. For short, let’s call this “NerveGear”, borrowing from the Sword Art Online anime that inspired so many people to think about actually realizing this technology.

 

There are many good discussions to be had about other topics needed for a complete system, such as how to write “artificial intelligence” programs to procedurally generate a VR world and fill it with human and nonhuman animals indistinguishable from actual humans and animals, but let’s not discuss that now.

 

If you thoroughly read this forum and more widely, you’ll see that I advocate a NerveGear that works by quickly and painlessly infiltrating the brain with 100,000 to 10,000,000 wire-like machines, about 10 nm (10⁻⁸ m) in diameter, that remain connected to a helmet-like part.

 

These “nano-wire” machines would need to have several powerful functions, a primary one being the ability to precisely and swiftly “burrow” between many kinds of cells in the user’s body. So I think the best starting place for development is building machines like these, starting with existing techniques for pushing bundles about 7 × 10⁻⁶ m in diameter, consisting of about 100 wires, into the brain through blood vessels.
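As a sanity check on those numbers, here is a quick back-of-envelope sketch (assuming the wires tile the bundle's circular cross-section with no packing losses) of how thick each wire in such a 7 × 10⁻⁶ m, ~100-wire bundle must be, and how far that is from the proposed 10 nm machines:

```python
import math

def wire_diameter_in_bundle(bundle_diameter_m: float, n_wires: int) -> float:
    """Upper bound on individual wire diameter, assuming the n_wires
    tile the bundle's circular cross-section with no packing losses
    (real close-packing wastes ~9%, so actual wires are a bit thinner)."""
    bundle_area = math.pi * (bundle_diameter_m / 2) ** 2
    wire_area = bundle_area / n_wires
    return 2 * math.sqrt(wire_area / math.pi)

# The endovascular bundles described above: ~7e-6 m across, ~100 wires.
d_existing = wire_diameter_in_bundle(7e-6, 100)   # ~7e-7 m, i.e. ~700 nm per wire

# So each wire in today's bundles is ~70x wider than the proposed
# 10 nm (1e-8 m) nano-wire machines.
scale_gap = d_existing / 1e-8
```

So the existing technique is roughly two orders of magnitude short of the proposed wire diameter, before considering self-movement at all.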

 

See here and here for more description of imagined and currently (2005) realized ideas.

 

Because these “nano-wire” systems would be able to infiltrate not only the brain but any body tissue, they would enable not only the NerveGear, but tremendous improvements in medical diagnostic imaging and surgery.

Link to comment
Share on other sites

I advocate a NerveGear that works by quickly and painlessly infiltrating the brain with 100,000 to 10,000,000 wire-like machines about 10 nm in diameter that remain connected to a helmet-like part.

There is currently a DARPA project involved with attempting to do this with up to a million electrodes.  From the proposals submitted so far, we are very far away from being able to achieve that.

 

Keep in mind that state of the art right now is the Utah array, with up to 64 electrodes.  People are using these for prosthetic interfaces.  A large array (with a plug to the outside world) gets implanted in the patient's brain, and it allows the brain to control the patient's arm in cases of quadriplegia.   It's an acute treatment, which means it has to come out in a few months when the infections get bad.  When it comes out, it takes a chunk of the patient's brain with it, since the brain tissue grows into the array.  This is considered acceptable because the people were paralyzed anyway; they weren't really using their motor cortex.

Link to comment
Share on other sites

There is currently a DARPA project involved with attempting to do this with up to a million electrodes. From the proposals submitted so far, we are very far away from being able to achieve that.

I think you’re referring to DARPA’s Neural Engineering System Design (NESD) program. Its solicitation page is here, the best short description I could find, here, and a helpful FAQ page for proposal writers (with some annoying broken URLs) here. (PS: the “hypo” in “hypography” stands for hyperlink, so please link to things you refer to in your posts. Without hyperlinks, computer-based reading is no better than paper-based reading!)

 

Thanks for the reference. :thumbs_up Though modestly funded (US$60,000,000 over 4 years), NESD sounds like a good thing for the development of brain-computer interface technology, and I look forward to following its progress.

 

NESD’s basic requirement is for a surgically implantable device that can read at least 1,000,000 individual neurons and stimulate at least 100,000 in a 2 cm² area of the brain. This is much less ambitious than my proposal (and a bit more flexible, in that NESD explicitly specifies the device is not required to use electrodes, leaving the optogenetic option open), which is for a self-installing array of 100,000 to 10,000,000 electrodes.
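A quick back-of-envelope comparison of the densities these figures imply. The channel counts are from this thread; the Utah array footprint (4 mm × 4 mm) is an assumption, the commonly cited size of its base:

```python
# Back-of-envelope densities implied by the figures above. All channel
# counts are from this thread; the Utah array footprint (4 mm x 4 mm)
# is an assumption, the commonly cited size of its base.
AREA_MM2 = 2 * 100                              # 2 cm^2 expressed in mm^2

nesd_read_density = 1_000_000 / AREA_MM2        # neurons read per mm^2
nesd_stim_density = 100_000 / AREA_MM2          # neurons stimulated per mm^2
utah_density = 64 / (4 * 4)                     # electrodes per mm^2

# NESD asks for read-out roughly 3 orders of magnitude denser than
# today's implanted arrays.
density_ratio = nesd_read_density / utah_density
```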

 

The key difference between NESD and my idea is that mine requires that the electrodes be self-moving. So my approach requires the development of a new technology not needed by NESD – nanoscopic machines that can “crawl” between cells, trailing a “tail” that connects them to a macroscopic source of power and control.

 

The key difference between my idea and the better-known “free-swimming nanobot” that Drexler and others have written about since 1986 (they were featured in several episodes of the 1980s–90s TV series Star Trek: The Next Generation!) is this long tail. I think of my idea as “2D nanotech” as distinct from the “3D nanotech” featured in Drexler’s writing and ST:TNG. My hope is that this avoids the well-known power, “fat fingers”, and “sticky fingers” problems raised in objection to Drexler’s ideas, as well as avoiding the need for amazingly smart computers that can fit in a 10⁻²⁴ m³ volume.

 

Keep in mind that state of the art right now is the Utah array, with up to 64 electrodes

There’ve been a few multielectrode arrays (MEAs) with more than 64 electrodes (e.g. the Dobelle Eye, ca. 2000, which had 100; Bionic Vision Australia has made retinal-implanted MEAs with 1024 electrodes), but I agree that there’s been surprisingly little increase in the number of electrodes since the first brain-implanted MEAs in the early 1980s (not all are implanted; most MEA-based neurological research is done in vitro). NESD’s goal of more than 1,000,000 should break the ~1,000-electrode “barrier” by a factor of 1,000 :)

 

It's an acute treatment, which means it has to come out in a few months when the infections get bad. When it comes out, it takes a chunk of the patient's brain with it, since the brain tissue grows into the array. This is considered acceptable because the people were paralyzed anyway; they weren't really using their motor cortex.

I believe it’s an immune system inflammatory response, not an infection, that limits how long chronic electrode implants can be left in the brain.

 

I recall reading some papers in the last few years about chronic electrode implants that could be left in for a little longer than 18 months, and systems with only a few electrodes, like DBS “brain pacemakers”, used to treat conditions like epilepsy and depression, commonly last longer than their 4-year battery life, but the progressive injury to the brain of implanted electrodes is a major problem for BCIs like the imagined NerveGear. (The fictional device uses “microwave transceivers”, so doesn’t have this problem, but there are many reasons to conclude that wireless brain reading/stimulating systems of the needed resolution are impossible)

 

My “self-moving nano-wire” scheme is attractive with regard to this problem, because it automatically implants and removes itself with each use. It’s also smaller (10⁻⁸ m) than single microelectrodes (around 10⁻⁶ m at the tip to 10⁻⁵ m in the main body) or MEAs (about 10⁻² m), so it should innately injure the brain less. I’m hopeful that, because it’s smaller than the cells it’s infiltrating, the nano-wire system wouldn’t injure the brain at all, though I doubt it could avoid an inflammatory response. It wouldn’t need to, though, because it would only be left in place as long as the user was using the NerveGear device, which I hope would rarely be more than 24 hours.

Link to comment
Share on other sites

Before I get to the whole of what I want to say, I'd like to just make sure that there's a few things addressed:

 

Firstly, I've been reading a variety of threads on how something akin to NerveGear could be developed in theory. While many would advocate the non-intrusive model depicted in the anime, I personally support CraigD's proposal of using nanotechnology. People may speculate on the dangers of such technology, but I'd point out that such a system, coupled with a backup battery (in the event of a power outage), would in fact be safer overall. Unlike the microwaves used in the anime, which require a certain amount of power to produce, the nanotech would ensure the user's safe and immediate re-entry into "real space", avoiding the risk of sending the user into permanent paralysis. That outweighs the ethical issues of the invasive technique, especially since excessive radiation from the waves could damage tissue in ways we have not yet been able to detect or track for a sufficient length of time. I'm no professional in philosophy, but I do major in the subject.

 

Secondly, the issue of overheating the system itself needs to be addressed. Current technology allows only limited run time for an operating system, using fans to circulate cooler air through the heat-generating hardware. With insufficient cooling, the system simply shuts down--as I'm certain we've all experienced at some point with our various devices. Silicon, while versatile, has its limits. In another thread, the possibility of using diamond semiconductors actually seems viable, given that the anime itself references the use of such components within the system (http://swordartonline.wikia.com/wiki/NerveGear, "Transceivers"). These could not only cut the overall heat generated by the system, but also produce the speeds needed to run the game as a whole.

 

Beyond all of that, I believe there is huge potential in submitting grant proposals to kickstart the development of these nanobots, as well as the threads and number of electrodes the project requires to move forward. The largest issue, however, is human testing of such materials. As Institutional Review Boards stand now, it would be a long and arduous process to get every requisite test approved. Their biggest concern, I think, would be the nanobots detaching from the electrode shafts within the human body, leaving no viable way to remove the shafts (which could potentially damage internal organs, but that's another series of events entirely). Going with the diamond semiconductors, and matching the helmet to the individual so the specs are perfect for the user, there's the potential to inlay speakers in every facet of the machine--albeit they would have to be numerous to compensate for their small size. One might even suggest placing a screen on the face of the device so that others could contact the individual in-game, for efficiency's sake. The challenge would be to develop a cushioning for the helmet that is not only durable but moldable, able to provide relative comfort while also housing the nanobots and sheathes.

 

As for how the game itself would run, the anime shows a cartridge being inserted into the device. This could potentially be circumvented by using what we consider, by today's standards, a supercomputer.

 

Those are just my thoughts.

Link to comment
Share on other sites

I think you’re referring to DARPA’s Neural Engineering System Design (NESD) program. Its solicitation page is here, the best short description I could find, here, and a helpful FAQ page for proposal writers (with some annoying broken URLs) here. (PS: the “hypo” in “hypography” stands for hyperlink, so please link to things you refer to in your posts. Without hyperlinks, computer-based reading is no better than paper-based reading!)

 

Thanks for the reference. :thumbs_up Though modestly funded (US$60,000,000 over 4 years), NESD sounds like a good thing for the development of brain-computer interface technology, and I look forward to following its progress.

 

Unfortunately much of that will likely be going away.  The BRAIN initiative that (in part) begat the DARPA program was one of Obama's initiatives.

 

 

I believe it’s an immune system inflammatory response, not an infection, that limits how long chronic electrode implants can be left in the brain.

That's the most proximate issue for the electrode itself.  The infection risk occurs because all existing systems (with the exception of the Medtronic DBS system) have to penetrate the skin.

 

 

My “self-moving nano-wire” scheme is attractive with regard to this problem, because it automatically implants and removes itself with each use. It’s also smaller (10⁻⁸ m) than single microelectrodes (around 10⁻⁶ m at the tip to 10⁻⁵ m in the main body) or MEAs (about 10⁻² m), so it should innately injure the brain less. I’m hopeful that, because it’s smaller than the cells it’s infiltrating, the nano-wire system wouldn’t injure the brain at all, though I doubt it could avoid an inflammatory response. It wouldn’t need to, though, because it would only be left in place as long as the user was using the NerveGear device, which I hope would rarely be more than 24 hours.

That would be great.  But from what I know about the field, we are decades away from such a technology.

Link to comment
Share on other sites

Unfortunately much of that will likely be going away.  The BRAIN initiative that (in part) begat the DARPA program was one of Obama's initiatives.

 

 

That's the most proximate issue for the electrode itself.  The infection risk occurs because all existing systems (with the exception of the Medtronic DBS system) have to penetrate the skin.

 

That would be great.  But from what I know about the field, we are decades away from such a technology.

About the technology, there are actually potential alternatives, such as using viral genetic modification to have the brain produce something akin to the contrast dye used in an MRI. That could greatly increase the resolution of some devices, and the same principle could be used in conjunction with other imaging technologies; i.e., a different dye and not an MRI.

 

As for DARPA funding? We don't have any definite details on that yet; we can only hope he doesn't scrap the program.

Link to comment
Share on other sites

Welcome to hypography, Romer! :) Please feel free to start a topic in the introductions forum to tell us something about yourself.

 

Sorry for the late reply. Interest in the Fulldive Technology forum has decreased a lot of late.

 

Simply put, will psychology help?

I mean, I've seen videos of people being made to see things that aren't there, and yet they can still feel them, even though they aren't there.

I think you’re asking if hypnosis, which some psychologists and psychiatrists use as a therapeutic tool, could be used to create an immersive virtual reality experience like the one depicted in the fictional Sword Art Online manga and anime.

 

I don’t think it could be. I’m familiar with hypnosis from having attended demonstrations of it when I was in college, where one of my instructors was a hypnotherapist, and from reports from a family member who underwent hypnotherapy to help her quit smoking cigarettes. While the techniques can be used to make some people report unreal experiences, the range of those experiences is limited by the surrounding environment and the comprehension and receptivity of the subject.

 

Most hypnotic sessions are subtle, more like deep conversations than the generation of a false reality. The most dramatic example of the latter I’ve seen was a demonstration in which the hypnotist told the subject that they were now as light as a feather, and asked them to jump from a window and float softly to the ground below, then return to the room where the demonstration was taking place. The subject then, without being told to, exited the room and building via its stairs, stood below the indicated window, then returned via the stairs. When asked, they said they had, as asked, jumped from the window and floated to the ground. When I talked to the subject informally hours after the demonstration, he said he knew he had actually taken the stairs, but was comfortable reporting that he had floated from the window to the ground.

 

From my personal experience and experience with psychology literature, I believe depictions of hypnotists causing people to effectively hallucinate are inaccurate. They appear often in books and movies, but rarely or never in reality.

 

The kinds of experience that can be produced via hypnosis are not, I think, what people would want or expect from a VR system.

 

That said, I think that mild hypnosis is common among good “pencil-and-paper” role playing gamers. The imaginative, shared fantasy nature of PnPRPGs lends itself, I think, to mild, informal hypnotic techniques. Though such experiences can be wonderful and immersive, I don’t think anyone would call them VR.

Link to comment
Share on other sites

  • 3 weeks later...

I think 10 years may be optimistic. We have seen great strides this year. Some researchers managed to control a robotic arm with their minds, fingers and all. DARPA still has that electrode-array proposition up. VR headsets have sold really well. My issue with that number is that we aren't just building electrodes and parsing data.

 

After building a device for reading the brain's states and figuring out how to parse them, you need a CPU to do it in real time. I personally believe a CPU using a Cell-like architecture will be what is finally accepted. Then we need a device to send data back. Either nano-wires, infrared radiation, pure electricity... There are quite a few options.
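To put a rough number on "in real time", here is a sketch of the raw data rate a NESD-scale read-out would produce. The channel count is from this thread; the sampling rate and sample width are illustrative assumptions, not specs:

```python
# Rough raw data rate of a NESD-scale read-out, to show why "parse it
# in real time" is nontrivial. Channel count is from this thread; the
# sampling rate and ADC width are illustrative assumptions, not specs.
CHANNELS = 1_000_000        # neurons/electrodes read simultaneously
SAMPLE_HZ = 1_000           # assumed 1 kHz per channel (spike detection wants 10-30 kHz)
BITS_PER_SAMPLE = 10        # assumed ADC resolution

bits_per_second = CHANNELS * SAMPLE_HZ * BITS_PER_SAMPLE
gb_per_second = bits_per_second / 8 / 1e9   # ~1.25 GB/s before any decoding
```

Even under these conservative assumptions, the read-out side alone produces over a gigabyte per second that must be decoded with millisecond-scale latency.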

 

Let's say we take 2 years just to finalize a device with enough resolution to mimic a NerveGear (I think DARPA's giant EEG array has had no progress in one year). Now, we need a CPU to handle it. It took 7 years to make IA64 (the most recent major from scratch CPU architecture). Supercomputers can be used to code in the meantime (theoretically). We might take less time due to copying from an existing arch. We are now 9 years in and we have a prototype device for reading the brain accurately. 

 

Using the knowledge we have and the fact we already use implants, I assume development on a device to send data at a high resolution will be relatively concurrent. Any time spent will merely be acquiring this tech from a different group/adapting it. We are still at 9 years.

 

BUT WAIT! Safety! To undergo enough tests to declare this safe for any sort of release, it will take multiple years unless everyone gets so excited they fast track it either by 1) cutting corners or 2) having a huge team. 

 

Finally, it has to be shipped and code has to be put in place. I would put my estimate at 15 years. That can be argued optimistic. Also pessimistic. I don't know. Any CPU dev will likely be halted by Silicon's natural limits so we might have to wait for a new substance. Using something like nanowires/nanobots would also slow things down. I can't predict the future. The best way to find out would be to wait.

 

When it comes to making a dev team, good luck. Plenty of these comments come and go. I'm 'one to talk', as I host a group. I doubt you will do anything from an official standpoint (legally registering as a company). I doubt you could match up to DARPA or other similar organizations. But I wish you luck. It would be nice to see an underdog win for once.

 

--@Kayaba

Edited by BrainJackers
Link to comment
Share on other sites

  • 1 month later...

Ok, just made this account to join the topic. For everyone viewing, I want to give my idea of how this would work in simple terms.

 

We could interact with the parts of the brain responsible for movement to move your character in game. For instance, if you try to move your leg in the real world, the device, whatever it is that picks up the signal, intercepts it and reroutes it to the game, moving the leg exactly as the brain instructed it to move, which is what I think happens in SAO.
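The intercept-and-reroute loop described above could be sketched like this. Everything here (the MotorIntent type, the decoded-signal interface) is hypothetical, since no real BCI exposes anything like it yet:

```python
from dataclasses import dataclass

# Hypothetical decoded motor command -- no real BCI exposes an
# interface like this yet; the structure is purely illustrative.
@dataclass
class MotorIntent:
    limb: str          # e.g. "left_leg"
    direction: str     # e.g. "forward"
    effort: float      # normalized 0..1

def reroute(intent: MotorIntent, avatar: dict) -> dict:
    """Apply the decoded movement to the in-game avatar instead of
    letting the signal reach the real limb, as the post describes."""
    avatar[intent.limb] = (intent.direction, intent.effort)
    return avatar

avatar_state: dict = {}
reroute(MotorIntent("left_leg", "forward", 0.8), avatar_state)
```

The hard parts, of course, are the decoding that produces a MotorIntent and the blocking of the real motor signal, neither of which current technology can do reliably.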

 

For hearing: advanced headphones with lifelike surround sound.

 

For sight, we just need to figure out how to access the part of the brain that receives the data from your eyes, and how to feed images to it directly.

 

Taste not being that important, we could just go with textures and interact with the part of the brain that recognizes feeling.

 

The same goes for interacting with the world: intercept the nervous system and send signals through the brain.

 

I'll let you brainiacs out there figure this out, as a lot of it already has been figured out.

Link to comment
Share on other sites

That is what the idea is, on the most basic level.

 

Why are you talking about headphones if you're also talking about directly stimulating other brain cells?

 

"Just" doesn't apply to anything here. Implementing the ideas described is extremely hard and requires technology that is still being developed or doesn't exist yet. That's why we don't all have VR devices right now.

 

Taste is important for immersion. It would be quite a disconnect from the virtual world to bite into a piece of bread just to get no taste...

 

A lot hasn't been figured out. A lot has been theorized. Dr. Jianjun Meng (and others) developed a mind-controlled robot arm with reasonable accuracy. Still not good enough for sword swinging. We have some ability to simulate textures, but it's still very rough.

 

We have no BCI that can currently convert vocal cord movement to text. We have no stable method of body paralysis. We have no way to simulate smell/taste/audio. We have no stable method of blocking external signals.

 

I encourage you to read up on current research, DARPA's BCI projects, how hard it would be to recreate those experiments at home...

 

--@Kayaba

Link to comment
Share on other sites

For taste with this not being that important, we could just go with textures and interact with the part of the brain that recognizes feeling.

Unfortunately, I think you are quite mistaken in one regard. True immersion would be possible today IF we could stimulate taste. Current VR systems already stimulate the user sufficiently to invoke reflexive responses to stimuli, but what we can't do is simulate taste and smell (which are closely linked senses); if we could, the adoption rate for VR technology would likely be significantly higher. Scent is just as important as any other sense for full immersion; you can't have full immersion without all of them.

Edited by NotBrad
Link to comment
Share on other sites

I know we can stimulate muscles. The issue is tracking them. That robot arm only had 75% accuracy, not nearly enough for a full VR world. It also could not handle finger movement.

 

Simulating the feel of textures is still extremely rough, but it can be done.

 

IIRC we cannot simulate audio in. We also cannot get audio out (we can get yes/no with greater than 50% accuracy but it's still rough).

 

Taste and smell aren't something we can do, as you said.

 

We have the basics, but not the true technology needed.

 

--@Kayaba

Edited by BrainJackers
Link to comment
Share on other sites

Dr. Jianjun Meng (and others) developed a mind controlled robot arm with reasonable accuracy. Still not good enough for sword swinging.

“Not good enough for sword swinging”, is, I think, both an understatement and an overstatement. To understand why I think this, we need to understand in some detail what the study participants of research into EEG-controlled robot arms like Meng and Hortal’s are actually doing.

 

Using free and commercially available hardware and software, with EEG electrodes stuck to their scalps, they practice producing several EEG (“brain wave”) patterns that can be recognized by a computer program monitoring the electrodes’ output. In Meng’s study, these corresponded to thinking about moving their left hand, their right hand, both hands, or relaxing both hands. Hortal’s subjects produced recognizable EEGs by performing distinct alphabetical or numerical mental exercises. These EEG-recognition events caused a robotic arm to move left, right, forward, or backward (in one phase of the test; later phases used the same events to control lowering and raising the arm and closing and opening its grippers). The participants then tried to control the robot arm to successfully perform tasks like hovering over 1 of 4 target objects for 2 seconds, which in Meng’s study they were able to do about 77% of the time, taking about 7 seconds.

 

What’s important to note here is that the study wasn’t reading the brain effects of the participants’ natural movements in tasks like picking up objects. They were essentially “pushing buttons” to control the robot arm, but instead of using the usual fingers on buttons, by achieving distinct mental states.

 

Source: Meng et al., “Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks”, Scientific Reports, published online 2016-12-14

 

I said “both an understatement and an overstatement” because the way a sword is swung in most video games is actually easier than the robot arm controlling tasks Meng’s study participants were attempting. In a video game, swinging a sword usually requires just one button push to trigger an animated sequence in the game avatar, fewer and less precisely controlled actions than the “4 buttons” in the studies.

 

I don’t think it would be difficult to replace the robot arm in Meng’s study with inputs to a video game requiring only 4 buttons, like a 2-D fighting game with only left, right, jump, and attack buttons. Despite being “mind controlled”, I expect such an experience would feel much less immersive than playing the game with the usual fingers and buttons.
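A minimal sketch of that “4 buttons” idea, mapping Meng-style classified mental states to game inputs. The state labels here are illustrative, not from the study; a real classifier would also report a confidence score and misclassify roughly a quarter of the time:

```python
# Mapping Meng-style classified mental states to the four buttons of a
# simple 2-D fighting game. The state labels are illustrative; a real
# classifier would also report confidence and err ~25% of the time.
STATE_TO_BUTTON = {
    "imagine_left_hand":  "move_left",
    "imagine_right_hand": "move_right",
    "imagine_both_hands": "jump",
    "rest":               "attack",
}

def eeg_to_input(classified_state: str):
    """Return the game input for a decoded mental state, or None for
    anything the classifier couldn't match to a trained state."""
    return STATE_TO_BUTTON.get(classified_state)
```

Nothing about this mapping is immersive; the bottleneck is entirely in how slowly and unreliably the mental states can be produced and recognized.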

 

The few examples I’ve seen of EEG-based controls being arguably better than ordinary controllers, with their motion sensors, joysticks, and buttons, were when the game called for the player to do something weird, like telekinetically levitate an object, as in the little “Jedi Mind Trainer” game that Emotiv gives away for people to use with its EPOC EEG+motion-sensor headset. Here’s a review of that and the other 2 freebie “Cortex Arcade” EPOC games, “EmotiPong” (Pong) and “Cerebral Constructor” (Tetris).

 

True immersion would be possible today IF we could stimulate taste.

I don’t think this is true. The best present day non-intrusive BCIs, like those I discuss above, are less good at creating a sense of immersion than ordinary controllers.

 

Simulating tastes and smells in movies or video games is not, I think, too difficult. All that’s necessary is to actually spray or squirt something with the desired taste on cue. Such technology is not much to be found, I think, because people don’t much want it.
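As a sketch of that “spray on cue” approach: game events mapped to scent-cartridge triggers. The ScentEmitter device and its API are entirely hypothetical; no standard interface for such hardware exists:

```python
# Game events mapped to scent-cartridge triggers. The ScentEmitter
# device and its spray() call are entirely hypothetical hardware.
class ScentEmitter:
    def __init__(self, cartridges):
        self.cartridges = cartridges          # slot number -> scent name
        self.log = []                         # stand-in for hardware pulses

    def spray(self, scent, ms=200):
        slot = next(s for s, name in self.cartridges.items() if name == scent)
        self.log.append((slot, scent, ms))

emitter = ScentEmitter({0: "bread", 1: "smoke"})
EVENT_SCENTS = {"eat_bread": "bread", "campfire_lit": "smoke"}

def on_game_event(event):
    """Fire the matching cartridge when a scent-tagged event occurs."""
    if event in EVENT_SCENTS:
        emitter.spray(EVENT_SCENTS[event])

on_game_event("eat_bread")      # biting the virtual bread triggers slot 0
```

The software side really is this simple; the hard (and historically unpopular) part is the cartridge hardware itself.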

 

Smell simulators make forays into the public market from time to time (“Smell-O-Vision” and “AromaRama” ca. 1960; “iSmell”, “AromaJet”, “Scent Dome”, “P@D” and others since 2000), but despite growing sophistication, nobody seems much interested in them. This Wikipedia article has a pretty good history of these odd, unpopular systems.

 

I don’t want to give too gloomy a prognostication of what might be possible with present-day technology like EEG and EMG, so I created this new thread, “A Low-Tech, Full-Body, Natural Motion, EMG-Based VR Input System”.

Link to comment
Share on other sites
