# How Long Until We Could Make A Real Sword Art Online (sao) Nerve Gear Type Device

VirtualReality

258 replies to this topic

### #18 TanglingTreats

TanglingTreats

Curious

• Members
• 7 posts

Posted 26 December 2014 - 05:07 AM

Hi all, I have been reading this thread and find the ideas amazing. I think that if we could differentiate the electrical pulses that come from our brains and codify them for different actions in the game, just like the way a computer works when you press the keyboard and the words appear on the screen, then half of the work would be done. But I'm not sure how this can be done, because:

I don't really know if the electrical pulse that your brain sends to move your right hand is the same as the one it sends to move your left foot, just in a different nerve. If that is the case, then we would have to place specific receptors on all the nerves that come from the brain, which would be much more difficult than the first case.

If we can differentiate the electrical pulses, we should also inhibit them to prevent the movement that we want to do in the game from being made in reality. But we cannot inhibit the ones that are used for vital functions such as breathing and all the other organs, which should be kept working. And if we don't transcribe the breathing pulses into the game, we would not be able to breathe inside the game (not really a problem, but it might result in less immersion). So it's not only about inhibiting and recreating in the virtual world; in this case we could have these actions replicated and done simultaneously in both the real and the virtual world.

I'm sorry for my English, it's not my first language, but I hope you can understand what I mean even if I use the wrong vocabulary sometimes.

Yea, that's possible. You could check out the first link in my first post for such a technology based on reading electrical pulses or activity. It is possible to have certain movements created using our mind (which is what we do every day anyway) in the computer itself. However, it's only possible if the computer can read what is sent through the device, which means software needs to be created to allow the device to do such a conversion: reading brain activity and translating it into electrical signals that the computer can receive and understand. Do correct my mistakes if there are any; I'm not sure if my sources are reliable or whether I've misinterpreted any facts.

### #19 gonnn

gonnn

Curious

• Members
• 8 posts

Posted 26 December 2014 - 09:01 AM

I have seen the link and I think the device is awesome, but to create a virtual reality we would need to go a little further with it: not only read all the pulses, which should include the actions that we do unconsciously too, but also prevent the pulses from reaching the body and make them go to the receptor, which must be an inhibitor too. What concerns me is that although in a few years we will probably be able to read and differentiate all the pulses, we would probably need intrusive surgery to place the inhibitors somewhere that doesn't prevent the pulses from happening, but only from reaching the rest of the body.

I'm thinking about it not just as a way of making a videogame, but as a way of reproducing a missing limb in a prosthesis, which could be a huge step forward in medical science and is closely related to the virtual reality system portrayed in SAO.

Next year I will enter university to study biotechnology, so I'm not sure if what I'm saying is correct; these are only conjectures based on what I have studied in biology, which isn't much at the moment.

### #20 TanglingTreats

TanglingTreats

Curious

• Members
• 7 posts

Posted 26 December 2014 - 09:22 PM

Speaking about prostheses, here's another link I have. Even though it's not in a virtual world, I think the theory behind it could produce similar results for virtual reality.

### #21 zazz54

zazz54

• Members
• 67 posts

Posted 27 December 2014 - 07:32 AM

Speaking about prostheses, here's another link I have. Even though it's not in a virtual world, I think the theory behind it could produce similar results for virtual reality.

The article was intriguing to say the least. I am astounded at the progress we have made in this area. The possibilities are nearly endless. However, as has been stated, intrusive surgery was needed to make the prosthetic limbs respond. This of course won't do for the consumer wishing to enter a virtual reality simulator. Has there been any work done on a 'connection' that isn't as intrusive?

### #22 CraigD

CraigD

Creating

• 8034 posts

Posted 27 December 2014 - 12:25 PM

It’s great to see everyone so enthusiastic about creating a true immersive brain-computer interface like the fictional SAO’s Nerve Gear!

After going through a few pages on the internet, I came across this: http://emotiv.com/. It seems they've managed to read brain waves through EEG (electroencephalography), like what Jacob mentioned when he started this thread.

Off the top of my head, I remembered (their existence, not their release dates) these “consumer grade” EEG (or perhaps more EMG – more on this distinction later) makers:
• NeuroSky, which sold a small number of developer prototypes in 2007, then a US $199 device in 2009, and currently sells the least expensive EEG device, the $79 MindWave
• Mattel, which sold the Mindflex game in 2009 for about $150. The Mindflex uses, under license, hardware and software developed by NeuroSky
• Uncle Milton toys, which sold the Star Wars Force Trainer in 2009 for about $130 (this blog page has a good description of its important part). It uses Mindflex technology, too.
• Emotiv, as noted above, which makes the EPOC, which starts at $399 (the software to get raw data from it adds $300 to the cost).
A quick visit to this Wikipedia article showed there are many more, the first sold (for an out-of-nearly-everybody’s-reach $20,000!) in 2003.

All these devices are inferior to medical EEGs in that, for ease of putting them on and taking them off, they use “dry electrodes” rather than conductive-gel “wet electrodes”, and they have fewer electrodes in fewer places than medical “caps” – from 3 (for NeuroSky-based sensors) to 14 (for the EPOC), vs. the usual 24 or 32, or as many as several hundred, used in clinical medicine and medical research.

Another issue with non-medical EEG devices is that, except for some limited research done to answer just that question, it’s unclear what they’re actually measuring. The EPOC is clearly designed to be more of an EMG (electromyograph, which measures muscle, rather than deep brain, activity) than an EEG, as appears to be the case with the Mindflex, too.

In short, these devices don’t really “read the brain” in a usable way; they’re more like unusual input devices. They’re good for games where you want a weird action to control weird in-game mechanics, like telekinesis or magic, but not very good as ordinary pointing/moving/triggering game controllers for un-weird game mechanics.
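To make the “unusual input device” point concrete, here’s an entirely hypothetical Python sketch: it uses no real headset driver or API, and `readings` simply stands in for a stream of 0–100 attention scores like the “eSense” values NeuroSky-based headsets report. The thresholds and sample values are made up.

```python
# Hypothetical sketch only: map a single-channel "attention" score onto
# a discrete in-game trigger (e.g. casting a spell by concentrating).

def attention_trigger(readings, threshold=70, hold=3):
    """Fire a game event when attention stays at or above `threshold`
    for `hold` consecutive readings; return the indices where it fires."""
    events = []
    streak = 0
    for i, value in enumerate(readings):
        if value >= threshold:
            streak += 1
            if streak == hold:          # fire once per sustained burst
                events.append(i)
        else:
            streak = 0                  # concentration lapsed; reset
    return events

# Simulated once-per-second attention samples from the headset:
samples = [40, 55, 72, 80, 75, 90, 50, 71, 68, 74]
print(attention_trigger(samples))       # one sustained burst, at index 4
```

Note how coarse this is: a single scalar and a threshold is about the level of control these devices offer, which is why they suit “weird” mechanics better than ordinary pointing and moving.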

Edit #2: Here's another video. It's about computer graphics and its potential for portraying realism compared to current polygon-based computer graphics.

Folks should be aware of the history of Euclideon, and careful to separate their well-known hype from the actual (and, unfortunately for people not making money from it, very secretive) software development work they’ve done since their 2010 founding. It’s a good idea to read some criticism of the company and its claims, such as this much-cited post by Minecraft developer Markus “Notch” Persson.

There are many opinions on what Euclideon is up to. My short take is that they initially developed their “Infinite Detail” point cloud/voxel graphics engine in the hope that it would become a popular video game engine technology, but when for many reasons (one of the greatest being, as Notch noted, that it’s notoriously difficult to “deform a voxel cloud” in a way that allows moving objects in a voxel-based graphics program) that didn’t happen, they changed direction to focus on their “Geoverse” scheme for building graphical data using geospatial scanning systems (lidar, etc.). It’s too early to say with confidence, but this change in direction seems to be working better for them, as it’s attracted the interest and support of some big, established scanning companies. As the Euclideon CEO (and main public speaker) Bruce Dell explains in the video, video game companies currently spend a lot of money having artists create graphics data, on which the popularity of their games often depends. Being able to replace this expensive work with captures of real-world data, which is essentially free and in huge supply, could drastically reduce the cost of producing high-quality video games.

Some video game graphics fundamentals are needed to appreciate the controversy over Euclideon’s Infinite Detail.
One of the fundamental programming problems in rendering views of 3-dimensional data is avoiding having to look at all of a world’s data each time you render a view of it. The main technique used to do this is to pre-calculate which points/polygons in the data need to be checked for a point of view in a given rectangular section of the world – a “build map” of all the graphics data in the world.

Bruce Dell describes Infinite Detail as using a revolutionarily clever “search engine indexing system”. This is almost certainly some sort of build map.

A problem with pre-rendered build maps is that, when the world changes, the maps must be re-calculated. Tricks can be used to avoid this for simple things like opening doors, and high-performance map-rebuilding programs (Ken Silverman’s ca. 1995 “Build Engine” is a famous example) have been written, but large moving objects remain a bane of build maps. I suspect Infinite Detail is subject to this bane, without the saving graces of schemes like the Build Engine’s.
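For anyone curious what a “build map” amounts to in code, here’s a deliberately tiny Python sketch. The class and method names are my own invention, not Euclideon’s or Build Engine’s, and real engines use far cleverer structures (potentially-visible sets, octrees), but the precompute-then-lookup shape is the same: objects are registered ahead of time into every grid cell they might be visible from, so a render pass does a cheap lookup instead of scanning the whole world.

```python
# Illustrative only: a uniform-grid "build map" in miniature.

class BuildMap:
    def __init__(self, cell_size, view_radius):
        self.cell = cell_size
        self.radius = view_radius
        self.cells = {}                  # (cx, cy) -> ids possibly visible

    def add_object(self, obj_id, x, y):
        # Register the object in every cell within view radius of it.
        # Moving the object means redoing this work -- the rebuild cost
        # that makes large moving objects a problem for build maps.
        r = int(self.radius // self.cell) + 1
        cx, cy = int(x // self.cell), int(y // self.cell)
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                self.cells.setdefault((cx + dx, cy + dy), []).append(obj_id)

    def visible_from(self, x, y):
        # Cheap lookup instead of scanning every object in the world.
        return self.cells.get((int(x // self.cell), int(y // self.cell)), [])

bm = BuildMap(cell_size=10, view_radius=15)
bm.add_object("tree", 12, 5)
print("tree" in bm.visible_from(25, 3))    # True: viewpoint near the tree
print("tree" in bm.visible_from(80, 80))   # False: far outside its cells
```

The `add_object` cost is exactly the re-calculation burden described above: a static world pays it once, but anything that moves pays it every frame.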

### #23 gonnn

gonnn

Curious

• Members
• 8 posts

Posted 27 December 2014 - 06:02 PM

Speaking about prostheses, here's another link I have. Even though it's not in a virtual world, I think the theory behind it could produce similar results for virtual reality.

I saw this some time ago and found it completely amazing. Even though the movements are still very basic and mechanical, in a few years this could be developed into a completely functional limb that can be used in everyday life. Things like this are what drew me to science since I was a kid. Just imagine explaining to someone from the eighteenth century that a man can move a robotic arm connected to his body just with his thoughts; it's crazy!

If we can send the movements into a computer instead of a robotic limb, and suppress the real movements with the system explained by Craig, I think we would have done almost everything needed to make the project happen, leaving the software problems aside. And if we compare the graphic design of videogames from ten years ago to the ones developed today, we can see huge progress, so I don't think there will be any real difficulty in creating the virtual world with the technology of ten or twenty years in the future.

### #24 TanglingTreats

TanglingTreats

Curious

• Members
• 7 posts

Posted 27 December 2014 - 08:08 PM

@zazz54 Other than what Craig has stated in his reply to my post about the EPOC, I've not read about any other non-intrusive methods of connection. However, I will be looking for more research papers that have been done in the area of BCI.

@CraigD Thanks for the information you've provided. It helped clear up some misconceptions I had and taught me new things. I find it a little hard to imagine the "shared dream" concept though; it'd be awesome if you could elaborate more on it. Since in an MMORPG we gotta know the location of a person in an area, one way or another coordinates will be required to correctly place someone at the spot they are dreaming about, and interaction might be another issue.

Moderation note: A reply to this post was moved to How Long Until We Could Make A Real Sword Art Online (sao) Nerve Gear Type Device

The way I see virtual reality in terms of gaming is that when we are looking at something, we look through the perspective of a "camera" that has been placed at that height and angle, kind of like how games are made today. However, when we look at someone else's avatar in the game, we don't actually see the camera floating there; we see the computer-generated model of their face and everything. For things to be more realistic, this is where reading brain activity comes in: the position of our eyes in the game will be determined by what the device can tell from reading our brain signals, etc. For the sense of touch... I've not really thought that far yet; touch is a really complicated sensory system, feeling heat and the pressure we exert on objects and so on.

@gonnn Yea, you're right. Unfortunately we might not yet be at the stage where we would know a possible way to create and test such a device, but I suppose we can always do our own research and discuss the possibilities here. Video game development has definitely improved; it has grown with the technology we have to this day and will keep growing in the future. What would social media and the internet be without the resources we've taken advantage of, right? As long as research in nanotechnology continues to advance, smaller things will be able to achieve even greater feats, and computing power will increase with them. In fact, I think an improvement in one area of science leads to improvements in others.

### #25 TimberEyes

TimberEyes

Curious

• Members
• 2 posts

Posted 28 December 2014 - 12:37 PM

Hi! I'm finding this topic very interesting to read, and although I don't really have anything to contribute (I'm doing research instead), I have some questions. Of course, some of my questions may be really simple or already asked (despite me reading the thread) which I apologise in advance for.

Would over-heating have an effect on the Nerve Gear? To my knowledge there was a console designed for the NG, which I suspect would have some difficulty (if made in real life) running the enormous maps, rendering, and supporting NPCs and players whilst also reading and writing signals sent from the NG. Although this may not be a problem later on.

Would a console make it any easier for the NG to read the brain, or is that completely dependent upon the helmet? (Bit of a shorter question.)

For ethical reasons, how would the NG protect itself from being hacked and recreating events like what happened in the SAO plot? The example may not even be possible, buuuut if it were, I wouldn't like to be part of it.

Thanks. ^^

On a side note, I'm amazed at how respectful people are in this thread. It's nice to be able to read different ideas without anyone being rude.

### #26 CraigD

CraigD

Creating

• 8034 posts

Posted 28 December 2014 - 11:31 PM

I'm not sure if all the pulses that come from our brain are the same, but in case we can differentiate each one of them ...

There are distinct, identifiable patterns, but they’re not usable to get detailed information about what the brain is doing.

The signal sent from the brain to the muscles isn’t a digital signal, like a word or packet of binary data on a computer data bus or network, but an analog signal. Nerves “pulse” – that is, action potentials propagate along nerves – with a fairly constant duration. A signal to cause a muscle to contract with more force pulses more often – with a higher frequency – than one to cause it to contract with less force.

We can tell, via electromyograms (EMGs), when and how strongly a muscle is being signaled by nerves to move, but only because we place the electrodes for the EMG near the muscle or nerves.

“Brain waves”, which can be measured by a special kind of EMG taken with electrodes placed on the scalp – an electroencephalogram (EEG) – reflect the activity of millions of brain nerves pulsing. They can be used to tell what the brain is doing in a very general sense – sleeping, relaxing, thinking hard, moving muscles – but not the specific muscles being moved, or the specific thoughts being thought. Because electrodes can be placed in many places on the scalp, it’s possible to tell where in the brain specific waves are coming from, and, by knowing what kinds of sensations, thoughts, or muscle movements involve that part of the brain, something about what it’s doing. But because the scalp is far from the brain, and the electric charges an EEG measures follow many paths, the resolution isn’t high enough to get much detail. This resolution can be improved greatly by putting the electrodes closer to the nerves to be measured, by implanting them in the brain.
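A toy illustration of what such coarse measurements look like in software, with everything simplified (a synthetic one-channel signal, a naive DFT, conventional band boundaries): from a scalp signal we can estimate power in broad frequency bands, e.g. alpha (~8–12 Hz, associated with relaxed states) vs. beta (~13–30 Hz, active concentration), but nothing finer-grained like specific thoughts.

```python
# Sketch of coarse EEG band-power estimation on a synthetic signal.
import math

def band_power(signal, fs, lo_hz, hi_hz):
    """Summed DFT magnitude over [lo_hz, hi_hz] (naive O(n^2) transform)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += math.hypot(re, im)
    return total

fs = 128                                     # samples per second
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # pure 10 Hz tone
alpha = band_power(sig, fs, 8, 12)           # "relaxed" band
beta = band_power(sig, fs, 13, 30)           # "concentrating" band
print(alpha > beta)                          # the 10 Hz signal lands in alpha
```

This is roughly the level at which the consumer devices discussed earlier operate: a band-power comparison can say “relaxed vs. concentrating”, and nothing about which muscle or which thought.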

### #27 Turbowheel

Turbowheel

Curious

• Members
• 1 posts

Posted 31 December 2014 - 04:48 PM

jacob2mitts, KiritoAsuna, deadlydoener, darkfightermick, brvndo. I too am going to college and want to work towards making something like a NerveGear. I have actually been planning on making this my goal in life. Maybe we can all come together as a team and make it happen? Let me know if and when you guys want to talk more about this in real time. I have some great thoughts on this.
The only signals from the brain we need to intercept are the ones intended for the body. That would include the muscles and the senses. I believe Chris is right that, as of right now, we do not have technology capable of writing information directly to the nerves. However, I think that if we can use something like the neck to transmit the signals, as has been suggested before, we can let the brain recognize those signals and put them where they need to be itself. We would not need the ability to write directly to the brain in that case, as the brain would do the writing itself. The brain-reading technology does need to be improved, I think, but it likely will be in the next several years, as there are initiatives by the medical community to improve this kind of technology, even if not huge ones.
The NerveGear also blocks the signals from reaching the body where they normally would. That is the part I think we most need to focus on, since as far as I am aware there is no known way to do it at all, short of outright cutting the physical connection between the body part and the brain. A couple of possible approaches are a man-in-the-middle device or a blocking signal, but those would require either a wired connection or extensive research into the subject.
• Brvndo1 and RedicicleV2 like this

### #28 Zachjmac

Zachjmac

Curious

• Members
• 4 posts

Posted 02 January 2015 - 03:57 PM

Hello everybody,

I am another one of the people who created an account just to TRY to contribute to this thread. I am 16 years old, so I do not have much knowledge in the fields required to make NerveGear possible. I recently watched SAO for the first time and was immediately hooked on the idea. I caught up to the current episodes in 2 days and still have not stopped thinking about how amazing it would be to have FDVR. This thread has got me really excited that there are plenty of bright people who are inspired to make the idea possible! I have begun researching all the different elements that would be needed to make such a thing possible, and I had one question. I apologize if it has already been answered or is a stupid question.

I understand why we would need something to read the brain waves, but why do we need something to write to the brain?

Also, I like Craig's idea of Shared Dream Virtual Reality, but wouldn't that be too unpredictable? It does seem to be the safest idea because there is no intrusion needed, but if it is based on the players personal dream wouldn't everyone see something different? Would there be a way to make everyone's SDVR the same?

• RedicicleV2 and nordicpjolover like this

### #29 gonnn

gonnn

Curious

• Members
• 8 posts

Posted 02 January 2015 - 05:16 PM

I understand why we would need something to read the brain waves, but why do we need something to write to the brain?

Because in a fully immersive virtual reality system you wouldn't be wearing headphones to hear what is happening in the game, and you wouldn't be looking at a screen like with the Oculus Rift; you would be in a dream-like state of paralysis, and the process would have two parts (that is what I understand, it may be wrong):

One part would be reading your movements so you could perform them inside the game, while preventing you from doing them in reality with something like anesthesia, for example.

The other part would be writing to the brain the information your senses would receive from the game if it were real life: what you see, feel, hear...

This is the reason why the most optimistic people on the matter think this kind of VR will come in 10 or 15 years, if not more, and not in 3 or 5 years.

This should be researched first to help people who suffer from disabilities; it could let blind people see by writing to the brain the things seen by a camera, or deaf people hear by writing the sounds a microphone captures, for example.

• Zachjmac and nordicpjolover like this

### #30 Zachjmac

Zachjmac

Curious

• Members
• 4 posts

Posted 03 January 2015 - 05:55 PM

I found some cool articles about technology that might be able to be used for the creation of a NG.

If this experiment was legit, we might not be as far away from converting brain waves into usable data as we might think.

http://www.washingto...rain-interface/

Based on this article about sleep paralysis, wouldn't that mean we would just need the NG to send our body into a state where we are producing glycine and metabotropic GABA to make sure our muscles do not react to our brain waves? Or do we need to be able to disable the parts of the brain or nervous system that control our movements?

http://www.livescien...-paralysis.html

Edited by Zachjmac, 03 January 2015 - 05:56 PM.

### #31 furrypot

furrypot

Curious

• Members
• 2 posts

Posted 04 January 2015 - 01:47 PM

Hi everyone. I'm currently working on something like NerveGear; well, the goal is the same, to achieve full-body immersion, just without using microwaves (like the NerveGear). It's based on hypnagogia and synesthesia...

The first mini prototype was successful, but it's not very practical... It is based on hypnagogia/sleep paralysis. You put on any HMD (like the Oculus Rift) and do WILD (a lucid dreaming technique), but with open eyes; after a while hypnagogia starts to appear, and the scene you look at in the HMD becomes more and more vivid until it looks super realistic. The bad thing is keeping your eyes open; with every blink it feels like lifting dumbbells... Another bad thing is that when the scene becomes almost fully immersive I snap out of that state and become fully "awake"... I did a lot of research and found that almost the same had been reported before falling asleep with the Oculus Rift... That mini prototype has no controller for doing actions. It covers only vision, and to some degree it creates the feeling of other senses, but those depend on how you imagine they would feel...

So I'm now making a much more advanced prototype that is based more on synesthesia than hypnagogia. Soon I will post about it on my blog http://furrypotvr.tumblr.com

If you have some ideas or want to make something like NerveGear, we could share some thoughts or even collaborate!

### #32 RedicicleV2

RedicicleV2

Curious

• Members
• 6 posts

Posted 04 January 2015 - 01:57 PM

Hi everyone. I'm currently working on something like NerveGear; well, the goal is the same, to achieve full-body immersion, just without using microwaves (like the NerveGear). It's based on hypnagogia and synesthesia...

The first mini prototype was successful, but it's not very practical... It is based on hypnagogia/sleep paralysis. You put on any HMD (like the Oculus Rift) and do WILD (a lucid dreaming technique), but with open eyes; after a while hypnagogia starts to appear, and the scene you look at in the HMD becomes more and more vivid until it looks super realistic. The bad thing is keeping your eyes open; with every blink it feels like lifting dumbbells... Another bad thing is that when the scene becomes almost fully immersive I snap out of that state and become fully "awake"... I did a lot of research and found that almost the same had been reported before falling asleep with the Oculus Rift... That mini prototype has no controller for doing actions. It covers only vision, and to some degree it creates the feeling of other senses, but those depend on how you imagine they would feel...

So I'm now making a much more advanced prototype that is based more on synesthesia than hypnagogia. Soon I will post about it on my blog http://furrypotvr.tumblr.com

If you have some ideas or want to make something like NerveGear, we could share some thoughts or even collaborate!

I LUB U IF U MAKE IT (no homo)

### #33 jonahismiley

jonahismiley

Thinking

• Members
• 10 posts

Posted 04 January 2015 - 10:50 PM

Yes. I decided to search this topic because I watch SAO and love these exciting topics people like to talk about. I (12 years old), just like Curiosity, am younger and won't know as many neurology terms as others, but I have always wanted to make a device like the NerveGear. I can already see that people like you will make such a device. On the subject of writing: how many studies have been done on writing to parts of the brain? And yes, I agree that it can be very dangerous in many ways, but maybe we are thinking about it the wrong way (yes, I know that line is probably in five billion different sci-fi and detective movies, and it's cheesy); maybe there's a way to do this without writing into the brain. Maybe a type of illusion, perhaps. Without a screen, of course, because that would ruin the purpose of VR. Even though writing into the brain is technically an illusion too, maybe that is the only way to do so.

### #34 CraigD

CraigD

Creating

• 8034 posts

Posted 11 January 2015 - 02:28 PM

... I'm too looking for studies which could help me making a Full-dive device... I hope (believe) that one day we will invent that device.

A lot of science and engineering literature can be found online by searching for “brain-computer interfaces”. Much of it is pretty technical, and all of it takes some effort to digest, but it’s a lot of fun to just surf/wade through. As somebody who’s been interested in the subject since he was a teenager (in the 1970s, when BCIs were a distant dream, almost wholly in the realm of speculative fiction), I make a habit of following its literature, though I’ve been doing a lot of catching up, and revising many of my assumptions, since this thread started.

The best general summary of the subject I’ve found in a professional journal is the introduction and “review of state-of-the-art systems” in IEEE Signal Processing, Jan 2008, which can be read here. My July 2014 post in this thread is pretty good for an amateur, I think, and rare in that it directly compares real and fictional examples.

Here’s my partial synopsis of the current state-of-the-art in brain-computer interfaces, and speculation about its likely near future.

BCIs can be categorized on a few main axes:
• Invasive (fictional example: The Matrix) vs. non-invasive (Sword Art Online);
• Brain->computer (“read only”) vs. computer->brain (“write only”) vs. brain<->computer (bidirectional, “read/write”) (most fictional examples are fully bidirectional r/w interfaces);
• Realtime (most fictional examples are realtime) vs. non-realtime.
The major difference between invasive and non-invasive “reading” approaches is resolution. Invasive systems, consisting of arrays of surgically implanted electrodes, are able to narrow the source of a given signal down to a small collection of neurons, in the ideal case individual ones, while non-invasive systems such as EEG and fMRI can narrow it down only to collections larger than about 1,000,000 neurons. This isn’t because electrodes actually touch individual neurons, which are many times smaller than present-day electrodes. Whether with implanted or surface-attached electrodes, sophisticated algorithms taking signals from many electrodes, often run much slower than realtime on recorded data, are needed to achieve maximum resolution.
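A quick sketch of why “reading” resolution can be bought with computation: averaging many recorded trials of the same event cancels uncorrelated noise (roughly as 1/√N), a standard trick in evoked-potential work. The signal and noise model below are invented purely for illustration; the only point is the asymmetry that no analogous post-processing exists on the “writing” side.

```python
# Illustration: trial averaging recovers a signal buried in noise.
import random

random.seed(42)   # fixed seed so the demo is repeatable

def noisy_trial(true_signal, noise_amp):
    """One simulated recording: the true signal plus uniform noise."""
    return [s + random.uniform(-noise_amp, noise_amp) for s in true_signal]

def average_trials(trials):
    """Pointwise mean across trials; uncorrelated noise cancels."""
    n = len(trials)
    return [sum(vals) / n for vals in zip(*trials)]

true = [0.0, 0.0, 1.0, 0.0, 0.0]             # a tiny "evoked response"
one = noisy_trial(true, noise_amp=2.0)        # single trial: buried in noise
many = average_trials([noisy_trial(true, 2.0) for _ in range(500)])

err_one = max(abs(a - b) for a, b in zip(one, true))
err_many = max(abs(a - b) for a, b in zip(many, true))
print(err_many < err_one)                     # noise averages out
```

This is a “reading” trick through and through: it requires recorded output and offline computation, which is exactly what can’t be asked of the brain when feeding it noisy input.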

The difference in resolution between invasive and non-invasive “writing” is even greater, so much so that at present only implanted electrodes can truly be said to write to neurons. Worse, the resolution improvement made by signal-processing algorithms is essentially possible only in the “reading” case – you can in principle improve the signal-to-noise ratio of brain output signals by using better algorithms and faster computers, but you can’t take similar approaches to make the brain do the same with noisy input. The brain’s “hardware and software”, sophisticated as they already are, can’t be “upgraded”.

So, 150 days after my July 2014 post, I still think the only way we’ll be able to create a real SAO-style BCI is invasively. I’m optimistic that improvements in signal processing technology may much improve non-intrusive read-only BCIs, but I don’t think such interfaces will be able to do much more than trigger simulated-world actions the way the button controls on a present-day video game system trigger game actions, such as in a typical fighting game like a SAO PS Vita game from Bandai Namco. Given that our brains interface via fingers and buttons well enough for this kind of game mechanics, I’m skeptical that a BCI would seem to gamers to be an improvement, or even as good, much as we’ve found with motion-sensing interfaces like the Wii Remote and the PS Move.

This doesn’t mean I’m abandoning the dream of a deep dive BCI. I just think such a system will have to be an invasive one, and a nanotech one.

Invasive BCIs are creepily unattractive to most people now, I think, because of the current state of the art of invasive electrodes, which requires a surgical team to cut open your head and implant them. My hope rests on the vision that electrodes can be made much smaller (about 100 nm, 10⁻⁷ m), approaching or even smaller than individual neurons. At such a scale, they could pass through tissue, including bone, without the subject noticing, and, while technically still invasive, would practically be less noticeable than the magnetic and radio-frequency signals the makers of SAO imagined for the fictional NerveGear.

I elaborated on the idea, in the context of its use in medicine rather than for BCIs, and inserted through blood vessels rather than directly through the scalp, in this 2005 post. From it:

Now, imagine that rather than being just a featureless fibre mechanically pushed, pulled, and jiggled with the aid of external radiological imaging, each fibre ends in a “nanobot” with 10 nanometer (10⁻⁸ m) features, including manipulators, propellers, touch sensors, and light emitters and sensors – all the features of a free-swimming machine, but without the ultra-miniature powerplant, brain/memory, communication system, and waste-heat radiator/exchanger a free-swimming design requires – and, rather than being at the whims of Brownian motion, is anchored to the end of what, at that scale, amounts to a massive, near-stationary cylinder. The cylinder provides coarse push/pull movement, but the manipulator and propeller features handle fine guidance. In addition to emitting light for its own sensors, this “nano-head” can send minute blips of high-energy (1e15 – 1e18 Hz) radiation to provide high-resolution position data to external sensors, including an identifying data tag. The fibre supplies power, removes heat, and streams data both ways. On the end opposite its nano-head sits all the information-processing power the macroscopic-scale world can provide. In between, movement of the fibre can be facilitated by specialized clusters of features similar to the head.

The technology for long, thin wires like this has existed in prototype for 10+ years. A proof of concept of electrodes this thin was done in 2005 (see the NSF press release Wiring the Brain at the Nanoscale).

In the fictional imagery of Sword Art Online, what I’m imagining would be a helmet like the NerveGear, but when you turn it on, it doesn’t shoot EM radiation into your brain; it extends many invisibly thin wires.