That is an interesting concept. I think it would need to be calibrated to each person (different muscle sizes), and you would lose energy a lot faster (though with the resistance, of course, you're supposedly using the same energy).
Due to it being hard to set up, both per use and at the first-use calibration (with possible later calibrations if your muscles grow), and the issue with being bound, I don't think it's usable for the masses.
I don’t imagine the setup I’m describing – the LTFBNMEMG, for lack of a better name – would require much, or very complicated, or possibly even any, calibration, because, like nearly every video game control system to date, it wouldn’t adapt to its user, but require its user to adapt to it – something we, with our amazingly flexible nervous systems, are adept at.
This facility for immersing ourselves in relationships with environments different from the one our bodies have with the ordinary world is what allows us to do things like walk on stilts, ride bikes, drive cars and pilot aircraft, as well as experience present-day videogames with interfaces consisting of just a few finger-and-thumb-controlled buttons and joysticks, eyes, and ears. Given even the clunkiest interfaces, our brains and extended nervous systems adapt to find a way to allow us to control real physical vehicles and virtual avatars, experiencing them as if they were our own bodies.
So, starting with the assumption that present-day video game controllers are actually very good at what they do, and provide an imperfect but enjoyable level of virtual reality immersion – which, for folk focused on the challenge of really building the fictional NerveGear brain-computer interface, may seem counterintuitive – with the LTFBNMEMG I’m looking merely to make a better controller, counting on the users’ nervous systems to do the difficult interfacing work, as they do with present-day controllers.
I’d better expand on what I mean by “better”, starting with some discussion of the essential limitation of a standard (eg: PlayStation, Xbox, and various lookalikes for PCs) video game controller. This essential, and obvious, limitation is that it takes input only from our fingers and thumbs. As such, standard controllers are excellent for allowing us to control an avatar that is essentially a tank – a vehicle that can move along a surface in any direction while pointing and firing a gun in a direction independent of its motion – giving rise to the popularity of the first person shooter game genre, arguably the only genre (“shooter” optional) suitable for deeply immersive VR. Even when the “vehicle” you’re driving in a 1PS is rendered as a human body, it’s still effectively a tank, facing in a single direction while moving in a single direction, with a few un-tank-like additions such as the ability to duck, jump, and climb.
Standard controllers fail at allowing you to control the movement of individual human body parts. Actions that are literally child’s play in the real world, like hand-assembling Lego blocks, crawling, grabbing, punching, and kicking, can’t be done realistically in a virtual one using a standard controller.
This is because standard controllers only get input from our fingers and thumbs, not our arms and legs. The LTFBNMEMG seeks to remedy this by getting input from these body parts essentially ignored by present-day controllers.
... (different muscle sizes) and you would lose energy a lot faster (with the resistance, of course, you're supposedly using the same energy).
Now, you obviously wouldn't be feeling stuff, but you might get imprints where you're held down which wouldn't be pleasant.
I’m counting on our nervous systems adapting so that we don’t tire or injure ourselves straining against the restraint, unconsciously adapting to use only as much muscle force as required to move the game avatar as desired, just as we quickly learn that pushing buttons very hard on a standard controller doesn’t make the in-game action they control more forceful.
Though I described the LTFBNMEMG’s restraint as like those used “to restrain an agitated patient or dangerous person”, I imagine them as being lighter and not difficult to escape from, so the user would have to learn not to get over-excited and use too much force.
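One hypothetical way to encourage that learned restraint in software (my own illustrative design choice, not something specified above) is to saturate the EMG-derived “effort” at a low ceiling, so that straining hard against the restraints moves the avatar no more forcefully than a relaxed, deliberate contraction does – the EMG analog of a button that doesn’t care how hard you mash it. A toy sketch, with made-up threshold and ceiling values:

```python
# Hypothetical mapping from a (rectified, smoothed) EMG amplitude to an
# in-game effort value in [0, 1]. Clamping at a low ceiling means extra
# straining against the restraints buys nothing, which should teach the
# user to relax, much as mashing a controller button harder doesn't make
# the in-game punch stronger.

def effort(emg_amplitude, threshold=0.05, ceiling=0.30):
    """Below `threshold` counts as no intent; at `ceiling` and above,
    the effort saturates at 1.0, so extra force is simply ignored.
    The numeric values here are illustrative, not measured."""
    if emg_amplitude <= threshold:
        return 0.0
    return min(1.0, (emg_amplitude - threshold) / (ceiling - threshold))

print(effort(0.02))  # resting noise -> 0.0
print(effort(0.30))  # moderate contraction -> 1.0
print(effort(0.90))  # straining hard -> still 1.0
```

The dead zone below the threshold also matters: it keeps resting muscle noise from twitching the avatar while the user is idle.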
My question is "Could you turn around/look behind you?"
Since I’m describing an imagined device, I confidently answer “yes”.
Seriously, the “can you turn around” challenge for a LTFBNMEMG system has 2 main parts: first an EMG problem, then a kinesiology one.
Let’s assume the “turning around” action involves a situation where the game avatar is standing or walking.
First the system must detect, via EMG signals from the user’s leg and pelvic muscles, their intention to lift a foot and set it down in a different place and orientation. The voltages from the various EMG electrodes on the strapped-down LTFBNMEMG user will be similar to those from electrodes attached to a person actually performing the step-and-turn action.
Next, these voltages need to drive the analogs of the muscles whose activity they indicate in a kinesiology model, and this model must compute the effect of that activity on the entire body. If the system works right, the modeled avatar body will perform the desired step-and-turn action, and the user’s VR display will show them an image from a direction rotated by the angle computed by the modeling program.
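A toy sketch of that pipeline might look like the following. Everything here is a loud simplification: the function names are hypothetical, and the “kinesiology model” is reduced to a single yaw angle driven by the difference between left- and right-side activation, where a real model would simulate the whole skeleton.

```python
# Sketch: raw EMG -> activation envelope -> toy "kinesiology model" -> yaw.
# Real EMG processing and whole-body musculoskeletal modeling are far more
# involved; this just shows the shape of the data flow.

def envelope(samples, window=5):
    """Rectify raw EMG samples and smooth with a moving average,
    giving a rough measure of muscle activation."""
    rectified = [abs(s) for s in samples]
    out = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def toy_kinesiology_step(left_act, right_act, yaw, dt=0.02, gain=90.0):
    """Stand-in for a real kinesiology model: treat the difference between
    left- and right-side hip-rotator activation as a yaw rate (deg/s) and
    integrate it into the avatar's facing direction."""
    return yaw + gain * (right_act - left_act) * dt

# Simulated raw EMG: right side much more active than left -> turn right.
left_emg = [0.1, -0.1, 0.12, -0.09, 0.11]
right_emg = [0.8, -0.7, 0.75, -0.8, 0.78]

yaw = 0.0
for l, r in zip(envelope(left_emg), envelope(right_emg)):
    yaw = toy_kinesiology_step(l, r, yaw)
print(f"avatar yaw after burst: {yaw:.1f} degrees")
```

The computed yaw is then exactly what the VR display needs: the angle by which to rotate the rendered viewpoint.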
Note how this differs from the way you turn and look behind you in a present-day video game.
In most present-day games, you turn “like a tank”, either by moving the right joystick of a standard controller, or by pointing a pointing device (eg: a PlayStation Move controller) at the edges of the screen. You are essentially using your hands to move a control that functions like the turret-aiming mechanism of a tank.
In a LTFBNMEMG system, you are using your leg muscles to signal a kinesiology model to move your virtual body, a more natural scheme.

Earlier, I noted that a key limitation
of a standard controller is that it takes input only from your fingers and thumbs (or in the case of the Motion controller, your entire hand). The key advantage of a LTFBNMEMG controller is that it takes inputs from as many of your body parts as it attaches electrodes to.
The muscle groups in our hands are really good ones – I’d say the best – but they’re naturally limited to controlling the motion of our fingers and thumbs. The LTFBNMEMG seeks to bring the other muscle groups into play.
This is an interesting idea. I would think you don't even need EEG points; you could just allow a small range of movement against transitionally resistive sensors (though now that I think about it, EEG might actually be easier lol).
That EMG would be easier – the “Low Tech” part of the title – than pressure sensing was my thought, also. EMG electrodes and systems are old, proven, cheap technology, while pressure sensors are less so. Also, the rigid mechanical setup needed to sense the action of the various flexor, extensor, and torsional muscles in our limbs with pressure sensors would have to be elaborate and adjusted to precisely fit the user’s body, whereas EMG electrodes are simple stick-on affairs.
At the risk of distracting in this already long thread, the stick-on nature of electrodes brings up a practical issue. The best EMG electrodes are “wet”, meaning they have a bit of conductive gel and an adhesive patch edge used to stick them securely to your skin. Commercial consumer electrode-using products have used either dry electrodes (eg: the Mattel MindFlex) or felt-covered ones you wet with saline solution before use (eg: the Emotiv EPOC). While the EPOC’s saline electrodes improve much on dry ones, they’re still not as good as gel-and-adhesive electrodes (see this BioMed Central article). So while it’s tempting to make a LTFBNMEMG using dry or saline electrodes attached to the inside of a stretchy garment – which you could quickly take on and off and reuse for a long time – this would give up a lot of sensitivity and accuracy.
I think trying to move your restrained body but seeing it move in the VR system would be weird. Perhaps along the lines of the so-called "body transfer illusion".
I agree – I think it would, at first, be deeply weird, but not weirder than how it first feels to move and aim your point of view in a 1PS while sitting still and controlling everything with your thumbs.
My biggest worry with the LTFBNMEMG is that some of the arm-related muscle groups come into play naturally only when the arms are in particular positions – for example, overhead vs forward vs to the sides – so the system might not be able to model some body positions as well as others, or at all.
As I said above, I’m very much counting on the nervous system’s ability to adapt to fix the limitations of the system. I don’t see any way to test whether this is right other than to actually build and test the system.
"My question is "Could you turn around/look behind you?"" (idk how to quote single lines)
I don't see why not considering that it's just a compound muscle movement (turning the hips/neck and moving the legs) that could be captured by the electrodes.
I was thinking, with an eye to the short-term practical, that the head wouldn’t be restrained, and the system would use a COTS head-tracking system like a HTC Vive. Since inaccuracy in eye-direction tracking is the leading cause of VR sickness, we’d want the most responsive and accurate system possible for it. It’s hard to get much more accurate than allowing the head to move and physically tracking it.