Science Forums

The Debate On Interpretation Versus Interpolation


NotBrad


So this is one issue that I have come to realize will most certainly cause the greatest headache (metaphorically) in the actual development of the interface itself: the issue of interpolation, such as the skill system present in Sword Art Online, versus the interpretation of real movements.

 

To clarify:

Interpolation:

the insertion of something of a different nature into something else.
"the interpolation of songs into the piece"
  • MATHEMATICS
    the insertion of an intermediate value or term into a series by estimating or calculating it from surrounding known values.
    "yields were estimated using linear interpolation"
     

This means that the game engine has motion assist that serves as a surrogate for real-world "muscle memory" and allows untrained individuals to perform activities they lack the training for in real life.

 

Interpretation:

noun
noun: interpretation
  1. the action of explaining the meaning of something.
    "the interpretation of data"
    synonyms: explanation, elucidation, expounding, exposition, explication, exegesis, clarification
     
     
     

This means that the program merely interprets the nervous signals and carries out the explicit commands sent by the brain, performing only those actions.
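To make the contrast concrete, here is a toy sketch in Python; every name and command string in it (`NERVE_MAP`, `SKILL_LIBRARY`, and so on) is invented purely for illustration, not taken from any real BCI API:

```python
# Interpretation: the rig passes through exactly the muscle commands
# the brain actually produced. (All names here are invented.)
NERVE_MAP = {"n1": "contract_bicep", "n2": "rotate_wrist"}

def interpret(nerve_signals):
    return [NERVE_MAP[s] for s in nerve_signals]

# Interpolation: the engine receives only the coarse intent and fills
# in the detailed motion the player never learned -- the "muscle
# memory" surrogate, i.e. a Sword Skill.
SKILL_LIBRARY = {
    "swing_sword": ["raise_arm", "contract_bicep",
                    "rotate_wrist", "follow_through"],
}

def interpolate(intent):
    return SKILL_LIBRARY[intent]
```

The key difference: interpretation only ever replays commands the player's brain actually issued, while interpolation lets the engine supply motions the player never learned.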

 

Which do you think is better, and what do you think is actually more practical to implement in the grand scheme?


I feel that interpolation would be the easiest and most effective way to do this, for a few reasons:

 

1. The nerve gear would already be loaded down with the task of interpreting the signals from the nervous system and ensuring the quality of the 5 senses.

 

2. If the nerve gear interpreted this, then you would be limiting which other games could be used with the nerve gear. If instead this were based around the game's engine, the same motion could be used for different commands across different games. All the nerve gear would then have to do is adjust the vision of the player to portray what the game engine says is happening.


It would be easier to implement interpretation in a BCI rig, as otherwise you are simulating a brain (or at least parts of it). There are computers/projects that have tried to do this, and all have met with limited success at high cost.

 

It could save time, though, if instead of polling a connection you could simply move on with the predicted action. Therefore, I think interpretation, with interpolation used for no more than a few 'steps' (neurons/neuron groups), is best, as long as you can simulate that accurately faster than a poll would take. After a few steps, the unpredictability becomes quite high.
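The "few steps ahead" idea could be sketched as simple extrapolation. This is a deliberately crude stand-in; real neural prediction would be far more involved, and the growing error per step is exactly the unpredictability mentioned above:

```python
def predict_steps(history, n_steps):
    """Linearly extrapolate the next n_steps samples from the last two
    readings, instead of waiting on a poll for each one. Each step
    compounds the error, so only a few steps are trustworthy."""
    slope = history[-1] - history[-2]
    return [history[-1] + slope * (i + 1) for i in range(n_steps)]
```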

 

Fascinating question. @Panther mentioned something similar to me.

 

--@Kayaba


I feel that interpolation would be the easiest and most effective way to do this, for a few reasons:

 

1. The nerve gear would already be loaded down with the task of interpreting the signals from the nervous system and ensuring the quality of the 5 senses.

 

2. If the nerve gear interpreted this, then you would be limiting which other games could be used with the nerve gear. If instead this were based around the game's engine, the same motion could be used for different commands across different games. All the nerve gear would then have to do is adjust the vision of the player to portray what the game engine says is happening.

I posted just as you did; hopefully the double post will be understood. I can't figure out how to quote while editing...

 

You think it would be easier to simulate what's going on (vs finding out) because (paraphrased text):

1. The BCI rig is already finding out? What?

 

2. If the BCI rig found out vs. simulated what's happening, then games will be limited. Why? The time used to calculate these neurons on a large scale is more than the time used to poll the BCI (likely; this may change), and it is prone to more errors. If we involve the game engine (which wasn't talked about and has no definition here), you can implement a rig-wide command (that isn't done in an engine, by the way). All that is needed after that is for the rig to show what the game says is happening. Isn't that needed no matter what?

 

--@Kayaba

Edited by BrainJackers

Well, you are right, in the long run it may cause some trouble(-ish), but let's discuss it. Here is what I think...

 

After reading your question I tried to learn a bit about muscle memory, and after reading and thinking I came to realize that the key factor deciding the need for support assist would be its location. If it is near the brain, or in the brain, then there would be no need for support assist; but if it is in our body, or a mixed system, then it is extremely crucial.

 

After reading the article at https://en.wikipedia.org/wiki/Muscle_memory I think it is crucial, so we don't have an option. So the first option seems correct to me.

 

 

P.S.

In ALO Leafa was able to transfer (or use) all her kendo skills, which means the system has the ability to read users' muscle memory.

Edited by Akihiko

If you had a human avatar, rigged properly, muscle memory automatically carries over. The movements are stored in the brain which would send the signals out to the muscles, just as a regular movement would work. The only important thing to note about muscle memory here is that it isn't as conscious as unpracticed movements. I do not believe that matters when reading from the motor cortex (the logical place to track movement from).

 

I didn't note the muscle memory statement; I feel it has no effect on the subject. "This means that the game engine has motion assist that serves as a surrogate for real-world 'muscle memory' and allows untrained individuals to perform activities they lack the training for in real life" <- that sounds like a 'Sword Skill' and doesn't require interpolation; it just has to give the body the stimuli that it is moving/has moved.

 

Interpolation could be used to stimulate these movements, just to read them back along with the natural movements. The issue there is that sending data, just to receive it and tell the body the movement happened, adds an extra, unneeded step.

 

--@Kayaba


Yes, I do support overlapping. The debate was whether to interpolate the source of the movement OR to simply interpolate the effects. Interpolating the effects is the better option: while it requires better memory management to produce an overlay, it uses far fewer CPU cycles.
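The CPU-for-memory trade-off can be sketched with a cache standing in for the precomputed effect overlay (the function names and the call counter are invented for illustration):

```python
from functools import lru_cache

CALLS = {"count": 0}  # tracks how often the expensive path actually runs

def simulate_source(movement):
    """Stand-in for re-simulating the movement's neural source,
    the expensive per-frame work we want to avoid."""
    CALLS["count"] += 1
    return "effect_of_" + movement

@lru_cache(maxsize=None)
def interpolated_effect(movement):
    """Compute the effect once, then overlay the cached result on
    every later request: more memory held, far fewer CPU cycles."""
    return simulate_source(movement)
```

Repeated requests for the same movement hit the cache, so the simulation cost is paid exactly once per distinct movement.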

 

--@Kayaba

Edited by BrainJackers

I like “interpolated vs interpreted” as a name for this continuum better than the ones I’ve been using, like “unrealistic vs realistic”. :thumbs_up

 

Which do you think is better, and what do you think is actually more practical to implement in the grand scheme?

I think that depends on what grand scheme you have in mind.

 

If your goal is to make a system that you can use to train in skills usable in actual reality, the interface should be as interpretative as possible. It’s likely that militaries would be major consumers of very realistic VR systems, because they could use them to train soldiers to be very competent in dangerous situations without risking injuring or killing them.

 

If your goal is to make an entertaining game popular with many players, the interface should be very interpolative. The history of video games shows that most players like the feeling of being very good without having to actually be very skilled in the same way their in-game avatars appear to be. As a result, nearly all present day video games allow single or combination button pushes to trigger avatar movements impossible to most people, and employ schemes such as gun lock-on or larger-than-image hit boxes to allow the avatar to be very good at aiming without the player having to be.
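The larger-than-image hit box trick mentioned above can be sketched in a few lines; the padding value is an arbitrary assumption, and real games tune it per weapon and difficulty:

```python
import math

def hit_test(shot, target_center, true_radius, assist_padding=0.5):
    """Register a hit anywhere within true_radius + assist_padding,
    so the avatar 'aims' better than the player actually did.
    assist_padding = 0 gives honest, unassisted hit detection."""
    distance = math.dist(shot, target_center)
    return distance <= true_radius + assist_padding
```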

 

In ALO Leafa was able to transfer (or use) all her kendo skills, which means the system has the ability to read users' muscle memory.

How this could work – allowing a person who is actually good at sword fighting to be good at in-game sword fighting, while also allowing people who are not actually good at sword fighting to be good at it in-game – is interesting to consider.

 

As best I can guess, the system would have to surreptitiously measure the player’s actual ability, then use it to give their in-game character superior or unusual characteristics used in determining fight outcomes – that is, give them a bonus for being skilled ITRW.

 

If actual muscle memory – which is really brain memory about activities we don’t consciously think about – were applied in a very interpretive way, and the game universe had unrealistic physics, such as allowing characters to leap and fly and swords to do impossible things, real-world skill would, I think, put the player at a disadvantage. Real-world muscle-memory skill involves a lot of knowing not to do wrong, silly things, and expecting swords not to behave in impossible ways – nearly the same silly things that work well in the game, and the same impossible ways swords behave in the game.

 

You can get a sense of this by playing games with people who are actually good at what’s depicted in the game, like playing Ace Combat with an actual fighter pilot. To play the game well, you must make very wrong control movements, which a real world fighter pilot has spent hundreds of hours training themselves to unconsciously never do. I had a very amusing experience watching my 10 year old son win dogfight after dogfight playing against an actual active duty fighter pilot. Eventually, the pilot quit playing, worried that getting good at the game would give him bad habits that might get him killed in a real airplane.


True, that would mean we also have to add a write system to the system-assist program which keeps learning any new moves, or saves the moves that have been repeated a number of times (I guess the OSS function in ALO was the same system). But general movement would need system assist, like walking, typing, fighting, etc., which means a basic skill module could be prepared to serve as start-up assist or basic assist, and over time the usage of a particular skill would strengthen that skill. The motor cortex has the ability to control all the muscles without muscle memory, given great concentration. From this I could deduce that there have to be two copies of the same action: one copy for identification and another copy for sending the command (typically one in the brain and the other in a different motor neuron). Which would mean you would have all your skills, but you would need to concentrate to perform the same motion.


~snip~

If actual muscle memory – which is really brain memory about activities we don’t consciously think about – were applied in a very interpretive way, and the game universe had unrealistic physics, such as allowing characters to leap and fly and swords to do impossible things, real-world skill would, I think, put the player at a disadvantage. Real-world muscle-memory skill involves a lot of knowing not to do wrong, silly things, and expecting swords not to behave in impossible ways – nearly the same silly things that work well in the game, and the same impossible ways swords behave in the game.

 

~snip~

To reply to this bit: I don't think this necessarily works that way. If you look at how humans with specific repetitive training react to certain stimuli, you'll notice that much of it is performed subconsciously, and that's where the issue of interpolation comes in. Transitioning from your real-world body to your avatar in-game actually presents a new issue: the differences between real-world nerves and the simulated ones in a given game. I would hypothesize there would be very little translation of reflexes between the avatar and the real body, due to the severance of the simulated neural pathways upon disconnect from the game. To be more clear: unless the simulated neural surrogate within the game engine is a match or near-match to the user, they might have issues transferring any of their skills from the real world into the game at all. If your brain has adjusted your whole life to initiate certain nerves on certain timings to coordinate your balance and footwork for the hugely important task of walking, what happens when suddenly many of those nerves are no longer calibrated quite correctly? Would extended periods of full-dive actually force you to unlearn the muscle memory for your real body? And in a more solution-oriented context, how would you go about creating surrogate subroutines within the engine that can simulate those very complicated sequences, coordinating all of the muscles in a simulated leg so that it both walks and feels like the user's own flesh and blood?

 

That is what I mean by interpolated. Interpretation would be to intercept the commands sent by the brain to the muscles and translate them into the actions of an avatar within a simulation. Interpolation, on the other hand, intercepts the command to move before the brain has broken it down into its finite process, and then breaks that command down on the engine side so as to translate it onto a standard model of human kinesiology. This concept, though never put forward explicitly within any show revolving around VR or AR, must be present in the SAO universe, based on the fact that sword skills are activated simply by will, heavily implying that these are unlockable sets of muscle memory stored within the game engine. It is also pointed out several times that the characters' senses within the game world, and the feeling of moving itself, were quite different from the real world, though, again, this is never fully explained in-lore to my knowledge, and may just be me reading more into the content than is actually there.

 

That was a whole boatload of rambling but there it is.


@weamy here

 

I feel like this thread has got way too overcomplicated. Though I may be missing the point. I'm still not entirely sure what you guys mean.

 

@NotBrad , there should be no need for 'simulated nerves' in the game or the engine. Your avatar is just a series of images being shown in front of your eyes. Your avatar is controlled by input from the user.

 

When you think of moving, a BCI will detect a signal from the motor cortex part of the brain. The BCI converts this signal into binary. The computer then inputs this into the game and you see yourself moving.

Muscle Memory controls your motor cortex. Even if the signal does originate from the cerebellum, a signal will have to be produced in the motor cortex if you want to move or even think of moving. You don't need a computer system to simulate walking. The cerebellum does that for you.

 

--@weamy


Yes, you are right, we may not need a computer system to simulate walking, but the discussion of the existence of such a system is based upon the idea that the motion and action we perform is a brain-based function, not a brain-and-body-based one.
Or, to state it clearly:
IF
1] The motor cortex is the only one providing instructions.
2] The commands released by the motor cortex or cerebellum are full commands and not shortened forms of commands.
3] The action performed does not disturb thought or require any concentration.

 

If the above conditions are satisfied, then we may not require any "system assist". (System assist is a system that assists the user in performing their motions seamlessly in the virtual world.)

Edited by Akihiko

@weamy here

 

I feel like this thread has got way too overcomplicated. Though I may be missing the point. I'm still not entirely sure what you guys mean.

 

@NotBrad , there should be no need for 'simulated nerves' in the game or the engine. Your avatar is just a series of images being shown in front of your eyes. Your avatar is controlled by input from the user.

 

When you think of moving, a BCI will detect a signal from the motor cortex part of the brain. The BCI converts this signal into binary. The computer then inputs this into the game and you see yourself moving.

Muscle Memory controls your motor cortex. Even if the signal does originate from the cerebellum, a signal will have to be produced in the motor cortex if you want to move or even think of moving. You don't need a computer system to simulate walking. The cerebellum does that for you.

 

--@weamy

The discussion is centered around the means by which the input is provided to the game engine and how it interacts with the user.


BTW guys, read my P.S. first.

 

@akihiko

It is a brain based function but in RL, it affects the body. I don't understand the rest of your message. What do you mean by 'system assist'? What does the system assist you with?

 

 

 

Here's what happens in the brain when you decide to move your arm (in my understanding):

 

You decide to move your arm (this occurs in the frontal lobe) → a signal is produced in the cerebellum to get the 'instruction manual' on arm moving → the cerebellum sends a signal to the motor cortex telling it to activate nerves 'A', 'B' and 'C' → a signal is produced in those nerves and carried to the arm, making it move.

 

From my understanding, the same thing happens when you think about moving your arm, just that your arm doesn't move.

 

A BCI rig would read signals in the motor cortex and see that nerves A, B, and C have been activated. This is then translated by a computer so that the system knows it is the command to move your arm. The computer sends information to the game avatar, making it move the arm.
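That lookup step might be sketched like this; the nerve labels and the pattern table are invented for illustration (a real rig would learn the patterns, not hard-code them):

```python
# Invented table: which set of active motor-cortex 'nerves'
# corresponds to which avatar command.
NERVE_PATTERNS = {
    frozenset({"A", "B", "C"}): "move_arm",
    frozenset({"A", "D"}): "move_leg",
}

def decode(active_nerves):
    """Translate the set of activated nerves into an avatar command.
    Only the pattern matters, not the order the rig detected them in."""
    return NERVE_PATTERNS.get(frozenset(active_nerves), "unknown")
```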

 

 

 

I don't see any other way to send input to the system.

And when you say how it interacts with the user, can you give an example? I'm unfortunately still having trouble understanding your point.

 

--@weamy

 

P.S. I've been re-reading the posts a few times now. Are you guys talking about how movement in-game will be different to in real life? And therefore your normal brain signals would be wrong? As in normal brain signals would produce the wrong movement in-game. So when you guys say system assist, you mean the system will take the wrong brain signal and make the correct in-game movement? In that case, can't you just calibrate the game so that the correct brain signals make the correct movement in game?
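The calibration idea in that P.S. could be sketched as a per-player lookup table built from a calibration session; the signal names and the session format here are invented:

```python
def calibrate(samples):
    """Build a per-player table from a calibration session: each sample
    pairs the signal the rig actually read with the movement the player
    was asked to make at that moment."""
    return {observed: intended for observed, intended in samples}

def in_game_move(table, observed_signal):
    # Fall back to the raw signal if this player never calibrated it.
    return table.get(observed_signal, observed_signal)
```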


BTW guys, read my P.S. first.

 

@akihiko

It is a brain based function but in RL, it affects the body. I don't understand the rest of your message. What do you mean by 'system assist'? What does the system assist you with?

 

 

 

Here's what happens in the brain when you decide to move your arm (in my understanding):

 

You decide to move your arm (this occurs in the frontal lobe) → a signal is produced in the cerebellum to get the 'instruction manual' on arm moving → the cerebellum sends a signal to the motor cortex telling it to activate nerves 'A', 'B' and 'C' → a signal is produced in those nerves and carried to the arm, making it move.

 

From my understanding, the same thing happens when you think about moving your arm, just that your arm doesn't move.

 

A BCI rig would read signals in the motor cortex and see that nerves A, B, and C have been activated. This is then translated by a computer so that the system knows it is the command to move your arm. The computer sends information to the game avatar, making it move the arm.

 

 

 

I don't see any other way to send input to the system.

And when you say how it interacts with the user, can you give an example? I'm unfortunately still having trouble understanding your point.

 

--@weamy

 

P.S. I've been re-reading the posts a few times now. Are you guys talking about how movement in-game will be different to in real life? And therefore your normal brain signals would be wrong? As in normal brain signals would produce the wrong movement in-game. So when you guys say system assist, you mean the system will take the wrong brain signal and make the correct in-game movement? In that case, can't you just calibrate the game so that the correct brain signals make the correct movement in game?

All right, I'll explain what is being discussed,

The current discussion centers around the theoretical issues of exclusively BCI-based tech, and how the different implementations would affect the variety of possible experiences and the fidelity of said experiences. More precisely, the human body is not based on a fixed set of blueprints; it is grown organically from what amounts, biologically, to two cells with completely different designs that are combined to create a new design. That being said, there are a plethora of unknowns in this sphere of discussion. Firstly, there are billions of nerves with billions of branching points throughout the body, and each and every one of these nerves may or may not be crucial to address within an FDVR environment to guarantee immersion or to prevent such occurrences as motion sickness and temporary physical impairment upon exit. In addition, due to the lack of particular investigation into the field, and a lack of tools to do so, we still don't know for sure what nervous functionality is exactly the same from one person to another, which of course comes with the territory of hypothetical technology. But what we can do is discuss the potential solutions and hypothetical practices relevant to the issues we already know the technology will certainly face. That's what this is.

As for the interpretation v interpolation discussion,

The concept is quite simple while also being quite complicated, but I'll try to simplify it as much as possible. Basically, interpretation is the interception of all relevant brain signals, taking the actual commands for individual nerves as the brain has produced them and translating them into communications that can be understood by the game engine. Interpolation, on the other hand, intercepts the command early on, before it is broken down into individual commands, and instead simulates another body that is controlled by your will to move rather than by the actual commands your brain would use to control your individual muscles.
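A toy sketch of where each approach intercepts the command; both decomposition tables are invented, and a real system would derive them rather than hard-code them:

```python
# Invented decomposition tables for a single intent.
BRAIN_BREAKDOWN = {"step_forward": ["quad_contract", "calf_push"]}
ENGINE_MODEL = {"step_forward": ["avatar_quad_contract", "avatar_calf_push"]}

def interpretation(intent):
    """Intercept AFTER the brain has broken the intent into per-nerve
    commands, translating each real command for the avatar."""
    return ["avatar_" + cmd for cmd in BRAIN_BREAKDOWN[intent]]

def interpolation(intent):
    """Intercept BEFORE decomposition, letting the engine expand the
    intent against its own standard kinesiology model."""
    return ENGINE_MODEL[intent]
```

For a perfectly calibrated player the two paths converge on the same avatar motion; they diverge exactly when the player's real nerve commands and the engine's standard model disagree.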

So here are the benefits and drawbacks of said schools of thought:

Interpolation

Benefits

- Allows for systems such as aim assist and universal skills

- Allows for EXP based RPGs where you unlock skills

- Reduces the necessary temporal resolution needed

- Allows players to potentially play as non-anthropomorphic characters

- Would allow neurologically disabled players to join (handicapped due to brain damage)

Drawbacks

- Could be disorienting at first

- Massive software overheads

- Would require a much deeper understanding of the human brain

- Could impede your motor function in real life

Neutral (not related to discussion or unsure)

- Skills gained in game would not transfer to real life

- Could not be used for Augmented Reality experiences

 

Interpretation

Benefits

- Muscle memory is transferred

- Shallower learning curve

- Reduces the necessary temporal resolution needed

- Allows players to potentially play in ways that are not constrained to a body

- Would allow physically disabled players to join (handicapped due to bodily injury)

Drawbacks

- Massive temporal resolution would be needed

- Would require much more advanced hardware

- Could very easily cause disorientation and motion sickness

Neutral (not related to discussion or unsure)

- Skills gained in game would transfer to real life

- Could be used for Augmented Reality experiences

 

Of course these could be debated, but this is the argument I was having in my head when I posed the question to the community. It is quite the metaphorical rabbit hole for discussion relating to the concept of Full-Dive Virtual Reality.


Sorry if my post was unclear. What I mean by system assist is a system that helps us out when we work on autopilot. E.g., while we are driving we never actually think about the road we are on, or how to steer left or right, or whether our balance is off; it just works on its own. Now think about it: if we had to concentrate on how to drive every single time, would we really be able to enjoy the driving, or do something else while driving?

 

To be clear, system assist is a system that provides you support, reading your half-complete commands and completing them the way your real body would. Because the body does not work on simple notation but through a complex process which, as NotBrad said, we have yet to understand.

 

And yes, I am worried that the movements made in-game may be different from those in the real world.

 

P.S.

Do you know of, or have, any topic based on writing data into the brain?

Edited by Akihiko
