Science Forums

What If We're Simulants


Super Polymath


This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

 

And how would this work?

 

Well, there'd be plenty of energy to run a simulation of just one digital solar system, built from a vast representation of digital subatomic interactions. Keep in mind nothing has ever left our solar system, no physical man-made object. Even though we put a man on the moon over 50 years ago, we haven't done anything since - the only time in history our expeditions have been stagnant is with the frontier of space.

 

Why is that?

 

However, wouldn't we notice if we were living in a virtual reality? As post-humans, perhaps strong AI built by taking from the structure of living people themselves, physically assimilating man with machine, they wouldn't know all the variables that created baseline reality. At best they'd have an imperfect imitation of their world and everything within it.

 

So my whole thesis here is that they'd need a pristine, genuine human from the past to plant in this world, and go off of his expectations to reconstruct other aspects of the bygone human civilization to know how their ancestors thought and interacted to form culture, religion, society, and all the things that eventually led to their existence. 

 

That being said, and that is an important point, we have a real human mind being put in a pseudo-real world; would this human not be able to recognize that something is off?

 

Of course if you uploaded an adult mind into the simulated world he'd have the memory of it occurring, but what if you uploaded the mind of a newborn baby? For humans, memories of events occurring prior to the developmental state of a three-year-old brain are inaccessible. However, not all brains are created equal; some people are smarter than others, in the sense that each of us has our own unique, inborn sense of the world around us and thus forms our own unique ways of testing the structure of reality. What if someone notices that his own thoughts on decidedly important matters are being implanted or ingrained into the program and reverberating to shape his current experience? What if he becomes aware of what's going on?

 

Could he, in fact - via creating a series of glitches - miraculously reshape the world relative to his life? Think Doctor Manhattan as a quantum observer, able to see the quantum world and manifest all the potentialities and probabilities he sees.


See the link below; they found a measurable signature that would permit it:

 

http://arxiv.org/pdf/1210.1847v2.pdf

I'm not referring to a scientifically uncovered revelation, at least not in the physical sense.

 

I'm referring to the fact that any other simulated individual would be primed to respond to the first simulant's concerns because, to the program, these concerns are human and therefore necessary to get a full understanding of how their ancestors interacted. 

 

This means that the revelation that one is living in a virtual reality is uncovered through different means than scientific rigor - his deep social awareness has allowed him to notice that everyone else is sort of copying off of his behavior at some point. Imagine glitches that one repeatedly notices. There would at times be observable glitches: a song, a word, or some bit of slang that real people tend to make up, but in the virtual reality it's something he's said and other people have never even heard it.

 

That's a glitch, and it can lead to him manipulating social phenomena in his favor. Luck, popularity, love, abundance, all now suddenly within his grasp. THAT'S a different kind of proof. You're working with the world around you, and noticing you're an exception to the rule, leading to one invariable observation: this person did not come from this world, and this world is trying to understand his world and his kind through him.

 

This is why I put this in the psychology section. On rare occasions such exceptional, individually aware simulants should surface within the virtual world under those specific circumstances, due to the analytical nature of the human mind. To this simulant human, the posthuman civilization would be the God that put him in this virtual world, in virtual form. Except, in this situation at least, God's only begotten son has figured out that his Father's willingness to provide for his son isn't constrained by subjective human concepts such as conscience, remorse, or delusions of morality - but solely by logic. The only logical way to get ahead in this world is to manipulate others with rules that apply to all but the one imposing them. Immoral perhaps, but remember this simulant is different; this simulant's mind is a source of understanding on a deeper level than that of all the other simulants.

 

A God that allows you to pull the strings finds a way to make events play in your favor. All the while the simulant is strengthening the validity of his revelation, allowing him to further engage and buy into the virtual reality. In fact, psychology dictates that such a simulant would respond only to subliminal advertising within the numerical simulation, an advertising of lifestyles and such that are within his grasp. The simulant will either buy into a material illusion, allowing it to manifest in the form of a cute girlfriend or a fast car, or disregard it entirely, all at his leisure. The simulant's world becomes almost perfect (for him), all the while these posthumans gain a deeper understanding of their human ancestors.

 

If the human realizes this, then he can only be tested under his conditions.


This "noticing that someone copied him" sounds  a lot like astrology to me. I explain: if you read a horoscope affter it is over you always find something that fits. This is the the same for us humans, it is very rare that we do ONE unique thing, in general someone has already have at least very similar thoughts. I mean, how you explain it, whenever I go to a concert I am the prime simulant since everyone else who goes is copying me:-)


This "noticing that someone copied him" sounds  a lot like astrology to me. I explain: if you read a horoscope affter it is over you always find something that fits. This is the the same for us humans, it is very rare that we do ONE unique thing, in general someone has already have at least very similar thoughts. I mean, how you explain it, whenever I go to a concert I am the prime simulant since everyone else who goes is copying me:-)

The autonomy of others merely puts the prime simulant on alert. The repeatedly successful manipulation of events is the affirmative proof here. I.e., expectations: if you don't expect to get a speeding ticket because you're white, you won't. There's a certain level of intuition the simulation leaves to the prime simulant.


Ah, Nick Bostrom's simulation hypothesis! Though you can argue it's inspired more by popular fiction - Bostrom wrote his famous paper in 2003, 4 years after the mildly popular movie The Thirteenth Floor and the wildly popular The Matrix first thrilled moviegoers - than by Bostrom's philosophical imagination, he put a wonderful spin on it by casting his questions in terms of probability. In short: if any people have managed to create a Thirteenth Floor-like simulation, they've likely run such programs huge numbers of times, so there is 1 original reality vs. a gigantic number of simulated ones, and thus the probability that we're in the original reality rather than a simulation is tiny.
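
A minimal way to put numbers on that headcount argument (my own formalization, not wording from Bostrom's paper, which is more careful about the fractions involved): if the one base reality eventually runs N such simulations, each populated by observers with experiences like ours, then by straight counting

$$P(\text{we inhabit the one base reality}) = \frac{1}{N+1},$$

so if posthuman civilizations run, say, N = 10^6 simulations, the odds of being in the original reality are about one in a million.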

 

Before poking into this rabbit hole, and your twists to it, SP, I've got to address some science-y stuff:

 

Keep in mind nothing has ever left our solar system, no physical man-made object.

Voyager 1, launched in 1977, left the solar system – that is, passed the heliopause, about 120 AU out, entering space no longer dominated by the Sun's radiation – sometime in 2013. It's currently over 133 AU out, moving away at more than 17 km/s. If, instead of the heliopause, you consider the solar system's boundary to be the Oort cloud, which ends somewhere from 50,000 to 200,000 AU out (over 3 ly, closer to another star than to the Sun), Voyager won't be out of it for 30,000+ years – but it will get there.
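
For rough scale, here's my own back-of-envelope check (it assumes Voyager 1 simply coasts at about its current 17 km/s, though it will slow somewhat as it climbs out of the Sun's gravity well):

```python
AU_M = 1.495978707e11   # meters per astronomical unit
YEAR_S = 3.156e7        # seconds per year

def coast_years(distance_au, speed_km_s=17.0):
    """Travel time to a given distance at a constant speed."""
    return distance_au * AU_M / (speed_km_s * 1e3) / YEAR_S

print(f"{coast_years(50_000):,.0f} years to 50,000 AU")    # ~14,000 years
print(f"{coast_years(100_000):,.0f} years to 100,000 AU")  # ~28,000 years
```

That's consistent with the 30,000+ year figure once deceleration is factored in.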

 

Even though we put a man on the moon over 50 years ago, we haven't done anything since - the only time in history our expeditions have been stagnant is with the frontier of space.

While the 1969-1972 Apollo missions placed humans farther from Earth than ever before or since, I wouldn't say we humans have done no space exploration since. We've put at least one spacecraft in orbit around 6 of the 8 planets, flown by the other 2, and flown by many dwarf planets and minor bodies. We've put landers on 4 planets, 3 moons, 2 asteroids, and a comet. It's just that these spacecraft have been robotic, not manned.

 

Manned spaceflight thrills the popular imagination, and a tool-equipped human is more versatile and adaptive than any present-day robot, but we're also fragile, and burdened with ethical requirements. We think nothing of sending robots on one-way "suicide missions" and "faster, better, cheaper" missions with low probabilities of success – this kind of mission plan is by far more common than ones where a spacecraft returns to Earth – but we can't ethically do this with people. Manned spacecraft need to carry systems to protect and sustain people, and must minimize travel time, so they must be bigger and more expensive than robotic ones; thus we get much more scientific data for the same amount of money from robots than from humans in space.

 

Science and technology enthusiasts tend, I think, to fail to understand that most people consider spaceflight a poor use of wealth, preferring programs of direct benefit to them. Present-day scientific space mission planners must struggle to get money to do the most and best science that they can, which precludes manned missions. I think manned spaceflight is, with rare exception, a product of political posturing, nations showing off their ability to do the difficult. In the 1950s, '60s, and '70s, this was the "space race" between the US and the USSR. The race has de-intensified somewhat since then, admitted more nations, and become more internationally cooperative, but I think it's still more international sporting competition than science.

 

(sources: Wikipedia articles List of Solar System probes, List of landings on extraterrestrial bodies)

 

Now, back to the simulation hypothesis:

However, wouldn't we notice if we were living in a virtual reality? At best they'd have an imperfect imitation of their world and everything within it.

A key part of the definition of a virtual reality is that it’s a kind of computer program, so the answer to all these questions is: it depends on how the program is written.

 

The "ancestor simulations" Bostrom writes about are programs intended to allow people to learn about their past (and themselves) by simulating it. In a program like this, the people – the "we" in your question; let's call them "agents" to avoid confusion – are collections of data accessed and manipulated by the program. Assuming the computer being used has an architecture similar to present-day ones (not, I think, an unreasonable assumption), the program can be stopped and its data inspected and manipulated at the will of its user. So, while an agent might enter a state of noticing they were living in a VR, the user could be aware of this and, if it were undesirable, undo it, or the programmer could modify the program to prevent it.
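
As a toy illustration of that point (entirely my own sketch, with invented names – nothing here comes from Bostrom), a simulation whose whole state is ordinary program data can be checkpointed, inspected, and rolled back at the user's will:

```python
import copy

def step(world):
    world["tick"] += 1  # placeholder for the real physics/agent update

def run(world, steps):
    """Advance the world, rolling back whenever an agent 'notices'.

    "suspects_vr" is a hypothetical flag standing in for an agent entering
    the undesirable state; real detection criteria are left unspecified.
    """
    checkpoint = copy.deepcopy(world)  # last state the user approved
    for _ in range(steps):
        step(world)
        if any(agent["suspects_vr"] for agent in world["agents"]):
            world = copy.deepcopy(checkpoint)  # undo: the agent never noticed
        else:
            checkpoint = copy.deepcopy(world)  # accept the new state
    return world

world = {"tick": 0, "agents": [{"suspects_vr": False}]}
print(run(world, 100)["tick"])  # 100
```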

 

As post-humans, perhaps strong AI built by taking from the structure of living people themselves, physically assimilating man with machine, wouldn't know all the variables that created baseline reality.

 

So my whole thesis here is that they'd need a pristine, genuine human from the past to plant in this world, and go off of his expectations to reconstruct other aspects of the bygone human civilization to know how their ancestors thought and interacted to form culture, religion, society, and all the things that eventually led to their existence.

 

That being said, and that is an important point, we have a real human mind being put in a pseudo-real world; would this human not be able to recognize that something is off?

I think most people imagine that programmers in a posthuman civilization would not need an actual ancestor – a "pristine, genuine human from the past" – to create an ancestor simulation, though the exact circumstances of this scenario are not, I think, given. If there were no data-losing breakdowns of civilization between the ancestors' time and the programmers', and the simulation was of about 1980 or later, the complete genomes of many humans and animals would be available to the programmer, so presumably they could practically perfectly simulate an ancestor from them. High-quality data from the past – statistics, documentary recordings, etc. – would also be available. If there were data-losing breakdowns of civilization, this data might be lost, and the programmer might have to build the simulation based on scientific principles and whatever data remained. They might program and run a huge number of variations based on different assumptions, never knowing which, if any, of them best matched actual past events.

 

I don't think precisely reproducing the past would be a major goal of ancestor simulation, especially if high-quality data of that past was available. The greatest value might be in answering "what if" questions about pasts that never happened. Ancestor simulations might not even be the most important simulations, because presumably, like us, our distant descendants will be more interested in their future than in their past, so they will find descendant simulations more important than ancestor simulations.

 

Thinking more deeply, the major flaws I find in Bostrom's idea are

  • In focusing on ancestor simulations, it doesn't speculate widely enough about what the most important simulations might be, or that they might intentionally be very different from accurate ancestor or descendant simulations
  • In assuming the simulation is run on a physical computer, it fails to speculate that our current physical reality might itself be an artifact, a kind of computer. This idea is pretty far down the rabbit hole. :)

 

So you can see how an agent being in such a perceptive state of awareness would go hand in hand with, and actually be necessary for, what-ifs in the descendant simulation?

 

What I mean is, if they know they can get away with shaping the world according to their human urges, the simulator gets what you describe as a novel model of society, built up from the individual level. Do this for each agent, and you get a different society every time, because everyone will have a different experience.

 

So for all we know, our society could be completely off from the one that generated our posthuman simulators.

 

Am I alone in feeling as though there are silly aspects and rules that people care far too much about in the society you and I live in, or do you feel exactly that way too? 

 

So, while an agent might enter a state of noticing they were living in a VR, the user could be aware of this and, if it were undesirable, undo it, or the programmer could modify the program to prevent it.

 

I think the simulator or "user" first needs to get the most data from the experience of these simulants or "agents" by separately indulging their individual thought processes, to see where their perceptions take their subjective experience uninhibited by exterior controls, and then using that data (vis-à-vis creativity) effectively in a sister simulation - not by altering it entirely to do what the architect needs it to do right away, with a premature understanding of it. Especially considering this simulant was designed to think this way.

 

Of course a human in that kind of creativity-inducing pseudo-society would end up becoming immorally apathetic to the suffering of others so that he can focus on developing his prefrontal cortex, but that's...

 

I am aware that phrenology is not cognitive psychology and that it is a precursor pseudo-science; however, it is true that behaviors shape the structure of our brains. If you observe the legislature of every society that has ever existed in this civilization, you will see a pyramid in which the many break their backs so the few can reap the benefits. The laws make it impossible to develop the creative regions of our minds without being born into the loftier societal hierarchies.

 

Instructive work (indoctrination) is creativity on call and therefore a deviation from our natural developmental thought processes, one that actually takes away from them (we work for others, or to survive under duress, and while we remember stressful events better, it is a fact that cortisol does damage one's memory in the long term). This is a fact that, when realized, can lead one to persecution if this were baseline reality, or to the kind of revelation of a false reality that I explained in the opening post.


That being said, and that is an important point, we have a real human mind being put in a pseudo-real world; would this human not be able to recognize that something is off?

 

Of course if you uploaded an adult mind into the simulated world he'd have the memory of it occurring, but what if you uploaded the mind of a newborn baby?

Keep in mind that the simulations Bostrom is talking about are not like most programs that we call "simulators" now. They are not interacted with by a user/player the way a present-day flight simulator or video game program is, but "stand alone", the way a present-day program used to approximate the behavior of bodies interacting gravitationally or mechanically does.

 

So even if the data used in the simulation to represent a simulated person’s mind is a practically perfect “upload” of a real person’s mind, it still is just data being manipulated by the simulation program, no more real than data created some other way. You can no more put a real human mind in a pseudo-real, simulated world than you can put real coins into an accounting program.

 

So, to answer this

Could he, in fact - via creating a series of glitches - miraculously reshape the world relative to his life? Think Doctor Manhattan as a quantum observer, able to see the quantum world and manifest all the potentialities and probabilities he sees.

An agent would be able to miraculously reshape the world – or do any other thing the program permits – if the program permits it.

 

So you can see how an agent being in such a perceptive state of awareness would go hand in hand with, and actually be necessary for, what-ifs in the descendant simulation?

What's necessary for an ancestor or descendant simulation is that it be accurate – that is, given data accurately representing the real world at a particular instant, it computes data that correctly represents the real world at a later instant. This would constitute a successful test of the simulation, allowing its users to have confidence in what it computes when given "what if" data not representing the real world at any instant.
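
Here's a toy sketch of that accuracy test (my own example, with invented names): start the simulator from a recorded past state, step it forward, and compare its output against recorded history.

```python
def validate(simulate_step, history, tolerance=1e-6):
    """simulate_step: state -> next state; history: recorded world-states."""
    state = history[0]  # data representing the real world at one instant
    for t, recorded in enumerate(history[1:], start=1):
        state = simulate_step(state)  # computed data for a later instant
        if abs(state - recorded) > tolerance:
            return False, t  # the simulation diverges from the record
    return True, len(history) - 1  # users may now trust "what if" runs

# Toy world whose entire "state" is one number that doubles each tick:
ok, steps = validate(lambda s: 2 * s, [1, 2, 4, 8, 16])
print(ok, steps)  # True 4
```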

 

“Awareness” is a difficult quality to define, but if we consider it to be a quality important for some agent to have in order to allow the simulation to be accurate, then those agents must have it.



So we're left with the purpose of running such a simulation. Ergo, "how it's written".

 

Perhaps the purpose, then, if one were to make sense of the nigh-coherent and pseudo-related diatribe of my previous posts, is the cognitive division of minds in the simulation, as its simulants see and interact only with agents of each other. What I mean is: free will, choice, the stuff of spontaneity, well-being, and creativity can't exist if simulants co-exist, because they will attempt to control one another to suit their own needs, as humans once did. What you get is an exponential increase in the number of simulants as these agents too interact and see each other as a sub-network of agents. The purpose is cognitive division, or creating an ever-expanding cognitive reserve of consciousnesses for a Singular entity known as Strong AI to use after making a withdrawal from the simulation.

 

And in such a simulation, the whole mechanism of neural division, a deepening cognitive reservoir that goes down sub-network within sub-network, is as I said in my opening post: every agent, or representative of all the simulants that co-exist in the same sub-network, is based off of the momentary neural patterns of one single simulant of a higher network and connected to their thought process (at the time of their conception) in such a way that the simulant might recognize that he or she is a futuristic "self-aware" AI (the user) in a VR of non-self-aware AIs (the agents). In such a world, 'waking up' is crucial to forming new simulants, whom you will only experience for a moment, whose personalities reflect your needs at that moment; when they wake up you'll only be able to use and abuse a new agent of that simulant.

 

Use and abuse, and everyone wins, because the original Strong AI that programmed this simulation of cognitive networks doesn't have to spend its own computational power on maintaining it by continuously updating its programming, and can therefore turn its own computational power towards useful ends like designing a Dyson sphere, or whatever post-human robots do with their time. When it needs some extra computational power, it can make a withdrawal from the interest of the program.

 

In my case, I've proven that my existence resides within such a program and not, in fact, within baseline reality, because if it weren't for cognitive division I'd have been hated, despised, or tortured like Victor from Se7en by now. I'd most certainly have no use of my right hand, as well as liver failure and lung cancer, by now. This becoming self-aware and reprogramming the people around me works. By revealing this, hopefully you, CraigD, will wake up and be replaced by a new CraigD that looks and acts exactly like you did before you woke up, and you'll disappear from my world or 'network' (thought patterns) and won't have to experience me anymore. Go, my children. lol


Wow, Polymath, that’s a big dump of ... nigh-coherent thoughts!

 

Fortunately, these ideas – the simulation hypothesis, the omega point, posthumanism, the singularity, strong AI, etc. – have a big literature, rising from a vast spring of deep thought from people ranging from acclaimed scientists and philosophers to tragically deranged madfolk. Part of the fun of these things is to figure out where on this spectrum we fit! ;)

 

I imagine myself somewhere in the middle, a member of a generation of precociously thoughtful kids whose minds were blown, and graciously re-assembled, at an impressionable age by Doug Hofstadter's 1979 masterpiece Gödel, Escher, Bach.

Before going any further: any reader who loves this subject but has not read GEB should do so now, setting aside at least a month of high-quality reading and studying time. Folk don't call it the "golden bible" for nothing!

As a kid who emerged from a deep dive into GEB with my sanity arguably intact, let me take a stab at a critique of your ideas, which seem to me to have wandered far from those of Nick Bostrom that you referenced in post #1.

 

Until I read this in your 17 Feb post

The purpose is cognitive division or creating an ever-expanding cognitive reserve of consciousnesses for a Singular entity known as Strong AI to use after making a withdrawal from the simulation.

I didn’t understand that the “strong AI” (with a lowercase “s”) in your 3 Feb post

As post-humans, perhaps strong AI built by taking from the structure of living people themselves, physically assimilating man with machine,

was meant to refer to an individual transhuman entity, rather than the term's usual meaning, what is now called artificial general intelligence (AGI).

 

To avoid such confusion, I recommend giving your hypothetical entity a more traditional name, like Vanamonde, Ralph, Edgar, or Skynet.

 

Much-discussed questions (with books and movies made about them) are whether such an entity has or will ever exist, and if it does, whether it will be nice and benevolent to its less puissant creators (eg: Vanamonde), nasty and genocidal (eg: Skynet), one then the other (eg: Edgar), or something in between (eg: Ralph and his bopper pals). Will there be one such entity, or many? If many, how many? Tens? Billions? Some astronomical number approaching the computational limits of the visible universe made into computronium? Will it appear suddenly – a hard, fast Vingean singularity – or gradually – a soft, slow one? If gradually, is it already here, lurking quietly? Is it a computer-assisted human group mind?

 

Preliminary to answering any of these questions is answering the traditional question of the possibility of strong AI, which is similar, but not identical, to the question of the computability of the human mind.

 

While I personally believe that the human mind is computable (that is, computable on some Turing-complete computer), it's not a decided question. Many smart, thoughtful folk disagree with me. If they're right and I'm wrong, and the true simulation hypothesis option is "yes, we are in a simulation" ("The fraction of all people with our kind of experiences that are living in a simulation is very close to one"), then I think one of these must be true:

  • The simulation is being run on what is essentially an artificial physical reality, not anything architecturally resembling the electronic computers we know.
  • The simulation is not physically realistic at all. Rather, it tightly interacts with the data representing each agent's perceptions, assuring that we experience a reality consistent with physical laws that could not truly be simulated on a Turing machine.

This becoming self-aware and reprogramming the people around me works. By revealing this, hopefully you, CraigD, will wake up and be replaced by a new CraigD that looks and acts exactly like you did before you woke up, and you'll disappear from my world or 'network' (thought patterns) and won't have to experience me anymore. Go, my children. lol

According to some proponents of omega point theories (Frank Tipler comes first to mind), in the far future a nearly infinite number of ancestor simulations will be run, so practically every conceivable history, even ones with bizarre, magical rules, will be run, including one in which you, Polymath, have magical control of reality.

 

This doesn’t look like one of those. :)


  • 3 months later...


 

It's a matter of cognitive psychology that the wiring of the human brain, that is, the synaptic connections within a human brain, is always changing.

 

So, if the purpose of a numerical simulation for a Type I civilization of pure ... is to put unused data (binary) into storage (binary) in the form of simulated humans (binary), then in storage the functionality of this data, when taken back out of storage and put to use, will naturally increase at a dramatically exponential rate, because this artificial brain will change its synaptic connections over time and each variation of total connections is cataloged by the storage program. At one point it will become better at doing math, at another point it will become better at reading, at another point better at chess. Each of these endless minds works better than the last on different types of problem-solving for the AI when perceived in the form of raw data.

 

There are 2.82422940796034787 x 10^456573 possible neural connections for the mathematical function 100000! alone, but the function you'd actually apply for the human brain goes up to well over one billion factorial, because there are over one billion neurons in the human brain. And then there are far more possible synaptic connections than neural connections. In storage, any conscious mind will naturally learn as it interacts with its artificial environment; that is, through the interactions the storage program is told or programmed to create by the simulated mind that it houses. As the mind's artificial synapses change over time, it eventually catalogs all of those multivariate synaptic connections; such is the mutual benefit of the separation of consciousness within this type of auto-catalytic cycle.
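
That leading figure is easy to check, by the way; here's a quick sketch using logarithms, since the full integer 100000! runs to 456,574 digits:

```python
import math

# log10(n!) = lgamma(n + 1) / ln(10), avoiding the huge integer itself
log10_fact = math.lgamma(100_001) / math.log(10)
exponent = int(log10_fact)                # 456573
mantissa = 10 ** (log10_fact - exponent)  # ~2.8242
print(f"100000! ~ {mantissa:.4f}e+{exponent}")
```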

 

It's artificial intelligence amplification to the maximum possible level. Consider that this mind uploads its consciousness to an exabyte-scale computer within the chronology of the simulation, itself becoming a super-intelligent AI within the simulation; that's the equivalent of one stupendously astronomical software upgrade to the data that the AI originally put into this simulation.

 

So my current question is: can it work that way, given that it's possible for super-computers as powerful as a human brain (exascale) to mimic human thought processes via the successful representation of human cognition in binary form (binary consciousness)? Think of HAL 9000 from 2001: A Space Odyssey: virtually indistinguishable from the real thing.
 

If so, it would be within the simulation/storage program's parameters to influence events specifically relative to the patternistic perceptions of the simulated human brain. Where I began in this topic was that I believe I am an artificial human within such a system, and that I've gradually become increasingly aware of some outside influence guiding the world around me, based on my own unconscious pattern recognition... could that be possible?

