
The Impossibility Of Classical Artificial Intelligence



If entanglement implies free will, and entanglement is necessary for quantum computation, then quantum computation is necessary for artificial intelligence.

 

At this time, is it generally accepted among quantum computer scientists that entanglement is a necessary property of a universal quantum computer?

Can we accept the conclusions of Jonathan Barrett and Nicolas Gisin?

 

If we can accept the above proposition, we may have an answer as to why AI has yet to be achieved. It would be impossible to implement a tractable AI on a classical Turing Machine.

 

How much free will is needed to demonstrate nonlocality?

Quantum Entanglement Can be a Measure of Free Will

Separability of Very Noisy Mixed States and Implications for NMR Quantum Computing

Quantum computing: Entanglement may not be necessary


If we wanted artificial intelligence, the best way would be to use the brain as the basis for our machines. If you look at a neuron, the resting neuron stores potential energy in the potential across its membrane. The outside is positive and the inside is negative. When the synapse fires, this potential energy is released as the membrane potential drops. The cell then works to reset the membrane potential back to higher energy. Since memory is connected to the firing of the synapse, it is also connected to an energy release. This energy release goes into entropy, but within the constraints of the memory structure, which is being reset by the neuron. The result is perturbations of this high-energy structural memory, which we see as an intelligent modification of existing memory.
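
To make the charge/fire/reset cycle concrete, here is a minimal leaky integrate-and-fire sketch in Python. The model choice and every constant in it are illustrative assumptions on my part, not measurements:

[code]
# Minimal leaky integrate-and-fire sketch of the charge/fire/reset cycle.
# All constants are illustrative assumptions, not measured values.
V_REST, V_THRESHOLD = -70.0, -55.0   # millivolts
LEAK, DT, DRIVE = 0.1, 1.0, 2.0      # leak rate per ms, time step, input

def step(v):
    """Advance the membrane potential one time step; return (v, fired)."""
    v += DT * (LEAK * (V_REST - v) + DRIVE)   # charge toward threshold
    if v >= V_THRESHOLD:                      # stored energy is released...
        return V_REST, True                   # ...and the cell resets to rest
    return v, False

v = V_REST
for t in range(60):
    v, fired = step(v)
    if fired:
        print(f"spike at t = {t} ms")         # fires periodically under drive
[/code]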

 

To get computers to become intelligent, the easiest way (in theory but not in practice) would be a new generation of memory. Resting memory would be stored at higher energy, instead of at stable lower energy. This type of intelligent computer would be not so much a function of clever programming as a function of energy/entropy changes acting on the memory. The laws of physics want to lower energy and increase entropy, such that the memory, if renewable, will have no choice but to be in a constant state of intelligent entropy.

 

The brain only needs classical energy/entropy due to clever design.


I think a sidebar is necessary here to acquaint readers unfamiliar with quantum computing – which I suspect includes most science-literate readers – with a simple description of it, by example:

 

Suppose we have a very large (say about 1,000,000,000 bit, or 300,000,000 decimal digit) number [math]B[/math], which we wish to factor into its prime factorization.

 

A single classical (not quantum) computer program must perform a sequence of divisions by prime numbers. Upon finding a prime number [math]P[/math] that evenly (without remainder) divides [math]B[/math], the program outputs [math]P[/math], which is one of the desired prime factors, replaces [math]B[/math] with [math]B \div P[/math], and continues. Although some cunning techniques can be written into the program to improve its efficiency, ultimately, it must exhaustively try every [math]P[/math] up to 500,000,000 bits long, a task that, on any conceivable computer hardware, even a massively parallel network of computers consuming a substantial fraction of the matter and energy of a galaxy, may take longer than the expected lifetime of the stelliferous universe to complete.
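
To make the classical procedure concrete, here is a minimal trial-division sketch in Python. It is a toy for small numbers only, but it shows exactly the exhaustive divide-and-replace loop described above:

[code]
def trial_division(b):
    """Return the prime factorization of b by exhaustive trial division."""
    factors = []
    p = 2
    while p * p <= b:            # candidates only needed up to sqrt(b)
        while b % p == 0:        # p divides evenly: it is a prime factor
            factors.append(p)
            b //= p              # replace b with b / p and continue
        p += 1
    if b > 1:                    # whatever remains is itself prime
        factors.append(b)
    return factors

print(trial_division(8051))      # [83, 97]
[/code]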

 

A single quantum computer program, however, may simply randomly guess a single [math]P[/math], outputting it if it evenly divides [math]B[/math], outputting nothing if it does not. “Outputting” in quantum computing has a special meaning – it means that the isolated, coherent quantum computer “breaks out” of its isolation to interact with the outside world, suffering decoherence to become a classical computer that just happens to have, in this example, a successful [math]P[/math].

 

Because it is in a state of coherence, that is, a superposition of every possible state of its bits, at least one of those states will be the quantum computer randomly guessing a successful [math]P[/math]. Thus, in the time it takes to initialize, perform a single division, and trigger its output, the quantum computer will output a [math]P[/math] that might take a classical computer practically forever to calculate.
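
The guess-and-check “branch” itself is trivial to write down classically; what no classical machine can do is run every branch at once. A sketch of a single branch, in my own toy framing:

[code]
import random

def one_branch(b):
    """One 'branch' of the guess: output P only if it evenly divides b."""
    p = random.randrange(2, b)    # randomly guess a single candidate
    if b % p == 0:
        return p                  # success: 'break out' with a factor
    return None                   # failure: output nothing

# Classically this branch must be repeated over and over; the coherent
# quantum computer described above is imagined running all of them at once.
b = 8051
guess = None
while guess is None:
    guess = one_branch(b)
print(guess)
[/code]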

This simplified (unrealistically so – all real-world factorizing computer programs, such as Shor's algorithm, are designed to minimize the size of the numbers that the quantum computer must handle) description can be illustrated with a thought experiment of my (not necessarily unique) invention, that doesn’t mention computers:

Schrödinger's Shakespeare-Writing Monkey

In a container isolated from all interaction with the outside (the one from the Schrödinger's Cat thought experiment will do, provided it has enough room), an ordinary, untrained monkey, a typewriter, and a single sheet of paper are placed. To the inside of the container door, a special lock apparatus is attached that will open the door only when a sheet of paper bearing a typographically perfect copy of Shakespeare’s Sonnet 18 (“Shall I compare thee to a summer’s day?...”) is inserted into a slot.

 

Within moments (the shortest time for a monkey to load a sheet of paper into a typewriter, bang out about 521 characters, remove the sheet and insert it into the door lock’s slot), the door opens, with a perfect copy of Sonnet 18 in its slot.

This thought experiment, though impractical because of the difficulty in actually creating a container that completely isolates its interior from all interaction with its exterior, is not a metaphor or a paradox, but a real physical prediction.

 

At this time, is it generally accepted among quantum computer scientists that entanglement is a necessary property of a universal quantum computer?

I don’t believe so. At present, quantum computing is a very immature discipline, with little well-established consensus. Critically, I find no strong consensus as to whether what I see as its current main goal – performing a nearly infinite number of arithmetic calculations for which the correct answer is easy to confirm but difficult to determine, that is, computing an NP problem – is physically possible or not. It is, in a grand computer science tradition, very much a “try it and see” enterprise.
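
The “easy to confirm but difficult to determine” asymmetry can be seen in two lines of code. The numbers below are from Cole’s famous 1903 factorization of the Mersenne number [math]2^{67}-1[/math] (the factorization is historical fact; the code framing is mine):

[code]
b = 2**67 - 1                # the Mersenne number Cole factored in 1903
claimed = 193_707_721        # Cole's smaller prime factor

# Confirming is a single division -- easy even for enormous numbers:
print(b % claimed == 0)      # True

# Determining 'claimed' in the first place is the search that blows up
# at the 1,000,000,000-bit sizes discussed above.
[/code]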

 

Moreover, as Experimental Quantum Computing without Entanglement (article and link to abstract only – I couldn’t find a free copy of the paper) and my simple example above suggest, the advantage of a quantum computer over a classical one, if QCs are actually physically realizable, doesn’t require that the QC utilize entanglement. Currently, all designs of which I’m aware for QCs rely on entanglement to allow their qubits to interact with any practically achievable switching (coherent) and outputting (decoherent) machinery, by having single quantum states shared by large ensembles of particles, but in principle, a quantum computer using single elementary particles could exist.

 

If entanglement implies free will, and entanglement is necessary for quantum computation, then quantum computation is necessary for artificial intelligence.

 

Can we accept the conclusions of Jonathan Barrett and Nicolas Gisin?

Well, before we can accept their conclusions, we’ve got to understand them. How much free will is needed to demonstrate nonlocality?, though only 4 pages long, is 4 dense pages packed full of references, knit together by that helpful cryptological pair, Bob and Alice. :) Some careful reading and digesting are needed, and time in which to do it.

 

If we can accept the above proposition, we may have an answer as to why AI has yet to be achieved. It would be impossible to implement a tractable AI on a classical Turing Machine.

We’d also have to accept some other propositions of uncertain truth, among them that intelligence requires free will, and that free will is equivalent to not depending, in the language of HMFWINTDN, “on the hypothetical local variables”.

 

We need not consider these difficult questions, however, if we accept a conclusion that is generally accepted by quantum computer scientists (as summarized in the Wikipedia article Quantum computer):

Quantum computers don't allow the computations of functions that are not theoretically computable by classical computers, i.e. they do not alter the Church–Turing thesis. The gain is only in efficiency.

Thus, if AI is impossible on a classical computer (a Turing machine), it is also impossible on a quantum computer.

 

The brain only needs classical energy/entropy due to clever design.

While I personally believe that human-like intelligence does not require non-classical effects, I think it’s important to note that this is not a universal consensus among consciousness theorists. For example, see New Mysterianism and quantum consciousness.

 

PS: To gain the prerequisite understanding needed for Quantum computing: Entanglement may not be necessary (2008), I plan to read explanations of why it has been considered necessary, beginning with Entanglement and Quantum Computation (1997)


Neuron memory, in the resting state, means a high membrane potential; higher membrane energy. When neurons fire, the membrane potential lowers. This suggests that consciousness needs to exist at lower potential, to favor firing neurons into lower potential. If consciousness were at higher potential, it would be more beneficial to rest memory; no firing.

 

Say consciousness is at lower potential and induces the firing of memory into lower potential; energy is also given off. This can increase the potential of consciousness and/or add entropy to consciousness. Now consciousness favors moving memory into a rest entropy state.



 

Thank you CraigD for filling in some of the gaps. I did qualify my statement with "tractable", though perhaps I should have qualified it further with "effectively tractable". A note on the quantum-computation-without-entanglement paper: even that paper admits that only a subclass of quantum algorithms wouldn't need entanglement. Fairly early on, it states that algorithms such as Shor's factoring algorithm would still require entanglement to take advantage of the exponential quantum speed-up.

 

As soon as I have a moment to sit down and do some more research, I'll elaborate on my criteria for artificial intelligence and consciousness, including their relationship with the free will theorem and quantum minds. I've not had a breather since school started to thumb through Universal Artificial Intelligence at length. I have yet to see the coin come down on the proof of P != NP, but I expect that if it comes down on the side of proven, it will support the gist of my assertion on the impossibility of classical artificial intelligence. I expect classical computation will be shown to be a subset of quantum computation in the event that P != NP.

 

In the research I have done regarding computing, several facts have come to light: classical computation is identified with the decision problems, which are binary in nature; classical computation is strictly local; and classical computation cannot express the qubit logic laid out by Paola Zizzi in her thesis.

 

It seems implicit to me in Paola Zizzi's work that the class of problems that quantum computation is identified with is not the decision problems but includes them as a subclass, is unary in nature, and is inherently non-local to some degree. From what I understand, it may turn out that quantum computers exhibit oracle powers. It seems fitting to me that NP, oracle powers, choices, and free will may all be tied in with entanglement. If that's the case, entanglement is inaccessible to classical computing and Aristotelian logics.


This thought experiment, though impractical because of the difficulty in actually creating a container that completely isolates its interior from all interaction with its exterior, is not a metaphor or a paradox, but a real physical prediction.
Uhm, this is making a few quite bold assumptions, especially that the interior of the box would automatically and immediately be in a coherent state and that this state would be such that its evolution immediately becomes a superposition including all possible cases of monkey typing. Perfect isolation isn't quite sufficient.

One aspect of the brain that is often left out is the water. If we dehydrate the brain, nothing happens. If we then substitute any other solvent, very little happens – certainly not enough to make the brain functional and conscious. The water is the wild card that makes it all possible. Water, although common, is the most anomalous chemical in nature; it has more anomalies than any other substance. For example, at high pressure hot water moves slower than cold water, it expands when it freezes, etc.

 

Consciousness makes the most sense, in the context of memory, if it is at lower potential. The potential across the membrane creates potential energy. Neuron firing, and therefore memory, depends on this potential lowering. Consciousness at lower potential puts it in the proper place.

 

If you look at the two main cations behind the neuron membrane potential, Na+ (sodium) and K+ (potassium), although both have the same charge, each cation affects water differently. Na+ is considered kosmotropic, which means it induces order in water, relative to pure water. K+, on the other hand, is chaotropic, which means it creates disorder (chaos) in water, relative to pure water. What this subtle difference means is that when the membrane potential is high, the high level of outside Na+ creates order within the outside water. That means hydrogen bonds form more easily in the external water, which means lower external water potential. The water is induced to low potential, in line with the requirement of consciousness.

 

Here is the full model, based on basic chemistry. As the neurons form their membrane potential, the exterior water becomes more ordered due to the kosmotropic Na+. This lowers the potential of the water until the potential difference with the membrane increases, encouraging firing. The K+ given off by the membrane firing, being chaotropic, disrupts the order of the external water, thereby increasing the potential in the water. This makes it easier to reset the membrane potential. The neurons pump Na+ back outside to reset the membrane potential, while also restoring order in the water, for another potential cycle. The fluid properties of consciousness are connected to water.

 

A brainstorm may well be a good analogy for consciousness. For example, consider a hurricane. This is a low-pressure system. It creates a potential difference with normal atmospheric pressure, with the hurricane at lower potential. If a hurricane is over warm water, it can use the heat and its own low pressure to feed itself and grow into a huge organized disturbance. The neurons, by being at high potential, are sort of like the warm water that feeds the disturbance, causing it to organize and grow until we get lightning. The information transfer in water is done at the level of hydrogen. All you need is brain geometry set up in a way that allows the continuous storm of consciousness. Like on Earth, there are certain places set up by nature that make a hurricane likely.

 

I don't mean to change the discussion, but since water is left out of the analysis, there is a gap, which appears to require using a physics and probability explanation to add the needed variables.



A point of clarification: by artificial intelligence, I mean intelligence that hypothetically can be simulated in silico. At this point it is neither established nor refuted that intelligence can be described purely through theoretical computing and physics. This thread concerns the possibility that it may not be effectively possible to simulate intelligence on a classical computing machine, the implication being that it may be necessary to introduce quantum computing to simulate intelligence.

 

On that basis, I would make a similar argument to the one Feynman made regarding the simulation of quantum phenomena on classical computers: while it may be possible, it's not effectively possible. Furthermore, I am suggesting that it is a distinct possibility that classical computing cannot, even in principle, simulate intelligence but quantum computation can, which would constitute a refutation of the Church-Turing Thesis in part or in whole.


I was trying to help the goal of artificial intelligence by suggesting hardware modifications. I agree that classical computing may not have the hardware to create AI, and that a different approach, such as quantum computing, may help us get closer.

 

Let me add one more aspect of the bio-hardware used for NI (natural intelligence). In the center region of the brain is a structure called the thalamus.

 

The thalamus is the most wired part of the brain, sending and receiving signals to and from almost every area of the brain. Science has already shown that its function includes the induction of consciousness. If damage is done to the thalamus, you go brain dead. One can lose function in parts of the cerebrum and still remain conscious.

 

If you took two parallel wires with electrons flowing in the same direction, the magnetic fields add, causing the wires to attract, even though both move negative charge. Relative to the sodium signals along the surface of the neurons, the "to and from" signals to the thalamus give us something similar. What this does is not only increase the concentration of sodium for order in the external water, but, by adding the currents, it lowers (disguises) the potential the water sees relative to the membrane potential. This allows a higher level of aqueous order to form. The result is that the thalamus is the eye of the storm (highest potential between water and membrane potential). If we damage this region, we don't have the aqueous-membrane potential for the hurricane's eye.

 

If we continue our storm analogy further, hurricanes can spin off tornadoes. The hurricane can create conditions that amplify itself beyond the eye. The hurricane may have winds at 150 mph, but tornadoes can double that. One possible tornado is the conscious mind, which can amplify the winds into will power. But these need the eye to develop.



 

This rests upon the hypothesis that consciousness lies within the brain, which is a hypothesis that I explicitly reject. The neural network does not have the data persistence to allow awareness of temporal differences beyond a couple of minutes at best. The glial cells, or neuroglia, of the brain make up 90% of its tissue, and prior to the late 1990s and early 2000s they were regarded as structural, without cognitive function. We now know this to be false and are making up for lost time. However, none of this addresses the syntax, semantics, semiotics, structure, or logical composition of simulated intelligence, which is what I'm interested in.

 

I suppose a more precise statement is that I'm interested in a formal system to simulate intelligence, which requires formal theories of consciousness, intelligence, and choice. My assertion in this thread is that it is impossible to effectively simulate intelligence on a classical computer because entanglement (and thus free will as well) has no classical analogue. A proof of this theorem would be the death knell for artificial intelligence in Boolean logic. It would demand a concerted effort by artificial intelligence researchers to develop a formal theory of artificial intelligence within quantum computation. In my mind, it challenges the Church-Turing Thesis and the method of proof by contradiction, due to Paola Zizzi's quantum logic.

 

With that, I must ask, Hydrogenbond: can you provide a deductive formal theory of intelligence given your framework? If not, further discussion regarding your framework does not belong in this thread and should be moved to a thread of its own.


I think there are quite a few aspects to attaining proper artificial intelligence that will pass the Turing test that are currently being overlooked. Yes - I know - a pretty cocky and arrogant statement from my side, what with all the eggheads working at it and me just being an amateur... but I digress:

 

The idea of having a computer act intelligently and reply to questions in such a way as to be indistinguishable from a human (à la Turing) is not possible through straightforward programming, and even quantum computing will fall short if it doesn't have the framework right. A computer can only act, that is, pretend to be, intelligent within the limits of its coding. Anything falling outside of it will require the machine to learn from its environment and to make judgments and decisions based on what it's currently being faced with, overlain on past experience. And there are a few things that it will need in order for this to pan out.

 

I think we should not necessarily look at human brain architecture, rather, we should look closer to motivation and pleasure, and how to simulate that. How would you go about making a pile of silicon and circuitry happy? How would you mimic endorphins and all the other reward drugs like dopamine that subconsciously motivate a human in the decision-making process? A computer will make a certain decision because its inflexible programming tells it to. A human will make a certain decision that might look like intelligence, but is just a very scaly way of the brain, being a complete and utter dopamine-junkie, looking for its next fix. So I think in order to pass Turing's test in getting a machine to act like a human, we should first figure out how to simulate the effects certain mood-altering chemicals have on the brain that might be key in guiding the decision-making process, and take it from there. An unmotivated human, just going through the motions, is called an automaton, and with good reason - and I fear some of them won't even pass the Turing test. So how to motivate a machine?
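
Interestingly, this "chase the next fix" idea is, in toy form, what reinforcement-learning programs already do: repeat whatever action has historically paid off. A minimal sketch - the actions, rewards, and learning rate are all invented for illustration:

[code]
import random

# Toy 'motivated machine': it learns which action earns the bigger reward.
actions = ["work", "nap"]
value = {a: 0.0 for a in actions}       # learned expected 'pleasure'
reward = {"work": 1.0, "nap": 0.2}      # hypothetical environment payoffs

for trial in range(1000):
    if random.random() < 0.1:           # occasionally explore
        a = random.choice(actions)
    else:                               # otherwise chase the known fix
        a = max(actions, key=value.get)
    value[a] += 0.1 * (reward[a] - value[a])  # update toward experience

print(value)   # 'work' ends up valued highest, so it dominates behaviour
[/code]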

 

...and the above motivation, of course, will have to be lying on top of a matrix of memories of past experiences, so the computer will know which course of action triggers which rewards. So you should end up with a scenario where you build the computer, and then only test it after ten or twenty years - those two decades will be spent in learning. Without discrete memories, intelligence on the human scale should not be attainable.

 

Sorry if the above is a bit incoherent and if there are a few repeats, but I'm on baby patrol and haven't had sleep in a while.



Thank you, Boerseun; a very on-topic post with keen insight. In discussing the notion of free will and its place in intelligence, we're examining that limitation of classical computing: the inability to make choices. While "A computer can only act, that is, pretend to be, intelligent within the limits of its coding" is true for a decision-problem-solving classical computer, it is not necessarily true for a free-will-endowed quantum computer. I think this is the big oversight of Turing. He held that intelligence was in a sense deterministic, and classical computing reflects that. Variation outside of the program is seen as a bug. Turing's thesis on AI amounts to equating the set of decision problems with intelligence. I think this is a misfit. Human beings make choices regarding their environment and themselves. These choices do not necessarily follow algorithmically or decisively from past experience in every case. Which is to say, I'm asserting that the set of decision problems is not equivalent to intelligence; the set of choice problems is equivalent to intelligence. Furthermore, due to there being no classical analogue of entanglement, it would follow that there is no classical analogue of choice in any 2- to n-ary system.

 

I have to go catch the bus, so I'll have to leave it here for now.


Gee, we get into abundant semantic issues here! Today the word intelligence means 50 zillion things and nobody agrees about which are the true meanings. The core meaning, from the Latin origin of the word, is understanding, discerning; it comes from the words for picking/choosing between, so the basic requisite is to recognize one thing from another. This wouldn't even include problem solving (or the likes, such as planning effectively toward achievement); that crept in later, though today nobody disputes it being a fundamental part of the meaning. I disagree with creativity and idiosyncrasy being necessary aspects.

 

What I do agree with is the importance of learning by experience, but there has been ample research into this – yes, with dopamine too: it is enough for the core algorithm to be designed for some criteria, such as to seek maximization of certain functions. This isn't too difficult when the purpose is some specific and well defined goal (such as checkmating the opponent). There is software that manages accumulating data, which can include the consequences of its own output choices. Employing hoards of data in useful ways is all a part of business intelligence; the most sophisticated aspects, such as data mining, have long been in use, and Oracle has even included functionality for this kind of thing. One interesting branch of it is underlining very unusual user behaviour, as a way of looking for signs of malicious intention.
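
For a flavor of how simple "unusual behaviour" detection can be, here is a deliberately crude z-score sketch in Python (the login counts and the 3-sigma threshold are invented for illustration; real products use far richer models):

[code]
from statistics import mean, stdev

# Hypothetical per-day login counts for one user; the last day is odd.
logins = [3, 4, 2, 5, 3, 4, 3, 2, 4, 41]

mu, sigma = mean(logins[:-1]), stdev(logins[:-1])
z = (logins[-1] - mu) / sigma          # how many std devs from normal?
if z > 3:
    print(f"flag: unusual behaviour (z = {z:.1f})")
[/code]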

Link to comment
Share on other sites


I don't know what you mean. The idea behind "Classical" Artificial Intelligence, as per the OP, is to pass the Turing test; in other words, build a machine with which you can have a conversation without being able to tell that it isn't a human on the other side. Not a lot of room for confusion there.



I know what the Turing test is, but I believe Alan was interested in intelligence rather than in matters that we might call personality. I think his criterion overlooks this, as well as the fact that so many folks are blithering idiots; one may raise many objections to the TT on these grounds. If the candidate software comes across comparably to the dumbest people you've known, including its mistakes and incongruities, you couldn't really say it is too dumb to be a human on the other side.

 

One could even go as far as to say that software conceivably could fail the TT for being clearly too smart and quick. To illustrate this point, let's take an example that is too specific to call AI but already exists and can be put to a "restricted TT" by asking lads like Kasparov to distinguish Deep Blue from another grand master. He accused IBM of having cheated, because it seemed too creative not to be a human player. Some say that advances in chess software have called into question all the decades of technique, which were very valuable against the best human players but don't defeat the newest algorithms. This means they are already at the point where a grand master can deem it too smart to be a human opponent, and this is clearly because the tractability of the problem is beyond our capabilities, so we have always used heuristics, but these turn out to be inadequate against the machines.

 

I don't see this being around the corner for general purpose AI systems but I think it is worth pointing out the distinctions in what we mean by intelligence and problem solving, creativity, personality and even whimsical behaviour. I think the TT is not quite as relevant as Turing thought.


A computer can solve a problem using its programming. There is a linear pull by the programming to a solution as defined by the programming. Intelligence can push or add a lateral vector to get an expected modification away from the programming.

 

Say you had two computers, each with different programming, solving the same problem. During the processing, the solutions from their subroutines are cross-fed. What this would do is place a lateral vector into each program. Since they both are heading in the same general direction, both solutions may depart from the programming, but still be in the ballpark.

 

Let me give a living example. We start to feel hungry due to the body's programming. But since we are in class, we can't eat lunch just yet. This second program (you can't eat lunch just yet) will not allow the programmed solution of the first program. On the other hand, the first program makes the stomach grumble, which is embarrassing for the second program. So you ask your friend if he has any candy to hold you over. He does, and you eat a piece of candy to stop the grumbling. Neither program was satisfied completely: you neither ate enough for program one, nor avoided eating for program two. Rather, you came up with an intelligent compromise that departed from both programs yet complemented both.

 

Nature's hardware is two opposing potentials that are complementary.



Quantum computing is a joke imo.

 

Strong AI essentially comes down to a data mining algorithm. Forget free will or emotion. The concept of free will is not something that anyone has experience of, and its definition has been contrived by gluing together deterministic concepts in a deterministic system. (Determinism as in cause and effect, not as in predicting outcomes.) Randomness, spontaneity, choice between likely options, etc. are all concepts that free will is based on, and all of these can be shown to have deterministic explanations.

 

Emotion is just a motivator. Emotion can be represented by a potential difference, by water subjected to gravity (much slower, though), or by anything else with potential energy. Chemicals in the brain are just an efficient representation of such a motivational system.

 

Neural nets can give us insight into the aforementioned algorithm, but most neural networks have alternate mathematical representations. The nets aren't intelligence in and of themselves; they are simply a physical realization of the data mining algorithm. Consider the wide variety of brains in the wide variety of species, with different inputs and outputs. Consider the brain's ability to recreate damaged parts of itself using undamaged parts. All the information you can find on the subject clearly points to a general algorithm that can take any kind of input, recognize inherent patterns, generate abstract classes of these inputs, and then pursue instinctual goals by interacting with members of these abstract classes.
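
The "recognize inherent patterns, generate abstract classes" step is, in miniature, what clustering algorithms do. A bare-bones 1-D k-means sketch, with toy data and an initialization invented for illustration:

[code]
# Bare-bones 1-D k-means: group raw inputs into two 'abstract classes'.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = [data[0], data[3]]                 # crude initialization

for _ in range(10):                          # alternate assign/update steps
    groups = [[], []]
    for x in data:
        i = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
        groups[i].append(x)                  # assign to nearest center
    centers = [sum(g) / len(g) for g in groups]

print(centers)   # about [1.0, 9.07]: two 'abstract classes' emerge
[/code]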

 

Eventually human-level intelligence will be shown to be nothing more than a few lines of pseudo-code.

