
Was HAL conscious?



Was HAL, the computer featured in Stanley Kubrick's film 2001: A Space Odyssey, a sentient being, or merely the product of "brute force" computation?

 

Since his debut in 1968, HAL has served as the guidepost for artificial intelligence research. More than any other character in fiction, he has represented the enormous potential of the field, and has helped to launch the careers of many an AI researcher. Calm, rational, and eerily human, HAL would certainly pass the Turing test. But was he actually a conscious being -- an emergent byproduct of some robust future algorithm -- awake and aware of his surroundings? Or was he instead a masterpiece of human simulation, produced by the interplay between cleverly designed software and extremely fast -- but conventional -- hardware?

 

Of course, HAL is imaginary, but his legacy reminds us that achieving the appearance of human-like machine intelligence need not require true sentience. In the film, even the experts are uncertain whether HAL is conscious:

 

Reporter: The sixth member of the Discovery crew was not concerned with the problems of hibernation, for he was the latest result of machine intelligence: the H.A.L. 9000 computer, which can reproduce -- though some experts still prefer to use the word mimic -- most of the activities of the human brain, and with incalculably greater speed and reliability.

 

HAL, on the other hand, makes a case for his own self-awareness:

 

HAL: I enjoy working with people. I have a stimulating relationship with Dr. Poole and Dr. Bowman. My mission responsibilities range over the entire operation of the ship, so I am constantly occupied. I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.

 

Much airplay has been given to the glamorous future of artificial intelligence, the dawn of sentient machines. But little attention has gone toward imagining a less glamorous -- but arguably more realistic -- future in which machines might be constructed to appear conscious, without actually being so.

 

In his book The Age of Spiritual Machines, Ray Kurzweil predicted that by the year 2019 a $1,000 computing device would have the computational ability of the human brain. He further predicted that just ten years later, a $1,000 machine would have the computing capacity of approximately one thousand human brains. Regardless of whether or not you agree with Kurzweil's timetable, one thing is certain: computer "horsepower" has increased dramatically since the field's inception, and seems likely to increase just as dramatically in the near future.

 

Let us imagine that it is 2019 and software development has advanced no further than today, while hardware has progressed to the point where it matches the computational ability of the human brain (estimated by Kurzweil to be 20 million billion calculations per second). Even with present-day software, the sheer horsepower behind such hardware would make these systems capable of amazing things. Is it possible that problems like computer vision, knowledge representation, machine learning, and natural language processing could be solved by brute force computation, even if no new software efficiencies are implemented?
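
 

To put rough numbers on that timetable, here is a back-of-envelope sketch in Python. Kurzweil's 20 million billion figure comes from the article; the present-day cost figure and the doubling period are illustrative assumptions, not his:

    import math

    BRAIN_OPS_PER_SEC = 20e15          # Kurzweil's estimate: 20 million billion calc/sec
    OPS_PER_1000_USD_TODAY = 1e12      # assumed ops/sec available for $1,000 today
    DOUBLING_PERIOD_YEARS = 1.5        # assumed Moore's-law doubling period

    # How many doublings until a $1,000 machine matches the brain, and how long?
    doublings = math.log2(BRAIN_OPS_PER_SEC / OPS_PER_1000_USD_TODAY)
    years = doublings * DOUBLING_PERIOD_YEARS
    print(f"~{doublings:.1f} doublings needed: roughly {years:.0f} years")

Under these assumptions the answer is about fourteen doublings, or roughly two decades -- which is the point: on a steady exponential curve, brain-scale hardware is a near-term prospect.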

 

Consider the progress made in chess-playing computers. For a long time in the 1970s and 1980s it remained an open question whether any chess program would ever be able to defeat top human players. In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years. He won his bet in 1978 by beating Chess 4.7 (the strongest computer at the time), but acknowledged then that it would not be long before he was surpassed. In 1989, Levy was defeated by the computer Deep Thought in an exhibition match.

 

Chess algorithms work not by reproducing human cognitive processes, but by examining future moves. They have attained tournament-level playing ability almost exclusively through dramatic speed increases in their number-crunching hardware. In their book How Computers Play Chess, researchers David Levy and Monty Newborn estimated that doubling a computer's speed gains approximately fifty to seventy Elo* points in playing strength.
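
 

Taken at face value, that rule of thumb makes the payoff of raw speed easy to estimate. A minimal sketch (sixty points per doubling is used as a midpoint, and the thousand-fold speedup is a hypothetical figure):

    import math

    # Levy and Newborn's rule of thumb: each doubling of speed is worth
    # roughly fifty to seventy Elo points in playing strength.
    def elo_gain(speedup: float, points_per_doubling: float = 60) -> float:
        return points_per_doubling * math.log2(speedup)

    print(f"1000x faster: ~{elo_gain(1000):.0f} Elo points")   # ~10 doublings, ~600 points

In other words, a thousand-fold hardware speedup alone -- with no algorithmic improvement at all -- would be worth on the order of six hundred points, the gap between a club player and a grandmaster.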

 

As Nigel Shadbolt of the University of Southampton said: "I believe that massive computing power directed with a light touch toward interesting parts of a problem space can yield remarkable results."

 

I asked a few AI researchers what they thought about the possibility of brute force computation eventually simulating human intelligence, and here is what they told me:

 

Steve Grand:

 

Take the simplest possible method of brute force AI: a straightforward list of the answers to all anticipated questions. You can draw a network of all the possible relationships between objects and verbs, representing the number of possible questions and answers. If the knowledge domain only has one object then there are very few questions that could be asked about it. If there are two objects then you can ask questions about each object, but also about relationships between the two (is lead denser than water?). As you add more objects the number of questions rises as the factorial. Clearly there are more questions that could be asked about a world containing a few dozen objects than there are subatomic particles in the entire universe. So quite quickly you reach the point at which the universe simply isn't big enough to hold a computer that could store the necessary answers. So you obviously have to take a more sophisticated approach.

 

The most sophisticated approach would be an accurate model of a human brain, configured by real personal experiences of the world. This is clearly capable of passing the Turing test and it scales efficiently but it's Strong AI. So where is the point between these two extremes at which the results of such a cheat are sufficiently convincing -- and does this method of representation scale well enough not to require a computer larger than the number of bits in a manageable chunk of the universe? My feeling is, it doesn't scale well at all -- there is no substitute for the structure of the brain itself -- the brain is its own best description and any other valid description contains vastly more bits than a brain (or even a thousand brains).
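
 

Grand's arithmetic is easy to check. Taking his "rises as the factorial" estimate at face value, a short sketch (the 10^80 particle count is the usual rough figure for the observable universe):

    import math

    # If the number of distinct questions about n objects grows factorially,
    # how many objects before a lookup table of answers outgrows the universe?
    PARTICLES_IN_UNIVERSE = 10**80

    for n in (10, 30, 60):
        questions = math.factorial(n)
        print(f"{n} objects: ~{questions:.1e} questions; "
              f"exceeds particle count: {questions > PARTICLES_IN_UNIVERSE}")

Sixty objects already yields about 8 x 10^81 questions -- more than the estimated particle count -- which bears out his claim that a few dozen objects is enough to exhaust the universe as a storage medium.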

 

Ben Goertzel:

 

I think that faking intelligence in a Turing-test context is almost surely possible, but only using many orders of magnitude more computing power than exists in the human brain. Mathematically, one can prove that it IS possible if one has sufficiently much computing power -- but this theorem doesn't tell you much of any practical use, because the proof doesn't tell you whether the amount of computing power is, say, bigger or smaller than the amount of computing power in the entire universe.

 

Hugo De Garis:

 

There's a huge difference between high bit rate computers and high intelligence. A high bit rate is a necessary condition for an intelligent machine but not sufficient. To be sufficient, the bits in the circuitry have to be connected in brain-like ways, but we don't know how to do that yet. We will probably have to wait until nanotech gives us powerful new tools to investigate the principles of how the brain performs its magic. Then we can put those principles into machines and get the same level of intelligence performing a million times faster, i.e., at light speed compared to chemical speed. My view of the timing is that we won't have real nanotech until the 2020s, then in the 2030s there will be an explosion of knowledge in neuroscience, which we will be putting into brain-like computers in the 2040s.

 

Steve Lehar:

 

The Turing test is a pretty strange benchmark from the outset. The idea is to restrict the "user interface" to something that computers can handle, like a text I/O interface. But the REAL mark of intelligence, human or otherwise, is the ability to walk into the room and sit down in front of the user interface, with the kind of grace and elegance that a cat or lizard or snake can demonstrate, even if they can't figure out how to read the screen or type an input. If we could replicate even THAT degree of intelligence and biomechanical grace, we would be much farther advanced in replicating human intelligence.

 

I think the Turing test is a very biased and restricted benchmark, designed more to demonstrate the "abilities" of our stupid digital computers than to release the potential of true human or animal intelligence. How about an anti-Turing test, where the creature or machine has to walk into a room, identify where the user interface is located, and sit down in front of it? How long would Kurzweil suppose it would take before we can accomplish THAT in artificial intelligence?

 

One of the big surprises of the search for artificial intelligence has been the fact that the "white collar" type tasks, such as calculating Boolean logic, solving differential equations, navigating a spaceship to the moon and back, are apparently the "easy" problems of computation, while the more basic "blue collar" tasks of getting dressed in the morning, identifying the wife and kids to communicate the appropriate kisses and nods, and driving the body to work, are actually the REAL frontiers of human intelligence; we have NO IDEA how they are done.

 

Paul Almond:

 

What do you mean by a Turing test pass? Do you mean fooling the average person into thinking that they are talking to a human for 5 minutes? 5 days? 5 years? As an example, would you require the machine to reproduce anything like the detailed e-mail exchanges we have had for a long time now? Would you expect the e-mail messages you have sent me to be answerable, to some degree?

 

I think this is where we can run into problems. Given a prolonged enough exchange, passing the Turing test would probably be as hard as having full consciousness anyway -- because of the scope a person has for catching the computer out -- so I don’t really see a proper Turing test pass as a particularly easy problem.

 

I think that mimicry of consciousness would imply consciousness, but I don’t think it could be done by brute force. I think it would require cleverness in software design of some kind. This means I do not expect huge processing power, in itself, to deliver a Turing test pass. However, when we get such [super-fast] hardware, a lot of AI research will become easier. Furthermore -- and this is a big point -- lots of AI algorithms that might have been impractical before now become practical, so a lot of speculation can now be tested experimentally. I think it would speed up AI research a great deal, and the start of true AI might emerge not many years after.

 

There is one exception where brute force could clearly deliver AI. That is, if you had the ability to somehow record the structure of a human brain with sufficient accuracy. You could then get a computer to “run” your image of a human brain and you would have an AI system. You would not know how it worked without some research (it would not even know itself), and your AI may not thank you for it: it would have the memories and personality of whatever brain you used.

 

In 2001: A Space Odyssey, HAL acted as if he was conscious -- but was he? We'll never know for sure, but if one day brute force computation conquers many of the problems associated with artificial intelligence, the question of machine sentience may be a whole lot easier to answer.

 

Reporter: Do you believe HAL has genuine emotions?

 

Dave Bowman: Well, he acts like he has genuine emotions. Of course, he was programmed that way to make it easier for us to talk with him. But as to whether or not he has real feelings is something I don't think anyone can truthfully answer.

 

__________

*The Elo rating system is a method for calculating the relative skill levels of players in two-player games such as chess and Go.
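
For reference, the Elo system predicts the expected score of player A against player B from the rating difference alone:

    E_A = 1 / (1 + 10^((R_B - R_A) / 400))

so a gain of fifty to seventy points corresponds to scoring roughly 57-60% against a previously equal opponent.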

 

Machines Like Us


I am of the frame of mind that, yes, HAL was conscious. And, just to be pedantic, HAL was featured in Arthur C. Clarke's book before he was featured in Stanley Kubrick's film.

 

That's not quite accurate. Kubrick and Clarke co-wrote the screenplay for 2001 before the book was written, then Clarke wrote the book while the movie was being filmed. A rift grew between Clarke and Kubrick at the time, because Kubrick insisted that the book not be released until after the film was completed.

 

This Wikipedia page discusses the issue.


Was HAL, the computer featured in Stanley Kubrick's film 2001: A Space Odyssey, a sentient being, or merely the product of "brute force" computation?
I don’t believe this should be phrased as an either/or question. In principle, hypothetically, why could HAL not be both sentient and the product of brute force computation?

 

In GEB (Gödel, Escher, Bach), Hofstadter posed a question similar to the one asked in post #1, using a hypothetical, very advanced computer capable of accurately simulating all of the individual cells of a living human body. In this context, “accurate simulation” requires that the program not deviate significantly from what it simulates: all of the cells of a living human. Since the cells of a human body appear to create the observed behaviors to which we ascribe sentience, the simulation would create those behaviors, too. We would expect the simulation of a human being to differ from the human being upon which it is based in about the way that identical twins differ from one another.

 

This is (and is acknowledged as) essentially a more elaborate and plausible restatement of Searle’s Chinese room. Some of the most compelling responses to that thought experiment note that it is practically impossible to implement: it would require more human beings and printed books than there are atoms in the universe to build, and it would take more time to perform its described operation (a single exchange in a conversation in Chinese) than any biological species could survive. But basic science, such as that described in several of post #1’s quotes, suggests that computers available within several decades might be capable of simulating all of the cells in a human body.

 

Of course, such a simulation is likely a terribly and unnecessarily inefficient way to create a sentient computer, but it demonstrates that, if one accepts the premise that human sentience arises from human biological processes, and that simulation of biological cells with arbitrary precision is in principle possible, one must accept that it is possible to simulate sentience which is, in any scientifically measurable way, indistinguishable from “natural” sentience.

 

Both Searle and Hofstadter cleverly and purposefully avoid one thing and accomplish another in their thought experiments:

  • The need to define “sentience”. Presumably, sentience is a quality possessed by humans, so an adequate simulation of a human has this quality also, even if (as I and many others believe is the case) we are unable to formally define it.
  • Solace for a couple of generations of AI researchers and programmers smarting from the failure of the discipline to find the “clever tricks”, assumed by speculative fiction writers like Clarke to be ready for the finding, that would allow HAL-like programs to “became operational on January 12, 1992” (or, as the book has it, 1997). Programming culture was and remains pervaded by the folk belief that, if one simply remains immersed long enough in a particular problem, even a very hard one – if one prototypes enough – such tricks will emerge, allowing the problem to be solved. This has not happened, to our considerable embarrassment.
     
    Programming culture is also pervaded by the quasi-religious belief that sufficient processing power and brute force can be counted on, when needed, to succeed where clever, elegant tricks fail. So the notion that, to program true HAL-like AI, we need “only” simulate dumb, squishy biochemical cells, is comforting (falsely, as folk acquainted with biological simulation programming can attest).

Though useful and clever, this avoidance of defining sentience (AKA consciousness, awareness, etc.) is troubling, as it allows the possibility that sentience is a semantic null – that is, a term describing something that does not actually exist, like the phrase “a smooth circle with 3 interior angles totaling 180 degrees”. I, and presumably all of us humans, certainly feel, subjectively, that we are sentient, but it has not been disproved (and is, IMHO, likely) that this feeling is merely a cognitive and behavioral trait emerging from the particular family of “thinking machines” of which we are instances. Just as most people were at one time confident that human beings were incontrovertible proof of the existence of God and of the factuality of the creation story given in the Bible book of Genesis, the belief that we possess a fundamental “spark of self-awareness” qualitatively absent from, say, an anti-lock braking system computer, or a dog, or an amoeba, may be one that, a few or hundreds of generations hence, few thoughtful people hold.

 

On the other hand, some very smart people, such as Roger Penrose, have speculated that sentience may have little to do with computation, and may instead arise from not-yet-understood physical phenomena involving how particular parts of our bodies, such as microtubules in brain cells, interact with fundamental forces such as gravity. This speculation, while not yet producing scientifically testable hypotheses, is intensely interesting, and has inspired some wonderful speculative fiction, such as Matthew Hughes’s 2008 Hugo nominee, "The Helper and His Hero", which speculates that consciousness owes more to gravity than to biology.

 

To disclose my personal biases on the question at hand, allow me to state that I believe, but can’t prove, that computer programs like HAL are possible, and should be considered as sentient as a human being. I also believe that sentience is a semantic null, and that Penrose’s speculation about the physics of consciousness is incorrect and entirely on the wrong track.

