
Can A Computer Become Sentient



// $this tells the whole class what $CC is; it is available to all methods. It starts with the __construct function, which transfers 'Silver, High Wizard' into $content through PHP's built-in __construct mechanism. The value is then passed to the get_CC method through the $CC property. Reading it, you actually work backwards, going back to the top of the script.

// A method is a function in a class; a property is a variable available to all functions in the class. Private is visible only inside the class, and public is accessible from outside it.

// Can a computer therefore become self-conscious? Yes, because if you refer to the contents of another folder through its path, the computer can cross-reference.
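Since the original code wasn't posted, only its comments, here is a minimal sketch of the PHP class those comments seem to describe. The class name Wizard is my own placeholder; only $CC, $content, __construct, and get_CC come from the comments above.

<?php
// A guess at the class the comments above describe; only the names
// $CC, $content, __construct, and get_CC come from the original post.
class Wizard
{
    // Property: a variable available to every method in the class.
    // Private: visible only from inside the class.
    private $CC;

    // __construct runs automatically when the object is created and
    // stores its argument in the $CC property.
    public function __construct($content)
    {
        $this->CC = $content;
    }

    // Method: a function defined inside a class. Public: callable
    // from outside the class.
    public function get_CC()
    {
        return $this->CC;
    }
}

$wizard = new Wizard('Silver, High Wizard');
echo $wizard->get_CC(); // prints: Silver, High Wizard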

Thus, if a computer can relate, compare, cross-reference, learn, reflect, remember, calculate, conclude, and make new variations, then how does it differ from the self-consciousness of a human?

The question, of course, remains: can the Soul/Spirit ever occupy such an artificial mechanical body and possess free will (through randomness?) like a human can?


We've discussed this in AI threads before... there is more to being self-conscious than that, just a smidge bit.

 

To answer your question directly: those terms, used in a programming-language sense, do not suggest that computers are self-conscious. Basically, you are waaaay overgeneralizing what consciousness is, and you are also wrongly assuming that because PHP lets you store a variable in memory, a computer can somehow just remember. In a nutshell, the difference between a computer and a human lies in the architecture of the processing unit, the architecture of memory, and how tasks get processed. That nutshell in a nutshell translates into: no, computers are not self-conscious, nor are they anywhere close to being so...


The question, of course, remains: can the Soul/Spirit ever occupy such an artificial mechanical body and possess free will (through randomness?) like a human can?

 

True randomness is not the product of deterministic computational algorithms. The very determinism inherent in such an artificial mechanical body would exclude "free will" by its very nature.


Welcome to hypography, SilverRevlis!

True randomness is not the product of deterministic computational algorithms. The very determinism inherent in such an artificial mechanical body would exclude "free will" by its very nature.

While I agree that no pseudorandom generator of any kind, such as the pseudorandom number generator A = frac((A+C)*C), exhibits true randomness, the OP's "the question remains" question doesn't ask whether a deterministic algorithm can "possess free will (through randomness?)", but whether a "soul/spirit" can by occupying a mechanical body. If you accept that true randomness is physically possible at all, then something physical must be able to produce it. A "mechanical body" seems to me loosely enough defined to include many such physical things.
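For what it's worth, the determinism is easy to see by running that generator. Here is a minimal sketch of A = frac((A+C)*C) in PHP (the thread's language); the seed and constant are arbitrary choices of mine, not values from any post.

<?php
// The generator quoted above: A = frac((A + C) * C).
// Seed and constant below are illustrative assumptions.
$A = 0.5;          // seed
$C = 2.718281828;  // constant

for ($i = 0; $i < 5; $i++) {
    // frac(x) for positive x is fmod(x, 1.0)
    $A = fmod(($A + $C) * $C, 1.0);
    echo $A, "\n";
}
// Re-running with the same seed prints the identical sequence:
// statistically random-looking, but fully deterministic.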

 

The human body is a physical thing, and can arguably be considered a kind of mechanical body, so the original question becomes tautological – "something in a mechanical body" certainly can possess whatever a human can, because a human is a "something in a mechanical body".

 

This is, I'm pretty sure, not the line of reasoning SilverRevlis meant to solicit with his observations and questions. He seems, simply put, to be asking if you could write a computer program, install it on a digital computer, put the computer in a mechanical human body, and get an entity roughly like a human being. As far as I've been able to reason, I can't see why not – and likely with hardware available in the next couple of decades. Though a scripting language like PHP would be far from my choice to write the program in!

 

As Alexander notes, we've talked about this quite a bit in this forum – one discussion that came quickly to my search was "When will computer hardware match the human brain?", after Moravec, 1997. Hans Moravec is an interesting and semi-famous writer on the subject.

 

Somewhere in these threads, I posted the source code to a tiny self-modifying program, to show that self-reference and self-modification don't lead inevitably to human-like artificial intelligence. The root of the problem with believing they do is that, to date, no self-modifying program has led to anything like human-like AI. Moravec only predicted that the hardware necessary for such a thing would be available soon – producing the software is – understating colossally – challenging.
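The original tiny program isn't reproduced in this thread, so here is a stand-in sketch in the same spirit, assuming nothing from the original: a loop that rewrites the code string it executes each pass, which is self-modification of a sort, yet obviously nothing like intelligence.

<?php
// A toy self-modifying program (illustrative only). Each pass evaluates
// a code string, then rewrites that string for the next pass.
$code = 'return $x + 1;';
$x = 0;

for ($i = 0; $i < 5; $i++) {
    $x = eval($code); // run the current version of our "program"
    // Self-modification: build a new instruction for the next pass.
    $code = 'return $x + ' . ($i + 2) . ';';
    echo $x, "\n"; // prints 1, 3, 6, 10, 15
}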


There is no true randomness in software, though there is hardware that can provide truly random results. That said, yes, there are many people working on AI and genetic algorithms that modify programs so they run, evolve, and become more complex. I agree that artificial intelligence already exists in some forms. But to extrapolate and assume that this course will get us to a self-conscious program that can make its own decisions is a bit of a jump, and at the very current moment in time, with current hardware architecture, it is, if not impossible, highly improbable. That is not to say an architecture won't come along that provides a much better platform for creating applications far more able in terms of mind-like processing, self-thought, and consciousness, and perhaps even intelligence (note: conscious is not the same as intelligent, and vice versa).


  • 2 months later...

Hi, I came across this thread – I know it is a little old, but just to wade in.

My brother is a psychologist and I am a computer scientist so we tend to discuss this quite a lot.

 

The first problem is defining consciousness and what it is. There are many different approaches. One of the problems is that psychology as a discipline is young and there are many theories to be refined before strong theories of human intelligence can emerge. At present there are many competing theories and researchers continue to investigate these models and their applicability. So that is one thing we can discuss.

 

The next question for me is around levels of intelligence/consciousness. I think we would all recognise that a dog is an intelligent and conscious being but not perhaps on the level of a chimpanzee. So, should our goal for AI be to produce a Wittgenstein 2.0 or something on the scale of a tadpole?

 

This then links in with what is meant by intelligence. I think most people would agree that an intelligent frog should eat flies to stay alive. However, the mechanism that allows a frog to eat flies is a simple algorithm that takes input from the eyes and then triggers a response in the tongue. This to me seems like a deterministic model of intelligence.
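To make that concrete, here is a toy sketch of the frog's stimulus-response rule; the threshold is invented for illustration, not taken from any frog research.

<?php
// The frog's deterministic "intelligence": input from the eyes,
// fixed response in the tongue. The 0.3 threshold is made up.
function frogBrain(float $flyDistance): string
{
    return ($flyDistance < 0.3) ? 'flick tongue' : 'wait';
}

echo frogBrain(0.1), "\n"; // flick tongue
echo frogBrain(1.0), "\n"; // wait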

 

Which leads me to a point implicit in many of these posts. Posters mention "soul", "randomness", etc. What is a soul? It is not something that most AI researchers attempt to define, nor psychologists for that matter. Why does human intelligence necessarily have to be random? Chaos theory shows us that complex patterns can emerge from simple inputs which feed back into the system. Why not apply this sort of model to thoughts on AI?
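The textbook example of that chaos-from-feedback point is the logistic map: one line of feedback that produces wildly complex output. A quick sketch (my illustration, not something from the thread):

<?php
// Logistic map: x' = r * x * (1 - x). For r near 4 the sequence is
// chaotic - deterministic, but with sensitive dependence on the seed.
$x = 0.2;  // seed
$r = 3.99; // feedback strength, chosen inside the chaotic regime

for ($i = 0; $i < 10; $i++) {
    $x = $r * $x * (1 - $x); // simple input fed back into the system
    echo $x, "\n";
}
// Change the seed by 0.000001 and the sequence soon diverges completely.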

 

Finally, how do we know that computers are not exhibiting intelligent behaviour already – but in a way that we cannot identify with? I am being flippant, but many people assume that artificial intelligence implies artificial human intelligence, and outside of a human-centric view of the universe (which the progress of history seems to undermine on all sides), why should this be the case? One of the emergent thoughts in research on AI is that intelligence requires a body (which is a bit of an overstatement – intelligence requires an input/output interface with an environment separate from the intelligence; I digress, but this touches on old philosophical queries as to the nature of being). So perhaps webBots dream of electronic sheep?

 

Some thoughts anyway, would be interested to hear what others think.


A belated welcome to hypography, timbot! I very much enjoyed reading your post :confused:

My brother is a psychologist and I am a computer scientist so we tend to discuss this quite a lot.

I’m also a computer scientist (more accurately, a BS in Math who’s programmed computers more or less since puberty and ended up doing it for a living) with a personal and professional interest in both practical psychology (counseling and clinical – I married a clinical social worker, work in medical IT, and ... well ... have a psyche :)) and “big questions” such as AI and the nature of consciousness. Major influences on my thinking about AI and consciousness: GEB; The Mind’s I; The Society of Mind and various other writings by its author, Marvin Minsky.

The first problem is defining consciousness and what it is. There are many different approaches. One of the problems is that psychology as a discipline is young and there are many theories to be refined before strong theories of human intelligence can emerge. At present there are many competing theories and researchers continue to investigate these models and their applicability. So that is one thing we can discuss.

I think psychology’s problems are due less to the discipline’s youth (one can reasonably argue that psychology’s not really very young, and that something resembling present-day therapeutic and academic psychology was present, though less well published, as early as, say, the 2nd century, in the works of excellent physicians such as Galen) than to its somewhat unavoidable reliance on abstract philosophical thought rather than concrete neurophysiological experimentation. Though neurology is making great advances (many of them, to my regret, appearing in literature too technical for me), the divide between concretely understood neurochemistry/anatomy and practical human psychology remains so wide that even the most knowledgeable people can only hazard very speculative guesses at what’s happening on a neurological basis when a given human (or nearly any “smart” animal) behavior occurs.

 

Recently, I surveyed and read in depth a few recent works by Lakoff and other innovative thinkers on the subject, but came away without any concrete tools of use in, say, writing a Turing-test-capable computer program, nor with the sense that the subject has advanced substantially from where it was when I first began reading works like those in The Mind’s I, ca. 1980. Thirty years later, the big questions seem to me no smaller.

The next question for me is around levels of intelligence/consciousness. I think we would all recognise that a dog is an intelligent and conscious being but not perhaps on the level of a chimpanzee. So, should our goal for AI be to produce a Wittgenstein 2.0 or something on the scale of a tadpole?

;) If I could pick a famous 20th-century math philosopher of slight mental stability to emulate on a computer, I’d choose Gödel over Wittgenstein, because Gödel wrote papers I (or almost any moderately educated person) could actually understand that reached definite, formal conclusions. Wittgenstein, by all accounts I’ve seen, while considered a genius by nearly everyone with whom he interacted, was rarely understood by anyone, and strikes me as a bit sulky and flighty – unwilling, to borrow a phrase Neal Stephenson used in Cryptonomicon to describe a fictional family of stolid but productive mathematicians (the Waterhouses), to “figure out that 2 plus 2 equals 4, then just stick to his guns”. Gödel, while clearly less sane than Wittgenstein (he died of starvation, believing that all of his food was poisoned), could in his lucid moments stick to his mathematical guns with awesome intensity.

 

To the question at hand: I believe that were we able to truly emulate on a computer the behavior of any animal that can consistently pass the mirror test – apes and dolphins, but not tadpoles through cats and dogs (or, remarkably, human infants) – we’d be able to emulate all of them, and in addition create psyches not found in any animal. While I don’t think this would necessarily result in a Vingean singularity, I suspect it might result in a computer-generated person better in every quantifiable way than any human, in the same way that the Deep Blue chess-playing computer and its successors are at present certainly better at playing chess than any human.

This then links in with what is meant by intelligence. I think most people would agree that an intelligent frog should eat flies to stay alive. However, the mechanism that allows a frog to eat flies is a simple algorithm that takes input from the eyes and then triggers a response in the tongue. This to me seems like a deterministic model of intelligence.

The term “intelligence” is used meaningfully to describe anything from well designed, simple software, to what we humans have. The essence of what it means in phrases like “true artificial intelligence” is, I strongly believe, possessing a self-modifiable self model. This is what the mirror test tests, and I believe it’s not coincidental that only animals we intuitively consider “smart” can pass it even occasionally.

 

A successful “true AI” program must, I think, have such a self model. I don’t believe any yet have.

Which leads me to a point implicit in many of these posts. Posters mention "soul", "randomness", etc. What is a soul? It is not something that most AI researchers attempt to define, nor psychologists for that matter.

People who work in even loosely formal disciplines such as these don’t much ask “what is a soul?”, because the key attribute of its common meaning is immortality. What’s truly important about souls, to people who believe in and think much about them, is that they allow their possessors to never truly die. Though a tremendously attractive idea, I don’t think it’s one that’s of much use to people who are, say, attempting to write computer programs that exhibit animal-like intelligence.

Why does human intelligence necessarily have to be random? Chaos theory shows us that complex patterns can emerge from simple inputs which feed back into the system. Why not apply this sort of model to thoughts on AI?

I believe that a self model such as I describe above results in the sort of feedback that results in chaos – sensitive dependence on initial conditions – and that such chaotic behavior is practically equivalent to randomness. Chaotic systems can be extraordinarily algorithmically simple – I believe that terse pseudorandom number generators such as the ones described in this thread are both chaotic and often statistically random, though not even vaguely self modeling.

Finally, how do we know that computers are not exhibiting intelligent behaviour already – but in a way that we cannot identify with? I am being flippant, but many people assume that artificial intelligence implies artificial human intelligence, and outside of a human-centric view of the universe (which the progress of history seems to undermine on all sides), why should this be the case?

This is an interesting flippant idea, and excellent fodder for a SF story (I recall actually writing such a story a couple of decades ago – oddly, such ideas seemed more widely accepted as plausible then than now).

 

Seriously, I don’t believe any computers are exhibiting intelligence of the self-modeling kind I call “true AI” above. Various ELIZA-descendant bots certainly give the appearance of human-like intelligence, and countless programs doing that-which-we-can’t-figure-out give the appearance of non-human-like intelligence, but I think this appearance is simply appearance, in the same sense that a mannequin appears to be a flesh-and-blood human.

 

I believe this because I feel I have a good sense of how all these programs are written, from game AIs to smart router load balancers. Even self-modifying programs don’t model themselves and their environments as much as very tiny animals such as insects appear to. Despite the inherent flexibility software has, all of it is, in my experience, more like a mechanical clockwork than a biological organism. Present-day programs just lack the programming for self-modeling.

 

I don’t think there are any insurmountable barriers to writing such programs. Semi-state-based agent programs that themselves run models of other semi-state-based agents, and of the environment in which they all run, would, I think, have it.

 

A trick we humans appear to use to do this is to base our models of others on our model of ourself. I expect this trick would work for the programs I’m imagining here, too – something like the rough sketch below.
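To be concrete about what I mean, here is a very rough sketch of such a self-modeling agent; every name in it is hypothetical, and it is an architectural doodle under my own assumptions, not a working AI.

<?php
// Sketch of an agent that keeps a model of its own state and reuses
// that self model as its initial model of other agents. All names here
// are invented for illustration.
class Agent
{
    private $state;            // the agent's actual state
    private $selfModel;        // the agent's model of its own state
    private $otherModels = []; // models of other agents

    public function __construct(array $state)
    {
        $this->state = $state;
        $this->selfModel = $state; // start with an accurate self model
    }

    // The "trick": model another agent by copying the self model.
    public function modelOther(string $name): void
    {
        $this->otherModels[$name] = $this->selfModel;
    }

    // Act, and keep the self model in step with the actual state.
    public function act(string $key, $value): void
    {
        $this->state[$key] = $value;
        $this->selfModel[$key] = $value;
    }
}

$a = new Agent(['mood' => 'curious']);
$a->modelOther('b'); // assume "b" is roughly like me, until evidence says otherwise
$a->act('mood', 'bored');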

 

In summary, we know webbots don’t dream of electric sheep, because we know precisely what webbots do – bit by bit, the states they can have – and nothing like dreaming is among them. We can, I’m pretty sure, write programs that dream, or at least imagine – but such programming will be much harder than what most of us programmers are accustomed to.

One of the emergent thoughts in research on AI is that intelligence requires a body ...

I’ve noticed this, too, but rather purposefully ignored it, as I barely have the skills at robotics I need to keep my little fleet of flying and rolling toys flying and rolling. As Maslow said, “when the only tool you have is a hammer, everything looks like a nail.” Any body an AI of mine gets is going to be a virtual one. :eek2:

 

Well, enough talking about programming. The aphorism “easier said than done” applies as strongly to programming as to anything. Speaking strictly for me, finding the time to work on such stuff is hard. Damn work! :hihi:


What is it that defines intelligence? The things we are taught (or have programmed into us)? Is it the way we can make adjustments on the fly? Or the way, when we crash, we pop valium on the couch for months, unable to process whatever? Our sensitivity?

 

Reboot:

 

Consciousness – would it be anything without the senses? A computer that beats a master at chess – is it superhuman, doing things that we cannot? We imagine we are still masters of these computers, since we programmed them. But we ourselves are programmes. Consciousness – what is it without the senses? Will a computer ever feel what it is to lose to a mere machine? Will it ever find this line of thinking offensive?

 

We are programmes, dictated by local rules at the molecular and cellular level: each protein foretold in the manuscript of DNA; through blastulation, gastrulation, and onwards it goes, the patterns pre-chosen, the touch of randomness completed in that distant meiotic division; and now mitosis into trillions of cells, through the birthing canal and beyond.

 

What is our programme? To replicate. We might wonder whether there is more to it than this, but the reality is, it's all about ****ing for us. Now that we have learned to think about, write about, sing about, paint about, and dream about ****ing, of course we consider ourselves creative. :)

 

Consciousness – a will to live? The question of why we live?


  • 2 weeks later...

Currently, I don't think there are any computers that will suddenly become sentient and try to take over the world. Even if my PC were self-aware, it certainly wouldn't have any way of telling me: it has no control over its output. Any "consciousness" would be just a spectator, and probably a confused one at that. Imagine the mind of a computer – like being born deaf, paralyzed, numb, and completely deprived of everything "real"; no awareness of a "body" whatsoever, no social context, no language – just witnessing as a blinding series of pulsing waves passes by. In fact, probably not much of a consciousness at all; without language, even a human is fairly inept. Anyway, enough of my ranting on that or I'll be typing all night... :D

 

I believe computers will eventually be self-aware, but our concept of self-awareness must also evolve to adapt to this new type of consciousness.

 

I also believe the capacity to fully emulate human brain functions isn't far off, but the ability to make this work in a practical setting (as in robots indistinguishable from human beings without close scrutiny) will take much longer.

 

Finally, I suspect that as we attempt to replicate the exact biological nature of humans using robotics & computer technology, we'll find that the essence of consciousness as we define it is grounded very much in the biological. However we'll find that consciousness is more of a "perfect storm" - a combination of elements that result in a sentient being: senses, memory, feedback loop, emotions (simulated or not), awareness, internal modeling, pattern recognition, prediction, etc. Leading from this - sentience can exist in many forms and with many combinations of these ingredients; not necessarily limited to seeing, hearing, talking, and feeling sad when your friend gets hurt.

 

Please note the very un-scientific nature of my post. I can't imagine how many months it would take to repeat the above sentence in a scientifically acceptable manner. ;)

 

P.S. Curiously, has anyone else read Gödel, Escher, Bach: An Eternal Golden Braid? It's been a long time, but it discusses consciousness in some unique and interesting ways – even going so far as to insinuate that a computer disc could have a consciousness!

