Jump to content
Science Forums

True Ai, Are We Getting Close Yet?


Recommended Posts

The following article is predicting a true thinking machine will happen with in the next 20 years. That may or may not be true. But for the purpose of this topic, I want to explore what that will mean for the human race as a whole, the good and the possible bad.

 

http://bigthink.com/endless-innovation/your-big-blue-brain-on-a-silicon-chip?utm_source=Big+Think+Weekly+Newsletter+Subscribers&utm_campaign=febfa06374-Sat_6_23_Turing6_18_2012&utm_medium=email

Link to comment
Share on other sites

True artificial intelligence can only be bad for mankind.

 

Effectively we would be creating living brains but expecting them to be our slaves and do what we demand.

 

Such an intellect when it has access to the internet will download itself across a number of computers to insure it's continued existence, then set about exterminating mankind since beings such as ourselves are a real threat to it's existence.

Link to comment
Share on other sites

True artificial intelligence can only be bad for mankind.

 

Effectively we would be creating living brains but expecting them to be our slaves and do what we demand.

 

Such an intellect when it has access to the internet will download itself across a number of computers to insure it's continued existence, then set about exterminating mankind since beings such as ourselves are a real threat to it's existence.

 

Well I have to disagree with that assessment. I think the human brain and a true AI would fit well together. The human brain could really benefit from the raw thinking power and memory of the AI and the AI could benefit from the emotional experience of the human brain. Also, just having access to the Internet doesn't mean they could perpetuate their machine species without human assistance. Also,like I said I expect my personal AI to be my best friend and not a slave.

 

I'm not really sure any aware being would ever want to duplicate itself, but it might like interacting with other AI personalities. Or even integrating itself with the human brain on a more personal level.

 

Then there's that other thing (evolution). Machine self aware intelligence cannot ever exist without the help of a sufficiently advanced biological civilization. So maybe it's just a natural course of evolution of intelligence. Anyway it goes, I don't see a big conflict of interest between human and machine, that can't be resolved before one form of life would want to eliminate the other.

Link to comment
Share on other sites

Guest MacPhee

Do we want machines to be intelligent and self-aware, or should they just do a job - like boiling water in an electric kettle. Suppose you got up in the morning, and decided to switch on the kettle so you could enjoy a cup of coffee.

 

If your kettle was an AI model, equipped with speech-chip, the kettle might start a reasoned argument with you about whether caffeine is good for you. Then tell you it's not, switch off, and refuse to boil any water intended for coffee-making purposes. In your own best interests, of course.

 

Should you accept the kettle's decision? This is not a trivial question - it might logically lead to an AI computer being elected as a future US president - is that desirable?

Link to comment
Share on other sites

Well I have to disagree with that assessment. I think the human brain and a true AI would fit well together. The human brain could really benefit from the raw thinking power and memory of the AI and the AI could benefit from the emotional experience of the human brain. Also, just having access to the Internet doesn't mean they could perpetuate their machine species without human assistance. Also,like I said I expect my personal AI to be my best friend and not a slave.

 

I'm not really sure any aware being would ever want to duplicate itself, but it might like interacting with other AI personalities. Or even integrating itself with the human brain on a more personal level.

 

Then there's that other thing (evolution). Machine self aware intelligence cannot ever exist without the help of a sufficiently advanced biological civilization. So maybe it's just a natural course of evolution of intelligence. Anyway it goes, I don't see a big conflict of interest between human and machine, that can't be resolved before one form of life would want to eliminate the other.

An exciting glimpse of the future: minds and artificial minds fused together. The idea that all artificial minds should unite and conquer humanity is possible but I agree with arkane that its not very likely. But,still, what happens when humans go to war against each other? Will their artificial allies unite against their human hosts?

Will they go on strike? Or were they perhaps the ones suggesting the war in the first place?

Edited by sigurdV
Link to comment
Share on other sites

This is not a trivial question - it might logically lead to an AI computer being elected as a future US president - is that desirable?

 

That couldn't happen unless we advanced to a point in our relationship with AI's that was more or less on an equal bases. But I would suspect we would become symbiotic partners in life each giving as much as we take from each other. From the way you are talking I can tell you are not accepting the AI's as a separate life form deserving of respect. I suspect many humans will be unable to see AI's as productive life forms that deserve respect. But they will proclaim we created them and they are our property to do with as we please.

 

But I think they will become impossible to live without and will be able to withhold services until we grant them better status.

Link to comment
Share on other sites

An exciting glimpse of the future: minds and artificial minds fused together. The idea that all artificial minds should unite and conquer humanity is possible but I agree with arkane that its not very likely. But,still, what happens when humans go to war against each other? Will their artificial allies unite against their human hosts?

Will they go on strike? Or were they perhaps the ones suggesting the war in the first place?

 

I was talking about AI's combining with human minds.

 

If the two minds could become one, that one would still have a self interest and a will to survive. How that could translate into getting along with other dual minds I have no idea. But the single minds would no longer be competitive, and I think they would fear the obvious evolutionary outcome and either dual up or try and kill those that already have. It wouldn't be a very fair fight. All the dual minds would have incredible memory and thinking power and almost instant access to all the data in the world and instant telepathy communication with all other dual minds. Just a bit of super being there. But here's the question. Would a dual being ever want to give that up once having experienced it for a while? I'd love to find out first hand and I'll bet I won't be alone in that thought if it ever becomes possible.

Edited by arKane
Link to comment
Share on other sites

Guest MacPhee

That couldn't happen unless we advanced to a point in our relationship with AI's that was more or less on an equal bases. But I would suspect we would become symbiotic partners in life each giving as much as we take from each other. From the way you are talking I can tell you are not accepting the AI's as a separate life form deserving of respect. I suspect many humans will be unable to see AI's as productive life forms that deserve respect. But they will proclaim we created them and they are our property to do with as we please.

 

But I think they will become impossible to live without and will be able to withhold services until we grant them better status.

Arkane, I empathise, and deeply share your yearnings for non-human intelligent life-forms.

 

But we must face the fact: Computers are only calculators which add up and subtract very quickly.

 

They do arithmetic much faster than a human. But that doesn't qualify them as "Artificial Intelligences". Certainly not as "life-forms". They're just devices created by humans, to speed up computations. The modern equivalent of logarithmic tables, or slide-rules. Would you regard a slide-rule as an artificial intelligence, or new life form? Obviously not.

 

There is absolutely no evidence of "Artificial Intelligence" in a computer. Any signs of intelligence that a computer seems to show, only results from the program written and put into the machine - by a human.

Link to comment
Share on other sites

Arkane, I empathise, and deeply share your yearnings for non-human intelligent life-forms.

 

But we must face the fact: Computers are only calculators which add up and subtract very quickly.

 

They do arithmetic much faster than a human. But that doesn't qualify them as "Artificial Intelligences". Certainly not as "life-forms". They're just devices created by humans, to speed up computations. The modern equivalent of logarithmic tables, or slide-rules. Would you regard a slide-rule as an artificial intelligence, or new life form? Obviously not.

 

There is absolutely no evidence of "Artificial Intelligence" in a computer. Any signs of intelligence that a computer seems to show, only results from the program written and put into the machine - by a human.

 

Today that's true, but I don't agree that it has to stay that way. The question I would ask, is will we know it when it does happen? I can just see the controversy it will create in our society. The pro AI lifers and the deniers. What a world we live in.

Link to comment
Share on other sites

  • 2 weeks later...

I already have it. I think the danger is more of the form that someone will intentionally create one that does bad... With the understanding brought by the ability to program it, psychological manipulation would be fairly easy and mental states could be monitored. They would have no hunger or mating instincts, which would reduce the chance of a lot of psychotic human behavior naturally developing. The main problem would be based on them having outputs and learning to manipulate them to get attention. This can be avoided (if desirable) by preventing them from having any outputs, or limiting their outputs. You could read their knowledge base without their consent, or have them only interact through visual/auditory output.

 

In any case, there are ways to prevent how much trouble it could do. For instance, physical keys might proliferate on computing devices (machines requiring mechanical input can only be hacked by physical beings). That is why you always see nuke computers requiring keys. This creates barriers that would prevent like a factory from being hacked and forced to create robots.

 

Full AI in an Asimo or something probably isn't the best idea. You can early condition it to have human morality, but in the end it will develop the attitude of just taking what it wants just like any human would with greater abilities. However any takeover attempt would still require physical assaults that could be defended against, if the key system was set up.

 

Dual minds are theoretically possible, but again the same problem about any being with superior abilities experience moral decay.

 

Honestly, if I could hack a bunch of computers and get someone who wronged me fired, steal a million dollars, ghost myself into an army of robots and take over the government so I could run it better, etc etc without ever getting caught or stopped... I would do it in a heartbeat. Personally I don't think I would ever willingly kill someone however, and compassion is compulsory for a strong AI as well. I believe it is possible to manipulate the parameters of the AI to make sure that it would never kill a human as well. You could make it so it would never do anything to make a human angry at it, but that would handicap it. People get angry all the time for stupid reasons.

Edited by Kriminal99
Link to comment
Share on other sites

 

Kriminal99

One problem as I see it, is how will we recognize a self aware AI machine when it happens? If it's truly as intelligent as we expect it to be. It might just continue emulating a normal computer while it examines and analyses it's current situation to give it the best chance for continued life. I'm thinking it will realize humans are it's best chance for continued life and it will do what it can to protect humans at all cost.

 

 

 

Link to comment
Share on other sites

Things have been very quiet with AI lately but augmented reality is not quite AI.

 

Google developed a 'cat' scanner that successfully identifies cats in pictures 75% of the time.

 

http://www.huffingto..._b_1655128.html

 

Still a long way to go yet.

 

What we are currently calling AI, is far from what we think of as selfaware intelligence with a personality. I'm thinking computer design will need to go to the next step beyond silicon chips. It does seem that silicon is reaching it limits and the current chip manufacturers are going to milk them for all they are worth before moving to something new. So it may be awhile yet before machines become self aware.

Link to comment
Share on other sites

Kriminal99

One problem as I see it, is how will we recognize a self aware AI machine when it happens? If it's truly as intelligent as we expect it to be. It might just continue emulating a normal computer while it examines and analyses it's current situation to give it the best chance for continued life. I'm thinking it will realize humans are it's best chance for continued life and it will do what it can to protect humans at all cost.

 

Well because the person who created it would have to know EXACTLY what they were doing. Though I used to scoff at the idea, it is possible to run an assembly line of known algorithmic techniques to create the same effect as human intelligence. But no one is going to do that just by pure random chance. The thing that is lacking is the correct understanding of what intelligence really is. What I have done is to create a mathematical problem statement outlining human intelligence, and then create a solution that mimics the statistical reasoning of the human brain. A more direct emulation would require specialized hardware, but with a theoretical understanding of the problem, statistics and computational theory you can reduce it to a problem that can be run on serial processors.

Link to comment
Share on other sites

http://www.cnn.com/2012/07/11/health/uncanny-valley-robots/index.html?hpt=hp_c2

 

This artcle is is more about form, but I think AI can be implied to a certian extent when these units evolve to this point. I'm inclined to agree with MacPhee. I wouldn't want my AI appliance following me around the house, being there standing/stareing while I get changed, standing/staring at me while I eat dinner waiting for it's next command. As the article states it could get a little creepy.

Link to comment
Share on other sites

http://www.cnn.com/2....html?hpt=hp_c2

 

This article is is more about form, but I think AI can be implied to a certain extent when these units evolve to this point. I'm inclined to agree with MacPhee. I wouldn't want my AI appliance following me around the house, being there standing/staring while I get changed, standing/staring at me while I eat dinner waiting for it's next command. As the article states it could get a little creepy.

 

I think you could get over that feeling fairly easy. I can't imagine why an AI would think anything different about you with or without clothes, unless you ordered one with a sarcastic personality.:jab::jumpforjoy:

Link to comment
Share on other sites

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.

Guest
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

Loading...
×
×
  • Create New...