Science Forums

Artificial intelligence



I just purchased my first computer about a year ago, and believe me, "at my age" it's been a real thrash to learn to use one of these things. I must admit that this is something I'd been putting off because I wasn't sure I was up to the task. You know that we older people just don't learn as quickly as you younger folks. Anyway, I took the plunge and I'm really starting to enjoy the usefulness of this technology. Not long after I purchased my XP machine, I read an article by a computer scientist in Scientific American magazine about artificial intelligence. He really got my attention with his assessment of what he called the future threat that artificial intelligence might bring with its final development. It's been a while since I read the article and I can't remember the man's name, but he was concerned that if artificial intelligence were to become a reality, the consequences could be tragic for humanity. Does anyone at this forum have any current information on the progress of this technology, and what do you think? Are we just getting a little paranoid with this kind of conclusion?


That reminds me of the movie AI. Personally, I think whatever happened in the movie will happen in our world. Sooner or later, we will be scared of the strength of artificial intelligence, and of the possibility of such machines taking over our world. Then, like in the movie, the violent and evil nature of humans will be revealed.


Are we just getting a little paranoid with this kind of conclusion?

It's just paranoia. It took nature (or whatever agent one wants to credit) about 10 billion years to produce life on Earth, and then another 3.697 billion years (give or take a few years) to produce sentient beings. I doubt we will be able to mimic something with our intelligence in the foreseeable future.

 

Or if they do, let's hope they put in a mix of intelligence, stupidity and arrogance, to make sure the machine is able to learn from its mistakes... :)


Hey Tinny; I think Tormod was pointing out that if an artificial intellect were forthcoming, we humans could hope for it to be balanced by a little self-reflection. I think one of our finest traits as humans is our ability to laugh at ourselves. Let's just hope that any success with this technology strikes such a balance. Just a note here: I'm sure Tormod has an answer he would like to submit as well, and I'll ask to be excused for jumping in here to voice my agreement with his last statement. Have a good day, Tinny.


Yes, infamous' assumption is correct. I was not being sarcastic. I do think that a certain level of artificial intelligence is possible to attain, but it does depend on how far we are willing to stretch the term "intelligence".

 

Self-awareness is problematic as a concept. How do we know whether something is self-aware or not? And if we can't tell, how can we be sure that the choices made by any machine will take into account the effect those choices will have on human beings?

 

Choice by calculation is very different from choice by sentiment.

 

I do not claim to have all the answers in this area, though. I think it's a very interesting topic.


I still don't understand what was meant by that statement. I need to improve my verbal skills. Any suggestions?

 

What is choice by sentiment? How does it make choices? It somehow implies free will. Something programmed cannot make choices, especially when it is made of completely deterministic matter.


Yes Tormod; the subject of self-awareness is the key here. To establish some groundwork for this debate, we must first define "self-awareness". Is our own self-awareness just an electro-chemical process, or does it rise to some higher level that we can describe in scientific terminology? And if we can't define it in scientific jargon, how then can we define it? Any thoughts, anyone?



Ah, seems like a job for me, huh? *rubbing my hands* But I think it is off topic, unless you really want mine.

What are we supposed to get at here? It's the limits of AI, right? Whether it'll threaten us or not?


I completely believe that it is possible to achieve artificial intelligence. I will suppose here that we're talking about intelligence that is similar to, and on the same level as, human intelligence. This should very well be possible. The question is, when we reach a point where computer intelligence surpasses ours, what will happen? Will we accept that new intelligence as a new intelligent species? Will we use the technology to enhance ourselves and our brains? Will we be able to upload ourselves?


Hello Stargazer; very good points you make there. But I'll add one more for the road: how will we compete? You know, the law-of-the-jungle thing; the strongest will survive. Will we maintain control and keep this new intelligence subject to our domination? You realize that all life resists such control. We can only expect this new form to rebel; if we don't, we are being naive.



That's something I see as very possible too. I think that we need to accept intelligent machines on some level, or, if they surpass us, we will need to maintain control. Another solution could be what I mentioned: we use the same technology to enhance ourselves, so that we can compete with the machines or robots. If you can't beat them, join them. But then it's possible that intelligent robots could improve vastly with each generation; that is, the smarter they get, the better they can improve themselves... at an accelerating speed? Will we be able to compete with that even if we use implants and the like?
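
A minimal sketch of that accelerating-improvement idea, purely as an illustration on my part (the function, the rate and the numbers are invented for the example, not a claim about how real machines would improve):

    # Toy model of recursive self-improvement.
    # Illustrative assumption: each generation's gain is proportional to its
    # current capability, which gives exponential growth.
    def capability_over_generations(initial, rate, generations):
        """Return the capability level after each generation."""
        levels = [initial]
        for _ in range(generations):
            levels.append(levels[-1] * (1 + rate))
        return levels

    # Example: start at "human level" (1.0) and improve by 10% per generation.
    print(capability_over_generations(1.0, 0.10, 20)[-1])  # roughly 6.7 times the start

On that toy assumption the machines pull away exponentially while unaided human capability stays roughly flat, which is exactly the worry about whether implants would be enough to keep up.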


I will suppose here that we're talking about intelligence that is similar to, and on the same level as, human intelligence.

I see a big problem here. How would we define "similar to human intelligence"?

 

In fact (and this is a topic which has been discussed from several angles here before) - what is "intelligence"?

 

If we look it up in a dictionary, it might say things like "the measure of ability and skill" or "information about others". But we are talking about it here as the sum of the consciousness of a human being - i.e., what makes a human being think "I am".

 

That is what I mean by self-awareness. If the machine does not recognize that it exists, it is not self-aware, and it will not be anywhere near "human intelligence".

 

The question is, when we reach a point where computer intelligence surpasses ours, what will happen? Will we accept that new intelligence as a new intelligent species? Will we use the technology to enhance ourselves and our brains? Will we be able to upload ourselves?

Why would we need to upload ourselves into a mind that has already surpassed us (or, to twist it around, why would that mind allow an inferior being to be uploaded into it)? Seems to defeat the point.

 

What you're talking about here seems to be some sort of super-advanced cyber storage, like in Greg Bear's "Eon" and "Eternity" novels, some place where we can upload people's minds so they live on even after death. That would not imply, however, that the machine in itself needs to be intelligent, only that we would be able to build a storage facility where we recreate the sentiments of a human being to such an extent that a self-aware unit actually identifies itself with the actual human being that is dead (or copied), and can go on to lead some sort of existence within this machine.

 

In Peter F. Hamilton's Night's Dawn trilogy the Edenists "upload" themselves into machines and fuse all their common intellects into a giant combine of knowledge. This is a fascinating concept, but in fact it is no more than a large dictionary with the ability to reflect.

 

Don't get me wrong - AI is an incredible idea and I am not saying I am right here. I am trying to question the very basic assumptions about intelligence. I was once asked in a discussion here to define intelligence. I have given it much thought and I am nowhere near being able to say what I think it is, except that I believe the term "human intelligence" is not quantifiable, and thus it is not possible to build a machine that is similar to it, nor one that surpasses it.

 

It might be able to think, but not to be human.


What is choice by sentiment? How does it make choices? It somehow implies free will.
In order to avoid seeing this thread veer off track, let's set the free-will issue aside, if that is okay with everyone. For the sake of argument, we are discussing whether an artificial mind can surpass a human mind. Being in the same universe, it would necessarily have to follow the same natural forces that we do.

 

Okay. Choice by sentiment means a choice is made by weighing the pros and cons and also giving the problem at hand some sort of "gut feeling" based on experience, skill, education, history, and cultural background. Or, put simply: an educated choice. This choice must involve some sort of empathy (the ability to imagine how others will feel about the choice).

 

A calculated choice is here defined as a choice made simply by measuring the pros against the cons and choosing the option that gives the most yield as seen from the point of view of the choosing mind. A calculated choice thus would not be based on empathy.

 

How does it make choices? Beats me! I frankly don't know.
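
Still, the distinction itself can be made concrete with a toy sketch of my own (the scoring numbers and the "empathy" term are invented for the example, not a claim about how a real mind or machine actually weighs anything):

    # Two toy choosers: one maximizes raw yield, the other also weighs
    # the effect of the choice on others (a crude stand-in for empathy).
    def calculated_choice(options):
        return max(options, key=lambda o: o["yield"])

    def sentiment_choice(options, empathy_weight=1.0):
        return max(options, key=lambda o: o["yield"] + empathy_weight * o["effect_on_others"])

    options = [
        {"name": "ruthless plan", "yield": 10, "effect_on_others": -8},
        {"name": "fair plan",     "yield": 6,  "effect_on_others": 3},
    ]

    print(calculated_choice(options)["name"])  # -> ruthless plan
    print(sentiment_choice(options)["name"])   # -> fair plan

The calculated chooser picks the highest yield regardless of the cost to others; the sentiment chooser can arrive at a different answer once the effect on others is weighed in.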


I think it is off topic

What is off topic? Anyway, since infamous started this thread I'll let him decide that for now.

 

What are we supposed to get at here? It's the limits of AI, right? Whether it'll threaten us or not?

Original post: "if artificial intelligence were to become a reality, the consequences could be tragic for humanity"

 

In order to discuss that, we need to establish what intelligence is, and whether self-awareness is a requirement for anything to make sentiment choices.

 

Infamous - it's your thread, correct me if I'm wrong here.


I see a big problem here. How would we define "similar to human intelligence"?

 

In fact (and this is a topic which has been discussed from several angles here before) - what is "intelligence"?

 

If we look it up in a dictionary, it might say things like "the measure of ability and skill" or "information about others". But we are talking about it here as the sum of the consciousness of a human being - i.e., what makes a human being think "I am".

 

That is what I mean by self-awareness. If the machine does not recognize that it exists, it is not self-aware, and it will not be anywhere near "human intelligence".

I'm not sure, but I suppose it wouldn't have to work in exactly the same way as a human brain, only work in some way such that the result is something we could call "human."

 

But you're right, it is difficult to define what human intelligence really is...

 

Why would we need to upload ourselves into a mind that has already surpassed us (or, to twist it around, why would that mind allow an inferior being to be uploaded into it)? Seems to defeat the point.

No, I'm talking about uploading to a computer that is specially prepared to be loaded with an entire mind. Maybe technology would then permit us to become androids with whatever looks we choose for ourselves. Or maybe we could upload copies of ourselves into any robot, including space probes, for example.

 

What you're talking about here seems to be some sort of super-advanced cyber storage, like in Greg Bear's "Eon" and "Eternity" novels, some place where we can upload people's minds so they live on even after death. That would not imply, however, that the machine in itself needs to be intelligent, only that we would be able to build a storage facility where we recreate the sentiments of a human being to such an extent that a self-aware unit actually identifies itself with the actual human being that is dead (or copied), and can go on to lead some sort of existence within this machine.

 

In Peter F. Hamilton's Night's Dawn trilogy the Edenists "upload" themselves into machines and fuse all their common intellects into a giant combine of knowledge. This is a fascinating concept, but in fact it is no more than a large dictionary with the ability to reflect.

Interesting ideas. I haven't read any of those books, but they sound intriguing. And yes, that is somewhat what I mean: to upload the mind to a machine. It could then be simply a backup, or a new brain altogether.

 

A few times I've been thinking about a programming language for the brain. It seems impossible...

 

Don't get me wrong - AI is an incredible idea and I am not saying I am right here. I am trying to question the very basic assumptions about intelligence. I was once asked in a discussion here to define intelligence. I have given it much thought and I am nowhere near being able to say what I think it is, except that I believe the term "human intelligence" is not quantifiable, and thus it is not possible to build a machine that is similar to it, nor one that surpasses it.

 

It might be able to think, but not to be human.

I've been thinking a lot about this. Isn't it possible that intelligence and self-awareness are really man-made categories for properties that we observe some structures to have? At least from our perspective...

