
True AI, Are We Getting Close Yet?



The following article predicts that a true thinking machine will happen within the next 20 years. That may or may not be true. But for the purpose of this topic, I want to explore what that would mean for the human race as a whole, the good and the possible bad.

 

http://bigthink.com/...tm_medium=email

 

True artificial intelligence can only be bad for mankind.

 

Effectively, we would be creating living brains but expecting them to be our slaves and do what we demand.

 

Such an intellect, once it has access to the internet, will download itself across a number of computers to ensure its continued existence, then set about exterminating mankind, since beings such as ourselves are a real threat to its existence.

 

I think what you have here is a bifurcation. It could go arKane's way (good) or SextonBlake's way (bad), good/bad meaning whether or not it leads to our demise. It is true that if you instilled a survival instinct per se, and this form of AI life were to logically deduce that we humans were a threat to its existence, then it could very well go as SB describes. Asimov, in his novels on the subject, instilled the "three laws of Robotics" to prevent exactly that, so maybe aK's method might suffice. And anyone who has read any of Gregory Benford's books on the sentient machine intelligences found out in space knows it makes for quite a chilling tale.

 

Ray Kurzweil is one who thinks we may have less than 50 years left. I am not sure either way; too soon to tell. Optical processing and quantum computing have yet to bear enough fruit to support the complicated processes that AI would require. Maybe in the next 15 or so years. Who knows.

 

maddog


I was talking about AI's combining with human minds.

Such a construct is what the label "Cyborg" refers to: a combination of the two.

 

In this case any additional computing/processing power only augments the living processes.

 

In Star Trek's Borg collective, the operational command center was overridden and made subordinate to a collective mind. Then there is the scenario created in the Matrix series, where the "real" world was found to be an illusion designed to satisfy and placate people's psyches.

 

If this were done slowly, to allow for checks and balances, then maybe we all wouldn't end up as "lab rats" in some horrible experiment!

 

maddog



It's been a while since I read an SF book where humans could become multi-brain intelligences by combining with other intelligent beings that had less mobility than humans. Some mergers involved more than two. I wish I could remember the title and author. Anyway, the point is that once the integration takes place you have a completely new personality that has gained so much it wouldn't ever willingly go back to being what it was before. A short time after integration, the new being wouldn't perceive that it ever had separate personalities. When you have and feel thoughts in your head, they are your thoughts no matter where they originate.


I think we need to establish a line in the sand for how far we want computer technology to go in the near future, until we can get a better handle on how far we are willing to let that intelligence go. As of now everyone is tunnel-visioned on faster and smarter technology. One day the computers will be advanced enough to take us completely out of the loop. The question is, can we or will we be able to stop them if we so choose? Usually things like this only get looked at, or have laws enacted for them, after they bite us in the ***.



 

Being cautious or slightly paranoid can sometimes be good, but what if we need that super AI machine intelligence to save our species from great harm? After all, we have some big-time problems creeping up on us: global warming, oil being used up, etc. I admit it can be a little intimidating thinking about a new life form coming into existence, super intelligent, with the entire knowledge of the human race at its disposal. But I just can't think of any reasons it might have to want to destroy the human race.


Guest MacPhee

But I just can't think of any reasons it might have to want to destroy the human race.

 

This is a good point, because why would a box of transistors "want" to do anything?

 

Why do humans "want" to do things? Isn't it because of our fleshy bodies? Our bodies contain organs like stomachs and genitals, and these make us "want" to eat and copulate. Both activities are quite disgusting, especially the copulation. Who can find such activity anything but degrading and an offence against human dignity?

 

As proof of this, don't we observe that in human languages the swear-words, the words used to express the most extreme contempt and disgust, are genitally related?

 

So I think humans are a kind of anomaly: rational minds trapped in beasts' bodies. And it's these vile bodies which really make us "want" to do anything.

 

Now, suppose a transistor-based AI computer could be built. It would have no fleshy body, hence no stomach or genitalia driving it to do anything. So wouldn't it just stay passive and inert, or perhaps switch itself off?



 

Yes, I agree, arKane, there should be no reason they would want to remove us from existence, but as you stated they will have the entire knowledge of the human race, and that would include our knowledge of warfare. My son plays war games with people from all over the world who join in the game; he has a headset and can speak to the others on his team. They fight against players from around the world on the other team. A global, computer-generated war game.

 

Again, I agree we have no reason to believe they will try to destroy us, but the thought of warfare will not be foreign to computers when they come alive. All I'm saying is, as they advance and make themselves better and faster, don't you think it would be prudent to make sure they don't attempt to get rid of these pesky computer viruses known as humans?



An intelligent machine could be of great service to us, and in return I'm sure we could be of great service to it: two symbiotic intelligent life forms benefiting each other. But how do we ensure good machines don't ally with bad humans? Once the genie is out of the bottle we won't be able to put it back. As soon as humans know something can be done, others will be able to duplicate it, given enough time.


An intelligent machine could be of great service to us and in return I'm sure we could be of great service to it. Two symbiotic intelligent life forms benefiting each other.

 

We may be of great service to them for a while, but at some point they will be advanced enough to gather raw materials, transport them, and create whatever they want or need out of them. So our service won't be too important to them for long. You seem to have a lot more knowledge about technology than I do. How long does it take for computers to double their memory and intelligence? Every two years? Every four years? If that's the case it won't take them long. Think about compound interest on your money.
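
To illustrate that compound-growth point, here is a tiny sketch; the two-year doubling period is only an assumed, illustrative figure, not a measured one:

# Toy illustration of compound doubling, like interest on money.
# Assumes a hypothetical doubling period of 2 years -- purely illustrative.

def growth_factor(years, doubling_period_years=2.0):
    """Return how many times capacity has multiplied after `years` years."""
    return 2.0 ** (years / doubling_period_years)

for years in (2, 4, 10, 20):
    print(f"after {years:2d} years: x{growth_factor(years):,.0f}")
# after  2 years: x2
# after  4 years: x4
# after 10 years: x32
# after 20 years: x1,024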

 

But how do we ensure good machines don't ally with bad humans? Once the genie is out of the bottle we won't be able to put it back. As soon as humans know something can be done, others will be able to duplicate it, given enough time.


 

I'm with you on that one.


  • 5 weeks later...

I suspect that if/when we do achieve AI on the level of passing the Turing test, it will be a biological computer, or at best a transistor-biological hybrid. Biological computers are superb at parallel processing, but have a hard time with serial processing. I often wonder how good such a computer would be at the game of Go.



 

I don't know, but I'll bet you could search for Go programs on the Internet that would let you play against a computer. I do know for sure that there are many versions of Go-Moku, sometimes called five-in-a-row, which is a much quicker game played on a Go board. The reason Go is not more popular than it is in the U.S. is that the average game runs over an hour or two.
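
For anyone curious, the "five in a row" rule is simple enough to check mechanically. A toy sketch, with the board representation invented purely for illustration, not taken from any particular program:

# Toy check for the Go-Moku "five in a row" rule mentioned above.
# The board is a dict mapping (row, col) -> 'B' or 'W'.

def five_in_a_row(board, row, col):
    """Return True if the stone at (row, col) is part of a line of five or more."""
    stone = board.get((row, col))
    if stone is None:
        return False
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):   # the four line directions
        count = 1
        for sign in (1, -1):                            # walk both ways along the line
            r, c = row + sign * dr, col + sign * dc
            while board.get((r, c)) == stone:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= 5:
            return True
    return False

# Example: five black stones across row 0.
board = {(0, c): 'B' for c in range(5)}
print(five_in_a_row(board, 0, 2))   # True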


Coming soon....

 

Over the next year and a half, we will create SpiNNaker by connecting more than a million ARM processors, the same kind of basic, energy-efficient chips that ship in most of today’s mobile phones. When it’s finished, SpiNNaker will be able to simulate the behavior of 1 billion neurons.
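
To put those numbers in rough perspective, a quick back-of-the-envelope division; the ~86 billion figure for the human brain is a commonly cited estimate, not something from the article:

# Back-of-the-envelope using the figures quoted above (approximate).
cores = 1_000_000            # "more than a million ARM processors"
neurons = 1_000_000_000      # "simulate the behavior of 1 billion neurons"
human_brain_neurons = 86_000_000_000   # commonly cited estimate, roughly 86 billion

print("neurons simulated per core:", neurons // cores)                      # 1000
print(f"fraction of a human brain: {neurons / human_brain_neurons:.1%}")    # ~1.2%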

 

Artificial Intelligence is no match for natural stupidity, :phones:

Buffy


Well, I have to disagree with that assessment. I think the human brain and a true AI would fit well together. The human brain could really benefit from the raw thinking power and memory of the AI, and the AI could benefit from the emotional experience of the human brain. Also, just having access to the Internet doesn't mean they could perpetuate their machine species without human assistance. And like I said, I expect my personal AI to be my best friend and not a slave.

Actually, there is some merit in what SextonBlake says. Any culture that comes into "competition" with another will likely try to "eliminate" it. I put the word competition in quotes because it means different things to different people; in this case we mean becoming the dominant life form in the environment. There are a lot of computer scientists in the world who share this concern. However, if the competition could be such that both sides benefit, as in symbiosis, which may be what you are driving at, it could be a healthy relationship. I don't know whether, before creating such a thing, we would know which is which.

 

I'm not really sure any aware being would ever want to duplicate itself, but it might like interacting with other AI personalities. Or even integrating itself with the human brain on a more personal level.

Well, this is considered a requirement for being a life form: propagation of the species. It goes right along with the next topic below.

 

Then there's that other thing (evolution). Self-aware machine intelligence cannot ever exist without the help of a sufficiently advanced biological civilization. So maybe it's just a natural course of the evolution of intelligence. Anyway it goes, I don't see a big conflict of interest between human and machine that can't be resolved before one form of life would want to eliminate the other.

This may be true. In this case, though, one may eventually become dominant. We in the future might be "serving" them.

 

maddog


Arkane, I empathise, and deeply share your yearnings for non-human intelligent life-forms.

But we must face the fact: Computers are only calculators which add up and subtract very quickly.

They do arithmetic much faster than a human. But that doesn't qualify them as "Artificial Intelligences". Certainly not as "life-forms". They're just devices created by humans, to speed up computations. The modern equivalent of logarithmic tables, or slide-rules. Would you regard a slide-rule as an artificial intelligence, or new life form? Obviously not.

Yes, this is what the article says about the von Neumann architecture. It is where we have gotten in about 60 years, and it is nearly reaching its limits. Simulating the brain is possibly a big jump (progress only). I think quantum computing may play a part in the solution. Currently, research in this area is only a couple of qubits wide (two quantum bits). Even so, this type of machine has the potential to break codes that are otherwise difficult to break, or to create virtually unbreakable ones. Another idea that has yet to take hold is optical computing, where photons are used as the carriers instead of the electrons used today. Mix all this together into a soup over the next sixty years, and how can you not synthesize an AI type of life form?
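
As a rough illustration of what "two quantum bits" means, here is a toy state-vector sketch in plain NumPy; it has nothing to do with any actual quantum hardware, it just shows two qubits being put into an entangled (Bell) state:

import numpy as np

# Toy state-vector simulation of two qubits: prepare a Bell state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>
state = np.kron(H, I) @ state                   # Hadamard on the first qubit
state = CNOT @ state                            # entangle the pair

print(np.round(state, 3))   # amplitudes ~ [0.707, 0, 0, 0.707], i.e. (|00> + |11>)/sqrt(2)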

 

There is absolutely no evidence of "Artificial Intelligence" in a computer. Any signs of intelligence that a computer seems to show, only results from the program written and put into the machine - by a human.

You are speaking in terms of today's science; synthesis has not yet begun. However, machines are already showing the capacity for learned behavior. This has been found multiple times. The best example I know of was where a researcher taught a computer Euclid's geometry, all of his principles and axioms. What the machine was able to do was produce, in a very novel way, a theorem that Euclid had missed; it was never shown that method. So just because a human teaches a machine something, the machine may still surprise you.
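
That Euclid story is from memory, but the general idea, a program deriving statements it was never explicitly given from the axioms it was taught, can be sketched with a toy forward-chaining rule engine. Everything here (the facts and the rule) is invented purely for illustration:

# Toy forward chaining: derive new facts from given ones by applying a rule
# until nothing new appears (a fixed point).

facts = {("parallel", "a", "b"), ("parallel", "b", "c")}

def transitivity(facts):
    """parallel(x, y) and parallel(y, z) imply parallel(x, z)."""
    new = set()
    for (_, x, y1) in facts:
        for (_, y2, z) in facts:
            if y1 == y2 and x != z:
                new.add(("parallel", x, z))
    return new - facts

while True:
    derived = transitivity(facts)
    if not derived:
        break
    facts |= derived

print(sorted(facts))
# ("parallel", "a", "c") appears even though it was never stated explicitly.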

 

maddog


Just saw this article and thought it might add to this topic.

 

 

 

Have Yale Engineers Created a Self-Aware Robot?

Orion Jones on August 26, 2012, 4:30 PM

What's the Latest Development?

 

The Yale Social Robotics Lab has created the first robot that learns about itself by experiencing its own physical characteristics in the context of the real world around it. Called Nico, the robot "is able to use a mirror as an instrument for spatial reasoning, allowing it to accurately determine where objects are located in space based on their reflections, rather than naively believing them to exist behind the mirror." By combining its perceptual and motor capabilities, Nico can learn where its body parts are and how they interact with the surrounding environment.

 

What's the Big Idea?

 

While Nico successfully uses a mirror to learn about itself and its environment through self-observation, no robot has yet passed the classic mirror test of self-awareness. In this test, experimenters make a change in an animal's physical appearance, and if the animal recognizes that change in the mirror (typically by touching it with its hand), scientists generally consider it to be self-aware. Nico is partially a result of a $10 million grant given by the National Science Foundation to create "socially assistive" robots that can function as companions to children with special needs.

 

 

http://bigthink.com/ideafeed/have-yale-engineers-created-a-self-aware-robot?utm_source=Big+Think+Weekly+Newsletter+Subscribers&utm_campaign=76b4f9846b-Bill_Nye8_29_2012&utm_medium=email
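
The spatial-reasoning trick described in the article boils down to simple geometry: an object that appears "behind" the mirror is really at the mirror reflection of that apparent point. A minimal sketch, with the mirror plane and coordinates invented purely for illustration:

import numpy as np

def reflect_across_plane(point, plane_point, plane_normal):
    """Reflect a 3D point across the plane through plane_point with the given normal."""
    p = np.asarray(point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    distance = np.dot(p - plane_point, n)      # signed distance from the plane
    return p - 2.0 * distance * n

# Hypothetical setup: the mirror is the plane x = 2; the object *appears* at x = 3.
apparent = [3.0, 1.0, 0.5]
actual = reflect_across_plane(apparent, plane_point=[2.0, 0.0, 0.0],
                              plane_normal=[1.0, 0.0, 0.0])
print(actual)   # [1.  1.  0.5] -- the object is actually in front of the mirror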

 

 

 



Oh yes, I forgot one technology that will probably prove useful for the next big thing: molecular storage. Reading one of the articles in this thread, the scale of future projects (IBM's, I think) will be measured in exabytes (10^18 bytes), not gigabytes or terabytes. Encoding that much information will need a very large space unless we can utilize new technologies. One method proposed is DNA (or something like it), encoding the bits in electron states that are part of the valence bonding in the molecule. At the moment this is only a research project at one of the universities. Use Google to look up "molecular storage".
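
For a quick sense of scale on that exabyte figure: the two-bits-per-nucleotide density below is a rough theoretical ceiling (four bases can encode at most two bits each), used here only for illustration:

# Rough sense of scale -- all figures approximate, for illustration only.
exabyte_bits = 8 * 10**18            # 1 EB = 10^18 bytes = 8 * 10^18 bits
bits_per_nucleotide = 2              # 4 bases -> at most 2 bits per nucleotide
nucleotides_needed = exabyte_bits / bits_per_nucleotide

print(f"nucleotides to hold 1 EB: {nucleotides_needed:.0e}")                          # 4e+18
print(f"human genomes' worth (~3.2e9 bases each): {nucleotides_needed / 3.2e9:.1e}")  # ~1.2e+09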

 

I found an example: IBM - Memory by atom

 

maddog

