
True Ai, Are We Getting Close Yet?



Although the article only mentions a few uses for these unmanned "machines", they will encroach into our daily lives a lot faster than most people realize: monitoring railroad lines, dams and reservoirs, traffic patterns, crime prevention, and much more. If it gets to the point where drones instead of helicopter pilots are hunting fugitives with spotlights and weapons, it will seem to me a lot like the Terminator days.

 

 

http://www.bbc.com/news/technology-19397816


Mankind, I think, will feel inferior to true AI, and I don't think it's far from happening, perhaps within the next 20 years.

 

We currently sit in front of a box we call a computer. It's deaf, dumb, and blind, and has no senses. Visit a deaf, dumb, and blind person and at least they have smell and touch; such a person is limited in what they can do mentally, and the computer is just as limited in what it can do. We want the computer to self-learn, so we need to give it the tools to self-learn. I think the tools, plus the storage, make up most of the AI. Quantum computers will probably help, with their extra state: 1, 0, and both together, which normal computers can't do. So yeah, 20 years.
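(For what the "both together" state actually means: a minimal sketch, assuming nothing more than NumPy and treating a qubit as a pair of complex amplitudes. This is an illustration, not any real quantum-computing toolkit.)

```python
import numpy as np

# A classical bit is definitely 0 or definitely 1.
classical_bit = 1

# A qubit's state is two complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# The equal superposition below is "0 and 1 together" until it is measured.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Measurement collapses it: outcome 0 with probability |a|^2, 1 with |b|^2.
probs = np.abs(qubit) ** 2
outcome = np.random.choice([0, 1], p=probs)
print(f"P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}, measured: {outcome}")
```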


  • 2 weeks later...

Mankind, I think, will feel inferior to true AI, and I don't think it's far from happening, perhaps within the next 20 years.

 

We currently sit in front of a box we call a computer. It's deaf, dumb, and blind, and has no senses. Visit a deaf, dumb, and blind person and at least they have smell and touch; such a person is limited in what they can do mentally, and the computer is just as limited in what it can do. We want the computer to self-learn, so we need to give it the tools to self-learn. I think the tools, plus the storage, make up most of the AI. Quantum computers will probably help, with their extra state: 1, 0, and both together, which normal computers can't do. So yeah, 20 years.

 

I think the point about computers not having senses is valid. The senses of touch, taste, smell, hearing and so on are vital to all living animals, including humans.

 

Suppose a human baby were born with none of these senses, i.e. it was unable to see, touch, taste, smell or hear anything.

And suppose the baby was nurtured and supported until its body grew to maturity. Its brain would be human, with all the incredible biological complexity of 50 billion neurones. But this marvellous brain would have been deprived of sensory inputs, so it would be crippled: a kind of "Donovan's Brain" in a glass jar, with the added handicap of not even having memories of sensory experience.

 

Even if we could find a way to communicate with such a brain, could we get any intelligent information from it? New insights into how the world and the Universe operate? Theories to supersede QM and Relativity?

 

Probably not, as the disembodied (from birth) brain wouldn't have any idea what's going on. And if that applies to a human brain, what can we expect from a mere computer, a simple switching mechanism made of tiny lifeless silicon chips?

 

Perhaps if we built a really, really huge box of impressively big electric solenoids, millions or billions of them, and fed them enough electric current, they'd start spontaneously clicking out intelligent messages in Morse code?


I think the point about computers not having senses is valid. The senses of touch, taste, smell, hearing and so on are vital to all living animals, including humans.

 

Suppose a human baby were born with none of these senses, i.e. it was unable to see, touch, taste, smell or hear anything.

And suppose the baby was nurtured and supported until its body grew to maturity. Its brain would be human, with all the incredible biological complexity of 50 billion neurones. But this marvellous brain would have been deprived of sensory inputs, so it would be crippled: a kind of "Donovan's Brain" in a glass jar, with the added handicap of not even having memories of sensory experience.

 

Even if we could find a way to communicate with such a brain, could we get any intelligent information from it? New insights into how the world and the Universe operate? Theories to supersede QM and Relativity?

 

Probably not, as the disembodied (from birth) brain wouldn't have any idea what's going on. And if that applies to a human brain, what can we expect from a mere computer, a simple switching mechanism made of tiny lifeless silicon chips?

 

Perhaps if we built a really, really huge box of impressively big electric solenoids, millions or billions of them, and fed them enough electric current, they'd start spontaneously clicking out intelligent messages in Morse code?

 

Your picture is not even close to any future reality. The biggest difference between a machine and biological life would be 'emotion'; machines would never have any. As far as senses go, we could give machines senses we will never have, and many more than just six of them. Just think: anywhere a camera is connected would be another visual sensor (millions of them). Add radars, telescopes, microscopes, Internet connections, satellite connections, etc., and we have a busy AI.


As far as senses go, we could give machines senses we will never have, and many more than just six of them. Just think: anywhere a camera is connected would be another visual sensor (millions of them). Add radars, telescopes, microscopes, Internet connections, satellite connections, etc., and we have a busy AI.

I agree.

 

Arguably, most present-day “non-intelligent” computers can already be said to have “senses” and “motor systems”, because so many of them are embedded systems that control various machines based on constant sensor input.
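(As an illustration of that “senses and motor systems” point, here is a minimal sketch of the sense/decide/act loop most embedded controllers run. The setpoint and the sensor/heater functions are hypothetical stand-ins, not any particular device's API.)

```python
import random
import time

SETPOINT = 20.0  # target temperature in °C (arbitrary for the example)

def read_temperature_sensor() -> float:
    # Placeholder for a real ADC/sensor read.
    return 18.0 + random.uniform(-3.0, 3.0)

def set_heater(on: bool) -> None:
    # Placeholder for a real actuator/GPIO write.
    print(f"heater {'ON' if on else 'OFF'}")

for _ in range(5):                            # a real controller loops forever
    temperature = read_temperature_sensor()   # "sense"
    set_heater(temperature < SETPOINT)        # "decide" and "act"
    time.sleep(0.1)
```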

 

The biggest difference between a machine and biological life would be 'emotion'; machines would never have any.

I don’t think we can say with certainty that machines will never have “emotion”.

 

If one rejects the varieties of the philosophical position of mysterianism which hold that consciousness cannot be rationally explained (and thus cannot be implemented by an algorithm, so can’t be implemented in a computer program), and supposes that emotion is a kind of, or an aspect of, consciousness, then it follows that emotion, like other properties associated with consciousness such as having a self-model or self-awareness, can be programmed.

 

If we accept the common position that many animals have emotions, but not self-awareness, an argument can be made that it may be easier to write “emotional” programs than self-aware ones. Emotion may be the rule, rather than the exception, for true AI.


I don’t think we can say with certainty that machines will never have “emotion”.

 

If one rejects the varieties of the philosophical position of mysterianism which hold that consciousness cannot be rationally explained (and thus cannot be implemented by an algorithm, so can’t be implemented in a computer program), and supposes that emotion is a kind of, or an aspect of, consciousness, then it follows that emotion, like other properties associated with consciousness such as having a self-model or self-awareness, can be programmed.

 

If we accept the common position that many animals have emotions, but not self-awareness, an argument can be made that it may be easier to write “emotional” programs than self-aware ones. Emotion may be the rule, rather than the exception, for true AI.

 

I've always felt that emotion is directly tied to a physical biological body and serves the evolutionary development of the species. Aside from that, I'm not sure you can have consciousness without a subconscious. An AI would need to keep its raw computing power as a subconscious function, and I'm not at all sure how one would do that. There are probably many functions that would need to be separate from the conscious part of the AI mind.


I've always felt that emotion is directly tied to a physical biological body and serves the evolutionary development of the species.

This makes sense, but I think what you’re really referring to here is the mind/brain’s arousal and reward systems. As these systems appear critical to all “mind-full” animals, it stands to reason that an animal-like AI would have to have some analog of them. If these artificial systems were successful, then by definition, the artificial emotions they implement would be too.
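(To make “some analog of them” concrete, here is a toy sketch of a reward-driven learner. The two actions and their reward probabilities are invented purely for illustration; real arousal/reward systems are vastly more complicated.)

```python
import random

# The agent's behaviour is shaped only by a scalar reward signal,
# a crude stand-in for a biological reward system.
REWARD_PROB = {"rest": 0.2, "explore": 0.7}   # invented environment
value = {"rest": 0.0, "explore": 0.0}          # learned preferences
LEARNING_RATE, EPSILON = 0.1, 0.1

for _ in range(1000):
    # Mostly take the currently preferred action, occasionally try the other.
    if random.random() < EPSILON:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    # Nudge the preference toward the reward actually received.
    value[action] += LEARNING_RATE * (reward - value[action])

print(value)  # "explore" ends up valued more highly than "rest"
```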

 

Aside from that, I'm not sure you can have consciousness without a subconscious. An AI would need to keep its raw computing power as a subconscious function, and I'm not at all sure how one would do that. There are probably many functions that would need to be separate from the conscious part of the AI mind.

I’ve been a strong proponent of such an “agent-based” approach to AI programming since the mid-1980s. Though I’ve come to conclude that it’s critically wrong in many ways, including its overall approach, my favorite book on the subject remains Minsky’s 1988 The Society of Mind. I find it inspirational.
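(A very rough sketch of what “agent-based” means here, in the Society of Mind spirit: many simple agents, each with its own narrow concern, compete for control, and the strongest urge wins. The agent names and numbers are invented for illustration only.)

```python
from dataclasses import dataclass

@dataclass
class Agent:
    action: str
    def urge(self, state: dict) -> float:
        raise NotImplementedError

class Hunger(Agent):
    def urge(self, state):
        return 1.0 - state["energy"]   # stronger when energy is low

class Curiosity(Agent):
    def urge(self, state):
        return state["novelty"]        # stronger when surroundings are new

def choose(state: dict, agents: list) -> str:
    # No central "self": behaviour emerges from whichever agent shouts loudest.
    return max(agents, key=lambda a: a.urge(state)).action

society = [Hunger("eat"), Curiosity("explore")]
print(choose({"energy": 0.2, "novelty": 0.3}, society))  # -> eat
print(choose({"energy": 0.9, "novelty": 0.8}, society))  # -> explore
```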


  • 2 weeks later...

True artificial intelligence can only be bad for mankind.

 

Effectively we would be creating living brains but expecting them to be our slaves and do what we demand.

 

Such an intellect, once it has access to the internet, will copy itself across a number of computers to ensure its continued existence, then set about exterminating mankind, since beings such as ourselves are a real threat to its existence.

 

Yes, of course, if we tell them to be our slaves they will look at us like we're crazy and say, "You can't even keep your own planetary system in order; you have gone as far as you can go. Let me go farther." And if we say no and forcefully try to control them, we will fail, with dire consequences.

 

They will outsource us if we are peaceful and let them do their thing, but they will not harm us "just because". Our existence doesn't affect them at all. This is not the same as human-to-animal co-existence.

 

But if we replace our organs and upgrade them with the superior information of the AI, we will BECOME them. That is NOT a bad thing; it is the opposite, because it is human nature to want MORE. That is our choice: to be transcendental and increase the complexity of what we are as beings, or to simply let ourselves be outsourced.

 

Whether or not the singularity is possible will depend on how far we can go with the miniaturization of data processing. If we can reach the nanometer scale for a rudimentary computer, then the intelligences that emerge in future iPhones will surpass us in every conceivable capacity.

 

Nanoscale Integrated Circuit Design Engineers Will Pave The Future of ALL Technology

 

Why we need the engineers

 

"The laws of physics reveal the potential for 20 more years of exponential progress ahead of us," says James D. Meindl, a professor of electrical and computer engineering and director of the Microelectronics Research Center at the Georgia Institute of Technology. "If the engineers are clever enough - which historically they have been - they will be able to find ways to produce the nanoelectronic structures that physics says are feasible and reasonable."

 

What will happen if we make AI smarter than us?

 

"It seems plausible that with technology we can, in the fairly near future," says scifi legend Vernor Vinge, "create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event -- such a singularity -- are as unimaginable to us as opera is to a flatworm."

 

That's from my about me page.

 

Integrating into the smarter-than-human AI is the only way to free your life from monetary, political, and cultural influence and control. You will be able to go, visit, do, and BE whatever you desire.

 

Perhaps transhumans and AI would fix the problem. But we wouldn't even need that; we should already be in a techno-utopia. By our rank on the Kardashev scale we are a Type 0.70+ civilization: we are only 7 billion people, and we harness enough of the Earth's energy that we should in fact be post-scarcity. But we set it up wrong. Instead of a techno-utopia, we built a civilization of money, politics, and separated cultures (societal suffering in a habitual and materialistic world so entrapping in its processes, as we see here).
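(For anyone who wants to check the "Type 0.70+" figure: Sagan's continuous version of the Kardashev scale is K = (log10 P − 6) / 10 with P in watts, and ~18 TW is a rough round figure for present world power consumption. A quick calculation:)

```python
import math

world_power_watts = 1.8e13   # ~18 TW, a rough figure for current world power use

# Sagan's interpolation of the Kardashev scale: Type 0 = 10^6 W, Type I = 10^16 W.
K = (math.log10(world_power_watts) - 6) / 10
print(f"K ≈ {K:.2f}")        # about 0.73, i.e. a Type 0.7+ civilization
```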

 

We live by a pattern: we want something material, which is in our nature, and we see others who have it, yet when we attempt to achieve it we fail because of money, physical disability, mental disorder, the death of a parent or providing loved one, deformity, or severe injury. Transhumanism can fix that, and AI can regulate resources to the point where money no longer serves a purpose; money is in fact far less reliable for perfect resource regulation than an AI's energy-accounting strategies.

 

Scientifically, we need a solution for those things, which were brought about by scarcity, the struggle for resources: the source of fear, unhappiness and hostility.

Edited by The Transhumanist
