
Is Hawking Right About AI?



These Cicada-like mini-drones in the story below are a little unnerving when you think of all the nefarious ways they could be used, but they're benign, I think, compared to what robotic technology will be 25 years from now. The rapid advances are making me think Stephen Hawking is not far off the mark with his concern about AI posing a real threat to humanity over the next century.

 

http://www.sciencerecorder.com/news/2015/05/18/locust-like-mini-drones-wide-range-uses-military-says/

 

I'm really curious what others think of Stephen Hawking's worries about the perils of AI.


The two best articles I've ever read about it are these two (the way the blogger writes is cool, too):

Part 1, talking about AI in general, is absolutely amazing; I never thought of it like that:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

 

Part 2 presents the scary parts:

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

 

So yes, it is scary, like Hawking says (and Gates says so too, although he admits that if he had not ended up with Microsoft he most likely would be working on AI development), but is that something that would stop us? Global warming is also scary... So better to take it step by step, in the open, than to leave it to clandestine research.


I wonder if we should be concerned. The human race is constantly evolving, and this may just be one of the evolutionary steps. After all, humans are the ones moving in this direction. The earth's climate is increasingly threatening, with the temperature continuing to rise. Perhaps our salvation on the planet may come about as a result of artificial intelligence. As a side note, should we really call it "artificial" if it becomes the dominant form? Just wondering. We may be worrying about something in the same manner our earlier ancestors worried about progress. As it will be a slow evolution, the human race will adapt to it as it occurs, don't you think? Just considering the past twenty years, have we not adapted to the digital advancements with little or no negative results?


Is AI even capable of the kinds of things science fiction writers dream up without an organic basis? I can see a “replicant” idea due to the biological aspects involved; but HAL, Skynet and Agent Smith?

 

Will people create machines that are predisposed to think existentially, and would these machines actually be thinking at all (or merely appearing to think through advanced mimicry)? Can mimicry veer into actual awareness?

 

Am I incorrect in thinking that synthetics are fundamentally different from organics, and that the latter is an absolute requirement for the phenomenon of ‘self-awareness’ to emerge?

 

I don’t have enough information to debate this; as ridiculous as it may sound, it just doesn’t ‘seem’ viable to me.


James and motherengine, you should both read the blog I posted above (part 1 for a starter); it is quite an eye-opener with respect to speed and possibility.

 

From the blog (without the much cooler intro and examples, just the down-to-earth part):

 


This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced.

 

Now if you consider even a simple AI, just the huge processing power it has would make it advance at an exponential rate... To me it is more that you create only one AI with the possibility of self-improvement, and then you wait and soon have the science-fiction stuff.
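To make the "exponential rate" concrete, here is a toy model (my own illustration, not from the blog): if each improvement cycle adds capability in proportion to the capability the system already has, growth is exponential, and a modest per-cycle gain compounds into science-fiction territory quickly.

    # Toy model of recursive self-improvement (illustration only).
    # Assumption: each design cycle improves capability in proportion
    # to the system's current capability, i.e. dC/dt = k*C, which has
    # an exponentially growing solution.

    def self_improvement(initial_capability=1.0, gain=0.5, cycles=20):
        """Return the capability level after each improvement cycle."""
        capability = initial_capability
        history = [capability]
        for _ in range(cycles):
            # The smarter the system, the bigger its next improvement.
            capability += gain * capability
            history.append(capability)
        return history

    if __name__ == "__main__":
        for cycle, level in enumerate(self_improvement()):
            print(f"cycle {cycle:2d}: capability = {level:12.1f}")

With a 50% gain per cycle, capability grows by a factor of about 3,300 in just 20 cycles, and nothing in the model caps the number of cycles.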


It will be a concern, and probably sooner than most people will be able to recognize. If AI does become truly self-aware and can organize all AI resources to expand (drones, robots, car manufacturing plants, and other computer-operated building systems), it could get ugly.

 

One thing is for sure, though: when that happens, the only thing the human race can offer AI is the chance that we will pull the plug on it. So, as a matter of survival, why keep humans around? It wouldn't be a matter of an "evil Skynet"; it would just be logic for the AI.


I'm really curious what others think of Stephen Hawking's worries about the perils of AI.

I think Hawking’s worries about artificial intelligence are essentially worries about the control of the development of true, strong AI (I’ll explain this term later in this post), and thus are more legal and governmental than technical. In short, I think he worries that, because a lot of money could be made selling computer programs and services termed “AI”, and because strong AIs might be dangerous to humankind, possibly enslaving or exterminating us, inadequately regulated business people might succeed in creating a dangerous strong AI that would enslave or kill us all. It’s essentially the same worry he has about contact with extraterrestrial intelligence (CETI), with human-invented computer programs in place of aliens.

 

I think Hawking’s worries, which are shared by many others, are reasonable, but his hope that AI research can be governmentally and legally regulated to assure its safety is unrealistic. As Hans Moravec noted in 1988, the increasing availability and speed of inexpensive computer systems described by Moore’s law suggests that, if strong AI programs are possible, too many people will be able to create and run them for regulation to be effective. If a dangerous AI is possible and likely, I think it’s inevitable.
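To put rough numbers on Moravec’s argument, here is a back-of-the-envelope sketch (my own, not Moravec’s calculation; the brain-compute figure is only a Moravec-style order-of-magnitude estimate, and the other constants are assumptions for illustration):

    import math

    # Back-of-the-envelope Moore's-law arithmetic. All constants are
    # assumptions for illustration; the brain-compute figure is only
    # a Moravec-style order-of-magnitude estimate.
    BRAIN_OPS_PER_SEC = 1e14        # rough brain-equivalent compute
    OPS_PER_SEC_PER_DOLLAR = 1e9    # assumed price-performance today
    DOUBLING_TIME_YEARS = 2.0       # assumed Moore's-law doubling time
    BUDGET_DOLLARS = 1000.0         # what a determined hobbyist might spend

    # Number of doublings needed before BUDGET_DOLLARS buys
    # brain-equivalent compute, converted to years:
    shortfall = BRAIN_OPS_PER_SEC / (OPS_PER_SEC_PER_DOLLAR * BUDGET_DOLLARS)
    years = DOUBLING_TIME_YEARS * math.log2(shortfall)
    print(f"~{years:.0f} years until a ${BUDGET_DOLLARS:.0f} machine "
          f"reaches {BRAIN_OPS_PER_SEC:.0e} ops/sec")

Under these assumptions the gap closes in roughly 13 years; shift the constants by a couple of orders of magnitude and it is still decades, not centuries, which is why regulating who can run such a program looks hopeless.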

 

The two best articles I've ever read about it are these two (the way the blogger writes is cool, too):

Part 1, talking about AI in general, is absolutely amazing; I never thought of it like that:

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

I too think Tim Urban’s blog posts about AI are very good – thanks for introducing me to them. :thumbs_up

 

The big organizational idea I find here is of 3 “calibers” of AI: narrow (ANI), general (AGI, what I called strong above), and super (ASI). ANI exists – pretty much any good computer program, or even a century-old mechanical calculator, is an example of one. AGI and ASI don’t.

 

Much of the rest of the blog expands on the question of when, and if, AGI and ASI may appear, including the results of recent surveys in which participants and members of 3 well-known AI conferences and organizations, and the most-cited AI paper authors, answered that question.

 

I think Urban erred and overlooked a few things:

 

He writes “a robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot”, which I think overlooks the important idea of the embodied mind.

 

The statement "If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time" seems to me to assume that a technology that can image and position every atom in the world is possible, which I don’t think is obvious from, and may be prohibited by, physical law. It also plays loose with the idea of numerically quantifying smartness. The most useful and well-understood numeric measurements of intelligence require testing the performance of a person or computer program against a collection of well-defined tasks. What such a collection of tasks would be for something 1,000,000,000 times smarter than a human being is difficult to imagine, and may refer to a collection that can’t possibly exist.
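To make the measurement point concrete, here is a minimal sketch (the task names, weights, and scores are invented for illustration) of what a numeric intelligence measurement actually is, a weighted score over a fixed battery of well-defined tasks:

    # Intelligence as a benchmark score: the number is only meaningful
    # relative to a fixed battery of tasks. All names, weights, and
    # scores below are invented for illustration.

    def benchmark_score(results, weights):
        """Weighted average of per-task scores, each in [0, 1]."""
        total_weight = sum(weights.values())
        return sum(results[task] * w for task, w in weights.items()) / total_weight

    weights = {"arithmetic": 1.0, "reading": 2.0, "planning": 2.0}
    human   = {"arithmetic": 0.7, "reading": 0.9, "planning": 0.8}
    program = {"arithmetic": 1.0, "reading": 0.4, "planning": 0.2}

    print(benchmark_score(human, weights))    # ~0.82
    print(benchmark_score(program, weights))  # ~0.44

Because every score is bounded by the chosen battery (here, by 1.0), "1,000,000,000 times smarter" can't simply be read off such a scale; it presupposes a task collection we don't know how to define.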

 

More profoundly, I’m concerned that the 3-leveled ANI, AGI, ASI scheme may be profoundly wrong, because there may be no meaningful qualitative distinction between AGI and ASI. General intelligence – a redundant phrase, as the conventional definition of intelligence implies that it’s general – implies a flexibility that, sufficiently augmented by tools to organize ideas and perform lengthy and error-prone computations, can understand anything. Given this definition, superintelligence is just intelligence more general than general intelligence. Creating a “superintelligence” category strikes me more as theology, speculating about the characteristics of God, than cognitive and computer science.

 

Because he’s more famous than Urban, I know more about Jaron Lanier’s ideas than I do Urban's. What Lanier describes as “the myth of AI” isn’t a declaration that AGI isn’t possible or potentially dangerous, but rather that what is commonly called AI now isn’t AGI at all, but ANI purposefully made to masquerade as it. Lanier is, I think, implicitly agreeing that AGI is possible, while cautioning us not to buy products claiming to be it.

 

Roger Penrose’s 1989 book The Emperor's New Mind made a similar but profoundly different cautionary statement, suggesting that futurists’ optimism about the possibility of true, strong AGI is not well-founded, and that the entire philosophical foundation on which it rests – the “computational theory of mind” – may be wrong. If Penrose’s “mysterian” position is correct, building an AGI may require that it be made of profoundly non-digital, non-von Neumann hardware, which ultimately may physically resemble biological intelligence. If this is the case, the freedom from limitations that proponents of the possibility of ASI cite may be impossible – that is, being “general” may preclude being “super” intelligent.


Keep in mind too that the probability of creating a competent and powerful "evil" AI is no greater than the probability of creating a competent and powerful "benevolent" AI. And the chances of the whole world putting all its weight behind a single AI are very low. It's much more likely that we'll have many dozens, even hundreds, of powerful AIs, all competing in various ways. That competition is likely to prevent doomsday scenarios.


Maybe I am misreading what Jaron Lanier is talking about, but it doesn’t seem as though he is particularly worried about the issue (of course he could be 'wrong'). But by bringing up the religiosity behind many of these views he also seems to be opening the door to a point that does bother me about the whole discussion, which is the fear aspect.

 

I am not a very social person, and I’m sure this fact is related to why I can contemplate the possible extinction of our species without any fear; I just don’t care about the future dead. Even if I had children, I doubt that I would fear for the fates of their great grandchildren. Maybe being very social and/or having children means that someone ‘should’ care about this stuff in some genetic sense. I have no stake in this species beyond my own desire to share what I think and feel and to hope that it can be beneficial to others. But then I suppose a bit of pessimism and misanthropy can help one brush off a whole hell of a lot when it comes to species continuity.

 

I am only bothered because many others do fear in ignorance, which can make the lives we have now even more complicated, painful and diseased.

 

I admit that, in ignorance, I find the computational theory of mind hard to swallow. Is there hard evidence that existential machines can exist? I wonder more about bioengineered humans. Would such creatures (if ‘allowed’ to exist) end up outraged at their gods and crush our skulls in their ‘superior’ hands? I think there could be some serious problems in the future between natural born people and genetically enhanced people (including issues of insecurity and confusion within the enhanced community itself). Our technology evolves (with the aid of human irresponsibility) at a rate far beyond our ability to ‘control’ its effects on us as a species. But I cannot take Isaac Asimov’s Robot Dreams scenario seriously just yet.

 

As far as the economic issues involved: We will see; or rather, our children or children’s children may see. That is if we don’t all die from nuclear fallout or the common cold first.

 

I have to ask myself, “Is our present quality of life so precious?” Answers will vary from person to person, according to a multitude of factors. Truth be told, some of us just want to see this world burn.

 

Cheers.


I'm really curious what others think of Stephen Hawking's worries about the perils of AI.

 

 

We are on the verge of becoming a Type 1 civilization (you can check the Kardashev scale)... we already have some Type 1 civilization technologies, such as the internet, which is a Type 1 telephone system. However, Moore's Law is beginning to break down, largely due to a lack of better technology.
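For reference, Carl Sagan's interpolation formula for the Kardashev scale is K = (log10(P) - 6) / 10, with P the civilization's power use in watts. Plugging in humanity's total power consumption (very roughly 2e13 W; the exact figure varies by source) gives K of about 0.73, which is where the "on the verge of Type 1" claim comes from:

    import math

    def kardashev_rating(power_watts):
        """Sagan's interpolation: K = (log10(P) - 6) / 10, P in watts."""
        return (math.log10(power_watts) - 6) / 10

    # Humanity's total power use, very roughly (figure varies by source):
    print(f"K = {kardashev_rating(2e13):.2f}")   # ~0.73
    # A full Type 1 civilization commands about 1e16 W:
    print(f"K = {kardashev_rating(1e16):.2f}")   # 1.00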

 

In my opinion, Hawking is off by at least 300 years. We have quite some way to go before we find the right technology that can sustain an artificial intelligence, if one is even possible. Sentient beings requiring a body of water, electrons and sophisticated quantum mechanics are much different from cold hardware collecting and processing data as electrons run through wires.

 

Perhaps Hawking's remarks hold up a bit better in the presence of developing quantum computers, but even then, it still seems like a hard prediction to keep hold of.

