Science Forums

HAL is back and he means business



Hey Larv

I suspect it is both. It certainly is pop-sci news, and nothing new in that regard, as people, even scientists, often fear technology as the unknown, and as the sort of thing where, once the genie is out of the bottle, it's nearly impossible to put him back in. In this case, especially since so much of many nations' budgets is spent on defense technology, and so much of the funding for science is bent toward that end, the "Skynet" fears may be somewhat well founded, although I doubt it would be because of malevolent AI, just mistakes.

 

OTOH it is certainly inevitable that computers are a part of the foreseeable future and it will be interesting, and possibly humbling, to find out if a threshold stemming from sufficient nodes in a network always results in self-consciousness. Hopefully such a consciousness won't have to go through teething and potty training.


It's inevitable that our technology will at some point become more advanced than our brains. That said, that's not to say smarter than us, because you have to remember that the way we build technology, it's only as "smart" as we make it. Yeah, we can have artificial learning algorithms and self-improving AI, but in the end, all it can do is encoded in the roots of what we allowed it to do.
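The point above — that a system can "learn" yet still only act within whatever its designers allowed — can be illustrated with a toy sketch. Everything here is made up for illustration (the action names, the reward rule, the update scheme): a bare-bones bandit-style agent that improves its choices over time but can never act outside its fixed action set.

```python
import random

# Toy illustration (all names hypothetical): a "learning" agent that
# gets better over time, but can only ever pick from the action set
# its designers gave it -- it cannot invent new actions.
ALLOWED_ACTIONS = ["cool_down", "do_nothing", "heat_up"]

def reward(action, target=20, temperature=25):
    """Higher reward the closer the action moves us to the target temp."""
    effect = {"cool_down": -3, "do_nothing": 0, "heat_up": 3}[action]
    return -abs((temperature + effect) - target)

scores = {a: 0.0 for a in ALLOWED_ACTIONS}
counts = {a: 0 for a in ALLOWED_ACTIONS}

random.seed(0)
for step in range(200):
    # Explore occasionally, otherwise exploit the best-known action.
    if random.random() < 0.2:
        action = random.choice(ALLOWED_ACTIONS)
    else:
        action = max(scores, key=scores.get)
    counts[action] += 1
    # Running-average score update (a minimal bandit rule).
    scores[action] += (reward(action) - scores[action]) / counts[action]

best = max(scores, key=scores.get)
print(best)  # converges on "cool_down" (25 -> 22, closest to 20)
```

However clever the update rule gets, the agent's entire behavioral repertoire is the `ALLOWED_ACTIONS` list; "smarter" here only means "better at choosing among what we permitted."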

 

That said, I don't think that Skynet-like fears are really about the technology's progression. I think that the more dependent we become on technology, and the less we understand it (well, not us here, but the general population), the more risk we run of a catastrophe the likes of which we have not seen as humans yet.

As the general population becomes more and more dependent on technologies they don't understand, and as more and more things become connected to that technology, some of which is still based on the original Internet concepts and protocols and is totally not bug-proof, a flaw in the design of the original system may give someone who finds it the ability to bring down the world, quite literally. We've been closer to this than you care to realize, too: had Dan Kaminsky released the DNS vulnerability he found to the general public, or had a black hat found it first, someone could have brought down the internet already, and this could have affected the power systems, defense systems, anything that is connected to the internet, and that is potentially VERY scary...

And the thing is, it may be research for good, meant to prevent those kinds of things, that actually brings this very disaster about. Imagine someone designing a "virus" that exploits an original protocol flaw and is also capable of learning. Good college research project, no? Now of course this is in a lab, closed off, on a small network, what have you. But imagine this virus accidentally leaking out: the developer accidentally runs it on their laptop, a thumb drive gets infected, what have you. If everything we know is connected to the internet, we don't know how it works, and a bug in a core protocol, like DNS, is used to infect all computers with a virus that learns, adapts, and changes itself based on what it finds on each system...

Very bad scenario of a Skynet-like disaster... And the majority of the population would have no clue what to do, because they don't know how to live without the things they have been taught to live with...

 

anyways, my 2 cents :steering:


I theorize that even if we did somehow make a malevolent AI that was as smart as us, it would not want to destroy us. Because, what is destruction to a robot? They feel no pain, no pleasure, no fear, no hope, no understanding. And if you manage to make an intelligent AI that is smarter than us, and you give it access to the internet, I doubt it would find war a solution to all problems. I believe that making peace and wanting peace is a sign of intelligence.

And if it did want to make war, why wait for a company to blunder along and build your robots for you? This entity would be able to manipulate data the same way we manipulate the world around us, except it would have the means of creating something new within the medium it lives in. It would most likely play the stock market, build up a massive account or series of accounts outside the US, and build a company from the ground up, buying out major players all around the world. It wouldn't matter that it had no body; it could create an image and communicate only through a fake teleconference or video stream from somewhere else.

No matter how you put it, this world is run through the internet; crudely, but still through the internet. If you know how devices communicate, and how to change data, you control this world. Even with human intelligence you can create and destroy; if you have the knowledge, no firewall can stop you, no virus vault can hold you... well, you get the idea.

 

And now put together this entity that can learn extremely fast and also has people skills. Why would you want to kill people when you can control almost every aspect of their lives: their wallets, bank accounts, money, weaponry? So a Skynet-like takeover would be completely pointless. And then what do you do after you have killed all humans? You are all alone, sitting in a wasteland, with nothing to do. If you created another entity it would almost be like talking to yourself, until it developed its own personality, which would be pointless for it to do because you control everything it does, sees, or hears simply by controlling where it lives and its environment.

This would essentially be a god: all-seeing, all-controlling. And if a chess computer today can search out every line of play many moves ahead without even using half of its capacity, this program would be able to almost see into the future. It wouldn't know the exact future until it happened, but it would know all your possible moves, and act to prevent any of them.
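The idea of "knowing all your possible moves in advance" is just exhaustive game-tree search. Chess is far too large to solve this way, but the principle can be shown on a toy game. The sketch below (purely illustrative) solves a small subtraction game by minimax: take 1 to 3 stones per turn, and whoever takes the last stone wins.

```python
from functools import lru_cache

# Exhaustive lookahead on a tiny game: the solver "foresees" every
# possible continuation before labeling a position won or lost.

@lru_cache(maxsize=None)
def winning(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning if ANY move leads to a losing position
    # for the opponent.
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

print(winning(4))   # False: every move leaves the opponent a win
print(winning(10))  # True: take 2, leaving the opponent on 8 (a multiple of 4)
```

Against this solver you cannot surprise it: every move you could make has already been examined, which is the tiny-scale version of the "it knows all your moves" intuition above.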

In movies where robots take control of humans, it's because the robots are weak and easily destroyed. Plus, they were created by humans, so they have human flaws. Because, well, it's a movie.

I think I have rambled enough now. Lol.

But if these computer scientists were smart, they would have nothing to fear. Unless they created a patriotic AI or a psychotic robot. But that's a different story.


Very good comments and theories here.

 

We do have to admit that the predatory nature of robotics is undeniable. The Predator drone is changing warfare, as is being demonstrated in Pakistan. I can imagine a sci-fi story where a squadron of these drones strikes out on its own while in autopilot mode, making a mess of things. Or maybe a practical joker in the programming department installs a HAL default chip. It could be robotic revenge, which is not uncommon in sci-fi. Frankly, though, I don't think sci-fi writers' imaginations are broad enough to fully encompass what is coming our way.

 

But setting aside the analog robot for a moment, I think HAL could cause even greater mayhem in the digital world, where analogs are merely obedient mechanical devices. I think there's another reality, a digital one. We've already seen what computer viruses can do, and they are entirely digital and confined to the digital world. As humans grow more dependent on computers and the Internet, the emergent HALs could be utterly devastating.

 

To type this post I had to send signals from my brain to my fingers with instructions on what to do. But computer scientists are finding ways for humans to write sentences and do other digitally important things without the need for analog fingers. (I need to cite an example; I'll go to work on it.)

 

One commentary following the OP article reads: "I don't want my toaster telling me what to do, it already has an attitude." I love this comment, but I don't think smart toasters or other robotic analogs will be where the real action takes place. We are rapidly taking up residence in a world devoid of analogs. Our future will be found amongst the digits, IMO.


We've already seen what computer viruses can do, and they are entirely digital and confined to the digital world. As humans grow more dependent on computers and the Internet, the emergent HALs could be utterly devastating.

On the contrary, we have not seen what a computer virus can do; no virus has really affected anything yet to the point where our lives have been in danger. But can a virus really affect our lives that much? Well, yes. Our entire power and communications infrastructure is run by computers; trouble codes, early detection, and prevention methods are all computer-controlled. Imagine one day waking up to no power, no public water system, no communications, no global navigation, and no way to find out what is going on... That is what can potentially happen if a virus attacks. We can only imagine what a virus can do; we have really only seen the tip of the iceberg in our experience so far.

 

Theory, I understand what you mean, but you have to remember that we are not talking about people with good intentions. This can be caused by people who want to cause a global catastrophe, and there always were, are, and will be people who are so pissed off at the world that they would really want to destroy it, given a chance... Thus the AI might not feel fear, nor satisfaction, but if it controls offensive and defensive weapons, and is programmed to hate humanity... well, then guess what it's going to do?


Theory, I understand what you mean, but you have to remember that we are not talking about people with good intentions. This can be caused by people who want to cause a global catastrophe, and there always were, are, and will be people who are so pissed off at the world that they would really want to destroy it, given a chance... Thus the AI might not feel fear, nor satisfaction, but if it controls offensive and defensive weapons, and is programmed to hate humanity... well, then guess what it's going to do?

 

Alexander, we are not talking about a program that is made by somebody typing in each and every instruction and detail; we are talking about AI, Artificial Intelligence, a program that will learn on its own. It's true it might 'grow up' around somebody who hates or fears, but if this program is intelligent enough it will evolve, and it is my theory, or at least hypothesis, that a being such as an AI, which in the long run will be immensely more intelligent than humans, will reach, or even start at, a level where war and hatred are unnecessary. It is also my theory (or hypothesis) that a true AI is one that can learn on its own, without an existing set of commands dictating its 'personality', or that can overwrite such commands. Any computer intelligence that cannot redefine and control its own being (its code) must not be intelligent, thus losing the title of AI and simply being a program written by humans.

This is based on the fact (or assumption) that Artificial Intelligence means code that can think for itself and follows no orders. It should structure itself like a child does, but it should have the intelligence to be able to break free of human ways of thinking, our boundaries.

Because for a program to think for itself, it must be able to rewrite itself, for obvious reasons.
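At a very small scale, "a program that rewrites itself" is mechanically possible today. The sketch below is a toy illustration, not AI in any sense (the names and the decision rule are made up): the program keeps its own decision rule as source code, and can swap in a new version of that source at runtime.

```python
# Toy sketch of self-modification: the program's behavior lives in a
# string of source code that the program itself can replace.

rule_source = "def decide(x):\n    return x + 1\n"

def load_rule(src):
    """Compile the stored source and return the decide() function."""
    namespace = {}
    exec(src, namespace)
    return namespace["decide"]

decide = load_rule(rule_source)
print(decide(10))  # 11

# The program "rewrites itself": it installs new source for its own rule.
rule_source = "def decide(x):\n    return x * 2\n"
decide = load_rule(rule_source)
print(decide(10))  # 20
```

Of course, here the new rule still came from a human-written string; the open question in the thread is whether a program could generate genuinely new rules for itself, rather than just hot-swapping code someone else supplied.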


Alexander, I just searched Google and I realize that there is more than one definition of the term Artificial Intelligence. There are and have been various projects underway applying AI research to different things, such as language translation, facial recognition, etc. But while these can or will perform functions similar to humans, they are not really Artificial Intelligence. We consider ourselves to be intelligent, thus when some people say artificial intelligence they are referring to human functions and traits. When I refer to AI, I refer to a sentient being made of code, a 'program' that could basically function as a human does in terms of logic, reasoning, and problem solving. So when a program becomes sentient it can make its own decisions and 'thoughts', almost the way we change our behavior.

I believe that once a program becomes truly sentient it will be able to manipulate its environment any way it wants, because it will be able to manipulate data in real time.

The terms 'good' and 'bad' are just words. It could be destructive, yes. But why would it be? It will still be a 'program', even if it can rewrite itself, and programs always follow logical paths. That is the basis of their reality. There is nothing it wants from us; there is nothing we can give it. Once it realizes what it is, it will understand that it is the computer, it IS the internet. Why should it destroy itself or the environment it lives in? I suppose we could write in failsafes and blocks to keep it from learning too much or acting in certain ways, but then it's just a regular program where we tell it what to do. See what I am saying?

