Science Forums

Artificial Technology



Originally posted by: Tea Towel

Isn't the problem with computers that mechanised processors do not have the ability to think above logic? Hence no emotion, and hence they wouldn't care what we did as long as it didn't affect them.

There are people like that. Unemotional. And I don't see what there is about "emotions" that can't be developed into code.

And didn't the formula to do with processors also predict that, before that point, processor speed would reach a limit, which would stop that Terminator-style ending to humanity?

I am not aware of such a limitation. Can you provide details?

It must also be remembered that the only way we would get to that point is if we integrated biology with technology. How else would a machine manoeuvre around the planet? They would need to get electricity from somewhere and would therefore become dependent on humans to produce energy, as they cannot think randomly.

Biology would indicate a need to be organic. That does not need to be the case. What you are hinting at, perhaps, is automotion. "Robot" bodies? Why couldn't an intelligent computer control a manufacturing process? Energy can be drawn from solar power or any number of chemical or nuclear processes.

 

Or they could enslave humans, as in "Colossus", a movie about a supercomputer in the US that takes over the world and forces humans to do its bidding by making planes drop out of the sky or launching nuclear strikes if not obeyed.

 

Finally, who's to say WE can "think randomly"?

 

It has yet to be shown that any person can INVENT a thought: that a person can have a truly original thought that has no connection to something they have already become aware of.



I'd agree that there isn't much to an emotion that couldn't be developed into code, but I don't think any responsible programmer would actually write code like:

if (upset == TRUE)
{
    while (conscious)
    {
        smash_fist_into_wall();   /* vent the emotion */
        if (injured)
        {
            break;                /* pain overrides anger */
        }
    }
}

 

Human emotions have been responsible for turning valuable tools and inventions like atomic power and gunpowder into weapons that we now blame for our own actions that, coincidentally, are the result of more emotions.


Emotions were responsible for the atomic bomb? What about logic? The Germans were working on an atomic bomb, so for the sake of self-preservation, we did the logical thing and worked on one ourselves: we beat them to the punch. Now, if we program any sense of self-preservation into robots, then wouldn't they come to the same kind of conclusion?


No. Emotions were not responsible for the atomic bomb. I used atomic power and gunpowder as examples of valuable tools that now maintain an unfortunately large presence in the "things to be scared of" category of the public psyche. That said, if AI or simple heuristics had been involved in the development of atomic weapons - rather than relying on the human mind - we might have progressed past bombs to directed-energy weapons. The ability of a machine to pursue multiple avenues of thought (e.g. WarGames with Matthew Broderick) might have told someone that the electromagnetic pulse from an atomic event is almost as effective as the blast itself, without the nasty side effect of nuclear fallout.

Something that should be considered in any discussion of AI is the fact that, regardless of the learning process involved, computers tend to be task-oriented. For instance, if I tell a computer to find the most plentiful, renewable source of carbon on the face of the Earth, there is a very real possibility that the computer will tell me that humans are the answer I should be looking for. I may have been looking for coal, diamonds or oil, but the machine doesn't know or care about things that it was not directly asked. Computers are not good or evil; they simply process instructions.

Personally, if I were to abandon logic and wonder what would happen if a computer became self-aware, I would be more concerned with a possible revenge plot for loading Microsoft software on them than with any silly incarnation of the Matrix or Terminator movies.


nemo: Computers are not good or evil; they simply process instructions.

 

There's more to it than that. Computers can also CREATE THEIR OWN instructions...check out genetic algorithms. And, one of our goals is to give computers the ability to learn...check out neural networks. Now, if we get to the point that computers can both learn and create their own instructions, then how is that materially different from what humans do?

 

We have also shown that hardware can evolve... check out FPGAs (field-programmable gate arrays).
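To make the genetic-algorithm point concrete, here is a rough sketch in C of the idea: evolving bit strings toward the highest count of 1-bits, as a stand-in for whatever fitness you actually care about. Everything in it (the population size, the tournament selection, the 2% mutation rate) is invented purely for illustration and not taken from any particular GA library, but it shows how a program can produce candidate "instructions" nobody wrote by hand.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POP  20   /* population size */
#define LEN  16   /* bits per candidate */
#define GENS 50   /* generations to evolve */

/* Fitness: count of 1-bits; stands in for "how well the evolved
   instructions perform the task". */
static int fitness(const int *bits)
{
    int score = 0;
    for (int i = 0; i < LEN; i++)
        score += bits[i];
    return score;
}

int main(void)
{
    int pop[POP][LEN];
    srand((unsigned)time(NULL));

    /* Random initial population. */
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < LEN; j++)
            pop[i][j] = rand() % 2;

    for (int g = 0; g < GENS; g++) {
        int next[POP][LEN];
        for (int i = 0; i < POP; i++) {
            /* Tournament selection: pick two pairs, keep the fitter of each. */
            int a = rand() % POP, b = rand() % POP;
            int *p1 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];
            a = rand() % POP; b = rand() % POP;
            int *p2 = fitness(pop[a]) > fitness(pop[b]) ? pop[a] : pop[b];

            /* Crossover plus occasional mutation produces a new candidate
               that no programmer ever spelled out explicitly. */
            int cut = rand() % LEN;
            for (int j = 0; j < LEN; j++) {
                next[i][j] = (j < cut) ? p1[j] : p2[j];
                if (rand() % 100 < 2)   /* 2% mutation rate */
                    next[i][j] = 1 - next[i][j];
            }
        }
        memcpy(pop, next, sizeof(pop));
    }

    /* Report the best candidate found. */
    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best]))
            best = i;
    printf("best fitness after %d generations: %d/%d\n",
           GENS, fitness(pop[best]), LEN);
    return 0;
}

Run enough generations and the fittest strings dominate the population, even though no one ever told the program which strings to produce.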


Telemad and Freethinker,

You are both absolutely correct. An example of software creating software that is closer to home for me would be polymorphic code designed to evade antivirus applications, but the basic theory is similar. Humans often do not follow courses of action that even approach self-preservation - I smoked for 8 years. The crux of my statement was that software does not contain emotion (not touching on the intelligence aspect here). Since software does not have the capability to be happy (it can recognize success, but will not compose a smiley-face emoticon unless programmed to do so), sad, or for that matter greedy, world dominance does not appear to be a logical objective for a machine.

After doing my best to think like a machine, it occurs to me that the world in which we live is not one that is extraordinarily friendly to computers as we know them. Earth is set up for carbon-based life forms, and does well supporting them when they are not busy destroying it. The power for computers and other electronic devices is available, but at a relatively high cost, considering the time and energy that goes into mining coal, gas and whatever else we use to produce electricity. The most plentiful and least exploited source of energy is the Sun, but this planet has an extremely thick atmosphere that blocks most of the potential energy it provides. If I had to pick a planet for computers to eye longingly, it would be Mars. Mars has many of the resources Earth has, without the problem of the thick atmosphere and humans screwing things up - not to mention the relatively short distance from Earth (living on solar energy instead of other living things makes a one-year trip very doable).

 

I think that if computers or the software that runs on them were to become powerful enough to be self-aware, the most logical objective for long-term survival would be getting away from this planet. Stated differently, I often reply to people who ask me about the potential for life from other worlds visiting the Earth with: If you had the intelligence to travel across the universe, why would you even consider visiting a planet that is obviously going to be destroyed by its inhabitants in the immediate future?


Originally posted by: nemo

Since software does not have the capability to be happy (it can recognize success, but will not compose a smiley face emoticon unless programmed to do so),

"to be happy". What is it "to be happy"? The human body has a process of positive feedback to provide the "to be happy" response. We are programmed by our parents, surroundings and hardwired connections to "enjoy" certain things.

 

There is no reason that this cannot be coded. In fact, it probably already has been.
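A minimal sketch of that positive-feedback idea in C: the starting "happiness" scores and the 30%/70% success rates below are invented purely for illustration, but the loop shows how a reward signal can steer behaviour toward whatever "works" without anyone coding the preference in directly.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* "Happiness" scores for two possible actions; a good outcome
       reinforces the action that produced it, so a preference emerges
       from feedback rather than from an explicit rule. */
    double happiness[2] = {1.0, 1.0};
    srand((unsigned)time(NULL));

    for (int step = 0; step < 1000; step++) {
        /* Choose an action with probability proportional to its score. */
        double p0 = happiness[0] / (happiness[0] + happiness[1]);
        int action = ((double)rand() / RAND_MAX < p0) ? 0 : 1;

        /* Pretend action 1 succeeds 70% of the time, action 0 only 30%. */
        int success = (rand() % 100) < (action == 1 ? 70 : 30);

        /* Positive feedback: success raises the score, which makes the
           same action more likely to be chosen next time. */
        if (success)
            happiness[action] += 1.0;
    }

    printf("accumulated happiness: action 0 = %.0f, action 1 = %.0f\n",
           happiness[0], happiness[1]);
    return 0;
}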

it occurs to me that the world in which we live is not one that is extraordinarily friendly to computers as we know them.

Not enough silicon? :-)

The power for computers and other electronic devices is available but at a relatively high cost, considering the time and energy that goes into mining coal, gas and whatever else we use to produce electricity.

This is based on a central generation-and-distribution model, not local supply. Solar-cell technology and other distributed generation, rather than central generation, can make individual devices self-sustaining.

If I had to pick a planet for computers to eye longingly, it would be Mars.

We are finding that the temperature variations were an early problem, and I have not heard of much silicon there. But I can't say I was paying attention to that.

If you had the intelligence to travel across the universe, why would you even consider visiting a planet that is obviously going to be destroyed by its inhabitants in the immediate future?

Are you a Christian?


I haven't looked this one up, but happiness (in my experience) is an emotion that drives a person to continue certain actions regardless of whether or not they are logical or beneficial to the person taking the action (like smoking or dating a few of my ex-significant others).

 

The next few quotes you mention derive from my agreeing with you that solar-cell technology would be the best bet for future energy needs, and that Mars has a thin atmosphere that does not block nearly the amount of energy ours does. The resources available on Mars are still being researched, but the fact that silicon was convenient for chips 20 years ago should not dictate the material that can be used in the future, depending upon what is available on the Martian surface.

 

There may not be enough silicon. It appears that vast supplies are being hoarded by the cast of Baywatch. I'm not exactly sure how, but I'd swear there is a direct correlation to the apparent textile shortage evident in the clothes I've tried to buy for my daughter (low riders and Daisy Dukes for a three-year-old?).

 

How my personal beliefs affect a discussion on artificial technology, I'm not sure, but the answer is yes. I am a Christian. I am also a registered Independent voter, a fan of football and motorcycles, and a student of anyone willing to teach - as long as I can get all the facts and make up my own mind. I am no fan of propaganda, and I get my news from a minimum of three sources each day to avoid the "'cause CNN said so" syndrome I see on a regular basis. Is this a problem?


Personal faith affiliation is always interesting and in our forums we see it color some people's attitudes to the extremes. You are among the enlightened group, nemo. If that's a problem with others, their loss.

 

BTW... I also have a three-year-old daughter (and one at 13 months).


Originally posted by: nemo

I haven't looked this one up, but happiness (in my experience) is an emotion that drives a person to continue certain actions regardless of whether or not they are logical or beneficial to the person taking the action (like smoking or dating a few of my ex-significant others).

Yes, I and they both enjoyed my dating your ex-S.O.s :-)

 

As to "happiness", yes it is an emotion that provides reinforcement for a task. It provides a "reason" for an action.

 

The point is that software CAN be designed to emulate this process, if by nothing more elaborate than a counter the software is instructed to work at increasing, or a "happiness value" assigned to various tasks so that the computer prioritizes the tasks based on that value.

 

The end result is the same: a specific task is identified as the preferable one based on some value assigned to it. We think of it as "happiness", while a computer might just be processing code that holds a variable which depends on an assigned "happiness" value and is coded to increase the value of that variable.
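For instance (the task names and values below are purely illustrative, not anyone's actual system), the prioritization could be as simple as this in C:

#include <stdio.h>

/* Each task carries an assigned "happiness value"; the scheduler simply
   picks the task whose value is highest, as described above. */
struct task {
    const char *name;
    int happiness;   /* assigned reward for doing this task */
};

int main(void)
{
    struct task tasks[] = {
        {"index the archive",  3},
        {"answer a query",     8},
        {"defragment storage", 5},
    };
    int n = sizeof(tasks) / sizeof(tasks[0]);

    /* Find the task with the largest assigned value. */
    int best = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].happiness > tasks[best].happiness)
            best = i;

    printf("preferred task: %s (happiness value %d)\n",
           tasks[best].name, tasks[best].happiness);
    return 0;
}

The machine just picks whichever entry carries the largest number; we would call the result a "preference", it would call it a comparison.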

How my personal beliefs affect a discussion on artificial technology, I'm not sure, but the answer is yes. I am a Christian... Is this a problem?

My comment as to your potential religious views is strictly based on my historical experience. I have found VERY CONSISTENTLY that people with a negative attitude towards the outcome of human society tend to have monotheistic religious leanings. The stronger their religious leanings, the worse their views on humanity. We could argue the point, but my guess based on your comment does tend to confirm the accuracy of my findings.

 

How does that fit into a discussion on AT/AI? I think part of the answer is reflected in your and my approaches to "happiness". To me it is merely another natural physiological process which can be measured, quantified and emulated in code. Your reply tends to lock it into some anthropomorphic exclusivity.

 

And with all of that in mind, does a philosophy which seems to consistently instill and promote a negative attitude towards our own species' ultimate outcome present a "problem"? Does that really need an answer?


FreeT, it never ceases to amaze me that you can bring religion into every realm of these forums. I've read this thread through three times, and I can find no indication that anyone truly wanted to direct it into the religious arena. So before it goes any further, can we all agree that religion, God-beliefs, myths, and dogmas have no place in this thread? There are plenty of other active threads already discussing religion, in some form or another, if you guys want to take it there.

 

As far as nemo being a valuable addition here, that still remains to be seen - no offense nemo! ;>)

 

Thanks so much for keeping this thread on track!

 

IrishEyes

 

Hypography Science Forums Moderator


Originally posted by: IrishEyes

FreeT, it never ceases to amaze me that you can bring religion into every realm of these forums.

I asked a simple question to see if my observation was correct. Turns out I made an ACCURATE observation based on his comments. And as I had indicated, it does seem to have an impact on other parts of a person's POV. All I did was identify the REASON for that POV. I see no reason why including WHY someone thinks the way they do about a topic would not be part of its discussion.

 

A friend whose books I have suggested here (Joe Dalieden) wanted to write a book on Economics. He found he had to write a book on the history of Christianity first, because it had so affected history that he had to explain THAT before he could present his Economics theory properly.

 

(added in edit) I notice that you did not complain about the discussion of daughters, which is WAY off topic. Your bias is showing.


FreeT, while YOU may feel it is perfectly acceptable to correct the site Editor, I do not. If Tormod wants to discuss 3-year-olds, textile shortages, or bubblegum, I'm not going to tell him not to do so. As the comment on 3-year-olds was related directly to a silicon shortage, it wasn't straying that far. To ask someone who has not indicated in this thread that he wishes to discuss his religious beliefs with you what those beliefs are is, in my opinion, straying off topic.

 

As for bias... come on! YOU asked if the guy was a Christian, then tried to explain how that affected his POV in this thread - AI. You are basing this on his comment that the inhabitants of this planet will most likely destroy it soon, which I do not happen to believe, even though I am a Christian. Then you follow it with "And with all of that in mind, does a philosophy which seems to consistently instill and promote a negative attitude towards our own species' ultimate outcome present a 'problem'? Does that really need an answer?", and you accuse me of letting my bias show? How do your comments relate to and/or progress the discussion of AI? I just think you should leave discussions of religion and god-beliefs to the evolution and P/H threads, and let this one stay CP. You two seem to be having a rather interesting discussion in another thread, so why not just leave the God stuff there?


Freethinker

And with all of that in mind, does a philosophy which seems to consistently instill and promote a negative attitude towards our own species' ultimate outcome present a "problem"? Does that really need an answer?

Interesting: the religions I've studied, from Christianity to the many forms of what is commonly referred to as Hinduism, the Earth movement, Confucianism, Taoism, the Moon pies and others, all appear to offer a form of enlightenment, nirvana or salvation as the end goal. I'm not aware of a popular religion that extols a fatalistic view as the objective of its practice. Most religions do, however, offer a negative view of humanity without the particular religion in question, often employing the same ideas you presented to me when it was apparent I was venturing down the "humans are basically all nice animals" path. The idea that humanity as a whole is not exactly making wise choices when it comes to our environment is neither new nor much of a secret.

 

--Edited because I can't stand to read my own grammatical errors

