Science Forums

Artificial Technology



Irish and FT,

 

Should I leave you two alone for a bit? I have no problem with someone asking me about my religion; indeed, if I did, that wouldn't say much for my religious convictions. I have more to say on this topic, but as IrishEyes noted – this is not exactly appropriate for a discussion on Artificial Intelligence.

 

Continuing the discussion on AI – I think the problem may lie with my logic relative to the system we're discussing. I understand the simplicity of testing for a given value and taking action based upon the return value of the test. My confusion originates more from why anyone would do such a thing. Emotion, for the most part, acts independent of logic – a condition generally considered to be a problem with software. I can understand the validity of wanting to explore every avenue of action, and the suitability of automating that process, but programming software to intentionally make what would normally be considered the “wrong” choices is something that I am having some difficulty wrapping my mind around.
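The test-and-branch mechanism described above can be sketched in a few lines; the names `sensor_value` and `threshold` are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal sketch of "testing for a given value and taking action based
# upon the return value of the test". Names are invented for illustration.

def choose_action(sensor_value, threshold=0.5):
    """Branch on a simple value test and return an action label."""
    if sensor_value > threshold:
        return "act"
    return "wait"

print(choose_action(0.7))  # act
print(choose_action(0.2))  # wait
```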

 

I'll continue the religious discussion under the "Creating a Religion" thread within Philosophy and Humanities.


Originally posted by: nemo

Irish and FT,

 

Should I leave you two alone for a bit?

Oh, we sneak our private time in here and there! :-) As Tormod comments, we are perhaps the yin and yang of the site.

I have no problem with someone asking me about my religion; indeed, if I did, that wouldn't say much for my religious convictions. I have more to say on this topic, but as IrishEyes noted – this is not exactly appropriate for a discussion on Artificial Intelligence.

True. But it is always an interesting discussion for me. There are various religion-based threads to move this to.

 

That's what I get for being curious and aware. Your comment rang a bell, I asked a simple question based on the noted connection, and away it went!

Continuing the discussion on AI – I think the problem may lie with my logic relative to the system we're discussing. I understand the simplicity of testing for a given value and taking action based upon the return value of the test. My confusion originates more from why anyone would do such a thing. Emotion, for the most part, acts independent of logic –

I do not find that to be the case, at least not from the "process" side. From the "resultant response" side, perhaps.

 

i.e. an individual might make a decision that does not produce a "reasoned" result, but the process is ALWAYS "logical". That is, "emotion" is understood as a logical physiological process: stimulus, response, this or that cortex stimulated, and so on.

 

e.g. a person at a State Fair walks past the guy hawking some "wonder mop". The pitchman uses established logical processes that are intended to provoke an emotional response from the viewer. He uses terms and actions that stimulate the buying urge. We understand how and why these work based on logical scientific processes.

 

However, the end result for the viewer is being led into making a perhaps "irrational decision".

 

A "logical process" that results in an "irrational decision".

 

So "emotion" follows a logical process, even if its involvement in a decision process causes the decision maker to make an illogical decision.

but programming software to intentionally make what would normally be considered the “wrong” choices is something that I am having some difficulty wrapping my mind around.

That is because you are assuming the desired result is to make a "wrong" decision, while the actual intent is to produce the "desired" outcome by inserting a weighting filter on input that might otherwise be processed purely logically.

 

Let's say we are designing software to control the movement of a vehicle, and that one set of parameters is avoidance of pedestrians. Suppose the software is forced to decide between two end results in which it is inevitable that pedestrians WILL be injured. A "logical" decision would be based on the path that would harm the fewest people. However, an "emotion" filter is inserted that weights babies significantly higher than adults, and thus the software is designed so that the outcome would be 10 people killed to save ONE infant.
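A rough sketch of that weighting filter might look like the following. All names and weight values here are invented purely for illustration; the point is only that a sufficiently heavy "emotional" weight on infants overrides the fewest-casualties rule.

```python
# Hypothetical "emotion filter": infants are weighted far above adults,
# so the path that harms ten adults becomes "cheaper" than the path
# that harms one infant. Weights are invented for illustration.

WEIGHTS = {"adult": 1.0, "infant": 20.0}

def path_cost(pedestrians):
    """Weighted 'cost' of harming everyone on a path."""
    return sum(WEIGHTS[p] for p in pedestrians)

def choose_path(paths):
    """Pick the (name, pedestrians) path with the lowest weighted cost."""
    return min(paths, key=lambda p: path_cost(p[1]))[0]

# Path A harms ten adults (cost 10.0); path B harms one infant (cost 20.0).
paths = [("A", ["adult"] * 10), ("B", ["infant"])]
print(choose_path(paths))  # A: ten adults are harmed to spare the infant
```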

 

Many traffic accidents happen because a driver saw some "cute little creature" in the road and did not want to harm it. So they make an ILLOGICAL choice to swerve and smash into a tree, killing themselves.

 

Thus a decision results in committing suicide in order to save a kitty. A "logical" thought process (stimulus/response) results in an irrational outcome decision.

 

This can be coded into software.


Originally posted by: nemo

I'm not aware of a popular religion that extols a fatalistic view as the objective of its practice.

Christianity definitely does. "Original sin" and the concept of external requirements for salvation hold that we are not capable of achieving the desired end result in and of ourselves. It requires that the person assume they are incapable of doing good by themselves. According to Christian theology, we are born bad and require god's assistance to live right.

 

Romans 3:23 For all have sinned, and come short of the glory of God;

 

To bring this back to topic: code could be written that weights "salvation" significantly higher than direct logical stimulus, requiring extreme levels of external conditions to overcome a bias established internally to the contrary (emotion).

 

In fact this would seem to be the case in the three primary Laws of Robotics. They establish a filter, a weighting system in the operational code, that puts the existence of the robot at a lower level than the existence of ANY INDIVIDUAL human. We, as the creator of the robot, COMMAND it to self-destruct before causing harm to its creator. This corresponds to the "emotional" reaction in which an adult would intentionally put themselves in harm's way to potentially save an infant.


According to Science, we are born helpless and require a great many people's assistance to meet Maslow's hierarchy of needs. The only self-sustaining entity science has envisioned is the myth of perpetual motion.

 

 

 

Nice verse – it's one of the most popular. Personally, I prefer 1 Timothy 1:15.

 

 

 

You are still coding logic and decision making processes that would be static as a replacement for spontaneous emotion. Out of curiosity, would the introduction of emotion into software be synonymous with free will? Would it ever be possible for a software application or robot to do something without a decision making process involved? How would you go about coding something as abstract as a sense of humor?


Originally posted by: nemo

According to Science, we are born helpless (snip)

 

Huh? According to nature, I'd say. Or are you talking about some specific thing? Science (as you well know) is many things, LEAST of all a force which directs how nature behaves.


Originally posted by: nemo

According to Science, we are born helpless and require a great many people's assistance to meet Maslow's hierarchy of needs.

Born DEPENDENT perhaps, but not HELPLESS. At birth we can breathe for ourselves; our hearts pump blood to all parts of our bodies. We can take nourishment given to us (the "dependent" side) without instruction or coaxing. The suckling instinct is hardwired, as is the pattern recognition to recognize basic shapes as potential friend/foe (we "recognize" human-type facial features).

The only self-sustaining entity science has envisioned is the myth of perpetual motion.

"Energy can neither be created nor destroyed."

 

This is not meant as validation of "perpetual motion", but to show that science considers energy to be self-sustaining.

Nice verse – it's one of the most popular. Personally, I prefer 1 Timothy 1:15.

Thanks for helping to prove why Christianity is BAD for society, why it fosters a negative attitude within its followers. That was my claim, and you helped prove it.

You are still coding logic and decision making processes that would be static as a replacement for spontaneous emotion.

Please explain how you are differentiating them. Examples might help.

Out of curiosity, would the introduction of emotion into software be synonymous with free will?

Free Will? Now we have a problem! First we have to find an agreeable definition of Free Will. If we were to (for some absurd reason, lol) allow for the existence of the Christian god, there would not be Free Will; omniscience and Free Will are mutually exclusive. Even without that, proving Free Will is ...

Would it ever be possible for a software application or robot to do something without a decision making process involved?

Man, you don't give any easy ones. Do ants follow a "decision making process"?

How would you go about coding something as abstract as a sense of humor?

You don't get any easier as you go along, do you? Humor is societally relevant: what's funny in one place dies in another. However, it would seem feasible to code particular filters to emulate specific humor parameters.
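One way to read "humor parameters" as code is a per-audience filter. Everything below (the audience names, style tags, and cue sets) is invented purely for illustration; real humor recognition is far harder than tag matching.

```python
# Toy sketch of culture-specific humor "filters": a joke's style tags
# are matched against cues a given audience tends to find funny.
# All audiences, tags, and cue sets are hypothetical.

HUMOR_CUES = {
    "audience_a": {"pun", "slapstick"},
    "audience_b": {"pun", "deadpan", "absurdity"},
}

def might_land(tags, audience):
    """Guess whether a joke with these style tags lands for this audience."""
    return bool(HUMOR_CUES.get(audience, set()) & set(tags))

print(might_land({"deadpan"}, "audience_b"))  # True
print(might_land({"deadpan"}, "audience_a"))  # False: the same joke dies elsewhere
```

The filter captures the point that the same joke passes one audience's parameters and fails another's, though it obviously says nothing about why the joke is funny.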


Maslow

 

 

Agreed. Dependency is definitely a better fit than helplessness, and science doesn't direct nature. Out of curiosity, what topic would Maslow's work fit into?

 

 

Energy

 

 

Hadn't considered that. Thanks.

 

 

Christianity

 

 

I think we're addressing this topic pretty well over in Phil/Rel.

 

 

Spontaneous Emotion, Free Will and a Sense of Humor

 

 

I tried to answer these topics separately, but was getting redundant in my answers. My wife has this crazy idea that sometimes I just get up on the wrong side of the bed. Could a robot begin a process without a specific stimulus or intended result, or would every action be the result of some stimulus (allowing for the circumstance that no stimulus might be considered a stimulus in and of itself)? Would robots ever be capable of having an argument about philosophy? Honestly, I'm having a hard time quantifying this subject myself; I keep having flashbacks to the movie "Short Circuit". I apologize for being vague and attempting to answer a question with more questions.

 

 

Ants

 

 

Their habit of forming single-file lines from their holes to their destination, instead of wandering randomly about in search of food (or whatever they are searching for), suggests a decision-making process for determining the validity of one ant's discovery of the location of food.

 

 

Humor

 

 

A guy walks up to me and says "Hey good-looking, I'm Steve".

 

I say "223, 27 and 0"

 

"What?"

 

"After 223 years of study, astronomy tells us that there are 27 heavenly bodies that should be orbiting Uranus, and none of them are named Steve."

 

--Does a computer 'get it', or am I simply stating a fact?


Originally posted by: nemo

Maslow

Out of curiosity, what topic would Maslow's work fit into?

In itself, Philosophy, as input, any discussion I would think.

Spontaneous Emotion, Free Will and a Sense of Humor

 

Could a robot begin a process without a specific stimulus or intended result

Could a human?

or would every action be the result to some stimulus

I think we have to assume that every action since the Big Bang is the result of a combination of various causalities. I keep seeing people demand a greater level of self-"activation" from AI than from humans.

I apologize for being vague and attempting to answer a question with more questions.

Actually, I find this a better approach to discussions. No one ever has absolute knowledge. (Though I come closest!) Exploring alternatives seems the most viable approach.


This topic is now closed to further replies.
