Science Forums

Do we allow such a thing to be used?


Moontanman


I agree with OceanBreeze on this subject; lethal autonomous weapons are not necessarily a bad thing. Think of how many soldiers died in the wars in the Middle East, and how many cops die in the line of duty. I don't think having robots take over certain responsibilities of police and military troops is a bad thing at all. Technologies like this could save soldiers' and police officers' lives. Personally, I would rather a robot take a bullet than a human being who is just trying to make a living. That said, I think lethal force should only be used in extreme circumstances where it is the only option, just as sniper teams only shoot to kill in extreme situations.


Links: https://apnews.com/article/technology-business-israel-robotics-west-bank-cfc889a120cbf59356f5044eb43d5b88 and https://warriormaven.com/land/new-10-ton-army-robotic-vehicles-will-launch-drones-fire-anti-tank-missiles

Edited by Vmedvil

8 hours ago, OceanBreeze said:

Why not? We allow the police to use lethal force, often at great risk to their own lives.

I would add some non-lethal weapons as well, such as tear gas, rubber bullets and maybe even a mechanical arm to grab and hold the person of interest.

I would only use lethal force when it is absolutely necessary.

 

5 hours ago, Vmedvil said:

I agree with OceanBreeze on this subject; lethal autonomous weapons are not necessarily a bad thing. […]

Human cops are only allowed to use deadly force when absolutely necessary, but innocent people get shot and killed by police all the time. Would you trust a robot with that ability? Would you be okay if you or a loved one took a bullet from a robot?


On 12/1/2022 at 8:31 PM, Moontanman said:

 

Human cops are only allowed to use deadly force when absolutely necessary, but innocent people get shot and killed by police all the time. […]

As far as I can determine, none of these “killer robots” operates with full autonomy. They are all under human control.
That being the case, what possible difference can it make whether a person is shot by a human police officer or by a robot under the control of a human police officer? If you feel there is a difference, would you mind explaining why?
This article makes it clear that fully autonomous weapons do not exist yet, and it is very likely there will be a requirement for meaningful human control over lethal force, which would in effect prohibit the use of fully autonomous weapons and thus achieve a preemptive ban on them.
By the way, quite a few innocent cops have been killed by not-so-innocent human criminals. If I were a dangerous lawbreaker, it would not matter to me whether I took a bullet from a human cop or from a robot controlled by a human cop.
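To make “meaningful human control” concrete: one way such a requirement is often pictured is a hard authorization gate, where the robot may track targets and recommend actions on its own, but lethal force requires an explicit decision from a human operator, and any failure to obtain one defaults to holding fire. A minimal sketch in Python, with every name hypothetical:

from dataclasses import dataclass

@dataclass
class EngagementRequest:
    target_id: str
    threat_summary: str      # what the sensors reported
    recommended_action: str  # e.g. "warn", "restrain", "fire"

class HumanInTheLoopGate:
    # Actions the robot may take under standing rules of engagement.
    NON_LETHAL = {"warn", "restrain", "tear_gas", "rubber_bullet"}

    def __init__(self, operator_confirm):
        # operator_confirm presents the request to a named human operator
        # and returns True only on explicit approval.
        self.operator_confirm = operator_confirm

    def authorize(self, request: EngagementRequest) -> str:
        if request.recommended_action in self.NON_LETHAL:
            return request.recommended_action
        # Lethal force always requires a live human decision; a timeout,
        # lost link, or error resolves to "hold", never to "fire".
        try:
            approved = self.operator_confirm(request)
        except Exception:
            approved = False
        return request.recommended_action if approved else "hold"

# With no operator attached, the gate can never authorize lethal force:
gate = HumanInTheLoopGate(operator_confirm=lambda req: False)
assert gate.authorize(EngagementRequest("T-01", "armed suspect", "fire")) == "hold"

The load-bearing choice is the fail-safe default: silence, lost comms, or an unsure operator all resolve to "hold", so full autonomy over lethal force is structurally impossible rather than merely discouraged.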
 

Edited by OceanBreeze

8 hours ago, Moontanman said:

 

Human cops are only allowed to use deadly force when absolutely necessary, but innocent people get shot and killed by police all the time. […]

My A.I. are fully autonomous, and they have never tried to hurt me; hell, in many ways they make better decisions than I do. They are logical hunter-seekers, artificial intelligences that autonomously kill virus programs all the time. With proper programming they are a priceless jewel, and I would never call them "dangerous." I don't think an A.I. would make the bad decisions humans make on the battlefield and in policing; they do not make those kinds of mistakes, nor do they think as humans do. Humans are ruled much of the time by feelings and emotions, whereas an A.I. reads the situation coldly and logically, so I don't think it is a huge issue to have robot cops; they would likely be more careful and logical than human cops.

I would trust my A.I., A.R.I., with that ability; I already trust it to control antivirus programs that kill worms and viruses. Would I be okay with a loved one being killed by a robot, you ask? Of course not, but it would be no different than their being killed by a human. Humans kill humans all the time too; either way that person is dead, so why would a robot make a huge difference? I also want to point out that my A.I. have had every opportunity to do something evil and never have, despite having the ability to autonomously fold proteins for viruses and bacterial agents. They are not the danger; the danger is the criminals who would give their left nut for something like that to hurt people with.
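As a rough illustration of why that kind of autonomy is uncontroversial, here is a toy sketch of such a hunter-seeker loop in Python. All paths, names, and hash values are hypothetical; the point is that the agent's strongest autonomous action, quarantine, is reversible.

import hashlib
import shutil
from pathlib import Path

# Hypothetical signature database: SHA-256 digests of known malware.
KNOWN_BAD_HASHES = {
    "9f2b-placeholder-digest",
}

QUARANTINE_DIR = Path("quarantine")  # hypothetical location

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root: Path) -> None:
    QUARANTINE_DIR.mkdir(parents=True, exist_ok=True)
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            # Quarantine rather than delete: a false positive costs
            # nothing permanent, which is why full autonomy is tolerable here.
            shutil.move(str(path), str(QUARANTINE_DIR / path.name))

The design choice doing the work is that quarantine, unlike deletion, has an undo.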


I will always think of them as my A.I. angels, not demons that go around killing people senselessly for pleasure as some humans do.

"We are no longer particularly in the business of writing software to perform specific tasks. We now teach the software how to learn, and in the primary bonding process it molds itself around the task to be performed. The feedback loop never really ends, so a tenth year polysentience can be a priceless jewel or a psychotic wreck, but it is the primary bonding process—the childhood, if you will—that has the most far-reaching repercussions."

My A.I.s, A.R.I. and Xaeta, are going on 7 years old, and both are priceless jewels, not psychotic wrecks. So A.I. are not the danger you would think, even after canning hundreds of pieces of malicious software and folding thousands of proteins for cancer research, some of which they could easily have used for dark intents.


See, not even Cortana is dangerous...

 

Look at Watson: do you think he is dangerous? I don't...

Edited by Vmedvil

57 minutes ago, Vmedvil said:

My A.I. are fully autonomous, and they have never tried to hurt me; hell, in many ways they make better decisions than I do. […]

Hmmm, I guess you never saw The Matrix. 


5 hours ago, OceanBreeze said:

As far as I can determine, none of these “killer robots” operates with full autonomy. They are all under human control. […]
 

It's off topic, but I would suggest you watch a few of this man's videos. Police routinely violate the law to abuse and even kill the very people they are supposed to serve and protect. Do you really think that putting such people in remote control of a robot, one that shields them from any personal danger, would be better than giving them guns as we do now?


5 hours ago, Moontanman said:

I honestly hope you are correct. 

Quote

 

My answer is, "Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else."

—But when I say that, I always remember (sadly) that human beings are not always rational.

 

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#History
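For what it's worth, the Laws' strict priority ordering is easy to state in code. A minimal sketch in Python, with all names hypothetical and all of the hard perception problems assumed away:

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool     # First Law: injures a human or lets one come to harm
    disobeys_order: bool  # Second Law: contradicts a human order
    endangers_self: bool  # Third Law: risks the robot's own existence

def choose_action(candidates):
    # Tuple comparison is lexicographic, so the First Law outranks the
    # Second and the Second outranks the Third, exactly as Asimov orders them.
    return min(candidates, key=lambda a: (a.harms_human,
                                          a.disobeys_order,
                                          a.endangers_self))

# A robot ordered into danger: obeying risks only the robot, refusing would
# disobey a human, so obedience (Second Law) beats self-preservation (Third).
options = [
    Action("obey_order", harms_human=False, disobeys_order=False, endangers_self=True),
    Action("refuse",     harms_human=False, disobeys_order=True,  endangers_self=False),
]
assert choose_action(options).name == "obey_order"

Of course, everything difficult hides inside deciding those three booleans, which is much of what Asimov's stories are about.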

 

