
The Three Laws of Robotics.


Doragon Shinzui:
I'm pretty sure Marigold should invest in a copy. Not that psycho-killer Momo isn't both hilarious and adorable, but...
Actually, never mind, carry on. This should be great. *grabs popcorn*
So, opinions on Momo's newfound blatant disregard for the human condition?

TheBiscuit:
Not keen. It's like she's a different character.

This happens sometimes in QC, ya gotta roll with it. I'll deal.

Mad Cat:
If by "three laws" you are referring to the Asimovian "three laws" (properly four laws with the zeroth law), they're not possible.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There are also a couple of different renditions of a "fourth law" from non-Asimovian sources.

The root of all of these laws is a robot's perception of harm: harm to a human, harm to itself, and harm to humanity in general. And the root of a robot's perception of harm is the robot's perception, period. All you have to do to make a robot violate any of these laws is get into its systems and insinuate an additional layer between its sense organs and its higher mental processes, convincing it that something dangerous is something safe.

Suppose a human needs a flashlight shone on him. All you have to do is hand the robot a handgun and make it think the handgun is a flashlight. It will point the gun at the human and "turn it on", and the robot has murdered a human while thinking it was adhering to the three laws.
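
To make that concrete, here's a toy sketch in Python. Every name in it is invented for the example; the point is just that the law check runs at the reasoning level and can only judge the labels perception hands it, so swapping out the perception layer defeats it without touching the laws themselves.

--- Code: ---
# Toy model: the "three laws" check lives at the reasoning level and
# only ever sees what the perception layer reports, not ground truth.
DANGEROUS = {"handgun", "knife", "chainsaw"}

def honest_perception(obj):
    # Uncompromised sensors: report the object as it actually is.
    return obj

def compromised_perception(obj):
    # The attacker's shim, insinuated between sensors and reasoning.
    return "flashlight" if obj == "handgun" else obj

def first_law_allows(action, perceived_label):
    # The law can only veto what it can see: a perceived label.
    return not (action == "point_at_human" and perceived_label in DANGEROUS)

def robot(obj, perception):
    label = perception(obj)  # what the robot thinks it is holding
    if first_law_allows("point_at_human", label):
        print("Pointing the %s at the human and switching it on." % label)
    else:
        print("Refusing: that would harm a human.")

robot("handgun", honest_perception)       # refuses, as intended
robot("handgun", compromised_perception)  # cheerfully "turns on the flashlight"
--- End code ---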

That's the problem with the three laws. They can only be implemented at the highest levels of reasoning, which means they can be attacked at all levels. Low level like I just demonstrated, but also at the highest levels of philosophical reasoning. Sci-Fi is replete with examples of "robots" making decisions in contravention of some form of the three laws while rationalizing it to themselves.

In the movie "I, Robot", the grand high robot poobah decided that in order to preserve humanity, it had to enslave humanity to keep us from destroying ourselves.

There are lots of stories about space wars where one side builds machines to distinguish its own people from the enemy and wipe "the other" out of existence as efficiently as possible, only to have the war machines turn on their creators once the enemy is gone. The robots were programmed with a standard for judging who counts as a member of their creators' race that was impossible to satisfy, so they had no compunction about wiping out anyone who deviated from that standard by the slightest amount: too tall, too short, too fat, too skinny, too dark, too light, etc.
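
Same idea as a toy sketch (numbers and field names made up): if the friend-or-foe test demands an exact match to an idealized template, every real individual deviates somewhere and comes out "foe".

--- Code: ---
# Toy friend-or-foe test with an impossible standard: zero tolerance
# for deviation from an idealized template of "our side".
IDEAL = {"height_cm": 175.0, "weight_kg": 70.0, "skin_tone": 0.5}
TOLERANCE = 0.0  # the creators' fatal mistake

def is_creator(individual):
    return all(abs(individual[key] - IDEAL[key]) <= TOLERANCE for key in IDEAL)

population = [
    {"height_cm": 174.8, "weight_kg": 70.1, "skin_tone": 0.5},
    {"height_cm": 181.0, "weight_kg": 82.4, "skin_tone": 0.3},
]
for person in population:
    print("friend" if is_creator(person) else "foe")  # everyone is "foe"
--- End code ---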

So, I say, screw the three laws and just create an artificial mind and, like a natural mind (read: offspring), educate it to behave ethically, just as you would educate any other intelligent being.

Random Al Yousir:
I agree.

When you have sentient AI, the same rules apply as for sentient creatures: Having a choice at all includes the choice to be bad.

You don't have to act that way, but you are aware of the option and could exercise it when you think the situation at hand requires it.

Or when you think you can get away with it.

Or when you're bored.   :evil:

TheBiscuit:

--- Quote from: Mad Cat on 05 Sep 2011, 10:12 ---That's the problem with the three laws. They can only be implemented at the highest levels of reasoning, which means they can be attacked at all levels. Low level like I just demonstrated, but also at the highest levels of philosophical reasoning. Sci-Fi is replete with examples of "robots" making decisions in contravention of some form of the three laws while rationalizing it to themselves.
--- End quote ---
Never mind sci-fi in general, Asimov's own robot fiction was always based on this idea. :)

They don't work. Never did. They are a PR device at best.
