A writer called Isaac Asimov wrote science fiction, and he made up laws for robots. These laws are about how robots are supposed to act.
They go like this:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where those orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Today at school, we had to talk in our group about the rules for robots. Mr. Parker gave us questions to ask in the group.
- What does inaction mean? What is an example of inaction? How would a person be injured through inaction?
- What does “where the orders would conflict” mean?
- What does the third law mean?
- Why would we need these laws?
- Do you think three laws are enough? What would be another law you think is important?
My group thinks inaction means not doing something you are supposed to be doing. A person could be injured through inaction if the robot needed to help them but did not. For example, if a person was drowning in the swimming pool and the robot just stood there on the side and did not throw a floatie or give them a stick to grab, that would be letting them come to harm through inaction.
Orders conflict when they are opposite each other, like if a person told the robot not to throw the floatie when the First Law says the robot should throw it.
A robot needs to know how to save itself. That’s the third law.
My group thinks these laws are not super clear and kinda say the same thing. The robot would need to be able to think for itself, and we don’t know if there are any robots that can do that. And what if a robot needs to hurt one human to help another human? How does the robot know which human to save?
We know that robots are tools, and we program them to do what we want. So the first rule would really be that humans should not make a robot that would try to take over the world.