The three laws

Simon B points at an article that reports on new safety guidelines for ‘next generation robots.’ Because he’s on LJ, and I don’t have an LJ login or OpenID (!! – how lazy am I?), I thought I’d comment here…

Simon points out that the three guidelines sound quite similar to Asimov’s three laws of robotics:

The guidelines will require manufacturers to install enough sensors to minimize the risk of the robots running into people and use soft and light materials so they do not cause harm if they do so, the officials said.

They will also be required to install emergency shut-off buttons, they said.

….

There are also efforts under way to create global guidelines. The ministry plans to have its measures reflect the global standards, the officials said.

Asimov’s, for your reference, are:

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Source: Wikipedia.

First of all, how cool is it that the real world is catching up with science fiction (well, a teensy bit)? And secondly: I’m glad the ‘through inaction’ subclause hasn’t made it into the safety guidelines. Imagine if we had robots wandering around trying to lower our cholesterol intake or prevent us from drinking alcohol, in case we were to ‘come to harm’ inadvertently… That was always a flaw in Asimov’s laws, I think :P.
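
If it helps to see why that subclause is so much stronger than the rest of the First Law, here’s a minimal, purely hypothetical sketch (all names are made up; nothing in it comes from the article or any real guideline): without the inaction clause a robot only has to vet its own action, but with it, every preventable harm anywhere becomes the robot’s business.

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy model, hypothetical names throughout. It contrasts the action-based
# reading of the First Law ("a robot may not harm a human being") with the
# stricter outcome-based reading ("...or, through inaction, allow a human
# being to come to harm").

@dataclass
class World:
    preventable_harms: List[str]

@dataclass
class Action:
    name: str
    harms_human: bool = False
    prevents: Optional[str] = None  # which harm, if any, this action averts

def first_law_forbids(action: Action, world: World, inaction_clause: bool) -> bool:
    if action.harms_human:
        return True  # the action itself harms someone: always forbidden
    if inaction_clause:
        # Stricter reading: leaving any preventable harm unprevented also
        # violates the law, even if the robot did nothing "harmful" itself.
        return any(h != action.prevents for h in world.preventable_harms)
    return False

world = World(preventable_harms=["high cholesterol", "one glass of wine too many"])
idle = Action("do nothing")

print(first_law_forbids(idle, world, inaction_clause=False))  # False: idling is fine
print(first_law_forbids(idle, world, inaction_clause=True))   # True: the robot must intervene
```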

Still, it would be concerning if they came up with the Zeroth Law (a robot may not harm humanity, or, by inaction, allow humanity to come to harm) independently, as Simon comments… ;)