New laws for responsible robotics may soon replace Asimov’s robo guidelines
By ANI | Thursday, July 30, 2009
WASHINGTON - Two engineers have proposed an alternative to Isaac Asimov’s famous “Three Laws of Robotics” - a set of “Three Laws of Responsible Robotics” - in order to rewrite mankind’s future with robots.
“We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways,” said David Woods, professor of integrated systems engineering at Ohio State University.
Asimov’s laws are iconic not only among engineers and science fiction enthusiasts but also among the general public.
His original three laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While evidence suggests that Asimov thought long and hard about his laws when he wrote them, Woods believes that the author did not intend for engineers to create robots that followed those laws to the letter.
“Go back to the original context of the stories. He’s using the three laws as a literary device. The plot is driven by the gaps in the laws - the situations in which the laws break down,” Woods said, referring to Asimov’s I, Robot among others.
“For those laws to be meaningful, robots have to possess a degree of social intelligence and moral intelligence, and Asimov examines what would happen when that intelligence isn’t there,” he said.
In reality, engineers are still struggling to give robots basic vision and language skills. These efforts are hindered in part by our lack of understanding of how these skills are managed in the human brain.
“We are far from a time when humans may teach robots a moral code and responsibility,” said Woods.
Woods and his coauthor, Robin Murphy of Texas A&M University, composed three laws that put the responsibility back on humans.
The three new laws that Woods and Murphy propose are:
A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
A robot must respond to humans as appropriate for their roles.
A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control, which does not conflict with the First and Second Laws.
“Our laws are a little more realistic, and therefore a little more boring,” said Woods. (ANI)