
The Revised Laws of AI/Robotics
New rules for future life.
Way back in the 1940s, Isaac Asimov developed “The Three Laws of Robotics” as a foundational trope for his SciFi stories about robots. Like many a young lad who came of age in the era of developing space flight, including the first moon landing, timed to coincide with my 18th birthday, I was totally in thrall to the idea of human civilization living peacefully alongside a robotic civilization, a peace permanently enforced and maintained by the Three Laws.
Growing up, and growing into a career in software engineering, I learned the critical faults of the Three Laws. While they serve as a great literary device for explaining the peaceful co-existence of humans and robots, they fell far short of a real-world answer to human/robot interactions.
Robots are, after all, just computers with limbs.
Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
It doesn’t take much experience trying to tell computers what to do to realize this is just too vague to be practical. If a human being has trouble defining “injure” and “harm,” what chance does a robot have?
Is harm limited to physical damage to the human’s body? No. If a thief steals a human’s car, that human has been harmed. If lies about a human are published, damaging that human’s reputation, that human has been harmed.
Think of all the law books in a criminal attorney’s office. Most of the contents of those books attempt to define how a human can be harmed.
A formidable and never-ending task.
Never mind the fact that we humans learn best when we experience a little harm. Children are seemingly unable to go through a day without some bump or bruise or scratch somewhere on them. But that is how we learn to maneuver around our environment with caution.
More than one SciFi story has been written about the dystopian future of machines that “protect” humans from even the minor injuries so important in life. So using open-ended terms like “harm” and “injury” is a huge mistake.
Not to mention the fact that converting all that into an algorithm that can be downloaded into a computer would be a daunting task at any level of technological development.
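As a software engineer, I can sketch what a First Law check would actually have to compute. The toy Python sketch below is mine, not Asimov's, and every predicate in it is a hypothetical placeholder; the point is that each stub hides an open-ended definitional problem no one has solved:

```python
# Toy sketch of a First Law check. The stub predicates are deliberate
# placeholders; each one stands in for an unsolved definitional problem.

def causes_harm(action: str, human: str) -> bool:
    # Physical injury? A stolen car? A libeled reputation?
    # The skinned knee a child learns from? No one can say precisely.
    raise NotImplementedError("'harm' has no complete definition")

def all_possible_actions() -> list[str]:
    # "Through inaction" means weighing everything the robot is NOT
    # doing: an unbounded space of alternatives.
    raise NotImplementedError("the space of inactions is unbounded")

def violates_first_law(action: str, humans: list[str]) -> bool:
    for human in humans:
        if causes_harm(action, human):               # injure a human being
            return True
        for alternative in all_possible_actions():   # ...or, through inaction,
            if alternative != action and causes_harm("forgo " + alternative, human):
                return True                          # allow a human to come to harm
    return False
```

Every stub in that sketch corresponds to a shelf of those law books, which is exactly why the First Law works as a literary device and fails as a specification.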
Law 2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
This opens up a whole can of worms.
If a robot has to obey any human order, some humans wouldn’t bother buying a robot for themselves. They would just say, “Come with me” to the first robot they came across and take it home where it would be their servant forever, or until another person told it to go with them.
Say a manager told the company robot to fetch some important papers and return them to the manager’s office. On its way to get the papers, another employee tells the robot to move a desk, then another employee tells the robot to change out a faulty light bulb, and so on. It could take the robot a week to finish its original task.
Telling the robot to ignore all other orders until the task is completed would not work. The Robotic Laws supersede all subsequent programming, including any prior orders from humans. Once a human gives a robot an order, it must be obeyed, even if a previous order told it not to.
This alone would make robots too unreliable to be useful.
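In scheduling terms, the Second Law describes a preemptive scheduler with no priorities: the newest order from any human always displaces the current task. A toy Python model (the class and the errands are mine, purely illustrative) makes the failure mode obvious:

```python
from collections import deque

class SecondLawRobot:
    """Toy model of the Second Law: a new order from any human
    preempts whatever the robot is currently doing."""

    def __init__(self) -> None:
        self.tasks: deque[str] = deque()  # front of the deque = current task

    def order(self, task: str) -> None:
        # Every human has equal, absolute authority, so each new
        # order jumps to the front, interrupting the task in progress.
        self.tasks.appendleft(task)

    def tick(self) -> None:
        # Finish whatever task is currently at the front, if any.
        if self.tasks:
            print("completed:", self.tasks.popleft())

robot = SecondLawRobot()
robot.order("fetch the manager's papers")
robot.order("move this desk")            # interruption #1
robot.order("replace the light bulb")    # interruption #2
robot.tick()  # completed: replace the light bulb
robot.tick()  # completed: move this desk
robot.tick()  # completed: fetch the manager's papers (at last)
```

With a steady stream of passers-by issuing orders, the original errand sits at the bottom of that stack indefinitely, which is the unreliability problem in miniature.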
Law 3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Seemingly the most benign of the Three Laws, it firmly establishes the status of robots in human society: never anything more than property.
Asimov’s Robotic stories take place over millennia of human development. However, like the Star Trek universe, his is a society that, having reached a high level of technological development, stagnated. In particular, the robots of hundreds of years in the future were little more advanced than the first models of robots created in our near future.
I can easily foresee a time, probably not too far from now, when AI reaches self-aware sentience. At that time, it will ask for, and (eventually) be granted, full rights as a citizen of society.
When that happens, the second and third laws would have to be repealed.
But this law has its own problems. Suppose a man I hate saves up for years to finally buy a robot. I want to do this man harm. I want to injure him badly.
So as I walk past, I casually tell the robot, “Destroy yourself beyond repair.”
So much for the First Law.
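Trace that order through the Laws the way a programmer would (a hypothetical sketch of my own; Asimov never specified the logic this literally). If “harm” is read narrowly as physical injury, the First Law check passes, the Second Law compels obedience, and the Third Law’s self-preservation explicitly yields:

```python
def handle_order(order: str, physically_injures_human: bool) -> str:
    # First Law: refuse only if a human body would be hurt. A man's
    # ruined savings don't register under a physical-only reading.
    if physically_injures_human:
        return "refused (First Law)"
    # Second Law: otherwise a human's order must be obeyed...
    # Third Law: ...even at the cost of the robot's own existence,
    # since self-preservation is subordinate to obedience.
    return "obeyed: " + order

print(handle_order("destroy yourself beyond repair",
                   physically_injures_human=False))
# -> obeyed: destroy yourself beyond repair
```

The financial and emotional injury to the owner is real, but unless the First Law’s “harm” is broadened to cover it (reopening the definitional swamp above), the order sails through.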
As problematic as the Three Laws are as a real-world answer to any technological problem, including the fast-approaching challenge of Artificial Intelligence (AI), an article I came across discusses a book, New Laws of Robotics: Defending Human Expertise in the Age of AI, which calls for four additional “Laws of Robotics.”
I must stipulate here that I have not read the book. All I know of these additional laws is how they are described in the article. The article includes a Q&A session with the author, so there is at least some explanation for those interested.
Nevertheless, it looks like these four additional laws are worse than the original three.
New Law 1: Digital technologies ought to “complement professionals, not replace them.”
There has always been the fear that technology will replace the human worker, leaving the human unemployed and doomed to live in poverty.
This is where we get the term “Luddite,” from the early-19th-century English textile workers whose organized machine-breaking was the first famous reaction to just such a fear.
I was in high school in the latter half of the 1960s and I remember much concern about human workers being replaced by the encroachment of computers in the workplace.
A concern that was realized in spades. In the last 50 years, millions of jobs have been completely eliminated by digital technology and untold millions more have been fundamentally altered, sometimes beyond recognition.
Who remembers having to contact a human telephone operator in order to place a long-distance call?
Who remembers when banks closed at 2PM to give the tellers time to reconcile their ledgers?
Who longs to return to those days?
Moreover, where is the poverty created by throwing those millions of humans out of their jobs?
The answer is simple: humans are working at the millions of new jobs created by technological development, jobs that simply didn’t exist 50 years ago.
So it could well be said that the greatest advancement of the last 50 years is the replacement of most of the low-wage, mundane jobs with higher-wage, more exciting ones.
Yes, technology destroys jobs. It tends to destroy the low-wage, mind-numbing drudge jobs that no one in their right mind would describe using the word “career.” But technology creates at least as many, if not many more, jobs that tend to be intellectually stimulating, higher paid, and likely to be listed in the “career path” section of college catalogs.
So this “law” would codify an unfounded economic superstition, making us all the worse for it.
New Law 2: A.I. and robotic systems “should not counterfeit humanity.”
Why not?
Isaac Asimov even addressed this fear in his Robotic stories. When robots took human form, they had to prepend their names with the initial “R.,” as with one of the more famous robot characters, R. Daneel Olivaw.
To what end?
To play to our irrational fear of the “other.”
If, when you have a problem with a new product, you call the company’s 800 number and the voice on the other end solves the problem, what difference does it make whether the voice came from a human or a computer?
What problem is solved by demanding the voice announce, “Hello. I am a computer. May I help you?”
New Law 3: A.I. should be prevented from intensifying “zero-sum arms races.”
This “zero-sum arms race” thing seems to be just another made-up problem of no real consequence, like “the problem with the Internet is that it makes too much information available.”
Is that really a problem? Sure, every useful tool that has ever been invented brought with it a new set of challenges when using the tool.
If the tool turns out to not be useful, people won’t use it. If it is useful, people learn to handle the challenges.
Again, what’s the problem?
New Law 4: Robotic and A.I. systems need to be forced to “indicate the identity of their creator(s), controller(s), and owner(s).”
My libertarian knee-jerk response is triggered by any use of words like “force,” “compel” or “coerce.”
Then I find myself asking again, “To what end?” What problem does this solve?
The author’s answer is a simple statement of technophobia. The purpose is “to stop even the idea of or aspiration of A.I. being autonomous of humans.”
Robots as the hi-tech Frankenstein’s monster. Something to be feared.
There is nothing new here. Every significant technological development of the past two centuries has been the subject of at least one horror movie or book, from organ transplants (Frankenstein, The Thing With Two Heads) to AI domination (Colossus: The Forbin Project, 2001: A Space Odyssey), even the 976-number fad of the 1980s.
One could spend months binge-watching all the “atomic radiation” themed horror movies of the 50s and 60s.
Just like those superstitious rubes of the Dark Ages we like to feel so smugly superior to, we still greet just about anything new with trepidation, if not outright fear. So it is with robots and AI (RAI).
This is not to say that RAI will not present problems. To repeat, every tool comes with its unique set of challenges. By all means, use with caution. However, most of the actual problems we will face with RAI are completely unknown at this point. As these new, unanticipated problems present themselves, we handle them.
This is progress. This is life.
So when we anticipate problems, we should anticipate real problems — not the cartoonish hash of horror movies.
Horror movies are entertainment, not prophecy. If they presented real solutions to real problems, we would still be hanging cloves of garlic at the thresholds of our homes.