15 June 2012

Robocar Ethics

The Economist | Morals and the machine: As robots grow more autonomous, society needs to develop rules to manage them

One way of dealing with these difficult questions [of machine ethics] is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times. [...] Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer. Sebastian Thrun, a pioneer in the field, reckons driverless cars could save one million lives a year.
Autonomous cars will have to make ethical decisions. But note:
  1. In a lot of these cases we don't even know what the "right" decision is for people, so why would we expect people's tools to make the "right" decision?
  2. Even when we do, people often choose incorrectly under pressure.
  3. In any situation where we let people off the hook for their decisions, we ought to be willing to let their tools off as well.
  4. Pressing the demand for ethical machines too hard may itself be highly unethical, for the reason Thrun gives above.
If my dog gets run over by a human, it's small consolation that at least the human is capable of being ethical.

Why give legislative favor to moral people, who nonetheless routinely make bad decisions, over amoral machines whose decisions rarely result in tragedy? When it comes to road safety, I would rather have one person injured by an amoral machine than a thousand people injured by other human drivers, who do have moral agency but also have too much to drink, and talk on their cell phones, and get distracted by their kids fighting, and are, plain and simple, bad at driving.

Later on, the Economist says we might limit robots to techniques which are open to audit, so that they (or their owners, or makers) can explain why they made the decisions they did. This means, as they point out, that neural networks might not be allowed:
First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.
Driving is one area where neural networks have excelled for a long time (e.g. ALVINN) and seem likely to keep excelling in the future. So the Economist is just replaying the ethics-vs-outcomes debate on a smaller scale, and this time they come to the opposite conclusion. They're saying they would accept a more ethically understandable system (not one which is actually more ethical, just one whose decisions are easier to pass ethical judgement on) in place of one which creates more ethical outcomes. They'll sacrifice ethics for road safety in general, but in the specifics they want to do the opposite: they'll sacrifice safety in order to benefit ethics.
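
To make the audit point concrete, here's a toy sketch of my own in Python; it is not code from the Economist, from ALVINN, or from any real car, and every name and number in it is invented for illustration. A rule-based braking check can write a plain-language reason for each decision into a log, which is the kind of trace an audit requirement could demand; an ALVINN-style network just maps camera pixels to a steering angle, and the only "reasoning" available to record is its weights.

# Hypothetical sketch: auditable rules vs. an opaque neural network.
import numpy as np

def rule_based_brake(obstacle_distance_m, speed_mps, log):
    """Decide whether to brake and record why -- a human-readable audit trail."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assume ~6 m/s^2 of braking
    if obstacle_distance_m < stopping_distance:
        log.append("BRAKE: obstacle at %.1f m is inside stopping distance %.1f m"
                   % (obstacle_distance_m, stopping_distance))
        return True
    log.append("NO BRAKE: obstacle at %.1f m is beyond stopping distance %.1f m"
               % (obstacle_distance_m, stopping_distance))
    return False

def neural_steering(image, w1, w2):
    """ALVINN-style toy net: flattened camera image in, steering angle out.
    The only 'reason' for the output is the weights -- there is no rule to cite
    in a log, which is exactly the auditability problem."""
    hidden = np.tanh(image.flatten() @ w1)
    return float(np.tanh(hidden @ w2))  # steering angle in [-1, 1]

log = []
rule_based_brake(obstacle_distance_m=12.0, speed_mps=15.0, log=log)
print("\n".join(log))

rng = np.random.default_rng(0)
w1 = rng.normal(size=(30 * 32, 16)) * 0.01  # untrained stand-in weights
w2 = rng.normal(size=16) * 0.1
frame = rng.random((30, 32))                # fake low-resolution camera frame
print("steering angle:", neural_steering(frame, w1, w2))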

If you're willing to allow amoral machines for the sake of reduced harm, you should be willing to allow amoral neural networks for the sake of further reduced harm. I don't see what principle lets you draw a line in the middle, accepting safe-but-amoral machines only as long as they aren't too safe. You either accept good outcomes as a relevant measure or you don't.

1 comment:

  1. Excellent, there's nothing quite like a blunt object to reinforce proper administration ethics.
