15 June 2012

"The Google-Trolley Problem"

Following up from the previous post on the ethics of autonomous vehicles:
Marginal Revolution | Alex Tabarrok | The Google-Trolley Problem

As you probably recall, the trolley problem concerns a moral dilemma. [...]

I want to ask a different question. Suppose that you are a programmer at Google and you are tasked with writing code for the Google-trolley. What code do you write? Should the trolley divert itself to the side track? Should the trolley run itself into a fat man to save five? If the Google-trolley does run itself into the fat man to save five should Sergey Brin be charged? Do your intuitions about the trolley problem change when we switch from the near view to the far (programming) view?

I think these questions are very important: notice that the trolley problem is a thought experiment, but the Google-trolley problem is a decision that now must be made [because Google has driverless cars].
I think it's most important to think of this not as a dilemma but as a trilemma:
  1. The robot car runs over one person; one person dies.
  2. The robot car runs over five people; five people die.
  3. Robot cars are outlawed because some people do not trust them to make ethical decisions between #1 and #2; ten thousand people die because human drivers are terrible.
Sadly, status quo bias agitates hard for #3.
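If you take Tabarrok's programmer's-eye view literally, the trilemma reduces to a crude expected-fatalities minimization. Here is a toy sketch of that framing; every name and number is illustrative, and nothing here reflects any actual Google code:

```python
# Toy sketch: the "Google-trolley" trilemma as naive harm minimization.
# All action names and fatality counts are illustrative assumptions.

def choose_action(outcomes):
    """Pick the action whose outcome kills the fewest people.

    `outcomes` maps an action name to its expected fatality count.
    """
    return min(outcomes, key=outcomes.get)

# The three options from the trilemma, as expected deaths per policy:
trilemma = {
    "divert (kill one)": 1,
    "stay course (kill five)": 5,
    "ban robot cars (human drivers)": 10_000,
}

print(choose_action(trilemma))  # -> divert (kill one)
```

Of course, the whole point of the trolley problem is that pure body-count minimization is exactly what our intuitions rebel against in the fat-man variant; the sketch shows how much moral content a single `min()` call quietly assumes.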
