A June 24, 2016 New York Times article, “When Machines Will Need Morals,” takes up the morality of AI driving decisions. It suggests that “government requirements for autonomous car morality might be one way to go, though the people surveyed in the Science article say they are not keen on that. Manufacturers could also tailor morality to a buyer’s choice.” The article opens with this example:
You’re driving through an intersection and three people step into the road; the only way to avoid hitting them is to steer into a wall, possibly causing serious injury to yourself. Would you sacrifice yourself?
Given the political debate going on in 2016 over far less complicated issues, I suspect most people would not want the government writing the rules of morality for AI-driven cars.
What do you think?