What are legislators, regulators and academics doing to help the introduction of Autonomous Vehicles (AVs)? I don’t know either.
One session of the 2017 Autonomous Vehicle Safety Regulation World Congress, held in Novi, Michigan, was devoted to ethics. The idea is that AVs must be taught what to do when death is unavoidable (hold that thought). That is, if an accident is imminent, does the AV kill the old lady or the three-month-old baby? Does the AV protect the driver or those around it? Many media outlets, journals and blogs have emphasized this conundrum. MIT Technology Review published "Why Self-Driving Cars Must Be Programmed to Kill," which discussed the behaviors that would need to be embedded in AVs to control casualties. Some of you may be familiar with MIT's Moral Machine, an online survey aimed at understanding what the public thinks AVs should do in an accident that involves fatalities.
But this discussion has conveniently skipped over a prior question: do AVs need to be programmed to kill? The answer is absolutely not. There is no compelling argument for expecting manufacturers to design this sort of capability into their vehicles. In fact, doing so is likely to make matters worse.