Algorithmic Killing

MIT Technology Review posted an article that touches on a topic I have missed in all the articles on autonomous vehicles.

It deals with the moral choices developers will have to make when programming the behaviour of driverless cars faced with a 'moral dilemma'. The comments on that article are ignorant enough to demonstrate that the moral discussion is as relevant as the technological one. This is my comment:

Most commenters seem to think that 'drivers' will be in a position to decide whether or not to accept autonomous vehicles. It seems to me that insurance companies will decide for them. Another group of commenters are true believers in technology, assuming that autonomous technology will never fail. But at some point technology always fails: either because we have not yet really figured out truly cutting-edge technology, or because it is too expensive to guard against every possible chain of events. As soon as the chance of a dramatic event falls below a certain threshold, society 'as a whole' accepts the risk. The reason an LNG-filled truck or train passes through your town is that actuaries at insurance companies have calculated that the risk of the town burning to the ground as a result of exploding tanks is small enough for the insurer to accept liability for. Another reason technology fails is that sometimes humans want it to fail: the hacker and the terrorist share a desire to make the impossible possible.
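To make the threshold idea concrete, here is a minimal sketch of the kind of expected-loss arithmetic an actuary might use: a risk is carried once probability times damage falls below what the insurer is willing to absorb. All names and numbers below are invented for illustration and are not taken from the article or my comment.

```python
def expected_annual_loss(probability_per_year: float, damage: float) -> float:
    """Expected loss per year = chance of the event * cost if it happens."""
    return probability_per_year * damage


# Hypothetical figures only: a catastrophic tank explosion in a town.
p_event = 1e-7            # assumed chance per year (made up)
damage = 2_000_000_000    # assumed cost of the town burning down (made up)
acceptable_loss = 1_000   # assumed expected loss an insurer will carry (made up)

loss = expected_annual_loss(p_event, damage)
print(f"Expected annual loss: {loss:.2f}")
print("Risk accepted" if loss <= acceptable_loss else "Risk declined")
```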

