Originally Posted by thom
When I buy a driverless car, Imma hack the software in it so that if the car has to quickly choose between rolling off a cliff OR ramming into the hypothetical ten people standing in my way, it will kill the ten people in my way.
AI will have to make decisions like that. Think about it.
Thinking about it: how will the AI know whether those ten people are morally bankrupt enough to be worth killing?