This post is a little bit removed from my normal dataviz stuff, but given
Tesla's announcement yesterday, now is the time for this post to come out! Besides, it's finally a chance for me to get to use my
Philosophy degree!
I've been thinking for the last several years about what the driverless-car revolution would really be like. When the first few driverless cars completed
DARPA's Grand Challenge course in 2005 and the future became much more real, an article came out (which I've sadly lost the link to) that made some very salient points which I'll try to summarize now:
- The majority of cars spend their time idle/parked.
- If the price of driverless cars is prohibitive, wouldn't it be easier to spread that cost out with some neighbors since the car is only occupied for a brief time by each person per week?
- The downside of that is most people need cars at certain times: to get to work on time, to get home, to pick up the kids from school, etc.
- Instead of splitting the cost of 'your' car with the neighborhood, what if instead you subscribed to a car 'service' (much like you currently subscribe to Netflix instead of owning all your movies)?
The ultimate point is that, in the somewhat near future, I think we can all agree that a large portion of cars (if not all) will become driverless. They will be controlled by AI and algorithms that will enhance their safety features and reduce car-caused fatalities by a VAST number. We can already see this: the accidents Google's initial driverless cars have been in were largely the fault of other drivers.
Here's the thought experiment I want you to conduct:
Two autonomous cars are next to one another on a bridge. The unthinkable happens: a large item falls off the back of a semi truck, landing directly in the path of one of the cars. Let's say both have at least one person in them and there is a 100% chance that the inhabitants of ONE of the vehicles will not survive the crash (I think we've all seen
Final Destination...).
OK, so who gets to live? Software has to make that decision. Let's assume that both cars have the same AI/decision-making software in them (we'll get to a different idea in a sec). We can assume that at this point in automated driving, cars would communicate with one another for enhanced safety, warning each other about large obstructions, potholes, etc. What if one of the vehicles that was going to crash had a single person in it and the other vehicle had a family of four?
We would think at that point the cars would do the math and calculate that more lives = better! But what if the single individual was someone working on groundbreaking research into cancer treatments? Do we want cars to rank our lives? If you recall, this type of software biasing for life/death was one of the crucial turning points (spoilers) for Will Smith's character in the movie
I, Robot.
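To make the dilemma concrete, the 'more lives = better' calculation (and the disturbing 'rank our lives' extension) could be sketched in a few lines of Python. This is a thought-experiment toy with made-up occupant counts and value weights I invented for illustration, nothing like real autonomous-vehicle software:

```python
# A purely hypothetical sketch — these function names and 'value' weights are
# my own illustration, not any manufacturer's actual decision logic.

def choose_vehicle_to_sacrifice(occupants_a: int, occupants_b: int) -> str:
    """Naive 'more lives = better' rule: sacrifice the car with fewer
    occupants. Ties go to car A arbitrarily — itself an ethical choice!"""
    return "A" if occupants_a <= occupants_b else "B"

def choose_by_ranked_lives(values_a: list[float], values_b: list[float]) -> str:
    """The unsettling extension: each life carries a weight (the cancer
    researcher scores higher?), and the lower-total car takes the crash."""
    return "A" if sum(values_a) <= sum(values_b) else "B"

# Single occupant vs a family of four: the single-occupant car is sacrificed.
print(choose_vehicle_to_sacrifice(1, 4))          # A
# But if that one person is weighted heavily, the family's car loses instead.
print(choose_by_ranked_lives([10.0], [1.0] * 4))  # B
```

Writing it out this way makes the problem obvious: every line, even the tie-breaker, encodes an ethical judgment that some engineer (or regulator) had to pick.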
At the beginning of the film, Will Smith's character is in a car accident, and a robot jumps into a freezing lake as the cars are sinking, pulling him (and NOT the young girl in the other car) to safety despite his protests that the robot should save her instead.
Will we get to choose? Will future cars ask for our preferences in these types of ethical situations? Could I say, 'In the event that my car is spinning and going to collide with an object, please make the impact on my side rather than my daughter's'? Could we value higher numbers of lives over our own, or will the software choose for us?
Additionally... let's return to the idea of differing software. Would different manufacturers ship different ethical rules in their cars? If two cars from different auto-makers were on a bridge, would they fight over who gets to live?
Anyway... I know it's not my normal dataviz thing, but I wanted to put this out here to get at least a small number of you thinking about what driverless cars mean for robotic ethics. It's a HUUUUGE deal (in my opinion) and I figured I should open up a dialogue about it! Comments are always welcome on my twitter
@wjking0, so shoot me your thoughts and let's have a discussion about car AI ethics!