Tuesday, June 7, 2016

Driverless Cars - An Elixir?

This blog centers on Kirkpatrick (2015). While most people, myself included, seem eager to embrace the technology and notion of driverless cars, Kirkpatrick cleverly builds excitement toward the moral pitfalls driverless cars will likely create when they eventually come to market. He begins by heading in a direction seemingly opposite to what we might expect: "The driverless cars of the future are likely to be able to outperform most humans during routine driving tasks, since they will have greater perceptive abilities, better reaction times and will not suffer from distractions" (p. 19). He quickly follows this assertion with a quote from a professor (Bryant Walker Smith) at the Transportation Research Board of the National Academies: "So 90% of all crashes are caused, at least in part, by human error. As dangerous as driving is, the trillions of vehicle miles that we travel every year mean that crashes are nonetheless a rare event for most drivers." Smith concludes, "The hope--though at this point it is a hope--is that automation can significantly reduce these kinds of crashes without introducing significant new sources of errors" (p. 19). This quote plants the seed that there is only 'hope', and that driverless cars may introduce new sources of errors. What could those errors be?
Well done, Kirkpatrick! You have provided reasons to believe driverless cars will be a great technological addition to the 21st century, and in the same breath injected a swirling sense of doubt that driverless cars will be the elixir for the human tragedy found on millions of miles of roads around the world. In the US and Canada alone, over 32,000 lives were lost in vehicular accidents in 2015. This figure does not include the maimed or permanently injured victims, so just think of the impact driverless cars could have if we could eliminate human error. Gillis (2015) provides a glimpse: "What overseers will make of Google X (car) remains to be seen, but even the toughest skeptics have trouble denying the potential (safety) of its car. After 1.5 million self-driven kilometers on U.S. roads, the test cars have yet to cause a collision." Think about that for a minute: 1.5 million kilometers is approximately 932,000 miles. My driving experience spans the better part of 38 years, and while I have avoided serious collisions involving serious injury or death, the number of miles I have driven is probably close to 932,000. How many collisions have occurred in my driving experience? If memory serves me correctly, about seven, four of the fender-bender variety (my latest was in March of this year!). How many have you had in your lifetime?
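That 1.5-million-kilometer figure is easy to sanity-check. A quick sketch of the conversion (the conversion factor is the standard definition of the international mile; the mileage figure is from Gillis):

```python
# Sanity check: Gillis (2015) reports roughly 1.5 million self-driven
# kilometers for Google's test fleet. How many miles is that?
KM_PER_MILE = 1.609344  # standard international mile, in kilometers

km_driven = 1_500_000
miles_driven = km_driven / KM_PER_MILE
print(f"{miles_driven:,.0f} miles")  # about 932,000 miles
```

So the article's "approximately 932,000 miles" checks out.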
Getting caught up in the excitement and emotion of the potential of driverless cars is a dangerous pitfall in itself: if we let positive emotions run amok, we can lose the healthy positive-to-negative ratio that helps us stay grounded, or in other words 'real'. Oishi, as cited by Fredrickson (2013), suggests that "ultrahappy" employees "may become complacent toward problems and opportunities" (p. 5). This supports my own assertion that we need some negativity to stay grounded when thinking critically, particularly about driverless cars. So now, back to Kirkpatrick's pitfalls of driverless cars.
The pitfall, as the article title suggests, is the moral dilemma that an autonomous vehicle presents: the unavoidable accident. "However, should an unavoidable crash situation arise, a driverless car's method of seeing and identifying potential objects or hazards is different and less precise than the human eye-brain connection, which likely will introduce moral dilemmas with respect to how an autonomous vehicle should react" (p. 19). The unavoidable accident: could this be the dilemma that sinks, or at least slows, the enthusiasm to bring driverless cars to market? It would seem this ethical dilemma will be weighed in proportion to how frequently "unavoidable accidents" occur. Over the 932,000 miles driven by the Google X car, 17 unavoidable accidents occurred, all caused by human drivers (Gillis, 2015). Using this as a baseline, we can be pretty sure the current technology would still be involved in a significant number of unavoidable accidents today. So the concern has foundation and is real. What are the moral dilemmas, exactly?
Well, examining the dilemmas critically would require an in-depth look at the ethics involved in an autonomous machine that has no moral values (LaFollette, 2007, p. 227), and quite honestly the class I just put under my belt wore me out mentally... and you guessed it, it centered on ethics. Leadership Ethics and Corporate Responsibility, to be exact. While the class was one of my personal favorites, the mental drain was incredible. So in short: highly recommended, but if you take it as an elective, set aside some time to regenerate.
Fortunately, this article does not delve into every ethical angle that could be taken, but rather examines what could be thought of as normative views: "…in the event of an unavoidable crash, does the car's programming simply choose the outcome that likely will result in the greatest potential for safety of the driver and its occupants, or does it choose an option where the least amount of harm is done?" (Kirkpatrick, 2015, pp. 19-20). This is a classic ethical dilemma in which humans would likely choose the first option, employing a deontological approach where the rule that family comes before strangers prevails. A computer, on the other hand, would likely be programmed to choose the outcome that produces the least total harm. This line of ethical reasoning is consequentialism (LaFollette, 2007, Chapter 2). How, then, should we program an autonomous vehicle confronted with an unavoidable accident?
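To make the two programming philosophies concrete, here is a minimal, purely hypothetical sketch. The `Outcome` class, the 0-10 harm scores, and both selection rules are my own illustration, not anything from Kirkpatrick's article or any real vehicle's software:

```python
# Hypothetical sketch of two crash-response policies for an
# autonomous vehicle. All names and scores are illustrative.
from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_harm: int   # expected harm to the car's occupants (0-10)
    bystander_harm: int  # expected harm to people outside the car (0-10)

def deontological_choice(outcomes):
    """Rule-based: protect the occupants first, whatever the cost outside."""
    return min(outcomes, key=lambda o: o.occupant_harm)

def consequentialist_choice(outcomes):
    """Outcome-based: minimize total harm, regardless of who bears it."""
    return min(outcomes, key=lambda o: o.occupant_harm + o.bystander_harm)

options = [
    Outcome(occupant_harm=6, bystander_harm=0),  # occupants absorb the crash
    Outcome(occupant_harm=1, bystander_harm=8),  # swerve toward bystanders
]

# The occupant-first rule swerves; the least-total-harm rule does not.
print(deontological_choice(options))    # Outcome(occupant_harm=1, bystander_harm=8)
print(consequentialist_choice(options)) # Outcome(occupant_harm=6, bystander_harm=0)
```

The two policies disagree on the very same inputs, which is exactly the dilemma Kirkpatrick raises: someone has to decide, in advance, which rule the car ships with.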
So, in the end, not so fast driverless cars. We humans still have some decisions to make!

References:
Fredrickson, B. L. (2013, July 15). Updated thinking on positivity ratios. American Psychologist. Advance online publication. doi: 10.1037/a0033584

Gillis, C. (2015, September 21). The human factor: Google thinks safe, driverless cars can be ready for sale in four years. Teaching them to navigate around all the terrible drivers on the road might just be the easy part. Maclean's, 128(37), 41.
Kirkpatrick, K. (2015, August). The moral challenges of driverless cars. Communications of the ACM, 58(8), 19-20.

LaFollette, H. (2007). The practice of ethics. Malden, MA: Blackwell Publishing.
