In 2014, more than 32,000 people were killed in car crashes in the United States. In 2012, more than 2 million Americans visited the emergency room as a result of car crashes. An estimated 94 percent of the crashes that caused these injuries and fatalities are attributable to human choice or error.
These are sobering statistics. Because human behavior is at the heart of them, they raise an interesting question: Once we can take people out of the equation, could driving your own car become as socially frowned upon as other risky habits, like smoking?
It’s less an intriguing hypothetical than a near-future public health question, thanks to the rapid development and emergence of self-driving cars. A new federal policy for automated vehicles from the U.S. Department of Transportation (DOT) has just given them another nudge forward.
Self-driving cars have progressed in leaps and bounds in recent years. In 2004, the Defense Advanced Research Projects Agency (DARPA) launched an autonomous vehicle grand challenge: Build a robotic vehicle able to “navigate 300 miles of rugged terrain between Los Angeles and Las Vegas.” In the first event, the top-scoring vehicle managed only 7.5 miles.
Twelve years later, autonomous vehicles are heading toward becoming commonplace. The Tesla Model S, for instance, comes equipped with the company’s “autopilot.” Top car manufacturers like Ford and Volvo are investing heavily in self-driving vehicles, and Google and Uber already have test vehicles on the road.
Granted, these cars don’t have to navigate the desert terrain of the DARPA challenge (although it could be argued that urban roads present an altogether tougher challenge). And they’re still far from perfect (as recent crashes involving Google and Tesla vehicles demonstrate). Even so, progress over the past decade has been meteoric, and as self-driving cars learn from each near-miss, scrape, and full-blown crash, it’s likely to become faster still.
As we move toward a driverless car society, the social impacts are likely to be profound. The anticipated reduction in road deaths and injuries alone makes a complete transition to driverless vehicles a compelling public health proposition. There are likely to be other benefits as well: increased mobility and autonomy for the elderly and the disabled, less gridlock, the chance for multitaskers to catch up on office email or grab an extra 30 minutes of sleep on the way to work.
Potential benefits like these, and the risks of not realizing them, have prompted the DOT to pull together the just-released federal policy for automated vehicles. (DOT uses the term “highly automated vehicles,” which includes those that interact with each other and traffic control systems, as well as drive themselves.)
A smart policy around these smart cars
To get a sense of just how smart the new DOT policy is, it’s worth measuring it against a concept that’s been around for a while now: responsible innovation (or, if you’re in Europe, responsible research and innovation).
In 2013, three British academics, Jack Stilgoe, Richard Owen, and Phil Macnaghten, published their ideas on a framework for responsible innovation. They (and many of their colleagues, including myself) were interested in how we can develop powerful and complex new technologies in today’s highly interconnected world so that they benefit society, rather than causing more problems than they solve.
This trio wasn’t the first to grapple with how to innovate responsibly, and they’re far from the last. To me, their framework has the benefits of intellectual rigor along with real-world applicability.
Stilgoe and his co-authors suggest that four things are important for innovation to proceed responsibly:
• Anticipate what’s coming down the pike, and what it’s likely to do.
• Be aware of limitations and open to new ideas.
• Include key stakeholders—including members of the public—in policymaking.
• Be responsive to emerging needs, challenges, and opportunities.
The new DOT policy does a pretty good job of ticking the boxes here. It anticipates where autonomous vehicle technologies are going, and the potential benefits and pitfalls. It acknowledges the limitations of current understanding on how to ensure responsible development. It emphasizes the need to work with members of the public and others as the technology matures. And, it’s designed to evolve and grow alongside the technology and its social impacts.
This is a refreshing change from attempting to retrofit existing regulations to new technologies, which is often the modus operandi for government agencies. It indicates a willingness at the federal level to promote successful and responsible development through innovative policymaking. Instead of trying to dictate what self-driving cars should look like, the DOT has developed flexible rules that encourage manufacturers to innovate toward an autonomous vehicle industry that’s socially beneficial as well as economically viable.
This makes the new policy an interesting and unfolding case study in the governance and regulation of emerging technologies. It could end up being a useful model for regulating other tech innovations—and a fascinating experiment in policy-driven public health intervention.
Rule-making and public health risks
To understand this, consider those stats on car crash deaths in the United States. They represent a substantial public health challenge: according to the National Safety Council, car crashes are associated with one in every 113 deaths in the United States each year.
Reducing crash-related deaths by a factor of 10 through the widespread introduction of self-driving cars (not an unrealistic projection, given how many crashes are due to human behavior) would cut that to roughly one in every 1,000 deaths. As automated vehicle technologies adapt and mature, this could conceivably be pushed as low as one in 10,000, or even lower, putting the chances of being killed in a car crash on a par with dying as a result of exposure to excessive natural heat. The impact on injuries and associated medical expenses is likely to be even more significant.
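The back-of-envelope arithmetic behind these projections can be sketched in a few lines. This is a simplified model, not a formal epidemiological calculation: it assumes the National Safety Council figure of one crash-related death per 113 total deaths as a baseline, and treats total deaths as roughly constant.

```python
# Sketch of the crash-death projections discussed above.
# Baseline: about 1 in every 113 U.S. deaths is crash-related
# (National Safety Council figure cited in the article).
baseline_share = 1 / 113

# Project the share of all deaths if crash deaths fell by 10x and 100x,
# assuming the total number of deaths stays roughly the same.
for reduction_factor in (10, 100):
    new_share = baseline_share / reduction_factor
    print(f"{reduction_factor}x fewer crash deaths -> "
          f"about 1 in {round(1 / new_share):,} deaths")
```

A tenfold reduction yields about one in 1,130 deaths, and a hundredfold reduction about one in 11,300, consistent with the article’s rounded figures of one in 1,000 and one in 10,000.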
These projections, speculative as they are, are compelling enough to suggest that, at some point, human-driven vehicles will be seen as a public health risk to be managed and ultimately eliminated.
This may sound nearly inconceivable in today’s car culture, but the dramatic shift in attitudes toward smoking in recent years is a testament to how public health campaigns, regulations, and a changing culture can radically alter social norms.
I suspect that this prospect will induce a shudder of fear in some. Protestations about an erosion of the American way of life and a restriction of personal liberties are bound to follow. This in itself raises questions around what “responsibility” means: Does it simply mean reducing the risk of injury and death, or does it also mean protecting other things that are important to people, like freedom and culture?
Because of questions like these, responsible innovation depends on ensuring everyone potentially touched by a new technology has the chance to be a part of guiding how it’s developed and used. It’s why the ability to constantly evaluate, and if necessary, adjust the trajectories of emerging technologies is so important.
The new DOT policy for self-driving cars is a solid step in the right direction. Whether it is successful in practice remains to be seen, but the signs are encouraging.
In the meantime, the public health expert in me is excited by the prospect that, through smart policies and innovative technologies, we could one day make crash-related deaths and injuries a thing of the past.
This article was originally published on The Conversation.