What’s Holding Up the Autonomous Vehicle Takeover?

Bronwyn Howell

An enduring mystery of recent times (COVID-19 delays excepted) is why it is taking so long for autonomous vehicles (AVs) to revolutionize our roadways. Only a scant decade ago, predictions held that by the mid-2020s, professional truck and public transport driving would be consigned to the dustbin of history, alongside blacksmithing and other professions annihilated by the industrial revolution. But this prediction has hardly panned out. As I wrote previously:

Tesla CEO Elon Musk tweeted in 2016 that the company’s cars would be capable of a cross-country automated drive by “next year.” In 2019, he tweeted that “everyone with Tesla Full Self-Driving” would be able to do the same. Yet as 2022 dawns, it seems we are no nearer to ubiquitous autonomous vehicle nirvana than in 2012. Rather, the optimistic forecasts are now being downplayed, with one now suggesting that driverless vehicles will not be common and affordable until the 2050s or 2060s.

I also canvassed some of the reasons why the robots-are-stealing-our-jobs meme may be a trifle overdramatic. But it is equally worth asking why, given the great promises of increased productivity from new technologies, we haven’t actually seen the levels of progress anticipated from AV adoption, which is something of a poster child for the artificial intelligence revolution. After all, it doesn’t seem that the development of the requisite technology has been an impediment so far. The evidence of AVs’ potential is amply demonstrated in publicly available videos.

One plausible explanation is that even though the technology has been proven, sorting out the institutional environment in which AVs will operate remains a logjam. AVs contain hundreds of thousands of software-controlled sensors and processors interacting in a complex network system with each other just to run the car. Myriad others are required to detect the near-infinite array of situations the vehicle may find itself in when interacting with pedestrians, dogs, balls, weather, random hazards, and other vehicles using its right of way—some of which may be human-controlled and therefore less predictable than others run by the same or similar software.

An important issue to address is liability when something goes wrong. To be sure, transport licensing authorities will be most unwilling to let these vehicles loose on the roads without first having appropriate regulations in place. The Geneva (1949) and Vienna (1968) international conventions on road traffic assume all vehicles on the road (and if need be, the horses and oxen that power them) are controlled by a human being and ultimately hold that the human is responsible for any damage the vehicle or its power source may cause. But in the AV world, who is responsible if there isn’t a human driver? The vehicle owner may have some liability—as applies now, when owners must ensure vehicles meet specified safety standards (e.g., fitted seatbelts, non-bald tires, suspension in warrantable condition).

But when the requisite safety features are software-governed, how can owners know or even verify that the correct version of software is loaded and operational at any given moment? Should it be the owner’s responsibility anyway? After all, owners are not especially reliable at keeping their cellphone software updated, even for important safety applications (for example, COVID-19 tracing apps). And who even is the “owner”? If we take the approach used for cellphones and their apps, I may own the plastic and metal my phone is made of, and I have a contractual liability to pay a network operator for it to be connected, but I have negligible ownership rights to anything else on it.

At the highest possible level, one might assume that vehicle manufacturers, who specify which parts are put together for a particular AV model, could be held responsible. Yet they too face a similar problem: They may own some of the software used for vehicular controls, but for the most part rely on third-party manufacturers for components and software. They won’t want to be left exposed by one of these failing in its obligations to keep software on the components up-to-date or, even worse, failing financially and leaving no viable entity left to assume responsibility for costly damages awarded by the courts.

Furthermore, significant issues arise when the legal jurisdiction of a potentially liable entity (e.g., manufacturer) is not the jurisdiction where the car is used and has its mishap. If different jurisdictions have different rules and standards, how is either a vehicle or component manufacturer to respond?

Fortunately, these matters are being addressed by a United Nations working party on automated/autonomous and connected vehicles. There, hopefully, sufficient agreement can be reached on both the technical and institutional standards that would allow AVs to achieve mainstream use. But international consensus is notoriously slow to emerge, and institutions are notoriously difficult to change, so this may take some time.

Courtesy: (AEI.org)