How to engineer self-driving cars with risk and responsibility in mind
However careful manufacturers and drivers are, autonomous vehicles will inevitably be involved in accidents. A rigorous approach to the design of their system architectures can help establish clarity around questions of legal liability.
Take, for example, an autonomous car on a two-way road following another car that is overtaking a parked one. A fourth car, with a stream of traffic behind it, is approaching in the opposite direction. Should the autonomous car stop or follow the car in front? Stopping is likely to give an unacceptable journey time, as the car is already behind schedule due to traffic conditions. Does following necessitate speeding up and breaking a speed limit? It’s currently not clear where liability lies if the driver isn’t actually controlling the car and the manoeuvre results in an accident or a breach of the law.
In a Comment piece for E&T earlier this year, Steven Baker, a partner at global law firm White & Case, describes the legal problems in the UK and calls on regulators and manufacturers to collaborate on solving them. One fundamental problem, however, is finding a common approach that reconciles qualitative legal terminology, which requires judicial interpretation, with the quantitative precision of engineering specifications.
As reported more recently in E&T, the UK government launched a consultation in August on the use of ‘Automated Lane Keeping System’ technology that can take over control of a vehicle at low speeds, keeping it safely in lane on motorways. Complemented by a three-year Law Commission project, which has presented detailed examinations of the legal problems with autonomous cars, this could pave the way for driverless cars being introduced on British roads as early as next year.
Reducing risks until they are ‘As Low as Reasonably Practicable’ (ALARP), or an international equivalent, is part of good engineering practice. It means that the autonomous car must be designed so that the risk of adverse consequences is minimised. ALARP also provides manufacturers with clarity: risks may exist, but good design should provide a defence against litigation.
‘Assignment of Legal Responsibilities for Decisions by Autonomous Cars Using System Architectures’, a paper I co-authored with my UCL colleague Steve Hailes that was published recently in IEEE Transactions on Technology and Society, suggests a solution to the legal and engineering problem: separate decision-making from the authority to act on the decision, and make that separation an inherent part of the car-system’s architecture. The principle is a simple one, and two familiar examples show the division. When a cruise-control system is engaged, the driver has given it authority to act on its decisions to speed up or slow down, unless the driver deliberately takes that authority back. A satnav system, on the other hand, can recommend a route but cannot act on it; only the driver is authorised to follow it, or to choose not to.
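The division can be captured directly in software. Here is a minimal sketch in Python; the class, attribute and example names are illustrative assumptions of my own, not the formal architecture defined in the paper:

```python
from dataclasses import dataclass


@dataclass
class Node:
    """A decision-making node; deciding and acting are separate concerns."""
    name: str
    may_act: bool  # authority to act on its own decisions

    def propose(self, decision: str) -> str:
        if self.may_act:
            return f"{self.name} acts: {decision}"
        return f"{self.name} recommends: {decision} (driver must authorise)"


# Cruise control holds authority while engaged; the satnav never does.
cruise_control = Node("cruise control", may_act=True)
satnav = Node("satnav", may_act=False)

print(cruise_control.propose("slow down to 60 mph"))
print(satnav.propose("take the next exit"))

# The driver can take authority back at any time, e.g. by braking.
cruise_control.may_act = False
print(cruise_control.propose("speed up to 70 mph"))
```

The design point is that the same decision-making logic can sit behind both nodes; what differs, and what can be specified and tested, is whether the node is authorised to act.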
Authorisation of lethal actions by a human is an inherent part of military architectures, based on criteria derived from international humanitarian law. In my IET book ‘Systems Engineering for Ethical Autonomous Systems’, that law is interpreted in engineering terminology for autonomous weapon systems. The IEEE paper applies the same techniques to cars at SAE J3016 autonomy levels 2 and above, using the 4D/RCS reference model architecture. 4D/RCS is based on a hierarchy of autonomous nodes with defined responsibilities to act, and can be applied across the different driving-task timescales used in J3016.
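As a rough illustration of that layering, each node in the hierarchy plans and acts over a different timescale. The node names and planning horizons below are a simplification of my own, not values specified in 4D/RCS or J3016:

```python
# A hypothetical, much-simplified 4D/RCS-style hierarchy: each node plans
# over a different timescale, from route choice down to actuator control.
hierarchy = [
    ("route planner",         "minutes"),
    ("manoeuvre planner",     "seconds"),
    ("trajectory controller", "fractions of a second"),
    ("actuator servo",        "milliseconds"),
]

for node, horizon in hierarchy:
    print(f"{node:22} plans and acts over {horizon}")
```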
The IEEE Transactions paper describes the owner, the driver, the car and its autonomous subsystems as nodes in this architecture and examines whether each node has the authority to act on its decisions. Every node, whether a human or a subsystem, can be autonomous and can use artificial intelligence and machine learning. Each node’s authority to act is limited by a function called ‘authorised power’, which can be specified in engineering terminology to meet requirements set using technical and legal criteria. A node that lacks the authority to act will seek new information, refer the decision to a node that does have the authority (often, but not necessarily, the driver), or enter a fail-safe mode. This is analogous to a management structure in which a manager has defined powers that can be delegated, but refers a decision to the next level up when it exceeds those powers.
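A sketch of how a node might apply this rule could look like the following. It assumes a single speed limit as the only dimension of authorised power, which is a simplification of my own; the paper’s ‘authorised power’ is a richer, formally specified function, and for brevity the sketch omits the option of seeking new information:

```python
from enum import Enum, auto


class Outcome(Enum):
    ACTED = auto()
    FAIL_SAFE = auto()


class Node:
    """An autonomous node whose actions are bounded by its authorised power."""

    def __init__(self, name, max_speed_mph, parent=None):
        self.name = name
        self.max_speed_mph = max_speed_mph  # one dimension of authorised power
        self.parent = parent  # next level up in the hierarchy (e.g. the driver)

    def request(self, target_speed_mph):
        # Within authorised power: act on the decision directly.
        if target_speed_mph <= self.max_speed_mph:
            print(f"{self.name} acts: set speed to {target_speed_mph} mph")
            return Outcome.ACTED
        # Outside authorised power: refer to an authorised node if one exists...
        if self.parent is not None:
            print(f"{self.name} refers the decision to {self.parent.name}")
            return self.parent.request(target_speed_mph)
        # ...otherwise enter a fail-safe mode.
        print(f"{self.name} enters fail-safe mode")
        return Outcome.FAIL_SAFE


driver = Node("driver", max_speed_mph=70)
lane_keeper = Node("lane-keeping subsystem", max_speed_mph=40, parent=driver)

lane_keeper.request(35)  # within the subsystem's authority: acts directly
lane_keeper.request(55)  # beyond its authority: referred to the driver
driver.request(80)       # beyond every node's authority: fail-safe
```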
The advantage of this architectural approach is that it can be used to identify risks and their consequences, and to clarify the division of responsibilities between humans and the car system. It applies down to the lowest-level nodes, whether autonomous or not, and can be interpreted and implemented by manufacturers and their suppliers using their own processes and management philosophies. Used as an inherent part of the design process, it allows the car’s systems to be designed and tested to minimise their risk of causing an accident using standard engineering procedures. This will simplify design and reduce financial risk for manufacturers and their supply chains. It also helps meet the Law Commission’s recommendation “for a clear boundary distinguishing conventional driving (with or without automated assistance) and ‘high automation’ (or ‘self-driving’)”.