More than 1000 technologists and researchers have signed an open letter issued by the Future of Life Institute, a volunteer-run research organisation that works to mitigate existential risks to humanity, calling for artificial intelligence and smart machine research to focus on developing systems that are "robust and beneficial".
"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls," the letter states.
Signatories included renowned physicist Stephen Hawking; Tesla Motors and SpaceX founder Elon Musk; Skype co-founder Jaan Tallinn; Facebook AI lab head Yann LeCun; Google director of research Peter Norvig; DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman; and numerous other tech industry luminaries and university-based researchers.
The open letter states that "progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI".
Attached to the open letter is a document outlining research priorities (PDF) for developing "robust and beneficial" artificial intelligence. The document asks legal and ethical questions of rapidly advancing intelligence and autonomy in machines.
It looks at liability and law for autonomous weapons and vehicles, machine ethics, and the privacy implications of AI systems.
“Can lethal autonomous weapons be made to comply with humanitarian law?... If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability are distributed, what technical realities and forecasts should inform these questions, and how should ‘meaningful human control’ over weapons be defined?" the document asks.
“Should legal questions about AI be handled by existing (software and internet-focused) ‘cyberlaw’, or should they be treated separately?
“What role should computer scientists play in the law and ethics of AI development and use?”
The document proposes four focus areas for ensuring robustness and high assurance in systems: verification, validity, security and control.
The seL4 kernel is cited as an example of verification in systems, where there’s “high confidence that a system will satisfy a set of formal constraints”.
It says smarter machine learning algorithms, and methods to ensure a system will fulfil a set of formal constraints, need to be developed.
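One lightweight relative of full formal verification is runtime monitoring, where a system's outputs are checked against a set of formal constraints before they are acted on. The sketch below is purely illustrative (the `make_monitor` wrapper, the toy speed controller and its constraints are invented for this example, and are not from seL4 or the research priorities document):

```python
# Illustrative sketch: a toy runtime monitor that checks a controller's
# outputs against a set of named formal constraints (invariants).
# All names here are hypothetical, invented for this example.

def make_monitor(constraints):
    """Return a decorator that wraps a controller so every output is
    checked against each named constraint before being returned."""
    def monitor(controller):
        def checked(state):
            action = controller(state)
            for name, holds in constraints.items():
                if not holds(state, action):
                    raise ValueError(f"constraint violated: {name}")
            return action
        return checked
    return monitor

# Example constraints for a toy speed controller: the resulting
# speed must stay between 0 and 100.
constraints = {
    "non_negative_speed": lambda speed, delta: speed + delta >= 0,
    "speed_limit": lambda speed, delta: speed + delta <= 100,
}

@make_monitor(constraints)
def controller(speed):
    # Naive policy: always accelerate by 10 units.
    return 10

print(controller(50))  # resulting speed 60 is within limits: allowed
```

Calling `controller(95)` would instead raise an error, because the resulting speed of 105 breaks the `speed_limit` constraint. A real verified system such as seL4 proves such properties ahead of time rather than trapping violations at runtime.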
Validity "is concerned with undesirable behaviours that can arise despite a system’s formal correctness", the document states.
“Designing simplified rules – for example, to govern a self-driving car’s decisions in critical situations – will likely require expertise from both ethicists and computer scientists. Computational models of ethical reasoning may shed light on questions of computational expense and the viability of reliable ethical reasoning methods."
The DARPA SAFE program is cited as an example where system security vulnerabilities can be significantly reduced.
“[It] aims to build an integrated hardware-software system with a flexible metadata rule engine, on which can be built memory safety, fault isolation, and other protocols that could improve security by preventing exploitable flaws,” the document states.
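To make the tagged-metadata idea concrete, here is a deliberately simplified toy in Python: every memory cell carries a metadata tag, and a rule checks the tag on each write, trapping stray writes past an allocation. This is a loose sketch of the general tagged-architecture concept, not the actual DARPA SAFE design, and all names are invented for this example:

```python
# Toy illustration of tagged memory with a metadata rule on writes.
# Not the actual DARPA SAFE design; names are hypothetical.

class TaggedMemory:
    def __init__(self, size):
        # Each cell holds a (value, tag) pair; untagged cells are "free".
        self.cells = [(0, "free") for _ in range(size)]

    def alloc(self, addr, length):
        # Tag a region as belonging to an allocated object.
        for i in range(addr, addr + length):
            self.cells[i] = (0, "obj")

    def store(self, addr, value):
        # Metadata rule: writes are only permitted to "obj" cells,
        # so a write past the end of an allocation is trapped.
        _, tag = self.cells[addr]
        if tag != "obj":
            raise MemoryError(f"policy violation: write to {tag} cell {addr}")
        self.cells[addr] = (value, tag)

mem = TaggedMemory(16)
mem.alloc(0, 4)        # cells 0-3 belong to one object
mem.store(3, 42)       # in bounds: allowed
# mem.store(4, 99)     # past the allocation: would raise MemoryError
```

The point of the hardware-software approach the document describes is that such rules are enforced by a metadata rule engine on every access, rather than relying on each program to check its own bounds.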
On control, the document argues that humans need to be kept in the loop when designing smart machines.
“The design of systems and protocols for transition between automated navigation and human control is a promising area for further research. Such issues also motivate broader research on how to optimally allocate tasks within human-computer teams, both for identifying situations where control should be transferred, and for applying human judgement efficiently to the highest-value decisions.”
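The task-allocation idea above can be sketched in a few lines: the automated system estimates its confidence in each situation and hands control to the human when confidence drops below a threshold. The function name and threshold below are illustrative assumptions, not from the document:

```python
# Hypothetical sketch of human-computer task allocation: routine,
# high-confidence decisions stay automated; ambiguous ones are
# transferred to a human. Names and threshold are invented.

def allocate_control(confidence, threshold=0.8):
    """Return which party should act on a given decision."""
    return "machine" if confidence >= threshold else "human"

# Confidence estimates for four situations: only the ambiguous one
# is escalated, concentrating human judgement where it matters most.
decisions = [0.99, 0.95, 0.42, 0.91]
print([allocate_control(c) for c in decisions])
# → ['machine', 'machine', 'human', 'machine']
```

A real handoff protocol would also have to manage the transition itself, for example giving the human enough warning and context to take over safely, which is part of what the document flags as open research.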
Follow Rebecca Merrett on Twitter: @Rebecca_Merrett