John Fox on AI safety


John Fox is an interdisciplinary scientist with theoretical interests in AI and computer science, and an applied focus in medicine and medical software engineering. After training in experimental psychology at Durham and Cambridge Universities, and post-doctoral fellowships at CMU and Cornell in the USA and in the UK (MRC), he joined the Imperial Cancer Research Fund (now Cancer Research UK) in 1981 as a researcher in medical AI. The group’s research was explicitly multidisciplinary, and it subsequently made significant contributions in basic computer science, AI and medical informatics, and developed a number of successful technologies which have been commercialised.

In 1996 he and his team were awarded the 20th Anniversary Gold Medal of the European Federation of Medical Informatics for the development of PROforma, arguably the first formal computer language for modeling clinical decisions and processes. Fox has published widely in computer science, cognitive science and biomedical engineering, and was the founding editor of the Knowledge Engineering Review (Cambridge University Press). Recent publications include a research monograph, Safe and Sound: Artificial Intelligence in Hazardous Applications (MIT Press, 2000), which deals with the use of AI in safety-critical fields such as medicine.

Luke Muehlhauser: You’ve spent many years studying AI safety issues, in particular in medical contexts, e.g. in your 2000 book with Subrata Das, Safe and Sound: Artificial Intelligence in Hazardous Applications. What kinds of AI safety challenges have you focused on in the past decade or so?


John Fox: From my first research job, as a post-doc with AI founders Allen Newell and Herb Simon at CMU, I have been interested in computational theories of high level cognition. As a cognitive scientist I have been interested in theories that subsume a range of cognitive functions, from perception and reasoning to the uses of knowledge in autonomous decision-making. After I came back to the UK in 1975 I began to combine my theoretical interests with the practical goals of designing and deploying AI systems in medicine.

Since our book was published in 2000 I have been committed to testing the ideas in it by designing and deploying many kinds of clinical systems, and demonstrating that AI techniques can significantly improve the quality and safety of clinical decision-making and process management. Patient safety is fundamental to clinical practice so, alongside the goals of building systems that can improve on human performance, safety and ethics have always been near the top of my research agenda.


Luke Muehlhauser: Was it straightforward to address issues like safety and ethics in practice?


John Fox: While our concepts and technologies have proved to be clinically successful, we have not achieved everything we hoped for. Our attempts to ensure, for example, that practical and commercial deployments of AI technologies should explicitly honor ethical principles and carry out active safety management have not yet achieved the traction that we need. I regard this as a serious cause for concern, and as unfinished business in both scientific and engineering terms.

The next generation of large-scale knowledge based systems and software agents that we are now working on will be more intelligent and will have far more autonomous capabilities than current systems. The challenges for human safety and ethical use of AI that this implies are beginning to mirror those raised by the singularity hypothesis. We have much to learn from singularity researchers, and perhaps our experience in deploying autonomous agents in human healthcare will offer opportunities to ground some of the singularity debates as well.


Luke: You write that your “attempts to ensure… [that] commercial deployments of AI technologies should… carry out active safety management” have not yet received as much traction as you would like. Could you go into more detail on that? What did you try to accomplish on this front that didn’t get adopted by others, or wasn’t implemented?


John: Having worked in medical AI from the early ‘seventies, I have always been keenly aware that while AI can help to mitigate the effects of human error, there is a potential downside too. AI systems could be programmed incorrectly, or their knowledge could prescribe inappropriate practices, or they could have the effect of deskilling the human professionals who have the final responsibility for their patients. Despite the well-known limitations of human cognition, people remain far and away the most versatile and creative problem solvers on the planet.

In the early ‘nineties I had the opportunity to set up a project whose goal was to establish a rigorous framework for the design and implementation of AI systems for safety-critical applications. Medicine was our practical focus, but the RED project1 was aimed at the development of a general architecture for autonomous agents that could be trusted to make decisions and carry out plans as reliably and safely as possible, and certainly to be as competent, and hence as trustworthy, as human agents in comparable tasks. This is obviously a hard problem, but we made sufficient progress on theoretical issues and design principles that I thought there was a good chance the techniques might be applicable in medical AI, and maybe even more widely.

I thought AI was like medicine, where we all take it for granted that medical equipment and drug companies have a duty of care to show that their products are effective and safe before they can be certified for commercial use. I also assumed that AI researchers would similarly recognize that we have a “duty of care” to all those potentially affected by poor engineering or misuse in safety-critical settings, but this was naïve. The commercial tools that have been based on technologies derived from AI research have to date focused on just getting and keeping customers, and safety always takes a back seat.

In retrospect I should have predicted that making sure AI products are safe was not going to capture the enthusiasm of commercial suppliers. If you compare AI apps with drugs, we all know that pharmaceutical companies have to be firmly regulated to make sure they fulfill their duty of care to their customers and patients. However, proving drugs are safe is expensive, and it also runs the risk of revealing that your new wonder-drug isn’t even as effective as you claim! It’s the same with AI.

I continue to be surprised how optimistic software developers are – they always seem to have supreme confidence that worst-case scenarios won’t happen, or that if they do happen then their management is someone else’s responsibility. That kind of technical over-confidence has led to countless catastrophes in the past, and it amazes me that it persists.

There is another piece to this, which concerns the roles and responsibilities of AI researchers. How many of us take the risks of AI seriously enough that they form part of our day-to-day theoretical musings and influence our projects? MIRI has put one worst-case scenario in front of us – the possibility that our creations might one day decide to obliterate us – but so far as I can tell the majority of working AI professionals either see safety issues as irrelevant to the pursuit of interesting scientific questions or believe, like the wider public, that the issues are just science fiction.

I think our experience in medical AI, trying to articulate and cope with human risk and safety, may hold a couple of important lessons for the wider AI community. First, we have a duty of care that professional scientists cannot responsibly ignore. Second, the AI business will probably need to be regulated, in much the same way as the pharmaceutical business is. If these propositions are correct, then the AI research community would be wise to engage with and lead discussions around safety issues if it wants to ensure that the regulatory framework we get is to our liking!


Luke: Now you write, “That kind of technical over-confidence has led to countless catastrophes in the past…” What are some example “catastrophes” you’re thinking of?


John: Psychologists have known for years that human decision-making is flawed, even if amazingly creative sometimes, and overconfidence is an important source of error in routine settings. A large part of the motivation for applying AI in medicine comes from the knowledge that, in the words of the Institute of Medicine, “To err is human” and overconfidence is an established cause of clinical mistakes.2

Over-confidence and its many relatives (complacency, optimism, arrogance and the like) have a huge influence on our personal successes and failures, and our collective futures. The outcomes of the US and UK’s recent adventures around the world can be easily identified as consequences of overconfidence, and it seems to me that the polarized positions about global warming and planetary catastrophe are both expressions of overconfidence, just in opposite directions.


Luke: Looking much further out… if one day we can engineer AGIs, do you think we are likely to figure out how to make them safe?


John: History says that making any technology safe is not an easy business. It took quite a few boiler explosions before high-pressure steam engines got their iconic centrifugal governors. Ensuring that new medical treatments are safe as well as effective is famously difficult and expensive. I think we should assume that getting to the point where an AGI manufacturer could guarantee its products are safe will be a hard road, and it may be that guarantees are not possible in principle. We are not even clear yet what it means to be “safe”, at least not in computational terms.

It seems pretty obvious that entry-level robotic products, like the robots that carry out simple domestic chores or the “nursebots” being trialed for hospital use, have such a simple repertoire of behaviors that it should not be difficult to design their software controllers to operate safely in most conceivable circumstances. Standard safety engineering techniques like HAZOP3 are probably up to the job, I think, and where software failures simply cannot be tolerated, software engineering techniques like formal specification and model-checking are available.
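[To make the model-checking idea concrete, here is a minimal sketch of the kind of exhaustive check such techniques perform. It is not from the interview: the domestic-robot controller, its states and its safety invariant are all invented for illustration, and real model checkers such as SPIN or NuSMV apply the same reachability principle at vastly larger scale.]

    from collections import deque

    # Hypothetical controller model: a state is (location, carrying, motor_on).
    LOCATIONS = ("dock", "kitchen", "hallway")

    def transitions(state):
        """Yield every state the controller could reach in a single step."""
        location, carrying, motor_on = state
        yield (location, carrying, not motor_on)        # toggle the motor
        if not motor_on:
            yield (location, not carrying, motor_on)    # pick up or put down a load
        if motor_on:
            for nxt in LOCATIONS:
                if nxt != location:
                    yield (nxt, carrying, motor_on)     # drive to another location

    def safe(state):
        """Safety invariant to verify: never drive through the hallway while loaded."""
        location, carrying, motor_on = state
        return not (location == "hallway" and carrying and motor_on)

    def check(initial):
        """Breadth-first search over all reachable states; return a counterexample or None."""
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            if not safe(state):
                return state
            for nxt in transitions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None

    if __name__ == "__main__":
        violation = check(("dock", False, False))
        if violation:
            print("unsafe reachable state found:", violation)
        else:
            print("invariant holds in every reachable state")

On this toy model the search actually returns a counterexample (the controller can legally drive through the hallway while loaded), which is exactly the kind of hazard that exhaustive exploration surfaces and that optimistic testing tends to miss.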

There is also quite a lot of optimism around more challenging robotic applications like autonomous vehicles and medical robotics. Moustris et al.4 say that autonomous surgical robots are emerging that can be used in various roles, automating important steps in complex operations like open-heart surgery, for example, and they expect them to become standard in – and to revolutionize the practice of – surgery. However, at this point it doesn’t seem to me that surgical robots with a significant cognitive repertoire are feasible, and a human surgeon will be in the loop for the foreseeable future.


Luke: So what might artificial intelligence learn from natural intelligence?


John: As a cognitive scientist working in medicine, my interests are co-extensive with those of scientists working on AGIs. Medicine is such a vast domain that practicing it safely requires the ability to deal with countless clinical scenarios and interactions; even working in a single specialist subfield requires substantial knowledge from other subfields, so much so that it is now well known that even very experienced humans with a large clinical repertoire are subject to significant levels of error.5 An artificial intelligence that could be helpful across medicine will require great versatility, and this will require a general understanding of medical expertise and a range of cognitive capabilities like reasoning, decision-making, planning, communication, reflection, learning and so forth.

If human experts are not safe, is it even possible to ensure that an AGI, however sophisticated, will be? I think it is pretty clear that the range of techniques currently available for assuring system safety will be useful in making specialist AI systems reliable and minimizing the likelihood of errors in situations that their human designers can anticipate. However, AI systems with general intelligence will be expected to address scenarios and hazards that are currently beyond us to solve, and often beyond their designers even to anticipate. I am optimistic, but at the moment I don’t see any convincing reason to believe that we have techniques sufficient to guarantee that a clinical super-intelligence is safe, let alone an AGI that might be deployed in many domains.


Luke: Thanks, John!


  1. Rigorously Engineered Decisions 
  2. Overconfidence in major disasters:

    • D. Lucas. Understanding the Human Factor in Disasters. Interdisciplinary Science Reviews, Volume 17, Issue 2, June 1992, pp. 185-190.
    • “Nuclear safety and security.”

    Psychology of overconfidence:

    • Overconfidence effect.
    • C. Riordan. Three Ways Overconfidence Can Make a Fool of You. Forbes Leadership Forum.

    Overconfidence in medicine:

    • R. Hanson. Overconfidence Erases Doc Advantage. Overcoming Bias, 2007.
    • E. Berner, M. Graber. Overconfidence as a Cause of Diagnostic Error in Medicine. The American Journal of Medicine. Volume 121, Issue 5, Supplement, Pages S2–S23, May 2008.
    • T. Ackerman. Doctors overconfident, study finds, even in hardest cases. Houston Chronicle, 2013.

    General technology example:

    • J. Vetter, A. Benlian, T. Hess. Overconfidence in IT Investment Decisions: Why Knowledge can be a Boon and Bane at the same Time. ICIS 2011 Proceedings. Paper 4. December 6, 2011. 

  3. Hazard and operability study 
  4. Int J Med Robotics Comput Assist Surg 2011; 7: 375–39 
  5. A. Ford. Domestic Robotics – Leave it to Roll-Oh, our Fun loving Retrobot. Institute for Ethics and Emerging Technologies, 2014.