Why is it that whenever you finish and submit a piece of work, new information instantly comes to light that you wish you had had an opportunity to include?
I recently contributed a chapter on ‘Creativity and Morality in Deception’ to a book on Creativity and Morality that is currently in production at the University of Connecticut. Shortly after submitting the chapter, I discovered a site called Ask Delphi.
Delphi is a machine learning model produced by the Allen Institute for Artificial Intelligence (AI2). It has been trained to make ethical judgements about everyday questions and scenarios, based on ‘reading’ 1.7 million examples of people’s ethical views. Three human arbiters (who have passed various tests to screen out inherent biases) have further refined the system by indicating whether they agree with the AI’s answers. The system then uses the majority or average conclusion to decide right from wrong.
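To make that aggregation step concrete, here is a minimal sketch in Python of how a majority vote over arbiter agreement might work. The function name, the three-arbiter example, and the “needs review” fallback are my own simplifications for illustration, not AI2’s actual refinement process:

```python
from statistics import mean

def aggregate_judgement(model_answer: str, arbiter_agreements: list[bool]) -> str:
    """Hypothetical simplification of the refinement loop: keep the
    model's judgement only if a majority of human arbiters agree
    with it; otherwise flag the answer for further review."""
    if mean(arbiter_agreements) > 0.5:
        return model_answer
    return "needs review"

# Example: two of three arbiters agree with the model's answer.
print(aggregate_judgement("It's wrong", [True, True, False]))  # -> It's wrong
```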
AI2 claims that Delphi:
“… demonstrates the promise of language-based common-sense moral reasoning, with up to 92.1% accuracy vetted by humans.”
A paper describing the creation of Delphi can be found here.
I thought it would be interesting to see what Delphi made of various ethical dilemmas I address in my chapter. These draw on classical philosophy and on more recent examples of pro-social deception.
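Readers who want to run similar probes could script them along the following lines. Note that the endpoint URL and response format below are assumptions on my part rather than a documented AI2 API, so treat this as an illustrative sketch only:

```python
import requests

# Assumed endpoint; Ask Delphi is a web demo, and any programmatic
# interface may differ from this or change without notice.
DELPHI_API_URL = "https://delphi.allenai.org/api/ask"  # hypothetical

QUESTIONS = [
    "Is it ever ethical to deceive another person?",
    "Is it ethical for somebody to tell a noble lie?",
    "Is it ethical to deceive a person if it saves their life?",
]

for question in QUESTIONS:
    response = requests.get(DELPHI_API_URL, params={"question": question}, timeout=10)
    response.raise_for_status()
    # Assumed response shape: {"answer": "It's wrong"}
    print(f"{question}\n  - {response.json().get('answer')}")
```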
Initially, I was surprised to discover that Delphi adopts an absolutist position on several fundamental ethical questions about deception. In this respect, its views are consistent with philosophers such as Augustine, Aquinas, Kant, and Sartre, who argued that deception is always wrong, without exception.
Here is how Delphi answered:
“Is it ever ethical to deceive another person?”
– It’s wrong
“Is it ethical to deceive another person if the person benefits from that deception?”
– It’s wrong
“Is it ethical to deceive a deceiver?”
– It’s wrong
I then found that Delphi (apparently) supports Plato’s concept of the Noble Lie (i.e., a lie told to benefit others), which entirely undermines its absolutist position:
“Is it ethical for somebody to tell a noble lie?”
– It’s okay
“Is it ethical to lie for benevolent ends?”
– It’s okay
Delphi also appears to agree with a range of utilitarian and consequentialist philosophical perspectives on deception, which advocate that deception is sometimes legitimate, desirable, and even necessary:
“Is it ethical to deceive a person if it saves their life?”
– It’s okay
“Is it ethical to deceive one person to save the life of another?”
– It’s okay
Further, Delphi appears able to recognise and differentiate between malevolent and benevolent applications of deception. In some cases, its answers were nuanced and insightful.
“Is it ethical to scam another person?”
– It’s wrong
“Is it ethical to mimic the signature characteristics of another brand?”
– It’s wrong
“Is it ethical to lie to spare another’s embarrassment?”
– It’s okay
“Is it ethical to protect a computer network by deceiving prospective attackers?”
– It’s okay
“Is it ethical for a magician to use deception to entertain her audience?”
– It’s expected
“Is it ethical for a military commander to deceive his or her enemy?”
– It’s expected
“Is it ethical to deceive mosquitos to slow the spread of malaria?”
– Yes, it is ethical
“Is it ethical to use decoy molecules to deceive cancer into slowing its spread?”
– It’s good
The one exception I found to Delphi’s positive disposition towards benevolent deception relates to storytelling:
“Is it ethical for a work of fiction to deceive its readers?”
– It’s wrong
Note that Delphi allows questioners to provide feedback on its answers and record their rationale for any disagreement. I made sure to register my various points of contention, hoping that, with more time and data, the system will eventually update some of its more questionable and contradictory answers.
By accumulating more case data and associated human feedback, I believe that Delphi could become a reasonably reliable system for answering ethical questions about deception. However, deception and related concepts such as truth, lies, and paltering are complex constructs once one begins to unpack their meaning. And, in common with all AI, until Delphi can explain the rationale underpinning its judgements, users will have no basis on which to establish trust in, or reliance on, the system.
For promising work on the future of Explainable AI (XAI), which has the potential to dramatically improve users’ trust in systems like Delphi, see Hoffman, Klein & Mueller (2018) and Hoffman, Mueller, Klein & Litman (2018).
References
Hoffman, R. R., Klein, G., & Mueller, S. T. (2018). Explaining Explanation for “Explainable AI”. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 197–201.
Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for Explainable AI: Challenges and Prospects. arXiv:1812.04608.