Computers and Morality: There's more here than meets the eye
Henry S. Thompson

16 January 1999

Machines acting as independent agents in our world may seem like a fantasy from a science fiction story. But their forerunners are among us already, easy to miss beside the flashier and more potent images of intelligent computers and robots that are so common in fiction. Real computer systems already in place have a degree of autonomy and potential impact on us that should serve as a wake-up call: deep and serious moral questions are involved.

Many concerned scientists have warned that substantial moral choices are indeed involved in empowering computer systems with the potential for significant harm (or benefit) to human beings. Computers are already in more or less unsupervised control of landing airplanes in the fog, calculating and administering radiation therapy, and stocking warehouses. We even came quite close, in the early 1980s, to putting a fully automatic launch control system in place for the nuclear missiles based in Western Europe. There are some statutory controls in place, for instance in the aircraft industry, but the issues involved need to be understood more widely by the general public, and responded to more systematically by government.

As we look towards the new millennium, however, a more intriguing and uplifting prospect emerges from the confrontation of machines and morality: perhaps we can learn something about the origins and nature of our own moral sensibilities by a consideration of computational morality.

Before getting too lost in abstractions, consider an example. Under what circumstances might we imagine installing a mechanical magistrate on the bench? What would we need to know before submitting ourselves to judgement by a computer? Surely in principle the idea must be attractive: such a judge would be unbiased by reason of social affinity or antipathy, undistracted by dyspepsia, insomnia or incipient migraine, possessed of a letter-perfect memory for all statutes and precedents and, of course, able to apply this knowledge precisely to the matter before the court. In short, the ideal magistrate: knowledgeable and dispassionate.

But there's something missing from this picture, even supposing we could overcome the very real difficulties which stand in the way of actually creating a computer with all those properties today or in the foreseeable future. Surely a crucial component of our willingness to submit to judgement is the unspoken, perhaps even unthought, assumption that the judge is responsible. Responsible in two intimately linked senses: responsible in his or her administration of justice, and responsible for it. And in the latter case, not just responsible in the abstract, in the way that El Niño is responsible for storms, but self-consciously responsible: not just being responsible, but taking responsibility.

And clearly responsibility is an aspect of personal morality. What we're really demanding here is that our imagined mechanical magistrate must be, at least in this area, a recognisably moral agent. So what would it take to make a computer a real moral agent? Well, how does anything become a real moral agent? Is 'become' even the right word here? Some would certainly argue that moral agency is intimately connected with the essence of being human, with the soul, that it's a manifestation of divine grace. From this perspective there could never be a non-human moral agent (unless by a similar act of grace!). If we think that moral agency is something which can be acquired, indeed that one of the major responsibilities of parents is to help their children acquire it, then we may legitimately ask if our mechanical magistrate might acquire it too.

Not surprisingly, this leads to the question of how children acquire the status of moral agents, supposing that they in fact do. The most obvious answer is that they get it by participating in a community of moral agents, who provide both an implicit model and explicit instruction. And this seems to me to turn into an insurmountable problem for computers: we allow children to participate in families and society at large as part of their acculturation process, as a means of imbuing them with a moral sensibility (or alternatively of stimulating/awakening a God-given disposition thereto), precisely because we have the most personal possible evidence that they are capable of moral agency - we know we were once like them, and we managed it. What evidence would it take to convince us that constructed artefacts, as opposed to flesh of our flesh, should be allowed that opportunity? It's a chicken-and-egg problem with no obvious way out. It turns out it's not dispassionate computers we want, but compassionate ones -- ones we've grown up with, and which therefore share our values.

The kinds of questions raised above illustrate what the study of computational morality might mean. We're using the thought experiment of the creation of a mechanical magistrate to probe our own self-understanding with respect to fundamental moral questions. And we can think of other aspects of our moral and spiritual life where such an approach might bear fruit: the nature of decisions and the question of free will (what actually happens when a computer 'decides' to sell stock, or prescribe a course of treatment?), even euthanasia (the famous question of turning off a supposedly sentient computer).

Another encouraging thing about this idea of computational morality is that it's not just for the theologians and the academics. The kind of armchair exploration that arises from considering examples such as the mechanical magistrate is engaging for anyone (indeed, Isaac Asimov wrote a whole series of stories nearly 50 years ago employing precisely this approach; most of them can be found in the collection I, Robot, with the one entitled "Evidence" being particularly relevant). Increasingly, science seems to be taken as synonymous with secular humanism. So it's encouraging for scientists who do have a strong religious commitment to see a way to use our science to make investigation of our moral and spiritual natures a vivid and enlightening activity available to anyone.