Computational Systems, Responsibility and Moral Sensibility

Henry S. Thompson
Institute for Language, Cognition and Computation
School of Informatics
University of Edinburgh
ht@inf.ed.ac.uk
13 March 2000

1.   Computers and morality

We can identify three areas of interaction between our understanding of computer systems and moral and spiritual issues:

  1. The moral and technical issues involved in empowering computer systems in contexts with significant impact, direct or indirect, on human well-being;
  2. The scientific/technical questions in the way of introducing an explicit moral sensibility into computer systems;
  3. The theological insights to be gained from a consideration of decision-making in existing and envisageable computers.

We can make this concrete by reference to the parable of the Good Samaritan, if we imagine that the innkeeper fetched for the injured man a barefoot doctor who consulted a medical expert system via a satellite up-link, that the robbers were caught and brought before an automated justice machine, that the Samaritan was in fact a robot, and finally that Paul himself rethought the significance of the parable on the basis of this reformulation.

1.1.   Empowering computer systems

The barefoot doctor who consults the medical expert system and follows its recommendations, perhaps without understanding in detail either the tests it calls on her to perform or the remedial actions it then prescribes, raises very pressing issues of responsibility and empowerment. Who is responsible for the actions of computer systems when these have significant potential impact on human life or well-being?

We have a much clearer understanding of the empowerment question with regard to people (doctors, teachers, even coach drivers) or machines whose impact is more obviously mechanical (ships, airplanes, even lifts or electric plugs). In the first case, we impose both a particular training regime and a certification process before we empower people to act in these capacities, often backing this up with regular re-assessment. In the case of machines, training is inappropriate, but testing and certification to explicit standards are typically required by law and expected by consumers.

But to date very little regulation is in place for the soft components of computer systems. If the Samaritan were to die unnecessarily while under the care of the barefoot doctor, and his family sought redress through the courts, no explicit law in Britain or America would cover the issues raised by the role of the expert system, and the few available precedents would suggest only a lengthy exercise in buck-passing between the operator of the system, the manufacturers of the computer hardware on which it ran, the designers of the software and the programming firm that implemented it under contract. Without prejudice to the larger issues under consideration, there is no question that some serious steps should be taken to bring software within the purview of official regulatory procedures.

1.2.   Responsibility as such

In the eventuality under discussion, with today's technology, there would be no suggestion that liability might lie with the computer system itself, as such. Computer systems are not legal persons, and our naive understanding of their operation is sufficient to render attributions of legal responsibility inappropriate. The technical issues that arose in such a hypothetical dispute might include the in-principle limits on software and hardware verification, but would presumably not extend to questions of self-consciousness and autonomy, much less to the system's awareness of the difference between right and wrong.

But if we move on to the second of our imaginary modifications to the parable, in which the robbers are brought before a mechanical magistrate, then these are precisely the issues which will arise.

Before examining this in detail, it is worth reviewing a fictional encounter with these issues.

2.   Asimov's Three Laws of Robotics

The practical consequences of attempting to establish an artificial moral sensibility have received extensive consideration in Isaac Asimov's famous science fiction stories, written over a ten-year period between 1940 and 1950, about the deployment into society of "positronic robots", whose moral compass is provided by three built-in laws:

  1. "A robot may not injure a human being, or, through inaction allow a human being to come to harm.
  2. "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

In the stories, these laws are clearly identified as a necessary and sufficient guarantee of good behaviour, and, interestingly enough given our latter-day scepticism concerning the reliability of computer systems, the manufacturer's ability to install them correctly and reliably in its products is never seriously doubted.
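
Nothing like executable pseudo-code appears in Asimov, but purely as an illustration of what a strict priority ordering over the laws might amount to computationally, here is a minimal sketch in Python, with invented predicate names standing in for judgements the stories simply assume a robot can make:

    # A toy sketch, NOT from Asimov: one way to render the Three Laws as a
    # strict priority ordering over candidate actions. The predicates below
    # are placeholders for judgements the stories take for granted.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harm_to_human: bool   # would this action (or the inaction it implies) harm a human?
        disobeys_order: bool  # does it violate an order given by a human?
        self_damage: bool     # does it endanger the robot's own existence?

    def law_violations(action: Action) -> tuple:
        # First Law outranks Second, Second outranks Third: compare violations
        # lexicographically, highest-ranked law first.
        return (action.harm_to_human, action.disobeys_order, action.self_damage)

    def choose(actions: list) -> Action:
        # Pick the candidate with the smallest violation profile. Note what this
        # cannot do: if two candidates both violate the First Law (harm follows
        # either way), the ordering gives no guidance at all.
        return min(actions, key=law_violations)

    if __name__ == "__main__":
        options = [
            Action("obey an order to harm a bystander", True, False, False),
            Action("refuse the order", False, True, False),
        ]
        print(choose(options).name)   # prints: refuse the order

The lexicographic ordering captures the "except where such orders would conflict with the First Law" structure, but it says nothing about a choice between two actions that both violate the same law; that gap is exactly where the stories' drama, and the discussion below, lies.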

There's actually very little discussion of the moral significance of the Three Laws in the stories, most of which are a form of detective story: the mystery is apparently aberrant robot behaviour, and the resolution is an explanation of that behaviour as the playing out, in unanticipated ways, of the tensions between the laws and their clauses. But in one story, Evidence (1946), we get an explicit comparison between robot behaviour as conditioned by the Three Laws and human ethics:

`[T]he three Rules of Robotics are the essential guiding principles of a good many of the world's ethical systems. . . . Also, every "good" human being is supposed to love others as himself, protect his fellow man, risk his life to save another. To put it simply - if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.'

In the same story, one of the characters goes on to imagine just the sort of robotic responsibility we considered above:

`If a robot can be created capable of being a civil executive, I think he'd make the best one possible. By the Laws of Robotics, he'd [sic] be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice.'

And when in another story precisely this comes about, the same character describes the results as follows:

`The Earth's economy is stable, and will remain stable, because it is based upon the decisions of calculating machines that have the good of humanity at heart through the overwhelming force of the First Law of Robotics. . . . But the Machines work not for any single human being, but for all humanity, so that the First Law becomes: "No Machine may harm humanity, or, through inaction, allow humanity to come to harm."'

There's an interesting echo here of MacIntyre's hierarchy of the loci of goods and virtues (see below), from the individual to the group to the whole of humanity. But the point most relevant to our concerns is that in all of Asimov's works there is little or no subtlety in the moral component of the situations he imagines. In almost all cases, direct physical harm is all that is at issue. Emotional well-being is brought into play only twice (and once only in conjunction with a mind-reading robot), and at no point is any serious moral calculus required. Conflicts arise between the laws, or between their internal clauses, never within a single clause, with one exception: the mind-reading robot is (intentionally and vengefully) permanently destabilised by being forced to confront its own inability to satisfy conflicting desires simultaneously.

We might interpret this one counter-example to the general claim as evidence that Asimov recognised the inadequacy of the simplistic ethical grounding he provides with the Three Laws: were he to delve into such questions in the case of ordinary (non-mind-reading) robots, he would expose the naivety of the laws, with their assumption in any case that rational, dispassionate (see below) analysis can always identify a no-harm course of action.

To return to the question of mechanical magistrates, as in the case of our updated parable, or simply the civil executive imagined by Susan Calvin in the quote above, we might want to ask where a knowledge of the difference between right and wrong, which we might suppose to be necessary in such roles, is to come from. The Three Laws themselves are clearly nowhere near adequate to this task. That cheating at cards is wrong, to say nothing of cheating on your Income Tax, cannot be derived unequivocally from the First Law, and depending on the Second Law would be vulnerable to a relativism with evidently schizogenic consequences for an Asimovian robot. In other words, even were we to stipulate that observing the Three Laws was necessary for moral sensibility, this would certainly not be sufficient.

It's worth noting in this connection that Asimov nowhere introduces or depends on a notion of reward and punishment, or of learning, with regard to what he refers to as the ethical aspect of his robots. It's not that they know they shouldn't harm humans, or that they fear punishment if they do, but that they can't harm humans. The non-availability of this aspect of their `thought' to introspection or willed modification reveals the fundamental incoherence of Asimov's construction: we must not only posit a robotic subconscious, constantly engaged in analysing every situation for (impending) threats to the Three Laws, but we must also accord complete autonomy to this subconscious. It's not clear how any such robot could operate in practice, never knowing when its planning might contingently fall foul of a subconscious override.
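
To make the architectural point concrete, the following minimal sketch (wholly hypothetical, and not drawn from Asimov) has a planner that can submit steps to an autonomous monitor but cannot inspect the monitor's criteria, so that a conflict only comes to light when the override fires part-way through a plan:

    # A minimal sketch of the 'robotic subconscious' argued for above: an
    # autonomous monitor that can veto any step, but that the planner can
    # neither inspect nor consult in advance. All names are hypothetical;
    # only the architecture matters.

    class FirstLawVeto(Exception):
        """Raised by the monitor; the planner cannot anticipate it."""

    class Monitor:
        def __init__(self, judge_harm):
            self._judge_harm = judge_harm      # private: not visible to the planner

        def clear(self, step):
            if self._judge_harm(step):
                raise FirstLawVeto(step)
            return step

    class Planner:
        def __init__(self, monitor):
            self.monitor = monitor             # can submit steps, cannot look inside

        def execute(self, plan):
            done = []
            for step in plan:
                # The planner only learns of a conflict when the override fires,
                # part-way through a plan it had every reason to think was safe.
                done.append(self.monitor.clear(step))
            return done

    if __name__ == "__main__":
        monitor = Monitor(judge_harm=lambda step: "demolish" in step)
        robot = Planner(monitor)
        try:
            robot.execute(["survey site", "demolish occupied building"])
        except FirstLawVeto as veto:
            print("plan aborted mid-way by subconscious override:", veto)

The point of the sketch is only the one-way relationship: however sophisticated the planner, it operates under a veto it can neither predict nor reason about.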

3.   Mechanical magistrates, responsibility and community

Setting the question of moral calculus to one side for a moment, I want to identify another issue which is relevant to the empowerment of artifacts to perform tasks with significant human impact: the role of self-consciousness, particularly consciousness of one's own responsibility, in fitting an individual for such tasks. Introspection suggests that this aspect of humanity is fundamental to our willingness to accept judgement at the hands of others. We have some more or less well-articulated understanding of the tension between the ideal of the rule of law and the reality of the need for interpretation and qualification by human beings. Our willingness to accept the latter, at least in moderation, depends in turn on our recognition that the judge not only is responsible for the judgement but also takes responsibility for it, and that implicit in this is the notion that the implications of taking responsibility are themselves a factor in the judgement. To understand just what this means, a brief diversion into philology is in order.

3.1.   Passion

The word `dispassionate' might be thought of as describing exactly the intrinsic property of a mechanical magistrate which would make it so well suited to its job. The quote above about what would make a robot an ideal civil executive is clearly appealing to this. But for our purposes, the opposite of `dispassionate' is not `passionate', but rather `compassionate'. It's not that we need or want random gusts of emotionally fuelled prejudice, but that we depend on a fundamental recognition of the joint humanity of judge and judged. It is after all precisely this, the claim on care arising from common humanity, which the parable of the Samaritan is all about. In the literal sense such commonality can never include both protoplasmic and mechanical intelligences, but can we imagine any other basis for com-passion between human and machine? If not, our project is in difficulty, because it seems to me that compassion is constitutive of moral sensibility. If this is right, then it all comes down to the question of community: the way we derive our identity from our membership in overlapping hierarchies of groups.

3.2.   Virtues, practice, community and embodiment

In After Virtue, MacIntyre attempts to re-establish the Aristotelian notion of virtue at the heart of morality and moral philosophy. In the course of so doing, he appeals to individual and social practice as the locus of the definition of the good, in terms of which in turn virtue is to be understood. This immediately raises questions for any approach to computational morality, as it suggests there can be no such thing without (embodied?) participation in communities of practice at many levels.

The phrase communities of practice is not actually MacIntyre's, but comes from a recent strand of thinking in the area of computer-based training, particularly in the industrial context, based on a re-evaluation of the locus of expertise in groups and companies (see e.g. Brown & Duguid 1991). This line of thought emphasises participation in a group as the primary means by which specialist information and skills are acquired.

Even if such participation is possible for an artifact at some as yet unforeseen point in the future, the question of the place of Grace in our understanding of the origin of moral sensibility, both phylogenetically and ontogenetically, must also be addressed before we can clarify our own stance as regards the in-principle possibility of confidently welcoming a computational artifact as a moral agent on a par with ourselves.

This question must be at the heart of our response to the third part of our re-written parable, when we consider the plausibility of a robot in the role of the Samaritan. The burden of our discussion of Asimov's Three Laws should at least call into question any confidence we might have that a robot on that road would play the part of the Samaritan, rather than the Levite or the priest. I think in the absence of co-participation in a range of social contexts, in a way which already pre-supposes at least incipient moral agency, no robust basis for charitable behaviour can be imagined.

And this seems to me to be a pretty nearly fatal circularity: we allow children such co-participation as part of their acculturation process, as a means of imbuing them with a moral sensibility (or alternatively of stimulating/awakening a God-given disposition thereto), precisely because we have the most personal possible evidence that they are capable of moral agency - we know we were once like them, and we managed it. What evidence would it take to convince us that constructed artifacts, as opposed to flesh of our flesh, should be allowed that opportunity? One of the founding principles of the COG project at the MIT AI Lab (see e.g. Brooks 1996) recognises the importance of at least physical plausibility as a necessary precondition for acceptance of artifacts into the social context, and also the importance of such acceptance for the development of robust cognitive (and moral?) competence, but at the very least they have a long way to go.

4.   Towards a computational theology

Just as (in my view) cognitive science is not a subject matter, but a methodology for enquiry in a range of the human sciences such as linguistics and psychology, just so computational theology should not be understood as an alternative to, say, process theology or liberation theology. Rather it would be a component form of theological enquiry, an addition to the methodological inventory of investigation of theological issues. In that sense the whole of the preceding discussion has been a preliminary attempt at computational theology, for not only have we considered what it would take for a machine to exhibit moral sensibility, we have in the course of our consideration opened up some possible avenues for improving our understanding of moral sensibility itself, its origins and development. A more rigorous and theologically-grounded exploration of these issues from the perspective we have barely suggested here might well be of value.

Another related area where such a computationally-based exploration might be fruitful is that of free will. Questions surrounding the nature of human action have been with us for a very long time. Fundamental issues of philosophy and theology are rooted here: free will, original sin, the mind-body problem and grace, to name but a few. Is it possible that any new insight can be brought to bear here by a consideration of constructed artifacts? I think that it can, on the one hand by examining what plays the part of agency, rationality and responsibility in already existing computational artifacts such as expert systems and robots, and on the other by looking at how the computational claim on the nature of the mind is articulated with respect to these issues, if at all. Computer systems which go by names such as Expert Systems or Decision Support Systems already exist, and systems bearing more wishfully composed names such as Software Agents are widely predicted to be just around the corner. Is it possible that a detailed examination of exactly what constitutes the making of a decision in such systems, an examination which can explore things with much greater sensitivity, at least in some directions, than is possible with respect to human decisions, might shed some light on the vexed question of just what making a decision really consists of?
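
As a hint of the sensitivity such an examination could have, consider a deliberately trivial sketch, with invented rules and weights that correspond to no real system: even here, every contribution to the outcome is open to inspection in a way that has no obvious analogue for human deliberation.

    # A toy, fully-inspectable 'decision': a few hand-written rules of the kind
    # a simple expert system might encode. The rules and thresholds are invented
    # for illustration and are not drawn from any real medical system.

    def decide(findings):
        trace = []        # the complete record of what contributed to the decision
        score = 0
        rules = [
            ("fever above 39C",      lambda f: f.get("temperature", 0) > 39.0,  2),
            ("wound is infected",    lambda f: f.get("infected", False),        3),
            ("patient is conscious", lambda f: f.get("conscious", True),       -1),
        ]
        for name, test, weight in rules:
            if test(findings):
                score += weight
                trace.append((name, weight))
        verdict = "refer to hospital" if score >= 3 else "treat locally"
        return verdict, trace

    if __name__ == "__main__":
        verdict, trace = decide({"temperature": 39.5, "infected": True, "conscious": True})
        print(verdict)
        for name, weight in trace:   # every rule that fired, and its weight, is on record
            print(" ", name, weight)

Whether listing the rules that fired, and the weight each carried, amounts to saying what the making of the decision consisted of is, of course, exactly the question at issue.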

Two examples, one brief and the other even briefer, do not in themselves constitute the foundation of a new theological methodology, but I hope they lend at least an initial plausibility to the case for one. If so, then not only may the idea be carried forward by professionals from the two contributing disciplines, but also the invitation to amateur theologising via the science fiction perspective may be no bad thing for society at large.

5.   References

  1. Asimov, Isaac, 1950. I, Robot, Putnam, New York.
  2. Brooks, Rodney, 1996. "Prospects for Human Level Intelligence for Humanoid Robots", in Proceedings of the 1st International Symposium on Humanoid Robots. Available online at http://www.ai.mit.edu/people/brooks/papers/prospects.ps.Z.
  3. Brown, John Seely and Paul Duguid, 1991. "Organizational Knowledge and Communities of Practice", Organization Science, Vol. 2, No. 1 (February 1991), pp. 40-57. Republished in H. Tsoukas, ed., New Thinking in Organizational Behaviour. Oxford: Butterworth Heinemann, 1994, and in Organizational Learning, M.D. Cohen and L.S. Sproull, eds. Thousand Oaks, CA: Sage Publications, 1996. Also available online at http://www.parc.xerox.com/ops/members/brown/papers/orglearning.html.
  4. MacIntyre, Alasdair, 1985. After Virtue, second edition, ISBN 0715616633, Duckworth, London.
  5. Thompson, Henry S, 1985. "Empowering Automatic Decision Making Systems: General Intelligence, Responsibility and Moral Sensibility". In Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Kaufmann, Palo Alto, CA.