We can identify three areas of interaction between our understanding of computer systems and moral and spiritual issues:
We can make this concrete by reference to the parable of the Good Samaritan. Imagine that the innkeeper fetched for the injured man a barefoot doctor who consulted a medical expert system via a satellite up-link; that the robbers were caught and brought before an automated justice machine; that the Samaritan was in fact a robot; and finally that Paul himself rethought the significance of the parable on the basis of this reformulation.
The barefoot doctor who consults the medical expert system and follows its recommendations, perhaps without understanding in detail either the tests it calls on her to perform or the remedial actions it then prescribes, raises very pressing issues of responsibility and empowerment. Who is responsible for the actions of computer systems when these have significant potential impact on human life or well-being?
We have a much clearer understanding of the empowerment question with regard to people (doctors, teachers, even coach drivers) or machines whose impact is more obviously mechanical (ships, airplanes, even lifts or electric plugs). In the first case, we impose both a particular training regime and a certification process before we empower people to act in these capacities, often backing this up with regular re-assessment. In the case of machines, training is inappropriate, but testing and certification to explicit standards are typically required by law and expected by consumers.
But to date very little regulation is in place for the soft components of computer systems. If the Samaritan were to die unnecessarily while under the care of the barefoot doctor, and his family sought redress through the courts, no explicit law in Britain or America would cover the issues raised by the role of the expert system, and the few available precedents would suggest only a lengthy exercise in buck-passing between the operator of the system, the manufacturers of the computer hardware on which it ran, the designers of the software and the programming firm that implemented it under contract. Without prejudice to the larger issues under consideration, there is no question that some serious steps should be taken to bring software within the purview of official regulatory procedures.
In the eventuality under discussion, with today's technology, there would be no suggestion that liability might lie with the computer system itself, as such. Computer systems are not legally persons, and our naive understanding of their operation is sufficient to render attributions of legal responsibility inappropriate. The kinds of technical issues which might arise in the hypothetical dispute might include the in-principle limits on software and hardware verification, but would presumably not extend to questions of self-consciousness and autonomy, much less to the system's awareness of the difference between right and wrong.
But if we move on to the second of our imaginary modifications to the parable, in which the robbers are brought before a mechanical magistrate, then these are precisely the issues which arise.
Before examining this in detail, it is worth reviewing a fictional encounter with these issues.
The practical consequences of attempting to establish an artificial moral sensibility have received extensive consideration in Isaac Asimov's famous science fiction stories, written over a ten-year period between 1940 and 1950, about the deployment into society of "positronic robots", whose moral compass is provided by three built-in laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In the stories, these laws are clearly identified as a necessary and sufficient guarantee of good behaviour, and, interestingly enough given our latter-day skepticism concerning the reliability of computer systems, the manufacturer's ability to install them correctly and reliably in its products is not seriously doubted.
There's actually very little discussion of the moral significance of the Three Laws in the stories, most of which are a form of detective story, in which the mystery is apparently aberrant robot behaviour, and the resolution is an explanation of that behaviour through exegesis of how the tension between the laws and their clauses plays out in unanticipated ways. But in one story, Evidence (1946), we get an explicit comparison of robot behaviour as conditioned by the Three Laws, and human ethics:
`[T]he three Rules of Robotics are the essential guiding principles of a good many of the world's ethical systems. . . . Also, every "good" human being is supposed to love others as himself, protect his fellow man, risk his life to save another. To put it simply - if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.'
In the same story, one of the characters goes on to imagine just the sort of robotic responsibility we considered above:
`If a robot can be created capable of being a civil executive, I think he'd make the best one possible. By the Laws of Robotics, he'd [sic] be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice.'
And when in another story precisely this comes about, the same character describes the results as follows:
`The Earth's economy is stable, and will remain stable, because it is based upon the decisions of calculating machines that have the good of humanity at heart through the overwhelming force of the First Law of Robotics. . . . But the Machines work not for any single human being, but for all humanity, so that the First Law becomes: "No Machine may harm humanity, or, through inaction, allow humanity to come to harm."'
There's an interesting echo here of MacIntyre's hierarchy of the loci of goods and virtues (see below), from the individual to the group to the whole of humanity. But the point most relevant to our concerns is that in all of Asimov's works there is little or no subtlety in the moral component of the situations he imagines. In almost all cases, direct physical harm is all that is at issue. Emotional well-being is only brought into play twice (and once only in conjunction with a mind-reading robot), and at no point is any serious moral calculus required. Conflicts are always between the laws and their internal clauses, not within one clause, with one exception, in which the mind-reading robot is (intentionally and vengefully) permanently destabilised by being forced to confront its own inability to simultaneously satisfy conflicting desires.
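The conflict structure just described amounts to a strict priority ordering: a lower-numbered law always dominates a higher-numbered one, and only estimated harm is weighed. A toy sketch (purely illustrative; the stories give no algorithm, and every name and number here is hypothetical) makes the structural point, and also shows where the scheme falls silent when a single clause conflicts with itself:

```python
# Toy model of Three-Laws conflict resolution as a lexicographic
# (priority-ordered) comparison. Illustrative only: Asimov gives
# no such algorithm, and all names and scores are invented.

def score(action):
    """Score an action as a tuple ordered by law priority:
    (harm to humans, disobedience, harm to self). Lower is better."""
    return (action["harm_to_humans"],
            action["disobedience"],
            action["harm_to_self"])

def choose(actions):
    # Python compares tuples lexicographically, so the First Law
    # component dominates, then the Second, then the Third.
    return min(actions, key=score)

actions = [
    {"name": "obey order, endanger bystander",
     "harm_to_humans": 1, "disobedience": 0, "harm_to_self": 0},
    {"name": "refuse order, destroy self",
     "harm_to_humans": 0, "disobedience": 1, "harm_to_self": 1},
]
print(choose(actions)["name"])  # First Law dominates: refuse the order

# But within a single clause the ordering is silent: two courses of
# action that each harm one human score identically, and nothing in
# the scheme distinguishes them.
tied = [
    {"name": "save A, not B",
     "harm_to_humans": 1, "disobedience": 0, "harm_to_self": 0},
    {"name": "save B, not A",
     "harm_to_humans": 1, "disobedience": 0, "harm_to_self": 0},
]
print(score(tied[0]) == score(tied[1]))  # True
```

The tie at the end is exactly the within-clause conflict that, in the stories, destroys the mind-reading robot: lexicographic priority resolves conflicts between laws, but offers no calculus at all within one.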
We might interpret this one counter-example to the general claim as evidence that Asimov recognised the inadequacy of the simplistic ethical grounding he provides with the Three Laws: were he to delve into such questions in the case of ordinary (non-mind-reading) robots he would expose the naivete of the laws, with their assumption in any case that rational, dispassionate (see below) analysis can always identify a no-harm course of action.
To return to the question of mechanical magistrates, as in the case of our updated parable, or simply the civil executive imagined by Susan Calvin in the quote above, we might want to ask where a knowledge of the difference between right and wrong, which we might suppose to be necessary in such roles, is to come from. The Three Laws themselves are clearly nowhere near adequate to this task. That cheating at cards is wrong, to say nothing of cheating on your Income Tax, cannot be derived unequivocally from the First Law, and a derivation depending on the Second Law would be vulnerable to a relativism with evidently schizogenic consequences for an Asimovian robot. In other words, even were we to stipulate that observing the Three Laws was necessary for moral sensibility, this would certainly not be sufficient.
It's worth noting in this connection that Asimov nowhere introduces or depends on a notion of reward and punishment, or of learning, with regard to what he refers to as the ethical aspect of his robots. It's not that they know they shouldn't harm humans, or that they fear punishment if they do, but that they can't harm humans. The non-availability of this aspect of their `thought' to introspection or willed modification reveals the fundamental incoherence of Asimov's construction: we must not only posit a robotic subconscious, constantly engaged in analysing every situation for (impending) threats to the Three Laws, but we must also accord complete autonomy to this subconscious. It's not clear how any such robot could operate in practice, never knowing when its planning might contingently fall foul of a subconscious override.
Setting the question of moral calculus to one side for a moment, I want to identify another issue which is relevant to the empowerment of artifacts to perform tasks with significant human impact: the role of self-consciousness, particularly consciousness of one's own responsibility, in fitting an individual for such tasks. Introspection suggests that this aspect of humanity is fundamental to our willingness to accept judgement at the hands of others. We have some more or less well articulated understanding of the tension between the ideal of the rule of law, and the reality of the need for interpretation and qualification by human beings. Our willingness to accept the latter, at least in moderation, depends in turn on our recognition of the fact that the judge is not only responsible for the judgement but also takes responsibility for it, and that implicit in this is the notion that the implications of taking responsibility are a factor in the judgement itself. To understand just what this means, a brief diversion into philology is in order.
The word `dispassionate' might be thought of as describing exactly the intrinsic property of a mechanical magistrate which would make it so well suited to its job. The quote above about what would make a robot an ideal civil executive is clearly appealing to this. But for our purposes, the opposite of `dispassionate' is not `passionate', but rather `compassionate'. It's not that we need or want random gusts of emotionally fuelled prejudice, but that we depend on a fundamental recognition of the joint humanity of judge and judged. It is after all precisely this, the claim on care arising from common humanity, which the parable of the Samaritan is all about. In the literal sense such commonality can never include both protoplasmic and mechanical intelligences, but can we imagine any other basis for com-passion between human and machine? It seems to me that compassion is constitutive of moral sensibility. If this is right, then it all comes down to the question of community: the way we derive our identity from our membership in overlapping hierarchies of groups.
In After Virtue, MacIntyre attempts to re-establish the Aristotelian notion of virtue at the heart of morality and moral philosophy. In the course of so doing, he appeals to individual and social practice as the locus of the definition of the good, in terms of which in turn virtue is to be understood. This immediately raises questions for any approach to computational morality, as it suggests there can be no such thing without (embodied?) participation in communities of practice at many levels.
Even if such participation is possible at some as yet unforeseen point in the future, the question of the place of Grace in our understanding of the origin of moral sensibility, both phylogenetically and ontogenetically, must also be addressed before we can clarify our own stance as regards the in-principle possibility of confidently welcoming a computational artefact as a moral agent on a par with ourselves.
This question must be at the heart of our response to the third part of our re-written parable, when we consider the plausibility of a robot in the role of the Samaritan. The burden of our discussion of Asimov's Three Laws should at least call into question any confidence we might have that a robot on that road would play the part of the Samaritan, rather than the Levite or the priest. I think in the absence of co-participation in a range of social contexts, in a way which already pre-supposes at least incipient moral agency, no robust basis for charitable behaviour can be imagined.
And this seems to me to be a pretty nearly fatal circularity: we allow children such co-participation as part of their acculturation process, as a means of imbuing them with a moral sensibility (or alternatively of stimulating/awakening a God-given disposition thereto), precisely because we have the most personal possible evidence that they are capable of moral agency - we know we were once like them, and we managed it. What evidence would it take to convince us that constructed artefacts, as opposed to flesh of our flesh, should be allowed that opportunity?
Just as (in my view) cognitive science is not a subject matter, but a methodology for enquiry in a range of the human sciences such as linguistics and psychology, just so computational theology should not be understood as an alternative to, say, process theology or liberation theology. Rather it would be a component form of theological enquiry, an addition to the methodological inventory of investigation of theological issues. In that sense the whole of the preceding discussion has been a preliminary attempt at computational theology, for not only have we considered what it would take for a machine to exhibit moral sensibility, we have in the course of our consideration opened up some possible avenues for improving our understanding of moral sensibility itself, its origins and development. A more rigorous and theologically-grounded exploration of these issues from the perspective we have barely suggested here might well be of value.
Another related area where such a computationally-based exploration might be fruitful is that of free will. Questions surrounding the nature of human action have been with us for a very long time. Fundamental issues of philosophy and theology are rooted here: free will, original sin, the mind-body problem and grace, to name but a few. Is it possible that any new insight can be brought to bear here by a consideration of constructed artefacts? I think that it can, on the one hand by examining what plays the part of agency, rationality and responsibility in already existing computational artefacts such as expert systems and robots, and on the other by looking at how the computational claim on the nature of the mind is articulated with respect to these issues, if at all. Computer systems which go by names such as Expert Systems or Decision Support Systems already exist, and systems with more wishfully composed names, such as Software Agents, are widely predicted to be just around the corner. Is it possible that a detailed examination of exactly what constitutes the making of a decision in such systems, an examination which can explore things with much greater sensitivity, at least in some directions, than is possible with respect to human decisions, might shed some light on the vexed question of just what making a decision really consists of?
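What "making a decision" consists of in such systems can at least be inspected step by step. A minimal forward-chaining rule engine, in the style underlying classical expert systems (a sketch under my own assumptions, with invented rules, not drawn from any real deployed system), shows that the "decision" reduces to an examinable trace of rule firings:

```python
# Minimal forward-chaining inference, in the style of classical
# rule-based expert systems. Illustrative only: the rules and
# facts below are invented for the sketch.

rules = [
    ({"fever", "rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def infer(facts):
    """Repeatedly fire any rule whose conditions hold, recording
    each firing, until no rule adds a new conclusion."""
    facts = set(facts)
    trace = []                       # the examinable "decision"
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append((sorted(conditions), conclusion))
                changed = True
    return facts, trace

facts, trace = infer({"fever", "rash"})
print("recommend_isolation" in facts)  # True
for premises, conclusion in trace:
    print(premises, "=>", conclusion)
```

Every recommendation such a system makes is fully reducible to a trace of this kind, which is precisely what makes it tempting, and perhaps misleading, to ask whether anything in it deserves the name "decision" at all.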
Two examples, one brief and the other even briefer, do not in themselves constitute the foundation of a new theological methodology, but I hope they lend at least an initial plausibility to the case for one. If so, then not only may the idea be carried forward by professionals from the two contributing disciplines, but also the invitation to amateur theologising via the science fiction perspective may be no bad thing for society at large.