Artificial Intelligence and Cognitive Science:

Some definitions and some ruminations

Henry S. Thompson
28 Jul 2009

Table of Contents

  1. Introduction
  2. Definitions: Artificial Intelligence and Cognitive Science
  3. (Non-)Definitions: Intelligent, Intelligence, Thinking
  4. Definitions: Weak versus Strong
  5. Where do I stand?
  6. Picking up the conversation

1. Introduction

One thread of conversation, which started early and ran through many channels throughout a weekend I recently spent at an un-conference in the company of a diverse collection of folk, concerned the status of the Artificial Intelligence project and its prospects of success. One aspect of the parts of this conversation I participated in was an understandable uncertainty, or rather an evident lack of shared understanding, with respect to the meaning of the terms being used, most notably artificial intelligence and cognitive science. I thought it worthwhile, therefore, to preface a small attempt at a substantive contribution to the conversation with some definitions, so that any (dis)agreements going forward will be more likely to be well-founded.

2. Definitions: Artificial Intelligence and Cognitive Science

A thorough historically-grounded exegesis of the terms artificial intelligence and cognitive science would demand work on the scale of a monograph at the least: all I intend to do here is offer some working guidance on what I take to be the contrast between the two that most contemporary practitioners would accept. At its simplest that contrast is easily stated: AI is about machines, CogSci is about people. That is, the goal of the artificial intelligence project is to create (non-biological) artefacts which exhibit intelligent behaviour, whereas the goal of the cognitive science project is to explain human intelligence in computational terms.

It should be clear that although in practice successful outcomes for these two projects may overlap, in principle they need not. Stipulate that Deep Blue is an example of a successful artificial intelligence. But the mechanisms it employs are manifestly different from those of a human Grand Master, so it does not count as a successful explanation of human intelligence in the domain of chess-playing.

(I do not think that the sometimes-suggested relabelling of AI as "cognitive engineering" is helpful or even accurate: there is definitely real science involved in AI which takes it well beyond the realm of engineering. I think, for example, of Judea Pearl's work on heuristic search, or Minsky and Papert's on perceptrons, or, more recently, Sutton and Barto's on reinforcement learning.)

3. (Non-)Definitions: Intelligent, Intelligence, Thinking

The above definitions of course raise an obvious question: what is meant by intelligent or intelligence? The ineffability of these words led Alan Turing to his eponymous Test (Turing, A. M., 1950, "Computing machinery and intelligence", Mind, 59, pp. 433–460). His position amounts to a version of the old saying "I may not know much, but I know what I like": in the context of his discussion of the objection that a machine could never be conscious, he says:

[Solipsism] may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe 'A thinks but B does not' whilst B believes 'B thinks but A does not'. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks. [emphasis added]

This is consistent with the Turing Test itself: intelligence is that quality which we impute to others when they exhibit the kinds of behaviour which we recognise in ourselves as involving thought: speaking, listening, writing; understanding what we see, hear, read; designing, crafting, creating; reasoning, planning, arguing, learning, problem-solving; feeling, guessing, loving, hating. Turing's position is that he sees no a priori reason why we should not in due course extend to some machines the same courtesy which we already extend to one another, and he offers his Test as a thought experiment to support this contention. It would take us too far afield to address the accusation that Turing's approach is merely behaviourist and thus too simple; suffice it to say that it is neither.

4. Definitions: Weak versus Strong

The boundary between AI and CogSci has always been porous, and the nature of their relationship a subject for disagreement. The labels "Weak AI" and "Strong AI" have been used to characterise two distinct positions in this space:

[T]he assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the strong AI hypothesis. [emphasis in original]

and

[S]trong AI---the claim that running the right sort of program necessarily results in a mind.
(Russell, S and P. Norvig, 2003, Artificial Intelligence, A Modern Approach, 2nd edition.)

We can imagine a similar pair of terms for CogSci, looking back in the other direction, as it were:

The strong CogSci hypothesis is that the only way to achieve intelligent behaviour in a non-biological artefact is by reconstructing human intelligence. The possibility of artefactual but distinctly non-human intelligence would in contrast be labelled the weak CogSci hypothesis.

Since I take causal embedding in the real world (in both directions) to be a necessary precondition for intelligence, I read all the above definitions as if they used phrases such as "suitably provisioned machine/artefact/...", meaning by that to cover not only the power of the underlying artefact and the sophistication of the programs running on it, but also the extent to which the artefact in question is causally embedded in the world.

5. Where do I stand?

I grew up intellectually in the Cognitive Science paradigm, with a bias towards the 'strong' end. That's still where my interest lies. I take it that the core proposition of Cognitive Science is that there is a level of description of human mental processes which is:

  1. distinct from (it's tempting to say something like 'higher than') the neural level;
  2. explanatory (that is, captures generalisations and makes predictions);
  3. essentially computational.

Point (1) is admittedly pretty much redundant, in that the use of the word 'mental' already presupposes a distinction between brain and mind. But it's by no means self-evident: indeed, many reasonable practitioners are at least agnostic about this claim, and some are downright antipathetic to it.

Point (2) hides a minefield, in that just what makes a description 'explanatory' has never been very clear to me; but that problem isn't unique to CogSci: it's common to the human sciences in general, and probably to other sciences as well.

Point (3) is the crux, of course. I have been most influenced in my belief that computation is constitutive of mentation by the work of Brian Cantwell Smith: see for example "The Owl and the Electric Encyclopedia" (Smith, B. C., 1991, Artificial Intelligence, 47, pp. 251–288), which reads as well today as it did when it was published. Substitute Kurzweil and Vinge for Feigenbaum and Lenat, its original targets, and its critique is equally devastating and very timely.

6. Picking up the conversation

The advantage of the computational medium is that it allows you to at least partially recover from the "If only I had said . . ." feeling after a conversation has ended. At one point near the end of the weekend someone set out her views on the artificial intelligence project and the relevance of neuroscience to it, and I was unable to articulate my disagreement before lunch ended. On reflection I've figured out what I was troubled by in what she said.

I understood her to be saying that she thinks the best way to get at a computational account of mental processes is to start from a (computational?) account of brain processes. But there's a fundamental property of computational systems, namely that semantics does not cross implementation boundaries (Brian Cantwell Smith again), which argues against her suggestion, at least as long as we believe the cognitive science project is correct in imagining the mind-brain relation to be modelled by the program-(virtual) machine relation. That is, the semantics of a formal system are independent of the semantics of any substrate which implements them. I can implement a Lisp interpreter in C++ or Snobol or Prolog or machine code or Lisp itself or in a feed-forward neural network or any other Turing-complete mechanism. To the person analysing the semantics of the Lisp program, the substrate doesn't matter. So, by analogy (and compellingly so, for at least a substantial part of the Cognitive Science community's history), properties of the neural substrate don't show up at the mental level, at least if that level is genuinely explanatory in its own right, i.e. has a semantics which is defined in its own terms.
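
By way of a concrete, if toy, illustration of the point, here is a minimal sketch of an evaluator for a tiny fragment of Lisp. It is written in Python only because some substrate has to be chosen; the evaluator, its name, and the little expression grammar it handles are simply illustrative assumptions, not a serious interpreter.

    # A toy evaluator for a tiny Lisp-like language, written in Python.
    # The host language is an arbitrary choice: the same rules could be
    # realised in C++, Prolog, machine code, or a neural network, and the
    # meaning of the Lisp program below would be unchanged.

    def evaluate(expr, env):
        """Evaluate numbers, symbols, (+ ...), (* ...) and (let (name value) body)."""
        if isinstance(expr, (int, float)):            # literal number
            return expr
        if isinstance(expr, str):                     # symbol: look it up
            return env[expr]
        op, *args = expr                              # compound expression
        if op == '+':
            return sum(evaluate(a, env) for a in args)
        if op == '*':
            result = 1
            for a in args:
                result *= evaluate(a, env)
            return result
        if op == 'let':                               # (let (name value) body)
            (name, value), body = args
            return evaluate(body, {**env, name: evaluate(value, env)})
        raise ValueError("unknown operator: %r" % op)

    # (let (x 3) (* x (+ x 1)))  evaluates to 12, whichever substrate
    # happens to be running the evaluator.
    program = ['let', ['x', 3], ['*', 'x', ['+', 'x', 1]]]
    print(evaluate(program, {}))                      # prints 12

Someone analysing the Lisp program reasons entirely at the level of these evaluation rules; the fact that they happen to be executed here by a Python interpreter, itself compiled down to machine code, never enters into the analysis.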

That's not to say that the question of how mental functioning is realised at the brain level is uninteresting, only that the answer to that question is largely decoupled from the question of what mental functioning actually is.