In this essay, I shall explore the question of whether such a thing as a thinking machine is a possibility. This will involve a discussion of definitions, both of ‘machine’ and of ‘thought’.
One cannot begin such a discussion without an account of Cartesian dualism, the intuitively sensible view which separates mind and body, thought and physical process. Descartes proposed that human beings were unique in that they had a spiritual soul, or mind, which was non-extended and without motion, existing in some ‘other’ realm. The body was an advanced mechanism, which was somehow moved by the soul. This thinking survived and evolved as vitalism.
‘Vitalism holds that life results not from biochemical reactions but from a vital force unique to living things. Whereas modern science sees life as resulting from the complex interactions of mechanistic parts forming an organic whole, vitalism sees life as suffused with a substance not found in non-living nature,’
such as ‘The Force’ of Star Wars. This conceptualisation has persisted in many forms to the present day. Descartes further suggested that animals, lacking the divine soul, were machines, and he would have argued against their ability to ‘think’ in the way that humans do. Whilst there is now a significant body of evidence supporting thought in higher mammals, another of Descartes’ suggestions persists. Descartes insisted that even if a machine could be created to look indistinguishable from a human, it could not pass itself off as human, for two important reasons: first, a machine could never master language in a realistic and meaningful way, and secondly, it could never act appropriately in all possible contingencies. In other words, a machine would give itself away through poor choice of language or inappropriate actions.
Descartes’ comments do seem to be a direct foreshadowing of the seminal work of Turing in 1950. Turing suggested that, whilst it was meaningless to ask whether machines can think, it could be profitable to consider whether a machine can act in such a way as to convince a human interlocutor that it is actually human. The Turing Test was devised expressly to test such a proposition. Turing himself believed strongly that it was only a matter of time before machines – specifically digital computers – could pass his test. Turing describes an imitation game involving an interrogator, a person and a machine. The interrogator is separated from the other person and the machine. Whilst the interrogator knows that there is both a machine and a person, she does not know which is which. A machine, therefore, is said to have passed the Turing Test if it is able to respond to the interrogator’s questions in such a way as to convince her that it is in fact the person. Specifically, Turing stipulated that a machine could be judged to have done well if an average interrogator had no more than a 70% chance of making the correct identification after five minutes of questioning.
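To make the protocol concrete, here is a minimal, purely illustrative sketch. The canned respondents, the naive judge and the scoring loop are inventions for the sake of the example; Turing specified only the setup itself – text-only questioning with hidden identities – and the rough five-minute, 70% benchmark:

```python
# Illustrative sketch of the imitation game described above; the respondents
# and the judge are stand-ins, not anything Turing himself specified.
import random

def human_respond(question: str) -> str:
    return "Let me think... I'd say yes, more or less."   # stand-in human answer

def machine_respond(question: str) -> str:
    return "Let me think... I'd say yes, more or less."   # mimics the human exactly

def naive_judge(transcripts: dict) -> str:
    """A stand-in interrogator: unable to tell the answers apart,
    she guesses at random which channel ('A' or 'B') is the human."""
    return random.choice(["A", "B"])

def run_trial(questions, judge=naive_judge) -> bool:
    """One session: two hidden text channels, one human and one machine;
    the judge must say which is the human. Returns True if correct."""
    assignment = {"A": ("human", human_respond), "B": ("machine", machine_respond)}
    if random.random() < 0.5:                              # hide which label is which
        assignment = {"A": assignment["B"], "B": assignment["A"]}
    transcripts = {label: [fn(q) for q in questions]
                   for label, (_, fn) in assignment.items()}
    guess = judge(transcripts)
    return assignment[guess][0] == "human"                 # True = correct identification

# Turing's benchmark, roughly: the machine is doing well if an average
# interrogator identifies the human correctly no more than 70% of the time.
questions = ["Do you ever dream?", "What does coffee smell like?"]
correct = sum(run_trial(questions) for _ in range(1000))
print("proportion correct:", correct / 1000)               # ~0.5 when the judge cannot tell
```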
Turing rather optimistically believed that machine intelligence would become acceptable in short order: “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” In fact, the Loebner Prize Competition, specifically set up for the task of running the Turing Test on hopeful machines, has become something of an AI embarrassment – in terms of how far we are from creating such thinking machines. Churchland (1995), as one of the observers of the test in 1993, admitted, ‘The truth is, none of the entrants was worth a damn, at least in terms of Artificial Intelligence.’ However, the fact that we are so far from the goal does not of itself negate Turing’s test as a good one. Churchland, in fact, does argue that the test is not a good one, and suggests that the Loebner Prize Competition only encourages programmers to fake intelligence in a very narrow range in order to ‘run off with the …prize’.
Many objections have been raised to the validity of the Turing Test. One such is that the test is in some way chauvinistic; there is an inherent assumption that a thinking machine must in some way be like us. As Matt Carter (2007) points out, ‘The idea that we might one day be able to construct some artefact which has a mind in the same sense that we have minds is not a new one. It has featured in entertaining and frightening fictions since Mary Shelley first conceived of Frankenstein’s monster.’ This theme is well-explored in the science fiction genre relating to Artificial Intelligence; be it Commander Data in the Star Trek series or Schwarzenegger’s Terminator, time and again artificially created beings fail to accord their behaviour and language with the intricate and largely unwritten expectations of human behaviour. But herein lies a distinction crucial to our discussion. Does a thinking being need to be human-like? In our drive to create intelligence, are we guilty of an ‘after-our-own-image’ pre-conception of what a thinking thing must ‘look’ like, in terms of its functioning? Whilst these are good questions, they do not adversely affect the Turing Test – Turing merely asserted that a machine passing the test could be described as having intelligence, not that only such machines could be thus described.
Other objections to Turing have rested on increasingly ‘clever’ programming which, in a five-minute period, could indeed dupe the man in the street. These objections miss the point that the Turing Test is specifically a situation in which the interrogator knows that one of her two subjects is a machine and is still unable to discover the true identity of the participants. Turing himself identified a variety of possible objections to his own test. He recognised that substance dualists would have a hard time accepting that a ‘body’ designed by humans could possibly be imbued with the ‘soul’ or ‘thinking thing’ which exists non-extended in another realm to the physical; they would necessarily deny that a machine passing the test was indeed ‘thinking’ in the way that humans do. However, the dualist camp has many difficulties to overcome, not least the question of why God couldn’t put consciousness into a machine if He saw fit. Turing also identified what Oppy terms ‘the head-in-the-sand’ objection: the implications of thinking machines – that we become inferior, that we have real concerns about being usurped or dominated – are so worrying that we ought not to pursue the goal. These objections, however, are not valid arguments against the possibility of thinking machines, simply an expression of fear of the result.
Artificial Intelligence (AI) is a label loosely applied to a range of possible entities. It would be helpful here to explore a definition. Perhaps the most persistent definitions have been suggested by Professor John Searle, of the University of California. He suggests that two interpretations of AI exist. Firstly, he identifies what he calls Strong AI. This is achieved when a machine matches or surpasses human intellectual ability. Specifically, Searle proposes this definition to deal with the belief that computer programming – software – can cause a machine to become conscious, or indeed be conscious in its own right. Searle suggests that proponents of Strong AI believe that ‘by designing the right programmes with the right inputs and outputs, they are literally creating minds’. He specifically refers his arguments to the computer, having given a wider definition to the term ‘machine’. Searle argues that if by ‘machine’ we mean a physical system capable of performing certain functions – which surely we do – then humans are a very special kind of biological machine, and given that humans can think, machines can think, QED. In Strong AI, Searle is identifying the belief that a computer – this very specific type of machine – can not only act intelligently, but can have actual states of mind. ‘According to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.’ On this view we are all digital computers, and this is more than an analogy: our brains are digital computers and our minds are computer programmes – the software to the hardware of our brain. This is the claim of Strong AI.
Weak AI, by contrast, is a more specific intelligence, not at all encompassing the range of human cognitive abilities. Weak AI is a system designed to do a narrowly defined task exceedingly well; it is a system which simulates a conscious mind in a specific respect. This presents us with the issue of defining what it is we mean by ‘intelligence’ or ‘thinking’, and whether we mean the same thing by both terms.
Does a thinking thing have to be intelligent, or vice versa? When IBM’s Deep Blue beat Kasparov in 1997, many heralded an AI success. Others argued that, since Deep Blue was ‘merely’ making computational choices without any conscious awareness that it was doing so, it wasn’t truly thinking or truly intelligent. Journalist Robert Wright commented on the match that whilst Kasparov felt down after losing the first game, it was improbable that Deep Blue could feel blue. He also reported the views of those against the whole notion of computers vs. humans in chess, who cited the fact that fork-lift trucks are never invited to weight-lifting competitions: perhaps not a perfect analogy. As Drew McDermott put it, ‘Deep Blue is unintelligent because it is so narrow. It can win a chess game, but it can't recognize, much less pick up, a chess piece. It can't even carry on a conversation about the game it just won. Since the essence of intelligence would seem to be breadth, or the ability to react creatively to a wide variety of situations, it's hard to credit Deep Blue with much intelligence.’ Whilst this sounds pretty damning for the AI camp, McDermott nevertheless concludes, ‘Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings.’ Computers such as Deep Blue can indeed help us to understand some of the processes of thought, and this version – Weak AI – is what Turing referred to when he suggested in 1950 that it would soon be unremarkable to say that machines could think.
Searle tracks changes in philosophical thought about the mind. He traces the development from Descartes’ substance dualism to the more palatable property dualism still defended today. He progresses through monism’s idealism, behaviourism and physicalism, showing where such arguments fail to account for causal relationships in the mind and paving the way for today’s more accepted viewpoint, that of functionalism. It is the functionalist viewpoint – that what makes something a desire or a thought or a feeling depends purely on its function, not at all on its internal makeup or engineering – which led to the computational model of the mind. Searle’s purpose in providing these definitions was to focus research on a realistic area, whilst being able relatively easily to dismiss the unrealistic, as he saw it, proposal of Strong AI. However, his refutation of Strong AI proved to be anything but easy. Searle proposed a basic thought experiment in order to debunk the idea – the Chinese Room.
The scenario is deceptively simple. A man inside a closed room receives input in the form of Chinese symbols. He has no knowledge of what the symbols mean, but does have a comprehensive set of instructions about how to select further Chinese symbols to output in response. Searle suggests that the instructions could be so clear and comprehensive that the ‘room’ could output appropriate responses to questions in Chinese. However, he argues pointedly, there is no point at which the man in the room understands Chinese. He neither understands the questions being put – indeed not even that they are questions – nor does he understand the answers he is providing; there is no understanding of Chinese. Searle here has provided an example of a machine that can pass the Turing Test. It could fool Chinese speakers, even if they knew that one of the two interlocutors was a machine. The output from the Chinese Room would be identical to that of a genuine, conscious Chinese speaker. However, there is nothing intelligently conscious about the Chinese Room, in terms of understanding Chinese. Searle’s thought experiment, then, asserts that the Turing Test is, in fact, no test at all for intelligence.
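A toy rendering of the room’s logic makes the point concrete. The ‘rulebook’ below is nothing more than an invented lookup table pairing a handful of Chinese input strings with output strings – emphatically not a workable conversational program, only a sketch of symbol manipulation proceeding without any understanding:

```python
# The "rulebook" is a lookup table invented for illustration. The program
# matches shapes and hands back the paired shapes; at no point does it (or
# its operator) attach any meaning to them -- which is exactly Searle's point.

RULEBOOK = {
    "你叫什么名字？": "我没有名字。",      # "What is your name?" -> "I have no name."
    "你喜欢茶吗？": "我非常喜欢茶。",      # "Do you like tea?"   -> "I like tea very much."
}

def chinese_room(input_symbols: str) -> str:
    """Follow the instructions: find the squiggle matching the input and
    return the squiggle the rulebook pairs with it."""
    return RULEBOOK.get(input_symbols, "请再说一遍。")   # default: "Please say that again."

print(chinese_room("你喜欢茶吗？"))
```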
Searle’s argument goes further than this, though. He sought to show that true AI is not possible, that representations and simulations of intelligence would always be shadowy ‘Platonic Cave’ reflections of the real thing. His arguments are intended to show ‘that while suitably programmed computers may appear to converse in natural language, they are not capable of understanding language, even in principle.’ More specifically, Searle is saying that formal computational systems – systems using the manipulation of symbols as a means of generating ‘intelligent’ output – can never think. This is a very strong claim, given that we don’t know how our own thinking comes about. Whilst it has intuitive appeal, there are many who have been unconvinced. I am inclined to accept Searle’s argument that computational shuffling of symbols according to given rules cannot create a mind, and that something else, quite possibly biological or emotional in form, is required. For now, there are many replies to the Chinese Room argument, and some popular antecedents of the original thought experiment too. Some of these are worth considering in more detail as we evaluate the implications of allowing that machines can think.
The systems reply is one of the most basic replies to the Chinese Room: whilst it may be true that the Searle in the room – the homunculus – has no understanding of Chinese, the room itself, as a whole system, does have understanding. This objection misses the point. As Searle was quick to point out, he could memorise the complex symbol-shuffling instruction manual, leave the room and operate from the middle of a field, and still not understand Chinese whilst performing equally well. He would still be taking the input, meaningless to him, running it through a complex system of programmed responses to squiggles and squaggles, and producing an output which, although meaningful to the outside world, would still be meaningless to him. Searle further suggests that the systems reply entails absurd consequences, such as identifying sub-systems of ‘mind’ in things like stomachs, thermostats and telephones: ‘If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive.’ What Searle is saying in response to the systems reply is quite simply that as long as the software is thought of in terms of manipulating symbols, no amount of playing with definitions will change the fact that symbols are being manipulated according to a set of instructions which do not in any way lead to actual understanding of what those symbols may mean. This in turn supports the position that computers cannot think. This position seems to me to be a tenable one.
Another related response to the Chinese Room argument is the Virtual Mind reply. This suggests that whilst understanding might not be attributable directly to the computer, or to the system, it is possible that a virtual mind is created, separate from the original computer or system, which does understand Chinese. As Cole states, ‘The claim at issue should be “the computer creates a mind that understands Chinese”. A familiar model is characters in computer or video games. These characters have various abilities and personalities, and the characters are not identical with the hardware or program that creates them.’ The objections raised against the systems reply apply here also. Penrose (2002) drew the analogy further through his consideration of the Chinese Gym variation – a room the size of India, with Indians, suitably enough, doing the processing part of the equation. Penrose concluded that it was highly implausible that such a system could generate an independently existing mind capable of understanding in its own right.
Searle also gives short shrift to the Robot reply, in which the Chinese Room apparatus is put into a robot which has sensory input from cameras, microphones and tactile sensors. In such a case, goes the reply, the ‘room’ would learn through a causal connection between the things it receives and what they represent. Searle simply tweaks the Chinese Room, having the input come from such devices instead of somebody physically putting pieces of paper into the room. He even adds an elaborate system of motion dampeners so that the man in the room is unaware that he is negotiating physical space. The notion remains unchanged – the man in the room is manipulating meaningless (to him) symbols according to instructions. He doesn’t understand what he is doing, in terms of attaching semantics to the input or output. The extra input he receives in the form of ocular or auditory information would be equally meaningless, and there would be nothing to attach meaning to – just extra work for the processor. Searle insists that ‘by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.’ There is no understanding created, either for the homunculus within or for the robot system.
The Brain Simulator reply to Searle’s thought experiment suggests that a computer which simulates every last firing of neurons and every last synapse of a real Chinese speaker’s brain would have to understand Chinese in the same way that the Chinese speaker does. Not so, claims Searle. His argument boils down to the fact that a simulation is just that – an image of the real thing. Just as a computer simulation of digestion should not be mistaken for actual digestion, nor should a computer simulation of understanding Chinese – or of thinking in general – be mistaken for actual understanding of Chinese, or actual thinking. Searle remains convinced that the phenomena of human thought – mental states and consciousness – are directly related to the chemical and biological make-up of actual human brains. No amount of simulating will produce anything other than a simulation.
Other detractors have resorted to the Other Minds reply – an extrapolation of the old chestnut of the problem of other minds. Just as I cannot be sure that you are conscious in the same way that I am, nor can I be sure that a computer is, irrespective of its behaviours. However, I judge from your behaviour that it is highly probable that your mind functions largely as mine does; you do think and understand. Turing made this assumption thirty years before Searle came up with the Chinese Room: ‘Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.’ It is therefore possible to make the same case for the observable behaviour of a computer: if a computer acts as though it thinks, I should accord it that role. The Turing Test effectively does this, by extending the protocol to computers exhibiting thinking-like behaviour. Searle’s response is that we know the workings of a computer to be syntactical and not semantical. We know the computer is not thinking, because we know it is doing exactly what we told it to do – manipulating symbols according to a set of instructions. He also objects in more general terms to this argument from analogy, suggesting that the argument in the case of other minds is not from pure analogy, but from recognising similar causal relationships in similar systems: experience A on a body of type B will have a similar effect, C. Given this view, the argument from analogy falls down with reference to computers displaying ‘thinking-like’ behaviour, since the body, B, is of an entirely different nature – the causal-consequential relationship cannot be compared.
Many commentators on the Chinese Room have declared that the argument is based on an intuition that the man in the room – and so a computer – cannot possibly understand; the Intuition reply. Ned Block (1980) suggested as much directly in his original response to the Chinese Room: that it depended upon an intuitive feeling that machines do not think. Pinker (1997) goes so far as to say, “You can almost hear [Searle] saying ‘Aw, c’mon! You mean that the guy understands Chinese?!!! Geddadahere! He doesn’t understand a word!!’” He makes the convincing point that Searle has so slowed down the thought process – a man in a room with paper instructions would take millions of years to process convincing replies to questions in Chinese – that he has falsely concluded that understanding is not present. If, continues Pinker, the process were speeded up to real time, we would not be so quick to dismiss the behavioural evidence before us. Pinker also suggests that the whole Chinese Room debate is merely a discussion about how we use the word ‘understand’ – perhaps reminiscent of Wittgenstein’s assertion that all of philosophy was language. He suggests using another, less loaded word to escape the problem. In any event, argues Pinker (1997), the Turing Test is not a sufficient test for a thinking machine, and he suggests a harsher test for intelligence: ‘Could a mechanical device ever duplicate human intelligence, the ultimate test being whether it could cause a real human to fall in love with it?’, referring to an old episode of The Twilight Zone.
Some opponents of Searle’s case take the view that newer advances in computer science render his objections invalid. The example of parallel computing and neural networks, termed ‘connectionism’, is one of these. By modelling computers on the brain, arranging layers of neuron-like processing units which interact with one another, connectionism promises a closer analogy to the human brain than traditional von Neumann computers. Connectionism has moved a long way since the original experiments were buried by the ‘exclusive or’ problem, and neural nets now produce fascinating results. In terms of facial recognition, for example, Garrison W. Cottrell has been able to train such a machine to be 100% accurate in recognising something as a face and 81% accurate in gender recognition on new samples on which the net had not been trained. This would certainly suggest that the machine has ‘learnt’. But despite optimism from some quarters and a conviction that Searle’s Chinese Room doesn’t impinge on the success of such machines, Searle dismisses the distinction – ‘The parallel ‘brain-like’ character of the processing…is irrelevant to the purely computational aspects of the process.’ Whilst these machines appear to learn – even to the extent of having ‘teacher computers’ to regulate connection strengths between layers – the fact that anything done on them can also be done, albeit more slowly, on a serial machine (or a universal Turing machine) shows that there is no advance here other than that of speed. This seems to me to be a very strong case – if it is true that speed alone is the advantage of such a system, it would be hard to argue that the system can therefore understand, simply by virtue of that quality.
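The ‘exclusive or’ difficulty mentioned above, and the way an extra layer of units dissolves it, can be shown in a few lines. The sketch below uses hand-set weights rather than learned ones, and it is of course nothing like Cottrell’s face-recognition network; it merely illustrates the kind of layered, neuron-like processing being discussed:

```python
# A single threshold unit (perceptron) cannot compute 'exclusive or', but a
# two-layer network of the same units can. Weights here are set by hand for
# clarity rather than learned from data.

def step(x: float) -> int:
    return 1 if x > 0 else 0

def xor_net(x1: int, x2: int) -> int:
    h_or = step(x1 + x2 - 0.5)         # hidden unit 1: fires for OR
    h_and = step(x1 + x2 - 1.5)        # hidden unit 2: fires for AND
    return step(h_or - h_and - 0.5)    # output: OR but not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```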
Other responses – not specific to Searle’s arguments, but attempts to advance Strong AI – include the CYC programme: an attempt to load a massive database with all the accumulated ‘common sense’ of humanity, in an effort to overcome the classic ‘frame problem’, the difficulty an intelligent agent faces in sifting relevant from irrelevant information. It is a controversial project, with many feeling that no matter how large the database, this wealth of common-sense ‘knowledge’ is not what consciousness stems from. The advent of defeasible reasoning – the ability to override default assumptions in the light of new experiences or extra information, as sketched below – also suggests, to some, that progress is being made towards actual thought and learning.
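A deliberately trivial sketch of a defeasible default shows the shape of the idea; the ‘birds fly unless…’ rule below is the textbook example, not anything drawn from CYC itself:

```python
# Defeasible reasoning in miniature: a default conclusion ("birds fly") is
# drawn, then retracted when more specific information arrives. The rules and
# facts are invented purely for illustration.

def can_fly(facts: set) -> bool:
    # Default rule: a bird is assumed to fly...
    conclusion = "bird" in facts
    # ...but the default is defeated by more specific knowledge.
    if "penguin" in facts or "broken_wing" in facts:
        conclusion = False
    return conclusion

print(can_fly({"bird"}))                 # True  -- the default assumption holds
print(can_fly({"bird", "penguin"}))      # False -- new information overrides it
```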
It is important here to backtrack. My original question was whether machines could think. Searle has never argued that they cannot. His arguments are a refutation of strong AI – that software programmes in and of themselves could create a mind. His argument is basically as follows. He establishes three axioms: i) computers are syntactic formal systems (they manipulate abstract symbols devoid of any intention or meaning), ii) human minds have semantic content (meaning and intentionality) and iii) syntax alone cannot produce semantics. His conclusion is that ‘programs are neither constitutive of nor sufficient for minds.’ The whole Chinese Room debate was sparked by an attempt to illustrate the third axiom of this argument.
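The structure of the argument can be laid out compactly as follows; the A1–A3 and C labels are editorial shorthand rather than Searle’s own notation:

```latex
% A compact restatement of the argument just described (requires amsmath).
\[
\begin{array}{ll}
\textbf{A1:} & \text{Computer programs are formal (syntactic).}\\
\textbf{A2:} & \text{Human minds have mental contents (semantics).}\\
\textbf{A3:} & \text{Syntax by itself is neither constitutive of nor sufficient for semantics.}\\
\hline
\textbf{C:}  & \text{Therefore, programs are neither constitutive of nor sufficient for minds.}
\end{array}
\]
```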
For us to be able to assess whether a machine can think – can have conscious experiences – we need to define what that is, and the problem here is that there is no easy answer. In trying to establish whether machines – in theory at least – can be conscious, we find that we actually don’t know a great deal about what it is to be conscious. Stuart Sutherland writes, "Consciousness: the having of perceptions, thoughts and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means... Nothing worth reading has been written about it." Searle gives a more pragmatic definition – "'Consciousness' refers to those states of sentience and awareness that typically begin when we awake from a dreamless sleep and continue until we go to sleep again, or fall into a coma or die or otherwise become 'unconscious'." – and Dennett concurs, "The improvements we install in our brain when we learn our languages permit us to review, recall, rehearse, redesign our own activities, turning our brains into echo chambers of sorts, in which otherwise evanescent processes can hang around and become objects in their own right. Those that persist the longest, acquiring influence as they persist, we call our conscious thoughts."
If we are to ascertain whether man-made mechanisms can attain conscious thought, we need first to be sure of what it is we are looking for. Searle makes the strong case that it is not computational symbol manipulation. Neuroscientists now work with a wide definition of consciousness, which basically incorporates all of our “‘experience’ – of Life, subjectively understood. Experiences, it is widely held, have a special qualitative character”, thus giving rise to the notion of qualia. A quale is held to be the qualitative character of an experience. Damasio (2000) describes qualia as ‘simple sensory qualities to be found in the blueness of the sky or the tone of a cello’. It is widely thought that to be able to define qualia, to explain them as phenomenal qualities, is to define consciousness.
Damasio puts forward some ideas about how the brain operates, and how, in particular, emotion plays an essential role in aiding and complementing our ability to understand and to function in the real world. He cites examples of patients with impaired emotional areas of the brain and their corresponding dysfunction as rational thinking beings. He outlines the interactive processes of the biological and chemical systems of the human body, involving far more than ‘just’ the brain, which seems to suggest that modelling the brain alone could not hope to reproduce consciousness. This has huge implications for the world of artificial intelligence. If consciousness and reactive thought are dependent to some extent upon emotional feedback, then the computational theory of mind is in serious trouble, and the notion of building minds through software advances seems more doubtful than ever. Dreyfus was a forerunner of such thoughts and presented a related argument: that the body is necessary for the global interaction required to create understanding. It seems to me that there are two, possibly distinct, points here: (i) the emotions are involved; (ii) more than just the brain is involved.
Aleksander (2007) describes a machine designed by Stan Franklin of the University of Memphis – the Intelligent Distribution Agent, or IDA. The purpose of the machine was to organise billeting for US seamen via e-mail. ‘The key feature is that the seaman using the system should not feel that there has been a change from human billeters to a machine in terms of the sensitivity and concern with which their case is handled.’ The machine receives a sailor’s preferences, current postings and skill set and matches this information to available billets before making a suggestion. The analogy here to the Turing Test is unmistakable, but Aleksander suggests that such a connection would be too superficial. Franklin’s machine, he says, has not passed the Turing Test since it has no phenomenological consciousness as such. Whilst it is not clear to me that such phenomenological consciousness was suggested by Turing, Aleksander says that it merely has ‘a functional stance that is sufficiently effective to leave users satisfied that they are interacting with a system that is ‘conscious’ of their needs.’ This example, though, introduces a very significant issue – the more recent versions of IDA include emotional input: ‘guilt’ for not meeting the needs of the sailor. The process of inputs interacting with various forms of computational memory, including the marriage of both internal and external stimuli, suggests what Franklin terms a ‘consciousness area’ which directs the process until ‘the “thought” is sufficiently well formed to activate an action-selection mechanism that communicates with the sailor and initiates a new set of internal and external inputs for further consideration.’ Franklin would certainly seem to be suggesting that his IDA can think, and it would seem that the programmed ‘guilt’, as a rudimentary nod to human emotion, might be a very important factor in the illusion of a concerned and compassionate thinking machine.
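Purely to fix ideas, here is a toy sketch of the matching-plus-feedback shape such a system might take. It is not Franklin’s IDA: the scoring rule, the acceptability threshold and the numerical ‘guilt’ signal are all invented for the illustration.

```python
# Toy billet-matching with a crude 'guilt' feedback signal. Nothing here is
# drawn from the real IDA architecture; it only illustrates the general shape.
from dataclasses import dataclass

@dataclass
class Billet:
    location: str
    required_skill: str

def score(billet: Billet, preferences: list[str], skills: list[str]) -> float:
    """Crude match quality between a sailor and a billet."""
    s = 2.0 if billet.location in preferences else 0.0
    s += 1.0 if billet.required_skill in skills else -1.0
    return s

def suggest(billets: list[Billet], preferences: list[str], skills: list[str],
            guilt: float) -> Billet | None:
    """Offer the best-matching billet, but only if it clears a bar that rises
    with 'guilt' accumulated from previous unsatisfactory suggestions."""
    best = max(billets, key=lambda b: score(b, preferences, skills))
    return best if score(best, preferences, skills) >= 1.0 + guilt else None

# Example: one good match available, plus a little accumulated 'guilt'.
billets = [Billet("Norfolk", "radar"), Billet("San Diego", "sonar")]
print(suggest(billets, preferences=["San Diego"], skills=["sonar"], guilt=0.5))
```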
It seems to me that to really get into the notion of a thinking machine, we need to get into the notion of how our own thinking is possible. Why is there anything, rather than nothing, in terms of our consciousness? Dennett (2007) suggests that this is the ‘Hard Problem’ of consciousness – explaining why we have qualitative phenomenal experiences – as opposed to the ‘Easy’ problems, which involve simply identifying the mechanisms by which we perform various functions, such as smelling a flower or recognising an old friend; it can, he says, be dismissed. Dennett offers a new take on the Simulated Brain discussion, proposing a situation in which Steven Pinker’s brain is being destroyed by a progressive brain disease. Following huge successes in the Easy Problems of consciousness, scientists are able to replace damaged sections, neuron by neuron, in such a way that neither he nor observers can tell any difference. Ultimately his entire brain is replaced. Steve thinks he is alive and well and continues to write and joke and feel pain. Dennett suggests that the problem is that there can be no test to prove or dismiss the claim that Steve is experiencing reality consciously – we are back with the other minds problem. ‘There could not be an objective test that distinguished a clever robot from a really conscious person.’ Dennett thinks we should just move on and accept this.
Can a machine think? Trivially, yes. We are machines – biological and chemical machines as opposed to silicon ones – but machines nonetheless. Evidently, we can think. Ironically, we may end up with a machine that can do what we do, that has a level of intelligence equal to our own, but still have no idea how it achieves this. It will not be clear, from opening up a neural network and looking at the strings of syntactic numbers and symbols therein, just how such semantic awareness was generated. Grim clearly states, ‘At the neuron level, your brain works in terms of syntactic-like impulses, not semantic-like meanings. We don’t know how semantics is produced from syntax – but we know that it is.’ To the question of whether we can build an artificial machine that thinks as we do, Searle is content to reply that there is no logical reason in principle why not, but for now it is still within the realms of science fiction. Brains give rise to consciousness by operating causally, and any machine that could produce consciousness or intentionality would have to do so by duplicating the presently inexplicable causal powers of the brain. If the neuron-by-neuron replication of a human brain – with emotional feedback and sensory apparatus – does one day produce a machine that can think, perhaps we’ll find that we were asking the wrong question. Perhaps the question is, ‘How come we think?’
BIBLIOGRAPHY
Books
Carter, Matt Minds and Computers Edinburgh University Press 2007
Clark, Andy Being There: Putting Brain, Body, and World Together Again MIT Press 1998
Churchland, Paul The Engine Of Reason, The Seat Of The Soul: A Philosophical Journey Into The Brain MIT Press 1995
Cottingham, J. The Philosophical Writings of Descartes Vol I. CUP 1985
Damasio, Antonio Descartes’ Error Vintage Press 2006
Damasio, Antonio The Feeling of What Happens Vintage Press 2000
Dennett, Daniel Kinds of Minds Weidenfeld & Nicolson 1996
Floridi, Luciano (Ed) The Blackwell Guide to the Philosophy of Computing and Information Blackwell Publications 2004
Lowe, E.J., An Introduction to the Philosophy of Mind CUP 2000
Mazis, Glen A. Humans, Animals, Machines SUNY Press 2008
Pinker, Steven How the Mind Works Penguin 1997
Searle, John The Mystery of Consciousness New York Review Books 1997
Stich Stephen P. and Warfield Ted A. (Ed) The Blackwell Guide to Philosophy of Mind Blackwell Publications 2003
Velmans, Max and Schneider, Susan (Ed) The Blackwell Companion to Consciousness Blackwell Publications 2008
The International Dictionary of Psychology. 2nd ed. New York: Crossroad, 1995
Internet resources
Cole, David ‘The Chinese Room Argument’ (2009)
http://plato.stanford.edu/entries/chinese-room/ Accessed 02/03/2010
Elton, Matthew ‘Persons, Animals, and Machines’, Science, Technology, & Human Values, Vol. 23, No. 4, Special Issue: Humans, Animals, and Machines (Autumn, 1998), pp. 384-398 Stable URL: http://www.jstor.org/stable/690139 Accessed: 27/02/2010
Lenat, Douglas B. ’From 2001 to 2001: Common Sense and the Mind of HAL’ 2001 http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/halslegacy.html January 7th 2011
McDermott, Drew ‘How Intelligent is Deep Blue?’ 1997 http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/mcdermott.html
More, Max, ‘Beyond the Machine’ in proceedings of Ars Electronica 1997. (FleshFactor: informationmaschinemensch), Ars Electronica Center, Springer, Wien, New York, 1997. http://www.maxmore.com/machine.htm Sept 2010
Oppy, Graham ‘The Turing Test’ The Stanford Encyclopedia of Philosophy 2008 http://plato.stanford.edu/entries/turing-test/
Searle, J. ‘Minds Brains and Programs’ Behavioral and Brain Sciences, 3, 417-57 1980 http://pami.uwaterloo.ca/tizhoosh/docs/Searle.pdf
Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460. http://loebner.net/Prizef/TuringArticle.html accessed 22nd December 2010
Wright, Robert, ‘Can Machines Think?’ Time Online, Monday 25th March 1996 http://www.time.com/time/printout/0,8816,984304,00.html accessed 10 October 2010
Other
Churchland, Paul and Patricia Smith, ‘Could a machine think?’ Scientific American 1990 p 32 – 37
Dennett, Daniel ‘The Brain: A Clever Robot’ Time Magazine January 18th 2007
Grim, Patrick SUNY Lecture 15 Artificial intelligence and Lecture 17 Attacks on AI TTC Video 2008
Hacker ‘Is there anything it is like to be a bat?’ Philosophy 77 (300):157-174. 2002
Searle, John, ‘The Philosophy of Mind’ Lecture Series The Teaching Company Limited Partnership 1996
Searle, John, ‘Is the brain’s mind a computer programme?’ Scientific American 1990 p26-31