Can machines (or computers) think?
What did Alan Turing have to say to that question? Well, he believed that the question was too meaningless to deserve discussion. In full, he wrote:
“The original question, ‘Can machines think?’, I believe to be too meaningless to deserve discussion.”
In other words, how can we even answer that question if we don’t really know what thinking actually is in the first place? After all, is forming a mental image thinking? Is recalling someone from one’s past an act of thinking? What about simply surveying (both non-verbally and non-subvocally) a scene in front of you or picking up a book from the floor? If all these cases are examples of thinking, then what do they all share? Indeed, do they share anything at all?
So now let’s also take the newer word “cognition”, which is a rough synonym for the word “thinking”. In 1966, Ulric Neisser (the “father of cognitive psychology”) wrote these words:
“[T]he term ‘cognition’ refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images and hallucinations.”
Then Neisser concluded in the following way:
“[G]iven such a sweeping definition, it is apparent that cognition is involved in everything a human being might do; that every psychological phenomenon is a cognitive phenomenon.”
Thus, because of these and other complications, Alan Turing suggested bypassing the question “Can a machine think?” entirely. Or at least he didn’t attempt to define the word “think”. Instead, he asked us whether or not a person would ever believe that he/she was having a conversation with another person (say, by letter, phone, behind a screen, etc.) when, in fact, he/she was actually conversing with a computer.
This was Turing’s well-known “test” of what it is to think (i.e., the Turing test).
Kurt Gödel would have nothing to do with this purely (as it were) behaviourist answer to the question “Can a machine think?” To him it didn’t matter if a computer or machine could (as it were) hoodwink people. What mattered to him was whether or not a computer can… well, really think.
Of course this position takes us back to square one.
One may therefore assume that Gödel believed that he had the correct definition of the word “think”. (Precise definitions of the word “think” may not matter that much in these cases anyway.) And that’s because he rejected Turing’s claim that computers will, in time, be able to think just as human beings think.
The following are some helpful definitions of certain relevant words within this context. These definitions also show us the interdefinable nature of these words and the fact that they constitute what may be called a “vicious circle”:
“simulation” — noun: imitation or enactment, as of something anticipated or in testing; the act or process of pretending or feigning; an assumption or imitation of a particular appearance or form; a counterfeit or sham.
“replication” — it generally involves repeating something. (Students of biology will know that the word is often used to indicate that an exact duplicate has been made, such as chromosomes that replicate themselves.)
“duplicate” — something that is an exact copy of something else.
“imitation” — something copied, or the act of copying.
Let’s go into more detail on Gödel’s position.
Gödel believed that Turing had conflated the simulation of thought with genuine (human) thought. This may be analogous to the manner in which a computer-screen simulation of a fire is not a fire itself. (Gödel never used such an example.) However, is this analogy perfect? Not really. Sometimes when you simulate you actually replicate (or duplicate) what it is you’re supposed to be simulating. Thus if I simulate someone running, then I will actually be running. (That’s if I actually physically move my legs in the same manner as someone running; rather than, say, comically pretend to be running or create a computer simulation of me running.) Similarly, if I “simulate” someone jumping off a cliff, then I will actually be jumping off a cliff.
So, in this context at least, perhaps the words “imitate” and “copy” are more accurate than the word “simulate”. So what about imitating (or copying) someone running? Well, that would actually be running too. More relevantly, what about a computer imitating (or copying) human thinking? Wouldn’t that itself be a case of thinking?
In any case, the word “imitation” was used by Turing himself in the following passage from 1950:
“I believe that in about fifty years’ time [i.e., in the year 2000] it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.”
It can be seen that Turing’s claims above aren’t too grandiloquent at all. Firstly, Turing gave it fifty years (i.e., up to the year 2000) before a machine could successfully “win” the imitation game. And that’s even though crude computers had already been built when Turing wrote these words. (Computers date back to the 19th century — and arguably before that — with Charles Babbage’s machines; and Turing himself began designing his own “practical universal computing machine” in 1945.) Secondly, Turing only gave “an average interrogator” around a “70 per cent. chance of making the right identification”. Added to that, this interrogator is given only “five minutes of questioning” in which to determine whether or not he’s talking to a machine. By contrast, one might intuitively expect most people to spot the fake conversationalist almost immediately.
Now what about the words “duplication” and “replication”?
If a computer successfully adds 2 to 2 and gets 4, then surely that is a duplication (or replication) of what humans do. Gödel, again, didn’t think so. That was mainly because he believed that when a computer carries out that addition, it is “merely” following a programme or abiding by a set of rules. (As the mathematical physicist Roger Penrose puts it, the computer doesn’t actually understand the “meanings” of the symbols “2” and “4” — see the note at the end of this piece.) But isn’t that also — at least partly — what humans do? What is it that humans do — in the case of addition — that computers don’t do? Sure, there may be additional things which occur in a human mind when he/she adds 2 to 2. However, none of that is essential to that act of addition. Thus a human may be imagining 2 apples being added to 2 apples. Or he/she may be hearing music when in the process of addition. However, all this is over and above the act of adding 2 to 2.
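To make the “merely following rules” point concrete, here is a minimal sketch (my own illustration, not anything from Turing or Gödel) of addition as pure symbol manipulation. Numbers are written as Peano-style numerals (“0”, “S0”, “SS0”, …) and addition is carried out by blindly applying two rewrite rules. Nothing in the procedure refers to what the symbols mean:

```python
def add(m: str, n: str) -> str:
    """Add two Peano numerals by applying only these rewrite rules:
         add(0, n)  -> n
         add(Sm, n) -> S(add(m, n))
    The procedure manipulates strings; it has no notion of quantity."""
    if m == "0":                  # rule 1: adding zero changes nothing
        return n
    return "S" + add(m[1:], n)    # rule 2: peel one S off m, prepend it to the result

def to_int(numeral: str) -> int:
    """Read off a Peano numeral as an ordinary integer (for display only)."""
    return numeral.count("S")

two = "SS0"
print(add(two, two))          # SSSS0
print(to_int(add(two, two)))  # 4
```

The machine “adds 2 to 2 and gets 4”, yet at no point does anything like the concept *four* enter the procedure — which is precisely the gap Gödel (and Penrose) insist on, and precisely what the critic of their view says is also true, at bottom, of us.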
As it is, Gödel had something else in mind here. He believed that when it comes to thought and mathematical reasoning (or at least when it comes to the “seeing” of a mathematical “truth” — see here), human beings transcend mere rule-following (or go beyond “algorithms” and the purely “mechanical”) and enter into another (as it were) realm… But that’s a subject for another day!
“[T]here is also a mystery about how it is that we perceive mathematical truth. It is not just that our brains are programmed to ‘calculate’ in reliable ways. There is something much more profound than that in the insights that even the humblest among us possess when we appreciate, for example, the actual meanings of the terms ‘zero’, ‘one’, ‘three’, ‘four’, etc.” — Roger Penrose (from his Road To Reality)