Can Computers Think?

Technology Jul 17, 2020

The short answer is “No”. The long answer is “It depends”. To be clear, there is no clear-cut answer to the question, because it and similar issues have been debated for centuries [1]. Answering the question also requires investigating the human in addition to the computer, since attributing the ability to think to a machine calls for discussions in philosophy and cognitive science. These fields are needed above all to define what the process of thinking is: if a computer is assumed to have the ability to think, there has to be a reasonable explanation of what it means to think.

Although these are fundamental philosophical questions in their own right, for this essay we assume that thinking is not an activity exclusive to the human species and that there is no non-physical component that makes a human principally different from any other life form in this universe. Based on these assumptions, one possible approach to enable computers to think would be to completely emulate a human brain. According to Bostrom, this approach is less a theoretical challenge than one of technological capability. It is, however, rather unlikely to succeed in the near future, since the required technologies are not yet developed and other paths seem more promising [2].

Although it is not yet possible to simulate a complete brain, some machines already exceed human performance in specific tasks long deemed very difficult for machines, such as the game of Go [3]. In some of these tasks, a computer can even fool humans into thinking that it is another human, or at least that the content it produces was authored by a human, in a significant number of cases [4]. As early as 1966, Weizenbaum wrote a chatbot that tricked some people into thinking they were chatting with a human [5]. Over the years, computers have also become increasingly good at Natural Language Processing (NLP) tasks. The GPT-2 model by Radford et al. is able to trick a significant portion of humans into thinking that texts it generates were written by a human [4:1]. One feature that enables this is that, in contrast to earlier work, the model can remember pieces of information mentioned several sentences back and refer to them. Furthermore, it successfully inserts relevant contextual information about the real world into text passages [6].
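To make this concrete, here is a minimal sketch of sampling a continuation from GPT-2. It uses the Hugging Face transformers library, which is an assumption of this example rather than the original tooling of Radford et al.; the model name "gpt2" refers to the publicly released weights.

```python
# Minimal sketch: sampling text from GPT-2 via the Hugging Face
# "transformers" library (an assumption of this example, not the
# authors' original release).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Can computers think? The short answer is"
# GPT-2 conditions every new token on the full preceding context, which
# is what lets it refer back to information from several sentences earlier.
output = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(output[0]["generated_text"])
```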

Fooling humans into thinking that a computer is a human, or that the content produced by a computer was produced by a human, also ties into the research of Turing, who in 1950 was probably the first researcher to ask whether computers can think. Instead of actually defining the terms necessary to answer the question, which he deemed too difficult, Turing opted to replace it with a different question, or rather a challenge: in the so-called “Imitation Game”, also known as the “Turing Test”, a computer has to convince a human interrogator via text messages that it is a human [7]. Convincing a human that a computer is a human does, however, not necessarily show that computers can, in fact, think [8].
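As a sketch of the protocol, the snippet below implements a deliberately simplified, single-party version of the game (Turing’s original setup involves two hidden parties); the `machine_reply` function is a hypothetical placeholder for any chatbot or language model.

```python
# A simplified sketch of the Imitation Game: an interrogator exchanges
# text messages with a hidden party and must judge whether it is a machine.
import random

def machine_reply(message: str) -> str:
    # Hypothetical placeholder: any chatbot or language model would go here.
    return "That is an interesting question. What do you think?"

def imitation_game(rounds: int = 5) -> None:
    hidden_is_machine = random.random() < 0.5  # hidden party chosen at random
    for _ in range(rounds):
        question = input("Interrogator: ")
        if hidden_is_machine:
            answer = machine_reply(question)
        else:
            answer = input("Hidden human, please answer: ")
        print("Hidden party:", answer)
    guess = input("Was the hidden party a machine? (y/n) ").strip() == "y"
    print("Judged correctly." if guess == hidden_is_machine else "Fooled!")

# imitation_game()  # run interactively
```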

Convincing a human that a computer is a human, or more generally achieving convincingly human-like performance, is so far only possible in very specific, isolated tasks. That is because current Artificial Intelligence (AI) research mainly focuses on solving specific tasks well with a computer instead of creating software able to reason about a multitude of problems the way a human can [9]. Researchers accordingly distinguish between these “narrow” AI systems and so-called Artificial General Intelligences (AGIs). OpenAI defines the latter as “highly autonomous systems that outperform humans at most economically valuable work” [10]. Goertzel and Pennachin refer to AGI as “a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions” [9:1]. By this definition, such an AGI would be able to think. Such a program does not, however, currently exist. A further question is thus whether a computer can think only if it is an AGI, or also if it is a “narrow” AI; that is, whether exceedingly good performance in a single task grants a machine the ability to think.

An argument against narrow AI having the ability to think is that thinking requires some form of self-awareness, which current computers lack. It could be argued that current computers merely execute instructions on the data they are given and do not do anything they are not told to do. In essence, computers execute a collection of algorithms, where an algorithm is defined by the ISO as a “finite set of well-defined rules for the solution of a problem in a finite number of steps” [11]. Thus, if the instructions are not specified correctly or the input data is skewed, the output of the algorithm may also be faulty. This hypothesis is supported by the fact that biased data leads to biased results. In her book Weapons of Math Destruction, O’Neil explores how using algorithms for real-world decision-making can increase inequality in society. If an algorithm is not carefully designed or its input data is not carefully selected, the inherent bias in the algorithm’s design and its input data may be reflected in its decisions [12]. This property is also known as “Algorithmic Bias” [13]. Seen this way, computers do not think about a problem and try to solve it; rather, by executing a set of instructions, they optimize a goal that is merely a proxy for the actual problem at hand, as the sketch below illustrates.
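The following toy sketch uses entirely made-up data (a hypothetical hiring scenario, not taken from [12]). The model never reasons about fairness; it only minimizes error on the historical decisions it is given, and so reproduces the bias baked into them.

```python
# Toy illustration (hypothetical data) of bias in training data
# propagating into an algorithm's decisions.
from sklearn.linear_model import LogisticRegression

# Each row: [qualification_score, group]; group is encoded 0 or 1.
# In this made-up historical data, group 1 was systematically rejected
# even at high qualification scores.
X = [[0.9, 0], [0.8, 0], [0.4, 0], [0.9, 1], [0.8, 1], [0.4, 1]]
y = [1, 1, 0, 0, 0, 0]  # past hiring decisions (biased)

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates from different groups:
print(model.predict([[0.85, 0], [0.85, 1]]))  # likely [1, 0]: bias reproduced
```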

Another angle on the definition of thinking is the ability to form abstract ideas about reality. Recent machine learning applications use Deep Neural Networks (DNNs) for object classification [14]. These DNNs reason about images using structures inspired by the biological neurons in human brains. If a DNN, for example, classifies ostriches, its learned structure is in a way an abstraction of the idea of an ostrich, since the DNN can even recognize pictures of ostriches it has not seen before. If we assume that the thoughts of a human depend only on their physiology, then by this definition the structure of the DNN could be seen as the thought about the object it classifies. Furthermore, for DNNs there are so-called adversarial examples, which insert carefully crafted noise into an image of a different object (in the case of the ostrich, for example, a school bus) and thus trick the classifier into thinking that the picture shows the object to classify, in this case an ostrich [14:1]. It could be argued that the adversarial example, i.e. the crafted noise, might be what the classifier sees in the image of an ostrich and might be the idea or thought of an ostrich to the DNN. This notion of thoughts is very limited, however, since these supposed representations of thoughts are only valid in the narrow context of image recognition and do not apply to other settings. Because of these two points, because industry actors suggest narrow AI cannot think [15], and because there is no intelligent system performing on a human level yet [16], we conclude that current “narrow” AI does not have the capability to think.
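To sketch how such noise can be crafted: the snippet below uses the Fast Gradient Sign Method (FGSM), a simpler technique published after [14] (which used an L-BFGS-based method) but illustrating the same phenomenon; `model` is assumed to be any trained PyTorch image classifier.

```python
# Sketch of a targeted adversarial perturbation using FGSM. Note that
# [14] used a different, L-BFGS-based method; FGSM is a later, simpler
# technique shown here for illustration only.
import torch
import torch.nn.functional as F

def fgsm_targeted(model, image, target_class, epsilon=0.007):
    """Nudge `image` toward being classified as `target_class` (e.g. ostrich)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_class)
    loss.backward()
    # Step against the gradient of the loss for the target class, so the
    # classifier's confidence in that class increases; with a small epsilon
    # the change remains imperceptible to a human.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```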

Because current “narrow” AI cannot think, we consider the prospect of AGI. Although an AGI is not yet available, it might not be very far off. In 2018, Grace et al. ran a survey investigating AI researchers’ predictions about the future of AI. One question asked the researchers to estimate the time until computers achieve “High-Level Machine Intelligence (HLMI)” [16:1]. Although the individual opinions varied widely, in aggregate the researchers believe that “there is a 50 % chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years” [16:2].

All in all, in certain domain-specific tasks computers do outperform humans. In particular settings, computers are also able to convince a person that they are human. Furthermore, computers have some way of abstracting ideas about the real world. However, current computer systems only blindly execute a set of instructions and only achieve very good performance in narrow, specific tasks. Furthermore, no AGI that is self-aware and can independently reason about ideas is available yet. Consequently, computers are not yet able to think. This conclusion holds, however, only under the assumptions made in this essay and might not hold under different assumptions and definitions.


  1. A. P. Saygin, I. Cicekli, and V. Akman. “Turing Test: 50 Years Later”. In: Minds and Machines 10.4 (2000), pp. 463–518.

  2. N. Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, 2014.

  3. D. Silver et al. “Mastering the game of Go with deep neural networks and tree search”. In: Nature 529.7587 (2016), pp. 484–489.

  4. S. Kreps and M. McCain. Not Your Father’s Bots. AI Is Making Fake News Look Real. Foreign Affairs. Aug. 2, 2019. url: https://www.foreignaffairs.com/articles/2019-08-02/not-your-fathers-bots.

  5. J. Weizenbaum. “ELIZA. A Computer Program For the Study of Natural Language Communication Between Man and Machine”. In: Communications of the ACM 9.1 (1966), pp. 36–45.

  6. A. Radford et al. “Language Models are Unsupervised Multitask Learners”. In: OpenAI Blog (2019).

  7. A. M. Turing. “Computing Machinery and Intelligence”. In: Mind 59 (1950).

  8. J. R. Searle. “Minds, brains, and programs”. In: Behavioral and Brain Sciences 3.3 (1980), pp. 417–424. doi: 10.1017/S0140525X00005756.

  9. B. Goertzel and C. Pennachin. Artificial General Intelligence. Springer, 2007.

  10. OpenAI. About OpenAI. url: https://openai.com/about/ (visited on June 22, 2020).

  11. ISO. “Vocabulary”. ISO/IEC/IEEE 24765:2017(E). In: Systems and software engineering (2017).

  12. C. O’Neil. Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy. Broadway Books, 2016.

  13. S. Hajian, F. Bonchi, and C. Castillo. “Algorithmic Bias: From Discrimination Discovery to Fairness-aware Data Mining”. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016, pp. 2125–2126. doi: 10.1145/2939672.2945386.

  14. C. Szegedy et al. “Intriguing properties of neural networks”. In: International Conference on Learning Representations. 2014. url: http://arxiv.org/abs/1312.6199.

  15. D. Shapiro. Can Artificial Intelligence "Think"? Forbes. Oct. 23, 2019. url: https://www.forbes.com/sites/danielshapiro1/2019/10/23/can-artificial-intelligence-think/ (visited on June 23, 2020).

  16. K. Grace et al. “Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts”. In: Journal of Artificial Intelligence Research 62 (2018), pp. 729–754. doi: 10.1613/jair.1.11222.


This is an essay I wrote for a university application.


Glossary

Notation Description
AGI Artificial General Intelligence.
AI Artificial Intelligence.
DNN Deep Neural Network.
HLMI High-Level Machine Intelligence.
NLP Natural Language Processing.
