Why Hinton is wrong in his arguments about "(conscious) AI taking over"...
Recently, the Nobel Prize and ACM Turing Award winner Geoffrey Hinton relaunched his ideas and fears about "AI taking over" (see this recent interview),
where he claims that "AI is already conscious" because, among other things, it has goals, it can create subgoals and, if it were implanted in the biological tissue of a real biological entity, that biological entity would remain conscious and, as a consequence, the implanted device would also be conscious (yes, he said that). He made similar claims in his Nobel Prize interview:
First: the fact that AI systems have goals (always provided by humans) and create subgoals to achieve them is nothing new (see, e.g., the subgoaling procedure in the cognitive architecture SOAR, which relies on the means-ends analysis heuristic already used in the General Problem Solver developed by Newell, Shaw and Simon in 1959! I repeat: 1959!). So these two points do not make any sense. The fact that AI systems are able to invent new knowledge (going beyond subgoaling procedures) to solve new problems is not new either. For example: in 2018-2019, with my colleague Gian Luca Pozzato and our students, we developed a system that used the TCL logical framework, https://www.antoniolieto.net/tcl_logic.html, to invent new knowledge for solving problems via concept combination, blending and fusion (the paper is here: https://doi.org/10.1016/j.cogsys.2019.08.005). None of these things makes an AI system conscious: they are just general heuristic procedures that give a system yet another strategy (provided by us) to perform better on unknown tasks.
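To see how mechanical subgoaling is, here is a minimal, hypothetical sketch of GPS-style means-ends analysis (the toy operators and state names are invented for illustration, not taken from SOAR or GPS themselves): the system compares the current state with the goal, picks an operator whose effects reduce the difference, and, when that operator's preconditions are unmet, recursively sets up a subgoal to satisfy them.

```python
# Hypothetical sketch of means-ends analysis with recursive subgoaling.
# Each operator: preconditions that must hold, plus add/delete effects.
OPERATORS = [
    {"name": "drive-to-shop",  "pre": {"car-works"},  "add": {"at-shop"},    "del": set()},
    {"name": "repair-car",     "pre": {"have-money"}, "add": {"car-works"},  "del": set()},
    {"name": "withdraw-money", "pre": set(),          "add": {"have-money"}, "del": set()},
]

def solve(state, goals, plan=None, depth=0):
    """Achieve every goal, subgoaling on unmet preconditions."""
    if plan is None:
        plan = []
    state = set(state)
    for goal in goals:
        if goal in state:
            continue                      # no difference to reduce
        if depth > 10:                    # guard against infinite regress
            return None
        for op in OPERATORS:
            if goal in op["add"]:         # operator reduces the difference
                # Subgoal: first satisfy the operator's preconditions.
                result = solve(state, op["pre"], plan, depth + 1)
                if result is None:
                    continue
                state, plan = result
                state = (state - op["del"]) | op["add"]
                plan.append(op["name"])
                break
        else:
            return None                   # no operator helps: failure
    return state, plan

state, plan = solve({"car-broken"}, {"at-shop"})
print(plan)  # ['withdraw-money', 'repair-car', 'drive-to-shop']
```

The subgoal chain (need the shop, so need a working car, so need money) emerges from a dozen lines of plain search code: exactly the kind of procedure that has existed since 1959, with no consciousness required.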
In addition: the argument that an AI system can functionally replace biological cells/neurons/tissues without making the system subject to this replacement "unconscious" (and therefore, so Hinton's reasoning goes, making the AI system itself also "conscious") is complete nonsense, since it rests on confusions at many levels.
The first confusion concerns the distinction between functional and structural systems (discussed extensively in my book Cognitive Design for Artificial Minds, https://www.amazon.com/Cognitive-Design-Artificial-Minds-Antonio/dp/1138207950). The second, connected to the first, concerns the attribution of a "structural" (i.e. cognitive/biological) explanation to a functional component. This is a sort of ascription fallacy that is also described in the book (and that is very common nowadays).
In particular, this wrong ascription is based on the confusion between i) human-like computation and human-level performance and, as mentioned, between ii) functionally and structurally designed systems. Unfortunately, it leads to interpreting and explaining the obtained AI output in terms of the underlying biological/cognitive theory that explains the same output in the biological system.
More specifically, in the case of the bionic example Hinton made in his interview: of course we can have bionic systems that are integrated, via partial or total replacement, with biological cells and tissues, but functional replacement does not imply any biological/cognitive attribution (on this, please also see this paper: https://doi.org/10.3389/frobt.2022.888199).
Think, for example, of exoskeletons controlled via semi-invasive medical devices that record brain activity: these systems "function as" our biological counterparts and communicate well with other biological components, enabling locomotion. But would you say that they (i.e., the semi-invasive component implanted in our brain, to follow Hinton's unreasonable reasoning chain) are conscious just because the biological entity that has them implanted is conscious? That is plainly wrong and nonsensical.
These kinds of claims are completely unjustified and wrong from a scientific perspective, and they have nothing to do with the typical, legitimate concerns and discussions about the risks coming from the societal and ethical impact that AI, like any technology, has.
Antonio Lieto
University of Salerno and ICAR-CNR (Italy)
ACM Distinguished Speaker on Cognitively-Inspired AI
Author of “Cognitive Design for Artificial Minds”, Routledge (2021)
