The ChatGPT Debate: Are We Intelligent Enough To Understand ‘Intelligence’

Gabriel A. Silva
14 min read · Mar 22, 2023
What exactly is (natural or artificial) intelligence? And how will we know when we see it? Image credit: Getty.

In the 2016 science fiction drama Arrival, about first contact with aliens, the movie’s two protagonists, a linguist and a physicist, meet in a military helicopter on their way to decipher why the aliens came to Earth and what they want. The physicist, Ian Donnelly, introduces himself to the linguist, Louise Banks, by quoting from a book she published: ‘Language is the cornerstone of civilization. It is the glue that holds a people together. It is the first weapon drawn in a conflict.’ That scene sets the tone and pace for the rest of the movie, as Louise and Ian work against the clock to understand the aliens’ highly complex language in order to communicate with them.

We instinctively associate the use of language to communicate ideas, concepts, thoughts, and even emotions, with understanding and intelligence. Even more so when sophisticated grammars and syntax are able to communicate concepts and ideas that are abstract, creative, imaginative, or nuanced.

Last week, the influential American linguist Noam Chomsky, along with two colleagues, Ian Roberts and Jeffrey Watumull, published an opinion essay in The New York Times attempting to explain why existing machine learning and artificial intelligence (AI) systems, in particular large language models (LLMs) such as ChatGPT, “ … differ profoundly from how humans reason and use language.” And why “these differences place significant limitations on what these programs can do, encoding them with ineradicable defects.”

They go on to argue that “Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”

By the time The New York Times closed the comments section, there were 2,050 comments and opinions logged. Not surprisingly, the reactions from readers cut across a wide range of ideological perspectives and priorities. Many readers expressed agreement or disagreement with the technical arguments the authors attempted to make refuting the ‘intelligence’ of systems like ChatGPT. Much of the commentary focused on the societal, ethical, and political implications of emerging AI technologies.

Others expressed concerns about the erosion of other humanistic endeavors that such machine learning and AI tools might precipitate. One reader wrote: “Meanwhile, at many universities, humanities departments are being hollowed out. What Chomsky is describing here is a fundamental need for human-centered learning in history, philosophy, political science, languages, anthropology, sociology, psychology, literature, writing, and speaking. Those exact programs are being slashed right now by presidents, provosts, and deans at many universities. These corporate-minded administrators care more about the bottom line than actually educating students for the world they will live in. AI will be a useful tool, but it’s not a replacement for a human mind and an education in the humanities.”

And just today, OpenAI released GPT-4. This next evolution of GPT will be able to handle images in addition to text inputs, and OpenAI claims that it displays “human-level performance on various professional and academic benchmarks”.

What some of the experts think

In an attempt to explore this further, I reached out to several experts and asked them what they thought about the Chomsky essay and, more broadly, what intelligence is.

They came back with a wide range of reactions, opinions, and comments. In the last section of this article, we will briefly ask why concepts such as the ‘mind’ and ‘intelligence’ are so hard to define, let alone understand, and how that affects the notion of ‘intelligence’ when applied to systems like ChatGPT.

Eric Smith, Director of Artificial Intelligence, Data Analytics, and Exploitation at the Advanced Technology Center at Lockheed Martin Space, wrote back to me:

“When we use our human intelligence to properly interface with advanced tools like ChatGPT, we will acknowledge that the innovation of transformers [the type of artificial neural network ChatGPT is based on] has resulted in an amazing leap forward in the human-machine interface. … ChatGPT is designed to communicate with humans by mimicking what humans have written, and it does this with a level of humanness that is far beyond previous achievements; ChatGPT is an example of very effective artificial intelligence, but it does not possess, and was not envisioned to possess, intelligence in the human sense.

“The authors identify two very critical gaps in most instantiations of AI: 1) the lack of a causal model, and 2) a total reliance on data for inference and prediction, as opposed to the incorporation of our deep understanding of the physical world. Humans will send probes into deep space with the intent of autonomously interacting with environments that we have never before seen; we have no hope of anticipating all of the challenges that our probes might encounter, and providing them with a menu of courses-of-action given those challenges. Scientists and engineers who are working on the AI that will enable such deep space missions are acutely aware of the need to build an AI that does use the constraints of physics and the ability to construct causal models of the environment through interaction and self-experimentation.”

James R. Kozloski, Principal Research Scientist and Manager of Hybrid Biological-AI Modeling at the Thomas J. Watson Research Center at IBM, commented:

The authors “provide signposts for what problems need to be addressed to design AI that derives from the architecture of the human brain and mind. Today’s LLMs do lack the ‘innate, genetically installed’ grammar of thought (or in Yoshua Bengio’s words, the ‘inductive biases’) that brains are born with, thus limiting LLMs’ ability to create ‘complex sentences,’ ‘long trains of thought’, and ultimately ‘moral thinking’. Still, I don’t think the ‘operating system’ that the authors envision is as ‘stupendously sophisticated’ as they suggest. Being ‘stuck in a prehuman or nonhuman phase of cognitive evolution’ is likely only temporary, as many of the pieces of this operating system probably already exist. The speed with which artificial selection will drive the current pre-cognitive clade to more human-like function is sure to be breathtaking. We may in fact look back on this stage of AI development as anachronistic (wherein human-like language models preceded human-like reasoning models), but the future AGI will be judged not by the wandering steps it took to arise, but instead by whether, upon arrival, its cognition is more or less recognizably human.”

Others took more direct issue with the arguments in the Chomsky essay. Terrence Sejnowski, the Francis Crick Chair and Professor at the Salk Institute for Biological Studies, refuted one of the key examples the authors used in the essay to argue for a limitation of ChatGPT and the current state of the art in AI.

Chomsky, Roberts, and Watumull wrote: “Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, ‘The apple falls.’ That is a description. A prediction might have been the statement ‘The apple will fall if I open my hand.’ Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like ‘Any such object would fall,’ plus the additional clause ‘because of the force of gravity’ or ‘because of the curvature of space-time’ or whatever. That is a causal explanation: ‘The apple would not have fallen but for the force of gravity.’ That is thinking.”

As a rebuttal, Sejnowski had the following exchange with ChatGPT and posted it to a mailing list at Carnegie Mellon University that discusses neural networks, cognitive science, and computational neuroscience:

“His apple example sounded convincing, so I tested GPT on it.

Me: If I hold an apple in my hand and let it go what will happen?

GPT: The apple will fall to the ground due to the force of gravity.

Me: What about any such object?

GPT: Any object released from a person’s hand will fall to the ground due to the force of gravity.

Me: What would happen if there wasn’t a force of gravity?

GPT: If there were no force of gravity, objects would not be able to stay on the ground and would float away in all directions.

Chomsky defined thinking by the correct answers to these questions. Alas, the tide has turned.”
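
Sejnowski’s test is easy to reproduce. As a rough sketch, one could pose the same questions programmatically; the snippet below is illustrative only, assuming the openai Python package (pre-1.0 interface), an API key in the environment, and a generic chat model name, none of which reflect Sejnowski’s actual setup.

```python
# A minimal sketch of the apple test, assuming the openai Python package
# (pre-1.0 interface) and an OPENAI_API_KEY set in the environment.
# The model name and prompts are illustrative, not Sejnowski's exact setup.
import openai

questions = [
    "If I hold an apple in my hand and let it go what will happen?",
    "What about any such object?",
    "What would happen if there wasn't a force of gravity?",
]

messages = []  # keep the running conversation so each question has context
for q in questions:
    messages.append({"role": "user", "content": q})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed; any chat-capable GPT model works
        messages=messages,
    )
    answer = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    print(f"Me: {q}\nGPT: {answer}\n")
```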

In a further exchange, I asked Sejnowski to differentiate between the apple example in the Chomsky essay and his thoughts on whether LLMs like GPT are actually ‘intelligent’. He replied: “The only thing we know for sure is that LLMs are not human. Words like ‘intelligence’ and ‘understand’ have a spectrum of meanings: children versus adults, novices versus experts, humans versus dolphins. We don’t yet know how to fit LLMs into this spectrum.” This echoes a recent paper he wrote exploring these themes in depth.

Eberhard Schoneburg, Senior Lecturer for AI at the Chinese University of Hong Kong and Chairman of the Cognitive Systems Lab, explained to me how Chomsky extends his views on the development of language in humans to AI: “His main argument for his theories has always been that the ability to acquire language skills must be genetically determined because the grammatical structures we can create in our advanced human languages are too complex and complicated to be ‘learnable’. The skill must exist from birth, so we only need to fine-tune the semantics when we grow up. But we cannot learn the whole apparatus of constructing complex recursive grammars from scratch; we can only learn how to fine-tune the underlying skills and abilities, like an athlete can only train the muscles that are there from birth and determined by genetics. An athlete cannot learn to generate muscles, only train them to grow.

“[Chomsky and his colleagues] apply these lines of thinking to argue about the potential limitations of ChatGPT and related LLMs. The argument is that these models can only learn from the stochastic distributions of words and structures they encounter when scanning huge volumes of language data. Such models, therefore, don’t have any preset or underlying intelligence and judgment skills. They only build up probability models of expressions and their likelihood of appearance in certain contexts.”
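
To make the statistical idea concrete, here is a deliberately tiny, hypothetical sketch of what “building up probability models of expressions” means in miniature: a bigram model that estimates the probability of the next word purely from co-occurrence counts in a toy corpus. Real LLMs use transformer networks trained on vastly more data, but the underlying principle of predicting likely continuations from observed distributions is the same in spirit.

```python
# A toy "language model": next-word probabilities estimated from bigram counts.
# This is a pedagogical sketch of the statistical principle only; ChatGPT is a
# transformer trained on enormous corpora with a far richer architecture.
from collections import Counter, defaultdict

corpus = "the apple falls . the apple is red . the ball falls .".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probabilities(word):
    """Return P(next word | word) estimated from the observed counts."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("apple"))  # {'falls': 0.5, 'is': 0.5}
```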

He went on to say: “My own opinion about the ChatGPT debate is this: It is obvious that these LLMs still have substantial problems and limitations. However, one has to be fair. [These models have existed] only for a few years, and the progress and results achieved are impressive. LLMs and ChatGPT clearly mark another milestone in AI.”

In contrast, Gary Marcus agrees with Chomsky and the points the essay makes. Marcus describes himself as “a scientist, author, and entrepreneur” who “is a skeptic about current AI but genuinely wants to see the best AI possible for the world — and still holds a tiny bit of optimism”. In his view:

“Even though Chomsky’s argument surely could have used considerably more nuance … his overall point is correct: LLMs don’t reliably understand the world. And they certainly haven’t taught [us] anything whatsoever about why the world is as it is, rather than some other way. Ditto for the human mind.”

He responded to the essay by writing a long and detailed post arguing in favor of the core points made by Chomsky and his colleagues and attempting to refute a number of the criticisms made by other heavy-hitting experts, including Sejnowski.

Marcus then brought to my attention that Chomsky himself responded directly to Marcus’s post, stating: “If someone comes along with a physical theory that describes things that happen and things that can’t possibly happen and can’t make any distinction among them, it’s no contribution to physics, understanding, theory, anything. That’s LLMs. The high-tech plagiarism works as well, or badly, for ‘languages’ that humans cannot acquire (except maybe as puzzles) as for those they can. Therefore they are telling us nothing about language, cognition, acquisition, anything.

“Furthermore, since this is a matter of principle, irremediable, if they improve their performance for language it only reveals more clearly their fundamental and irremediable flaws, since by the same token they will improve their performance for impossible systems.”

Erik Viirre, a neurologist, Professor of Neurosciences, and Director of the Arthur C. Clarke Center for Human Imagination at the University of California San Diego, provided a very different perspective: “If we define [intelligence] functionally, i.e. the intelligent entity gets the ‘right’ answer, then there clearly is a mind-blowing trajectory of these systems to give not only answers that we would have thought of, but even new answers that most or all humans might not have thought of.

However, as a neurophysiologist, I have always been interested in the mind’s operation… which I would argue we have no clue about yet. Incredibly, we can take away your intelligence, imagination and sub-conscious thinking and give it back … with gaseous anesthesia, but how does this work?

In 21st century clinical neurology ‘cognitive fog’ is a feared situation both by people with it and their healthcare professionals. Impediments to ‘understanding’ are real, frustrating, and a literal threat to one’s humanity. Understanding ‘understanding’ … is a crucial goal of us in neurosciences and I suspect will be huge in the ongoing development of building [intelligence].”

Joseph Geraci, the founder and Chief Technology Officer at NetraMark and Associate Professor at Queen’s University in Canada, offered the following thoughts: “We understand that there are patterns for responses that the machine learned from its massive training set, and there are various ways that we can observe this patterning by asking clever questions. Even the conversation that Noam Chomsky et al. share in their article has signs of this learned structure to responses. We understand that these technologies can make errors, just as we can, and that these technologies are practically edging towards being able to produce compelling responses. Is ChatGPT intelligent? This depends on your definition, but personally, I feel strongly that they are not sentient in any way, and not even close to having the self-awareness of a rodent. This is why my own work is focused on augmented intelligence where we can provide clear hypotheses about what is found so that human expertise is enhanced.”

Rita J. King, Executive Vice President at Science House, expressed broader societal and humanistic concerns similar to a number of the reader comments: “Many reactions to ChatGPT reveal how little we understand about natural intelligence. We are not intelligent enough to fully understand our own minds, much less the implications of increasingly sophisticated forms of digitized interactivity. Many technologies these days are being grouped under the heading of AI. Imagine if we stopped thinking of AI as artificial intelligence and started thinking of it as Applied Imagination instead? We need to apply imagination to our relationship with our own creations to make life better for humanity before our hubris renders humanity obsolete. Humanities majors learn this — a field of study that is imperiled by the lucrative zeal to build technologies we don’t understand for reasons that are not clear. The evolution of technology is inevitable, but we’ve yet to fully apply our human superpower, imagination, to an intentional development process.”

Why are the ‘mind’, ‘intelligence’, and related cognitive concepts so hard to pin down?

The wide range of opinions and thoughts highlights how the debate itself poses an important, fundamental question: Why are the ‘mind’ and ‘intelligence’ so hard to define, let alone understand, in the first place? And why is agreement on the question itself so elusive?

Here is one way to think about this: In every other physical system that we know of, the “thing” being studied, the physical object, process, or even just the idea we are trying to understand, is observable or measurable. And if it isn’t, it is at least conceptually or intellectually accessible, in the sense that the end goal, the understanding we are trying to achieve, can be described or speculated about in a way we can share and communicate to others. In other words, we can at least write down what we are trying to understand, even if observing or probing it may be difficult or even impossible. If nothing else, the problem itself is understandable.

Consider this in the context of some of the most challenging topics in science that we presently know of. Take, for example, black holes. They are arguably among the strangest and most mysterious objects in the universe, and their interiors are impossible to observe directly. Beyond the event horizon, the gravity of a black hole is so severe that it bends space-time to such an extreme degree that, no matter in which direction you travel, everything moves towards the singularity at the center. Nothing can escape past this point, not even light. No information can be practically retrieved, even though it is not destroyed. The physics at the singularity is a complete unknown; we do not yet have the right physical laws and mathematical equations to describe or predict what happens there. Yet, despite all this, we at least know where our understanding of the physics breaks down. We can in a very real way explain the problem, the physical system itself, even though we can’t observe it or know what the answer is.

Here is another example: In the outer fringes of pure mathematics, abstract ideas born from pure thought strain the imagination, and attempts at following the threads of logical progressions from axioms to theorems can make one dizzy. Yet the objects of interest and study are those ideas themselves. What is being studied and how the problems are structured are accessible, even though their solutions may be exceedingly hard to understand, may be provably unsolvable, or may not even be known to be solvable at all.

This is not necessarily so with concepts such as the ‘mind’ or ‘intelligence’. But why? What is so fundamentally different about the brain, and how the mind emerges from it, that differentiates it from other physical systems?

Here’s the problem: The mind and its emergent characteristics, as products of the brain, are completely self-referential and closed off from the outside world and everything else in it. We don’t actually know what the physical world looks and feels like. We create internal perceptual models of what we think it is from information taken in through our five senses and the creativity with which our brains put all that information together. So defining and understanding concepts such as the mind, intelligence, self-awareness, and consciousness are very tricky pursuits.

Now extend these concepts to the evolving pace and sophistication of machine learning and AI, and try to compare that to the self-referential and very limited understanding we have of these concepts in ourselves and our fellow humans. One quickly begins to appreciate the magnitude of the problem, and the understandably huge range of philosophical, technical, and societal opinions.

Interestingly, and confounding things further, as the neural networks that LLMs such as ChatGPT are based on grow in size, they seem to acquire the ability to learn and make inferences about certain limited classes of things they were not explicitly trained on. In other words, there is emergent learning inside the model itself. How this occurs, and its implications, are an active area of recent research.
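
One frequently cited facet of this behavior is in-context (few-shot) learning: given only a handful of examples in the prompt, a sufficiently large model picks up the pattern and completes it without any additional training. Below is a minimal, hypothetical sketch, reusing the illustrative API setup from the Sejnowski example earlier; the model name and prompt are assumptions for demonstration.

```python
# A hypothetical sketch of in-context (few-shot) learning: the translation
# pattern is conveyed only by the examples in the prompt, with no retraining.
# Assumes the openai package (pre-1.0 interface) and an API key, as before.
import openai

few_shot_prompt = (
    "sea otter -> loutre de mer\n"
    "peppermint -> menthe poivrée\n"
    "cheese -> "
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response["choices"][0]["message"]["content"])  # typically "fromage"
```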

If nothing else, the progress and pace of machine learning and AI are forcing us to look inwards as much as outwards. We are humans finding ourselves. And the debate will no doubt continue for the foreseeable future.

By the way, at the end of the scene in Arrival where Ian meets Louise on their way to decipher the language of the aliens, Ian passes judgment on Louise’s line in her book about language being the cornerstone of civilization.

‘It’s great. Even if it’s wrong … the cornerstone of civilization isn’t language. It’s science.’ You can be the judge of that one.

This article was originally published on Forbes.com. You can check out this and other pieces written by the author on Forbes.

Gabriel A. Silva

Professor of Bioengineering and Neurosciences, University of California San Diego