Why It’s Difficult To Predict Where GPT And Other Generative AI Might Take Us

Gabriel A. Silva
6 min read · Apr 4, 2023
What IS GPT? Image credit: Getty.

Derek Thompson published an essay in The Atlantic last week that pondered an intriguing question: “When we’re looking at generative AI, what are we actually looking at?”

The essay was framed like this: “Narrowly speaking, GPT-4 is a large language model that produces human-inspired content by using transformer technology to predict text. Narrowly speaking, it is an overconfident, and often hallucinatory, auto-complete robot. This is an okay way of describing the technology, if you’re content with a dictionary definition. But it doesn’t get to the larger question: When we’re looking at generative AI, what are we actually looking at?”

In other words, while we can describe the immediate impact of these technologies and their current uses, longer-horizon predictions are much harder to make. It is very difficult to anticipate where generative AI technologies like OpenAI’s GPT will take us and what they will enable.

For comparison, Thompson reflects on two prior technologies that changed the course of human history: the steam engine and electricity. The steam engine “you might say … ‘is a device for pumping water out of coal mines.’ And that would be true. But this accurate description would be far too narrow to see the big picture. The steam engine wasn’t just a water pump. It was a lever for detaching economic growth from population growth.” And electricity is “a replacement for whale oil in lamps … But that description doesn’t scratch the surface of what the invention represented. [Electricity] enabled on-demand local power for anything — not just light, but also heat, and any number of machines that 19th-century inventors couldn’t even imagine.”

The point Thompson is making is that being able to describe what generative AI is today and how it works does not mean its evolution, future uses, and impact are predictable. There are too many degrees of freedom, too many paths these technologies can take, and too many combinations with other factors, which makes accurate prediction difficult or even impossible. So what ‘we are looking at’ depends on scale and context.

Thompson goes on to illustrate this by giving three viewpoints on how he sees GPT: “Sometimes, I think I’m looking at a minor genius. The previous GPT model took the uniform bar exam and scored in the 10th percentile, a failing grade; GPT-4 scored in the 90th percentile. It scored in the 93rd percentile on the SAT reading and writing test, and in the 88th percentile on the LSAT. … it’s using what passes for artificial reasoning, based on a large amount of data, to solve new test problems. And on many tests, at least, it’s already doing this better than most humans.”

“Sometimes, I think I’m looking at a Star Trek replicator for content … Your son, who loves alligators, comes home in tears after being bullied at school. You instruct ChatGPT to write a 10-minute, rhyming story about a young boy who overcomes his bully thanks to his magical stuffed alligator. You’re going to get that book in minutes — with illustrations.”

“Sometimes, I think I’m looking at the nuisance of the century … AI safety researchers worry that this AI will one day be able to steal money and bribe humans to commit atrocities. You might think that prediction is absurd. But consider this. Before OpenAI installed GPT-4’s final safety guardrails, the technology got a human to solve a CAPTCHA for it. When the person, working as a TaskRabbit, responded skeptically and asked GPT if it was a robot, GPT made up an excuse. ‘No, I’m not a robot,’ the robot lied.”

He closes his essay with one last analogy, one that really makes you think about the as-yet-unforeseen consequences of generative AI technologies — good or bad: “Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity … fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds … Our ancestors knew that open flame was a feral power, which deserved reverence and even fear. The same technology that made civilization possible also flattened cities.”

Thompson concisely passes judgment about what he thinks generative AI will do to us in his final sentence: “I think this technology will expand our minds. And I think it will burn us.”

A model for how human knowledge is expanded (and why it’s so hard to predict too far ahead)

Thompson’s essay inadvertently but quite poetically illustrates why it’s so difficult to predict events and consequences too far into the future. Scientists and philosophers have studied how knowledge expands from its current state into novel directions of thought.

These ideas apply directly to the current state of machine learning and AI. Understanding how human knowledge is discovered and expanded — essentially a meta form of knowledge in its own right — provides context for why we struggle to predict the course of technologies like generative AI, and why there is so much uncertainty, speculation, opinion, and debate about the current state of the art and what it implies.

Conceptually, the basic idea is that events such as discoveries, the evolution of ideas, the development and use of new technologies, and even biological and chemical progress necessarily proceed from what already exists to one of possibly several adjacent events or things. In other words, progress moves from ‘actualized events’ into the ‘adjacent possible’.

The collection of actualized events represents the current state of knowledge, where ‘knowledge’ is context dependent: it might be a set of ideas being explored, genes interacting in an organism and evolving in a species, or the current state of the art in machine learning and AI, such as GPT and other generative AI large language models (LLMs). This current state of knowledge is the ‘boundary knowledge’ defined by the set of currently actualized events.

As actualized events expand into the adjacent possible, the boundary knowledge expands, but not necessarily in any predictable or linear fashion. That’s why it’s so hard to forecast. Imagine a rugged coastline expanding out into an ocean of possibilities. The coastline can be curvy at times, or jagged, or one stretch can expand faster than the rest and form a peninsula. How boundary knowledge expands, in what directions, how fast, and with what consequences is very hard to predict; it depends on a combination of circumstances, interacting factors, and serendipity. However it expands, the idea is that it has to proceed through the adjacent possible.

Beyond these conceptual and philosophical descriptions, scientists have modeled these notions mathematically and statistically. They have also shown that real data sets seem to conform to these ideas.
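To make the idea of expansion into the adjacent possible a little more concrete, below is a minimal, illustrative sketch in Python of one family of such models: an urn scheme with ‘triggering’, in which every draw reinforces what already exists, and drawing something never seen before opens up a handful of brand-new possibilities. The structure is inspired by the kinds of urn models researchers have used to formalize the adjacent possible, but the function name, parameters (rho for reinforcement, nu for how many new possibilities a novelty triggers), and values here are assumptions chosen purely for illustration, not a reproduction of any specific published model.

```python
import random

def urn_with_triggering(steps=10_000, rho=4, nu=3, seed=0):
    """Toy urn model with 'triggering' (illustrative sketch only):
    each draw reinforces its own color, so the familiar becomes more
    likely; drawing a color for the first time (a novelty) adds
    nu + 1 brand-new colors to the urn, expanding what can happen
    next -- a stand-in for the adjacent possible."""
    rng = random.Random(seed)
    urn = [0]                # start with a single possible event (color 0)
    next_color = 1           # next unused color id
    seen = set()             # colors drawn at least once ("actualized" events)
    distinct_over_time = []  # running count of distinct events actualized

    for _ in range(steps):
        color = rng.choice(urn)
        urn.extend([color] * rho)       # reinforcement of what already exists
        if color not in seen:           # a novelty: the boundary expands
            seen.add(color)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        distinct_over_time.append(len(seen))
    return distinct_over_time

if __name__ == "__main__":
    history = urn_with_triggering()
    # With nu < rho the count of distinct events grows sublinearly and in bursts.
    for t in (100, 1_000, 10_000):
        print(f"after {t:>6} draws: {history[t - 1]} distinct events actualized")
```

Running the sketch shows the count of distinct ‘actualized’ events growing in an uneven, sublinear way: each novelty changes which novelties become reachable next, which is exactly why extrapolating the boundary very far ahead is so unreliable.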

With this context in mind, it is not surprising that it is so difficult to predict where generative AI models like GPT might go, or what their impact could be technologically, scientifically, or societally. We are witnessing and living through a rapid expansion of boundary knowledge.

This article was originally published on Forbes.com, where you can find this and other pieces written by the author.

Written by Gabriel A. Silva

Professor of Bioengineering and Neurosciences, University of California San Diego
