Is Generative Artificial Intelligence Intelligent?


Over the past few years, like many others, I’ve been preoccupied with sorting out what generative AI is, and what it is not. Generative AI is a tool, not a collaborator. It is impressive, but it is not an author. It can serve as a tutor, but not as a teacher. It makes for a capable assistant, but not a romantic partner. 

When I first began thinking about these distinctions, my aim was simply to deny generative AI anything beyond the status of a mere tool. I found the technology fascinating, but never considered it close to being a cognitive agent, intelligent, or conscious. But lately, my mind has been fixated on a different thought: maybe agency, intelligence, and even consciousness are not the exalted phenomena we’ve made them out to be.

Considering this, my recent research has taken intelligence as its first target. My work has largely been an attempt to develop a stronger, more precise conceptualization—one that could help us recognize an intelligent machine should we ever encounter one. And you, as the reader of this post, now have a front-row seat to some of my latest thinking on the matter. 

My aim, however, isn’t to persuade you that this is the right way to think about intelligence. Instead, it’s to spark another conversation about generative AI and to open the door to what already is, and has been for a long time, an active philosophical and scientific area of inquiry. 

Intelligence 

When we humans use the word intelligence, we usually mean something about how smart a person is—a success term that marks or measures someone’s ability to do something well, whether mathematical, analytical, or creative. And we usually use it comparatively: humans to humans, humans to animals, animals to each other.

Unsurprisingly, the history of measuring intelligence is a checkered one. It is entangled with sexism, racism, and human essentialism, even when mixed with good intentions, such as efforts to improve education by classifying students into learning styles (see Stephen Jay Gould’s The Mismeasure of Man, 1981, for more).

That same spirit has been applied to AI. To determine whether a system is intelligent, we test it against benchmarks (Bubeck et al., 2023; Reuel et al., 2024). We give machines the kinds of tests that measure intelligence in humans—entities we already assume are intelligent, because being intelligent has long been central to how we define ourselves: Homo sapiens, man the wise (Dennett, 2017; Ford, 2006; Rosenberg & Rosenberg, 2012). 

Philosophers, however, are uneasy with this trend. For the most part, they agree that observable behaviour alone is not enough to settle the question of intelligence (Block 1981; Grzankowski 2024; Searle 1980). This is because we’ve learned the limits of behaviourism, in both its philosophical and psychological varieties. To really know whether an entity is intelligent, then, a natural move is to look inside: to ask what kinds of cognitive functions it instantiates, and how those functions enable it to interact intelligently (or, similarly, rationally or intentionally) with its environment and others (Grzankowski 2024; Chalmers 2025).

That strikes me as a worthwhile project. It isn’t the one that most interests me, but it remains a valuable one. The trouble, though, is that such a project is a non-starter without a clear conceptualization of intelligence—something both the philosophical and scientific literatures agree we lack (Gignac & Szodorai, 2024; Legg & Hutter, 2007). That is why attempts to define intelligence are a natural first step, and why defining it is the step I’ve been taking recently.

A Working Definition 

Here’s one definition I’ve been toying with: 

Intelligence is the capacity of a system or agent to execute functions or activities flexibly and adaptively via interaction with its environment.  

What follows from that? Well, if intelligence is understood as the capacity to execute functions or activities, then the more functions a system or agent can perform, the more intelligent it is. On this view, Turing-completeness (the capacity to compute any computable function) could be taken as a marker of maximal intelligence, since it measures the breadth of tasks a system can in principle carry out. If so, and given the common assumption that intelligence reflects the range of cognitive capacities a biological agent can exercise, it follows that, at the other end of the scale, even a system capable of realizing just a single function (so long as it does so flexibly and adaptively in interaction with its environment) possesses intelligence to some degree. This is not especially surprising, however, since even very simple organisms can reasonably be described as intelligent (Barron, 2023).
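
To make “the capacity to compute any computable function” a little more concrete, here is a minimal sketch of the kind of device that phrase is defined against: a tiny Turing-machine simulator. The simulator and the example machine (a unary incrementer) are illustrative choices of mine, not anything drawn from the works cited here. A system counts as Turing-complete if, given unbounded memory, it could simulate any rule table of this form.

```python
# A minimal Turing-machine simulator (an illustrative sketch only).
# The example machine below appends a '1' to a block of '1's: unary increment.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, head_move, new_state); head_move is -1, 0, or +1."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Unary increment: scan right over the 1s, write one more 1 on the first blank, then halt.
increment_rules = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", 0, "halt"),
}

print(run_turing_machine(increment_rules, "111"))  # -> 1111
```

Nothing hangs on this particular machine; the point is only that Turing-completeness concerns what a system could compute in principle, given enough memory and time, which is where the cultural and technological scaffolding discussed below comes in.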

Note also that this definition is intentionally deflationary. It divorces intelligence from social and psychological baggage and makes it possible to talk about humans, animals, and machines on the same conceptual plane. That may be useful, especially for AI policy and design, since understanding where to place AI in conceptual space seems necessary for making sense of its past, present, and future impact. Whether it is useful, however, remains to be determined. I also accept that this formulation is perhaps too deflationary as it stands, but I think it is headed down a fruitful path.

Moreover, this perspective may also explain why humans are often regarded as exceptionally intelligent. The human brain, though finite and biological, appears to approximate Turing-completeness (Turing, 1948). Strictly speaking, no biological system has infinite memory, but humans extend cognition through cultural and technological scaffolding—writing, mathematics, computers, and now generative AI (Turing, 1948; Clark & Chalmers, 1998; Humphreys, 2004; Brey, 2005). In practice, this makes our cognitive capabilities functionally unbounded.

Evidence for this comes not only from the generality of human capabilities but also from structural features of human cognition. Take the generativity of language: language is systematic and productive. Despite having a finite vocabulary, humans can generate and understand an unbounded range of meaningful expressions (Fodor, 1975). That is, at least, if you roughly accept a Chomskyan picture of language, which Geoffrey Hinton (the “Godfather of AI”) certainly does not (IASEAI, 2025). Roughly, Chomsky explains generativity by positing a universal grammar of hard-coded structures and symbols that grounds our linguistic competence, whereas Hinton attributes it to the emergent properties of statistical patterns between representations learned by neural networks.
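
As a toy illustration of that productivity (a sketch of my own, not a model of either Chomsky’s or Hinton’s actual proposals), the grammar below has a vocabulary of only six words, yet it defines an unbounded set of grammatical sentences, because a noun phrase can embed a further clause inside itself.

```python
import random

# A toy recursive grammar: a finite vocabulary that yields an unbounded set of sentences.
# (Illustrative sketch only; not Chomsky's or Hinton's actual models.)
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the cat"], ["the dog"], ["NP", "that", "VP"]],  # an NP can embed another clause
    "VP": [["sleeps"], ["sees", "NP"]],
}

def generate(symbol="S", depth=0, max_depth=3):
    """Expand a symbol; beyond max_depth, take the shortest option so expansion always halts."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word(s)
    options = GRAMMAR[symbol]
    choice = random.choice(options) if depth < max_depth else min(options, key=len)
    return " ".join(generate(s, depth + 1, max_depth) for s in choice)

for d in range(1, 5):
    print(generate(max_depth=d))  # deeper recursion permits longer, still well-formed, sentences
```

Increase the recursion depth and the sentences grow without bound, even though the vocabulary never does.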

I am partial to Hinton’s view. I don’t think the brain literally stores symbols; I am more sympathetic to theories on which the brain realizes symbol-like behaviour through patterns of activation across networks. This would mean that higher-level capacities like reasoning and language emerge from lower-level processes in the brain but are not necessarily reducible to them.
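
One crude way to picture the contrast (again, a toy sketch of my own, not a model of the brain or of any particular neural network): a “symbolic” store keeps the concept cat at a single inspectable address, whereas a “distributed” store spreads it across many activation values, so that no single unit means cat and related concepts end up with overlapping patterns.

```python
import numpy as np

# "Symbolic" storage: the concept CAT lives at one discrete, inspectable address.
symbolic_memory = {"cat": "a small domesticated feline"}

# "Distributed" storage: each concept is a pattern of activation across many units.
# No single unit stands for "cat"; the vectors below are made up for illustration,
# whereas real embeddings are learned rather than hand-written.
embeddings = {
    "cat":   np.array([0.9, 0.1, 0.8, 0.2]),
    "dog":   np.array([0.8, 0.2, 0.7, 0.3]),
    "stone": np.array([0.1, 0.9, 0.0, 0.8]),
}

def similarity(a, b):
    """Cosine similarity: relations between concepts are recovered from overlapping patterns."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(symbolic_memory["cat"])                              # direct lookup of a stored symbol
print(similarity(embeddings["cat"], embeddings["dog"]))    # high: overlapping activation patterns
print(similarity(embeddings["cat"], embeddings["stone"]))  # low: little overlap
```

On the second picture, nothing you could point to in the store is the symbol cat; what does the work is the pattern and its relations to other patterns.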

What follows from this is that you cannot just dissect a brain and find the belief “the sky is blue,” nor scan a neural network and spot the fact “Paris is the capital of France”; of this much I am confident. Intelligence and meaning instead emerge from what systems can do, that is, from the range of functions they can execute, rather than from any transparent feature of their architecture. Or, as Herman Cappelen and Josh Dever (forthcoming) put it, “Meaning isn’t in the weights.” This is a play on Hilary Putnam’s famous line that “meaning ain’t in the head,” which suggests that meaning is partly the contribution of an agent’s environment (Putnam, 1975). Hence the “interaction with its environment” clause in the definition above. The same holds for intelligence: it isn’t in the code or circuits but in the system’s structure, function, and worldly integration (maybe). And this point does not collapse into behaviourism.

What more there is to say is beyond the scope of a single post. But here’s the core idea: 

Generative AI is impressively capable. Its capabilities don’t come from approximating humanity in any straightforward way, but from its own structures, functions, and worldly integration. Even so, we should take seriously what these systems are, and recognize just how much we still don’t know.

The Challenge

So, I leave you with this challenge: 

Suppose generative AI is intelligent: where should we draw the line? Pick one role (author, collaborator, friend, etc.) and ask yourself:

  1. Should this role be open to AI, or reserved for humans? 
  2. On what grounds do you make that distinction: moral status, social norms, practical usefulness, or something else? 

 

Your goal isn’t to land on a final or definitive answer—I certainly don’t have one. The definition I provide above is an idea, not a finished theory. What matters is tracing where your intuitions lead you, and then pressing on them: why do you hold them, and are they justified? 

As an example, I will explore the role of Researcher. Even if AI is intelligent, or will soon be intelligent in a human-like way, my conviction remains that AI is not a Researcher. Although I believe its capabilities—should they advance further—would allow it to perform all the right activities, it remains the case that how such a system is embedded in a research environment is not enough. To be a researcher is also to be immersed in the ethics and sociality of doing research, something I think requires an environmental embedding like that of a human and not that of an AI, even if an AI’s environmental embedding is a step in the direction of intelligence.

Subscribe to the weekly posts via email here

 

References

  • Barron, Andrew. 2023. “All Animal Intelligence Was Shaped by Just 5 Leaps in Brain Evolution.” The Conversation, July 4. https://doi.org/10.64628/AA.wrahtpgng. 
  • Brey, Philip. 2005. “The Epistemology and Ontology of Human-Computer Interaction.” Minds and Machines 15 (3): 383–98. https://doi.org/10.1007/s11023-005-9003-1.

  • Block, Ned. 1981. “Psychologism and Behaviorism.” The Philosophical Review 90 (1): 5–43. https://doi.org/10.2307/2184371.

  • Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” arXiv:2303.12712. Preprint, arXiv, April 13. https://doi.org/10.48550/arXiv.2303.12712. 

  • Cappelen, Herman, and Josh Dever. (forthcoming). “A Hyper-Externalist Manifesto for LLMs.” In Communicating with AI: Philosophical Perspectives, edited by Herman Cappelen and Rachel Sterken. Oxford University Press. Accessed September 23, 2025. https://philarchive.org/rec/CAPAHM. 

  • Chalmers, David J. 2025. “Propositional Interpretability in Artificial Intelligence.” arXiv:2501.15740. Preprint, arXiv, January 27. https://doi.org/10.48550/arXiv.2501.15740. 

  • Chemero, Anthony. 2023. “LLMs Differ from Human Cognition Because They Are Not Embodied.” Nature Human Behaviour 7 (11): 1828–29. https://doi.org/10.1038/s41562-023-01723-5. 

  • Clark, Andy, and David Chalmers. 1998. “The Extended Mind.” Analysis 58 (1): 7–19. 

  • Coelho Mollo, Dimitri. 2024. “Intelligent Behaviour.” Erkenntnis 89 (2): 705–21. https://doi.org/10.1007/s10670-022-00552-8. 

  • Dennett, D. C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. First published as a Norton paperback. W. W. Norton & Company.

  • Fodor, Jerry A. 1975. The Language of Thought. The Language & Thought Series. Crowell. 

  • Ford, Kenneth M., ed. 2006. Thinking about Android Epistemology. AAAI Press & MIT Press. 

  • Gignac, Gilles E., and Eva T. Szodorai. 2024. “Defining Intelligence: Bridging the Gap between Human and Artificial Perspectives.” Intelligence 104 (May): 101832. https://doi.org/10.1016/j.intell.2024.101832. 

  • Gould, Stephen Jay. 1981. The Mismeasure of Man. Norton. 

  • Grzankowski, Alex. 2024. “Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.” Inquiry, January 11, 1–27. https://doi.org/10.1080/0020174X.2023.2296468. 

  • Humphreys, Paul. 2004. Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press. 

  • Legg, Shane, and Marcus Hutter. 2007. “Universal Intelligence: A Definition of Machine Intelligence.” Minds and Machines 17 (4): 391–444. https://doi.org/10.1007/s11023-007-9079-x.

  • Putnam, Hilary. 1975. “The Meaning of ‘Meaning.’” In Mind, Language and Reality: Philosophical Papers, Volume 2. Cambridge University Press. https://doi.org/10.1017/CBO9780511625251.014.

  • Reuel, Anka, Amelia Hardy, Chandler Smith, Max Lamparth, Malcolm Hardy, and Mykel J. Kochenderfer. 2024. “BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices.” arXiv:2411.12990. Preprint, arXiv, November 20. https://doi.org/10.48550/arXiv.2411.12990. 

  • Rosenberg, Leon E., and Diane Drobnis Rosenberg. 2012. “Chapter 9 - Biological Evolution.” In Human Genes and Genomes, edited by Leon E. Rosenberg and Diane Drobnis Rosenberg. Academic Press. https://doi.org/10.1016/B978-0-12-385212-0.00009-3. 

  • Searle, John R. 1980. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (3): 417–24. https://doi.org/10.1017/S0140525X00005756. 

  • Turing, A. M. 1948. “Intelligent Machinery.” Report for the National Physical Laboratory. Reprinted in The Essential Turing, edited by B. Jack Copeland, 410–32. Clarendon Press, 2004.

  • International Association for Safe & Ethical AI, dir. 2025. What Is Understanding? – Geoffrey Hinton | IASEAI 2025. 18:33. https://www.youtube.com/watch?v=6fvXWG9Auyg.  

Disclosure

My writing process is dialogical, involving conversations with others, with myself, and occasionally with Generative AI. Ideas in this post have grown out of this mix of conversations along with independent study, reading, and research. Nonetheless, the contribution is my own.

Your Challenger: Carson Johnston