Research and the commodification of intelligence
Part I: The Receipts Are In
The machines have arrived at the gates of the academy. Not as barbarians, but as prodigies.
Consider the evidence. This year alone: DeepMind's and OpenAI's systems achieved gold-medal performance at the International Mathematical Olympiad, a rarefied competition where the world's brightest teenage minds wrestle with problems that often humble professional mathematicians. OpenAI's system solved 12/12 problems at the International Collegiate Programming Contest, outperforming teams from MIT, Oxford, and Cambridge. Joshua Gans published an AI-co-generated economics paper in Economics Letters: peer-reviewed, accepted, cited. The Sakana AI Scientist submitted a paper to ICLR that passed initial review before being withdrawn following community controversy.
The position that machines cannot conduct research has become untenable. We must now grapple with what this means.
The Thought Experiment
Imagine it is 2030. Every researcher has access to AI systems with effective IQs of 300, whatever that might mean, available as cheaply as cloud storage is today. These systems write grants, design experiments, draft papers, review literature, generate hypotheses, and yes, produce novel results. They work tirelessly, iterating through possibilities at speeds that make human cognition look geological.
Note that I am not claiming this will happen, only that it is close enough to the realm of possibility to merit thinking through what it would mean to inhabit such a world. This is a thought experiment, not a forecast.
So what happens to the research life of a university in a world of brilliant machines?
I recently sketched a little note that offers a simple economic framework for thinking through this question. The model is deceptively simple: research labs are production functions, combining human effort and AI capability to maximize the holy trinity of metrics: publications, citations, grant dollars. The kicker? AI capability, measured in terms of research competency, doubles every sixteen months (probably a very conservative estimate) whilst its price plummets.
(Note that I focus on STEM research here because it is more amenable to a simple stylized model; research in, say, the humanities is more diverse, harder to model, and may, appropriately enough, prove more stubbornly human, even in the face of brilliant machines.)
Think of it this way: if humans and AI are substitutes in research production (and the evidence suggests they increasingly are, whether we like it or not), then basic economics tells us that labs will shift toward the cheaper, more capable input. It's not malice; it's optimization in a system built to reward it.
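To make that logic concrete, here is one standard way to formalize it. I am assuming a CES (constant elasticity of substitution) production function, the workhorse for arguments of this kind; the note's exact specification may differ:

$$Y_t = \left(\alpha\, H_t^{\frac{\sigma-1}{\sigma}} + (1-\alpha)\,\left(q_t X_t\right)^{\frac{\sigma-1}{\sigma}}\right)^{\frac{\sigma}{\sigma-1}}$$

Here $Y_t$ is research output, $H_t$ is human research effort, $X_t$ is AI compute, $q_t$ is AI capability per unit of compute (the quantity doubling every sixteen months), and $\sigma$ is the elasticity of substitution. Under cost minimization, the ratio of spending on humans to spending on AI is proportional to $q_t^{-(\sigma-1)}$: when $\sigma > 1$, every doubling of $q_t$ cuts that ratio by a factor of $2^{\sigma-1}$, which is where the half-life formula in Part II comes from.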
But here's where it gets interesting, and perhaps unsettling: research isn't just production; it's a tournament. Grants flow to those who've already won grants. Citations cluster around the already-cited. Prestige compounds. In this model, when you combine exponentially improving AI with these winner-take-most dynamics, something rather dramatic happens: the research landscape doesn't just shift, it restructures.
The attention economy makes this worse. As AI-augmented labs flood journals and preprint servers with papers, the marginal value of each publication drops. We're already seeing this: acceptance rates plummet whilst submission volumes soar. The binding constraint becomes not production but attention: reviewers' time, readers' bandwidth, funders' focus. Labs optimize accordingly, pivoting toward whatever metrics still matter. The game changes, but the tournament continues.
The Challenge
Before I sketch the implications, I want you to pick a role and think carefully about what this vision of 2030 might look like for that role (perhaps for you):
- Senior PI with tenure: You run a lab, manage grants, set research directions
- Pre-tenure faculty: You're building a research program, racing the tenure clock
- Graduate student: You're learning to research whilst doing the labour of research
- Professional lab staff: You run experiments, maintain equipment, process data
- Research support staff: You manage grants, ensure compliance, handle administration
Write it down. What does your day look like? What skills matter? What's your value proposition when a subscription service can do much of what you currently do, only faster and cheaper?
When you’re done, read on.
Part II: Life in the age of research machines
My stylized model yields predictions that are simultaneously precise and challenging.
The Core Dynamic
The model hinges on a single parameter: the elasticity of substitution between human and AI research labour. When this exceeds one, when humans and machines become substitutes rather than complements, the relative demand for human research effort decays exponentially. Not linearly. Exponentially.
The half-life of human involvement in research? Sixteen months divided by (the substitution elasticity minus one): 16/(σ − 1) months. If humans and AI are reasonably substitutable, human research labour halves every year or two. The floor isn't zero (there are oversight requirements, embodied tasks, regulatory constraints), but it's low. Think skeletal crew, not thriving research team.
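To put numbers on that half-life, here is a minimal sketch. The code is mine, not the note's; it simply evaluates the 16/(σ − 1) formula quoted above at a few plausible elasticities, taking the sixteen-month doubling time as given:

```python
# Half-life of human research labour demand under the model's dynamic.
# Assumption (the note's figure): AI capability per dollar doubles every
# 16 months. The model's claim: the human-to-AI spending ratio then
# halves every 16 / (sigma - 1) months, where sigma is the elasticity
# of substitution between human and AI research labour.

T_DOUBLE = 16  # months for AI capability per dollar to double

def half_life(sigma: float, t_double: float = T_DOUBLE) -> float:
    """Months for human research labour demand to halve (requires sigma > 1)."""
    if sigma <= 1:
        raise ValueError("No decay: humans and AI are complements when sigma <= 1.")
    return t_double / (sigma - 1)

for sigma in (1.5, 2.0, 3.0):
    print(f"sigma = {sigma:.1f}: human labour halves every {half_life(sigma):.0f} months")
```

At σ = 2, human labour halves every sixteen months; at σ = 3, every eight. "Reasonably substitutable" does not have to mean "very substitutable" for the decay to bite within a single grant cycle.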
Senior PIs with Tenure: The Orchestrators
For tenured faculty, the model predicts a curious transformation. They don't disappear; they evolve into what we might call "AI orchestrators." Their labs hit the minimum human oversight threshold and stay there, whilst all additional funding flows into AI compute.
The successful ones, those with established networks, prestigious appointments, strong "θ values" in the model's terminology, capture an increasing share of grants. Why? Because when research output scales with AI spending and grants flow to those who've already won them, initial advantages compound viciously. The rich get richer, but the currency is compute, not personnel.
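A toy simulation makes the compounding visible. Everything here is my own illustrative parameterization (a hundred labs, a fixed grant pool, a hypothetical prestige weight θ = 1.2), not values from the note:

```python
import numpy as np

# Winner-take-most toy model: research output scales with compute budget,
# and next year's grants flow in proportion to output**theta. With
# theta > 1, past success buys disproportionate future funding, so small
# initial advantages compound.

rng = np.random.default_rng(0)
n_labs = 100
budget = rng.lognormal(sigma=0.25, size=n_labs)  # mildly unequal start
pool = budget.sum()                              # fixed total grant pool

theta = 1.2  # hypothetical: theta > 1 means prestige compounds

for year in range(10):
    output = budget                                # output scales with AI spend
    share = output**theta / (output**theta).sum()  # tournament allocation
    budget = share * pool                          # grants follow past output

top10 = np.sort(budget)[-n_labs // 10:].sum() / pool
print(f"After 10 rounds, the top 10% of labs hold {top10:.0%} of all funding")
```

Set θ = 1.0 and the initial distribution simply persists; the concentration is driven entirely by the feedback loop, which is precisely the rich-get-richer dynamic described above.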
The job becomes familiarly managerial: selecting problems worth solving, navigating ethical review boards, managing intellectual property, maintaining industry relationships. Less doing science, more conducting it, in the orchestral sense.
Pre-Tenure Faculty: The Squeezed Middle
Tenure decisions rely on relative performance; you don't need to outrun the bear, just the other hikers. But when the top tier has access to effectively unlimited AI scaling whilst you're bootstrapping with startup funds, the performance distribution stretches dramatically upward.
The mathematics is unforgiving: for pre-tenure faculty with below-median prestige markers, the probability of achieving tenure decreases monotonically with time. Not because they're getting worse, but because the bar is accelerating away from them. The system might maintain its traditional tenure percentages, but the threshold becomes increasingly about who can leverage AI most effectively, which correlates suspiciously with who already has resources to leverage.
Graduate Students: From Apprentices to Teaching Assistants
The model bifurcates graduate funding into the typical categories of research assistantships (RAs) and teaching assistantships (TAs). The RA demand? Collapses to the regulatory minimum required for human oversight. In computational fields like mine, it approaches zero.
But undergraduates still need teaching (at least unless we automate that, too), so TA positions remain. The result is perverse: PhD programs become primarily teaching-labour schemes with a vestigial research component. The apprenticeship model of graduate education, learning by doing alongside senior researchers, withers when the "doing" is outsourced to machines.
What remains is perhaps a compressed, intensive training in problem selection, AI orchestration, and ethics. Less a gradual apprenticeship, more a crash course in research management.
Laboratory Staff: Embodiment Matters
For professional research staff, the future depends on embodiment. In purely computational fields like bioinformatics, theoretical physics, and much of economics, demand for human staff approaches zero; what remains is a skeleton crew managing data pipelines and compute clusters.
But in fields requiring physical manipulation (wet labs, field work, animal studies), human staffing bottoms out at the safety and compliance minimum. For now, someone must still pipette, collect samples, handle specimens. Though even here, the growth happens in AI services, not human hiring. A fixed human core surrounded by expanding machine capability.
Research Administrators: From Proposal Writers to Compliance Officers
The model predicts a complete inversion in research administration. Currently, most staff support grant acquisition: editing proposals, managing submissions, crafting budgets. But when AI can write compelling grant applications in seconds, this function begins to evaporate.
What expands? Compliance, audit, data governance, AI usage oversight, security protocols. The bureaucracy doesn't necessarily shrink; it pivots. Instead of helping researchers win grants, administrators ensure they use AI responsibly. The paperwork shifts from promises to permissions.
The Uncomfortable Questions
This isn't technological determinism, nor is it a prediction; it is just a game: simplified economic logic played out under specific assumptions. But those assumptions seem increasingly plausible. If intelligence becomes a commodity, what happens to institutions built on its scarcity?
Even in a world with a 300-IQ machine, though, none of this has to happen. If this dystopian nightmare comes to pass, it will be because we chose it. (Or, rather, because we chose to do nothing to stop it.)
Universities, funders, and individual researchers face choices. We can reduce the substitutability between humans and AI, emphasizing skills and tasks that remain fundamentally human. We can raise oversight floors through regulation and professional requirements. We can flatten the tournament dynamics that concentrate resources. We can reweight metrics away from pure throughput toward quality, reproducibility, and social impact.
But each choice has costs. And the clock keeps ticking.
The Challenge
So I return to my challenge: In your chosen role, what does 2030 look like? More importantly, what should it look like? The future of the university isn't predetermined, but if we stick to our old incentives and behaviours, we might allow the world above to come to pass.
The academy has always adapted to technological change, from the printing press to the internet. But this feels different. We're not just automating the distribution of knowledge or even its discovery. We're automating intelligence itself.
And if the hypothetical world we just investigated does come to pass, the question isn't whether we can compete with machines at their own game. It's whether we have the resolve and grit to structure research into a different game entirely.
Disclosure
I am uncomfortable writing an AI disclosure statement because I worry that it frames the use of AI in an unhelpful way.
I use AI as a cognitive mirror and sparring partner. I throw ideas into it and ask it to challenge me, to push perspectives absent from my own thinking, and to criticize the content and style of my writing.
The process is iterative and nontrivial, and I do not feel that a reductionist label (of the sort that might fit a tool like a grammar checker) would be honest.