We the Doomers: Taking a closer look at agency in “Agentic AI”
It is hard today to open the technology section of a newsfeed without hearing about the disruptive transformation that a multitude of AI agents will bring about in our lives. There is often a subtext of “runaway robots taking over the world” in many of these reports (Wired, 2023; Metz, 2023), but unlike the embodied runaway robot we have cast in our imaginations as a physical entity, these agents are invisible software entities. They can take control of our social and economic systems, from social media feeds to the software integrated into our financial systems, healthcare, education, and every other system that, over the decades, has been drawn into the “digital age.” In this age, instead of humans moving paperwork, we have software moving virtual “paperwork.”
As much as we are all tired of hearing about AI (at least I am, and it takes 100% of my research time), almost everyone reading this article has a perspective on it. At the two extremes, to use the pejorative labels, we have AI Boosters and AI Doomers. AI Boosters are a big-tent party with a wide range of personnel, including economists who believe in perpetual improvements to human welfare driven by continual disruption and productivity growth; our high-school buddy who has never failed to keep up with the latest tech gadget; machine-learning academics who are putting in 80 hours a week trying to beat performance benchmarks (or creating a new one if that doesn’t succeed); Nvidia stockholders; venture capitalists; the current US administration; and the Canadian Minister of AI (Solomon, n.d.).
AI Doomers, on the other hand, are a smaller but nevertheless heterogeneous camp. A partial tally would include machine-learning academics who are putting in 80 hours a week fighting to stop today’s slop from turning into tomorrow’s superintelligence; workers worried about job stability and employment; academics concerned about disruption and what the future holds; and Geoffrey Hinton. This is, of course, a shallow analysis, and most of us are probably somewhere in the middle, drawing from a mix of the two extremes. In my case, although it may seem that, as a faculty member in engineering working on AI, my stance should align with the booster camp, it is hard for me to shake off my ā̃tel bamponthi (leftist-intellectual) roots as a Bengali academic. If a line is drawn in the sand, I find myself in the latter camp, and that camp is the focus of the rest of this article.
The AI Doomer camp is quite an interesting one. It includes AI executives and computer scientists who have a back-of-the-envelope estimate of p(doom), their subjective estimate of the probability of an existential catastrophe brought about by runaway AI beyond our control (Wikipedia, n.d.; IMD, n.d.). These calculations have now captured the attention of important political figures like Gavin Newsom, who recently mentioned, on a podcast with Ezra Klein, that he listens carefully to very smart people who highlight the dangers that new generations of AI pose to humanity’s existence (Klein, 2025). This camp also includes social scientists, who have been increasingly concerned about the absence of agency and oversight that we, as a public, have over disruptive technologies (Future of Life Institute, 2023).
The two Doomer camps, one of which I will call the techno-centric view and the other the socio-centric view, present less-than-rosy views of AI agents, but they come from strikingly different backgrounds and perspectives and therefore deserve further discussion. What I offer here is my diagnosis of the divergence between these two perspectives, which, to me, lies in how each group views the agency of artificial agents versus the agency of our social institutions.
Techno-centric view of ‘AI doom’
The techno-centric view considers the artificial agent as something that can sense and observe, make inferences, plan and strategize, and act toward achieving its goal. Who sets the goals for these agents is less scrutinized; computer scientists are concerned mainly with how the agent can achieve those goals. These concerns include questions about better designs for coordinating multiple agents, communication protocols between agents, how agents learn and create plans, how agents update their knowledge in light of new information, and how to deal with information that may not be perfect. Improvements in the design of agents that address these questions have been underway for at least three decades, and what we see today is a progression along these trajectories.
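To make this view concrete, here is a minimal sketch of that sense-infer-plan-act loop. The class and method names are my own illustration, not any particular framework’s API; note that the goal arrives from outside the loop, a point I return to below.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal abstraction of an 'intentional agent'."""

    def __init__(self, goal):
        self.goal = goal     # the goal is supplied externally, not chosen by the agent
        self.beliefs = {}    # the agent's internal model of the world

    @abstractmethod
    def sense(self, environment):
        """Observe the environment (possibly imperfectly)."""

    @abstractmethod
    def infer(self, observation):
        """Update internal beliefs in light of the new observation."""

    @abstractmethod
    def plan(self):
        """Choose the next action expected to advance the goal."""

def run(agent, environment, steps=100):
    """The control loop: sense, infer, plan, act, repeat."""
    for _ in range(steps):
        observation = agent.sense(environment)
        agent.infer(observation)
        action = agent.plan()
        environment.apply(action)  # hypothetical environment interface
```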
I caution here against over-interpreting the word “agents” in this perspective. In this world, agents were viewed as an abstraction of an “intentional agent.” Computer scientists care deeply about abstraction; after all, progress in computing happened as we figured out higher and higher levels of abstraction, when programmers moved from writing programs on punch cards to expressing instructions in symbolic languages that other computer programs could understand. Artificial “intentional agents” were always designed to be an abstraction¹. Around the turn of the century, as computer systems became more complex, with wider networks of e-commerce, financial systems, business processes, and other transactions, “agents” became a sophisticated abstraction that avoided procedurally specifying every little instruction in code. Instead, it became attractive to specify broad goals and design procedures for a computational “agent” to achieve those goals.
Such agents were already being deployed in many real-world tasks. In fact, when I worked at IBM, I had the experience of working with such agents (and they were called that as well) that were designed with goals such as figuring out which aisle a bulk business order should be picked from in a warehouse: pretty boring stuff. With the advancement of reinforcement learning, later augmented with deep learning, the abstraction became higher still. It became possible to specify the environment the agent operates in (for example, what happens if an agent takes a certain action), formulate goals in terms of rewards, and use a variety of optimization algorithms to let the agent figure out a plan. Large language model–based agents are an evolution along this path of abstraction, where it is now possible to specify goals in natural language and hook the agent’s actions to other procedural code, or even have one agent’s action become an instruction to another agent.
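As a toy illustration of that reward-based abstraction, consider the sketch below; the one-dimensional world and its reward are invented for this purpose. The designer specifies only the dynamics and the reward, and a standard optimization loop (tabular Q-learning here) lets the agent discover its own plan.

```python
import random

# A toy one-dimensional world: the agent starts at cell 0; the goal is cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    """Environment dynamics and reward, specified by the designer."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(state):
    """Best known action in a state, with random tie-breaking."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(200):  # learning episodes
    state = 0
    while state != GOAL:
        # epsilon-greedy exploration
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The "plan" the agent discovered: move right in every non-goal state.
print({s: greedy(s) for s in range(N_STATES)})
```

Nothing in this loop asks where the reward came from; that question sits entirely with whoever wrote `step`.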
What is missing in this picture is the question of how the agent’s goal is set. It has always been the case that the goal is given externally. In applications where the agent is part of an organization, it typically falls to a human product manager to translate requirements into specific goals. Whether those goals concern mundane business processes or surveillance systems aimed at unhoused populations (Business Insider, 2017), the goal is external to the agent itself. As a result, in applications with broader social consequences, goals were never the question that captured the attention of computer scientists. Most viewed their role through the frame of technology as a means of making tasks more efficient. In fact, an early textbook on agents (Weiss, 1999) observes that the growing popularity of multiagent systems can be attributed to American-style individualism.
Socio-centric view of ‘AI doom’
The socio-centric view of agents developed in parallel to the techno-centric view, over a similar timeframe. “Agency theory” here is built on a formal microeconomic framework in which the agent is typically associated with a principal (an individual, organization, or institution with its own subjective “goals”). The principal is in control and initiates the relationship with the agent. Although the agent has its own goals, the principal can constrain the agent’s actions, since what the agent can do is conditioned on what the principal does as the initiator of the relationship. The result of the agent’s action affects both the principal and the agent. Typical examples of such relationships include employer–employee or service provider–consumer relationships, where both principal and agent are envisioned as human or social entities. The main questions, therefore, deal with what actions the principal can take so that it is in the agent’s interest to act in ways that achieve the principal’s goals (and, of course, the agent’s own goals as well). Complications arise when the principal cannot observe the actions of the agent, but with some liberal use of mathematics, problems of this nature can be addressed (Spremann, 1987).
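In its standard textbook form (a generic moral-hazard formulation, not Spremann’s exact notation), the principal designs a payment schedule while the agent privately chooses an effort, yielding a program like:

```latex
% x(a): stochastic outcome of the agent's hidden effort a
% w(.): payment schedule chosen by the principal
% V, U: principal's and agent's utilities; c(a): agent's cost of effort
% \bar{u}: the agent's outside option
\begin{align*}
\max_{w(\cdot),\, a}\quad & \mathbb{E}\big[V\big(x(a) - w(x(a))\big)\big] \\
\text{s.t.}\quad & a \in \arg\max_{a'}\ \mathbb{E}\big[U\big(w(x(a'))\big)\big] - c(a')
    && \text{(incentive compatibility)} \\
& \mathbb{E}\big[U\big(w(x(a))\big)\big] - c(a) \ \ge\ \bar{u}
    && \text{(participation)}
\end{align*}
```

The incentive-compatibility constraint is where the principal’s agency enters: the principal does not replace the agent’s optimization problem but shapes it.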
As in the techno-centric view, this perspective also assumes that goals are set externally (and hence follows the age-old principle of value neutrality). However, by shifting the focus to the principal rather than solely to the optimization problem of the agent, this view enables analysis of social interactions where the goals and agency of one actor impact the agency of others. Models of this nature have been widely used to understand interpersonal trust in the behavioural economics literature, where the one who trusts is the principal and the one who is expected to reciprocate is the agent (Charness & Dufwenberg, 2006). Similarly, the social contract between people and institutions has been analyzed under the utilitarian notion of maximizing the value of public good under reluctance to pay (taxation), with the electorate as the principal and agents of the state as, well, the agent (Bergman & Lane, 1990). In this socio-centric view, the agent is never analyzed in isolation; rather, the goals and actions of the principal are analyzed concurrently with those of the agent, and therein lies the key difference from the techno-centric view.
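As a concrete instance, the trust game used in that literature fits in a few lines of code; the endowment, multiplier, and amounts below are illustrative numbers of my own choosing, not Charness and Dufwenberg’s design.

```python
MULTIPLIER = 3  # each unit the principal sends is tripled in transit

def trust_game(endowment, sent, returned):
    """Payoffs in a one-shot trust game, framed as principal and agent."""
    assert 0 <= sent <= endowment and 0 <= returned <= MULTIPLIER * sent
    principal = endowment - sent + returned   # the one who trusts
    agent = MULTIPLIER * sent - returned      # the one expected to reciprocate
    return principal, agent

# If the agent reciprocates, trusting leaves both better off:
print(trust_game(endowment=10, sent=10, returned=15))  # (15, 15)
# If the agent keeps everything, the trusting principal is left with nothing:
print(trust_game(endowment=10, sent=10, returned=0))   # (0, 30)
```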
With that detour behind us, we return to the two perspectives of AI sceptics. To illustrate them, I take two Nobel laureates as examples. Geoffrey Hinton (Nobel Prize in Physics, 2024) has been one of the leading voices warning about the existential dangers posed by new classes of generative AI agents (Zuckerman, 2023). Daron Acemoglu (Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, 2024) has written extensively about how, unless people reclaim their agency and build inclusive, legitimate institutions, economic progress that relies solely on AI-driven technological improvements will not lead to broad-based improvements in human welfare (Acemoglu & Johnson, 2023). I view their perspectives as fundamentally rooted in the origins of the techno-centric and socio-centric views of agency, respectively: Hinton, as a celebrated computer scientist, approaches the problem from a techno-centric perspective, whereas Acemoglu, as a celebrated economist, adopts a socio-centric one.
For a long time, I considered the avoidance of the principal in the techno-centric view to be a blind spot. After all, it is hard to talk about the social harms of artificial agents if we ignore the agency of the principal that controls them. However, proponents of the techno-centric view argue that this may not matter, because the principal (in this case, the social and democratic institutions that regulate organizations deploying autonomous agents) faces monumental headwinds when it comes to regulation (Scherer, 2015). If the principal cannot act effectively, then ignoring it may not change the analysis. This argument carries some weight after all and, interestingly, converges with Acemoglu’s concerns about the erosion of institutional agency.
In the end, both perspectives converge on a common diagnosis: the ineffectiveness of the principal that controls the agent. Where they differ is in the solutions they propose. The techno-centric view focuses on the agent, with the optimism that computational (re)designs can enable better oversight of the goals of autonomous agents. The socio-centric view focuses on the principal, with the optimism that strengthening its agency will let our social institutions effectively control the agent. The solution may well lie somewhere in between.
¹ I refer the reader to this insightful podcast on the use of “abstractions” and their history in computer science.
The Challenge
Your challenge this week is to consider agency in agentic AI through the different lenses provided here. Reflect on the following:
- What are the five primary concerns you have regarding the deployment of autonomous agents?
- What potential pathways could address these concerns?
- Do these pathways align more closely with a techno-centric perspective or a socio-centric perspective?
- Is there an alternative pathway that falls within the blind spot of both approaches?
References
Acemoglu, D., & Johnson, S. (2023). Power and progress. PublicAffairs.

Bergman, M., & Lane, J.-E. (1990). Public policy in a principal-agent framework. Journal of Theoretical Politics, 2(3), 339–352.

Business Insider. (2017, December). Security robots are monitoring the homeless in San Francisco. https://www.businessinsider.com/security-robots-are-monitoring-the-homeless-in-san-francisco-2017-12

Charness, G., & Dufwenberg, M. (2006). Promises and partnership. Econometrica, 74(6), 1579–1601.

Future of Life Institute. (2023). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

IMD. (n.d.). AI on the brink: How close are we to losing control? https://www.imd.org/ibyimd/artificial-intelligence/ai-on-the-brink-how-close-are-we-to-losing-control/

Klein, E. (Host). (2025). The contradictions of Gavin Newsom [Audio podcast episode]. The Ezra Klein Show. https://podcasts.apple.com/us/podcast/the-contradictions-of-gavin-newsom/id1548604447?i=1000740622979

Metz, C. (2023, October 16). A.I. agents could replace some workers, experts say. The New York Times. https://www.nytimes.com/2023/10/16/technology/ai-agents-workers-replace.html

Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29, 353.

Solomon, E. (n.d.). Canada’s minister of AI. The Walrus. https://thewalrus.ca/evan-solomon-ai/

Spremann, K. (1987). Agent and principal. In Agency theory, information, and incentives (pp. 3–37). Springer.

Weiss, G. (Ed.). (1999). Multiagent systems: A modern approach to distributed artificial intelligence. MIT Press.

Wikipedia. (n.d.). P(doom). In Wikipedia. https://en.wikipedia.org/wiki/P(doom)

Wired. (2023). Statement on AI extinction risk. https://www.wired.com/story/runaway-ai-extinction-statement/

Zuckerman, A. (2023). Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI. MIT Sloan Management Review. https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
Disclosure
I use generative AI to identify grammatical errors and to make sentences more readable.
