On being Agentic Humans
At the start of this round of the Generative AI Challenge, I pushed back against the AI inevitability narrative being touted by tech companies and the media. As Audrey Watters warned, a false sense of inevitability can lead us to abandon our own agency and diminish our ability to shape the future. This forsaking of our own agency is particularly worrying at a time when we urgently need to consider what futures are possible and what futures are made more likely by ignorance, inaction, and passivity.
To reiterate where I began, ubiquity and inevitability are not the same thing. It is undeniable that generative AI is pervasive, both in conversation and in digital environments. As Alec Mullender points out, AI features are being added to existing technologies and workflows, often without the consent or knowledge of users. Rachel Sandieson similarly discusses how AI tools have started showing up in digital course materials, where they are seen to have the potential to enhance interactivity and to personalize the learning experience. In both cases, however, the long-term impact of these tools being added to existing digital environments is, as yet, unknown.
There is certainly discomfort in the unknowability of the future. However, because the future is not yet written, in spite of attempts to speculate and predict, we retain our agency to make choices that will shape that future.
Throughout this series, our challengers continually raised the issue of agency. Carson Johnston started us off by exploring how agency and intelligence are entangled and ill-defined. With agency must come responsibility, and as Joanne Paterson points out, there is consensus among academic publishers that generative AI technologies cannot be held responsible and thus should not be tasked with scholarly judgement. Andrew Richmond further problematizes the idea of machine agency in the pedagogical relationship by highlighting how AI tools, unlike humans, cannot hold themselves to standards. Christine Bell raises the importance of keeping the human in the loop and being deliberate about developing processes that enable human decision-making. Wendy De Gomez tackles the challenging discussion of the environmental impact of AI development, but offers multiple paths forward to mitigate this impact, suggesting that environmental devastation is also not inevitable. Mark Daley concludes the series by painting a dystopian possibility of the future of research, but argues that whatever future we find ourselves in will occur “because we chose it”, either through action or inaction.
Lately, there has been a lot of talk about Agentic Artificial Intelligence. What I think we might be losing sight of is Agentic Human Intelligence. How do we retain our own ability to make decisions, take actions, and remain autonomous? As we wrap up the second series of the Generative AI Challenge, I hope that each of you had an opportunity to reflect on your own agency and think about how the decisions you make today may impact what tomorrow will look like.
Although this isn’t a challenge post, I will leave you with three suggestions on how to remain agentic humans:
- Stay informed. Increase your own capacity for taking informed action (or considerate inaction) by building awareness of generative AI technologies, capabilities, and the complex sociopolitical conditions that shape them.
- Be true to your values. Much of our AI programming in the CTL has been shifting to a values-first lens. This means starting by identifying your own values and allowing them to inform your decision-making.
- Actively imagine. Don’t allow the inevitability narratives to turn you into a passive recipient of someone else’s vision of the future. Use your own Agentic Human Intelligence to take action and shape the future into somewhere you want to be.
This concludes our second series of the Generative AI Challenge, but I’m excited to share that we will return in January 2026 with a new round of challengers and topics. Keep an eye out for more details in mid-January.
Disclosure
I did not knowingly use generative artificial intelligence in drafting this post. I did rely on a human copy editor, but residual typos, grammatical errors, and linguistic oddities are all proudly my own. However, as pointed out by other challengers, it can be difficult to avoid generative AI tools, so I cannot say with certainty that they did not impact the content seen here.