Critical Hope and Collective Agency in the AI Era

Subscribe to the weekly posts via email here

In her recent posts, Dani Dilkes set the stage for this exploration of critical hope and collective agency. Whose Values? highlights the impact of values-friction or values misalignment on how individuals may choose to engage (or not) with AI. On Being Agentic Humans calls for each of us to stay informed about generative AI (genAI), to be true to our own values, to use our human intelligence to actively imagine a better future, and to take action to shape it. To do so requires a sense of agency.

It can be challenging to envision a better future and to feel agency with the uncertainty, fragmentation and fatigue many are feeling as a result of multiple recent and current disruptions in our world (societal, technological, economic, geopolitical, environmental), in higher education, and in our lives. Artificial intelligence is a significant socio-technological disruption. In particular, the rapid adoption of genAI by the population in general and by students everywhere has disrupted teaching and learning in higher education (Attewell, 2025; Bearman et al., 2025; Freeman, 2025; Thomson et al., 2024).

Individually, I assume that most people on campus are grappling with, if not already engaging with, generative AI. They are considering whether, where, and how to use it in their work, teaching, research, and learning. At the same time, many are wrestling with what responsible use means in light of ethical and educational concerns, including bias and exclusion; privacy, safety, and security; misinformation, disinformation and hallucinations; transparency and explainability; academic integrity; impacts on human relationships, cognition, and work; and environmental and climate impacts. While many of these issues predate genAI, its scale and transformative potential have significantly raised the stakes. In addition, the control of the technology by the commercial interests of Big Tech, the relative lack of regulation, and the current vulnerability of the predominant model of higher education (Dilkes & Daley, 2026) can make it difficult to resist disengagement.

According to Kari Grain (2022), the practice of critical hope enables us to believe it is possible to take meaningful action and enact change while contending with complex disruptions. It is a practice required to feel agency in the face of uncertainty and structural constraints. We may not be able to shape the direction and impacts of artificial intelligence on global society, but we can do something within our spheres of influence. Critical hope is not blind optimism in the face of wicked systemic problems, nor does it deny the depth of the challenges posed by genAI. Rather, it is a commitment to act in alignment with our values while remaining attentive to power, inequity, and unintended consequences. As West (2008) suggests, it is a way of staying engaged “in the midst of the muck,” when outcomes are uncertain. Optimism is not required, but an affirmative courage and vision are (Grain, 2022; Wheatley, 2023). Hope and action engender agency.

In the context of genAI in higher education, what does critical hope look like? It can be spotted across our campus and others. Critical hope looks like an instructor choosing to redesign an assessment not because of the risks of AI misuse, but because pedagogical experimentation creates opportunities for learning alongside technology. It looks like students participating in conversations about how assessments are evolving in response to genAI. When students are invited to reflect on what supports their learning, they can help shape assessment practices that feel fair, meaningful, inclusive, and educative as opposed to punitive (Chase, 2020; Meyer et al., 2025). It also looks like academic leaders making space for difficult conversations about matters such as academic integrity and disclosure, workforce and labour implications, and AI ethics and responsible use while supporting faculty and students in meeting immediate academic needs.  

Critical hope is shored up by collective sense-making. Bringing people together to discuss the implications of and approaches to AI in higher education, engaging a diversity of perspectives and recognizing a variety of positions, experiences, and ways of knowing, helps individuals and groups construct meaning. Applying a critical lens to genAI together can help us identify power dynamics, recognize inequities, sit in discomfort, and hold multiple truths at once. Collective sense-making enables the surfacing of alternate possibilities, eliminates false dichotomies, and embraces pluralism (Grain, 2022). It creates a fuller sense of the situation we find ourselves in and helps assign meaning. From this point, it is easier for us to identify meaningful contributions and renew a sense of agency, individually and collectively.

Collective agency emerges when groups move beyond individual deliberation or isolated experimentation toward shared understanding and collective action. For example, at the university this can take the form of cross-role working groups that bring faculty, students, librarians, educational developers, student services, and academic leaders together to surface concerns, examine assumptions, and identify points of alignment in their approaches to AI. Across the higher education sector, this takes the form of councils, consortia, associations, and societies doing the same in their groups and across their networks. Optimally, this extends to partnerships and coalitions with groups from civil society, the community, industry, K-12 education, government, and non-government agencies. Enacting collective agency can take many forms, including advocacy; policy development; the development of principles, guidelines, and standards; technological and scientific advancement; educational awareness and community engagement; capacity building and experimentation; and new program and service developments. The Ontario Council of University Libraries (OCUL) Artificial Intelligence and Machine Learning initiative is an example I am involved in, and there are many, many others.

Through collective action on our campus and beyond and through broader public engagement, the disciplinary and professional expertise of our campus (your knowledge and expertise) can provide leadership for, model, and make a meaningful contribution to the critical development and use of AI in higher education and for the public interest.

The Challenge

It has probably occurred to you that this Generative AI Challenge series demonstrates critical hope and is an exercise in sense-making. For this week’s challenge I am proposing a reflective exercise in the hope of inspiring acts of collective agency. I see this as a critical step toward a shared understanding of, and approach to, the critical use of genAI on our campus and in higher education more broadly.

Consider the following questions:

  • Do you see the need to build a sense of collective agency around genAI in the campus, disciplinary, or interdisciplinary groups and networks you belong to? If so, why do you think it is missing? What stands in the way of the group feeling it can act with agency?
  • Do you belong to groups that are exercising their collective agency to determine whether and how to engage with genAI as a community? What does that look like? What enables this?


References

Disclosure

Claude was used to create the summary, with a few tweaks.

ChatGPT was used in conjunction with a review of the literature to help identify relevant examples of critical hope for this community.

Your Challenger: Catherine Steeves