Countering Technological Somnambulism through AI-Awareness
“We should know by now: anytime someone maintains that a technology is inevitable, they're asking us to give up any say we might have over that technology – asking us to give up our ability to question, to alter, to refuse; they're asking us to abandon our agency, the control of our future.”
(Watters, 2025)
Technologies are rarely merely tools. New technologies have the ability to reconfigure social life and may be one of the most important sources of social change (Winner, 2004). There are many examples of emerging technologies that didn’t merely offer new capabilities but fundamentally changed society, for better and for worse. Printing technologies, including but not limited to the Gutenberg printing press, allowed for the mass production of written texts. In doing so, they altered who could produce and who could access written knowledge, leading to significant changes in both media and education. The invention of the car increased individual mobility. But car culture has also had catastrophic effects on the environment and changed the logic of urban design, prioritizing space for cars and contributing to urban sprawl. The Internet pushed us into the Information Age, providing an unprecedented ability for global connection and nearly unlimited access to information; however, it has also increased digital inequity and fueled political and ideological divides through the creation of echo chambers and filter bubbles. I could go on, but the point is that technological progress has a significant impact on human social life, from changing our perceptions of knowledge, to impacting the physical design of social spaces, to changing how we interact with each other. Technology and society are mutually constituting; technologies are shaped by social forces, and society is shaped by technological advancement.
Though a relatively recent development, generative AI is already having a significant impact on social life. As James Shelley asserted in our last series of the Generative AI Challenge, the release of ChatGPT “permanently changed something fundamental about our relationship with information” (Shelley, 2025). This is hardly the only impact, as other articles in the series highlighted: generative AI is changing assessment, the nature of creativity and intellectual property, and our ability to trust information. In the upcoming series, Challengers will reflect on the impact generative AI is having on how we understand intelligence, the research process, teaching and learning, and ideas of machine and human agency. The real and possible impact of generative AI is not insignificant. It is likely to be as impactful as the printing press, cars, or the Internet, if not more so. The nature of this impact will similarly be complex, with both negative and positive outcomes.
Watkins argues that AI is unavoidable, not inevitable (Watkins, 2025). Similarly, Furze cautions us against equating ubiquity with inevitability (Furze, 2025). However, I believe it is inevitable that generative AI will have a significant impact on humans and social life; it already has. The nature of this impact, however, is not inevitable. In the media, AI possibilities are presented as foregone conclusions, fueling something of a self-fulfilling prophecy akin to “in the future, everyone will be AI users, so we should teach everyone to use AI now”. As AI use increases, humans will come to expect AI-infused solutions, resulting in increases in both supply and demand (Chen, 2023). This creates an urgency to embed AI skills into education for fear of “being left behind” (Tamez-Robledo, 2025). This sense of inevitability and urgency can fuel technological somnambulism (sleepwalking), the uncritical adoption of emerging technologies (Winner, 2004), because it creates a sense that we don’t have time to pause and think, question and discern. This urgency is also fueled by the economic ambitions of generative AI companies that seek to make money off the mass adoption and normalization of these tools (Furze, 2025). If we resign ourselves to this technological determinism, we give up our own agency to shape what social and educational futures might look like.
I’m not arguing in favour of or against the adoption of generative AI broadly. I think such a binary is far too simple. I believe adoption should be situational, values-aligned, and considerate of both affordances and harms. What I am arguing is that we as individual humans and as institutions do still have the agency to decide what adoption looks like and what the impact of generative AI might be. The decisions that we make today will shape what tomorrow looks like, and those decisions must be rooted in critical awareness of what generative AI is, how it functions, and its potential to reconfigure social life. This series offers one opportunity to increase our own AI-Awareness and avoid sleepwalking through critical times of significant social change. I hope you’ll join us!
The Challenge
This week, I offer not a challenge so much as an invitation. In my introduction to the Summer 2025 Generative AI Challenge, I introduced the Domains of AI-Awareness. This has been expanded into a full-length OER: Domains of AI-Awareness for Education. Take a few minutes to explore the domains, complete some of the activities, and consider where you might best focus your attention to help you maintain your own agency and ability for informed decision-making.
References
- Chen, J. (2023, November 9). Beyond doomsday: Why AI promises a brighter future. Forbes. Retrieved from https://www.forbes.com/sites/joannechen/2023/11/09/beyond-doomsday-why-ai-promises-a-brighter-future/
- Furze, L. (2025, April 28). The myth of inevitable AI. Retrieved from https://leonfurze.com/2025/04/28/the-myth-of-inevitable-ai/
- Milberg, T. (2025, May 22). Why AI literacy is now a core competency in education. World Economic Forum. Retrieved from https://www.weforum.org/stories/2025/05/why-ai-literacy-is-now-a-core-competency-in-education/
- Shelley, J. (2025, June 23). AI and the Information Ecosystem. University Centre for Teaching and Learning. Retrieved from https://teaching.uwo.ca/genai/posts/2025/article_seven.html
- Tamez-Robledo, N. (2025, March 27). Teachers believe that AI is here to stay in education. How it should be taught is debatable. EdSurge. Retrieved from https://www.edsurge.com/news/2025-03-27-teachers-believe-that-ai-is-here-to-stay-in-education-how-it-should-be-taught-is-debatable
- Watkins, M. (2025, January 17). AI is unavoidable, not inevitable. Substack. Retrieved from https://marcwatkins.substack.com/p/ai-is-unavoidable-not-inevitable
- Watters, A. (2025, March 31). Automating distrust. Second Breakfast. Retrieved from https://2ndbreakfast.audreywatters.com/automating-distrust/
- Winner, L. (2004). Technologies as forms of life. In D. M. Kaplan (Ed.), Readings in the philosophy of technology (pp. 103–113). Rowman & Littlefield.
Disclosure
This post was written by a human without Generative AI assistance. Typos, grammatical errors, and linguistic oddities are all my own. I did, however, use ChatGPT 5.0 to format my references and used search engines for finding resources, many of which have embedded AI technologies.