Whose values? How generative AI is, and has always been, a values conversation
Values play a significant role in shaping the development of AI systems and social practices, including how educators and educational institutions are responding to generative AI disruption. The question is: whose values are having the greatest impact on these emerging technologies and practices, and where are these values located?
Personal values directly shape how individuals perceive and respond to generative AI. They act as guiding principles, informing our decision-making, our priorities, our attitudes, and even our identities. In educational environments, values may influence how educators design and deliver learning experiences and how learners perceive and engage with them. Different values can lead to significantly different responses to generative AI: someone who values critical thinking may engage with AI technologies in a vastly different way than someone who values convenience; someone who prioritizes certainty may opt not to engage with them at all.
Consider, for example, someone who values transparency and accountability. They may be reluctant to use popular generative AI technologies because the proprietary nature of these tools means that many of the system-level instructions remain invisible. Users do not know what rules are shaping generated outputs or how those outputs may reflect biases or particular ethical stances. Recently, while running a workshop, I prompted Copilot with: “Explain whether an avocado is a fruit or a vegetable. You are a slightly curmudgeonly, old-fashioned professor.”
Copilot started to process the request but then responded with: “Hmm...it looks like I can't chat about this. Let's try a different topic.”

Somehow, my request tripped Copilot’s safety system, a system that is intentionally obscured from the user. When I asked Copilot which safety rule had triggered the response, I received a vague reply about impersonating real, identifiable people or leading into sensitive or adult themes.

This lack of transparency may cause concern for some (I was intrigued!), raising the question of what other hidden rules are shaping outputs. Users who place a high value on transparency may be drawn to AI technologies that better align with this value, such as Claude, whose system prompts are published, or may avoid generative AI technology entirely, since the lineage of sources that informed an output cannot be easily traced. Someone who values efficiency and productivity over transparency may be less concerned with the inner workings of a tool or the processes that led to its outputs; instead, they may choose whichever technologies best streamline their workflows.
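For readers curious what “system-level instructions” actually look like, here is a minimal sketch using the OpenAI Python SDK. The model name and system text are my own illustrative stand-ins, not Copilot’s actual configuration: when developers call a chat model directly, they supply a “system” message alongside the user’s prompt, and it is this normally invisible layer that shapes (and sometimes blocks) the output.

```python
# A minimal, illustrative sketch of how system-level instructions frame a
# chat request. Requires the `openai` package and an API key; the model
# name and system text below are hypothetical stand-ins, not the actual
# (and much larger) hidden instructions used by products like Copilot.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # The "system" message is set by the provider or developer. End
        # users of consumer tools never see it, yet it shapes every output.
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. Refuse requests to "
                "impersonate real, identifiable people."
            ),
        },
        # The "user" message is the only part of the exchange users see.
        {
            "role": "user",
            "content": (
                "Explain whether an avocado is a fruit or a vegetable. "
                "You are a slightly curmudgeonly, old-fashioned professor."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Working at the API level at least makes this layer visible and configurable. In consumer products like Copilot, it is precisely this layer, plus the safety systems stacked on top of it, that remains out of view.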
This is not to suggest a direct correlation between a specific value and a specific AI-adoption pattern. Individual values are complex and dynamic, performed in configurations where multiple values, often aligned but sometimes conflicting, operate simultaneously. Naming and acknowledging our own values is an important first step toward recognizing how we are feeling about, and responding to, generative AI.
Collective or cultural values also play a significant role in emerging AI systems and practices. Many ethical frameworks for AI development and use appeal to universal human values. The Organisation for Economic Co-operation and Development (OECD) argues for “democratic and human-centred values throughout the AI system lifecycle”. UNESCO’s recommendations for ethical AI are built on four core values “that work for the good of humanity, individuals, societies and the environment”. However, attempts to align AI systems with universal human values have proven challenging, as values vary across cultures and communities.
This raises an important question: “Who is the human in human-centred AI?” Whose values are shaping emerging practices, and whose values are encoded in technological designs? In my example above, Copilot’s safety systems reflect the corporate values of Microsoft, which defined the system rules and made the policy decision to keep them hidden from users. Although those rules are hidden, there are numerous other examples of how corporate values drive design decisions around how these tools function: what data sources are used and how they are obtained, what goals and behaviours to prioritize, and what guardrails or safety measures to put in place.
In 2025, an intensified corporate focus on profits and on rapid development in the race to dominate the generative AI market meant that many big AI companies shifted their attention from research to commercialization. Companies took shortcuts on testing and safety, pushing out releases before full safety evaluations had been completed. Linked to these safety concerns is the deliberate use of anthropomorphic designs to keep users engaged in “relationships” with AI tools. A recent lawsuit against OpenAI argues that the company’s anthropomorphic design choices prioritized market dominance and engagement metrics over mental health and human safety, resulting in mental health crises and, in some cases, death by suicide. AI companies such as Google and OpenAI have also pushed for changes to American copyright law, arguing that “limiting AI training on copyrighted materials could weaken America’s technological edge and slow down innovation”, surfacing an ongoing tension between the rights of artists and other content creators and the drive for rapid innovation and growth.
Tensions between corporate or technological values and individual values can be another site of values friction or values misalignment. These tensions also shape individual responses to generative AI technologies, contributing to a growing AI refusal movement. My work with the Centre for Teaching and Learning, including this series, has always left space for the right to refuse, but I continue to advocate for AI-Awareness whether you choose refusal or adoption.
I also have increasing concerns about the conflation of corporate models and sensibilities with all generative AI technologies and practices. AI technologies are being used to support projects with diverse values and goals, such as fueling grassroots initiatives for local development or supporting Indigenous language revitalization, provided that data sovereignty is carefully preserved. This takes us back to the idea of agency and our ability to decide for ourselves. Users do not need to inherit or perform the values of technologies or tech companies. Enacting our values within complex sociotechnical systems requires awareness, first, of our own key values and, second, of how different practices, technologies, and tools align or misalign with those values.
In the upcoming series of the Generative AI Challenge, participants will have many opportunities to reflect deeply on agency and values-aligned practices. Multiple challengers will surface how certain uses of generative AI support or conflict with different teaching and learning values and goals. Others will explore various facets of both human and non-human agency. One challenger will offer an opportunity to think about the importance of acknowledging diverse cultural values and drawing on non-Western ethics when developing ethical frameworks for generative AI. Another will provide specific techniques for reconfiguring generative AI tools to address values misalignment, by decreasing the effects of anthropomorphic designs and customizing how different tools generate results.
Overall, I hope the challenges presented over the next 10 weeks will expand your awareness of generative AI's possibilities and concerns and empower you to make decisions that align with your values.
The Challenge
For those who have been following along with the previous series of Western’s Generative AI Challenge, it should come as no surprise that my challenge is this: Pay attention. Stay mindful. Increase your awareness of generative AI technologies to inform your own practices.
- Start by reflecting on your own values. How do they show up in your responses to generative AI?
- Then, consider how your values are entangled in larger systems of technologies, policies, cultures, politics, and institutions. How are you able to practice your values? How are you constrained by technological design, policy, or other factors? Where do you experience values friction?
- Finally, imagine things differently. What would a generative AI technology or practice that fully aligned with your values look like?
For readers based at Western who wish to explore values and AI further: we have been explicitly weaving values into our programming, including an upcoming workshop on Aligning AI-use with your Pedagogical Values.
References
- Aliu, T. (2025, September 2). How AI is powering grassroots solutions for underserved communities. World Economic Forum. https://www.weforum.org/stories/2025/09/how-ai-is-powering-grassroots-solutions-for-community-challenges/
- Booth, R. (2025, December 1). ‘It’s going much too fast’: the inside story of the race to create the ultimate AI. The Guardian. https://www.theguardian.com/technology/ng-interactive/2025/dec/01/its-going-much-too-fast-the-inside-story-of-the-race-to-create-the-ultimate-ai
- Field, H., Vanian, J., & Elias, J. (2025, May 14). AI research takes a backseat to profits as Silicon Valley prioritizes products over safety, experts say. CNBC. https://www.cnbc.com/2025/05/14/meta-google-openai-artificial-intelligence-safety.html
- Hamman, J. (2025, September 29). Who is the human in human-centred AI? World Economic Forum. https://www.weforum.org/stories/2025/09/human-in-human-centred-ai/
- Hussain, K. (2025). Humanizing generative AI brands: How brand anthropomorphism converts customers into brand heroes. Computers in Human Behavior Reports, 19, Article 100707. https://doi.org/10.1016/j.chbr.2025.100707
- Jones, H. (2025, April 3). Generative AI is a crisis for copyright law. Forbes. https://www.forbes.com/sites/hessiejones/2025/04/03/generative-ai-is-a-crisis-for-copyright-law/
- OECD. (2026). Respect for the rule of law, human rights and democratic values, including fairness and privacy (Principle 1.2). OECD.AI Policy Observatory. https://oecd.ai/en/dashboards/ai-principles/P6
- Sagiv, L., Roccas, S., Cieciuch, J., & Schwartz, S. H. (2017). Personal values in human life. Nature Human Behaviour, 1(9), 630–639. https://doi.org/10.1038/s41562-017-0185-3
- Salvaggio, E. (2025, June 17). The black box myth: What the industry pretends not to know about AI. Tech Policy Press. https://www.techpolicy.press/the-black-box-myth-what-the-industry-pretends-not-to-know-about-ai/
- Saner, E. (2025, June 3). ‘Nobody wants a robot to read them a story!’ The creatives and academics rejecting AI – at work and at home. The Guardian. https://www.theguardian.com/technology/2025/jun/03/creatives-academics-rejecting-ai-at-home-work
- System prompts. (n.d.). Claude API Docs. https://platform.claude.com/docs/en/release-notes/system-prompts
- Tech Justice Law Project and Social Media Victims Law Center lawsuits accuse ChatGPT of emotional manipulation, supercharging AI delusions, and acting as a “suicide coach”. (2025, November 6). Tech Justice Law Project. https://techjusticelaw.org/2025/11/06/social-media-victims-law-center-and-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/
- UNESCO. (n.d.). Ethics of artificial intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
- World Economic Forum. (2024). AI value alignment: Guiding artificial intelligence towards shared human goals. https://www.weforum.org/publications/ai-value-alignment-guiding-artificial-intelligence-towards-shared-human-goals/
Disclosure
I value authenticity, multiplicity, and agency. I did not use generative AI to generate ideas for, write, or edit this post. I lead workshops and programming on generative AI in education, so I do use these tools in my day-to-day work, primarily to demonstrate capability. These experiences provide rich examples (like the avocado and curmudgeonly professor prompt) for investigating how these tools function. In previous posts, I used ChatGPT to help format citations but realized that the number of erroneous DOIs made this practice more work-creating than time-saving.
This text has been shaped by human influence other than my own. Thank you to Alec Mullender for copyediting. Thank you to Dr. Ken Meadows, who co-developed the Aligning AI-use with your Pedagogical Values workshop with me. Our conversations have helped shape how I am thinking about values as entangled in sociotechnological designs.
Dani is an Educational Developer with the Centre for Teaching and Learning (CTL) at Western University. She specializes in digital pedagogy, accessibility, and approaches to inclusive teaching and learning. She is also currently a PhD candidate at the University of Toronto researching Futures of Higher Education and how to shape them to be more inclusive in times of great disruption, including the emergence of new technologies like Generative AI. Dani leads the Generative AI programming for the CTL and has approached this holistically by balancing technological, pedagogical, ethical, and affective considerations of Generative AI use.