Inclusive, non‑Western Ethical Frameworks for GenAI in Higher Education: Why look beyond Western ethics?


Whenever I discover a new idea or innovation, I tend to plunge into it. When I first heard about GenAI, I started to wonder whose knowledge it learns and repeats. Over the past two years, I have used GenAI for many things, such as creating images of Africa for conference presentations and making lecture videos. What have I found? It doesn’t reflect my own way of knowing or being. Do I blame it? Not at all. A machine can only do what it is taught. Still, alongside the excitement about GenAI, I keep coming back to a bigger question: Whose knowledge does it highlight, and whose gets left out? 

For Indigenous communities, this question is very real. GenAI learns from vast digital datasets, but the systems behind it rarely include Indigenous philosophies, governance, or rights (UNESCO, 2023a). As a result, GenAI risks repeating the same patterns of extraction and misrepresentation that have long characterized colonial encounters with Indigenous knowledge (UNESCO, 2023a). The problem? GenAI is rapidly making its way into university classrooms, and many teachers, like me, are beginning to seek a more inclusive ethical approach (Holmes & Miao, 2023). What is emerging from the GenAI research space, however, is not simply a critique but new possibilities for Indigenous knowledge. Indigenous scholars and communities are offering radically different ways of imagining intelligence, responsibility, and technology itself. Their work contests the assumption that GenAI must follow Western trajectories and instead opens space for GenAI systems grounded in relationality, sovereignty, and collective wellbeing (Khurana, 2025). 

In this post, I examine how non-Western ethical frameworks can be a useful resource for those interested in using GenAI in teaching and learning in more open and responsible ways. I focus on Indigenous scholars who are already reimagining GenAI with Indigenous ways of knowing and ethics. Bringing these ideas into the classroom can enrich the curriculum, empower students, and honour different ways of knowing.

Western AI ethics frameworks built around fairness, accountability, and transparency are valuable, but they do not cover everything. More precisely, they do not fully consider the value of other worldviews, and in doing so they privilege dominant epistemologies over others (Maldonado & Córdova-Pintado, 2025). Recent studies of GenAI ethics guidelines point out that such guidelines miss key principles from non-Western ethical frameworks, such as the Indigenous emphasis on relationality and mutual responsibility for knowledge (Quince et al., 2025). Although the research landscape on GenAI and AI ethics is still taking shape, several strong themes are emerging with a clear message: we (teachers and institutions) cannot achieve ethical GenAI by simply adjusting Western frameworks (Khurana, 2025). Western frameworks are often built on specific philosophical assumptions, such as individualism, human centricity, and secularism, which are not universally applicable. Human centricity, for instance, treats humans as the only moral agents, excluding other forms of existence in the pluriverse. For GenAI design, the implication is the exclusion of other value systems, such as the environmental guardianship that is central to many Indigenous philosophies (Lewis et al., 2025; Taylor et al., 2024). GenAI is not an independent entity; it reflects the biases, assumptions, and perceptions of those who design and deploy it. When Western ethics is treated as the default, it creates a form of ethical imperialism in which the values of a few are coded for many (Khurana, 2025). As such, many scholars argue that AI must be re-imagined, not simply regulated, in ways that reflect Indigenous philosophies of knowledge, responsibility, and relational flourishing (Khurana, 2025; Lewis et al., 2025; Pham & Joubert, 2025; UNESCO, 2023a). 
The argument here is not simply that Western GenAI ethics needs “improvement,” but that the core foundations of GenAI need rethinking altogether if they are to align with Indigenous ways of being and knowing (Khurana, 2025; Maldonado & Córdova-Pintado, 2025; Pham & Joubert, 2025). 

Many Indigenous worldviews teach that being and knowing are grounded in relationships and connections among people, communities, and the natural world. In these traditions, knowledge is not separate; it comes from relationships and brings responsibilities to those connections. People are seen as part of a living network that includes the land, plants, animals, ancestors, and future generations (Cruz, 2025). Indigenous perspectives understand relationships as extending far beyond interactions between people, encompassing responsibilities and mutual accountability to the land, water, plants, animals, cultural practices, ancestors, and future generations (Belarde-Lewis et al., 2024). From this viewpoint, knowledge arises through these connected relationships with both human and non-human worlds. This relational way of knowing contrasts with Western paradigms that often treat knowledge as objective information separate from the knower (Lindstrom, 2022). Instead, Indigenous perspectives emphasize that the validity and usefulness of knowledge come from the relationships in which it is embedded. At the heart of these relationships is respect, a core value in Indigenous knowledge systems. Knowledge is not viewed as power over others or over nature, but as a gift to be held with humility and used for the collective good (Flores et al., 2026). This means honouring the sources of knowledge, such as elders, community, the land, and spirits, and recognizing the mutual obligations that come with that respect. 

In an Indigenous ethics of AI, this means how GenAI is used matters as much as what it produces. Indigenous scholars, for instance, emphasize concepts like respect, reciprocity, and accountability to the community when dealing with technology (Frank-Siegel, 2022; Mutswiri et al., 2025). Their major concern is the potential of these systems to misrepresent or appropriate Indigenous knowledge without permission and, in doing so, to reshape old harms in new ways, that is, digital colonialism (Belarde-Lewis et al., 2024; Roberts & Montoya, 2023). Digital colonialism is a new form of colonialism in which, instead of traditional resource extraction and social control, dominant powers use digital technology for socioeconomic and political control (Kwet, 2019). It is within this context that UNESCO created its Recommendation on the Ethics of Artificial Intelligence. The Recommendation comprises 10 normative principles and 11 actionable policy domains that guide AI governance, including safeguards for human rights, mechanisms for transparency and accountability, and commitments to co-sovereignty and sustainability. Since its release in 2021, UNESCO has released two additional guiding documents specifically for education, the AI competency frameworks for students and teachers and the Guidance for generative AI in education and research, which give students, teachers, and institutions a roadmap for the ethical use and deployment of GenAI in education in ways that promote social justice and enhance human capacity and dignity. 

Figure 1: UNESCO's Ethical Guidelines and Competency Frameworks for AI

Beyond serving as technical guidelines, these documents represent a response to growing concerns that GenAI might become another tool for colonization. Like earlier technologies developed under Western influence, GenAI systems are mostly created and managed by organizations in the Global North. This centralization of power can repeat the old patterns mentioned above, where data and cultural knowledge from marginalized communities are taken without proper consent or a fair share of benefits. When GenAI models focus on dominant languages and ways of thinking, they can push aside local knowledge and sustain existing hierarchies. These are all forms of digital colonization. If we do not make a conscious effort to include multiple voices and to use ethical data practices, GenAI could make global inequalities worse rather than helping to solve them. This is particularly true given that data now functions as a global currency, flowing across borders, as recent geopolitical discourse shows. Yet these flows remain asymmetrical, with value extracted from Indigenous cultures and consolidated among dominant powers. The UNESCO guidelines challenge the extractive nature of GenAI by centering cultural diversity, knowledge pluralism, and shared governance as core competencies for building GenAI. GenAI governance is framed as a question of power, of ownership, and of whose knowledge is prioritized in the digital future (Holmes & Miao, 2023). As the first intergovernmental AI ethics agreement adopted by nearly every country in the world, UNESCO’s framework uses a human rights approach to ensure that GenAI and AI serve humanity rather than powerful institutions (UNESCO, 2025). The guidelines were created to: 

  • Globalize AI governance by including voices from the Global South and Indigenous communities
  • Counter Western dominance in technology guidelines and ethical decision-making
  • Protect human rights and cultural heritage in digital systems
  • Reduce algorithmic inequality and bias
  • Align AI development with sustainability and social justice goals

This is not exclusively a call to policymakers but to us as teachers to revisit how we use GenAI in teaching. My position, as an educator, is not to judge the right or wrong use of GenAI; rather, it is to advocate for a pluralist worldview that recognizes and values multiple knowledge systems. Relying only on Western ethical ideas when making decisions around AI adoption may fail to consider the communal values and the duties to the environment and future generations that are central to Indigenous ways of being. This means that teaching and learning practices may unintentionally favour or privilege one worldview. Looking to inclusive, non-Western ethical frameworks broadens our perspective on questions like the following: Who benefits from GenAI in classrooms? Whose values are embedded in GenAI tools? Is “responsible use” defined through a pluriversal lens? Embracing multiple worldviews helps ensure the use of GenAI is culturally aware and aligned with different student values. 

What UNESCO has done with this framework is representative of the position of many Indigenous scholars: AI can only be ethical in Indigenous contexts when Indigenous peoples lead its design and governance (UNESCO, 2023b). Leading in this sense means embedding Indigenous participation in the process of GenAI development and deployment. Doing so shifts the image of GenAI from a tool for optimizing performance to one that supports cultural continuance and relationality (UNESCO, 2023a). Ubuntu ethics, a Southern African philosophy, is often summarized as “I am because we are.” It sees people becoming persons through their relationships with others, in contrast to the Western mantra “I think, therefore I am” (Frank-Siegel, 2022; Mutswiri et al., 2025). In an Ubuntu approach, technology like GenAI should serve the community rather than just individual users. What might this look like in teaching? For one, we can encourage students to consider the communal impact of GenAI outputs. If you use GenAI to draft quiz questions or illustrations, discuss with your class who benefits or could be harmed by those outputs. 

Ubuntu ethics calls for communal responsibility and shared benefit. In practice, you might organize a group activity in which students train a small GenAI model, or simulate one using class-generated data, emphasizing that the outcome is a shared creation carrying shared responsibility. This can lead to rich conversations about collective ownership of knowledge and the relevance of context. Researchers applying Ubuntu to GenAI governance highlight themes such as communal data stewardship and relational accountability (Frank-Siegel, 2022; Mutswiri et al., 2025; Norren, 2020), ideas that can translate into classroom norms: norms in which students and teachers share responsibility and are accountable to each other for the use of GenAI, ensuring it is used respectfully and without harm. For example, if students use a GenAI tool for a project, have them work in teams to ensure the tool’s use is fair and transparent to all team members. Teachers might also integrate discussions that emphasize the importance of cultural diversity and shared governance in technology. Classroom policies, for instance, can include projects that analyze GenAI tools not just for their outputs but for the underlying ethical frameworks they adhere to. Students can be encouraged to explore how GenAI systems could be redesigned to reflect viewpoints that emphasize community impact and inclusive governance.

Teachers could ask students to study Indigenous ethical principles, such as reciprocity and respect, and use them to design GenAI tools that meet community needs. This project can be broken into steps: researching the principles with both academic and community sources, brainstorming practical uses for GenAI, and then building a prototype with feedback from classmates. Another idea is to have students create lessons that use GenAI to promote cultural understanding and respect. Teachers can help by providing templates, encouraging students to include Indigenous stories and examples, and setting up roleplay activities to practice these lessons. These examples show how to bring Indigenous wisdom into teaching, helping students appreciate different knowledge systems (Anderson, 2026). By creating classroom environments that value these UNESCO principles, teachers can help students connect global ethical considerations with their everyday learning experiences. 
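For classes comfortable with a little code, one way to run the "simulate a small model on class-generated data" activity is a toy word-level Markov chain. This is only a minimal sketch, not a real GenAI system, and the sentences and function names below are illustrative: each student contributes a sentence to the shared corpus, and the "model" can only remix what the community gave it, which makes the point about shared creation and shared ownership concrete.

```python
import random
from collections import defaultdict

def train_markov(sentences, order=1):
    """Count word transitions in class-contributed sentences."""
    model = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            model[key].append(words[i + order])
    return model

def generate(model, seed, max_words=12):
    """Extend a seed word-by-word, stopping when no continuation exists."""
    output = list(seed)
    key = tuple(seed)
    for _ in range(max_words):
        continuations = model.get(key)
        if not continuations:
            break
        output.append(random.choice(continuations))
        key = tuple(output[-len(seed):])
    return " ".join(output)

# "Training data" contributed by the whole class -- the output is a
# shared creation that can only echo what the community put in.
corpus = [
    "knowledge comes from relationships with the land",
    "knowledge carries responsibilities to the community",
    "the community shares responsibility for knowledge",
]

model = train_markov(corpus)
print(generate(model, ("knowledge",)))
```

The generated sentence visibly stitches together fragments of different students' contributions, which opens the discussion directly: whose words ended up in the output, who should be credited, and who decides how the shared corpus may be reused.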

We need to teach students to see GenAI not as a neutral tool but as something that must align with ethical obligations to real communities. As teachers, incorporating simulations or roleplaying activities prompts critical reflection on how GenAI can contribute to the marginalization or 'othering' of Indigenous perspectives. Through these discussions, students can examine questions of representation, data use, and cultural inclusion/exclusion, including whose knowledge is being shared, by whom, and under what conditions. Such an experiential approach deepens students’ appreciation of knowledge pluralism and, potentially, their understanding of Indigenous perspectives on respect for cultural knowledge. To further enhance this learning, teachers should actively involve Indigenous or local communities in the design of classroom activities, allowing students to interact directly with the voices most affected by GenAI. Actively involving Indigenous knowledge brokers moves beyond consultation to partnership, and according to UNESCO there are several ways to go about it. At the institutional level, every institution needs an Indigenous curricular designer who can guide teachers from the outset. Additionally, UNESCO advocates shifting the language from inclusion to co-sovereignty: inclusion often implies inviting the other into an already established system, whereas co-sovereignty means codesigning the system together from the foundation. Collaborative projects with community members, such as co-developing case studies or inviting local experts to share their knowledge, can promote authenticity and deepen students' cultural understanding. They can also teach students communal stewardship. A classroom example would be a class constitution on how the class will interact with GenAI, going beyond the standard GenAI-use blurb in the syllabus. For example: “We (students and teachers) agree not to post stories or knowledge shared by Indigenous guest lecturers into GenAI platforms without their explicit permission.” The idea here is to remember that the GenAI landscape is fast-changing, and any relationship with Indigenous partners must be sustained through living agreements. 

As we continue to navigate the fast-changing domain of GenAI in higher education, drawing on a wider set of ethical traditions can be both inspiring and practical. Inclusive, non-Western ethical frameworks offer teachers new questions to ask and new principles to uphold, from honouring the relationships and communities behind knowledge, to ensuring GenAI use contributes to the common good and the personal growth of our students. Blending these perspectives with Western ones positions us as teachers to model a more global ethical outlook for learners. The journey does not end here; in fact, it is just beginning. Embracing multiple ethical lenses is a forward-thinking strategy, one that prepares us (and our students) to engage with GenAI critically, compassionately, and responsibly. In the years to come, higher education will play an important role in shaping GenAI’s impact. With an open mind and a respectful, reflective stance, teachers can chart the way in ensuring that GenAI remains a support tool for enhancing learning while honouring the moral foundations that define true wisdom. 

Differences in ethical frameworks

| Framework | Core Philosophy | View on AI & Data |
| --- | --- | --- |
| Indigenous Relational Ethics (Maldonado & Córdova-Pintado, 2025) | "All My Relations": Humans exist in a web of kinship with other humans, animals, and the land. | Data is not an object to be owned; it carries a spirit and a responsibility to the ancestors and future generations. |
| Ubuntu (Frank-Siegel, 2022) | "I am because we are": Personhood is achieved through our relationships and service to the community. | AI should be judged by whether it strengthens social harmony or creates division/isolation. |
| Confucian Ethics (Muyskens et al., 2024) | "Role-based Harmony": Focuses on the virtues of Ren (humaneness) and the duties within specific relationships (teacher/student). | AI should support the cultivation of virtue and respect the hierarchical but reciprocal duties between humans. |
| Western Utilitarianism (Guo & Kühler, 2025) | "Maximum Utility": Decisions are made to produce the greatest happiness or benefit for the greatest number. | Data is a resource to be optimized; AI is successful if it increases efficiency or solves problems for the majority. |


The Challenge

Take a few moments to critically evaluate both the GenAI output and the process of using it through a non-Western ethical lens. The table “Differences in ethical frameworks” above will be useful for this activity. 

  1. Begin by considering the scenario below: 
    • During a lesson on environmental sustainability, a writing AI generates an example that discusses land use solely from an economic development perspective, omitting Indigenous relationships to land and guardianship practices. 
  2. Choose a distinct ethical framework from the table above (Indigenous relational ethics, Ubuntu, Confucian ethics, or Western utilitarianism) and reflect on the following: 
    • Critically evaluate the GenAI output and process using your chosen framework
    • Identify ethical shortcomings or potential biases in the example 
    • Propose how an educator could revise the prompt, the output, or a classroom practice to produce a more culturally responsible response 
  3. Use the reflection template (attached here) to structure your thinking.  
  4. Compare your reflection to other perspectives across ethical traditions and consider how different worldviews shape interpretations of fairness, responsibility, and sovereignty in GenAI use.  

This activity can also be used in class to get students thinking about cultural lenses in technology and to model how to critically evaluate GenAI through multiple ethical frameworks. It is a great way to practice inclusive thinking and can stimulate meaningful conversations about respecting diversity in knowledge and values. 


 

References

Disclosure

In the interest of transparency, I developed the arguments and insights in this post through my own research and reflection. I then used GenAI, including conversational drafting as my writing partner, to evaluate and refine my statements. As someone who has lived on Grammarly since my PhD, it would be remiss of me not to acknowledge this role. I will always use GenAI to check and evaluate my writing. This does not mean I attribute the grammar errors to the GenAI—these are all mine. Given that everything is now AI-enabled, I doubt scholars can write without some form of AI presence. That said, I take full responsibility for my work.

Copyright: CC Attribution 4.0

Your Challenger: Mary Ndu