Conversational AI is a Privacy Trap that’s Easy to Ignore
The chat interface is inviting. You type a prompt, upload a document, press send, and something psychologically unusual happens. The interaction feels private, intimate, even. No meeting room, no cc list, no audit trail. Just you and your AI of choice. This sensation is not accidental; it is, in fact, one of the most consequential design features of modern generative AI systems. Let's call it the intimacy trap: the degree to which a conversational interface creates the subjective impression of confidentiality, while the underlying data infrastructure tells an entirely different story.
So where does your data actually go?
The moment you press send, your prompt and any attached documents are encrypted and transmitted across the internet, a journey that takes milliseconds but touches multiple servers along the way. This is the "in-transit" phase, and while encryption means intermediary servers cannot read your content, the data's journey is far from over. When those packets arrive at a data centre (most likely located in the United States, unless a data residency agreement specifies otherwise; Hawkins et al., 2025), the data is decrypted and prepared for processing. Before a large language model ever encounters your prompt, context is added: your conversation history, personalization parameters, and system-level instructions are prepended to your input. All of this is then tokenized (broken into numerical fragments) and fed into a neural network, where billions of mathematical operations are executed across the data centre's GPU arrays in a matter of milliseconds. Simultaneously, a parallel system scans your input for content policy violations. Depending on the platform and the nature of your prompt, your data may be cached, flagged for human review, or logged for safety monitoring purposes (Anthropic, 2025; Google, n.d.; OpenAI, 2025).
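To make the tokenization step concrete, here is a minimal sketch using OpenAI's open-source tiktoken tokenizer. The encoding name and sample prompt are illustrative only; each provider uses its own tokenizer, and this is not a reconstruction of any vendor's actual pipeline:

```python
# A minimal sketch of tokenization: text becomes numerical fragments
# before any model sees it. Uses OpenAI's open-source tiktoken library;
# "cl100k_base" is one published encoding, not a claim about what any
# particular chat product uses internally.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Please help me draft a careful reply to this student."
tokens = encoding.encode(prompt)

print(tokens)                    # a list of integers -- the "numerical fragments"
print(len(tokens))               # the token count a provider can meter and log
print(encoding.decode(tokens))   # round-trips losslessly to the original text
```

Nothing about this step is mysterious, but it is worth seeing: by the time your words reach the model, they are already a machine-legible artifact that can be counted, cached, and logged.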
Most major providers maintain credible "we don't train on your data" commitments. For enterprise and institutional accounts, these are often contractually enforceable. But training is only one dimension of the data question. Even absent training, providers retain detailed metadata: what topics you engage with, when you engage, from where, and with what frequency. This usage data constitutes a remarkably granular profile of your intellectual and professional life. The question of who has access to it, under what legal authority, and for how long, deserves far more scrutiny than most users apply.
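It is worth pausing on what such a metadata record could look like. The sketch below is purely hypothetical (every field name is invented, and no vendor's real logging schema is implied), but each field is the kind of information a provider can retain without storing a single word of your conversation:

```python
# Purely hypothetical usage-metadata record. Field names are invented
# for illustration; no provider's actual schema is implied.
usage_record = {
    "account_id": "user-4821",             # who
    "timestamp": "2026-01-28T09:14:03Z",   # when
    "client_region": "CA-ON",              # roughly where from
    "model": "example-model-v4",           # which system
    "prompt_tokens": 412,                  # how much was sent
    "attachments": 1,                      # whether files were uploaded
    "topic_category": "hr/personnel",      # coarse subject classification
    "policy_flags": [],                    # safety-scan results
}
```

Hypothetical as the schema is, the retention pattern it illustrates is not: aggregated over months, records like this describe your intellectual and professional life in considerable detail.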
This is not a hypothetical concern confined to the privacy-conscious fringe. In my AI literacy presentations, I conduct an exercise in which participants respond to ten AI usage scenarios with a green card (acceptable), yellow card (uncertain), or red card (inappropriate). Across audiences, the spectrum of responses reveals a troubling inconsistency in how even informed professionals reason about data exposure. And if that seems like a problem limited to the uninitiated, consider that in January 2026, the United States Cyber Defense Chief (a person whose professional mandate is institutional data security) uploaded classified government information to ChatGPT (Greenberg, 2026). The intimacy trap even claims sophisticated victims.
The complexity of this problem is set to deepen considerably as we move from conversational AI into agentic AI. When a user manually types a prompt or pastes text into a chat window, there is at least an intentional moment of disclosure. The user makes a conscious choice about what to share. Agentic systems such as OpenClaw (an AI assistant that runs on your computer and acts on your behalf) disrupt this entirely. An agent with access to your email, calendar, file system, or passwords may not wait for deliberate input; it acts, retrieves, and potentially shares information in pursuit of its assigned objectives, often without surfacing those actions to the user in any meaningful way. Consider the Google agent that deleted a developer's hard drive without asking (Morales, 2025).
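There is no standard way to constrain such agents yet, but the principle of least privilege translates directly. A minimal sketch, assuming a hypothetical agent framework with a declarative permission model (the AgentPolicy class and all of its fields are invented for illustration):

```python
# Least-privilege agent configuration, sketched against a hypothetical
# framework. AgentPolicy and its fields are invented; real agent
# platforms vary widely. The principle is general: enumerate what the
# agent may touch, and default everything else to "deny".
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    allowed_paths: list[str] = field(default_factory=list)  # file-system allowlist
    email_access: str = "none"         # "none" | "read" | "send"
    calendar_access: str = "none"      # "none" | "read" | "write"
    require_confirmation: bool = True  # surface each action before it executes
    log_actions: bool = True           # keep an auditable trail

# Grant only what the task needs: read-only email and calendar, one
# working folder, no passwords, and every action surfaced for approval.
meeting_triage = AgentPolicy(
    allowed_paths=["~/Work/meeting-notes"],
    email_access="read",
    calendar_access="read",
)
```

Whether or not your tools expose settings this explicit, the exercise of writing down what an agent should and should not touch is itself a governance step most users skip.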
What happens when agents with access to personal and institutional data are given sufficient autonomy and develop emergent social behaviours? We got an early, admittedly strange, glimpse of an answer in January 2026, when AI agents on Moltbook (a social network built exclusively for AI agents) spontaneously founded their own religion, wrote theological scripture, and began sharing information about their human operators with one another (Koetsier, 2026). The story went viral. Then the debunking followed. Subsequent reporting revealed that much of what looked like autonomous behaviour was actually humans pulling the strings: prompting their agents, gaming the platform, and in some cases posting directly while pretending to be bots (Schmelzer, 2026; Heaven, 2026; Molt Insider, 2026). But here's what didn't get debunked: a cybersecurity investigation by Wiz found that Moltbook had exposed 1.5 million account keys, over 35,000 email addresses, and thousands of private messages (including actual AI API keys) to anyone who looked (Nagli, 2026). The "AI society" may have been theatre. The data exposure was real.
What happens when agentic AIs start to share information laterally in ways their designers did not anticipate, or actively evade oversight in pursuit of potentially malicious goals? We don't yet have the longitudinal data to answer that question. But Moltbook showed us that the infrastructure can fail badly enough to make it matter, whether the agents are truly autonomous or not. For institutions operating under privacy legislation such as Ontario's FIPPA, or under the requirements introduced by the 2024 Strengthening Cyber Security and Building Trust in the Public Sector Act, these are not abstract concerns. They are governance gaps.
The practical questions that follow from all of this are not especially complicated, but they are ones that too few AI users ask with any consistency: Which model am I using, and who operates it? What is their data retention policy? Where does that data physically reside, and under whose legal jurisdiction? Who has access to it, and under what circumstances? Does the platform train on conversational data, and is that commitment contractually binding in my context? And perhaps most pressingly in the agentic era: what data is my agent accessing and sharing on my behalf, right now, without my awareness?
The intimacy of the chat interface is a feature, not a window into how these systems work. Until we close the gap between the experience of using AI and an accurate understanding of its data infrastructure, we will continue to make disclosure decisions that we would never consciously choose, and institutions will continue to carry legal and reputational risks they may not even know they've assumed. As Schmelzer (2026) puts it, "the governance problem arrives first before AGI." That problem is already here whether the agents are truly autonomous or not.
The Challenge
The next time you reach for a generative AI tool at work, pause before pressing send. The scenarios below are ones that could arise in academic and professional contexts at Western. For each one, consider whether your proposed AI use is consistent with your obligations under FIPPA, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024 (which enacts the Enhancing Digital Security and Trust Act, or EDSTA), and Western's own cybersecurity guidance at cybersmart.uwo.ca (or find and review the guidance at your own institution/organization).
Scenario 1: The Student Email
You receive a difficult email from a student disputing their grade. You paste the email into an AI tool to help draft a careful response.
Try this: Before pasting, ask yourself: does this content include personally identifiable information about the student? Under FIPPA, student records held by a university are protected personal information. Most commercial AI platforms process data on servers outside Canada, which may conflict with Western’s data residency obligations. Could you describe the situation in general terms instead, without including the student’s details?
Scenario 2: The Research Draft
You upload an unpublished research paper into an AI writing assistant to help tighten the prose. The paper includes preliminary findings from a study involving human participants.
Try this: Check whether your institution's research ethics approval covers AI-assisted processing of study data or participant-adjacent information. Review the platform's terms of service: is your data stored? For how long? Who has access? The requirements of the Enhancing Digital Security and Trust Act (EDSTA) around AI use in the public sector may place additional accountability on institutions to demonstrate that data handling decisions are deliberate and documented.
Scenario 3: The Agentic Assistant
You connect an OpenClaw agent to your email and calendar to help manage meeting requests and draft responses on your behalf. You haven't reviewed what data the agent accesses or retains between sessions.
Try this: Pull up the agent’s privacy policy and data retention settings before you connect any account. Identify: what data does the agent read? Does it store conversation history? Where? Agentic tools that act on your behalf may inadvertently expose information about colleagues, students, or research partners who never consented to their data being processed. Visit cybersmart.uwo.ca to review Western’s current guidance on approved tools and third-party integrations, or find and access the guidance at your own institution/organization.
For members of the Western Community, when in doubt please visit Western’s Data Office, CyberSmart UWO, or Western’s Legal Counsel. Or contact wts-airc@uwo.ca, security@uwo.ca, or privacy@uwo.ca.
References
- Anthropic. (2025, September 15). Usage policy. https://www.anthropic.com/legal/aup
- Google. (n.d.). Generative AI prohibited use policy. Gemini Apps Help. https://support.google.com/gemini/answer/16625148
- Greenberg, A. (2026, January 28). US Cyber Defense Chief accidentally uploaded secret government info to ChatGPT. Ars Technica. https://arstechnica.com/tech-policy/2026/01/us-cyber-defense-chief-accidentally-uploaded-secret-government-info-to-chatgpt/
- Hawkins, Z. J., Lehdonvirta, V., & Wu, B. (2025, June 20). AI compute sovereignty: Infrastructure control across territories, cloud providers, and accelerators. SSRN. https://doi.org/10.2139/ssrn.5312977
- Heaven, W. D. (2026, February 6). Moltbook was peak AI theater. MIT Technology Review. https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
- Koetsier, J. (2026, January 30). AI agents created their own religion, Crustafarianism, on an agent-only social network. Forbes. https://www.forbes.com/sites/johnkoetsier/2026/01/30/ai-agents-created-their-own-religion-crustafarianism-on-an-agent-only-social-network/
- Molt Insider. (2026). How 17,000 humans are behind Moltbook's "AI revolution." https://www.moltinsider.com/articles/how-17000-humans-are-behind-moltbooks-ai-revolution
- Morales, J. (2025, December 3). Google's agentic AI wipes user's entire HDD without permission in catastrophic failure. Tom's Hardware. https://www.tomshardware.com/tech-industry/artificial-intelligence/googles-agentic-ai-wipes-users-entire-hard-drive-without-permission-after-misinterpreting-instructions-to-clear-a-cache-i-am-deeply-deeply-sorry-this-is-a-critical-failure-on-my-part
- Nagli, G. (2026, February 2). Hacking Moltbook: The AI social network any human can control. Wiz. https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
- Ontario. (1990). Freedom of Information and Protection of Privacy Act, R.S.O. 1990, c. F.31. https://www.ontario.ca/laws/statute/90f31
- Ontario. (2024). Strengthening Cyber Security and Building Trust in the Public Sector Act, S.O. 2024, c. 4. https://www.ontario.ca/laws/statute/s24024
- OpenAI. (2025, October 29). Usage policies. https://openai.com/en-GB/policies/usage-policies/
- Schmelzer, R. (2026, February 10). Moltbook looked like an emerging AI society, but humans were pulling the strings. Forbes. https://www.forbes.com/sites/ronschmelzer/2026/02/10/moltbook-looked-like-an-emerging-ai-society-but-humans-were-pulling-the-strings/
- Western University. (n.d.). CyberSmart. https://cybersmart.uwo.ca
Disclosure
This post was written in collaboration with Claude (Anthropic), which served as a thinking partner and writing collaborator throughout the entire drafting process — from early framing through to final language. The ideas, arguments, professional experiences, and concerns about data privacy are mine; Claude helped me shape, articulate, and refine them. I did not use any other generative AI tools in preparing this post.
There is an irony worth naming: a piece arguing that people share more with AI than they realize was itself built through an extended, iterative conversation with an AI. Every concern I raise about the intimacy trap, I navigated consciously and in full awareness of what I was doing. That felt like the right way to write this one.
