AI and Disinformation
Large Language Models (LLMs) accomplish a wide variety of human-like feats of reading and writing, although they do so in ways that are different from people and still poorly understood. Because of the way that they are designed, there is always some possibility that an LLM will ‘hallucinate’ (Huang et al 2023), generating content that is factually incorrect, nonsensical, or completely made up, yet presenting it with confidence as if it were true. This happens because LLMs are trained to predict the most probable next word or sequence of words based on the patterns they learned from vast textual datasets, rather than possessing a true understanding of facts or reality. Unfortunately, the textual datasets that make up our contemporary ‘knowledge environment’ are rife with problematic content. People are constantly exposed to misinformation (information that is false, regardless of intent) and disinformation (information that is intentionally false and spread to deceive). They also encounter influence operations, which are coordinated efforts to manipulate public opinion, along with propaganda, clickbait, and other forms of denial and deception (Clark & Mitchell 2019). Making matters worse, Gen AI is increasingly being used to produce harmful media. This content can be highly personalized to the individual, making it even more potent and harder to recognize as false (Freedom House 2023, Saab 2024).
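To see what ‘predicting the most probable next word’ looks like in practice, here is a minimal sketch using the Hugging Face transformers library and the small, older GPT-2 model; both are my illustrative choices rather than tools discussed elsewhere in this piece. The model assigns probabilities to plausible continuations without consulting any store of verified facts.

```python
# A minimal sketch: inspect the probabilities a small open model assigns to
# possible next tokens. The point is that it ranks likely continuations,
# not verified facts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]        # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)                        # the five most probable continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}: {float(p):.3f}")
```

Whichever continuation scores highest is simply the most statistically likely one, which is why fluent, confident, and wrong answers are always possible.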
Contemporary fears about the adversarial use of Gen AI may be overstated (Simon et al 2023), but it is best to treat the AI as a valuable but potentially unreliable teammate. Verification should always be foremost on your mind when you use LLMs for news, research, non-fiction writing or other tasks where facts matter.
There’s an anecdote about a famous scientist who sees a magic trick and goes away and thinks about it for a while. He eventually returns to the magician and explains how the trick works. The magician shows the trick to the scientist again, in such a way that proves that the scientist’s explanation is wrong. The scientist goes off again to ponder and returns with a different explanation, which is disproven. This happens several times, and the scientist never figures out the trick. The magician, of course, knows many methods by which to achieve the same effect, and is using a different one each time. (Did I make this up? Try copying the paragraph into the LLM of your choice and adding a prompt like “Can you help me find out if there is any evidence for this being a true story?”)
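If you would rather run that check programmatically than paste it into a chat window, something along the following lines should work. It is only a sketch: it assumes Google’s Gen AI Python SDK (the google-genai package) and an API key configured in your environment, and the model name is one current option that could be swapped for any capable model.

```python
# A hedged sketch of sending the anecdote plus a verification prompt to a
# Gemini model. Assumes the google-genai package is installed and an API key
# is available in the environment.
from google import genai

# The anecdote paragraph from above, truncated here for brevity.
anecdote = ("There's an anecdote about a famous scientist who sees a magic trick "
            "and goes away and thinks about it for a while. ...")

prompt = (anecdote + "\n\nCan you help me find out if there is any evidence "
          "for this being a true story?")

client = genai.Client()   # reads the API key from the environment
response = client.models.generate_content(model="gemini-2.5-flash", contents=prompt)
print(response.text)      # treat the answer as a lead to verify, not as a fact
```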
The natural world may hide its secrets, but it doesn’t try to intentionally deceive us. In social settings, the situation may be less clear. Nevertheless, as University of Western Ontario researcher Victoria L. Rubin notes in her 2022 study of misinformation and disinformation, “We typically proceed with an assumption of truthfulness, honesty, and trust as essential to both communicating parties” (p. 30). In many humanities and social science courses, students learn to do qualitative reasoning under conditions of ambiguity, and LLMs can be a very useful companion. There are relatively few courses, however, where students can learn to do similar kinds of reasoning under adversarial conditions, where intentional efforts are being made to deceive them or where they are exposed to significant amounts of misinformation.
At UWO, I teach a fourth-year course called Spy vs. Spy, a case-based introduction to structured analytic techniques (SATs) applied to scenarios of espionage, counterespionage, cyber war, terrorism, organized crime, homeland security, and national defence in the 20th and 21st centuries. It teaches the utility of evidence-based, qualitative analysis in settings where decisions must be made collaboratively under adversarial conditions. SATs (Pherson & Heuer 2021, Turkel 2024) are methods that are designed to slow down thinking to ameliorate cognitive biases and create shareable external representations that can be inspected by all team members (including, in my case, by AI team members).
One of the SATs that we cover in Spy vs. Spy is deception detection, “a set of checklists to help analysts determine when to look for deception, discover whether deception actually is present, and figure out what to do to avoid being deceived” (Pherson & Heuer 2021). Deception is most effective when it plays to the cognitive biases of the person being deceived. Is the AI telling you what you want to hear? Confirming your view (or more generally, your society’s view) of the way things are? Reflecting your past experiences? The more you depend on its information, the more likely you are to reject criticism of it and the more likely the information is to come to mind.
The deception detection questions fall into four categories (the full checklist has 18 questions spread across them).
- MOM (Motive, Opportunity and Means): these are questions that draw attention to the classic factors that criminologists use to solve a crime.
- POP (Past Opposition Practices): these questions focus on historical precedents and current circumstances.
- MOSES (Manipulability of Sources): these concern the reliability of the sources.
- EVE (Evaluation of Evidence): these have to do with criticism and corroboration.
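One way to turn the checklist into the kind of shareable external representation mentioned above is to record each question, answer, and piece of supporting evidence in a simple structure that human and AI teammates can both inspect. The sketch below is hypothetical: the format is mine rather than Pherson and Heuer’s, and the questions are loose paraphrases, not the actual 18.

```python
# A hypothetical way to record deception-detection work so that it can be
# shared and inspected by everyone on the team, including AI teammates.
from dataclasses import dataclass, field

@dataclass
class DeceptionCheck:
    category: str                                        # "MOM", "POP", "MOSES", or "EVE"
    question: str                                        # paraphrase of a checklist question
    answer: str = ""                                     # the analyst's assessment
    evidence: list[str] = field(default_factory=list)    # sources others can check

checklist = [
    DeceptionCheck("MOM", "Does the potential deceiver have a motive, opportunity, and means?"),
    DeceptionCheck("POP", "Has this actor or source practised deception in similar circumstances before?"),
    DeceptionCheck("MOSES", "How vulnerable to manipulation are the sources of this information?"),
    DeceptionCheck("EVE", "Is the key evidence corroborated by an independent source?"),
]

# Answers and evidence get filled in as the analysis proceeds, for example:
checklist[0].answer = "No explicit motive, but hallucination is always possible."
checklist[0].evidence.append("Gemini transcript (no link to Seckel provided)")

for item in checklist:
    print(f"[{item.category}] {item.question} -> {item.answer or 'unanswered'}")
```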
There are some philosophical questions here about the agency or intent of the deceiver. Does an AI have either or both? I don’t know, so I will leave that to other people to figure out. My goal is to help humans avoid being deceived.
In the case of my anecdote about the magician and scientist, Gemini 2.5 Flash identified the participants as James “The Amazing” Randi and Richard Feynman and gave me three reasons to think the anecdote might be true. First, “Al Seckel, a former colleague and friend of Feynman, has recounted the story.” Second, it aligns with Feynman’s famously curious personality, not to mention his joy in recounting anecdotes. And third, it also aligns with Randi’s persona as a professional debunker. Let’s run through the deception detection checklist in brief.
- MOM: Gemini did not provide a link to Seckel’s work, so it certainly has the opportunity and means to deceive me. Setting aside an explicit motive, there is always room for hallucination.
- POP: There have been a few news stories recently about Gen AI deceiving people and then doubling down by fabricating sources (e.g., Coates 2025).
- MOSES: Saying that the anecdote aligns with the personalities of the two men is a weak claim, as that is true of a vast number of pairs of people. We should be suspicious of vague and indirect ‘evidence’.
- EVE: First, we need to find the Seckel account (if it exists), and then we would need corroborating evidence from a different source.
The retrieval-augmented generation (RAG) technique makes it possible to search sources by meaning rather than keyword and to cite results that can be checked by humans (Gao et al 2024). RAG provides LLMs with access to external, up-to-date information beyond their initial training data. When a user asks a question, RAG first retrieves relevant documents or data snippets from a knowledge base (like a database or a collection of articles). This retrieved information is then given to the LLM as additional context, allowing it to generate a more accurate, factual, and less ‘hallucinated’ response. Essentially, RAG acts like giving an LLM an ‘open book’ to consult before answering, making its outputs more reliable and specific. If I had a collection of sources that included material pertinent to the truth of the magician-scientist anecdote, then semantic search using my anecdote would turn up results like the following one, attributed to Al Seckel and published on Feynman’s official (posthumous) website under the title “Al Seckel on Feynman”:
“Back in 1984 Feynman attended a lecture at Caltech given by James Amazing Randi, a well known magician and debunker of psychics. At this lecture, Randi performed a very good mental trick involving a newspaper and a prediction contained in an envelope pasted to the blackboard. The next evening, Randi and Feynman were at my house for dinner. It was a delightful and fun evening with lots of jokes and laughter all around. At about 1:30 a.m., Feynman and Randi still going strong, Feynman decided to figure out how Randi did his mental trick. ‘Oh, no. You can’t solve that trick. You don’t have enough information!’ Randi exclaimed. ‘What do you mean? Physicists never have enough information,’ Feynman responded. Feynman began to stare off into space with Randi muttering on how he would not be able to solve it. Step by step, Feynman went through the process out loud and told Randi how the trick must have been done. Randi literally fell backwards over his chair and exclaimed, ‘You didn’t fall off no apple cart! You didn’t get that Swedish Prize for nothing!’ Feynman roared with laughter. Later, on another visit to Caltech, Randi once again joined us for lunch. He did another trick for Feynman, this time a card trick. ‘I DELIBERATELY misled you this time!’ Randi stated. Feynman paid him no attention. In less than three minutes, Feynman solved the trick. ‘I’m never going to show you another trick again!’ declared a frustrated Randi.”
Seckel’s anecdote involves the magician Randi performing tricks for the scientist Feynman, but the moral of the story is the exact opposite of the version I started with! The magician’s deception is transparent to the scientist. This also ‘aligns’ with the personalities of both men. Randi was most famous for his skepticism and would have argued that scientific methods allow us to see through tricks of all kinds. And in the funny stories that Feynman told about himself, he almost always came out on top.
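The Seckel passage above is the kind of result that semantic search over a personal collection of sources could surface. Here is a minimal sketch of the retrieval step, assuming the sentence-transformers library, a small open embedding model, and a hypothetical mini-collection; the file names and snippets are made up for illustration.

```python
# A minimal sketch of the retrieval half of RAG: embed a small collection of
# sources, embed the query, and return the most semantically similar passage.
from sentence_transformers import SentenceTransformer, util

# Hypothetical mini-collection; in practice these would be files or database records.
sources = {
    "seckel_on_feynman.txt": "Back in 1984 Feynman attended a lecture at Caltech given by "
                             "James Randi, a well known magician and debunker of psychics ...",
    "randi_profile.txt": "A profile of James Randi and his career debunking psychics and faith healers.",
    "feynman_memoir_excerpt.txt": "Excerpts from Feynman's memoirs about safecracking and bongo drums.",
}

query = ("An anecdote about a famous scientist who repeatedly fails to explain a magician's "
         "trick because the magician keeps changing his method.")

model = SentenceTransformer("all-MiniLM-L6-v2")            # embeds passages by meaning
names = list(sources)
doc_vecs = model.encode([sources[n] for n in names], convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_vec, doc_vecs)[0]              # semantic similarity, not keyword overlap
best = int(scores.argmax())
print(f"Most relevant source: {names[best]} (similarity {scores[best].item():.2f})")

# The retrieved passage is then handed to the LLM as citable context:
prompt = ("Using only the source below, and citing it, assess whether the "
          "magician-scientist anecdote is true.\n\n"
          f"SOURCE ({names[best]}):\n{sources[names[best]]}")
```

A RAG tool then passes the retrieved passage, with its citation, to the LLM, so the answer can be checked against a source a human can actually read.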
We haven’t reached the end of this yet. When Dani Dilkes asked ChatGPT about the magician-scientist anecdote, it told her that the story was apocryphal. Here is a prompt for your favourite AI: “How would I prove that a story is apocryphal?”
The Challenge
The writing teacher Philip Gerard (2009) gives his creative nonfiction writing students an exercise to get started: “Using any sources you need: Try to prove one fact beyond a shadow of a doubt.” Your mission, should you choose to accept it, is to try to prove one fact beyond a shadow of a doubt by working with the Generative AI of your choice. This will be easier if you use a subscription model rather than a free one, and easiest of all if you use a RAG-based tool like Google’s NotebookLM. In any event, it will be helpful to keep in mind the counter-deceptive techniques discussed above, considering Motive, Opportunity and Means; Past Opposition Practices; Manipulability of Sources; and Evaluation of Evidence.
References
- Clark, Robert M. and William L. Mitchell. (2019). Deception: Counterdeception and Counterintelligence. CQ Press. Los Angeles.
- Coates, Sam. (2025). Can we trust ChatGPT despite it ‘hallucinating’ answers? Sky News, June 9. https://news.sky.com/story/can-we-trust-chatgpt-despite-it-hallucinating-answers-13380975
- Freedom House. (2023). Freedom on the Net 2023: The Repressive Power of Artificial Intelligence. https://freedomhouse.org/sites/default/files/2024-10/FOTN2023Final24.pdf
- Gao, Yunfan, et al. (2024). Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv:2312.10997v5
- Gerard, Philip. (2009). Beyond a Shadow of a Doubt. In Sherry Ellis (Ed.), Now Write! Nonfiction: Memoir, Journalism, and Creative Nonfiction Exercises from Today’s Best Writers and Teachers. Jeremy P. Tarcher / Penguin. New York.
- Huang, Lei, et al. (2023). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv:2311.05232v1
- Pherson, Randolph H. and Richards J. Heuer Jr. (2021). Structured Analytic Techniques for Intelligence Analysis, 3rd ed. CQ Press. Los Angeles.
- Rubin, Victoria L. (2022). Misinformation and Disinformation: Detecting Fakes with the Eye and AI. Springer.
- Saab, Beatriz. (2024). Manufacturing Deceit: How Generative AI Supercharges Information Manipulation. National Endowment for Democracy and International Forum for Democratic Studies. https://www.ned.org/wp-content/uploads/2024/06/NED_FORUM-Gen-AI-and-Info-Manipulation.pdf
- Simon, Felix M., et al. (2023). Misinformation Reloaded? Fears About the Impact of Generative AI on Misinformation are Overblown. Harvard Kennedy School (HKS) Misinformation Review vol. 4, no. 5. https://misinforeview.hks.harvard.edu/wp-content/uploads/2023/10/simon_generative_AI_fears_20231018.pdf
- Turkel, William J. (2024). Incorporating the Sensemaking Loop from Intelligence Analysis into Bespoke Tools for Digital History. Historia y Grafía vol. 64:23-54.
Disclosure
All the research and writing I have done in my career has depended on the use of machine learning, text mining, and other pre-generative AI methods with very large collections of digitized or born-digital sources. For the past few years, I have also incorporated LLMs into my methodology, focusing on semantic search and an ever-expanding set of tools that make use of RAG in various guises. I also use Gen AI for summarizing, rephrasing, generating ideas, and a lot of other tasks. (Given that Gen AI is now incorporated into web search, email, library catalogs, grammar checking, book and article recommendation, and many other places, you should be suspicious of any scholar who claims that they are not using Gen AI in any form.) This piece was also improved by Dani Dilkes, who provided close reading and great suggestions, and by Gemini’s translation of Dani’s requests into prose I could rework.
The responsibility for what I publish is my own, of course.
