AI, Misinformation, and Trust—The Scarce Commodity

By Dennis D. McDonald

A recent article in Science magazine, “The Misinformation Accelerator,” describes how generative AI is transforming the creation of misinformation.

In one case, researchers observed a disinformation site dramatically increase its output after switching to AI, building what one researcher described as a “bigger, better forest” of content to hide misleading narratives.

At the same time, controlled studies have not shown AI-generated propaganda to be more persuasive than human-written content. Researchers also continue to struggle to demonstrate that exposure to misinformation consistently changes people’s attitudes or behavior.

Perhaps focusing on the use of AI to generate propaganda and misinformation is shortsighted. I’m reminded of what happened when “Web 2.0” and social media emerged. Back then, I naively believed that lowering barriers to communication would bring people closer together. The logic seemed straightforward: if information flows more freely, understanding should improve.

That’s not necessarily what happened. I began writing about the decay of social media as advertising and political misinformation increasingly dominated these platforms. Information flows have become more fragmented. Online groups have grown more insulated. The same tools that made sharing information easier have also made it easier to reinforce existing beliefs—along with biases and outright falsehoods.

What I took from that experience is simple: making information easier to create and distribute does not necessarily improve how people decide what to believe.

Many years ago in the United States, most people received their news from a small number of television networks, along with major newspapers and magazines. Figures like Walter Cronkite occupied a central role in a more constrained information environment.

That model has steadily eroded. With so many voices competing for our attention, which ones do we choose to trust?

My personal approach is to seek out a wide variety of sources, particularly since I no longer fully trust traditional media such as The Washington Post. By scanning as many sources as possible, I hope to increase the likelihood of uncovering the truth about what is happening in the world.

One of the more important shifts described in the Science article is that researchers are beginning to focus less on the content of misinformation—which can now be created at the drop of a hat—and more on who is spreading it and how they behave.

That shift matters. The list of news sources I follow has evolved over time as I learn more about who is trustworthy, who can be relied upon, and who might be held accountable for telling—or not telling—the truth.

Technical solutions—detection tools, AI countermeasures, and platform policies—may help at the margins, but they cannot solve the trust problem. That is something each of us must work through individually.

Social media platforms such as Bluesky—and before it, Twitter—can unfortunately be significant sources of misinformation, given how easy it is to post short snippets of realistic-looking news without a source or verification.

If the past 20 years have taught me anything, it is this: making information easier to create and share does not bring us closer to the truth. It can, however, fundamentally change how we decide whom to believe.

Copyright © 2026 by Dennis D. McDonald
