How Not to Use AI: A Personal Story
Many years ago, I worked on a series of government research contracts focused on the impact of technology on copyright protection. One project examined how the photocopying of academic journal articles by libraries affected academic journal publishers. Some groups were adamantly opposed to such photocopying, arguing it reduced demand for subscriptions. Others saw it as a useful tool that promoted academic communication.
The project I worked on focused on collecting nationwide statistical data about such photocopying and made use of a variety of statistical and research methods. It was a fun and stimulating effort that helped spark my lifelong interest in data analysis, communication, technology, and applied research.
Eventually, those of us involved in the study moved on to other things. I wrote a couple of long-forgotten articles on the topic, gave some conference presentations, and pursued other interests.
This past week, I was reminded of that project when I received an email from Academia.edu. It happily informed me that an AI-generated podcast had been created based on one of my old articles about library photocopying and copyright. Intrigued, I clicked the "play" button included in the email. I listened for a while—but stopped halfway through.
Why? Because at no point did this "AI-generated" podcast indicate that the article had been written decades ago. It failed to mention that law, technology, and scholarly publishing have changed significantly since then.
All I could think was, "If someone even slightly knowledgeable about this topic listens to this, they're going to think I’m nuts!"
So, I unchecked the box on the Academia.edu website that allowed the podcast to be published.
This incident raises several issues. The most serious is the increasingly common practice of publishing content online without sufficient context. In this case, what was missing was information to orient the reader—or in my case, the listener—to the historical context of the research. From what I heard, the AI-generated podcast gave a reasonably accurate summary of the article (as far as I can recall). But it was completely lacking in any indication that much of what we researched back then has since been superseded—or even rendered obsolete.
Sure, the research may have some historical significance—perhaps as an early example of using objective statistical data to inform potential changes in copyright law. But sharing such out-of-context information through a brief, decontextualized podcast is not the most responsible way to present historical research.
Note: I’m not particularly upset that I wasn’t asked for permission before the AI-generated podcast was created. I’ve been publishing material online since the early 2000s, and I’ve gotten used to the loose and often attribution-free way information circulates online. Sometimes that matters; sometimes it doesn’t.
What often troubles me more is when I see, on a platform like Bluesky, an intentionally inflammatory video clip posted without any date, source, or context. The viewer is left to determine the validity of the content, and asking for source information is often met with a curt response like, "Just Google it!"
Trust me—these days, with the rise of realistic-looking AI-generated images and media, it’s not always easy to tell what’s real and what’s not.
In a world where half-truths and lies can circle the globe in seconds, maybe my concerns about context seem old-fashioned. But I do tend to pay more attention when my name is attached to something. Even if it sounds reasonable—like that podcast from Academia.edu—if the online presentation doesn’t tell the whole story, I have to say, “No.”
Copyright © 2025 by Dennis D. McDonald