To Control AI, First Understand Your Relationship With It

By Dennis D. McDonald

Introduction

If you want to control AI, you first need to understand what you mean by control. That also means understanding what influence you have over your relationship with it. There are several ways to look at this; they are discussed below.

Discussion

1. Control Depends on Relationship Type

To talk meaningfully about “controlling” AI, we must first understand our relationship with it. Control operates differently when AI functions as a subordinate tool versus when it acts as a collaborator or advisor. Assumptions about authority, trust, and autonomy shift depending on that role.

What came to mind when considering these relationship types was how novelist Robert Harris described the evolving role of Tiro across his three novels about Cicero—Imperium, Conspirata, and Dictator. At first, Tiro was an educated slave who could read, write, and take notes. Over time, his relationship with Cicero evolved into that of a trusted advisor and eventually a free man.

In just the short time I have been using various AI tools, I’ve seen my relationship with them evolve—partly in response to what the AI learns about my interests, roles, and responsibilities. I’ve learned, for example, to be very explicit about what I want the AI tool to do, even for seemingly basic tasks such as parsing unstructured text into structured spreadsheets and databases. The more the AI understands about your background and intent, the more helpful it can be—though it can also be annoyingly overhelpful with its suggestions.

2. Roles and Relationships Evolve

AI systems already occupy a range of roles—secretary, trusted advisor, coworker, teacher, artist, or analyst. Each role carries its own expectations for reliability, creativity, and accountability. Understanding which role we’re engaging shapes how we use AI and how we evaluate its behavior.

One complicating factor is that these roles and relationships can shift instantly, which can cause confusion. Again, being explicit about one’s expectations when crafting a detailed initial prompt—including specifying what prior stored information should be consulted—is important.

3. Feedback Personalizes the Interaction

AI’s adaptive responses to user feedback can create a sense of personalization that mimics social interaction and resembles learning that influences subsequent decisions. This can make interactions feel cooperative or even intuitive. It can also mask the complexity of the system beneath.

I experienced this recently while using ChatGPT to diagnose and resolve a series of hardware and software problems—including figuring out why the inside light of my dryer worked but the control panel did not.

Being able to provide and receive feedback over the course of an interaction is essential, especially when dealing with systems or issues outside your own expertise. Understanding the steps the AI is following to solve your problem is critical. That’s why I find it helpful for the AI to provide ongoing descriptions of how it is addressing a problem. I may not understand a scrolling display of Python code, but I will understand periodic text explanations of what the AI is “thinking” as it works through an issue.

4. Collaboration Can Blur Human–Machine Boundaries

When AI assists in problem-solving, it can be difficult to separate human input from algorithmic contribution. You may find yourself wondering whether the AI’s response is (a) copied from something it read that resembles your query, (b) derived from well-understood rules or conventions, or (c) simply fabricated.

I encountered this recently when I asked an AI tool how House Atreides made its money before being sent to Arrakis. (Those familiar with the Dune universe will understand the question.) I haven’t read all the Dune books, but I assumed that somewhere there must be a description of the Atreides’ revenue sources. Still, I was surprised when one supposed source turned out to be training the Emperor’s Sardaukar terror troops. Who knew?

5. AI as Creator and Critic

Recent experiments where AI systems both author and review scientific papers show how far this partnership can go. A recent Science magazine article, “At futuristic meeting, AIs took the lead in producing and reviewing all the studies,” described a conference called Agents4Science that intentionally flipped conventional academic norms by making AI systems the lead authors—and the reviewers—of all presented research papers.

Of 315 papers submitted, 48 were accepted, with AI handling everything from hypothesis generation (57% of submissions) to substantial writing (90% of all papers).

Reading that article inspired me to write this one. I had already been thinking about how AI can support many stages of the research communication process, including:

  • Designing and conducting research

  • Drafting manuscripts

  • Preparing figures, tables, and supplementary materials

  • Ensuring compliance (ethics, conflicts, data sharing)

  • Selecting target journals

  • Formatting manuscripts

So, how “good” were the AI-generated and -reviewed papers? According to the Science article, results were mixed. But the real issue—when considering what kind of relationship we want with AI—comes down to judgment. There may be benefits such as speed, cost-effectiveness, and scalability, but judgment, expertise, transparency, and—see next section—accountability remain essential.

6. Acknowledgment and Attribution

If an AI contributes meaningfully to a task, how should that contribution be recognized? We credit human colleagues for intellectual input; should AI systems be treated merely as instruments, or as collaborators deserving acknowledgment?

This article, for example, has been developing in my mind for several months and reflects my long-standing interest in scholarly publishing and creativity. (My Ph.D. dissertation, many years ago, involved developing a mathematical model of how astrophysicists and cancer researchers select the journals in which they publish.)

The actual writing began with ten handwritten pages of notes about control, AI, and relationships. I converted those notes to text and fed them into ChatGPT with the prompt:

The following is a stream-of-consciousness set of notes about my view that controlling AI depends at least partly on the type of relationship we want to have with it, and this will be impacted by the complicating factor that AI can play so many different roles. Please summarize this into 6–10 key points for further writing.

This initiated a back-and-forth where ChatGPT not only listed key points but also—without my asking—drafted ideas for discussing them. While I often use ChatGPT to edit this website’s text (“please edit the following text for grammar, spelling, and clarity”), I rarely use it to generate text. I sensed that, if allowed, ChatGPT would have written this entire article.

Which brings us to the question of acknowledgment and attribution. Should the fact that a researcher uses AI tools at various stages of the process always be made public—just as a colleague’s contribution might be acknowledged in a published paper?

For some tasks, disclosure would be like reporting the statistical package used to calculate basic descriptive statistics. But for others—such as developing an original hypothesis—shouldn’t that involvement be disclosed?

Conclusions

The role of AI in research and communication is still evolving. Until we have a better sense of how to control and relate to its many different uses, transparency about how AI is used will be essential. This will be just as important for research published in academic journals as for the images and news carried by legacy and social media.

Copyright © 2025 by Dennis D. McDonald. The concept and final composition of the accompanying image were created through an interactive exchange with ChatGPT (GPT-5), exploring variations on light, form, and symbolism to illustrate the balance between control and automation.
