Role Awareness Is Essential in AI Tool Design and Governance
Why Experience, Judgment, and User Roles Matter in Intelligent Systems
It started with a letter
This article began as a handwritten letter to an old friend—a former client, now retired—with whom I share long-standing personal and professional interests. I wanted to give him a candid perspective on my use of AI tools such as ChatGPT in my consulting, including work for an IT contractor whose business development efforts I support.
What I have been encountering in my day-to-day use of AI tools, both personal and professional, raises questions about system design and, ultimately, AI governance. For example, I believe an understanding of an AI tool user’s role—including the user’s responsibilities, scope of authority, and decision-making context—can and should influence how that tool is designed, configured, and managed.
Before sending the letter, I scanned it and uploaded it to my ChatGPT Plus account. After confirming that the system could accurately convert cursive handwriting into editable text, I asked it to summarize the letter’s main ideas as the basis for a website post while deliberately removing references to specific products, services, or companies.
Since then, the following text has gone through several iterations as my letter’s original ideas were revised, expanded, and interpreted with the help of ChatGPT.
Experience, Judgment, and the User’s Role
Some contemporary software systems—including those incorporating AI—are designed to manage complexity by modeling and applying entire workflows or lifecycles. These systems are often comprehensive by design. They assume a sequence of activities that begins with discovery, moves through evaluation and decision-making, and continues into execution, monitoring, and documentation.
Logic behind the tool
From a design perspective, this approach is understandable. Systems built around end-to-end processes provide structure, continuity, and consistency, especially for organizations seeking repeatability and oversight. Knowing that there is a well-structured logic behind what an AI tool does or suggests helps build confidence in its interpretations and recommendations, akin to the confidence engendered by a professional who displays his or her training, qualifications, or certifications.
When combined with AI-based capabilities, such systems can also operate at a scale and speed that would be impractical using manual methods alone. However, this same strength can create friction when users engage with the system from different roles and with different objectives.
One user’s focus
In my work with one consulting client, for example, I operate primarily at the front end of a complex process. I help identify business opportunities. My responsibility is not to execute an entire business development lifecycle but to help determine which opportunities warrant management’s attention. This requires broad scanning, comparison, and prioritization of potential opportunities rather than sustained involvement in downstream activities.
When a system is designed around full lifecycle participation, it may implicitly assume that all users intend to progress through every stage. Recommendations, prompts, and suggested “next steps” can be generated on that basis. For users whose role is intentionally limited, this can result in guidance that is technically sound but operationally misaligned and time-consuming to address.
Adapting to individual users
This highlights a key design consideration: complex AI-supported systems may need to adapt not only to different organizations, but to different users within those organizations. A junior analyst, a subject matter expert, and a senior advisor may all rely on the same data and tools, yet their goals, constraints, and measures of effectiveness differ. Systems that recognize and accommodate these differences are more likely to support effective decision-making. They also support more efficient resource allocation, since time is money for any cost-conscious organization.
One area where AI-enabled systems can excel is their ability to learn. Tools that remember prior searches, decisions, stated preferences, and patterns of interaction can significantly reduce repetitive work and improve relevance. Such embedded learning and memory features allow the system to adapt to how a user actually works and how that user’s responsibilities are defined.
It is also important to distinguish between tasks that benefit most from automation and those that do not. AI is exceptionally effective at large-scale research, information retrieval, and pattern detection. When guided effectively, these capabilities can significantly reduce the time and effort required to assemble context and options. Given my own experience managing market research for technology-based systems, I continue to be amazed at how AI-based search tools accelerate market and competitor analysis.
Informing judgment
Mature judgment, however, operates at a different level. Decisions about priority, risk, and strategic fit are shaped not only by information, but by experience, organizational knowledge, and an understanding of strategy and consequences. In my view, the most productive use of intelligent systems is not to replace judgment, but to support it—by handling repetitive and computationally intensive tasks while leaving role-specific decision-making in human hands.
As systems continue to evolve, their effectiveness will depend less on how much process they incorporate and more on how well they adapt to varied modes of use. The ability to adapt may ultimately matter more than attempting complete automation of a process, particularly when that adaptation takes into account awareness of the user’s role and responsibilities.
From Role Awareness to AI Governance
The observations above naturally extend beyond system usability into questions of governance. If AI-enabled tools adapt to users and their roles over time—learning preferences, remembering decisions, and shaping what is presented next—then how those adaptations are guided, constrained, and reviewed becomes an organizational concern.
Multiple pathways
Software developers have long recognized that “one size fits all” is rarely appropriate for complex systems. Today’s tools can learn about user preferences, experience levels, and past behavior, enabling them to better anticipate expectations and support decision-making when users return.
What is new is the scale and influence of modern AI systems. These systems often exceed the experience of individual users not only in the volume of information they process, but also in their ability to analyze and determine what information is surfaced and how it is packaged. Those that incorporate embedded process or lifecycle models can shape both what the user sees and what the system encourages the user to do next.
Experience and context
This is where user experience and organizational context become critical. As noted earlier, one of my own roles involves identifying potential business opportunities for one client across multiple technology domains. How I interact with AI tools has been shaped by decades of consulting, research, project management, and corporate management experience. This experience informs how I assess relevance, risk, and priority when reviewing a list of search-retrieved opportunities and deciding which to forward to my client for consideration.
At the same time, the analytical capacity of AI systems far exceeds what any individual can match. Research tasks that might take me a full day can often be completed in seconds. This significantly accelerates parts of the research process, but it does not eliminate the need for judgment and collaborative decision-making. Decisions about whether to pursue an opportunity still require management judgment—particularly when considering proposal effort, pricing strategy, staffing, and partner engagement.
This brings us back to role awareness. For AI tools to be effective and responsibly deployed, they must account for the user’s responsibilities, experience level, and scope of authority, as well as how that user’s work is reviewed and acted upon within the organization. Ideally, the system learns about both the user and the organizational context in which the user operates and adapts its behavior accordingly.
This is where AI governance enters the picture. Managing how AI tools are used begins with understanding what they are intended to support and how they align with organizational goals. Traditional consulting engagements typically start by developing an understanding of a client’s business strategy and operating model. The same principle applies to AI systems.
One possible approach to facilitating AI governance in such situations might be the creation of an evolving, two-part profile: one part describing the business or organizational context in which the tool is used, and another describing the roles, responsibilities, and experience levels of its users. These profiles should be created, reviewed, and updated over time and made available to both users and management.
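To make the idea concrete, here is a minimal sketch of what such a two-part profile might look like, written in Python. Every name here—OrganizationProfile, UserRoleProfile, role_aligned, and the example field values—is a hypothetical illustration of the concept, not a feature of any particular product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OrganizationProfile:
    """Hypothetical profile of the business context an AI tool is meant to support."""
    mission: str
    strategic_priorities: list[str]
    last_reviewed: date               # governance: profiles are reviewed and updated over time

@dataclass
class UserRoleProfile:
    """Hypothetical profile of one user's role, scope of authority, and experience."""
    role: str
    responsibilities: list[str]
    lifecycle_stages: list[str]       # the stages of the process this user actually works in
    experience_level: str             # e.g., "junior analyst", "senior advisor"
    decision_authority: str           # e.g., "recommend only", "approve and execute"
    last_reviewed: date

def role_aligned(suggested_stage: str, user: UserRoleProfile) -> bool:
    """Suppress 'next step' suggestions aimed at lifecycle stages outside the user's role."""
    return suggested_stage in user.lifecycle_stages

# Example: a front-end user whose responsibility ends at opportunity identification.
scout = UserRoleProfile(
    role="opportunity scout",
    responsibilities=["scan sources", "compare and prioritize opportunities"],
    lifecycle_stages=["discovery", "evaluation"],
    experience_level="senior advisor",
    decision_authority="recommend only",
    last_reviewed=date(2025, 12, 1),
)

print(role_aligned("discovery", scout))   # True: within this user's scope
print(role_aligned("execution", scout))   # False: downstream prompt, filter it out
```

The point of the sketch is the filtering step: once a system knows which lifecycle stages a user is actually responsible for, it can stop generating “next step” guidance for stages that fall outside that scope—the operational misalignment described earlier.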
Alignment with organizational priorities
Doing so increases the likelihood that adaptive AI systems remain aligned with organizational priorities rather than simply optimizing for the completeness or technical capability of an externally designed process.
In that sense, this article’s origins—as a personal letter reflecting on everyday work—are not incidental. Questions of AI governance often surface first at the point where real people, real roles, and real responsibilities intersect with increasingly capable systems.
Governance must adapt
How governance operates in this context will necessarily differ from organization to organization. Rigid, hierarchically structured organizations will govern AI use differently from more collaborative organizations where departmental boundaries are less distinct.
Another difference lies in whether AI tools are viewed as traditional, purpose-designed applications that support consistent execution of repetitive processes, or as embedded “advisors” tuned to individual preferences and informed decision-making. Either way, failing to consider how AI tools should be governed is a major mistake.
Copyright 2025 by Dennis D. McDonald, Ph.D. This article was written with the aid and support of ChatGPT, which acted as editor, advisor, and—in some cases—collaborator. While the core ideas are my own and originate in a personal letter, ChatGPT helped frame and refine the discussion through interaction over several days in December 2025.



