
When AI joins the team: Three principles for responsible agent design 

A practical guide to building agents that stay differentiated, intent-aligned, and bias-resistant

By Hilary Dwyer, Jennifer Wang, and Jeffrey Basoah

Estimated reading time: 6 min.

[Image: A futuristic white interface displays the message “Create an agent with a natural, but professional tone,” surrounded by colorful icons, sliders, and buttons.]

Imagine you’re in a brainstorming session at work. As your colleagues share ideas openly and quickly, an overeager AI agent interjects with a long-winded suggestion based on outdated practices at your company. Quieter teammates disengage, conversation falters, and the ideas feel repetitive rather than innovative.

Now imagine the same scenario designed differently. The AI agent introduces itself as an “AI facilitator” that will track ideas and the time remaining. It gently prompts the group to consider the views of those who haven’t spoken. The tone is professional, warm, and diplomatic.

In the first scenario, the AI agent experience is intrusive and negatively shifts team dynamics. In the second, the interaction and design patterns make the experience intuitive and useful.

AI agents are not passive tools; they’re active contributors in enterprise workflows. When we give them style, tone, avatars, icons, or corporate identities, we shape how people at work perceive and interact with them. Done well, these attributes can make the experience more natural and delightful. Done poorly, they can erode trust, amplify bias, or even distort decision-making. Navigating this continuum is an immense design challenge.

To help our teams strike the right balance, our research group – AI Futures & Insights – investigated the nuances of responsible agent design in enterprise settings. At work, AI agents need to be functional, accurate, and secure before they are entertaining, and they must also be socially adaptive as they navigate team dynamics, contexts, and organizational processes.

These UXR studies culminated in three principles that clarify how we design AI agents to operate side by side with humans at work, and that give product teams a way to experiment responsibly with style, tone, metaphors, and visual representation. Responsible AI agents are (1) differentiated from humans, (2) built for intent, and (3) bias resistant.

Principle 1: Differentiated from humans

Responsible AI emphasizes that human users must understand when they are interacting with AI. If users cannot tell whether an AI agent is human or machine, they may over-rely on its outputs or feel deceived. AI agents must therefore carry clear visual, linguistic, and functional markers that signal they are AI, and must avoid human names, pronouns like he/she, and avatars that mimic real faces.

In our research studies, we learned that the agent’s name is the first visual cue users notice when interacting with agents. Human-like or vague names (e.g., “Alex” or “Mentor”) can confuse users into thinking they’re interacting with a person, which erodes transparency. Clear naming conventions that include an “AI” qualifier or describe the agent’s function (e.g., “Approval Management Agent”) help set expectations.

In practice, agents that are differentiated from humans should:

  • Be represented symbolically with icons, not photorealistic images, and clearly labeled as “AI” or with names that signal function (e.g., “Design Assistant Agent” vs. “Alex”); see the sketch after this list.
  • Avoid simulating emotions or implied lived experience (e.g., “I’m proud of you” or “I’ve seen this happen before”).
  • Use a consistent and bold visual treatment that differs from human profiles.
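
To make these guidelines concrete, here’s a minimal sketch in TypeScript of what an identity guardrail might look like. Everything in it is illustrative: AgentIdentity, validateIdentity, and the name blocklist are hypothetical, not part of any real agent framework.

```typescript
// Hypothetical identity manifest for an enterprise AI agent.
interface AgentIdentity {
  name: string;             // function-based, e.g., "Approval Management Agent"
  aiLabel: boolean;         // persistent "AI" qualifier shown in the UI
  avatar: "icon" | "photo"; // symbolic icons only, never photorealistic faces
  pronoun: "it";            // agents are referred to as "it", not he/she
}

// Lightweight guardrail: flag configurations that blur the human/AI line.
function validateIdentity(id: AgentIdentity): string[] {
  const issues: string[] = [];
  const humanNames = ["alex", "sam", "maya", "jordan"]; // illustrative blocklist
  if (humanNames.includes(id.name.toLowerCase())) {
    issues.push(`"${id.name}" reads as a human name; use a function-based name.`);
  }
  if (!id.aiLabel) {
    issues.push("The agent must carry a visible AI qualifier.");
  }
  if (id.avatar !== "icon") {
    issues.push("The avatar must be a symbolic icon, not a photorealistic image.");
  }
  return issues;
}

// validateIdentity({ name: "Design Assistant Agent", aiLabel: true,
//                    avatar: "icon", pronoun: "it" }) returns [] (no issues).
```

A check like this could run at design time or in CI, so a human-sounding configuration gets flagged before it ever reaches users.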

Principle 2: Built for intent

While users may want more personality, style, or tone in their non-work AI agents, these attributes can be distracting in enterprise settings. Most AI agents do not need additional personality in work settings; if they do, these additions must serve a clear user intent. AI agents that ignore users’ intents at work can show up at the wrong time, derail workflows, or constrain creativity.

Across our research studies, we learned that using AI agents regularly at work can spark strong reactions: those who favor agents in their workflows are extremely positive, while those opposed are equally firm. These preferences also vary by context. Agents in roles involving empathy, creativity, or coaching can carry more personality, while transactional or technical tasks call for a neutral style.

In practice, agents built for intent should:

  • Adapt to context. If a team’s intent is brainstorming, the AI agent avoids pushing “best practices” or past examples unless asked.
  • Map to users’ language and mental models that support the task (e.g., “guide” for onboarding, “teammate” for collaboration).
  • Offer flexibility: let users adjust tone, style, and avatar presence, or switch to a neutral mode (see the sketch after this list).
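
As one illustration of the flexibility point above, here’s a minimal TypeScript sketch of intent-driven presentation defaults with user overrides. AgentPresentation, NEUTRAL_MODE, and resolvePresentation are hypothetical names, and the intent-to-default mapping is just one plausible policy.

```typescript
// Hypothetical per-user presentation settings; all names are illustrative.
type Tone = "neutral" | "warm" | "diplomatic";

interface AgentPresentation {
  tone: Tone;
  showAvatar: boolean;
  proactivity: "on-request" | "suggest" | "interject";
}

// "Neutral mode" strips personality back to the minimum for transactional work.
const NEUTRAL_MODE: AgentPresentation = {
  tone: "neutral",
  showAvatar: false,
  proactivity: "on-request",
};

// Resolve presentation from task intent, then let the user's overrides win.
function resolvePresentation(
  intent: "brainstorm" | "coaching" | "transactional",
  userOverrides: Partial<AgentPresentation> = {},
): AgentPresentation {
  const base: AgentPresentation =
    intent === "coaching"
      ? { tone: "warm", showAvatar: true, proactivity: "suggest" }
      : intent === "brainstorm"
      // During brainstorming, stay quiet unless asked; don't push "best practices".
      ? { tone: "diplomatic", showAvatar: true, proactivity: "on-request" }
      : NEUTRAL_MODE;
  return { ...base, ...userOverrides };
}
```

The point of the pattern is the precedence: task intent sets a sensible default, and the user’s explicit preferences always win.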

Principle 3: Bias resistant

AI agents can introduce risks such as unrealistic expectations, over-reliance, groupthink, or echo-chamber effects. Social dynamics can be altered by an agent’s language, recommendations, visual metaphors, or cultural idioms. Left unchecked, these patterns can reinforce stereotypes or unproductive team behaviors. When we monitor and adjust an AI agent’s behavior and styling over time, we can uphold fairness, protect user autonomy, and maintain ethical integrity in decision-making.

Our research showed that giving AI agents human names, conversational tones, and friendly visuals might feel intuitive, but it can create an illusion of capability. These cues can inflate trust, leading users to overlook errors or accept inappropriate actions. When agents are perceived as “more human,” users expect skills like social awareness and nuanced decision-making, which most systems cannot deliver.

In practice, agents that are bias resistant should:

  • Experiment with persistent but lightweight AI disclosure (e.g., “AI-assisted” badges, repeated contextual reminders).
  • Surface uncertainty and what the AI agent cannot do (e.g., confidence ranges for output, sources, or “I don’t know” when appropriate).
  • Design for friction at critical decisions (e.g., require human confirmation for high-impact actions); see the sketch after this list.
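
Here’s a minimal sketch, again in TypeScript, of how the friction and uncertainty points might look in code. AgentAction, executeWithFriction, and requireHumanConfirmation are hypothetical, and the 0.5 confidence threshold is an arbitrary placeholder.

```typescript
// Hypothetical action descriptor; confidence is a model-reported value in [0, 1].
interface AgentAction {
  description: string;
  impact: "low" | "high";
  confidence: number;
}

// Run an agent action, surfacing uncertainty and adding friction where it matters.
// requireHumanConfirmation stands in for whatever confirmation UI the product has.
async function executeWithFriction(
  action: AgentAction,
  run: () => Promise<void>,
  requireHumanConfirmation: (prompt: string) => Promise<boolean>,
): Promise<void> {
  // Surface uncertainty rather than hiding it (arbitrary 0.5 threshold).
  if (action.confidence < 0.5) {
    console.warn(`Low confidence (${action.confidence}); say so, or say "I don't know".`);
  }
  // High-impact actions never execute without an explicit human decision.
  if (action.impact === "high") {
    const approved = await requireHumanConfirmation(
      `AI agent wants to: ${action.description}. Approve?`,
    );
    if (!approved) return; // the human declined, so the agent stops here
  }
  await run();
}
```

The deliberate pause before high-impact actions is the design choice: the confirmation step is friction by intent, not an inefficiency to optimize away.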

Wrapping up

Designing AI agents in work settings is not about making agents entertaining, but about crafting workflow experiences that feel natural, trustworthy, and intuitive. Here are some quick tips to consider if you’re designing AI agents for enterprise settings:

  • Make the “non-human” unmistakable: Names, visuals, and introductions should clearly be AI.
  • Gate personification by intent: Only add human-like attributes where they clearly improve team outcomes.
  • Account for social dynamics: Experiment with agent designs that cultivate talk-time balance, idea diversity, healthy escalation patterns, and so on.

As we push the boundaries of AI agent design, we invite others to join us in exploring new approaches and sharing lessons learned. The journey ahead will require ongoing dialogue, critical reflection, and a willingness to adapt as our understanding evolves. By working together, we can ensure that AI agents not only serve our teams but also uphold the highest standards of responsibility and trust.