
Design Thinking

Outcomes over output: Designing shared cognition 

How we are shaping systems that help people think better, not just type faster.

By Jay Tan – 9 min read



In Stanley Kubrick’s 2001: A Space Odyssey, HAL 9000 was built to be perfect: an AI that never made mistakes, yet one that ultimately faltered in the face of ambiguity. HAL could process flawlessly but not reason wisely. It didn’t misunderstand data. It misunderstood intent. 

More than fifty years later, we face a similar design challenge, though in subtler form. Today’s generative AI systems are dazzlingly capable, yet their usefulness depends less on raw intelligence than on the wisdom of their design. When content can be produced in seconds, the real work shifts to orchestration – planning, reasoning and aligning outcomes. The challenge for product makers is not how to make AI faster, but how to enable better thinking through AI. 

At Microsoft, teams like Tools for Thought and the Office Design Lab are actively shaping the next frontier of product design: the shared cognitive space between people and machines. Tools for Thought, led by researchers such as Richard Banks, explores how software can scaffold human reasoning, helping people clarify intent and orchestrate complex work. Meanwhile, the Office Design Lab, where designers like Philipp Steinacher lead experimentation, focuses on prototypes that keep cognition in flow while introducing productive friction at the right moments.  

These efforts aim to create tangible systems where thought is co-authored, context is understood, and reasoning unfolds in rhythm, building tools that amplify human capability rather than replace it.  

Why now

The pace of work has never been faster. As humans, we are neurologically wired to seek shortcuts, and AI offers them in abundance. However, Steinacher cautions: “Our tools have excelled at automating tasks and scaling output, but they can just as easily diminish the depth, creativity and cognitive gain that meaningful work requires.” 

Research in AI-assisted decision-making echoes this tension. Systems that optimize for fast or accurate answers often undermine longer-term goals like learning, calibration, and capability building. In one study, improving accuracy was straightforward, but supporting human learning proved significantly more complex and required different strategies. This suggests that designing for better thinking is not simply a matter of building intelligence. It’s about understanding which parts of a task AI should support.  

This presents us with a moment to rethink our design priorities. As routine and repeatable tasks become automated, the value of human work moves upstream toward higher-order activities such as framing problems, interpreting ambiguity, and generating insight. Some steps are easier to automate than others, but the real challenge is deciding where assistance adds value without eroding human judgment or growth.  

Designing Systems for Thought, Not Just Throughput

For decades, software has been defined by static parameters – files, modes, commands. But as the complexity of work grows, so does the need for tools that flex and adapt. “We are learning that tools need to not only reflect what a person is doing, but what they are trying to accomplish,” says Banks. 

No longer are we designing interfaces for answers. We are designing environments for thinking. 

When a system becomes a living environment that is shaped, reshaped, and tuned to the demands of consequential work, it moves from tool to active partner. Research on human-AI cognitive coupling shows that the most meaningful progress happens when systems evolve with human reasoning rather than simply responding to it. When AI joins the rhythm of our reasoning rather than standing outside it, authorship returns to the human and clarity follows. A single, well-timed question can prompt users to clarify their purpose, uncover hidden assumptions, and embrace change with confidence. 

At Microsoft Ignite this year, we unveiled early signals of what this future might look like. Work IQ is a new intelligence layer designed to understand not just the content people create, but the patterns, relationships, and rhythms of their work. By weaving signals across emails, files, meetings, and chats, it forms an interpretive layer that helps Copilot and agents anticipate what matters most. This is by no means a solved system, but it represents our move toward software that understands intent while honoring explicit instructions. 

When Speed Becomes the Enemy of Clarity

When AI is treated purely as a vending machine for content, the result is workslop: documents without authorship, countless versions of drafts, and a quiet erosion of trust in what is real. We sometimes see this pattern emerge in Ask Mode, where people interact with AI in chat threads that live outside their actual work. Instead of strengthening what exists, users may spawn parallel drafts that increase noise without increasing understanding. 

We do not have a productivity crisis. We have a crisis of thought. 

Research on process-oriented AI collaboration shows that people learn and decide more effectively when AI supports the reasoning journey rather than automating the destination. Over-automation can deskill and narrow ideas, while augmentation builds capability and creates room for higher-order work. Systems that show their work, and invite ours, preserve engagement and prevent the cognitive offloading that leads to shallow thinking. 

In our design practice, we are beginning to explore what it means for AI to support the sense-making layers of cognition. Not only producing information, but helping people interpret it, structure it and identify patterns that matter to them. We envision Copilot not simply answering questions but contributing to the work of understanding. The new Notebook experience is one of our first steps in this direction. It gives users the freedom to explore with AI and still preserve authorship. By providing a flexible space where people can think alongside Copilot, while shaping their page in distinctly human ways, Notebook helps transform raw information into insight and supports deeper conversation.  

The antidote, then, is not to slow down, but to design with intention. We need systems that help people focus, clarify, and move their living artifacts forward. Creating this requires a new design vocabulary, one that treats thinking itself as a collaborative space. 

Designing for Shared Cognition 

When we design AI systems, we are essentially designing part of the user’s cognitive apparatus. This demands care in how information is surfaced, attention is guided, and reasoning is supported. In our design practice at Microsoft, we center on four emerging principles that turn AI from a task completer into a cognitive collaborator.

1. Clarify intent before action

A true thought partner begins with purpose. Banks observes: “Definition of purpose is becoming the new space for work. People are poor at setting goals, thinking about milestones, planning and orchestrating.” Prompting people to define these strengthens every downstream step. Purpose anchors velocity, ensuring progress doesn’t drift into vanity metrics as teams shift from outputs to outcomes.

2. Make reasoning visible and interpretable

Transparency turns review into dialogue. Thoughtful systems show inputs, options and trade-offs, and pair claims with sources to explain reasoning. Studies on human-AI decision-making show that systems extending human reasoning, rather than replacing it, lead to stronger ownership of outcomes. 

Work IQ and Foundry IQ support this principle in early ways. By grounding suggestions in a person’s work history and organizational context, these systems give AI a meaningful basis for interpretation. We continue to explore how much transparency is useful and how to surface reasoning without overwhelming users. Through visible reasoning, we can enable different cognitive styles to engage at their preferred level.

3. Introduce productive friction

Not all friction is undesirable; well-designed friction can sharpen thinking. Steinacher explains: “We’re continuously embedding constructive friction into our prototypes, designing moments where Copilot elicits critical thinking, not just obeys.” These micro-pauses sustain understanding rather than interrupt it. Deliberate checkpoints, such as gauging confidence or highlighting trade-offs, can transform automation into collaboration. When people can see the why behind the what, trust deepens, comprehension compounds, and responsibility becomes shared.  

4. Keep cognition in flow

Humans and AI form what researchers call a cognitively coupled system, each shaping the other’s next move. The most effective designs keep this loop alive by proposing changes within the artifact, offering reversibility, and maintaining continuity, so that review becomes reflection rather than redo.  

Research on scaffolding creative ideation also finds that progressive, constraint-based prompts improve quality while preserving authorship. Insights often arise from cycles of foraging information, schematizing patterns, and generating new understanding. AI should support this loop, not replace it. The Work, People, and Learning agents showcased at Ignite are early experiments in maintaining flow by meeting people in the context of their roles rather than isolating tasks. Their design points to a future where AI supports continuity rather than fragmentation. 

Design as Stewardship

Designing AI as a thought partner is no longer just a matter of interface. It is an act of stewardship. It requires empathy, systems thinking, taste and ethical foresight to anchor experiences in human intent and judgment. We need to shape outcomes beyond what data alone can decide and create tools that honor complexity, invite dialogue and sustain trust.  

As AI accelerates the pace of change, our responsibility is not to chase faster answers, but to cultivate deeper user understanding, beyond the software medium. We have the opportunity to shape the invisible logic of how AI reasons with us, what it attends to, how it weighs trade-offs, and when it seeks guidance. This is where design leadership matters most. Not in making AI feel magical, but in making its logic feel human. 

Read more

To stay in the know with Microsoft Design, follow us on Twitter and Instagram, or join our Windows or Office Insider program. And if you are interested in working with us at Microsoft, head over to aka.ms/DesignCareers.