Design isn’t dying. It’s shifting left.
When the model is the new medium, shaping how it behaves in humanistic ways is design.
Note: The prototypes embedded in this feature are interactive, and we encourage you to explore them.
Every week there’s a new hot take. The design process is dead. Vibe-coding killed the UX role. AI ate the wireframe. It’s loud out there and some of it deserves the noise. Things are changing fast.
But the conversation keeps focusing on the wrong thing. Everyone is debating how design is happening, when the more important shift is where.
The devil’s in the stack
“Shifting left” has been a fixture of software engineering since the early 2000s. The idea is simple: move important thinking earlier in the process, before problems get expensive. Test-driven development. Threat modeling. Linting. Catch it before it ships.
You might think this sounds silly for design. How much farther left than “ideation” can you really get? But for us, shifting left refers to the stack — the underlying layers of technology that product interfaces sit on top of. It means engaging with those layers earlier, with the same goal of catching problems in design thinking before anything moves downstream.
On the Tech Futures team — a multidisciplinary team of UX engineers and applied scientists approaching design through a technical lens — we’ve spent years applying this instinct in practice. We design in code instead of static tools, wire real APIs into prototypes to pressure-test actual data, and catch the gaps that static Figma screens can’t surface.
Show > tell. These small interactive examples illustrate the intricacies of our work better than we can explain them. This one highlights the value of real data in a design.
That’s what shifting left looks like in practice: design as close to real data and real constraints as possible, as early as possible. As we explore the future of Copilot at Microsoft Design, that same instinct has now pulled our team somewhere we didn’t entirely anticipate: into the model itself. Design decisions aren’t just happening on top of the product—they’re happening inside of it.
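To make the “real data” point concrete, here is a minimal TypeScript sketch of the kind of thing a design-in-code prototype catches. Everything in it is invented for illustration (the types, the sample records, the render function); the point is that hand-authored mock data tends to be tidy, while real payloads surface the edge cases a static mock hides.

```typescript
// Illustrative sketch, not production code: a prototype render step
// fed realistic data instead of hand-picked mocks.

type CalendarEvent = { title: string; start: string; end: string };

// Hand-authored mocks are usually tidy.
const mockEvents: CalendarEvent[] = [
  { title: "Design sync", start: "09:00", end: "09:30" },
];

// Realistic payloads expose what static mocks hide:
// empty titles, overlapping times, layout-breaking strings.
const realisticEvents: CalendarEvent[] = [
  { title: "", start: "09:00", end: "09:30" },
  { title: "FY26 planning / budget review / follow-ups (recurring)", start: "09:15", end: "11:00" },
];

// The render step the design has to survive for every record.
function renderEvent(e: CalendarEvent): string {
  const title = e.title.trim() === "" ? "(untitled)" : e.title;
  return `${e.start}-${e.end} ${title}`;
}

// Running the renderer over realistic data is the pressure test.
const rendered = [...mockEvents, ...realisticEvents].map(renderEvent);
```

Designing against the second dataset from day one is what catches the “(untitled)” case before it ships, rather than after.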
Our tools have changed dramatically for the better. Prototyping that once took weeks of work and deep technical know-how now takes an afternoon with Copilot or other AI tools. But the bigger shift isn’t speed; it’s access. Every design practitioner can now engage with real inputs, real outputs, and real constraints from the start. More design thinking makes it into the technology before anything ships.
And that’s exactly how we’ve been rethinking our design process.
Model-forward, human-driven
In a model-forward environment, the output is the experience. And with probabilistic systems like LLMs, that output is never the same twice. Which means the design problem is fundamentally different.
We used to design for predictable navigation: personas, roles, and narrow paths through a product. Now we’re designing for focused intent. Users show up expecting the system to already understand their files, their calendar, their context, their way of working. They’re not navigating a product. They’re in a conversation with one.
Each of these layers represents the user’s context and offers different surfaces for new kinds of design thinking
As we shift to designing the output itself, the design challenge becomes scaffolding human intent cohesively across an array of endpoints and behavioral patterns. The model needs to show up in a way that feels reliable and intentional for each specific user, across wildly different inputs and intents. A visual thinker should always get a visual response. A detail-oriented person shouldn’t get a breezy summary. The constants are still there. They’ve just moved from interface patterns to behavioral ones.
So, the real question is: how do you design for a model that behaves differently during every single interaction?
Different dimensions and cognition patterns produce different outputs, though some constants remain. A visual thinker, for example, should always get a visual response.
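One way to picture a behavioral constant is as deterministic scaffolding around a probabilistic model: the model’s content varies on every run, but the instruction a given kind of user triggers does not. This TypeScript sketch is purely illustrative; the style names, the profile shape, and the instruction strings are all assumptions, not actual Copilot internals.

```typescript
// Illustrative sketch: a fixed mapping from cognition style to
// response-shaping instruction. The probabilistic model sits behind
// this; the mapping itself is the deterministic "constant."

type CognitionStyle = "visual" | "detail" | "narrative";

interface UserProfile {
  style: CognitionStyle;
}

// Same profile in, same modality instruction out — every time,
// regardless of what the model generates within that modality.
function modalityInstruction(profile: UserProfile): string {
  switch (profile.style) {
    case "visual":
      return "Respond with a diagram-first or chart-first layout.";
    case "detail":
      return "Respond with a thorough, step-by-step breakdown.";
    case "narrative":
      return "Respond with a brief conversational summary.";
  }
}
```

The design work moves into choosing and validating mappings like this, rather than arranging the pixels of any one response.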
While the example above might feel relatively straightforward, it gets complex fast. The outputs are different, but the underlying question is the same: how does the system understand who it’s talking to, and what is that person really asking for?
Take something as personal as the workday. Motivations are unique to us all, and there are stark differences in how we prioritize tasks, structure our time, and navigate the blur between work and life. This isn’t just true from one person to the next, but also between who you are today and who you may be next year.
This starts to show how different dimensions, across different people, compound in more complex scenarios.
We have to shift left and design through the model, and that means getting into the data. The right human signals have to be encoded at the model and intelligence level from the start, not bolted on later. That means understanding how your data behaves, what it looks like at its edges, and what a “good” output actually means to this person, in this moment. Not as an average. As an individual.
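A toy way to picture “a good output for this person, in this moment” rather than for an average user: the check travels with the individual’s context. The field names and thresholds in this TypeScript sketch are invented for illustration only.

```typescript
// Illustrative sketch: quality defined per individual, not on average.
// A summary length that suits one reader fails another.

interface PersonContext {
  preferredDepth: "brief" | "thorough";
}

// Thresholds here are placeholders, not product values.
function isGoodLength(words: number, ctx: PersonContext): boolean {
  return ctx.preferredDepth === "brief" ? words <= 80 : words >= 200;
}

// The same 50-word output is good for one person and bad for another.
const fiftyWordsForSkimmer = isGoodLength(50, { preferredDepth: "brief" });
const fiftyWordsForAnalyst = isGoodLength(50, { preferredDepth: "thorough" });
```

Averaging these two readers into one threshold would produce an output that satisfies neither, which is exactly the trap the per-person framing avoids.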
And that’s exactly the point. People are not averages. The diversity of human motivations we’re designing for is enormous, and the problem space is just as large. But that’s the real work: capturing how people actually work and live, so the model has something true to build from.
The process changes. The point doesn’t.
Design was never about pixels. It’s always been about people.
The industry is moving fast, and that speed only elevates the importance of human-centered design work; it doesn’t diminish it. The tools will keep evolving. The way we work will keep changing. What won’t change is the core of it: deeply understanding people and making sure the systems we build actually work for them.
That work is just showing up in different forms now. Further left, deeper in, and more consequential than ever.