Cutting through the noise: Leveraging AI to activate research insights
Does this sound familiar? You’re a UX researcher watching your feed fill with sweeping claims about AI. Meanwhile, your company’s leadership wants AI integrated yesterday—but you’re still sorting out what’s real, what’s hype, and where it actually creates value. You’re not alone.
Welcome to Think First: Perspectives on Research with AI.
Each week, we’ll feature candid conversations with research leaders and practitioners who bring diverse viewpoints on how AI is reshaping research practices, team dynamics, and the future of our discipline.
Our first post spotlights Josh Williams, Head of Core Product Research at Superhuman, an AI-native productivity platform recently acquired by Grammarly. There, he pioneers practical applications of AI in research and insight activation. With previous leadership roles at Indeed and experience spanning hardware, medical devices, and both B2B and B2C contexts, Josh is known for championing researchers as decision influencers. He advocates for thoughtful, human-in-the-loop use of AI, focusing on signal over noise and applying technology where it delivers real value and impact.
From Skepticism to Strategic Adoption: Josh’s AI Origin Story
In our discussion, we began by exploring Josh’s “AI origin story”—how he got started with AI and when he realized it would change the way he worked. He described how he moved from skepticism to seeing AI’s transformative potential in research.
“There are two parts to this question. One is where I started experimenting with AI but was skeptical. The other is when I was like, ‘oh damn, AI can really change the game.’ I really started exploring AI and how it can help researchers in particular when I was at Indeed. Our CPO at the time was like, ‘we are data rich, insights poor,’ so how do we cut through the noise?”
To answer that question, Josh and his team at Indeed began experimenting with tools like Dovetail and Glean, aiming to centralize insights and make them more accessible. The challenge was clear: with hundreds of researchers generating vast amounts of data, how could they surface what truly mattered?
“We thought AI could really help cut through all of that. And I think that made a huge difference—not having researchers be the sole curators really unlocked the power of research. But there was a lot of skepticism, of course, because of the lack of nuance. Especially at that time, AI was very superficial and didn’t have the business context.”
AI’s role isn’t limited to insights; it also shows up in data collection. A pivotal moment came when Josh’s team ran a validation study comparing AI-moderated research—using tools like Outset.ai—with traditional human-moderated methods. The results were eye-opening:
“We were trying to see whether we get the same level of validity and reliability from something that’s AI moderated. And we essentially found the answer was yes: AI moderation does a pretty good job. Yes, it lacks some business context. Yes, it lacks the ability to maybe find some nooks and crannies and some corners that researchers are able to do. But I think that was the aha moment of like, wow, maybe not now, but in the future, we’ll be able to strategically unlock researchers to do the things that we always wanted them to do.”
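To make that comparison concrete, here is a minimal sketch of one way a team might score agreement between AI-moderated and human-moderated sessions: coding each method’s findings into theme sets and measuring their overlap. The themes and the Jaccard metric below are our own illustration, not the actual methodology Josh’s team used.

```python
# Hypothetical agreement check between AI-moderated and human-moderated
# sessions. The theme sets are invented for illustration; a real study
# would code them from session transcripts.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two theme sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if a | b else 1.0

human_themes = {"onboarding friction", "keyboard shortcuts",
                "search speed", "pricing confusion"}
ai_themes = {"onboarding friction", "keyboard shortcuts",
             "search speed", "mobile parity"}

print(f"Theme overlap: {jaccard(human_themes, ai_themes):.0%}")  # 60%
print("Missed by AI:", human_themes - ai_themes)   # nuance the human caught
print("Extra from AI:", ai_themes - human_themes)  # worth verifying, not trusting
```

A high overlap with a short, interpretable “missed by AI” list is roughly the pattern Josh describes: good enough to trust for breadth, while humans still chase the nooks and crannies.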
Building an AI-Native Research Culture at Superhuman
At Superhuman, Josh has found a culture that embraces experimentation with AI. The team is smaller and more agile than the one he led at Indeed, but the ambition is high.
“Here at Superhuman, we are an AI-native company. Our goal is to literally enable knowledge workers, like researchers, with AI-native tools. So, a lot of folks aren’t so scared to embrace AI. They’re willing to bend and break AI as necessary. There’s a healthy level of skepticism, of course. It’s not like we’ve offloaded all of our thinking to AI. We’re still skeptical and thoughtful. But there’s also a willingness to be creative with AI.”
One frontier Josh’s team is exploring is the use of synthetic users—AI models trained on the team’s own data to simulate real user behavior and feedback. In fact, the synthetic users project recently won the “most innovative” award at Superhuman’s internal hackathon—one of five awards granted among more than 100 submissions company-wide.
“It allows us to offload some of our work because, particularly at Superhuman, we’re a much smaller research team. So enabling a designer to stress test the design against these synthetic users, or a PRD can be stress tested against them too, is a great democratization practice, which leverages AI in new ways.”
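For readers curious what that stress testing can look like in practice, here is a minimal sketch using the OpenAI Python SDK. The persona, model name, and question are illustrative assumptions on our part; Josh’s team grounds its synthetic users in Superhuman’s own research data, which a toy prompt like this doesn’t capture.

```python
# Minimal sketch of stress-testing a design decision against a synthetic
# user. The persona and question are invented; a production version would
# ground the persona in real research data and validate it against real users.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA = (
    "You are a synthetic research participant: a busy sales lead who lives "
    "in email, values keyboard shortcuts, and abandons features that take "
    "more than a few seconds to learn. Answer in the first person, candidly."
)

def ask_synthetic_user(question: str) -> str:
    """Pose one interview-style question to the synthetic user."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# A designer might stress-test a proposed change like this:
print(ask_synthetic_user(
    "We're considering moving 'snooze' from a keyboard shortcut into a "
    "dropdown menu. How would that change your workflow?"
))
```

The value here is not that the answer is true, but that a designer can surface likely objections in minutes before spending real participants’ time on them.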
But for Josh, the real excitement is in activating insights—not just collecting or synthesizing them. The team has experimented with creative approaches, like building games to help others engage with research findings:
“We just had a UXR team workshop on how we can leverage AI. One activity, for instance, was to live-code or vibe-code a game. We took all of our insights and wanted people to actually engage with them, so we made a Frogger-style game where you had to answer a question based on the data…It’s not about the collection of insights. It’s not even about the synthesis of insights, but about the activation of insights, and that’s really fun and exciting.”
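To give a flavor of that mechanic, here is a toy sketch of the core loop: each insight becomes a question the player must answer correctly to keep moving, Frogger-style. The questions and answers are invented for illustration, not drawn from Superhuman’s actual data.

```python
# Toy sketch of the "answer a question to advance" mechanic: each research
# insight becomes a gate the player must pass. Q&A pairs are invented.
import random

INSIGHT_QUIZ = [
    ("What % of new users found onboarding confusing?", "40"),
    ("Which feature drives the most daily engagement?", "search"),
]

def gate() -> bool:
    """One lane of traffic: answer correctly to keep moving."""
    question, answer = random.choice(INSIGHT_QUIZ)
    return input(question + " ").strip().lower() == answer

if all(gate() for _ in range(3)):
    print("You crossed! The insights stuck.")
else:
    print("Splat. Back to the research repo.")
```

The game is deliberately throwaway; the point is that stakeholders retrieve the finding themselves instead of skimming past it in a report.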
Navigating Challenges and Misconceptions
Despite the promise, Josh is clear-eyed about the challenges of integrating AI into research, especially when it comes to authenticity and nuance.
“AI can converge to the mean, right? It’s very unnuanced, and I think that’s really hard when you need to make nuanced decisions. For example, when it comes to activating insights, AI podcasts can be a little monotonous or come across as a little too fake. Synthetic users are sometimes still too superficial. So, challenges with AI’s efficacy pop up in lots of different places. That’s just a fundamental challenge of the technology. It’s not necessarily something we can fix. We have to work around it.”
He also points out the hidden costs—both environmental and organizational—of AI, and the importance of understanding where AI truly adds value:
“Every time we prompt or every time we engage with AI, it impacts the climate. Sometimes you spend so much time prompt engineering, you’re like maybe I should have just analyzed it myself…Where is it really gonna be fast, efficient, and worth the cost of the time?”
Education and governance are ongoing needs, especially as AI becomes more embedded in business processes:
“There’s still a challenge of making sure everyone understands the power of AI…and where there should be guardrails and governance. We have to educate our stakeholders, like ‘we actually can’t do an AI-moderated study for this type of thing’ or ‘let’s not actually outsource that to an AI tool.’ We’re integrating these into the risk profile of using AI with research.”
Signal Over Noise: The Researcher’s Evolving Role
Josh is wary of the trend toward “research slop”—the idea that more data or more insights are always better. For him, the core responsibility of researchers remains unchanged: maximizing signal over noise.
“No matter who’s generating the insights, too much data is actually noise. As researchers, our core responsibility is always to maintain a high signal-to-noise ratio and to break through the noise. I think AI is about empowering researchers to think critically about when we do research, how we activate those insights, and how we leverage that research.”
He’s also a strong advocate for researchers as influencers, not just data gatherers:
“I don’t believe that the primary thing that researchers should do is research. I think the primary thing that they should be doing is influencing, and research is a tool with which they can influence.”
Looking Ahead: The Future of Research with AI
Where does Josh see all this heading? He predicts a future where data is cheap, but human discernment and connection are more valuable than ever.
“I think as the pendulum swings and maybe comes to more of a middle ground, I think what it’s gonna show is that data is going to be very cheap, and it is still going to need someone that can roll up their sleeves and know how to uncover the nuggets of information that are going to unlock influential product decisions or competitive advantage. I do think AI again enables us to maybe move quickly or to unlock certain things, but I think researchers are still going to be needed to be the advisors of the business and to be the conduit to the real humans that are engaging with our products.”
Advice for Teams Starting Their AI Journey
For teams just beginning to integrate AI into their research, Josh’s advice is pragmatic:
“Find where you’re most comfortable. I don’t think you should try to make AI work in every context, especially the ones you’re afraid of. Start in a place where you can maybe even prove out the validity, where the validity is low risk. My advice would be to look across your team, look across your business, and understand where are those low-risk or high-appetite moments. And just play around with it and then stress test it.”
Final Thoughts: Know Your Value
Josh closes with a call for research teams to reflect on their core values and purpose:
“I’m just such a strong believer in understanding the value of a researcher. If you think your whole purpose is to influence decision-making, that tells you where you should use AI. I would love for research teams to sit and reflect on what our values are, where we hope to show up, how we hope to show up, and what our role is. If you can answer all of that, you can figure out everything else after that.”
Josh’s experiences—from overcoming skepticism to championing creative, AI-native approaches—remind us that while data and automation are becoming ever more accessible, the true value of research lies in our ability to cut through the noise, activate meaningful insights, and influence decisions that matter. As AI continues to reshape our field, let’s take Josh’s advice to heart: start where you’re comfortable, experiment boldly but thoughtfully, and never lose sight of the core values that define our role as researchers.
Next week, Savina Hawkins takes us inside her AI-augmented workflow at Anthropic—where end-to-end research that once took weeks now happens in days, without compromising quality. Don’t miss her unique perspective on designing human–AI collaboration that delivers speed, scale, and quality at once.