
Design Thinking

Expanding human intelligence, not just automating tasks 

Estimated reading time: 9 minutes.


Welcome to another installment of Think First: Perspectives on Research with AI. Last time we heard from Chelsey Fleming from Google Labs, who emphasized the importance of maintaining a human touch when experimenting with AI tools. Today, we’re featuring Rose Beverly, Senior Staff AI UX Researcher at PayPal. With an academic foundation in anthropology, psychology, and philosophy from Berkeley, Rose brings a multidisciplinary lens to the evolving intersection of technology and human behavior. Her career spans nearly a decade across innovation and foundational projects in industries such as cybersecurity, fintech, banking, and education technology. 

In our conversation, we explored Rose’s journey into AI and UX research, her practical frameworks for integrating AI into research workflows, the evolving role of researchers, and her forward-looking perspectives on the future of intelligence, technology, and work. 

AI origin story: From NLP to expanded intelligence

Like many other researchers, Rose’s path to AI began with curiosity and experimentation.  

“I think I was introduced to natural language processing (NLP) way back at my time at Berkeley…understanding how to use that for qualitative analysis, being able to upload several books, cross-reference and cross-analyze for different ideas, that was interesting to me then.”  

But for Rose, the pivotal moment came in November 2022, when generative AI technologies like ChatGPT entered the mainstream. 

“As an anthropologist, I remember thinking, this is going to fundamentally change everything about how humans live, work, breathe, right? And how we fundamentally understand the concept of intelligence itself. That was really the moment I realized that AI would transform our research discipline…Reasoning and analysis and synthesis and framing could be extended through an intelligent system that reflects the collective data or intelligence of humanity, but accessed at a really crazy speed.” 

It was at this point that Rose began experimenting in earnest and discovering new possibilities. 

“So, I taught myself how to prompt and I was just so excited about the truly endless possibilities, really just experimenting a lot, and realizing that AI could expand our intelligence, not just automate our tasks.” 

Current practice: Frameworks for integrating AI into research

Rose notes that early adoption often felt like trial and error, before researchers had a shared playbook for where AI fits best.

“Today, I use AI in every part of my workflow, but in the beginning of my experimentation phase, I just randomly guessed where to plug and play AI tooling…I feel like a lot of people did that because it was so brand new and there wasn’t any framework to say, ‘this is how you use AI.’”

Rose’s experimentation with AI led her to develop a practical, role-agnostic framework for integrating AI into research workflows—what she calls the MASTER framework. 

“Through the last few years, I actually created a process and a framework that would bring me more structure…It’s called MASTER. M stands for Mapping your workflow into different phases; then you Audit your tasks within each phase. S is for Scanning AI opportunities; after you scan, you Test these out, which segues into the Embed phase, and then R stands for Rinse and repeat.”  

This framework, adopted by over 20 organizations, helps teams responsibly and efficiently integrate AI into their workflows. Rose emphasizes its adaptability: 

“It’s role agnostic…I just want to be able to give people a structured framework to be able to adopt AI into their workflows pretty easily. But beyond that, I use it for almost everything…anything that I can use AI for to speed things up.”  

At PayPal, Rose applies AI across her workflow, leveraging internal tools and foundational models to increase the quality and efficiency of her work: 

“We have access to AI tools and foundational models, but I had to adjust that framework based on the constraints within the company…We have additional company-wide internal AI tooling that is not available to the public…It’s been incredibly helpful for increasing the quality of my work, understanding what has been done before, and not repeating work.”  

She also uses AI for live moderation, desk research, and real-time transcript analysis, illustrating the versatility of these tools in modern research practice.  

“We had repositories before, but it’s different when you can access it via ChatGPT or Gemini or Copilot. You can also use tools like Otter.ai, which is a recording tool you can invite to meetings, and ask questions of it in real time if you missed a part of the conversation.” 

Challenges and risks: Hallucinations, synthetic data, and responsible adoption

Rose is candid about the challenges and misconceptions surrounding AI in research, especially the issue of hallucinations. 

“There is a specific misconception about hallucinations that I want to address…you can actually prompt models now in a way that they significantly reduce their hallucination rates…You can prompt for verbatim, direct quotes and you can compare with another model, use tools like AutoAI to verify that the quote is correct…But as a researcher, you still want to be really close to your data. You still want to know what it says, what the numbers say.”  

She draws a provocative comparison between machine and human error. 

“What would be really interesting is to see an honest benchmark comparison between how AI hallucinates and the way humans fabricate, misremember, misinterpret, or misquote information…We do it constantly. People panic if machines hallucinate at all. So, I think that contrast between human and machine hallucination rates would be really interesting to see.”

Trends and hot topics: Vibe coding and the changing landscape

Rose observes that AI is “underhyped and underrated,” and shares her fascination with emerging trends like vibe coding. 

“Vibe coding shocked me in terms of how we can use natural language to literally create a fully functioning prototype in seconds, using tools like Replit, Cursor, or Lovable.dev…But there’s a lot of really crap ideas still being developed; most people don’t know how to find that product market fit, how to identify and define a problem space using data.” 

On synthetic personas and data, Rose is measured but optimistic. Like Chelsey Fleming, she acknowledges the limitations and challenges of synthetic data, while recognizing the potential in certain contexts—like the synthetic users experiment Josh Williams described previously.  

“I know people have strong opinions about synthetic data or synthetic personas…I mostly agree that the technology isn’t quite there yet to have valid research results, but I think we’ll get there eventually. I do think that synthetic personas or synthetic data can be used for ideation, not necessarily for validation…it can be used as a fancier proto-persona. Not really validated, but it’s an amazing starting point.”

Future outlook: Redefining roles and agent orchestration

As AI moves from a set of discrete tools to more autonomous, agent-like systems, it’s not just workflows that are shifting—it’s the contours of knowledge work itself. Rose observes that this rapid change can trigger a visceral uncertainty about value, identity, and where human judgment fits when a “more intelligent entity” seems to enter the room. In that context, she notes the primal, fear-based reactions many people have to AI’s potential to change (or even replace) roles.  

“I see a lot of people react to something that might ‘replace them.’ A more intelligent entity has entered the room.”  

“It’s natural to have a primal response to something that could potentially not just replace a lot of your tasks, but fundamentally change your role as a knowledge worker.”

At the same time, Rose argues that moving through that fear opens up a clearer view of what’s actually changing: not just individual tasks, but the boundaries of the research discipline and the shape of the roles that carry it forward. Looking ahead, Rose envisions a future where research dissolves into broader, hybrid roles. 

“Researchers will not be researchers in the traditional sense moving forward. The discipline is dissolving into broader hybrid roles…We are also faced with an identity change as a knowledge worker, researcher, if you will…I see us moving away from specialized skill sets and into redefined roles.”  

Like Savina Hawkins, Rose predicts that researchers will become orchestrators of AI agents, extending their skill sets into domains like marketing, design, and product. 

“I see us managing, becoming managers and orchestrating AI agents in addition to also using generative AI tools…Research will just be a part of our larger skill set, but no longer pigeonholed into just a specialist role.”  

Advice for researchers: Ride the wave and embrace lifelong learning

When Rose looks ahead, her guidance lands in two places at once: a mindset shift toward accepting constant change, and a concrete commitment to AI learning.  

“The best way to prepare for it is to just ride the wave…Embracing that technology and, you know, being a little anxious, sure…But the best way, in my opinion, is just to embrace it. Ride the wave, learn how to prompt…If you want to remain valuable in the market, you have to have AI skills.”  

She doesn’t frame preparedness as having all the answers but as staying open to experimentation, acknowledging the anxiety that can come with new tools, and continuing to invest in skills that keep you adaptable as the market evolves. 

“We will be required to be lifelong learners…Our roles will be redefined eventually. Research will just be a part of our larger skill set…so ride the wave, become a lifelong learner, and just understand that this is the world that we live in right now and be adaptable and develop multiple streams of skills.”   

Conclusion: Intelligence as collective human endeavor

Rose closes with a reflection on the nature of intelligence and AI’s role as an extension of human capability: 

“AI is not artificial—it’s intelligence…it’s human intelligence, it’s our collective intelligence accessed and really connected at such an unbelievable speed…Humans created all of the data that these AI models are trained on, and that’s how I see it, as an extension of human intelligence.”  

Her multidisciplinary perspective invites us to reconsider what intelligence means and how researchers can shape the future by embracing change, staying close to the data, and cultivating a mindset of lifelong learning. In the end, embracing AI means staying curious, adapting our skills, and keeping an open mind about where research goes next. 


 Next up, Jeremy Williams offers a grounded, human-centered look at how AI can serve as a practical research assistant—supporting writing, ideation, and data analysis without replacing the rigor and judgment researchers bring to the table.