Debug Faster, Think Deeper: AI for Coding and Quant Research
Welcome back to Think First—Perspectives on Research with AI. Last time we heard from Rose Beverly from PayPal, who shared a practical framework to guide AI adoption while exploring possibilities for AI to expand human intelligence. Today, we’re hearing from Jeremy Williams, a mixed-methods researcher who brings a unique perspective to research shaped by an unconventional career path.
After serving six years as a Navy medic, Jeremy transitioned into technology, exploring computer engineering before discovering his passion for user experience. His journey spans freelance projects, teaching at the University of Washington, and roles on rapid research teams at companies like Google and Amazon. Jeremy is an advocate for leveraging AI as a supportive tool—an assistant rather than a replacement—using it to streamline tasks like writing, ideation, and data analysis while maintaining rigor and human insight in research.
In our conversation, we discussed Jeremy’s evolving use of AI. For Jeremy, AI acts as a hands-on partner in his coding and quantitative workflow—helping him clean large datasets, troubleshoot Python errors, and move from descriptive statistics to inferential methods while keeping his own technical judgment firmly in the loop.
From Curiosity to Practice: Jeremy’s AI Origin Story
Like many others, Jeremy’s introduction to AI began with the rise of ChatGPT. For him, AI’s promise was personal as well as professional:
“I have severe ADHD and a learning disability, so my initial thought with this whole thing was like, oh my goodness, this can help me with my work a lot. So, I’ve always looked at it as a tool or an assistant. I never really saw it as something that could replace my work or do my work for me.”
He started by using AI to tighten up presentation decks and serve as a second pair of eyes, quickly realizing its value as a productivity partner.
Evolving the Workflow: AI as a Research Assistant
As AI tools matured, so did Jeremy’s use of them. Like Savina Hawkins, Jeremy found early gains in treating AI like a cognitive thought partner. He began using AI for ideation—generating usability test tasks and interview questions to broaden his perspective and avoid tunnel vision.
“We kind of get these blinders on and we start to look at things in a specific light, and we can’t see the forest for the trees. I started using it because it helped broaden my perspective and come up with questions and things that I didn’t even think about.”
He also explored AI for qualitative data analysis, running transcripts through ChatGPT after his own manual review to see what new patterns or insights might emerge. Trust grew with tools like NotebookLM, which limits analysis to user-provided sources.
On rapid research teams, Jeremy developed a practical framework for integrating AI into fast-paced projects: create a Slack channel for each project, debrief after every session, manually analyze data, then use AI to polish and synthesize findings. This approach allowed him to balance human judgment with AI’s speed and breadth.
“We would do a kickoff on Friday, pilot the study on Monday, run it Tuesday and Wednesday, analyze the data and put together the deck on Thursday. And then Friday do the readout and just do that all over again. And so, I started to analyze the data as I go along and started using this framework to increase my productivity.”
Quantitative Research: AI as a Coding and Data Partner
In addition to leveraging AI for qualitative research tasks—such as ideation, interview question generation, and transcript analysis—Jeremy also applies AI to quantitative work, using it to clean large datasets, troubleshoot code, and pull descriptive statistics before moving on to inferential analysis. This dual approach highlights how AI can support both sides of the research process, enhancing productivity while keeping human expertise at the center.
“I put the dataset into Gemini or ChatGPT and ask it what things need to be cleaned up in the dataset. Then I use Python to kind of parse through the dataset, even if it’s thousands of lines. Once I clean the data, I can get those descriptive statistics…mean, median, standard deviation, counts…get it to describe the data and see what the data’s saying descriptively. And then I’ll go in and use Python to answer some research questions using inferential statistics, Bayesian analysis—AI definitely helps with the top part of that funnel.”
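The clean-describe-test funnel Jeremy outlines might look something like this in Python. This is a minimal sketch, not his actual pipeline: the dataset, column names, and values here are hypothetical stand-ins for a small usability study.

```python
import pandas as pd
from scipy import stats

# Hypothetical usability data: task completion times for two design variants.
df = pd.DataFrame({
    "group": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "task_time": [12.0, None, 15.5, 14.2, 11.8, 16.1, 13.0, 15.0],
})

# Step 1: clean. Dropping rows with missing values is one simple strategy.
clean = df.dropna(subset=["task_time"])

# Step 2: describe. Mean, median, standard deviation, and counts.
summary = clean["task_time"].agg(["mean", "median", "std", "count"])
print(summary)

# Step 3: infer. Compare the two groups with Welch's t-test.
a = clean.loc[clean["group"] == "A", "task_time"]
b = clean.loc[clean["group"] == "B", "task_time"]
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In Jeremy's framing, an AI assistant helps most at the top of this funnel (spotting what needs cleaning, summarizing the data), while the researcher stays hands-on for the inferential step.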
Jeremy also leverages AI for troubleshooting code, copying error messages into ChatGPT for quick fixes, but always keeps his own skills sharp by writing and reviewing code himself.
“Sometimes I get stuck writing code…something happens, my code breaks, and I can’t figure out what’s happening. I can copy and paste the error code into ChatGPT, and it’ll tell me what’s happening and how to fix it. So, I definitely take advantage of that, too. It definitely helps with quant work, but I’ve yet to just let the AI do all the analysis. It’s not like I don’t trust it; I just sometimes like getting in there and writing code and keeping my skills sharp.”
Challenges and Cautions: Knowing AI’s Limits
Despite the promises and excitement surrounding AI, Jeremy approaches its capabilities with a grounded perspective, recognizing both its potential and its boundaries. He is realistic about AI’s limitations:
“Sometimes when you’re using it to help write code or fix an error…it’s been trained on the Internet some years back, right? So, some of the libraries that you might use in Python may have been updated, and so it’ll give you a fix that doesn’t work now.”
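One concrete instance of the pitfall Jeremy describes: pandas removed `DataFrame.append` in version 2.0, so a model trained on older material may still suggest `df.append(...)`, which now raises an `AttributeError`. A small sketch of the current replacement, using a toy dataset:

```python
import pandas as pd

# Toy session log; a new row arrives after each research session.
df = pd.DataFrame({"participant": [1, 2], "score": [4, 5]})
new_row = pd.DataFrame({"participant": [3], "score": [3]})

# Older suggestion: df = df.append(new_row)
# DataFrame.append was removed in pandas 2.0, so that line now fails.
# The current equivalent is pd.concat:
df = pd.concat([df, new_row], ignore_index=True)
print(df)
```

This is exactly the kind of stale fix that foundational knowledge catches quickly: a researcher who knows the library can recognize the outdated API and substitute the current one.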
He emphasizes the importance of foundational knowledge and not relying on AI to do the job for you:
“I know how to write code, so I know if it’s wrong, because I know foundationally what I’m doing to begin with. LLMs are good for some things, and not good for other things. I think we’re at a point now where people are trying to have these large language models do things that they’re not ready to do yet.”
Industry Trends: Hype, Skepticism, and the Human Factor
Jeremy remains cautious about certain industry trends; for instance, he’s skeptical of the current hype around “vibe coding” and rapid prototyping with AI. While other researchers, like Rose Beverly and Josh Williams, are exploring creative use cases for AI-coded prototypes, Jeremy reinforces the importance of having a strong technical foundation.
“You can go into Google AI Studio and build almost anything. But what happens when you run into a problem? If you don’t know how to write code, you don’t know how to fix what’s going on…It’s good for quickly building a prototype and testing it. But actually building something production-wise, I don’t think it’s good for that at all.”
At the same time, Jeremy worries about the industry’s push to replace people with AI, warning that nuance, creativity, and context are lost when humans are removed from the loop.
“We know that there’s a lot more nuance to what we do than just putting together a script for a usability test or user interviews. That kind of human creativity and thought and being able to read between the lines—AI can’t do that. I think at some point we’re kind of going to run into a roadblock where we aren’t going to have that human kind of thought and cognition to be able to look at things objectively and make a decision based on the context in which things are happening.”
Looking Ahead: Risks, Roadblocks, and the Future of Research
Jeremy’s outlook is both pragmatic and cautionary. He raises concerns about the shrinking pipeline for junior roles and the long-term impact on the field:
“Those junior people turn into senior people, who turn into principals and staff, who turn into managers, who turn into VPs and directors. What happens when all of those people who would have been in companies learning and growing don’t have jobs?”
His core worry is not the technology itself, but how people use it—hoarding information, making short-term decisions, and risking long-term harm to the profession. He draws an analogy to The Walking Dead—a television series about a zombie apocalypse:
“It’s not really about the zombies. It’s about the people, and the people are way more dangerous than the zombies are.”
Advice for Researchers: Keep It Simple, Keep Learning
Jeremy’s advice for researchers starting out with AI is refreshingly straightforward:
“Keep it simple. Look at AI as a partner in crime, an assistant, your own personal intern…Don’t try to offset your job to AI. AI can help with writing, polishing your writing, looking at things with another set of eyes…But also, don’t stop learning. Continue to build your knowledge base…because you still need to know what you’re doing, even though you have something else to kind of help you out.”
Jeremy Williams’s experience shows that AI is best used as a practical tool—one that can support everything from ideation to coding, but not replace the judgment and creativity that people bring to the work. His approach, rooted in simplicity and ongoing learning, offers a pragmatic perspective for researchers adapting to new technologies. As the field continues to evolve, Jeremy’s advice is straightforward: use AI to enhance your process, but keep building your own expertise and stay focused on the values that matter.
Next time, we’ll hear from Utpala Wandhare from GlaxoSmithKline (GSK), who shares how she’s using AI to streamline research, strengthen cross-functional collaboration, and stay human-centered in an increasingly automated world.