The next time your phone’s virtual assistant gives you a quick answer instead of nudging you to think harder, remember Andre Ye, ’25. He’s part of a new generation of researchers reimagining how AI supports human thought.
“I study how computers ask questions so people can ask better ones,” Ye says.

The University of Washington undergraduate blends computer science, philosophy and design to create systems that do more than deliver answers — they invite reflection. If we’re going to live alongside AI, he argues, it should challenge our assumptions, not reinforce them.
Curiosity runs in Ye’s family: His father’s graduate work in neural networks laid early foundations for AI, and his mother, a finance professor, nurtured a love of science and technology. Fueled by this environment, Ye entered the Robinson Center for Young Scholars’ Transition School, an early college program for advanced eighth graders, with a strong focus on AI and quantum computing. There, classes in history and English reshaped his thinking.
“History wasn’t just dates; it connected to art, science, economics,” he says. “Before, I might ask how to build a circuit. Now I asked, what does it mean to be human?”
That shift set Ye on an interdisciplinary path: majors in computer science and philosophy, with minors in history and math. He’s not only interested in how technology works, but also in how it reflects and shapes human values. While computer science centers on clear right-or-wrong logic, philosophy challenges him to engage with ambiguity, shifting contexts and the nature of questioning.

Ye and his friend Mark Pock, also double-majoring in philosophy and computer science, wrestle with big questions of the day. Photo by Jayden Becles
“Philosophers study what a question is, what it means to answer one and how questioning works,” Ye says. “For me, this shows how philosophy and computer science come together.”
Ye’s interdisciplinary mindset deepened when a presentation from the Office of Undergraduate Research revealed that research isn’t limited to labs and white coats. It spans disciplines and worldviews. Ye dove into the UW’s research ecosystem.

“Confidence Contours,” a paper Ye co-authored with Quan Ze Chen and Amy Zhang, was published in Human Computation, presented in Delft, Netherlands, in 2023, and earned an honorable mention for best paper.
His first project at the Najafian Lab at UW Medicine focused on tools to detect kidney disease. He expanded this work by developing Confidence Contours, a method that helps AI communicate uncertainty. This is a crucial step for ensuring that technology makes safer and more responsible decisions, especially in healthcare.
Over three years, Ye shared research at the UW Undergraduate Research Symposium on topics ranging from kidney imaging to how AI interprets language, morality and meaning across cultures.
As a junior, he earned a Mary Gates Research Scholarship to study how people express complex ideas through images. Instead of relying on fixed categories, he focused on concepts like fairness, morality and social norms that shift across cultures and contexts. The goal: to build AI systems that reflect human nuance.
“The Mary Gates Research Scholarship provided me with resources to develop AI labels that mirror social structures and complexity. It gave me the much-needed time and space to thoughtfully bring that vision to life,” Ye says.
Collaboration as a catalyst
“Researchers in every field share similar struggles and successes,” Ye says. “It creates a real sense of belonging.” Revising ideas and contributing to a broader academic conversation has been one of the most formative parts of his college experience.
That sense of belonging isn’t just emotional; it’s intellectual. For Ye, research means stepping into a dynamic network of mentors, peers and collaborators. It’s where ideas take shape, get challenged and grow. And it is where his own contributions, from developing frameworks to presenting at conferences, have begun to shape the conversation.
Ye’s latest project explores AI itself as a partner. He interviewed 21 philosophers to learn how they frame questions, wrestle with uncertainty and shape thought. Drawing from their insights, he developed the Selfhood-Initiative Model, a system that guides AI to pose thoughtful, open-ended questions designed to broaden human thinking.
He presented this work at the Conference on Language Modeling, where it sparked discussion about AI’s role in fostering curiosity and reflection. Building on that momentum, Ye collaborated with Stanford Ph.D. student Jared Moore and a fellow undergraduate to explore how AI models understand and apply moral concepts. Their joint research was showcased at a leading workshop at the Neural Information Processing Systems conference, highlighting new possibilities for ethical AI design.
“Building algorithms that ask good questions isn’t just a technical challenge,” Ye says. “It’s a deeply philosophical one, because the nature of questions itself shapes how we think.”

Andre talks with one of his mentors, computer science professor Amy Zhang, about his future at MIT. Photo by Jayden Becles
Ye’s journey reflects the value of a research community that is both rigorous and reciprocal. He credits faculty mentors like philosophy professor Rose Novick and computer science professors Amy Zhang and Ranjay Krishna, along with mentors like Moore, as critical to his growth. And he contributes by advancing ideas others can build on.
“Research isn’t a solo pursuit,” Ye says. “It’s shaped by collaboration, feedback and exchange. The right question can change how you see the world.”
Infinite questions and endless possibilities

Ye and philosophy professor Rose Novick, whose seminar on philosophers like Gilles Deleuze and William Wimsatt helped broaden his perspective on what’s meaningful. Photo by Jayden Becles
Seminars with Professor Novick deepened Ye’s thinking on concepts like truth and morality, reinforcing his belief that building meaningful AI is as much philosophy as it is engineering.
He wants to design AI that navigates real-world nuance, models that take into account moral gray areas and shifting social contexts, rather than defaulting to black-and-white answers. He’s curious how such systems might influence political, ethical and social understanding, from policymaking to content moderation.
“There are so many challenges in how we understand the world,” he says. “AI might help us bridge those gaps.”
In a world increasingly shaped by AI, Ye believes machines must engage with human values effectively and ethically. His work points to a future where AI supports critical thinking and helps people navigate complexity with insight.
His next question is whether AI can help us be more thoughtful about decisions of all scales, from the ballot box to the operating room. If it can, then the smartest tech of the future won’t just answer faster. It will help us think more deeply about justice, complexity and the future we want to build.
This fall, Ye heads to MIT to begin a research-based Ph.D. in electrical engineering and computer science (EECS), supported by the Paul & Daisy Soros Fellowship for New Americans. There, he plans to continue combining technical innovation with philosophy to create AI that makes ethical and functional sense. At the heart of his work is a central question: What values should guide the machines we build to shape our thinking?
At the UW, Ye’s path was shaped by mentorship, research and a community that values big questions. At MIT, he will keep asking them.

Unlike typical machine learning datasets with clear labels, Ye’s work tackled complex concepts like morality and social norms, shaped by ongoing, context-dependent conversations. He created visual datasets capturing this dynamic knowledge and explored how computer vision and human-AI interaction could learn from it. Photo by Jayden Becles
Written by Danielle Holland // Photos by Jayden Becles // Creative direction by Kirsten Atik
Thank you, from Andre
I would like to deeply thank all of my Transition School teachers, especially my English instructor Amanda Zink and my history instructor Michael Reagan for introducing me to serious humanistic inquiry; my research advisers Amy Zhang, Ranjay Krishna and Rose Novick; my research collaborators and mentors Jared Moore, Jim Chen, Sebastin Santy, Mark Pock and others; and my family and friends for their support along the way.
— Andre Ye