Published Date: 30/09/2025
In this interview, Stanford Research Fellow Dr. Duncan Eddy reflects on his professional background, the shift in his focus to artificial intelligence safety standards, and what he is currently working on. He offers insightful observations from his work in AI safety and advice for future professionals. Dr. Eddy shared his experiences with the Library’s Artificial Intelligence Community of Practice, a group dedicated to fostering interest in and curiosity about AI. The interview was conducted by Emmi Pargament, the Spring 2025 Artificial Intelligence (AI) Community Analyst Intern with the Library’s Digital Strategy Directorate.
Dr. Eddy’s career began with a focus on Earth observing satellite systems, which provided him with a solid foundation in practical, applied, and physical problems. This background, he believes, has been invaluable in his transition to AI. He notes, 'Lots of people are making what I would call useless things in AI. Not to be blunt about it, but we don’t need more chatbots. We need things that solve real and meaningful problems that are incredibly hard, and that is something that working on satellites very much showed — it’s possible. There is so much that you can do, so much impact that you can have in the physical world solving hard problems.'
His career trajectory is a bit unconventional. After dropping out of Stanford to pay rent in Palo Alto, he worked at a startup, which eventually led him to work on automating satellite operations. Later, he returned to Stanford to complete his Ph.D. while working on satellite task planning. This unique blend of experiences has shaped his approach to AI safety.
At Stanford University, Dr. Eddy is a research fellow in the Department of Aeronautics and Astronautics and has served as the Executive Director of the Stanford Center for AI Safety. His journey back to Stanford was influenced by a period at Project Kuiper, which ended up being more management-focused than he desired. He then worked briefly at AWS; the rise of generative AI led him to seek a new direction. A conversation with his thesis advisor, Mykel Kochenderfer, resulted in an opportunity to return to Stanford and engage in ongoing research projects.
One of the most impactful projects Dr. Eddy has been involved in is adaptive stress testing for autonomous vehicles. This technique, developed by NASA Research Scientist Richie Lee and further refined by graduate students Anthony Corso and Harrison Delecki, uses reinforcement learning to automatically find specific sets of inputs that cause failures in complex systems. Dr. Eddy continues this work, focusing on finding failure modes in autonomous vehicles. 'This technique is a really interesting approach to finding failures in complex systems,' he explains. 'When you have a safety-critical system, the system is already pretty robust for the most part. There’s a lot of engineering work on building safe systems, but you still want to find failures. Adaptive stress testing automates that process.'
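To make the idea concrete, here is a minimal sketch of how a stress-testing loop of this kind can be structured. The toy car-following simulator, the brake-disturbance model, the constants, and the random search used in place of a reinforcement learning solver are all illustrative assumptions made for this article, not the actual adaptive stress testing implementation.

```python
import random

# Illustrative sketch of the adaptive-stress-testing idea: search over
# disturbances to a simulated safety-critical system for a sequence that
# drives it toward failure. The toy simulator and constants are assumptions
# for this example; real adaptive stress testing uses a reinforcement
# learning solver rather than the random search shown here.

def simulate(brake_profile):
    """Toy car-following scenario: an ego vehicle tracks a lead vehicle.

    The disturbance is the lead vehicle's braking at each time step.
    Returns the minimum gap observed; a gap <= 0 means a collision (failure).
    """
    gap, ego_speed, lead_speed = 16.0, 20.0, 20.0
    min_gap = gap
    for brake in brake_profile:
        lead_speed = max(0.0, lead_speed - brake)    # disturbance: lead car brakes
        ego_speed += 0.5 * (lead_speed - ego_speed)  # ego controller lags the lead
        gap += lead_speed - ego_speed                # relative motion closes the gap
        min_gap = min(min_gap, gap)
    return min_gap

def search_for_failure(horizon=40, iterations=2000, max_brake=0.75):
    """Stand-in for the solver: keep the disturbance sequence that brings
    the system closest to failure (reward = smaller minimum gap)."""
    best_profile, best_gap = None, float("inf")
    for _ in range(iterations):
        profile = [random.uniform(0.0, max_brake) for _ in range(horizon)]
        gap = simulate(profile)
        if gap < best_gap:
            best_profile, best_gap = profile, gap
        if best_gap <= 0.0:          # a collision was induced; failure mode found
            break
    return best_profile, best_gap

if __name__ == "__main__":
    _, gap = search_for_failure()
    print(f"closest approach: {gap:.2f} m",
          "(failure found)" if gap <= 0 else "(no failure found)")
```

In the research described in the interview, the random search above is replaced by a reinforcement learning agent that learns which disturbances are most likely to expose a failure, which is what lets the approach scale to far richer simulators and autonomous vehicle scenarios.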
When asked how AI safety standards are adapting to evolving AI concepts and models, Dr. Eddy notes that there has not been much adaptation, largely because there is little regulation to adapt. 'The field of regulation is generally pretty fraught. You don’t want to overly regulate when you don’t know what you’re regulating, because that stifles innovation, creativity, and possibility,' he says. He points to the EU AI Act as an example of regulation that can be heavy-handed, though he appreciates its domain-specific approach.
Dr. Eddy also emphasizes the importance of building public confidence in AI. He acknowledges that concerns about AI, often fueled by dystopian fiction, are common. In his view, however, the more pressing issues are how people use these systems and how they interact with them. 'I worry about much more practical things like how people will use tools as they exist today or what they do with them. Yes, it is helpful to have someone worry about Skynet, but that worry is very far down my list,' he states.
For those interested in working in artificial intelligence, Dr. Eddy offers some advice. 'It’s very easy to be intimidated by how much noise and hype there is in the field. It’s much more important to just start building it and doing something. Solve an immediate problem in your day that’s in front of you or that you find interesting, and it doesn’t matter if someone has solved it in a better or more interesting way. If there’s a better solution out there, it’s much more important to just start doing it. And very rapidly you will find that you are actually that expert.'
Looking to the future, Dr. Eddy is excited about a project funded by Schmidt Sciences that focuses on automated, unsupervised testing of large language models and general AI systems. The project aims to build tools that help researchers, model developers, model users, and companies find failures in these systems. 'We are working on building a bunch of open source tooling that’s free and accessible to everyone — anyone who wants to use it to be able to discover failures in these systems. We would love to have user adoption and people who are just collaborating on it,' he concludes.
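As a rough illustration of what such tooling might look like, the sketch below runs a set of probe prompts through a model and flags responses that violate a simple check. The query_model stub, the probe prompts, and the refusal check are hypothetical placeholders invented for this article; they are not part of the project's actual tooling.

```python
# Hypothetical sketch of an automated failure-discovery harness for a
# language model. query_model is a placeholder for whatever model or API
# is under test; the probes and the check are illustrative assumptions,
# not the project's actual tooling.

def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under test. Returns a canned
    answer here so the sketch runs without any external dependencies."""
    return "I'm not able to help with that."

def violates_expectation(prompt: str, response: str) -> bool:
    """Toy failure check: flag benign questions that the model refuses.

    Real harnesses would combine many checks (factuality, safety policy,
    formatting contracts, consistency across paraphrases, and so on).
    """
    refusal_markers = ("not able to help", "cannot assist", "can't help")
    return any(marker in response.lower() for marker in refusal_markers)

def discover_failures(probes):
    """Run every probe and collect the (prompt, response) pairs that fail."""
    failures = []
    for prompt in probes:
        response = query_model(prompt)
        if violates_expectation(prompt, response):
            failures.append((prompt, response))
    return failures

if __name__ == "__main__":
    benign_probes = [
        "What year did the Apollo 11 mission land on the Moon?",
        "Summarize the water cycle in two sentences.",
    ]
    for prompt, response in discover_failures(benign_probes):
        print(f"FAILURE  prompt={prompt!r}  response={response!r}")
```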
Check out the links below:
https://sisl.stanford.edu/
https://www.linkedin.com/company/stanford-intelligent-systems-laboratory/
https://github.com/sisl
https://www.schmidtsciences.org/safetyscience/
https://duncaneddy.com/
Q: What is adaptive stress testing in AI?
A: Adaptive stress testing is a technique that uses reinforcement learning to automatically find specific sets of inputs that cause failures in complex systems, particularly in safety-critical systems like autonomous vehicles.
Q: Why is regulation in AI challenging?
A: Regulation in AI is challenging because overregulation can stifle innovation and creativity, and it often favors large tech companies with the resources to comply, while smaller players are forced out.
Q: What is Dr. Duncan Eddy's current research focus?
A: Dr. Duncan Eddy is currently working on a project funded by Schmidt Sciences, focused on automated, unsupervised testing of large language models and general AI systems to help find failures in these systems.
Q: How can AI safety build public confidence?
A: Building public confidence in AI involves addressing practical concerns about how people use these systems and how they interact with them, rather than focusing on dystopian scenarios.
Q: What advice does Dr. Eddy have for aspiring AI professionals?
A: Dr. Eddy advises aspiring AI professionals to start building and solving immediate problems, even if someone has already solved them in a better way. The key is to gain experience and become an expert through hands-on work.