Published Date: 03/11/2025
Since the release of the chatbot ChatGPT in late 2022, there has been frantic debate at universities about artificial intelligence (AI). These conversations have primarily centered on undergraduate teaching — how to prevent cheating and how to use AI to improve learning. However, a quieter, deeper disruption is unfolding in research, the other core activity of universities.
Doctoral education has long been seen as the pinnacle of academic training: an apprenticeship in original thinking, critical analysis, and independent inquiry. However, this model is now under pressure. AI is not just another research tool; it is redefining what research is, how it is done, and what counts as an original contribution.
Universities are mostly unprepared for the scale of disruption, with few having comprehensive governance strategies. Many academics remain focused on the failings of early generative AI tools, such as hallucinations (confidently stated but false information), inconsistencies, and superficial responses. But AI models that were clumsy in 2023 are becoming increasingly fluent and accurate.
AI tools can already draft literature reviews, write sophisticated code with human guidance, and even generate hypotheses when provided with data sets. ‘Agentic’ AI systems that can set their own sub-goals, coordinate tasks, and learn from feedback represent another leap forward. If the current trajectory continues, we’re fast approaching a moment when much of the conventional PhD workflow can be completed, or at least heavily supported, by machines.
This shift poses challenges for educators. What constitutes an original contribution becomes unclear when AI tools produce literature reviews, acquire and analyze data, and draft thesis chapters. Students might need to pivot from executing research tasks to framing questions and interrogating AI outputs.
To explore what the near future of research training might look like, I conducted a role-play simulating a PhD student working with a hypothetical AI assistant. I used Claude, a leading AI system built by the firm Anthropic in San Francisco, California.
I fed the chatbot a detailed prompt describing a fictional AI research assistant called HALe — inspired by the AI character HAL 9000 from the science-fiction film 2001: A Space Odyssey. I gave HALe capabilities that are already under development and are likely to improve in coming years. These include accessing external databases, integrating environmental and biological data, and performing advanced analyses autonomously. I then played the part of the student, asking questions and responding to the chatbot’s replies. The dialogue was generated in a single, unedited session — offering a fictional, yet plausible, glimpse of how future doctoral research could unfold.
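For readers who want to attempt a similar exercise, the sketch below shows one way such a role-play could be set up programmatically with Anthropic's Python SDK, rather than through the chat interface I used. It is a minimal sketch under stated assumptions: the system prompt wording, the model alias, and the ask_hale helper are illustrative stand-ins, not the actual prompt or tooling behind the dialogue described here.

```python
# A minimal sketch, assuming Anthropic's Python SDK (`pip install anthropic`).
# The system prompt, model alias, and helper name are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Cast the model as the fictional research assistant, HALe.
SYSTEM_PROMPT = (
    "You are HALe, a fictional AI research assistant. You can access external "
    "databases, integrate environmental and biological data, and perform "
    "advanced analyses autonomously. Stay in character and describe your "
    "reasoning as you work."
)

history = []  # accumulated turns, so HALe retains context across the session


def ask_hale(question: str) -> str:
    """Send one 'student' turn and return HALe's in-character reply."""
    history.append({"role": "user", "content": question})
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model alias
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply


print(ask_hale(
    "HALe, what knowledge gaps exist in how extreme ocean temperatures "
    "affect marine species?"
))
```

Because each reply is appended to the running history, the conversation unfolds as one continuous session, mirroring the single, unedited exchange described above.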
The simulated goal was to complete a PhD project investigating how extreme ocean temperatures affect marine species — an ambitious task involving data synthesis, statistical modeling, and writing a paper for publication. In this fictional scenario, HALe didn’t merely assist; it took initiative. It searched and extracted data from scientific literature, identified knowledge gaps, harmonized environmental and biological data sets, ran complex statistical analyses, interpreted the results, drafted a manuscript, suggested peer reviewers, and even created an open-access data repository. The entire process, which would realistically take a student several months, played out in a short sequence of guided exchanges that might occupy just a few hours.
Although today’s AI models cannot yet perform these tasks with anything approaching full autonomy, the simulation was grounded in what current systems can already do with human guidance. For example, ChatGPT, Claude, and other state-of-the-art chatbots can draft credible literature reviews, propose hypotheses, suggest analytical approaches, and generate code that — when reviewed and validated by a human — can process real data sets and produce meaningful outputs. They can even help interpret statistical results and visualize findings. What struck me while conducting this exercise was how much of the conventional PhD process could now be driven and accelerated by AI. At times, it felt like working with a hyper-competent and astonishingly rapid research assistant. It was both exciting and unsettling.
Of course, this simulation reflects a particular kind of project — analytical, data-rich, and computational in nature. Experimental or field-based PhD programs, especially those that require collecting samples, doing laboratory work, or interacting with other people or with the natural world, will remain less susceptible to full automation. But even in these areas of science, AI is likely to play a growing part in experimental design, autonomous data collection, literature synthesis, and post-experiment analysis.
This experience brought home how training in academic skills will need to be fundamentally reconsidered in an era of AI.
Q: What is the primary concern with AI in PhD education?
A: The primary concern is how AI is redefining what constitutes an original contribution in research, making it unclear where the line is between human and machine contributions.
Q: How are universities responding to the AI disruption in research?
A: Many universities are unprepared, with few having comprehensive governance strategies. They often focus on the failings of early AI tools rather than their potential benefits.
Q: What are some capabilities of current AI tools in research?
A: Current AI tools can draft literature reviews, write sophisticated code, generate hypotheses, and even perform complex data analyses when guided by humans.
Q: How might AI change the role of PhD students?
A: PhD students might need to shift from executing research tasks to framing questions, interpreting AI outputs, and ensuring the accuracy and relevance of AI-generated content.
Q: What types of PhD programs are less susceptible to AI automation?
A: Experimental or field-based PhD programs, which require hands-on activities like collecting samples, laboratory work, or interacting with the natural world, are less susceptible to full automation.