Published Date: 22/09/2025
There is no question that classroom structures look drastically different than they did even a few years ago, thanks to the rapid evolution of artificial intelligence. As a publicly available tool, AI was something students took advantage of almost immediately. The ability to streamline basic tasks, organize classroom information, and get another set of eyes on a high-pressure assignment or email was revolutionary for college students nationwide. As professors and students continue to navigate this growing resource, recent studies on the effects of AI use in professional fields have emerged, highlighting the risks it poses to students of various backgrounds and its effects on how we learn.
Using AI as a student is complicated by the need to comply with the UT System’s guidelines. Around the time of ChatGPT’s launch, The University of Texas System started an early initiative called “AI Forward, AI Responsible.” Broadly, the initiative was designed to allow faculty to use AI to help students learn, improve instructional design, and boost administrative efficiency. Since SFA joined the UT System, the rules have allowed faculty to set their own AI policies for the classroom.
Students are not the only ones with varying opinions about the use of AI in the classroom and in learning habits; faculty are divided as well. While most instructors would never want students to use generative AI to complete assignments for them, some encourage its use for learning or studying. Other instructors in the same department, by contrast, may ask students to avoid it entirely.
A recent study conducted in Poland suggests that physicians who regularly relied on AI assistance to identify signs of cancer during colonoscopies became worse at spotting those signs on their own, an effect referred to as “deskilling” or “deskillification.” This is the first real-world example showing how incorporating AI into the workplace can erode our abilities, and with studies pointing to weakened thinking skills, the results can reasonably be extended to how AI is affecting students in the classroom.
Similarly, a key drawback of using AI to study is its tendency to introduce incorrect information or miss critical information entirely. AI is not always accurate, even when it is fed material straight from a textbook. AI-generated study material, although sourced from the textbook, may leave out important details or the context needed to understand a topic as a whole. As the Polish study may suggest, when students rely on AI to summarize assigned reading or turn it into an “easily digestible” podcast, they give up the responsibility to interpret the text, derive meaning from it, and internalize the knowledge for their academic and professional careers.
On the other hand, researchers have gathered data to better understand how neurodiverse students, such as those diagnosed with ADHD or dyslexia, can use AI to help with writing challenges. Students who are already prone to errors in reading and writing may be able to comprehend classroom or research material better with AI’s help, and those who experience anxiety around reading and writing comprehension may benefit from the technology as well.
The question is when our dependence on this resource reaches an unhealthy level, and how AI regulations and policies can account for those who stand to benefit from it more than others. With research suggesting that AI is causing us to unlearn skills, students and teachers alike should be wary of how they incorporate AI into the classroom and into their study habits. Leaning on AI as a study tool can be a slippery slope: students risk the deterioration of their critical thinking skills and of their ability to turn the information in textbooks into usable knowledge.
Q: What is the 'AI Forward, AI Responsible' initiative?
A: The 'AI Forward, AI Responsible' initiative, launched by the University of Texas System, is designed to allow faculty to use AI to help students learn, improve instructional design, and boost administrative efficiency. This initiative aims to balance the benefits of AI with responsible use.
Q: What is 'Deskilling' in the context of AI use?
A: Deskilling, or deskillification, refers to the negative effect on professionals' abilities when they rely heavily on AI. For example, a study in Poland found that physicians who used AI to assist in identifying cancer signs during colonoscopies had a reduced ability to identify these signs independently.
Q: How can AI impact neurodiverse students?
A: AI can benefit neurodiverse students, such as those with ADHD or dyslexia, by helping them with writing challenges and improving their comprehension of classroom or research material. However, it is important to balance these benefits against the risk of over-reliance on AI.
Q: What are the potential risks of using AI in studying?
A: The potential risks of using AI in studying include the introduction of incorrect information, the omission of critical details, and the erosion of critical thinking skills. Students may also lose the ability to interpret and derive meaning from text independently.
Q: How can AI regulations and policies be balanced to consider different student needs?
A: AI regulations and policies should consider the varying needs of students, including those who may benefit more from AI, such as neurodiverse students. This can be achieved by allowing faculty to have their own policies and by promoting responsible use of AI in the classroom.