AI Summary • Published on Mar 5, 2026
The growing prevalence and capability of generative AI (genAI) tools, particularly for programming, present both challenges and opportunities for physics education. While genAI can accelerate routine tasks and assist with problem-solving, its impact on students' computational modeling skills, especially in open-ended assignments, remains underexplored. Existing research points to potential benefits such as increased efficiency, but also to risks such as over-reliance, reduced deep cognitive engagement, and diminished critical thinking and agency. This study examines when, why, and how students use genAI in computational physics assignments and identifies implications for teaching practice.
This qualitative study employed thematic analysis of semi-structured retrospective interviews with 19 physics students from 13 groups. The students had recently completed an open-ended computational essay assignment in a third-semester electromagnetism course at the University of Oslo, where genAI use was encouraged. The interviews, each lasting approximately one hour, covered students' backgrounds and detailed accounts of their work with genAI. Data were triangulated against the computational essays themselves and, for seven groups, genAI chat logs. The thematic analysis followed Braun and Clarke's six-stage method, using an adapted version of Phillips et al.'s framework for computational modeling in physics to organize and interpret the findings. All 12 groups that used genAI worked with ChatGPT-4o; some also used Claude, Perplexity, and GitHub Copilot.
The study found that genAI significantly influenced several aspects of students' computational modeling, particularly planning, implementing, debugging, and optimizing code, as well as the novel practice of inspecting AI-generated code. Students frequently used genAI to initiate projects, implement specific code segments, and resolve bugs efficiently. While this often saved time, some students noted a trade-off against learning foundational skills, and over-reliance sometimes led to incorrect model assumptions or to generated code the students did not understand. Students also made moderate use of genAI for understanding physics theory, manipulating equations, and finding resources, though they consistently verified factual claims against trusted sources. Most students prioritized learning and tried to moderate their genAI use, treating it as an assistant rather than a source of complete solutions, yet time pressure occasionally pushed them toward greater dependence.
The findings suggest several implications for physics education. First, teaching assistants and instructors remain crucial, especially for validating models and addressing conceptual physics questions, even when genAI is used extensively. Second, educators must actively guide students toward productive genAI use, emphasizing critical verification of AI-generated output to ensure deep understanding. Finally, open-ended assignments that permit genAI access may not effectively foster foundational programming and modeling skills, as many students turned to genAI quickly when stuck; other instructional and assessment methods that restrict genAI use may therefore be necessary to ensure students develop core competencies. The study also notes that while professional physicists use genAI for efficiency, novices risk decreased skill acquisition, suggesting that students should prioritize fundamental skill development rather than mirroring professional genAI workflows.