AI Summary • Published on Mar 24, 2026
The rapid integration of Generative Artificial Intelligence (GenAI) tools into programming education, especially at the secondary school level, raises significant questions about how young learners critically engage with these tools and perceive their ethical responsibilities. Current research predominantly focuses on university students or professional developers, leaving a critical gap in understanding the experiences of secondary school novices—the next generation of software engineers. This lack of insight is concerning, as GenAI tools, while offering benefits like debugging and code generation, can also challenge the development of essential skills such as code comprehension, frustration tolerance, and critical thinking. Furthermore, despite documented gender disparities in traditional programming education, little is known about gender differences in AI-assisted programming, or about how cultural contexts such as Germany's strict data protection regulations shape students' attitudes toward AI ethics. Without addressing these gaps, educational institutions risk developing curricula that fail to adequately prepare students for an AI-driven software landscape, potentially exacerbating existing gender gaps.
This exploratory study involved 84 German secondary school students, aged 16–19, recruited from extracurricular software development workshops. The researchers employed a mixed-methods approach, using a survey that combined five-point Likert-scale items with open-ended questions. The questionnaire was structured into three thematic areas: demographic and background information, critical thinking practices in AI-assisted programming (addressing RQ1), and AI literacy concerning ethical learning (addressing RQ2). Critical thinking items were adapted from existing instruments for university-level courses, and AI ethics items were drawn from a validated AI Literacy Questionnaire for secondary students. Data collection took place online during the summer of 2025. Participants provided voluntary consent after receiving explanations of the study's purpose and of key terms such as GenAI. Data analysis combined descriptive statistics, chi-square tests with Cramér's V to assess gender-dependent differences in the quantitative responses, and thematic analysis of the open-ended questions, with an interrater reliability of 0.89 for coding. The study acknowledges limitations such as self-selection bias, reliance on self-reported data, possible differences in how younger learners interpret survey items, and limited generalizability due to the specific sample and cultural context.
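The gender-difference analysis described above can be sketched in a few lines. This is an illustrative example only: the contingency table below uses hypothetical counts, not the study's data, and a complete test would additionally derive a p-value from the chi-square distribution (e.g. via `scipy.stats.chi2_contingency`).

```python
# Sketch of a chi-square test statistic with Cramér's V as effect size,
# as used in the study for gender-dependent differences in Likert responses.
from math import sqrt

def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

def cramers_v(table):
    """Cramér's V: sqrt(chi2 / (n * (min(rows, cols) - 1))), in [0, 1]."""
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0])) - 1
    return sqrt(chi_square_statistic(table) / (n * k))

# Hypothetical counts: rows = boys / girls, columns = one Likert item
# collapsed to disagree / neutral / agree.
table = [
    [10, 12, 22],
    [18, 14, 8],
]
print(f"chi2 = {chi_square_statistic(table):.2f}, V = {cramers_v(table):.2f}")
```

Cramér's V normalizes the chi-square statistic by sample size and table shape, which is why the study can report effect sizes that are comparable across survey items with different response distributions.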
The study revealed an "AI paradox" among students. While a majority showed strong commitment to testing code (88%) and checking for defects (86%), a significant portion (21% agreed, 32% neutral) indicated willingness to integrate AI-generated code without fully understanding it. Gender differences emerged: boys reported more frequent and experimental use of AI tools, often consulting AI before peers, and were more likely to accept privacy risks. Girls, conversely, expressed greater skepticism toward AI outputs, prioritized human collaboration, and were more inclined toward privacy-conscious data filtering or avoiding AI tools altogether. In terms of AI ethics, students demonstrated high awareness: 88% understood AI misuse risks, 90% expected transparency, and 95% supported rigorous testing. Girls placed a slightly stronger emphasis on human accountability for AI systems. Thematic analysis of the open-ended responses on responsibility for AI ethics highlighted multi-stakeholder governance, emphasizing the roles of politics (especially the EU), companies, and developers, with some students also pointing to the responsibility of individual users and educational institutions. Students showed pragmatic skepticism regarding corporate self-regulation, aligning with Germany's strong regulatory environment.
The findings underscore a critical "AI paradox": students possess strong ethical reasoning and awareness regarding AI risks but often struggle to translate these insights into concrete, responsible programming practices, such as critically vetting AI-generated code. This suggests that awareness alone is insufficient without explicit integration into coding workflows. Gendered patterns indicate that boys' experimental, AI-first approaches and girls' collaborative, privacy-focused strategies could offer complementary strengths in team-based software engineering, requiring educational approaches that foster both. The strong emphasis on EU-level governance and regulatory accountability among German students highlights the profound influence of cultural context on ethical reasoning, suggesting that international software engineering education must balance local expectations with global collaboration. Key lessons include the need to understand informal learning pathways for GenAI, strengthen critical thinking by requiring code explanation and documenting AI assistance, and connect ethical awareness directly to concrete programming tasks. Future research should explore these patterns across diverse regulatory environments, investigate how students develop GenAI practices outside formal teaching, and examine targeted interventions to foster critical AI literacy and responsible programming behaviors in an inclusive manner.