AI Summary • Published on Mar 24, 2026
The knowledge gap hypothesis and digital divide research have traditionally focused on disparities in information access and technology use. Generative artificial intelligence, however, produces new content rather than merely distributing existing information, and this challenges those frameworks. As generative AI becomes widely accessible, the core issue shifts from access and usage to the user's ability to critically evaluate, contextualize, and interpret AI-generated outputs. Existing models may therefore fail to capture the new dimensions of informational inequality these systems introduce.
This paper takes a conceptual approach, offering a theoretical extension of the established knowledge gap perspective. It revisits the foundational knowledge gap hypothesis and subsequent digital divide research, analyzing their limitations in the context of generative AI. By examining the distinctive character of AI-generated content, where information is produced dynamically rather than merely retrieved, the paper develops a new framework. The contribution is purely theoretical: it aims to provide a conceptual basis for future empirical research on the relationship between education, AI use, and knowledge inequality, and it reports no new empirical findings.
The paper's core proposition is that generative AI introduces a new dimension of informational inequality centered on the critical evaluation of AI-generated content. Unlike earlier forms of inequality rooted in access or basic usage, this "generative AI knowledge gap" holds that individuals with more formal education are more likely to question, contextualize, and verify AI outputs, whereas individuals with less formal education may accept and rely on AI-generated information more directly, with little critical assessment. This differentiation in epistemic competencies, such as interpreting information in context and recognizing a system's limitations, becomes a crucial factor in how knowledge is acquired and processed.
The proposed framework has significant implications for education, society, and future research. For education, it underscores the growing importance of fostering critical, reflective skills for assessing AI-generated content, beyond basic digital literacy. For society, a generative AI knowledge gap could exacerbate existing inequalities, shaping opinion formation and enabling the acceptance of inaccurate information where users lack critical evaluation skills. For future research, the paper calls for empirical studies of how diverse user groups interact with generative AI, of the specific role of education in shaping these interactions, and of how these dynamics evolve over time, including the exploration of contextual factors through systematic, longitudinal analyses.