AI Summary • Published on Apr 14, 2026
Artificial intelligence (AI) systems are increasingly integrated into daily life, influencing how individuals access health information, make medical decisions, and receive medical care. Despite this pervasive influence, the field of epidemiology currently lacks comprehensive frameworks to measure AI exposure or systematically study its health effects at a population level. The authors argue that existing experimental research paradigms, which are typically limited to short-term, low-dimensional treatments, cannot capture the chronic, sustained, and multifaceted health consequences of AI use across populations.
The authors propose a conceptual framework, adapted from environmental epidemiology, to study AI as a determinant of health. This framework distinguishes between two types of AI exposure: "ambient AI exposure," which refers to the shared algorithmic layer of information and institutional environments (e.g., algorithmic curation, AI-mediated institutional decisions affecting populations regardless of individual choice), and "personal AI exposure," involving direct, volitional interactions with AI systems (e.g., using a chatbot for health advice). The framework also characterizes AI's potential causal roles in epidemiological models—as an exposure, confounder, mediator, or effect modifier. It emphasizes the need for population-level study designs, such as prospective cohorts and quasi-experiments, to address the complexity of AI exposures, including their time-varying, adaptive, generative, and non-stationary nature, as well as violations of the Stable Unit Treatment Value Assumption (SUTVA).
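The mediator role described above can be made concrete with a small simulation. This is an illustrative sketch, not an analysis from the paper: the data are synthetic, the variable names (`education`, `ai_use`, `health`) and effect sizes are invented, and the mediation decomposition uses a simple linear difference-of-coefficients approach rather than any method the authors specify.

```python
# Hypothetical simulation: AI use as a *mediator* between education and a
# health outcome, per the framework's causal roles. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

education = rng.normal(size=n)                 # standardized schooling
ai_use = 0.5 * education + rng.normal(size=n)  # AI use depends on education
health = 0.3 * education + 0.4 * ai_use + rng.normal(size=n)

# Total effect of education on health: slope of health ~ education.
total = np.polyfit(education, health, 1)[0]

# Direct effect: education's coefficient once ai_use is adjusted for (OLS).
X = np.column_stack([np.ones(n), education, ai_use])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)
direct = beta[1]

print(f"total effect  ~ {total:.2f}")    # true value 0.3 + 0.5*0.4 = 0.50
print(f"direct effect ~ {direct:.2f}")   # true value 0.30
print(f"mediated share ~ {(total - direct) / total:.0%}")  # about 40%
```

In this toy setup, roughly 40% of education's association with health flows through AI use; conditioning on the mediator would mask that pathway, which is why the framework insists on specifying AI's causal role before choosing an adjustment set.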
Illustrations using nationally representative US survey data highlight that AI use is not uniform across the population but varies significantly by demographics such as education, race, and ethnicity. For instance, daily AI use is concentrated among college-educated adults, and health-related AI use is disproportionately reported by Black adults. These observed patterns generate specific, testable hypotheses regarding health disparities, suggesting that AI may mediate socioeconomic health gradients or impose concentrated harms on certain vulnerable populations. The authors also point out a critical measurement gap: current data can describe *who* uses AI and *how often*, but cannot yet establish *whether* this use affects health at a population level, because AI-use data are not linked to health outcomes.
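One way such a descriptive pattern becomes a testable hypothesis is a simple two-group prevalence comparison. The sketch below uses invented counts (not the paper's survey estimates) and a standard two-proportion z-test from first principles, relying only on the Python standard library:

```python
# Hypothetical worked example: does reported health-related AI use differ
# between two demographic groups? Counts below are invented for illustration.
from statistics import NormalDist

x1, n1 = 240, 1000   # group A: users reporting health-related AI use
x2, n2 = 180, 1000   # group B

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                       # pooled prevalence
se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided

print(f"prevalence difference = {p1 - p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

A real analysis of complex survey data would additionally need design weights and variance adjustment; the point here is only the logic of converting an observed usage gap into a falsifiable claim.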
The proposed framework calls for routine measurement of AI exposure in population health studies, similar to diet or physical activity, using methods like self-report and passive sensing. Study designs must accommodate the dynamic nature of AI exposures, employing advanced causal inference techniques. The paper underscores significant health equity concerns, noting that both lack of access to quality AI and disproportionate exposure to its harms are critical. It advocates mandated data sharing by AI companies, independent auditing, and AI exposure registries to enable independent research, akin to pharmacovigilance. Despite certain limitations, such as potential overlaps between ambient and personal exposure, the authors argue that systematic, population-level scrutiny of AI's health effects is imperative as AI continues to shape global health outcomes and policies.