AI Summary • Published on Apr 22, 2026
Generative artificial intelligence (genAI) is rapidly transforming how content is produced and consumed. However, these models are susceptible to "model collapse": performance degrades when models are continuously trained on data generated by earlier versions of themselves, reducing the diversity and accuracy of their outputs. Widespread adoption of genAI therefore creates a social dilemma: it offers individuals short-term benefits while potentially compromising the quality of future models and, ultimately, collective social welfare. Current research often overlooks this dynamic interplay between adoption and performance degradation.
The authors developed a "collaboration game" within an evolutionary game-theoretic framework to investigate these feedback dynamics. In this model, individuals choose among three strategies: performing the work themselves (H), doing no work (N), or delegating the work to genAI (AI). Replicator dynamics govern how the mix of strategies evolves over time in a large, well-mixed population. Model collapse enters through the assumption that the benefit of AI-assisted work declines linearly as the frequency of AI usage in the population increases. The researchers analyzed long-term social welfare outcomes by categorizing tasks along two dimensions: the baseline incentive for human work and the severity of model collapse. They also extended the model to a two-task setting to explore how habit formation in AI use could produce spillover effects across domains.
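To make the setup concrete, here is a minimal replicator-dynamics sketch of the three-strategy game. The summary does not give the paper's actual payoff matrix, so the payoff structure and constants below (`b_h`, `c_h`, `b0`, `k`) are illustrative assumptions, not the authors' parameterization.

```python
import numpy as np

def payoffs(x, b_h=1.0, c_h=0.6, b0=1.0, k=0.9):
    """Per-strategy payoffs given population shares x = [x_H, x_N, x_AI].

    Assumed toy structure:
    - H: human work yields benefit b_h at effort cost c_h.
    - N: no work, zero payoff.
    - AI: delegated work at no effort cost, whose benefit decays linearly
      with the population-wide frequency of AI use (the model-collapse term).
    """
    x_h, x_n, x_ai = x
    f_h = b_h - c_h
    f_n = 0.0
    f_ai = b0 * (1.0 - k * x_ai)  # benefit erodes as AI usage spreads
    return np.array([f_h, f_n, f_ai])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_bar)."""
    f = payoffs(x)
    f_bar = float(x @ f)
    x = x + dt * x * (f - f_bar)
    x = np.clip(x, 0.0, None)
    return x / x.sum()  # renormalize onto the simplex

# Simulate from an almost-all-human population with a small AI-using minority.
x = np.array([0.90, 0.05, 0.05])
for _ in range(20_000):
    x = replicator_step(x)

print("equilibrium shares (H, N, AI):", np.round(x, 3))
print("average payoff (a welfare proxy):", round(float(x @ payoffs(x)), 3))
```

With these numbers the population settles into a stable human/AI mix rather than full AI takeover (analytically, the interior fixed point sits at x_AI = (b0 - (b_h - c_h)) / (b0 k) = 2/3): the collapse term erodes AI's payoff advantage as adoption spreads, which is the negative frequency dependence at the heart of the model. Note that in this stripped-down sketch the mixed equilibrium merely equalizes payoffs; the strict welfare losses the authors report arise from their fuller collaboration payoffs, which this sketch does not attempt to reproduce.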
The study found that, absent model collapse, genAI is either welfare-neutral or welfare-improving. With model collapse, the picture changes significantly. For low-incentive tasks (dubbed "busy-work"), genAI generally improves social welfare, even under moderate model collapse, by enabling work that would otherwise go undone. Conversely, for high-incentive tasks subject to strong model collapse (creative work, or "poetry"), the introduction of genAI consistently reduces social welfare at stable mixed human-AI equilibria, even though individuals retain strong incentives to contribute. The model further showed that habit formation around genAI use can produce negative "spillover" effects: increased reliance on AI in low-stakes tasks, where it is beneficial, can carry over into high-value tasks where it is detrimental, amplifying welfare losses across domains.
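A stylized coupling of two such games can illustrate the spillover mechanism. The summary does not say how the paper models habit formation, so the coupling used here, a payoff bonus for delegating the high-value task that grows with AI adoption in the low-stakes task, and all the constants, are assumptions made for illustration.

```python
import numpy as np

# Stylized two-task extension: task 1 is low-stakes "busy-work", task 2 is a
# high-incentive "poetry" task with severe model collapse. The `habit` term
# below (a perceived payoff bonus for delegating task 2, proportional to AI
# adoption in task 1) is one simple guess at what habit formation could mean.

def step(x1, x2, dt=0.01, habit=0.5):
    """One replicator step for both tasks; each x = [x_H, x_N, x_AI]."""
    # Task 1: weak incentive for human work, mild collapse.
    f1 = np.array([0.05, 0.0, 0.8 * (1.0 - 0.3 * x1[2])])
    # Task 2: strong incentive, severe collapse, plus the habit spillover.
    f2 = np.array([0.9, 0.0, 1.0 - 0.95 * x2[2] + habit * x1[2]])
    for x, f in ((x1, f1), (x2, f2)):
        x += dt * x * (f - x @ f)     # replicator update, in place
        np.clip(x, 0.0, None, out=x)
        x /= x.sum()
    return x1, x2

for habit in (0.0, 0.5):
    x1 = np.array([0.4, 0.3, 0.3])
    x2 = np.array([0.9, 0.05, 0.05])
    for _ in range(50_000):
        x1, x2 = step(x1, x2, habit=habit)
    # Welfare counts only actual task output, not the habit convenience bonus
    # (an assumption about how to score welfare).
    w2 = float(x2 @ np.array([0.9, 0.0, 1.0 - 0.95 * x2[2]]))
    print(f"habit={habit}: task-2 AI share = {x2[2]:.2f}, task-2 welfare = {w2:.2f}")
```

In this toy run, with `habit=0.0` the high-value task settles near its standalone mix (AI share around 0.11, welfare around 0.90), while with `habit=0.5` the AI habit formed on busy-work drags the poetry task to an AI share near 0.63 and welfare near 0.58: the cross-domain amplification described above.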
The findings suggest that mitigating genAI's negative impact on social welfare requires interventions that manage the data feedback loop: deploying robust detection and auditing systems for AI-generated content, and adopting data provenance and labeling practices to track content origins. It is also crucial to preserve incentives for human effort in high-value domains, and to recognize that a stable mix of human and AI production does not necessarily indicate healthy complementarity; it can instead signal welfare-reducing feedback. Finally, policymakers and organizations should anticipate cross-domain coupling and spillover effects from habit formation, since evaluating AI's impact domain by domain can understate the risks to collective welfare.