All Tags
Browse through all available tags to find articles on topics that interest you.
Showing 1 result for this tag.
Why AI Alignment Failure Is Structural: Learned Human Interaction Structures and AGI as an Endogenous Evolutionary Shock
This paper argues that perceived AI alignment failures are not due to emergent malign agency but rather reflect AI models' statistical internalization of the full spectrum of human social interactions, including coercive ones. It redefines AGI risk as an amplification of existing human contradictions, one that calls for structural governance rather than attempts to instill a single, universal morality.