In Study and Practice, no. 12 (2025), Zhao Chang critiques the prevailing paradigm of value alignment, which demands that AI-generated content consistently reflect human values. Because human values are inherently plural, dynamic, and often contradictory, rigid alignment risks oversimplifying and fossilizing those values and imposing cultural hegemony. To address the ethical challenges of governing large AI models, Zhao proposes shifting from alignment to "value symbiosis" as a guiding principle for human–machine integration. Drawing on reflexivity theory, he outlines a meta-framework for a symbiotic human–machine value system. Philosophically, governance should evolve from intervention to "empowerment," from discipline to "collaborative governance," and from top-down control to "self-regulation," fostering a path toward value symbiosis built on autonomous co-creation, negotiated consensus, and procedural rationality. Practically, governance should encourage multi-stakeholder participation in value formation, improve procedural mechanisms for translating symbiotic values into practice, and enhance transparency in human–machine decision-making, all in support of the co-evolution of humans and intelligent systems.