Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Yuchu Jiang

| Summary:

The rapid advancement of AI has expanded its capabilities across domains, yet
it has also introduced critical technical vulnerabilities, such as algorithmic
bias and adversarial sensitivity, that pose significant societal risks,
including misinformation, inequity, security breaches, physical harm, and
eroded public trust. These challenges highlight the urgent need for robust AI governance. We
propose a comprehensive framework integrating technical and societal
dimensions, structured around three interconnected pillars: Intrinsic Security
(system reliability), Derivative Security (real-world harm mitigation), and
Social Ethics (value alignment and accountability). Uniquely, our approach
unifies technical methods, emerging evaluation benchmarks, and policy insights
to promote transparency, accountability, and trust in AI systems. Through a
systematic review of over 300 studies, we identify three core challenges: (1)
the generalization gap, where defenses fail against evolving threats; (2)
inadequate evaluation protocols that overlook real-world risks; and (3)
fragmented regulations leading to inconsistent oversight. These shortcomings
stem from treating governance as an afterthought, rather than a foundational
design principle, resulting in reactive, siloed efforts that fail to address
the interdependence of technical integrity and societal trust. To overcome
this, we present an integrated research agenda that bridges technical rigor
with social responsibility. Our framework offers actionable guidance for
researchers, engineers, and policymakers to develop AI systems that are not
only robust and secure but also ethically aligned and publicly trustworthy. The
accompanying repository is available at
https://github.com/ZTianle/Awesome-AI-SG.
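
To make the "adversarial sensitivity" mentioned above concrete, here is a
minimal sketch of the classic one-step fast gradient sign method (FGSM) in
PyTorch. This illustrates the general vulnerability class the survey
discusses; it is not a method taken from the paper, and the model, inputs,
and epsilon value are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """One-step FGSM: shift the input along the sign of the loss gradient.

    A perturbation of size `eps` is often imperceptible to humans yet can
    flip the model's prediction, which is the adversarial sensitivity
    referred to above. Assumes image inputs scaled to [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each input element by eps in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Defenses hardened against this one fixed attack frequently fail against
stronger or adaptive variants, which is one instance of the generalization
gap the survey identifies.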

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3
