Generative AI and Global Governance: A Study of International Policy, Regulation, and Ethical Frameworks for Managing the Risks of Autonomous Content Generation and Societal Impact
Abstract
This study investigates the global governance of generative artificial intelligence (GenAI) by analyzing the international policy frameworks, regulatory mechanisms, and ethical guidelines developed to address the risks of autonomous content generation and its societal consequences. Using a mixed-methods approach, the research combines qualitative content analysis of 22 major policy documents from the United Nations, European Union, OECD, G7, and national governments with quantitative assessment of GenAI adoption, investment, and risk incidence data. Findings reveal a fragmented governance landscape dominated by principle-based frameworks and early regulatory experiments (e.g., EU AI Strategy, 2021), with significant gaps in enforcement, global coordination, and risk-specific protocols for deepfakes, bias amplification, and labor displacement. Despite $67.2 billion in global AI investment in 2022 and 22% enterprise adoption of GenAI by late 2022, only 14% of organizations had formal governance structures. The study concludes that existing governance efforts laid foundational principles but failed to anticipate the scale, speed, and societal penetration of GenAI, necessitating adaptive, binding, and inclusive global mechanisms.