Governance in Global AI Value Chains

Abstract

Every time someone trains, deploys, or queries an ‘artificial intelligence’ system, whether they know it or not, they are implicating the practices and services of an enormous number of interlinked businesses in the AI ‘stack’. These actors — data generators, collectors and labellers; model developers, finetuners and hosting marketplaces; and cloud hardware, compute and storage providers — have a huge impact on the prevention (or not) of emerging AI-related harms, such as the generation of synthetic child sexual abuse material, the leakage of personal data from foundation models, and the creation of sophisticated fake content by scammers and confidence tricksters. And yet the specifics of these companies and actors — the way they intermediate, interconnect, facilitate functionality, and widen or foreclose regulatory possibilities — are, for the most part, entirely absent from discussions around the governance of AI.

The goal of this project is to provide the first focused exploration of the public-private regulatory practices that are currently shaping high-stakes AI systems on the ground, combining empirical analysis of the developing networked realities of AI governance with a conceptual framework rooted in the history of internet, software, and platform regulation. The project engages with a complex socio-technical policy landscape where the legal instruments of contract, intellectual property and data protection come together with the technoregulatory realities of scanning, bespoke machine learning hardware, content hashing, and an ever-growing landscape of APIs and increasingly sophisticated forms of predictive content analysis. A main focus of the project involves exploring the importance of softer forms of private regulation — such as developer practices, documentation norms, and the principles emerging in online model-sharing-and-tuning communities.

Early results from this project, pursued in partnership with University College London, have been presented at the top interdisciplinary AI policy conference (ACM FAccT), have already informed policy being developed by regulators in the United States and Australia, and are being developed into a peer-reviewed book manuscript.

Main content
Selected Publications

Robert Gorwa, Michael Veale (2024): Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries. Law, Innovation and Technology 16 (2): 1–51.

Michael Veale, Kira Matus, Robert Gorwa (2023): AI and Global Governance: Modalities, Rationales, Tensions. Annual Review of Law and Social Science 19 (2): 1–21.