Five Considerations To Guide the Regulation of “General Purpose AI”

Policy guidance from international AI experts

The rapid adoption of artificial-intelligence-powered systems such as ChatGPT, which gained more than one million users within weeks of its launch in November 2022 and has since been used by more than 100 million people worldwide, has made clear that the question of whether (and how) so-called “general purpose artificial intelligence” (GPAI) should be regulated is not hypothetical, gratuitous, or premature.
 
The reality of AI’s rapid adoption, in circumstances that lack adequate or effective regulation, is at the heart of the debate around the Artificial Intelligence Act, the EU’s flagship AI regulation, which has been under negotiation for nearly two years. Similar debates are taking place elsewhere, including in Canada, as lawmakers struggle to update legislative frameworks that have been outpaced by the proliferation of unregulated technologies such as AI. While these technologies have delivered enormous benefits, many have also been shown to have profound impacts on personal autonomy, privacy, and democratic freedoms.
 
Given that EU regulation will likely become the de facto global standard for general purpose AI, much as the GDPR did for privacy, an international group of leading researchers and institutions from across domains has published a policy brief outlining considerations to guide the regulation of “General Purpose AI” in the EU’s AI Act, which will set the regulatory tone for addressing AI harms.
 
The policy guidance, endorsed by more than 75 organizations and internationally recognized experts in computer science, data protection, law and policy, and the social sciences, offers thoughtful recommendations applicable to regulating artificial intelligence globally. Its signatories agree that general purpose AI carries serious risks and must not be exempt under the EU AI Act or equivalent legislation in Canada or elsewhere. The brief argues the following:
  1. GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies rather than be narrowly scoped to chatbots and large language models (LLMs). The definition used in the Council of the EU’s general approach for trilogue negotiations provides a good model.
  2. GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks are carried downstream to a wide range of actors and applications, they cannot be effectively mitigated at the application layer.
  3. GPAI must be regulated throughout the product cycle, not just at the application layer, in order to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and the design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from a distant downstream application while evading any corresponding responsibility.
  4. Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets the original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility with downstream actors that lack the resources, access, and ability to mitigate all risks.
  5. Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. Standardized documentation practices and other approaches for evaluating GPAI models, particularly generative AI models, across many kinds of harm remain an active and hotly contested area of research, and any evaluation requirements should be subject to wide consultation, including with civil society, researchers, and other non-industry participants.
The recommendations are valuable for lawmakers, developers, lawyers, insurers, and privacy practitioners in all sectors and countries. They offer a global approach to addressing AI harms, one that is essential to ensuring the laws and regulations governing the design, testing, production, sale, and use of AI are as consistent and future-proof as possible.