OpenAI lobbied the EU to avoid stricter regulation of its AI models

OpenAI has been lobbying the European Union to water down incoming AI legislation. According to documents from the European Commission obtained by Time, the ChatGPT creator requested that lawmakers make several amendments to a draft version of the EU AI Act — an upcoming law designed to better regulate the use of artificial intelligence — before it was approved by the European Parliament on June 14th. Some changes suggested by OpenAI were eventually incorporated into the legislation.

Prior to its approval, lawmakers debated expanding terms within the AI Act to designate all general-purpose AI systems (GPAIs), such as OpenAI’s ChatGPT and DALL-E, as “high risk” under the act’s risk categorizations. Doing so would hold them to the most stringent safety and transparency obligations. According to Time, OpenAI repeatedly fought in 2022 against its own generative AI systems falling under this designation, arguing that only companies explicitly applying AI to high-risk use cases should be made to comply with the regulations. Google and Microsoft have pushed the same argument, similarly lobbying the EU to reduce the AI Act’s impact on companies building GPAIs.

“GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases”

“OpenAI primarily deploys general purpose AI systems – for example, our GPT-3 language model can be used for a wide variety of use cases involving language, such as summarization, classification, questions and answers, and translation,” said OpenAI in an unpublished white paper sent to EU Commission and Council officials in September 2022. “By itself, GPT-3 is not a high-risk system, but possesses capabilities that can potentially be employed in high risk use cases.”

Three representatives for OpenAI met with European Commission officials in June 2022 to clarify the risk categorizations proposed within the AI Act. “They were concerned that general purpose AI systems would be included as high-risk systems and worried that more systems, by default, would be categorized as high-risk,” reads an official record of the meeting obtained by Time. An anonymous European Commission source also told Time that, at that meeting, OpenAI expressed concern that this perceived overregulation could hamper AI innovation, claiming it was aware of the risks of AI and was doing all it could to mitigate them. OpenAI reportedly did not suggest any regulations that it believed should be in place.

“At the request of policymakers in the EU, in September 2022 we provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience,” said an OpenAI spokesperson in a statement to Time. “Since then, the [AI Act] has evolved substantially and we’ve spoken publicly about the technology’s advancing capabilities and adoption. We continue to engage with policymakers and support the EU’s goal of ensuring AI tools are built, deployed, and used safely now and in the future.”

OpenAI has not previously disclosed its lobbying efforts in the EU, and they appear to have been largely successful: GPAIs aren’t automatically classified as high risk in the final draft of the EU AI Act approved on June 14th. The act does, however, impose greater transparency requirements on “foundation models” — powerful AI systems like ChatGPT that can be used for many different tasks — requiring companies to carry out risk assessments and disclose whether copyrighted material has been used to train their AI models.

Changes suggested by OpenAI, including not enforcing tighter regulations on all GPAIs, were incorporated into the EU’s approved AI Act

An OpenAI spokesperson told Time that the company supported the inclusion of “foundation models” as a separate category within the AI Act, despite OpenAI’s secrecy regarding where it sources the data used to train its AI models. It’s widely believed that these systems are trained on pools of data scraped from the internet, including intellectual property and copyrighted materials. The company insists it has remained tight-lipped about data sources to prevent its work from being copied by rivals, but if forced to disclose such information, OpenAI and other large tech companies could become the subject of copyright lawsuits.

OpenAI CEO Sam Altman’s stance on regulating AI has been fairly erratic so far. The CEO has visibly pushed for regulation — having discussed plans with US Congress — and highlighted the potential dangers of AI in a statement on AI risk he signed alongside other notable tech leaders earlier this year. But his focus has mainly been on the future harms of these systems. At the same time, he’s warned that OpenAI might cease operating in the EU if the company is unable to comply with the region’s incoming AI regulations (though he later walked back those comments).

In its white paper sent to the EU Commission, OpenAI argued that its approach to mitigating the risks arising from GPAIs is “industry-leading.” “What they’re saying is basically: trust us to self-regulate,” Daniel Leufer, a senior policy analyst at Access Now, told Time. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.”

The EU’s AI Act still has a way to go before it comes into effect. The legislation now enters a final “trilogue” stage — negotiations between the European Parliament, the Council of the European Union, and the European Commission — that aims to finalize details within the law, including how and where it can be applied. Final approval is expected by the end of this year, and the law may then take around two years to come into effect.