How AI Giants Try to Influence the Design of European Regulation

June 14, 2023 | Report

Rémi Bourgeot, Principal Economist for Europe with The Conference Board, recently spoke with a reporter at Atlantico, a leading French online news website, about the ways that US AI companies are trying to shape European regulations. This interview was translated from the original French.

Atlantico: OpenAI CEO Sam Altman threatened to shun Europe if EU regulation was too restrictive, before backtracking. What does he criticize the European Union for, and what would he like to see happen?

Rémi Bourgeot: First of all, Sam Altman, CEO of OpenAI, spoke out in favor of regulation, pointing out the existential risks of AI at a high-profile US congressional hearing. He supports the introduction of a licensing system for AI models that exceed a certain capability threshold. In this way, he puts himself in the good graces of legislators by asserting his sense of the common good, while simultaneously encouraging regulations that raise the bar for new entrants.

The EU's draft regulation also emphasizes certification. It is, however, more advanced and more restrictive than the early proposals circulating in the US. In this respect, it worries OpenAI and all the digital giants. The EU's AI Act was originally devised before it became clear that language models had the potential to unify AI, enabling the current explosion in generative AI by treating all types of content as language. The EU based its approach on various levels of risk depending on the AI application, from harmless spam filters to unacceptable uses such as facial recognition. It then took into account the issue of foundation models (the category to which generative AI belongs), which will be subject to particularly strict constraints due to the multitude of uses and the level of manipulation they allow.

In terms of certification, the AI Act will restrict the sharing and reuse of models for security reasons. Given its significant degree of extraterritoriality, this will raise issues of consistency between the various regulations around the world, particularly with the US, when it comes to promoting innovation and competition.

In addition to imposing licenses, the EU's AI Act targets the opacity of models regarding data use, for example through copyright compliance and the disclosure of the sources models draw on. This regulatory approach calls into question the very nature of AI models, whose structure is optimized with no clear understanding of its various ramifications. It also challenges the way the giants of the sector operate, including OpenAI, with Microsoft's backing. OpenAI does not disclose the nature of the database used to train GPT, or even the number of parameters involved.

To what extent is it in Europe's interest, or not, to comply with his expectations? What would be the consequences for the EU if OpenAI really did leave Europe?

It is difficult for an AI giant to simply avoid Europe in the long term. European regulations will target models made available to European users, wherever the providers are based, and will in any case impose an arduous certification process.

Google, for example, has chosen not to make its own chatbot, Bard, available to the European market for the time being, apart from a few almost-deserted territories in Norway (outside the EU). This is both a precaution in the face of existing regulations, such as the data protection regulation (GDPR), and a signal sent to EU officials as to the direction of the AI Act, currently in the making.

The issue of regulating generative AI is crucial, and the political sphere is only just beginning to grapple with it, while the sector is experiencing major technical developments at an almost weekly pace. All the digital giants are getting involved in this kind of negotiation or showdown with the European authorities. As with GDPR, the EU tends to set standards that act as a benchmark and influence the rest of the world, not least because of the direct repercussions of these regulations on this global market. Finally, Sam Altman confirmed his plans to set up operations in Europe, which is concretely more advantageous for these companies than undergoing European regulation from outside the local market.

What would be the risks of implementing overly restrictive laws on artificial intelligence?

The open-source community is disrupting generative AI in the US by developing more flexible models that are lighter on data and computing resources. Recently, an internal Google memo sounded the alarm about the potential of these independent AI players, whose inventiveness and responsiveness surpass those of the giants. Europe, despite its chronic shortage of startup funding, has the potential to become a more significant AI force by building on this open-source dynamic. Contrary to what the billions announced by tech giants and venture capitalists suggest, the development of generative AI models is becoming increasingly accessible, thanks to remote tools.

European regulators are well advised to regulate the development of generative AI with respect to data use and criminal misuse. However, they must be careful not to stifle the emergence of new players, particularly European ones, and the platforms they rely on. There is a danger that this dynamism could be neutralized by the threat of excessive fines, which would scare off investors, or by too systematically holding model developers responsible for every use of their models. A weakening of the open-source scene, both in the US and in Europe, as a result of European regulation risks increasing concentration in the sector and slowing the development of the more reliable models that a more open approach allows.

Creating exemptions for R&D, as envisioned in the AI Act, is a step in the right direction, but it is probably not enough given the fragmented and informal reality of the developer community. Striking the right balance between the risk of criminal, or even apocalyptic, behavior and the need for openness and transparency is particularly difficult. Given the effervescence of open-source AI, this issue of control calls into question the very status of the Internet and its level of openness.

This interview was initially published in French by Atlantico, a leading French online news website featuring daily interviews with experts and decision-makers on business and politics.

AUTHOR

Rémi Bourgeot

Principal Economist, Europe
The Conference Board

