
Regulating AI without hindering the open-source boom

June 22, 2023 | Report

Rémi Bourgeot, Principal Economist for Europe with The Conference Board, recently spoke with a reporter at Atlantico, a leading French online news website, about the EU AI Act and how policymakers need to strike a balance in regulating the booming open-source AI scene. This interview was translated from the original French.

Insights

  • The open-source community has become a key driver of AI development, challenging the multinationals. It offers Europe an opportunity to partly catch up with the US in generative AI, drawing on its scientific capabilities.
  • The EU is leading the indispensable effort to regulate AI with its AI Act. However, despite exemptions, the Act still threatens the open-source scene.
  • Given the frantic pace of innovation in AI, policymakers need to build regulation on future-proof principles of reliability and transparency rather than on overly meticulous criteria at risk of obsolescence.

Atlantico: The European Parliament has adopted its position on AI regulation in plenary session. How difficult is it to agree on what AI encompasses and on what we want to regulate?

Rémi Bourgeot: The first difficulty for policymakers is to understand and define artificial intelligence. Radical innovations in large language models are now taking place at an almost weekly pace, bolstered in particular by the surge in open-source AI.

The EU AI Act was initially designed around earlier developments in AI, classifying distinct specialties by their associated level of risk, ranging, for example, from an innocuous spam filter to the unacceptable political use of facial recognition. The unification of AI, which began quietly in 2017, is fundamentally changing the game, notably with the emergence of generative AI models that handle and generate content of an extremely varied nature. The EU is attempting to adapt its regulation to this development with specific clauses for foundation models (which include generative AI), in particular requirements for certification and a degree of data transparency, especially with regard to copyright.

A further difficulty lies in the emergence of new players. OpenAI became an AI giant almost overnight, with the support of Microsoft. Add to this the explosion of open-source AI, and the shake-up of the technological landscape is even more pronounced than the emergence of new giants alone might suggest. An internal Google memo recently sounded the alarm about the major advances made by open-source players, which often surpass the multinationals in efficiency and flexibility. The European authorities are therefore working hard to regulate a technology that will have changed radically yet again by the time the text is supposed to come into force in 2025...

Regulation aims, as best it can, to make AI more reliable and less opaque, and to limit the risks of manipulation. Neural networks operate like black boxes, offering little insight into their inner workings. This opacity becomes particularly problematic as AI aspires to general competence and reaches into all aspects of life. Regulation therefore necessarily engages with the deep evolution of this technology.

Whether in Europe, the US or China, everyone wants to regulate AI in their own way. What are the risks if we don't achieve some form of global harmonization on the subject? Is Europe in danger of shooting itself in the foot?

The general trend is to set up licensing systems, but the criteria will obviously differ greatly from one political system to another. Segmentation is already a reality in China, whose Internet is largely separate from that of the US, not least because most of Google's products have been banned there in favor of local alternatives. Extending this logic to generative AI, China banned ChatGPT without delay, while its tech giants gradually offer their alternative chatbots.

The segmentation of regulation raises the question of the fragmentation of the Internet, which is particularly problematic between Europe and the United States, as well as the issue of extraterritoriality. The EU usually sets the major regulatory constraints, while the United States develops the digital offering. This has largely been the case with the GDPR, which has served as a global benchmark for data protection. Things are more complicated with AI, however. Sam Altman threatened to pull OpenAI out of Europe over the AI Act; he soon had to backtrack and confirm plans to set up operations on the continent, but the ongoing tug-of-war cannot be ignored. Meanwhile, Google has refrained from making its chatbot Bard available in Europe, sending a clear message to regulators.

The key issue for Europe is autonomy and, in any case, the development of a European supply of AI models, capitalizing in particular on the open-source dynamic, which is based on sharing model components. The AI Act strikes a very delicate balance in this respect. The European certification requirements go very far, especially for foundation models, although exemptions are provided for R&D and open source. It is essential to contain the risk of criminal or even apocalyptic uses without crushing developers under the weight of bureaucratic constraints and fearsome fines. To this end, regulators should target control at usage in particular and ease the pressure on open-source developers below a certain threshold of model capability.

How can we ensure public confidence in these new and potentially profoundly disruptive technologies?

We're seeing spectacular uptake among users who, often amazed despite the flaws, still rarely question the opacity of the models or how these services use their data. The rise of open source will bring greater clarity, in the sense that it will be easier to debate how AI works and to evaluate models, at least among insiders. There have been recent trends to the contrary, however. OpenAI, now open in name only, has made GPT increasingly opaque, disclosing little information about the data used and now even concealing the number of parameters.

Beyond focusing on models, which are bound to keep evolving radically, regulation, which is constantly threatened with obsolescence, will probably have to focus more robustly on how AI is used. The major challenge for the public is to preserve a sense of reality in the face of the infinite manipulations made possible by generative AI.

How can we involve both large companies and smaller AI developers in the decision-making process, so as to avoid stifling any small initiative?

The giants of the sector readily invite themselves to the regulatory table. Sam Altman presented himself as the champion of model licensing at his recent hearing before the US Congress. That did not stop him from threatening the EU, whose draft is stricter than the proposals circulating in the US. Digital giants naturally have an interest in embracing regulations that raise the bar for new entrants, and regulators must take into account the risk of regulatory capture.

However, the boundary between the giants and more independent initiatives is blurred by the fascinating logic of open source. Meta's model, LLaMA, is open source, which means developers can build on and adapt it. Even OpenAI is discussing the possibility of releasing open-source models in the future, alongside its main model. Making the technology reliable requires much broader involvement.

Europe has shown that it wants to be the first to legislate, given the scale of the issue. Is there not a risk of moving too fast? Can we really update regulations to enable secure, trusted AI?

The subject is urgent, given the risks pointed out by the very designers of these models, from Sam Altman, the head of OpenAI, to Geoffrey Hinton, "the godfather of AI," who resigned from Google to express his concern for humanity. The EU began its work of regulating AI on the basis of a technological landscape extremely different from the one that language models have shaped since 2017. Beyond the urgency of regulating AI, it is above all necessary to do so robustly, on the basis of principles that stand the test of time, particularly with regard to the opacity of models: how they use data and how they produce their results.

This interview was originally published in French by Atlantico, a leading French online news website featuring daily interviews with experts and decision-makers on business and politics.

AUTHOR

Rémi Bourgeot

Principal Economist, Europe
The Conference Board

