It is clear that if we want to use AI for the good of society, we need to start providing guidelines, regulations, and most likely legislation around AI. The industry has been talking about this for many years, but governments have been slow to react. In general, governments are reactive rather than proactive when it comes to regulation, and in many cases that is indeed the best way to foster innovation. However, there are some existential issues with AI that require a swifter reaction. We have seen the negative impacts of social media. We cannot simply let technology – driven by commercial organisations – dictate how we use AI. These organisations exist to make a profit, not to safeguard the common good. So we need to act now; there is simply too much at stake.
As this is a global issue, we need to cooperate to find the best way forward. While most of these developments are coming out of the United States, that country favours a minimal level of regulation, so we cannot expect much social leadership from it. Over the last few years, the European Union (EU) has proven to be a more effective regulator. The EU has taken on social media platforms with hefty fines in cases where they have breached European law. It is also the first to start working on international guidelines for AI.
In my opinion, it makes sense for countries such as Australia, New Zealand, Canada, Japan, Korea, and others that have similar values to look at the EU model and, if possible, join them or work with them, rather than trying to invent their own set of guidelines. As mentioned, this is a global issue and requires a global approach.
Below is a summary of an article written by my European colleague J. Scott Marcus.
Adapting the European Union AI Act to Deal with Generative Artificial Intelligence
Generative artificial intelligence (AI) and the foundation models (FMs) on which it relies form a rapidly developing field with the potential to be both beneficial and harmful. Generative AI models can be used to create realistic and convincing text, images, and videos. This has a wide range of potential applications, such as in the creation of art, entertainment, and education. However, generative AI also has the potential to be used for malicious purposes, such as the creation of disinformation, the spread of hate speech, and the exploitation of vulnerable individuals.
The EU is currently in the process of drafting an AI Act, which would set out a regulatory framework for AI in the EU. The draft AI Act, as amended by the European Parliament, does not adequately distinguish between different types of generative AI models, and it does not differentiate the monitoring and compliance requirements imposed on different providers of generative AI models.
This article argues that the EU should amend the draft AI Act before enactment to regulate foundation models and generative AI in a way that better balances the need to protect the public with the need to promote innovation and productivity. The proposed amendments would include:
- A more nuanced approach to different foundation models and generative AI;
- A re-thinking of the provisions on the use of copyrighted data for training purposes, and a reflection as to whether they belong in this legislation at all;
- A mandatory incident reporting procedure as part of the quality control framework.
The article also re-emphasises the importance of good cybersecurity as regards foundation models and generative AI.
A Nuanced Approach to Different Foundation Models and Generative AI
The current legislative text seems to do a reasonably good job of protecting the public against harm, but treating large and small foundation model providers exactly the same risks impeding innovation by consolidating the market dominance of firms that already have a considerable lead in FMs. Larger firms are likely to be systemically more important, and also better able to afford regulatory compliance. At the same time, even small firms might produce FMs that work their way into applications and products that reflect high-risk uses of AI, so they cannot get a free pass. The principles of risk identification, testing and documentation should therefore apply to all FM providers, including providers of non-systemic foundation models, but the rigour of testing and verification could differ.
Exactly how to implement this differentiation is likely to require guidance, probably from the European Commission. As for the identification of foundation models that are so important as to require the most intensive possible monitoring, this would benefit from internationally agreed frameworks, technical standards and benchmarks.
Re-thinking the Provisions on the Use of Copyrighted Data for Training Purposes
EU copyright law, as revised in 2019, already provides an exception from copyright for text and data mining. The conditions under which royalties must be paid have also been modernised in the revised copyright law. The AI Act as amended by the European Parliament nonetheless requires providers of generative AI to publicly document the use of any copyrighted material in training data. If the use is explicitly permitted by copyright law, one must wonder whether the burdensome task of maintaining such a directory adds any value. Aside from that, there is no obvious reason to treat the use of copyrighted material differently for generative AI than for other online uses, which raises the question of why changes, if they were needed at all, are proposed here rather than as an amendment to EU copyright law.
A Mandatory Incident Reporting Procedure as Part of the Quality Control Framework
The current text of the AI Act requires any provider of a foundation model to have a quality control system in place, but says nothing about what that entails. The article suggests a mandatory incident reporting procedure for foundation models. This would help ensure that the risks posed by generative AI models are identified and addressed in a timely manner.
Requirements for Safety and Security
The article emphasises the importance of providers of generative AI investing in safety and security. This would help protect users from the risks of malicious attacks on generative AI models.
In conclusion, foundation models and the generative AI that they enable are powerful technologies with the potential to be both beneficial and harmful. The EU should revise the draft AI Act before enactment to help ensure that the benefits of foundation models and generative AI are realised while the risks, including risks to productivity and innovation, are mitigated. These amendments would help to create a regulatory framework that is fit for purpose in the age of generative AI.
Paul Budde