On June 14, 2023, the European Parliament voted on a bill that aims to comprehensively regulate artificial intelligence in the European Union. The Artificial Intelligence Act (AI Act) was adopted by a large majority (499 votes in favor, 93 abstentions, and 29 against), despite controversy over whether authorities should be allowed to use artificial intelligence systems to identify people, for example to prevent terrorist attacks. Such use of AI is highly contentious because it clashes with the fundamental principles of a democratic state. This is particularly evident in the example of China, which for several years has practiced so-called social scoring, i.e., classifying citizens based on their behavior, social status, or physical characteristics. Concerns about similar uses of AI in European Union countries led to the rejection of an amendment that would have enabled investigative authorities to identify citizens in this way. As the European Parliament emphasized in its position, the overarching goal of the AI Act is to protect democratic values and limit the negative effects of the use of artificial intelligence.
As for the content of the act itself, the legislator focused primarily on classifying AI systems by the level of risk they pose. The AI Act divides AI systems into four categories: unacceptable risk, high risk, low risk, and minimal risk.
Unacceptable risk means a risk that poses a threat to citizens; systems in this category are prohibited. It covers artificial intelligence that uses biometric technology to identify and categorize people, in particular technology that recognizes people in real time.
High-risk artificial intelligence, in turn, threatens citizens' safety or fundamental rights. It may therefore be used only after a number of requirements have been met, and it will be subject to assessment throughout its operational life. The AI Act classifies as high-risk:
- Artificial intelligence systems to which EU product safety regulations apply.
- Artificial intelligence systems in eight specific areas, which will have to be registered in an EU database: school education and vocational training; employment and workforce management; management and operation of critical infrastructure; law enforcement; access to essential private services; legal aid; criminal investigation; and biometric identification and categorization of individuals.
AI systems characterized by low or minimal risk will also have to meet certain requirements, chiefly transparency: such systems should make it clear to users that they are interacting with AI, so that users can make informed decisions.
The entities mainly affected by the AI Act are providers and users of artificial intelligence. Providers of AI systems are subject to numerous obligations that depend on the risk associated with their products; for example, a provider whose system is classified as high-risk will be required to register it in the appropriate database. The case of ChatGPT, which is classified as a generative artificial intelligence system, is an interesting one. Generative AI systems create new text, images, and graphics based on descriptions provided by users, which raises copyright issues. For this reason, ChatGPT and other generative artificial intelligence systems have been classified as high-risk and will have to meet the following requirements:
- designing the system so that it does not generate illegal content,
- informing the user that the content was created by artificial intelligence,
- documenting the use of copyrighted works.
These requirements significantly burden AI system providers. Although many major providers, such as Google and Facebook, have advocated for the introduction of AI regulation, they ultimately consider the current shape of the AI Act detrimental to the industry as a whole. Representatives of the largest AI companies have emphasized that the current text significantly restricts the use and development of AI systems, which could slow the technology's progress. Indeed, after Parliament adopted the AI Act, Google held back the launch of its new chatbot, Bard, in the European Union. Will this decision lead to a relaxation of the AI Act? The current text is not final: following its adoption, negotiations will begin with the Council of the European Union and among member states. Whether those negotiations will significantly change the AI Act, easing or tightening the requirements for AI system providers, is a difficult question to answer. The European Union is certainly committed to implementing an act on artificial intelligence that protects citizens from the negative consequences of its use and serves as an example for other countries on how to regulate AI, much as the GDPR became a reference point for personal data protection regulation in the United States and elsewhere. It is unclear, however, whether the demands of AI system providers will influence the final shape of the AI Act. The EU should certainly take the industry's views into account, but a relaxation of the AI Act seems unlikely.
This alert is for informational purposes only and does not constitute legal advice.
