Regulation (EU) 2024/1689 of the European Parliament and of the Council, known as the Artificial Intelligence Act (AI Act), entered into force on August 1, 2024. The regulation establishes uniform rules for artificial intelligence (AI) across the European Union, with the goal of ensuring that AI is developed and used in a way that is safe, transparent, and compliant with citizens' rights. Here are the key aspects and implications of the act, explained in an accessible way for anyone interested in AI.
Key aspects of the AI Act
- Categorizing AI Systems: The AI Act divides AI systems into four risk categories: minimal, limited, high, and unacceptable. Each category is subject to different regulations:
- Minimal Risk: The least restrictive category, it includes AI systems that pose no significant threat to users. Examples include email spam filters or simple product recommendation systems in online stores.
- Limited Risk: These systems must meet basic transparency requirements. Examples include news chatbots and social media sentiment analysis apps. Companies must inform users that they are interacting with an AI system.
- High Risk: These systems are the most regulated. Examples include AI used in medical diagnostics, facial recognition systems used by law enforcement, and traffic management systems in autonomous vehicles. They must undergo extensive compliance assessments, audits, and monitoring to ensure their safe and ethical operation.
- Unacceptable Risk: This category covers AI systems that could manipulate human behavior in unethical ways, such as AI used for subliminal advertising or social scoring systems that could lead to discrimination. These systems are strictly prohibited.
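As a purely illustrative sketch, the four-tier scheme above can be modeled as a simple enumeration. The tier names and the mapping of example systems are paraphrased from this article, not legal definitions; real classification under the Act depends on detailed legal criteria, not labels like these.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described above (illustrative labels only)."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical mapping of the example systems from the text to their tiers.
EXAMPLES = {
    "email spam filter": RiskTier.MINIMAL,
    "news chatbot": RiskTier.LIMITED,
    "medical diagnostics AI": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def is_prohibited(system: str) -> bool:
    """Unacceptable-risk systems are banned outright; all other tiers
    are regulated with increasing strictness rather than prohibited."""
    return EXAMPLES.get(system) is RiskTier.UNACCEPTABLE

print(is_prohibited("social scoring system"))  # True
print(is_prohibited("email spam filter"))      # False
```

The point of the sketch is only that the Act's obligations scale with the tier: minimal-risk systems face almost none, while unacceptable-risk systems are banned entirely.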
- Transparency and explainability: The regulation requires AI systems to be understandable to users. Companies must provide information about how their AI systems work and the decisions they make. For example, if a medical app uses AI to make a diagnosis, the patient must know the basis for the decision. AI systems must be designed so that users can understand and, if necessary, challenge the results.
- Personal Data Protection: The AI Act closely aligns with the General Data Protection Regulation (GDPR). AI systems must protect users' personal data by ensuring its anonymization and security. For example, healthcare apps will be required to ensure patient data privacy. Companies will be required to implement safeguards to prevent unauthorized access and misuse of data.
- Oversight and accountability: Companies providing AI systems will be required to regularly verify that their products comply with the regulations, including by conducting audits and risk assessments. If an AI system causes harm, the company can be held accountable. Providers must ensure that AI systems are monitored throughout their lifecycle and undergo regular security reviews.
- Support for innovation: The AI Act provides financial support for research and development of AI technologies. The European Commission aims to support innovation by establishing AI centers of excellence and funding research projects. The goal is to create a favorable environment for the development of AI technologies that are aligned with EU values and ethical standards.
Benefits and challenges
Benefits: The AI Act aims to increase trust in AI technology by ensuring its security and transparency. This could accelerate the implementation of AI in various fields, such as medicine, transportation, and education. Uniform regulations across the EU will also facilitate the development and implementation of innovative solutions. Users will have greater confidence that AI systems are operating in their interests and are safe to use.
Challenges: Complying with new requirements can be challenging, especially for small and medium-sized businesses. Certification, auditing, and monitoring processes can be costly and time-consuming, requiring additional resources and expertise. Companies will need to invest in training and skills development for their employees to meet the new regulatory requirements.
Summary
The entry into force of the AI Act on August 1, 2024, is a significant step in the regulation of artificial intelligence at the global level. Through this legislation, the European Union aims to ensure that AI technologies are developed and used ethically, safely, and in accordance with citizens' rights. The act is intended to benefit both users and technology developers by ensuring transparency, security, and support for innovation. It also positions the EU as a leader in ethical AI, promoting the sustainable development of the technology worldwide.
This article is for informational purposes only and does not constitute legal advice.
Legal status as of August 1, 2024