The era of unregulated artificial intelligence is gradually coming to an end. Since the boom of ChatGPT and other generative AI tools, regulation has struggled to keep pace with the technology's development. That has begun to change with the publication of the EU AI Act: the European Union has taken the first steps toward reducing the risks associated with developing and deploying AI systems.
This is certainly just the beginning. The EU's regulation primarily addresses the safety of AI systems. Issues raised by creators, particularly copyright and the legal status of GenAI-generated works, have been postponed for now.
What does this change in business terms? As a company using AI in your internal or external processes, do you now face new obligations? In this article, we summarize the key provisions of the AI Act from a business perspective.
Key Dates for Compliance with the EU AI Act
The AI Act was published in the Official Journal of the European Union and entered into force on August 1, 2024. Here are the key compliance dates you should know as an entrepreneur using or planning to implement AI:
- February 2, 2025: Prohibited practices provisions become applicable.
- August 2, 2025: General-purpose AI models, governance, and sanctions provisions become applicable.
- August 2, 2026: General applicability of the AI Act after a 24-month transitional period.
- August 2, 2027: Full compliance required for certain high-risk AI systems.
- August 2, 2030: High-risk AI systems already in use by public authorities must be brought into compliance by this date.
How Does the EU AI Act Classify AI Systems?
Minimal Risk
- Definition: AI systems that pose little to no risk to users’ safety or fundamental rights.
- Examples: Spam filters, AI-enabled video games, automated email sorting, recommendation algorithms for movies or music, weather prediction models.
- Obligations: No mandatory requirements, but companies may voluntarily adopt codes of conduct.
Specific Transparency Risk
- Definition: AI systems whose main risk is that users may not realize they are interacting with AI or viewing AI-generated content.
- Examples: Chatbots, AI-generated content labeling, virtual customer service agents, AI writing assistants, deepfake technology used for entertainment.
- Obligations: Transparency obligations, such as informing users they are interacting with an AI system and labeling AI-generated content.
High Risk
- Definition: AI systems that pose significant risks to health, safety, or fundamental rights and may only be placed on the market if they meet strict requirements.
- Examples: AI-based medical software, AI systems used for recruitment, facial recognition technology for law enforcement, credit scoring systems, automated driving systems.
- Obligations: Compliance with stringent requirements, including risk assessment and mitigation, high-quality data, clear user information, and human oversight.
Unacceptable Risk
- Definition: AI systems that are considered a clear threat to fundamental rights and are banned.
- Examples: AI systems enabling “social scoring” by governments or companies, predictive policing based solely on profiling individuals, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), and systems that exploit the vulnerabilities of specific groups, such as children or persons with disabilities, for manipulation.
- Obligations: Outright prohibition; these systems may not be placed on the market, put into service, or used in the EU.
AI Act Obligations for Providers, Deployers, and Importers
Providers
Providers of high-risk AI systems must ensure their products meet stringent requirements before entering the market. This includes:
- Implementing robust risk management processes.
- Maintaining high standards of quality assurance.
- Ensuring the systems are safe and transparent.
Providers are also responsible for ongoing compliance through regular monitoring and updates of their AI systems. They must maintain detailed technical documentation and keep logs of the system’s operation to facilitate traceability and accountability.
Deployers
Deployers, or users, of high-risk AI systems must operate these systems according to the provided instructions. Their responsibilities include:
- Monitoring the AI system’s performance.
- Reporting any serious incidents or malfunctions immediately.
- Ensuring the AI systems are used as intended and do not compromise safety or fundamental rights.
Importers
Importers play a crucial role in ensuring only compliant AI systems enter the EU market. They must:
- Verify that the AI systems meet all regulatory requirements set forth by the AI Act.
- Ensure appropriate conformity assessment procedures have been carried out.
- Confirm necessary documentation and CE markings are in place before marketing the AI systems in the EU.
You can verify your obligations using the EU AI Act compliance checker.
Where Does the EU AI Act Apply?
The EU AI Act applies to companies providing, deploying, and importing AI systems within the European Union. It can also affect companies outside the EU if their AI systems, or the outputs those systems produce, are used within the EU market.
Non-EU companies must comply with the same regulations and standards as EU-based companies, including ensuring system compliance, documentation, CE marking, and registering high-risk AI systems.
How to Navigate AI Implementation in the Era of the AI Act? Let Us Do the Job!
By choosing our solutions, you don’t have to worry about compliance. Our team of specialists ensures that our AI systems adhere to all regulations, allowing you to fully leverage the power of AI without the burden of navigating the legal landscape. This way, you can focus on innovation and growth while staying aligned with the latest standards.