Recently, the EU has managed to strike a deal on the world’s first regulatory framework for AI. The goal of the Artificial Intelligence Act is to ensure that AI is used in a safe and fair manner. Let's explore the contents of the upcoming AI Act and understand why it’s important for businesses and organizations to start preparing without delay. In essence, the AI Act is a roadmap leading organizations toward a responsible and future-proof use of AI.
1. What's the deal with the AI Act?
The big idea of the AI Act is to sort AI systems into risk-based categories and impose the most stringent requirements on the use cases that carry significant risks. The four categories are as follows:
- Prohibited practices: Some AI applications, such as remote biometric identification in public places, are banned altogether because the EU does not deem them ethical
- High-risk AI: This covers AI tools that pose major risks to the public, such as automated recruitment tools and the management of critical infrastructure. The bulk of the AI Act's requirements target this category
- Limited-risk AI: Some AI applications, such as certain chatbots, are deemed limited risk and face lighter requirements, mainly around transparency
- Everyday AI: Applications such as AI in video games or spam filters face no extra rules because they are not seen as risky
2. How to identify which category your AI belongs to
The most important part of preparation for the AI Act is to map out your planned and existing AI applications, determine which of them fall into the scope of the AI Act, and identify the likely risk categories they’ll fall into. This is often also the most difficult part – while some applications can be categorized with a high level of certainty by referring to the list of high-risk use cases included in one of the Annexes of the AI Act, many use cases are far from clear-cut. It’s advisable to err on the side of caution and treat anything that might fall into the high-risk category as such. The good news for organizations struggling with this task is that the lawmakers decided during the final stage of the negotiations to give some more discretion to the deployers of AI systems as to what counts as high risk.
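The triage described above can be sketched in code. The following is a minimal, purely illustrative sketch: the application areas and keyword lists are our own shorthand, not the authoritative high-risk list in the AI Act's annexes, and real categorization requires a case-by-case legal assessment.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative examples only -- the authoritative list of high-risk use
# cases lives in the AI Act's annexes and must be checked case by case.
HIGH_RISK_AREAS = {"recruitment", "critical infrastructure", "credit scoring"}
LIMITED_RISK_AREAS = {"chatbot"}
MINIMAL_RISK_AREAS = {"spam filter", "video game"}

def triage(use_case_area: str) -> RiskCategory:
    """Rough first-pass triage of an AI use case by application area.

    Errs on the side of caution: unknown areas default to HIGH so that
    borderline cases get a full compliance review rather than slipping
    through as minimal risk.
    """
    area = use_case_area.lower()
    if area in HIGH_RISK_AREAS:
        return RiskCategory.HIGH
    if area in LIMITED_RISK_AREAS:
        return RiskCategory.LIMITED
    if area in MINIMAL_RISK_AREAS:
        return RiskCategory.MINIMAL
    return RiskCategory.HIGH  # cautious default for unclear cases

inventory = ["recruitment", "spam filter", "chatbot", "predictive maintenance"]
for app in inventory:
    print(app, "->", triage(app).value)
```

Note the cautious default: anything that cannot be confidently placed in a lighter category is flagged for full high-risk review, mirroring the err-on-the-side-of-caution advice above.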
3. How to play by the rules with high-risk AI
If your planned or existing AI application is a potential high-risk use case, you need to:
- Manage risks: You need to put in place a procedure for identifying, evaluating, and mitigating risks of an AI system throughout its life cycle
- Use quality data: Training, validation, and testing data needs to meet high-quality standards and be free of significant biases and errors
- Ensure accuracy: High-risk systems must achieve a reasonable level of accuracy and reliability
- Keep records: You need to document and maintain records of what your AI does to show compliance with the AI Act
- Maintain transparency: End users should always be made aware that they are dealing with AI
- Arrange human oversight: The AI's operation must be monitored by a person who can step in if the system starts going off the rails
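The six obligations above can be tracked as a simple compliance checklist. This is a hypothetical sketch: the field names are our own shorthand for the obligations listed above, not terminology from the AI Act itself.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Hypothetical checklist mirroring the six high-risk obligations."""
    risk_management_process: bool = False  # documented risk procedure in place
    data_quality_checked: bool = False     # training/validation/test data vetted
    accuracy_validated: bool = False       # accuracy and reliability measured
    records_kept: bool = False             # documentation and logs maintained
    users_informed: bool = False           # end users know they interact with AI
    human_oversight: bool = False          # a person can monitor and intervene

    def gaps(self) -> list[str]:
        """Return the names of obligations not yet fulfilled."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

status = HighRiskCompliance(risk_management_process=True, users_informed=True)
print("Open compliance gaps:", status.gaps())
```

A checklist like this is no substitute for legal review, but it makes it easy to see at a glance which obligations still need work for each high-risk system in your inventory.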
4. When will this start?
The AI Act is expected to become applicable to AI systems from 2025–2026 onwards, with obligations phasing in over time.
5. What happens if you don't follow the rules?
Failure to comply with the AI Act can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Conclusion
With the compromise agreement recently reached between the EU institutions, the AI Act is set to be finalized soon. After the Act is formally adopted, its technical details will still be fleshed out, and some of these details may prove as impactful for many businesses as the Act itself.
Two years is a short time to prepare. Thus, companies interested in reaping the benefits of innovative AI technologies should start preparations now. The AI Act shouldn’t be seen as an impediment to the use of AI. On the contrary, when properly understood and implemented, the AI Act provides organizations with a clear roadmap to the ethical and compliant use of AI.
Stay tuned, and let's get ready for an AI future that's not just smart, but also safe and sound.
✍️ This blog was a joint effort between Jonne Sjöholm, Vincit's Data Architect and Petja Piilola, Partner at Blic, a consulting agency specializing in societal analysis and lobbying.