As we continue to monitor advancements in artificial intelligence, here are some significant developments shaping the future of AI, along with their implications for data governance, privacy and cybersecurity.
European Union: The AI Act Takes Effect
The European AI Act officially took effect on August 1, 2024, marking a pivotal moment in AI regulation. This ground-breaking legislation takes a risk-based approach, aiming to ensure responsible AI development by categorising AI systems by risk level, ranging from minimal to unacceptable. High-risk applications, such as those in healthcare and recruitment, will face stringent requirements, including risk mitigation and human oversight.
The EU is positioning itself as a global leader in safe AI and aims to foster an ecosystem where human rights and public safety are prioritised and one where there is public trust in AI as a transformative technology that benefits society as a whole.
The EU AI Act, similar to the EU GDPR, has a broad territorial scope. It applies to organisations within the European Union and to those outside the EU if their AI systems are used within the EU or affect people located in the EU. This means that public and private providers or users of AI technology, regardless of their location, must comply with the Act if their AI systems are placed on the EU market or impact individuals in the EU.
What does this mean for organisations developing and using AI systems?
The EU AI Act sets out clear governance, privacy and cybersecurity requirements for AI systems based on risk classification.
The cybersecurity requirement for high-risk AI systems includes four main elements:
- High-risk AI systems should be designed to be resilient against attempts by malicious third parties to alter their use, behaviour or performance, or to compromise their security properties, by exploiting the system’s vulnerabilities. In this context, an AI system is understood as software that includes one or several AI models as key components alongside other components such as interfaces, sensors, databases, network communication components, computing units, pre-processing software, or monitoring systems.
- Organisational and technical solutions shall be implemented to address these goals.
- A cybersecurity risk assessment must be conducted for high-risk AI systems; this assessment is a prerequisite for compliance with the EU AI Act.
- Technical solutions shall be appropriate to the relevant circumstances and risks.
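Taken together, these four elements amount to a simple checklist. As a minimal, purely illustrative sketch (the class and field names below are hypothetical, not terminology from the Act), an organisation might track its status against them like this:

```python
from dataclasses import dataclass


@dataclass
class CybersecurityChecklist:
    """Hypothetical tracker for the four cybersecurity elements above."""
    resilient_by_design: bool       # resilient against tampering with use, behaviour, performance
    controls_implemented: bool      # organisational and technical solutions in place
    risk_assessment_done: bool      # cybersecurity risk assessment conducted
    controls_proportionate: bool    # solutions appropriate to the circumstances and risks

    def all_elements_met(self) -> bool:
        """Every element must hold; any single gap leaves the requirement unmet."""
        return all([
            self.resilient_by_design,
            self.controls_implemented,
            self.risk_assessment_done,
            self.controls_proportionate,
        ])
```

The point of the sketch is that the four elements are cumulative: strong technical resilience does not compensate for a missing risk assessment, and vice versa.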
Here are the key compliance dates:
- February 2, 2025: Prohibitions on AI practices deemed to pose an unacceptable risk will start to apply.
- August 2, 2025: Rules for notified bodies, general-purpose AI models, governance, confidentiality, and penalties will come into effect.
- August 2, 2026: The majority of the Act's remaining provisions, including obligations for most high-risk AI systems, will become applicable.
- August 2, 2027: Providers of general-purpose AI models placed on the market before August 2, 2025 must comply with the AI Act by this date.
- By 2030: Obligations will apply to AI systems that are components of large-scale IT systems established by EU law in areas such as freedom, security, and justice.
Penalties for Non-Compliance with the EU AI Act
The penalties for non-compliance with the EU AI Act are as follows:
- €35 million or 7% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher, for violations of the Act’s prohibitions.
- €15 million or 3% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher, for most other violations, including those relating to high-risk AI systems.
- €7.5 million or 1% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher, for supplying misleading, incorrect or incomplete information.
The penalty provisions of the EU AI Act will come into force on August 2, 2025.
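Each tier's maximum fine is therefore the higher of a fixed amount and a percentage of worldwide annual turnover. As a rough illustration of that arithmetic (the function below is a sketch, not legal advice):

```python
def max_fine_eur(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the upper bound of a fine tier: the HIGHER of the fixed amount
    and the given percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)


# Prohibited-practices tier (EUR 35 million or 7%) for a firm with EUR 1 billion turnover:
# 7% of 1 billion = EUR 70 million, which exceeds EUR 35 million, so 70 million is the ceiling.
print(max_fine_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed amount dominates: at EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million figure sets the ceiling instead.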
How We Can Help
Saracen Consultancy specialises in guiding organisations through the complexities of AI governance and cybersecurity compliance. Our expert team is dedicated to helping your organisation navigate these regulations seamlessly.
We offer comprehensive services covering risk management, data governance, and compliance audits to ensure your AI systems adhere to regulatory requirements. We work with organisations to identify and mitigate risks, enhance transparency and implement robust human oversight mechanisms. Partner with Saracen Consultancy to achieve compliance with the EU AI Act and build trustworthy, compliant and secure AI systems.
Contact us today to learn how we can support your journey to compliance.