The EU AI Act provides a legal framework for the use of AI systems within Europe. In this article, read what this means for businesses and how it regulates the use of artificial intelligence.
There has been much discussion recently about artificial intelligence (“AI”). The use of AI systems (such as the well-known ChatGPT) has rapidly become widespread, and the possibilities they offer seem endless. However, significant risks (for example, ethical and privacy risks) are also attached to the use of such systems.
The European Union has therefore decided to create a legislative framework for the use of “AI systems”. On 1 August 2024, the “Artificial Intelligence Act” (“AI Act” for short) officially entered into force. The AI Act is a “regulation”, that is, European legislation that is directly applicable throughout the European Union.
The AI Act uses a broad definition of AI systems. It covers all systems that are “machine-based” and able to learn from data and adapt to new information. This includes a range of technologies, from simple algorithms to highly complex machine learning models.
The AI Act divides AI systems into several categories, taking into account the level of risk: unacceptable risk (prohibited practices), high risk, limited risk (subject to transparency obligations) and minimal risk, with separate rules for general-purpose AI (“GPAI”) models.
This classification allows the rules for each type of AI system to be defined based on risk.
As with the GDPR, European legislators are now also targeting companies that are based outside the EU but place their AI systems on the EU market.
The AI Act applies incrementally. Some important dates are: 2 February 2025 (prohibited practices and AI literacy obligations apply), 2 August 2025 (rules for GPAI models and governance), 2 August 2026 (most remaining provisions, including those for high-risk systems) and 2 August 2027 (extended transition for high-risk systems embedded in regulated products).
The aim of the AI Act is to allow the legislation to evolve with the technology – updates and amendments will therefore also be necessary.
With the AI Act, the EU hopes to set the same minimum standard for the use of AI systems as it previously did in the area of privacy (with the well-known ‘GDPR’).
The intention is therefore – as with the GDPR – to make the legislation enforceable against companies that do not comply with the new rules. As with the GDPR, there will be an “AI regulator” in each member state to oversee the rules (with the power to issue fines). For large general-purpose AI (“GPAI”) systems, there will also be an EU-level regulator to ensure appropriate monitoring.
Depending on the type of infringement, fines may be issued up to a certain percentage of annual global turnover. For the most serious category of breaches, fines may go up to EUR 35,000,000 or 7% of annual global turnover, whichever is higher.
Experience with the GDPR shows that such fines are actually issued, and the EU is serious about protecting its citizens.
Entrepreneurs will have to find out which AI systems are being used within their company, whether knowingly or unknowingly, and into which category these systems fall under the AI Act.
Based on this analysis, it will then be possible to determine which (internal) rules need to be introduced, for example on how employees should deal with AI systems, and to provide the necessary training.
Concretely, from 2 February 2025, companies must take measures to ensure that employees who work with AI systems have an adequate level of knowledge of AI.
Obviously, our experts at PKF BOFIDI Legal can offer support and guidance on this. Feel free to contact our team at info@pkfbofidilegal.com.