With the AI Act, the EU aims to create a uniform legal framework for the development, placing on the market, putting into service and use of AI systems within the EU. This is being done in accordance with the EU's values: namely, to promote the uptake of human-centric and trustworthy AI, whilst ensuring a high level of protection of health, safety and fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the 'Charter').
The AI Act applies to providers that place AI systems on the market or put them into service, as well as to 'deployers' who use these systems. If, as an employer, you use AI systems – for example, in the recruitment process – you will primarily be subject to the obligations imposed on deployers.
AI literacy
The first obligation that has now come into effect (as of 2 February 2025) is to ensure a sufficient level of AI literacy. The goal is to ensure that all individuals involved in AI systems within the organisation have the necessary skills and knowledge to make informed decisions and use the AI systems responsibly.
When taking measures to ensure a sufficient level of AI literacy, the following must be taken into account:
- the technical knowledge, experience, education and training of staff and other people dealing with the operation and use of AI systems; and
- the context in which the AI systems are to be used.
Employers covered by the rules should also consider the people or groups of people on whom the AI systems will be used.
The AI Act does not specify what measures an employer must take to achieve a ‘sufficient’ level of AI literacy. This makes it difficult to demonstrate compliance with this obligation, but it also offers an opportunity for employers to determine what is ‘sufficient’ for their organisation and employees.
For this reason, organisations that use AI systems should organise training courses on AI literacy. Implementing a detailed responsible AI use policy would also contribute to meeting the AI literacy obligation.
But not all employees need to achieve the same level of AI literacy. It is not a ‘one size fits all’ obligation, but one that requires a more tailor-made approach. Nevertheless, everyone who comes into contact with AI is expected to understand the basic principles, as well as to be able to deal responsibly and critically with AI systems. Compliance with this obligation is an ongoing and dynamic process.
It is also of note that the AI Office, a body established within the European Commission as the 'centre of AI expertise', has published a 'living repository' of AI literacy practices. These are non-exhaustive and expected to be updated regularly. The aim of the repository is to encourage learning and knowledge sharing among providers and deployers of AI systems in respect of AI literacy. The document confirms, however, that implementing the practices set out will not automatically ensure compliance with the AI Act.
Prohibited AI practices
Since 2 February 2025, the AI Act has prohibited a number of practices in the field of AI deemed to be unacceptable. These are practices that are contrary to European fundamental norms and values, such as practices that violate the fundamental rights enshrined in the Charter.
For example, the following AI practices (among others) are now prohibited:
- AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques. This includes systems that push people to make decisions they would not otherwise have made, in a way that causes or is likely to cause significant harm;
- AI systems that exploit the vulnerabilities of a person or a specific group of people due to their age or disability, in order to materially distort their behaviour in a way that causes or is likely to cause significant harm;
- AI systems that evaluate or classify people based on their social behaviour or known, inferred or predicted personal or personality characteristics (known as 'social scoring'), where this leads to detrimental or unfavourable treatment;
- AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; and
- AI systems that infer the emotions of a person in the workplace or in education institutions, except where the AI system is used for medical or safety reasons.
Companies that develop or use prohibited AI practices are subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. When fines are imposed on SMEs and start-ups, their interests and economic viability are taken into account, and a lower fine may be imposed.
AI policy
In accordance with its obligations under the AI Act, the EU's AI Office is to encourage and facilitate the drawing up of codes of conduct, taking into account international approaches.
'Codes of conduct' is interpreted quite broadly in this context, and it is not entirely clear whether this includes an AI policy or not. Nevertheless, establishing such a policy is certainly recommended. In an AI policy, employers can set clear guidelines for the use of AI within the company. This can include which AI systems may be used, by whom, and to what extent AI systems may be used in respect of certain employees. The policy can also set out how staff are to remain sufficiently AI-literate.
Takeaway for employers
The first obligations under the AI Act are now in force. This long-awaited regulation – the first of its kind globally – is now starting to take root in the real world of business, so it is important that employers are fully aware of their obligations. Employers should:
- Map out which AI systems are used within their organisation;
- Classify these AI systems according to their risk level; and
- Stop using any AI systems which carry an unacceptable level of risk.
Following this, they should map out the current level of AI literacy within the organisation and assess what additional measures are needed (e.g. training or internal regulations).
Although not mandatory, we also recommend drafting an AI policy with clear guidelines on the use of AI within the company. We believe that drawing up an AI policy is a relatively straightforward and accessible way for employers to take the first step towards achieving a sufficient level of AI literacy.
Finally, it is important to keep an eye on developments coming down the track: the next obligations under the AI Act will take effect from 2 August 2025.
As explored above, the consequences of getting it wrong are material. We therefore recommend that employers seek professional legal advice if they have any questions about HR, privacy and AI within their organisation, or would like assistance with drawing up an AI policy or organising training courses.