
Balancing risk and reality: using AI at work

AI in the workplace can no longer be seen as a problem for tomorrow – it is firmly an issue for today. In this article, we explore how employers can balance the risks and realities of AI at work through good governance, building employee trust and innovating responsibly.
Why good AI governance is important

Like many things AI-related, ‘AI governance’ has become a buzzword in recent years. However, good governance underpins both legal and regulatory compliance. The EU AI Act's provisions continue to take effect via phased implementation, and various elements are now in force. These include the prohibition on certain AI practices (including the use of AI systems to infer emotions in the workplace), requirements to ensure staff have a sufficient level of AI literacy, and obligations for providers of general-purpose AI models. Alongside other existing legislation which may impact the development and/or deployment of AI, it’s fundamental to have a governance framework which helps rather than hinders your organisation’s compliant adoption of AI. In any event, the coming into force of the enforcement provisions of the EU AI Act on 2 August 2025 may also prompt more organisations to get their governance in order.

But good governance doesn’t just protect businesses. Governance, and AI literacy, help build employee trust in using the tools safely. AI isn’t an instant route to efficiency – that only comes after trial and error – and governance can ensure that trial and error happens safely. Employees are encouraged to use the tools, but within the necessary guardrails. For example, explaining to employees why they can’t put confidential information into the free version of an online AI tool helps them understand not only what they should not be doing, but also why, and how they can do the same thing with licensed products. Building that trust and a collaborative culture helps ensure not only safe use, but also that both businesses and their employees get the most out of the technology.

Equally, a well-governed AI environment can offer competitive advantages. Unlike other tech trends, AI is becoming commonplace in people’s everyday lives. Enabling and encouraging its use at work allows employees to spend less time on menial tasks and focus on the more meaningful parts of their role that they enjoy.

Shadow AI: The wild west of the workplace

Without effective governance and a way for employees to use AI safely, there is a significant risk of the workforce turning to ‘shadow AI’ – i.e. unauthorised or unmonitored use of AI by employees. This isn’t a new concept – IT professionals have been grappling with unauthorised tech for years – but because AI tools can ingest huge amounts of data, shadow AI presents a number of distinct risks.

For example, inputting personal data into an AI tool could breach data protection laws, while using a tool’s outputs without the appropriate licences could infringe IP rights. As mentioned, trust in AI use is important, but it’s a two-way street: employees using shadow AI could be putting themselves at disciplinary risk without realising it.

There is no perfect solution that completely avoids the use of shadow AI, but governance is a key mitigation tool. For example, employees may not understand the difference between free and enterprise versions of the same AI tool, or the implications of using a tool on a personal device for a work question. You can of course block certain websites (and monitoring has a valid place), but AI literacy can also build a broader understanding of AI’s opportunities and risks. Equally, policies are important to guide people on which tools they can use and how. These measures won’t turn a workforce into AI experts, but they will give employees enough to understand what they can and cannot do, and why.

A jurisdictional jigsaw

Jurisdictions are taking different approaches to regulating AI. Many countries are increasingly innovation-focused, while others are implementing prescriptive rules. However, the core principles and risks remain the same: security is security, transparency is transparency. There will be local nuances, of course, but a global organisation cannot adopt 20 different governance structures. It can, therefore, be effective to adopt a ‘principles-focused’ approach that is jurisdiction agnostic, with escalation routes for local nuances where necessary.

That said, it is worth remembering that regulatory interpretation of risk can vary. Take DeepSeek, for example: some regulators banned the AI Assistant incredibly quickly, before assessing it fully, whereas others first issued warnings and communications on the risks, or opened investigations.

In any event, while regulatory responses may differ, the risks of shadow AI are universal and jurisdiction agnostic governance can be an effective way to manage those risks.  

Takeaway for employers

AI isn’t going away, and neither are the risks – it is something to be addressed here and now. Whether it’s ensuring compliance with the EU AI Act, managing the risks of shadow AI, or navigating a patchwork of global regulatory responses, effective governance is pivotal. 

It is, however, important to remember that it isn’t just about risk mitigation. It’s also about enabling responsible innovation. With the right governance, policies, and employee engagement, AI can enhance productivity, support ethical decision-making, and even strengthen your brand. 

It’s time for employers to take stock and be honest about where their AI governance stands today – and to think ahead to where it needs to be tomorrow. Remember, the best AI strategies are not just built on rules, but on trust, transparency, and collaboration.

Ius Laboris

Ius Laboris is a leading international employment law practice combining the world’s leading employment, labour and pension firms. Our role lies in sharing insights and helping clients to navigate the world of labour and employment law successfully.