The AI Act and Management Responsibilities

Companies that prepare early for the EU’s AI Act will not only avoid fines and reputational damage but also strengthen customer and investor trust and gain a head start over competitors. Now is the right time to ensure your organization not only meets the new requirements but turns them into a strategic competitive advantage.

The EU’s AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) will reshape how companies design, deploy, and monitor AI systems across Europe, and management cannot afford to stand on the sidelines. In this blog, we look at what the Act really means for management and how preparing now can protect your business while building lasting trust.

The Role of Management

The AI Act sets obligations mainly for providers and deployers of AI systems. Even though it does not impose direct statutory duties on individual managers, the overall responsibility for compliance and governance lies with management. This means management must take ownership of creating structures that enable responsible AI use and ensuring sufficient financial and human resources are dedicated to compliance. Oversight cannot be handed over entirely to IT teams or outsourced to service providers.

For Finnish companies, this responsibility rests with boards and executives, who must ensure their organizations are ready to meet not only the legal requirements but also the growing expectations of customers, stakeholders, and authorities. Management needs to demonstrate that effective processes exist for identifying, monitoring, and mitigating the risks connected to AI systems.

Good governance goes beyond ticking boxes. It requires a clear understanding of how AI influences the business model, the organization’s reputation, and the rights of customers and employees. By taking an active role, management not only ensures compliance with the AI Act but also builds resilience against legal, financial, and operational risks, turning responsible AI use into a strategic advantage.

High-Risk AI in Practice

The Act places particular emphasis on high-risk AI systems, such as those used in recruitment, credit scoring, healthcare, and public services. Systems falling into this category are subject to stricter requirements for testing, monitoring, and documentation.

Take the example of a Finnish company using an AI-powered recruitment platform to streamline hiring. While the tool may improve efficiency, the organization must ensure it does not unintentionally discriminate against applicants on the basis of age, gender, or other protected characteristics. Meeting this responsibility requires establishing regular monitoring processes, training HR staff in the responsible use of AI, and maintaining thorough documentation that shows risks have been identified and addressed. Being able to demonstrate these measures is essential if authorities request evidence of compliance.

Why Action Is Needed

The consequences of non-compliance with the AI Act can be severe. Beyond potential administrative fines, organizations risk significant reputational damage if AI is used irresponsibly. Loss of customer trust and public confidence can be difficult to repair, and authorities are likely to scrutinize companies that fail to meet the expected standards.

Neglecting the responsibilities of the Act can expose boards and executives to questions under broader corporate governance and risk management standards, which increasingly emphasize ethical and responsible business practices.

A critical element of readiness is continuous training and education. Boards and executives need to understand the evolving landscape of AI risks, from algorithmic bias to security vulnerabilities. Only by embedding responsible practices and a culture of accountability can management safeguard both compliance and trust in AI-driven operations.

How to Prepare

Now is the time for management to take concrete steps toward AI Act readiness. Priorities may include setting up an internal AI governance group to coordinate efforts, providing targeted training for staff, and reviewing contracts with AI providers to ensure that issues such as liability, risk allocation, and data protection are clearly addressed.

Strong vendor risk management is essential, as the responsibility for compliance cannot be transferred to external providers. Regular audits and ongoing monitoring are particularly important for high-risk AI systems. Management should also embed AI oversight into the company’s broader compliance framework, ensuring that it aligns seamlessly with existing data protection and cybersecurity obligations.

The AI Act should not be viewed merely as a regulatory burden. For Finnish management, it represents an opportunity to demonstrate accountability, strengthen stakeholder trust, and turn responsible AI use into a strategic advantage. Organizations that act early will be best placed to manage risks effectively and to reassure customers, partners, and authorities that their use of AI is both lawful and trustworthy.

The AI Act is more than a regulatory challenge — it is a chance to show that your organization uses AI responsibly and transparently. We help leadership teams build clear processes, train staff, and manage risks so you can confidently tell customers, partners, and regulators: we are ready for the AI-driven future. Start preparing today and get in touch — we will help you turn compliance into a competitive advantage.


We are pleased to assist with any questions or challenges related to the AI Act and to support your organization in effectively preparing for these new obligations.






Key contacts

Otto Michelsen

Otto Michelsen is an expert in ICT contracts, data protection, and the legal aspects of emerging technologies. He is particularly skilled at guiding clients through data protection compliance, handling authority inquiries, and managing data-related disputes. Otto actively monitors the evolving EU data regulatory landscape and advises international organizations on how upcoming regulations impact their operations. He also supports companies in establishing effective data governance practices.

In addition, Otto has hands-on experience in building compliance programs and navigating complex scenarios involving sanctions legislation.

He holds the CIPP/E and CIPM certifications in data protection, awarded by the International Association of Privacy Professionals (IAPP).


The materials on the Eversheds Sutherland website are for general information purposes only and do not constitute legal advice. While reasonable care is taken to ensure accuracy, the materials may not reflect the most current legal developments. Eversheds Sutherland disclaims liability for actions taken based on the materials. Always consult a qualified lawyer for specific legal matters.