8.12.2025

EU AI Act – what accountants and small businesses need to know

The EU AI Act came into force on 1 August 2024 and it rolls out in stages until 2027/2028. Its aim is clear: ensure that AI systems used across the EU don’t put health, safety, or fundamental rights at risk.

If you’re an accountant or running a small business, you don’t need to become an AI law specialist. But you do need to know what’s coming and when.

Where things stand now

A significant milestone arrived in August 2025, when the first major obligations took effect for businesses, public authorities, and AI solution providers. The regulation's first provisions had already applied across the EU since February 2025, but from August 2025 the focus shifted to three key areas: penalties for non-compliance, obligations for general-purpose AI models (GPAI), and national and EU-level supervision.

Prohibited AI practices had to be discontinued from February 2025, and employers must now ensure their staff have adequate AI literacy. Non-compliance with the prohibitions can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
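As a rough illustration of how that upper bound scales with company size (a sketch only; the actual fine depends on the infringement and is set by the supervising authority):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound for the most serious AI Act infringements:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A firm with EUR 100m global turnover: 7% is EUR 7m,
# so the EUR 35m floor applies.
print(max_fine_eur(100_000_000))  # 35000000
```

For small businesses, the €35 million floor is almost always the binding figure; the 7% branch only matters above €500 million in global turnover.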

Supervision works on two levels. At national level, each member state must designate a market surveillance authority and a notifying authority. At EU level, the AI Office, the AI Board, and a scientific panel of independent experts share responsibility.

However, in November 2025 the European Commission published proposals for the Digital Omnibus on AI Regulation. It’s the EU’s attempt to tidy up a digital rulebook that’s grown complex, fragmented, and costly for businesses to manage. Once the proposal has passed through the EU legislative process, the picture may change.

The timeline: what has happened and what’s coming

The AI Act rolls out in stages, giving organisations time to adapt. Here’s the AI Act timeline:

  • 1 August 2024: Regulation entered into force
  • 2 February 2025: Prohibitions on dangerous AI uses began to apply
  • 2 August 2025: Rules for general-purpose AI models (GPAI) came into force
  • 2 August 2026: Most obligations apply, including rules for high-risk systems
  • 2 August 2027: Remaining rules for high-risk applications come into force

Note! The Digital Omnibus would extend the implementation timeline for high-risk AI rules to 2027/2028.

What’s banned?

Some uses of AI are simply prohibited. The bans have applied since 2 February 2025. Examples include:

  • Social scoring systems
  • Predicting criminal behaviour based on personality traits
  • Emotion recognition in workplaces and schools without medical justification
  • Manipulative or deceptive applications that could cause harm

Generative AI: what does the regulation require?

Generative AI (GPAI, general-purpose AI) refers to models that can produce text, images, audio, or video. Think ChatGPT or Midjourney, for example. Under the AI Act, providers of these models must produce technical documentation, publish summaries of their training data, and comply with copyright law.

Where a model poses systemic risk, the requirements go further, covering incident reporting and cybersecurity, among other areas. GPAI rules came into force on 2 August 2025.

You can read more on this topic on the European Commission’s website on generative AI.

High-risk systems: what counts?

High-risk AI systems sit at the centre of the regulation. These are systems where things going wrong could seriously harm people or society. The AI Act defines them in Annex III. Examples include:

  • Critical infrastructure (energy, water, transport)
  • Healthcare and medical devices
  • Recruitment and HR, for example automated candidate screening
  • Credit and insurance
  • Law enforcement and border control
  • The justice system, where AI supports decision-making

If a system falls into these categories, it faces strict requirements: risk management frameworks, CE marking, human oversight, and detailed documentation.

Managing business and HR: what you need to know

If you manage people, this regulation touches you directly. AI used in recruitment, career decisions, workforce analytics and employee monitoring now faces tighter rules. In practice, that means:

  • Non-discrimination and transparency in recruitment systems
  • Employees’ right to know how AI affects decisions about them
  • Responsibility for verifying that third-party vendors’ systems meet the requirements

For accountants, it’s also worth understanding how your clients’ systems measure up. That means getting comfortable with the key concepts.

Yes, compliance takes resources. But it also creates opportunity. Responsibly built AI can differentiate your organisation and build genuine trust with clients. The EU’s human-centred approach to AI creates a solid foundation for long-term, ethical development.

We follow the implementation of the AI Act closely and work to ensure our solutions stay current with both the regulation and the technology. Our aim is straightforward: responsible, transparent AI that supports your business.

What is a regulatory sandbox?

A regulatory sandbox is a controlled test environment where companies can develop and test AI solutions under the supervision of national competent authorities, with lower risk. The AI Act requires each Member State to establish at least one AI regulatory sandbox.

Sandboxes target SMEs in particular and are intended to be free and easy to access. They lower the barrier to trying new AI solutions and help you confirm they meet regulatory requirements before going live.

EU AI Act: key takeaways

The AI Act introduces new requirements in phases. For accountants and small businesses, the priorities are:

  • Understand the regulatory status of the systems you use
  • Follow guidance from national authorities
  • Build your own working knowledge of the regulation

AI in our solutions

We’re bringing AI into our solutions thoughtfully. We introduce new capabilities gradually, in incremental updates, so you get real value in your day-to-day work without surprises. Your needs, data security, and the requirements of the AI Act guide every development decision.

The goal is to give accountants and business owners more time back from routine tasks, and better insight for the decisions that matter.

Responsible AI. Practical benefits. That’s what we’re working with.
