Read time: ca. 6 min

The AI Act: how the EU wants to create a safe and transparent future for artificial intelligence

written by Janina

 

The AI Act has been in force since August 1, 2024. With this new law, the EU has reached a significant milestone in the regulation of artificial intelligence (AI). The goal is to strike a balance between fostering innovation and protecting citizens' rights. The AI Act responds to the rapid development of AI technologies and to the risks and ethical challenges associated with their use. In this blog article, we highlight the key aspects of the AI Act and what the law means for businesses, software developers, and AI users.

What is the AI Act?

The AI Act is a comprehensive legislative act that creates a framework for the development, commercialization, and use of AI in the EU. The Act covers all AI models and systems and aims to ensure that they are safe and transparent and respect the fundamental rights of citizens. To this end, it categorizes AI applications according to their risk and sets out specific requirements and regulations for each category.

 

Categorization of AI systems

The AI Act divides AI systems into four main categories:

  1. Prohibited AI applications:
    AI applications classified as posing an unacceptable risk are prohibited. Six months after the Act comes into force, the first applications with unacceptable risk will be outlawed. These include systems that manipulate people or unethically influence their behavior, such as social scoring by governments.

  2. High risk:
    This category covers AI systems used in sensitive areas such as health, education, the labor market, or law enforcement, i.e. applications that could negatively affect people's safety and fundamental rights. Such systems must meet strict requirements for transparency, security, and fairness; for example, developers must ensure that their systems are based on robust and validated data. These rules will apply from August 2026.

  3. General-purpose AI (GPAI):
    Large language models fall under general-purpose AI models. Such applications can entail so-called systemic risks: the EU Commission notes that these powerful models could also be misused for far-reaching cyber attacks.

  4. Minimal risk:
    This category includes low-risk applications that do not have to meet specific requirements, such as AI-powered video games or spam filters.
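The four tiers above can be pictured as a simple lookup from application domain to risk category. The following Python sketch is purely illustrative: the domain names, the `DOMAIN_TIERS` mapping, and the default to minimal risk are assumptions made for this example, not anything the Act itself prescribes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the AI Act (illustrative)."""
    PROHIBITED = "prohibited"      # e.g. social scoring by governments
    HIGH_RISK = "high_risk"        # e.g. health, education, law enforcement
    GPAI = "gpai"                  # general-purpose AI models, e.g. LLMs
    MINIMAL_RISK = "minimal_risk"  # e.g. spam filters, video games

# Hypothetical mapping from application domains to risk tiers,
# loosely following the examples named in the Act.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "recruitment": RiskTier.HIGH_RISK,
    "medical_diagnosis": RiskTier.HIGH_RISK,
    "large_language_model": RiskTier.GPAI,
    "spam_filter": RiskTier.MINIMAL_RISK,
}

def classify(domain: str) -> RiskTier:
    """Return the risk tier for a known domain; default to minimal risk."""
    return DOMAIN_TIERS.get(domain, RiskTier.MINIMAL_RISK)
```

In practice, of course, classification under the Act is a legal assessment of the concrete use case, not a table lookup.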


Main requirements of the AI Act

The AI Act places various requirements on the development and use of AI systems, particularly in the category of high-risk applications.

Transparency: Users must be informed that they are interacting with an AI system and be made aware of its potential and limitations.

Safety and accuracy testing: High-risk AI systems must undergo extensive testing and certification to ensure their safety and accuracy.

Data governance: Software developers must ensure that the data used to develop AI systems is of high quality and representative. Discrimination and bias must be minimized.

Monitoring and control: Operators of AI systems must introduce mechanisms for monitoring and control to prevent misuse and malfunctions.
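To make the transparency requirement more concrete: a chatbot provider might disclose the AI interaction directly in its responses. The function below is a minimal sketch of that idea; the wording, the `system_name` parameter, and the placement of the notice are assumptions for this example, not requirements spelled out in the Act.

```python
def with_ai_disclosure(response: str, system_name: str = "Assistant") -> str:
    """Prefix a chatbot reply with a notice that the user is
    interacting with an AI system (illustrative wording only)."""
    notice = f"[Note: {system_name} is an AI system; answers may be inaccurate.]"
    return f"{notice}\n{response}"
```

A real implementation would more likely surface such a notice in the user interface once per session rather than in every message.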

 

How will the regulations be implemented?

A national supervisory authority must be established in each EU Member State, and each state appoints a representative of this authority to the European AI Board. In Germany, this role could fall to the data protection authorities or the Federal Network Agency. In addition, there will be an advisory forum and a European AI Office responsible for monitoring GPAI models; a scientific panel will support the AI Office with its expertise.

Impacts on companies and software development

The AI Act also has a number of changes in store for companies and developers who want to offer AI technologies in the EU. For providers of high-risk AI systems in particular, the new law means increased responsibility and additional compliance costs. Companies may need to rethink their development processes and invest in order to meet the new requirements. However, this could also become a competitive advantage, as certified security and transparency can increase user confidence.

 

Benefits for consumers

The AI Act offers numerous advantages for consumers. The strict transparency and security requirements are intended to minimize the risks of AI applications and strengthen citizens' trust in these technologies. The data governance and anti-discrimination measures are intended to help promote fair and equitable AI systems that respect the rights of all citizens.

 

Conclusion

The AI Act is an important step in the regulation of artificial intelligence and shows that the EU is determined to build a safe and transparent AI era. While companies and developers face new challenges, the Act also offers the opportunity to have a positive impact on society through higher standards and greater trust in AI systems. Only time will tell how the AI Act proves itself in practice and what adjustments may be necessary in the future to keep up with rapid technological advances.