The European Union (EU) has taken a significant step by introducing the AI Act, which focuses on regulating high-risk areas of AI technology usage. This legislation, referred to as “historic” by EU Commissioner Thierry Breton, adopts a risk-based approach to oversee AI applications.

The AI Act Overview

The AI Act targets high-risk areas such as government use of AI for biometric surveillance and generative systems similar to ChatGPT, requiring transparency before these technologies are introduced to the market. The recent landmark vote follows a December 2023 political agreement and caps months of careful refinement of the legal text ahead of legislative approval.

The vote, held by permanent representatives of all EU member states on Feb. 2, marks the conclusion of negotiations. It paves the way for the act to progress through the legislative process, including a vote by a key EU lawmaker committee scheduled for Feb. 13 and an expected vote in the European Parliament in March or April.

Key Highlights of the AI Act

The AI Act’s approach is centered on the principle that the riskier the AI application, the greater the responsibility placed on its developers. This principle is particularly significant in sensitive areas such as job recruitment and educational admissions. Margrethe Vestager, Executive Vice President of the European Commission for a Europe Fit for the Digital Age, has emphasized the focus on high-risk cases to align AI technologies with EU values and standards.

Implementation of the AI Act is anticipated in 2026, with specific provisions taking effect earlier to allow a gradual transition to the new regulatory framework. The European Commission is also supporting the EU’s AI ecosystem by creating an AI Office responsible for monitoring compliance with the act, with a particular focus on high-impact foundation models that pose systemic risks.

Regulatory Framework and Enforcement

The EU’s AI Act is set to be the world’s first comprehensive AI law, regulating the use of artificial intelligence in the EU to ensure better conditions for deployment, protect individuals, and promote trust in AI systems. It is structured around four levels of risk, providing a clear and easy-to-understand framework for AI regulation. Enforcement will be carried out by national market surveillance authorities, supported by a European AI Office within the EU Commission.

Stricter Crypto Regulations in the EU

In addition to the AI Act, the EU has proposed categorizing cryptocurrencies as financial instruments and imposing stricter regulations on non-EU crypto firms. These measures aim to curb unfair competition and standardize regulations for crypto entities operating within the EU.

The European Securities and Markets Authority (ESMA) has introduced a second set of guidelines covering non-EU-based crypto firms, emphasizing the need for regulatory clarity and investor protection. This initiative is part of a broader effort to safeguard investors and promote the growth of crypto services within the EU.

For more updates and news on cryptocurrency regulations and AI developments, stay tuned to Global Crypto News.