This year, the world's first comprehensive regulation of artificial intelligence (AI), the European Union's Artificial Intelligence Act, came into force, and it is binding on Latvia as well. In some areas, member states will have the right or the obligation to make adjustments, and the Act will be phased in gradually through the summer of 2027.
When clarifying the Act's requirements at the national level, it is important to avoid the mistakes made with the General Data Protection Regulation (GDPR), which resulted in an excessive bureaucratic burden.
The purpose of the AI Act is to ensure that AI systems are developed and used responsibly. However, it’s essential to prevent all responsibility from being placed solely on developers, which could impact competition and the pace of digitalization.
Considering the AI Act's impact on digitalization, system development quality, and costs, it is worth remembering why such an Act was needed and what it aims to achieve. The need for this regulation arose because, alongside the many benefits of AI, there are also risks of misuse, such as tools being created for manipulation or social control. Simply put, the new Act is designed to promote ethical, safe, transparent, and trustworthy use of AI.
Four Risk Levels
The AI Act classifies all AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. The new regulation completely prohibits AI systems deemed an unquestionable threat to human safety, livelihoods, and rights: the "unacceptable risk" category. Examples include government-led social scoring, as seen in China, and toys that use voice assistance to encourage dangerous behaviour.
High-risk systems, such as those used in critical infrastructure, surgical applications, exam assessments, hiring procedures, or migration and border control management, will be subject to the strictest requirements.
Next, there are limited-risk systems, such as recommendation systems based on previous purchases or search history, virtual assistants, translation systems, etc.
Finally, there are minimal-risk systems, such as email filters for detecting spam and unwanted content.
Since the AI Act includes penalties for non-compliance, including substantial fines, it will significantly affect IT system development and implementation processes. Developers and implementers will need a clear understanding of what the Act permits and prohibits in various situations.
At the same time, it is equally important that system clients or owners, including the government, understand the Act's goals and avoid creating an excessive bureaucratic burden.
Maintaining Digitalization Advantages
Analysing AI use and digitalization in the EU shows that Latvia is ahead of some Western European countries, such as Germany, in implementing and maintaining various critical infrastructure systems and in making state services accessible. It is important not to hinder this progress or to repeat the mistakes of the GDPR implementation, where misunderstandings about the regulation's requirements not only complicated various IT sector processes but also limited journalists' work.
Small and medium-sized businesses face particular challenges, so this time it’s crucial to focus on finding solutions to reduce financial and administrative burdens for this group.
Companies that use AI will need to balance innovation and legal compliance. In the system development process, initial risk assessment, data traceability procedures, quality control mechanisms, and detailed technical documentation throughout the development cycle will be required.
Responsibility for compliance with the AI Act’s requirements must be shared between the system owner (which in some cases includes government and municipal institutions) and the developer.
If most of the burden and risk falls solely on developers, responsible developers may be unwilling to participate in various IT system projects, potentially reducing competition and affecting quality. Another complicating factor is that fines for violations of the Act's requirements are calculated as a percentage of the company's total annual turnover from the previous financial year. This means that a fine for a violation could exceed the total cost of the development project in question.
This is one reason why we joined in founding the Latvian Artificial Intelligence Association (MILA) – to promote responsible AI use, facilitate cross-border cooperation and knowledge exchange, and share experience with government institutions.
The Association can be a strong and professional partner for the government, helping ensure proportionality in the AI Act's implementation process, so that responsible AI use is achieved without slowing down digitalization.