Authors: Peter Hense, Tea Mustac
Title: AI Act compact: Compliance, management & use cases in corporate practice
Publisher: Fachmedien Recht und Wirtschaft
ISBN: 9783800597734
Series: InTeR-Schriftenreihe, Vol. 1
Price: CHF 67.50
Subject area: Commercial and business law
Language: English
Pages: 329
Copy protection: Watermark
Compatible with: PC/MAC/eReader/Tablet
Format: ePUB
The EU AI Act is here, and contrary to popular opinion, it is not just Europe's problem. As the first comprehensive law to regulate AI systems, the AI Act attempts to establish a global framework, setting limits on dynamic technological developments and creating new legal responsibilities for organisations worldwide. The AI Act's definition of AI systems is expansive, covering a wide range of technologies, even those that, until recently, were considered traditional machine learning models. This makes understanding and preparing for compliance all the more critical, because if your business involves AI, the AI Act is now your business.

'AI Act Compact' is your go-to tool for tackling the challenges imposed by the Act. Written by Tea Mustac and Peter Hense, experienced legal experts and hosts of the podcast 'RegInt: Decoding AI Regulation', this book provides a deep dive into the AI Act's key provisions, processes, and real-world implications. The AI Act introduces a new risk-based framework, establishing compliance assessments and relying on harmonised standards. It imposes obligations for data governance, data quality management, accuracy and robustness, risk management, explainability, non-discrimination, accountability, liability, human controllability and more. Implementing these requirements presents significant practical challenges, especially given their broad application to the numerous actors along the AI supply chain.

Drawing heavily on international technical standards from CEN/CENELEC, ISO, IEC, and IEEE, the authors provide a practical toolkit for managing AI risks and ensuring compliance. Whether you're a lawyer, data scientist, or machine learning engineer, this book offers clear, actionable strategies for staying compliant and competitive in this fast-evolving landscape.

Peter Hense, lawyer and partner at Spirit Legal, specialises in the fields of technology, data, research and development, and privacy engineering. For over a decade, he has been working with leading R&D companies on machine learning, in particular artificial neural networks and knowledge graphs. His focus is on compliance and data governance in the context of automated decision-making systems (Accountable AI).

Tea Mustac, Mag. iur., is an expert in European and international technology and IP law at Spirit Legal. She advises and publishes at the interface of law and technology, with a particular focus on artificial intelligence. Together with Peter Hense, she hosts the English-language podcast 'RegInt: Decoding AI Regulation'.

1. The Scope of the AI Act


a. On AI Systems


(1) Introduction

After multiple conceptual changes to the definition, the AI Act was finally based on the OECD definition of an AI system. This is problematic for several reasons, starting with the fact that adopting this definition, intentionally or not, broadens the scope of an already very broad definition. Furthermore, the OECD definition was never intended to serve as a legal definition but rather as a programmatic statement typical of public policymaking. This makes it per se too vague to serve as one. Nonetheless, Article 3(1) of the AI Act now defines an “AI system” as:

  • A machine-based system,

  • Designed to operate with varying levels of autonomy,

  • That may exhibit adaptiveness after deployment,

  • That infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions, and

  • That can influence physical or virtual environments.

Recital 12 attempts to provide some clarity on the matter by stating that, firstly, autonomy is to be understood as some degree of independence of the AI system from the human operator. However, this clarification fails to consider that most systems today possess at least the minimum degree of independence associated with process automation. Just think of your spam filter. Yes, of course, we can go check the spam filter and see what the algorithm has sorted out as “spam”. We can also choose to override the algorithmic label. However, the main point is that the algorithm independently sorted an incoming email into the spam folder, which also means you saw the email five days later than you would have otherwise. Not that anyone is complaining, as this situation is still preferable to receiving all the spam emails in our regular folder. Still, if any degree of autonomy is sufficient to satisfy this criterion, then it may very well be the case that even very simple programs we have been using for years, or even decades, fulfil it.
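To see how low that bar sits, consider a deliberately primitive, rule-based filter. This is a hypothetical sketch of our own, not an example from the Act or the book; the keywords and function names are invented for illustration:

    # Hypothetical sketch: a static, rule-based spam filter. It contains no
    # learning component whatsoever, yet it acts on every incoming message
    # without a human in the loop.
    SPAM_KEYWORDS = {"lottery", "winner", "free money", "click here"}

    def route_email(subject: str, body: str) -> str:
        """Autonomously decide the destination folder for an incoming email."""
        text = f"{subject} {body}".lower()
        if any(keyword in text for keyword in SPAM_KEYWORDS):
            return "spam"  # sorted away before the user ever sees it
        return "inbox"

    # The user can inspect the spam folder and override the label later,
    # but the initial decision is made entirely by the program.
    print(route_email("You are a WINNER", "Claim your free money now"))  # spam

If “some degree of independence from the human operator” is all it takes, even a dozen-line program like this clears the autonomy hurdle.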

Secondly, in terms of adaptiveness, Recital 12 clarifies that it refers to self-learning capabilities, which allow the system to change while in use. Here, one might be tempted to sigh in relief, as many systems do not have such capabilities. However, this is where the AI Act definition crucially deviates from the OECD's, making the material scope of the AI Act virtually unlimited. While the OECD definition demands that AI systems be adaptive, the AI Act merely states that these systems “may exhibit adaptiveness”, meaning that they do not necessarily have to. To continue with our previous example, this implies that even our old-school spam filters, which sort our emails automatically but never improve over time, still fall within the definition, as adaptiveness is apparently not a decisive factor.

The third criterion is also clarified in the Recital. Inferences should be interpreted in light of development techniques that enable inference, which include “machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.” Furthermore, inference is not a specific feature of artificial intelligence, but a general process used in many fields of science, philosophy, and daily life, such as in statistical calculations or medical diagnoses. Inferences, according to the international conception of the term,1 are used to draw conclusions from data and models, for example, in predicting outcomes or classifying data. In machine learning, the inference phase is when the trained model is used to make new predictions or decisions. While this specific application is technical, the underlying process of reasoning exists in many other scientific and practical disciplines. Unfortunately, this again fails to serve as a distinctive criterion between many traditional systems used since the nineties and an “AI system”.
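To make the training/inference distinction concrete, the following minimal sketch (our own illustration using scikit-learn; the data and thresholds are invented, and nothing here is drawn from the Act or the book) separates the phase in which a model's parameters are fitted from the phase in which the frozen model draws conclusions from new inputs:

    # Hypothetical illustration of the two phases; the library choice and
    # toy data are our own assumptions.
    from sklearn.linear_model import LogisticRegression

    # Training phase: the model's parameters are fitted once, before deployment.
    X_train = [[0.10], [0.35], [0.40], [0.80], [0.90]]  # e.g. share of spammy words
    y_train = [0, 0, 0, 1, 1]                           # 0 = legitimate, 1 = spam
    model = LogisticRegression().fit(X_train, y_train)

    # Inference phase: the trained model draws conclusions from new inputs.
    # Unless it is retrained, nothing here is adaptive: the parameters fitted
    # above stay fixed while the system is in use.
    print(model.predict([[0.85]]))  # classify a new, unseen input

A purely logic- or knowledge-based system would replace the fitted model with hand-written rules, yet under Recital 12 both approaches equally “infer”.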

The fourth component, which involves influencing the AI system's environment, is not further clarified. However, it is fairly safe to say that integrating any kind of system into anything will necessarily influence that system's environment. The environment is to be understood as “the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems”. This can encompass