After multiple conceptual changes to the definition, the AI Act was finally based on the OECD definition of an AI system. This is problematic for several reasons, starting with the fact that adopting this definition, intentionally or not, broadens the scope of an already very broad definition. Furthermore, the OECD definition was never intended to serve as a legal definition but rather as a programmatic statement typical of public policymaking, which makes it per se too vague to serve as one. Nonetheless, Article 3(1) of the AI Act now defines an “AI system” as:
– A machine-based system
– Designed to operate with varying levels of autonomy
– That may exhibit adaptiveness after deployment,
– That infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions, and
– That can influence physical or virtual environments.
Recital 12 attempts to provide some clarity on the matter by stating, firstly, that autonomy is to be understood as some degree of independence of the AI system from the human operator. However, this clarification fails to consider that most systems today possess at least a minimum degree of independence associated with process automation. Just think of your spam filter. Yes, of course, we can go check the spam filter and see what the algorithm has sorted out as “spam”, and we can also choose to override the algorithmic label. The main point, however, is that the algorithm independently sorted an incoming email into the spam folder, which also means you saw the email five days later than you would have otherwise. Not that anyone is complaining, as this situation is still preferable to receiving all the spam emails in our regular folder. Still, if any degree of autonomy is sufficient to satisfy this criterion, then even very simple programs we have been using for years, or even decades, may well fulfil it.
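To make the point concrete, consider a minimal, entirely hypothetical sketch of such an old-school spam filter. Nothing in it is learned or adaptive; it is a fixed, hand-written rule. Yet once deployed, it routes incoming mail without any human intervention, which is precisely the “some degree of independence” that Recital 12 describes:

```python
# Hypothetical, minimal rule-based spam filter. It sorts mail on its own
# (some degree of "autonomy" in the Recital 12 sense), yet it contains
# no learning component whatsoever.
SPAM_KEYWORDS = {"lottery", "prize", "wire transfer"}

def sort_email(subject: str, body: str) -> str:
    """Return the folder an incoming message is routed to."""
    text = f"{subject} {body}".lower()
    # Fixed, hand-written rule: the system decides independently,
    # but its behaviour never changes while in use.
    if any(keyword in text for keyword in SPAM_KEYWORDS):
        return "spam"
    return "inbox"

print(sort_email("You won the lottery!", "Claim your prize now"))  # spam
print(sort_email("Meeting agenda", "See attached notes"))          # inbox
```

If “any degree of autonomy” suffices, a decades-old program of this kind already satisfies the criterion.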
Secondly, in terms of adaptiveness, Recital 12 clarifies that it refers to self-learning capabilities, which allow the system to change while in use. Here, one might be tempted to sigh in relief, as many systems do not have such capabilities. However, this is where the AI Act definition crucially deviates from the OECD one, making the material scope of the AI Act virtually unlimited. While the OECD definition demands that AI systems be adaptive, the AI Act merely states that these systems “may exhibit adaptiveness”, meaning that they do not necessarily have to. To continue with our previous example, this implies that even our old-school spam filters, which do not improve over time and simply sort our emails automatically, still fall within the definition, as adaptiveness is apparently not a decisive factor.

The third criterion is also clarified in the Recital. Inferences should be interpreted in light of development techniques that enable inference, which include “machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.” Furthermore, inferences are not a specific feature of artificial intelligence but a general process used in many fields of science, philosophy, and daily life, such as statistical calculations or medical diagnoses. Inferences, according to the internationally accepted conception of the term,1 are used to draw conclusions from data and models, for example, in predicting outcomes or classifying data. In machine learning, the inference phase is when the trained model is used to make new predictions or decisions. While this specific application is technical, the underlying process of reasoning exists in many other scientific and practical disciplines. Unfortunately, this again fails to serve as a distinctive criterion between many traditional systems used since the nineties and an “AI system”.
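The breadth of “inference” can be illustrated with a deliberately simplified sketch. Both snippets below are hypothetical toy examples, not real implementations: the first mimics the inference phase of a trained statistical model (the parameters are assumed to come from an earlier training run), while the second encodes a fixed piece of expert knowledge. Both “infer from inputs how to generate outputs” in the sense of Recital 12:

```python
# (1) Machine-learning style: parameters were "learned" earlier; the
# inference phase merely applies them to new inputs. The values below
# are assumed, toy parameters, not the output of any real training run.
weight, bias = 0.8, -2.0

def predict_spam_score(num_suspicious_words: int) -> float:
    """Inference phase of a (toy) trained linear model."""
    return weight * num_suspicious_words + bias

# (2) Logic- and knowledge-based style: a hand-encoded rule with no
# training data involved, yet it also "infers" an output from an input.
def diagnose(temperature_celsius: float) -> str:
    """A (toy) knowledge-based medical rule."""
    return "fever" if temperature_celsius >= 38.0 else "normal"

print(predict_spam_score(5))  # a numeric score derived from learned parameters
print(diagnose(38.5))         # fever
```

Since a plain if-then rule from the nineties already performs the second kind of inference, the criterion does little to separate “AI systems” from traditional software.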
The fourth component, which involves influencing the AI system’s environment, is not further clarified. However, it is fairly safe to say that integrating any kind of system into anything will necessarily influence that system’s environment. The environment is to be understood as “the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems”. This can encompass