Reflections on The EU AI Act – Intent and Causation in Prohibited AI System Use Cases

By Zera Ong, Amazon Web Services, Sweden

July 18, 2023

Much has been written recently about AI, generative AI and the European Parliament’s adoption of its position on the AI Act. However, outstanding questions remain – what could the AI Act imply for AI go-to-market, and which key legal questions ought to be considered? This first instalment of the series discusses the potential implications of the AI Act by focusing on the legal concepts of intent and causation. In upcoming instalments, we will examine when legal liability should arise, seek clarification of important concepts which would trigger transparency or compliance obligations, and discuss questions surrounding the Act’s implementation.

The “What” and “Why” of the AI Act

First and foremost, readers might ask: what is the AI Act, and why should public and commercial actors, as well as individuals, pay attention to developments in this area? The AI Act is a set of regulations governing the use of AI within the EU, with four stated objectives: 1/ Ensure that AI systems placed on the EU market are safe and in line with existing law on fundamental rights and Union values, 2/ Ensure legal certainty to facilitate investment and innovation in AI, 3/ Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements, 4/ Facilitate the development of a single market for lawful, safe and trustworthy AI.

The Act is currently in the legislative process – the European Parliament adopted its negotiating position on June 14th, 2023; the Act will now proceed to the final stage of trilogue negotiations among the European Parliament, the Council of the European Union and the European Commission.

It is important that organisations and individuals understand the Act, as the use of AI systems is likely to increase and perhaps become pervasive in our daily lives. The Act seeks to foster innovation in the AI sphere while ensuring that Union fundamental values such as the rule of law, human rights, freedom and democracy are protected. A concrete, recent example of AI’s real-world consequences, as reported by the BBC, involves a US lawyer who used ChatGPT for his legal research. “Hallucinations” in the output, coupled with a lack of verification by the lawyers, resulted in a court submission citing six “cases” which did not exist. The use of AI systems thus has the potential to affect our daily lives and the way society, including our justice systems, operates. The AI Act, which governs the use of AI, therefore merits attention from organisations and individuals alike.

Legal Basis for the Act

Two legal concepts are key to understanding the Act: 1/ Subsidiarity, where EU-wide co-ordination is needed to ensure a harmonised internal market for AI systems, 2/ Proportionality, where the stringency of measures is proportional to the degree of risk associated with specific AI systems.

Where proportionality is concerned, the AI Act classifies AI use cases into three categories: 1/ Cases where there is an outright prohibition on the use of AI, 2/ “High-risk AI systems” which are subject to compliance obligations (including but not limited to user transparency and data quality obligations), 3/ All other AI systems, which are not subject to compliance obligations but where voluntary compliance is encouraged.
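
To make this tiered, risk-based structure concrete, the following minimal sketch models the three categories as a simple classification step in Python. The tier names, the example use cases and the classify_use_case helper are the author’s own illustrative assumptions; they are not terms drawn from the text of the Act, which defines the categories through detailed articles and annexes rather than keywords.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "outright prohibition on use"
    HIGH_RISK = "subject to compliance obligations"
    MINIMAL = "voluntary compliance encouraged"

# Purely illustrative mapping: the Act enumerates prohibited practices and
# high-risk systems in its articles and annexes; it does not classify by keyword.
EXAMPLE_TIERS = {
    "subliminal manipulation causing harm": RiskTier.PROHIBITED,
    "cv screening for recruitment": RiskTier.HIGH_RISK,
    "spam filtering": RiskTier.MINIMAL,
}

def classify_use_case(description: str) -> RiskTier:
    """Toy lookup standing in for the Act's detailed legal tests."""
    return EXAMPLE_TIERS.get(description, RiskTier.MINIMAL)

for use_case in EXAMPLE_TIERS:
    print(f"{use_case!r} -> {classify_use_case(use_case).value}")
```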

Definition Dilemma: What Counts as an “AI System”?

The first legal question that lawmakers have to grapple with is often one of definitions. Definitions are key as they impact the scope of the legislation; a good definition provides legal certainty whilst ensuring sufficient flexibility to capture relevant future developments.

In this case, the definition of “AI system” has been widely debated. The Proposal defines “AI systems” as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. There are three techniques and approaches referred to in Annex I: “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches… (c) Statistical approaches, Bayesian estimation, search and optimization methods”.

Criticism of the “AI system” definition cuts both ways: some argue that it is too narrow, others that it is too broad. On the one hand, future developments could mean that AI systems are built with new techniques and approaches not covered under Annex I, leaving them outside the definition. On the other hand, Ebers and others caution that the current definition is so broad that it does not provide sufficient legal certainty, and could result in over-regulation and stifle innovation. Others, such as Smuha, argue that instead of defining “AI systems” by functional characteristics and the techniques used, the Act should regulate on the basis of high-risk domains.

As the critique of the “AI system” definition has been widely discussed elsewhere, the focus here is on other pertinent questions, specifically the legal concepts of intent and causation.

Prohibited Use Cases – Why Intent & Causation Matter

Intent and causation matter as these arise in two of four prohibited AI use cases. Specifically, the Proposal states that “the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” is forbidden. Also forbidden is “the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm”.

“In order to” refers to intent, and “causes or is likely to cause” invokes the element of causation.

What Counts as “Intent”?

It is important to clarify what counts as “intent”, and the content of such an intention, as this dictates which AI system use cases are prohibited. Here, we can draw parallels with UK and US law. In UK criminal law, under section 8 of the Criminal Justice Act 1967, “a court or jury, in determining whether a person has committed an offence, (a) shall not be bound in law to infer that he intended or foresaw a result of his actions by reason only of its being a natural and probable consequence of those actions; but (b) shall decide whether he did intend or foresee that result by reference to all the evidence, drawing such inferences from the evidence as appear proper in the circumstances”. On this interpretation, intent is highly contextual: it cannot be inferred merely from the natural and probable consequences of an action and, in the UK criminal law context, depends on the jury’s assessment of all the evidence, i.e. whether the defendant actually intended to commit the crime. In the US Model Penal Code (“MPC”), there are four categories of criminal intent: acting “purposely”, “knowingly”, “recklessly” and “negligently”. Interestingly, the MPC states that “intentionally” or “with intent” refers to the highest of these categories (“purposely”). This means that intention only exists where the defendant’s goal was to bring about the prohibited result.
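
As a rough illustration of the MPC hierarchy described above, the minimal sketch below orders the four mental states and checks whether a given state clears the “intent” bar. The numeric ordering and the meets_intent_threshold helper are illustrative assumptions made for this article, not statutory text.

```python
from enum import IntEnum

class MentalState(IntEnum):
    # Ordered from lowest to highest culpability, per the MPC's four categories.
    NEGLIGENTLY = 1
    RECKLESSLY = 2
    KNOWINGLY = 3
    PURPOSELY = 4

def meets_intent_threshold(state: MentalState) -> bool:
    # Under the MPC, "intentionally" / "with intent" maps to the highest
    # category (acting purposely); the lower categories do not suffice.
    return state >= MentalState.PURPOSELY

for state in MentalState:
    print(f"{state.name.lower():>12}: counts as intent? {meets_intent_threshold(state)}")
```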

Turning back to the AI Act, similar concepts could apply to intent: actors who place AI systems on the market would need to have actually had the goal of bringing about a certain result, such as materially distorting a person’s behaviour. Whilst this author agrees that intent cannot be imputed where factors are outside the provider’s or user’s control, there is a line between ethical and legal liability. Who should bear legal liability if such a distortion of human behaviour results from factors outside the provider’s or user’s control? Where an AI system involves machine learning, with human actors setting learning objectives and training data while the model’s weights and biases are learned rather than hand-specified, where is the line between factors within and outside our control? Questions of agency and responsibility will be discussed further in the next instalment – suffice to say that these topics merit further discussion.

Causation in the Context of Prohibited AI Use Cases

Aside from intent, causation is an important concept to examine. This is because, for the AI use case to be prohibited, an element aside from intent is required. Referring to the Act: “the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” is forbidden. Similar wording is adopted in the prohibition against using AI systems that exploit persons’ vulnerabilities.

This means that for the prohibition to be triggered, there must be a result element, and the threshold for that result is that the AI system actually causes, or is likely to cause, physical or psychological harm to a person.

What Does It Mean to “Cause” Harm?

On the surface, causation seems easy to establish; on closer examination, the question is a fraught one. Here, we will refer to English tort law principles in negligence, as its categories of causation make for a helpful reference. Firstly, are we referring to factual or legal causation? Secondly, what counts as a ‘cause’ – how strongly must the cause be proved to be tied to the effect for legal liability to be triggered? In elucidating the element of causation, we will assume here that the intent requirement has been proven. The following discussion will also focus on factual causation.

In factual causation, there are several gradations of “causing” a result to occur. The highest bar can be thought of as the ‘but-for’ test (Wright v Cambridge Medical Group (2011)): but for the cause, would the result have happened? If we were to interpret the AI Act in this manner, assuming the existence of requisite intent, we need to ask: but for the placing of the AI system on the EU market, would the resultant physical or psychological harm have happened? This might be a high bar to reach evidentially, and warrants a discussion on whether other standards of causation should be considered.

Consider the example of a person who might be more vulnerable to psychological harm, such as someone with an anxiety disorder, and assume again that the intent requirement is fulfilled. What would happen if one could not prove that, but for the AI system, the resultant psychological harm (e.g. a panic attack) would not have occurred? Would it be sufficient to prove that the AI system materially contributed to the physical or psychological harm (Holtby v Brigham & Cowan (2000))? It is worth noting that material contribution is itself a fact-sensitive standard. In Holtby, where several employers had contributed to the resultant physical harm (asbestosis from asbestos exposure), the court ruled that “the question should be whether at the end of the day, and on consideration of all the evidence, the claimant has proved that the defendant is responsible for the whole or a quantifiable part of his disability”.

Applying this to the AI Act, AI systems which materially contributed to physical or psychological harm would be caught by the prohibition. This author argues that the material contribution test would be more appropriate than the but-for test in interpreting the AI Act, because the but-for test might set too high an evidential bar, especially in the likely scenario that the resultant harm has been caused by several AI systems. In addition, an asymmetry of information and evidence may exist between providers and consumers of AI systems, which further weighs the argument in favour of a “material contribution” evidential bar.

Finally, in discussing causation, it is worth considering the approach in Fairchild v Glenhaven Funeral Services (2003). This case revolved around employees who had been exposed to asbestos by different employers, resulting in an increased risk of harm (mesothelioma). Where that risk had materialised into actual harm, but the onset of the disease could not be attributed to any particular or cumulative wrongful exposure, proof that each employer’s wrongdoing had materially increased the risk of harm was sufficient to establish causation. This is an interesting standard worth considering in the AI context. What if an AI system were placed on the market with the requisite intent, and this might lead to an increased risk of harm? Would that be sufficient to establish causation, such that an AI system which merely increases the risk of harm would be subject to the outright ban? What is especially relevant about the Fairchild test is that it acknowledges the reality that harm is sometimes caused by multiple actors. This parallels our evolving reality, in which we are likely to have an increasing number of touchpoints with AI systems. However, there is also a risk of over-regulating and stifling innovation. Developing technologies come with an inherent amount of risk – judgement calls must be made on what types and levels of risk society can tolerate to ensure we continue to reap the benefits of innovation.
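
To summarise how the causation standards discussed above differ in what they require a claimant to show, the sketch below models each test as a toy predicate. The data model, thresholds and example figures are illustrative assumptions only; causation in law is a fact-sensitive inquiry, not a computation.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    """A hypothetical contributing factor, e.g. one AI system among several."""
    name: str
    contribution_share: float  # assumed share of the harm attributable to this factor (0..1)
    risk_increase: float       # assumed increase in the risk of harm from this factor (0..1)

def but_for(harm_would_occur_without_factor: bool) -> bool:
    # But-for test (cf. Wright v Cambridge Medical Group): causation is made out
    # only if the harm would not have occurred absent the factor.
    return not harm_would_occur_without_factor

def material_contribution(factor: Factor, threshold: float = 0.1) -> bool:
    # Material contribution (cf. Holtby): a more-than-negligible share of the
    # harm suffices. The numeric threshold is purely illustrative.
    return factor.contribution_share > threshold

def material_increase_in_risk(factor: Factor, threshold: float = 0.1) -> bool:
    # Fairchild-style test: materially increasing the risk of harm suffices,
    # even where the harm cannot be traced to this factor in particular.
    return factor.risk_increase > threshold

ai_system = Factor("hypothetical recommender system", contribution_share=0.3, risk_increase=0.4)
print("but-for satisfied:", but_for(harm_would_occur_without_factor=True))
print("material contribution satisfied:", material_contribution(ai_system))
print("material increase in risk satisfied:", material_increase_in_risk(ai_system))
```

On these assumed figures, the but-for test fails while the other two standards are met, illustrating the article’s point that the choice of causation standard determines how widely the prohibition’s net is cast.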

The Ability to Foresee Harm: What is “Likely to Cause” Harm?

We have discussed intent (“in order to”) and what causing harm could entail. For the sake of completeness, it is worth mentioning that the clause “likely to cause (harm)” also raises legal questions. How remote can the physical or psychological harm be and still be considered a “likely” result? This author argues that, at minimum, the threshold for “likely to cause harm” should adopt a test of foreseeability. In other words, where the AI system provider had the requisite intent, the outright ban should apply where the provider “foresaw or could reasonably have foreseen the intervening events which led to” the harm, even if the damage was not a “direct” or “natural” consequence of the AI system being placed on the market (Wagon Mound (No. 1) (1961)). Taken together with the intent to materially distort behaviour or exploit vulnerabilities, a standard incorporating foreseeability would provide a minimum level of protection against such AI use cases.

Some might even go a step further and argue that foreseeability extends not just to the resultant harm itself, but also to cases where: 1/ there was a “known source of danger”, and 2/ that source of danger acted in an unpredictable way (Hughes v Lord Advocate (1963)). Proponents would argue that in such cases the harm was still reasonably foreseeable. This author is doubtful as to whether this would be an appropriate test in the AI context, because we do not always have full insight into how machine learning models arrive at their predictions. Adopting such a broad standard for “reasonably foreseeable consequences” risks over-regulating AI and hindering innovation. Agency, duty of care and how we attribute legal responsibility will be discussed in future instalments.

Summary

In this article we have discussed potential interpretations of the AI Act, especially its prohibited use cases, through the lens of the legal concepts of intent and causation. If there were one takeaway from the above discussion, it would be that there is no clear-cut interpretation: further debate will be needed to iron out the legalistic creases within an evolving AI landscape. That being said, the legislative process has shone a light on pertinent questions which will affect our daily lives. Let us take heart in the fact that AI and its implications have been brought to the forefront of dinner table conversations, and in the healthy debate that our society is engaging in.

Disclaimer: Please note that this article has been written in the author’s personal capacity. The discussion above is based on the Proposal (the “Proposal”) for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (“the Act”). The Act is still in the legislative process – as the final wording has not been released, all commentary reflects the position as of the article’s publication date.