Lex&Forum, 2 (2024)


D. Svantesson, AI and Private International Law


AI and Private International Law

Prof. Dan Svantesson

Faculty of Law, Bond University, Australia

Introduction

Artificial Intelligence (AI) currently dominates discussions in the legal community. However, so far, the private international law aspects of AI regulation have received comparatively little attention[1]. In this respect, the almost obsessive AI focus follows a familiar pattern. We saw the same when, for example, data privacy was the “hottest topic in town”. Virtually all attention was initially directed at the substantive law issues, and it is only now that the private international law issues are being properly noticed; commonly in the context of disappointment at ineffective cross-border enforcement[2].

I think we are heading in the same direction in our debates about the regulation of AI. With this paper, I aim to make at least a small contribution towards highlighting some private international law issues that ought to be front of mind now, rather than come up as an afterthought with unfortunate results.

In more detail, I will discuss AI in private international law from two perspectives. First, I examine how private international law regulates AI and what challenges we face in that setting. In this context, it is worthwhile to reflect on how much is just hype and how many of the challenges are real. The second part turns to something I first wrote about in 2019; namely, whether AI can help us solve some of the private international law issues we face in relation to the Internet.

Finally, in setting the scene, I want to note that AI is not, and will not be, an issue for us only in relation to the online world. However, my focus will be on the online side, even though AI will doubtless also raise many private international law issues in a more physical dimension.

Generally about private international law regulating AI

When we look at the online environment, the primary challenge we face in private international law is which connecting factors justify a claim of jurisdiction or of applicable law; that is, how we link a certain legal claim to a particular jurisdiction and a particular law. This is a first challenge on which we have made remarkably little progress, despite the fact that a substantial number of highly skilled legal scholars have been working on these issues for many years.

The second challenge is that of so-called “spillover effects”. Claims of jurisdiction or application of law online often go further than what was intended, or indeed what is reasonable. The result is that other countries may be affected, even though the country claiming jurisdiction or application of its law only has the right to do so in a more limited setting.

Relatedly, we have to confront a third challenge; that is, how we deal with competing, and overlapping, claims of jurisdiction and application of law. This is a common situation in the online environment, and it is a concern I will get back to below in the context of how AI actually may help us in relation to private international law.

To describe the situation we are in, with competing and overlapping claims of jurisdiction and application of law, I have previously used the term “hyper regulation”. We face a situation of “hyper regulation” where: “(1) the complexity of a party’s contextual legal system amounts to an unsurmountable obstacle to legal compliance, and (2) the risk of legal enforcement of—at least parts of—the laws that make up the contextual legal system is more than a theoretical possibility”[3]. This is a major challenge.

The competing and overlapping claims discussed above represent one side of the proverbial coin, but we also have to address the issue of “gaps”, which is our fourth challenge. Sometimes we find situations in which no country’s laws or jurisdiction applies, and that obviously also raises concerns.

Fifth, there is the challenge associated with the level of cooperation we can expect between states. The traditional vehicle for cooperation in private international law is the recognition and enforcement of foreign judgments, but that has proven a major challenge in the online environment given the differing laws involved. Put simply, there is a clear discrepancy between how willing states are to make broad claims of jurisdiction and how reluctant they are to recognise and enforce foreign judgments.

A sixth key challenge, impacting all the above, is the question of what role internet intermediaries should play. Internet intermediaries can (for good and bad) be gatekeepers for online content, and they can be easy targets for litigants unable to identify (or unconcerned about identifying) the person who actually posts content online. They are criticised for not being sufficiently proactive and, at the same time, are said to exercise content control going beyond their authority and mandate; a tricky situation indeed. The exact roles and responsibilities of internet intermediaries remain key issues to be settled, without which we will never achieve effective online regulation.

All six challenges outlined above apply equally to AI situations. And given that private international law generally aims to be technologically neutral, I do not think that the AI context in general adds all that much new. For example, we typically end up with the same search for connecting factors. Indeed, the more AI can do, and the more AI resembles a human in its output, the more we may see that the issues are the same as those we faced before the AI hype.

Anything novel?

If we try to identify challenges specific to the application of private international law in the AI context, we find very little in the literature[4]. Of course, AI can complicate matters such as identifying whom to take legal action against, and this can become even more complicated with phenomena such as decentralised autonomous organisations (DAOs). And generally, the more complex a product is, the harder it becomes to identify who is to “blame” if something goes wrong. This search for who is liable obviously has a private international law dimension.

Furthermore, there will be issues relating to contract formation and performance where the contract has some sort of AI involvement. These issues may well be a challenge, but it is not a new one, as we already face it in other settings. The same applies to so-called “targeting”. For example, in the consumer protection setting, private international law rules may require that a supplier has targeted a certain jurisdiction for the consumer to benefit from certain consumer protection rules (e.g., the right to sue at home and take action under their “home law”). We may ask how these rules play out in an AI setting: new twists, but the same old issues. Similarly, we encounter the same thing with non-contractual wrongs; whether AI is involved or not, we seem to face the never-ending search for a relevant “location”, often anchored in the type of legal fictions that only a lawyer can take seriously.

The location of training

The one area that has received more attention than others at the intersection of private international law and AI is intellectual property (IP). In particular, issues have arisen where data is scraped and used to train AI in another jurisdiction, with the resulting question being where any copyright infringement takes place in such situations.

Defining the location of training in the context of AI development frequently takes us deep into the territory of legal fictions. Imagine, for example, that an Italian company is seeking to develop an AI tool. The AI model and the training data sit on cloud servers in the US and Singapore, and the staff working on the AI model are based in India and Nigeria. Where is the location of the training? Only a lawyer would imagine that we can point to one specific location in such a scenario. We may of course seek to establish some sort of centre of gravity of the operation and point to that as our location of the training. But we may equally well conclude that the training takes place simultaneously in all the above-mentioned jurisdictions.

Furthermore, any attempt at identifying the location of training must carefully evaluate the facts. For example, what is the impact of an employee spending a couple of hours working on the AI training during a stopover in Singapore if all other work takes place in the US? Should such a comparatively small matter impact what we see as the location of the training?

These types of complications are, however, not entirely new. Rather, lawyers have had to confront them in a range of different settings, such as defamation law: what is the location of internet defamation? I doubt that lawmakers and courts will have any greater success in harmonising the definition of the location of AI training than they have had harmonising the definition of the location of internet defamation. Thus, companies –both AI developers and rights holders– operating in this environment will have to learn to work with conflicting legal systems with the help of experts in private international law.

The Getty Images case[5] is, at the time of writing, ongoing in the United Kingdom (UK), and it is a good illustration of the issues involved. Put simply, Stability AI is alleged to have scraped images from Getty and used them to train AI. The key question from a private international law perspective was whether or not Stability AI could be said to be located in the UK. The Court held that this needs to be heard properly and cannot be dismissed on a summary basis –a win in the sense that the issue was taken seriously.

As to these IP law-related issues, perhaps the solution is to be found in the substantive area of copyright law because, looking at private international law, and taking the Australian federal court rules as an example, there is plenty of basis for claiming jurisdiction –similar to what we see in the UK Getty Images case. Consider, e.g., the following grounds for jurisdiction found in the Federal Court Rules 2011 (Cth), r 10.42, as amended:

“(a) if the proceeding is founded on a tortious act or omission: (i) that was done or occurred wholly or partly in Australia; or (ii) in respect of which the damage was sustained wholly or partly in Australia;

[…]

(d) if the proceeding: (i) is for an injunction to compel or restrain the performance of any act in Australia;

[…]

(j) if the proceeding arises under a law of the Commonwealth, a State or a Territory, and: (i) any act or omission to which the proceeding relates was done or occurred in Australia; or (ii) any loss or damage to which the proceeding relates was sustained in Australia;

[…]

(k) if the person to be served has submitted to the jurisdiction of the Court;

[…]

(n) if the proceeding is founded on a cause of action arising in Australia;

[…]”.

Therefore, the question is simply whether or not copyright law wants to commit to making this sort of activity a violation of copyright. If it does, then the grounds of jurisdiction could be activated so that private international law works properly in this setting.

As can be seen, point (j), for example, shows that if there is a breach of Australian copyright law, the court can claim jurisdiction, so in some ways it is not so much a private international law issue.

A possible creative response – Intentional infliction of economic harm by unlawful means

Before leaving this topic, I want to briefly reconnect to a case that I discussed when I last had the privilege of contributing to the work of Lex&Forum; namely, Sapphire Group Pty Ltd v Luxotico HK Ltd [2021] NSWSC 589. To avoid too much repetition, I refer the reader to my 2023 article for more details on this case, but put simply, the matter related to a trademark for candles in a glass. Sapphire had a trademark in Australia, and Mr Staples, a former employee of Sapphire, started his own company and registered the same trademark in China. There were arguments that he had done so just to prevent Sapphire from marketing in the Chinese market, which he denied. In a complex set of proceedings, there was parallel litigation taking place in Hong Kong SAR, in China, and in Australia, including an action taken by Sapphire against Luxotico and Mr Staples in the courts of New South Wales. In that action, Sapphire presented their claim as a tortious one (intentional infliction of economic harm by unlawful means).

This tort was explained in Hardie Finance Corporation Pty Ltd v Ahern (No 3): “The key elements of the unlawful means tort [are] that the defendant intends to cause harm to the plaintiff, and that the interference with, or damage to, the plaintiff’s business is brought about by wrongful or unlawful means on the part of the defendant”[6]. Sapphire argued that registering the trademark in question in China was an unlawful means of intentionally causing economic harm to the Sapphire group. Based on that, they sought an order from the Australian Court in New South Wales requiring, amongst other things, that the defendants be restrained from exploiting the Chinese trademarks and take steps to remove them or assign them to the plaintiff.

This approach represents an attempt to use Australian law to indirectly affect the trademark rights under the Chinese system. In terms of jurisdiction, the Court held that damage had certainly been sustained in Australia, pointing out that, based on earlier decisions, any time an Australian company suffers economic loss, it does so in Australia, and therefore damage is suffered there. Thus, the threshold for claiming jurisdiction seems low in an action like this. Furthermore, the Court did not consider itself a clearly inappropriate forum.

Applying the Court’s logic in the Sapphire case to the typical AI training dispute is interesting. Let us say that the data of an Australian company has been used to train an AI system developed in the US. Let us also imagine that the data was scraped from multiple servers in different countries, and that both the data scraping and the training are seen as lawful under US law. In such a scenario, it is perhaps possible for the Australian company to invoke the tort of intentional infliction of economic harm by unlawful means before an Australian court. In doing so, the Australian company would presumably argue that the defendant intended to cause harm to it, and that the interference with, or damage to, its business via the data scraping and subsequent use of the data for AI training amounted to wrongful or unlawful means on the part of the defendant. Whether a court would accept this line of reasoning obviously remains to be seen, but it may be an option for creative litigants.

The European Union’s AI Act

The most famous of AI regulations is the European Union’s (EU) AI Act. I am not going into it in any detail here, but Article 2(1) is, in many ways, a private international law type provision, just as Article 3 of the EU’s General Data Protection Regulation (GDPR) is of that nature.

Focus here could be placed on, for example, Article 2(1)(a) of the EU AI Act, which applies to providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the Union, irrespective of whether those providers are established or located within the Union or in a third country; clearly, an extraterritorial type of control, which private international law might otherwise cater for. The aspect of Article 2(1) that is arguably most complex is Article 2(1)(c), which relates to providers and deployers of AI systems that have their place of establishment or are located in a third country (non-EU based ones), where the output produced by the AI system is used in the Union. The rationale for it is clear and fair enough, but I think it can end up going quite far; indeed, perhaps further than intended. An example I have used elsewhere is as follows: imagine an AI system outside of Europe that is used to generate new sounds that can be incorporated into music software. This would be an output produced by an AI system that might end up being used in the EU, and it is odd that the Regulation would apply in that situation. It is quite a harmless, low-risk use of AI, so there might not be real issues as such. Nevertheless, from a private international law perspective, this could result in quite a broad reach that is hard to justify.

Looking at the EU AI Act in the private international law context, we should also note Article 22(1), which demands that, “Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union”. The application of this provision is limited to high-risk systems, which is appropriate. However, this type of rule remains problematic due to its lack of scalability. If everyone acting online needs a representative in every market in which they are active, the whole idea of e-commerce as we know it is under threat. Since the GDPR, the EU has sought to justify this sort of “rep localisation” requirement by reference to a level playing field, and that reasoning is seen also in the EU AI Act context:

“In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union”. (Recital 21)

However, while a requirement to have a representative in the EU imposes no burden on a Swedish company, an Australian company wanting to be active on the EU market, and thus having to appoint an authorised representative in the EU, incurs a potentially substantial cost. Thus, the level playing field argument may quickly lose its appeal; even more so if all EU companies with some presence on foreign markets were likewise required to have rep localisation in all those foreign countries.

The noted provisions can be seen to affect the role of private international law. First of all, we see more and more harmonisation of the substantive rules for AI regulation, as with the Council of Europe’s Convention and the EU AI Act. In a sense, any harmonisation of law undermines the role of private international law, since the question of applicable law becomes irrelevant: if a law is the same everywhere, it does not matter which country’s law is applied. I am not saying that this is a bad thing at all, but it does impact the role played by private international law.

Similarly, we could say that provisions like Article 2(1) of the EU AI Act or Article 3 of the GDPR undermine private international law in the sense that they are rules of private international law even though they are not portrayed as such. Thus, they compete with private international law in regulating the application of law. Article 22(1) also undermines the role of private international law because it ensures enforcement, which means that, in a sense, there is no need for cross-border recognition and enforcement.

Finally, on an international level, there are already different approaches to how AI should be regulated and, because some such regulatory issues go to the core of how we want society to operate, it seems likely that we will see more applications of public policy exceptions when private international law is applied in the AI context.

The AI Act and the EU’s increasing “lawbesity”

The EU’s approach to law-making has resulted in an ever-growing matrix of partly overlapping legal instruments with what may be described as tremendous “system complexity”. Private international law rules must be understood as forming part of this big and complex system, which in the AI context is made up of, e.g., the AI Act, the proposed AI Liability Directive, the GDPR, the E-Commerce Directive, national laws, international treaties, fundamental rights, and the Digital Services Act. With private international law rules ending up as components of this bigger system, it may be said that they lose some of their standing.

International convergence?

If we look into the future, which is always a dangerous undertaking, perhaps we can see possibilities of convergence on an international level. For the past ten years, I have talked about what I have called “market sovereignty”. Put simply, my point is that, instead of focusing on the location of persons, acts or physical things –as traditionally done for jurisdictional purposes– we ought to focus on marketplace control (“market sovereignty”). A state could be said to have market sovereignty, and therefore justifiable jurisdiction over internet conduct where it can effectively exercise “market destroying measures” over the market that the conduct relates to. Importantly, in this sense, market sovereignty both delineates and justifies jurisdictional claims in relation to the Internet.

Arguably, in the mentioned Article 2(1) of the EU’s AI Act, we are seeing examples of thinking that fits within the doctrine of market sovereignty. Similarly, we see a trend in Australia with a focus placed on the “carrying on business” test, which is quite similar and asks, in essence, “are you on the market, or are you not?”. And of course, in the US there is the “minimum contacts” test, dating back to 1945, which also has an “are you present on the market?” feel to it.

Perhaps, in this, we are heading towards a convergence in how private international law approaches issues like AI regulation, but we need safeguards, and I do not think we have them yet. I will not delve into that in detail, but I at least wish to flag here that we need something like a forum non conveniens test that can balance the interests involved, while market sovereignty-type tests ensure that the state claiming jurisdiction has a substantial connection to, and legitimate interest in, the matter.

AI as a tool to improve private international law

Let us now briefly turn to how AI may be used as a tool to improve private international law. To understand the challenge, we must first be clear on the fact that any given activity is regulated by a “contextual legal system” comprised of norms from any applicable national legal system(s) and other laws, such as international law. Suppose, for example, that a person X in Greece sends an email to a person in Malta, relating to the activities of a person in Australia. In this example, primarily three countries’ laws (those of Greece, Malta, and Australia) are relevant in making up the contextual legal system for this activity. In contrast, imagine that person X instead posts information on a US social media site, on which they have “friends” in 100 different countries. In the context of this latter activity, person X is exposed to a contextual legal system comprising the laws of a vast number of countries due to the great reach of the posting. The problem is that we humans are poorly equipped to identify the applicable laws that make up the contextual legal system for any given activity. We are also poorly equipped to find and access those laws, and indeed to understand them (e.g., due to language issues).

In 2019, I published a piece in Harvard International Law Journal Online[7]. In that publication I envisaged an AI system capable of: (a) identifying the norms from multiple legal systems that together make up the relevant contextual legal system for a given activity; and potentially (b) reconciling –or at least intelligently and appropriately balancing– those norms in a manner that makes for a coherent system even where individual norms clash.

I will not repeat that discussion and proposal here, but it needs to be flagged, even though it admittedly takes us into the territory of hype. We are still very far from achieving anything like that, but I think it is a very interesting aspect of how AI can help us with some of the private international law issues.

Concluding remarks

This short paper has sought to highlight the need to address private international law matters as our regulation of AI takes shape. Leaving private international law as a matter to be worked out later is sure to set us up for the kinds of failures we have seen in other fields.

At the same time, it has been observed above that, when attempting to identify challenges specific to the application of private international law in the AI context, we find very few new ones. Some readers may now be tempted to conclude that, if AI does not give rise to novel issues, there is no justification for my concern about how little attention private international law is getting from our pioneering AI regulators (both formal and informal). Such a conclusion is misguided. The issues that arise within private international law are unavoidable and must be confronted in detail as soon as possible by AI regulators, and the fact that the challenges are the same as those that have arisen in other areas simply means that there is much that AI regulators can learn from the experiences of other fields[8].

The paper has also sought to re-emphasise that AI has great potential as a tool to help improve private international law. That proposal goes far and would require a strong willingness to be creative and open to substantial reform. Regrettably, these are not characteristics commonly associated with the discipline of private international law.



[1] Only a small number of publications, such as E. Benvenuti, Private International Law as a Means to Project EU Digital Values Abroad, 7(Special Issue) EU and Comparative Law Issues and Challenges Series 2023. 227 and S. Chandra, Private International Law and Artificial Intelligence: A Critical Analysis of Jurisdictional Claims and Governance, 2(1) Indian Journal of Artificial Intelligence and Law 2021. 31, directly address private international law and AI.

[2] Study on the enforcement of GDPR obligations against entities established outside the EEA, but falling under Article 3(2) GDPR. Final report (2021).

[3] D. J. B. Svantesson, Are we Stuck in an Era of Jurisdictional Hyper-regulation? (2018), in P. Wahlgren (ed.), 50 Years of Law and IT: The Swedish Law and Informatics Research Institute 1968-2018, pp. 143-158, (Scandinavian Studies in Law; Vol. 65) Stockholm Institute for Scandinavian Law, at 148.

[4] For an interesting account of private international law and AI, see: E. Benvenuti, Private International Law as a Means to Project EU Digital Values Abroad, 7(Special Issue) EU and Comparative Law Issues and Challenges Series 2023. 227.

[5] Training GenAI, Getty Images (US) Inc & Ors v Stability AI Ltd [2023] EWHC 3090.

[6] Hardie Finance Corporation Pty Ltd v Ahern (No 3) [2010] WASC 403 at [685].

[7] [https://journals.law.harvard.edu/ilj/2019/08/a-vision-for-the-future-of-private-international-law-and-the-internet-can-artificial-intelligence-succeed-where-humans-have-failed/].

[8] For a discussion of this, see, e.g., M. Czerniawski/D. Svantesson, Challenges to the Extraterritorial Enforcement of Data Privacy Law – EU Case Study (January 16, 2024), Dataskyddet 50 år – historia, aktuella problem och framtid, 2024, at 144.