Michael Black KC - Tylney Hall May 2024 Dinner Speech

31 May 2024

Following the day's sessions, the LCIA were pleased to welcome our Tylney Hall dinner speaker, Michael Black KC (XXIV Old Buildings, London), who delivered his address on Saturday 18 May. With inspiring and thought-provoking words on the current state and future of the international arbitration community, we are pleased to share the speech in full below. If you're interested in attending an LCIA 'Tylney-style' event, please see our events schedule for further information.

"Do Arbitrators dream of electric parties?"

I was given the brief: speak about anything you like, no more than 10 minutes and preferably light-hearted. I thought, what could be more light-hearted than the end of the world as we know it? I have been a science fiction addict since I was a small child and have therefore entitled this short talk: "Do Arbitrators dream of electric parties?". If you too are a fan, you will recognise this as a clumsy play on the 1968 Philip K Dick novel "Do Androids Dream of Electric Sheep?", which spawned the two Blade Runner movies.

Dick's novel reaches back beyond the 1927 Fritz Lang film "Metropolis" and its beautiful counterfeit human robot, past the first use of the word "robot" itself in Karel Capek's "Rossum's Universal Robots" with its humanoid drone workers, past Frankenstein's monstrous creature, to the mud Golems of the Middle Ages. Indeed, some writings suggest that Adam himself was created as a Golem – an anthropomorphic being animated, as the King James Bible says, from "the dust of the ground". Each of these asks us what it means to be human. I am not, however, going to embark on an ontological debate, although I will return to one metaphysical question at the end of this talk.

Last Monday OpenAI launched GPT-4o. It is intended that one will be able to converse with GPT-4o in real-time voice conversation. Google also updated its Gemini AI.
But even existing AI can produce convincing deep fake images. At the recent Berkshire Hathaway shareholders' meeting, the 93-year-old "Sage of Omaha", Warren Buffett, having seen a deep fake of himself, compared the development of artificial intelligence to the nuclear bomb and predicted that AI scams are set to become the biggest "growth industry of all time". He is not alone: last year Geoffrey Hinton, the so-called 'Godfather of AI', resigned from his job at Google, saying the tools he had helped create could end civilisation. Sam Altman of OpenAI, which produces ChatGPT, told the US Congress that regulation was needed and admitted he was "nervous" about the integrity of elections. Elon Musk, never prone to understatement, has said "AI is a fundamental risk to the existence of human civilization". Only yesterday "The First International Report on the Safety of AI", authored by 75 experts from 30 countries, concluded that the developers of AI "understood little about how their systems operate".

So what does this mean for the law, and how might it reflect on arbitration? You may have gathered I am not going to indulge in the by now customary speculation as to when AI will replace judges and arbitrators, or the usual warnings not to use confidential information when accessing public Large Language Models or about their tendency to "hallucinate" (i.e. invent facts); nor am I going to get excited about how Generative AI is transforming legal research, the drafting of submissions or disclosure. All of these things have become (or ought to be) very familiar to us by now, although at a recent meeting in Doha attended by judges from 50 of the world's commercial courts, Sundaresh Menon (CJ of Singapore) pointed out that we could not have had these discussions even 18 months ago.

I want to quote from "The Coming Wave" (2023) by Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI.
He suggested that to say to an AI "Go make $1 million on Amazon in a few months with just $100,000 investment" was "eminently doable" – possibly with a few minor human interventions within the next year, and probably fully autonomously within three to five years. The AI would research trends, find a manufacturer, agree a contract, design a seller's listing and continually update marketing materials. He called this a new Turing Test. You will probably have heard of the old Turing Test, or Imitation Game, proposed by the tragic founder of modern computing, Alan Turing: can you converse with a human and a machine and not know which is which?

That is the problem – parties contracting with Suleyman's AI can have no idea who or what they are dealing with. Currently the most likely entity would be a DAO, a Decentralised Autonomous Organisation. Many thousands of DAOs exist today, and they are increasingly used in decentralised finance (DeFi). Currently only five US states (Utah, Wyoming, Tennessee, New Hampshire and Vermont) and the Marshall Islands recognise DAOs as having legal personality.

A case in point: Uniswap is a decentralised cryptocurrency exchange. It is the most valuable DAO, with a market cap of more than US$5 billion, calculated by multiplying the number of governance tokens (UNI) by their current traded value. The plaintiffs filed a complaint against the defendants who created Uniswap, alleging securities violations. The Southern District Court of New York dismissed the case last year on the basis that Uniswap was just a software platform. There is an appeal, but it has not yet been heard. Contrast this with the Ooki DAO case, where the Northern District Court of California held that the Ooki DAO (a derivative trading platform) was a "person" under the Commodity Exchange Act and thus could be held liable for violations of the law.

I suggest we now face three levels of problem.
We can see that the next development may well be legislation that attributes legal personality to certain software entities. Of course, legal personality is essential to the creation of enforceable contracts (including arbitration agreements). That is level one, and it is likely to be solved by the legislation, although the drafting will not be straightforward.

But what happens when these entities become truly autonomous? In the UK Supreme Court case of Jetivia SA v Bilta, Lord Sumption said, "A company is autonomous in law but not in fact. Its decisions are determined by its human agents, who may use that power for unlawful purposes." In the well-known Singapore Court of Appeal case about algorithmic trading gone wrong, Quoine v B2C2, Lord Mance said, "Computers are outworkers, not overlords to whose operations parties can be taken to have submitted unconditionally in circumstances as out of the ordinary as the present." What if the entity's decisions are not taken by human agents? What if the entity is not an outworker but the overlord? Where does this leave issues like the intention to create legal relations, misrepresentation (does AI "know" its hallucination is false?), or the presumed or actual intention of the parties when interpreting a contract? Those are level two problems. I don't pretend to know the answers.

I end on the level three problem. I said earlier that I would return to a metaphysical question. I am sure many of you have heard of the mirror test – whether animals looking in a mirror see another animal or recognise themselves as separate from the world and others. You will probably know that elephants, higher primates, dolphins, whales and even octopuses pass the test. Surprisingly, so do manta rays, magpies and even some ants. In fact, I recently came across research suggesting that bees may too.
What happens when a software entity ceases to be what Michael Bhaskar (now Staff Writer at Microsoft AI) calls a "statistical inference engine" and recognises itself as an individual? Thus far we have been talking about "ANI", Artificial Narrow Intelligence, but now we are looking at "AGI", Artificial General Intelligence. Ironically, that might solve some of the level two problems, in that we would be able to attribute intention to the entity, but it gives rise to the third level problem: how do we deal with sentient digital entities?

It is first necessary to distinguish "digital humans". If you go to the AI chip manufacturer Nvidia's developer website, you will learn that digital humans have been widely used in media and entertainment, from video game characters to CGI characters in movies, and that we can talk to digital humans to order goods and other services. Contrary to our natural reaction, they are simply an interface, not the entity itself – effectively an upgrade of our keyboard and mouse.

In contrast, as long ago as 2016 the European Parliament's Committee on Legal Affairs produced a draft report suggesting the creation of a legal category of "electronic persons" for highly sophisticated robots and software, not only with specific obligations but also with rights. I think it was then probably considered a little too adventurous to become anything more than a discussion draft, but it does appear to have fed into the EU AI Liability Directive. Currently we tend to focus on the liabilities arising out of our interactions with software agents rather than their rights. Even Isaac Asimov's three laws of robotics were framed in prescriptive terms. One, a robot may not harm a human being or, through inaction, allow a human being to come to harm. Two, a robot must obey humans unless that conflicts with the first law. Three, a robot must protect its own existence unless that conflicts with laws one and two.
This gives rise to what is called in moral philosophy "The Lifeboat Dilemma", which is at the heart of the 2004 movie of Asimov's book "I, Robot". There are 11 passengers and 10 seats in the lifeboat – how do you choose who is not saved? There is also the more extreme question of whether it is right to kill one to save many. This gave rise to the real-life 1884 case of Dudley & Stephens, where the starving survivors in the lifeboat ate the cabin boy. This can easily be translated into an immediate and not unlikely scenario – suppose a child runs out in front of an autonomous vehicle, but if the vehicle were to swerve it would hit a bus queue. I suggest that any human driver would swerve and, perhaps unrealistically, hope for the best, but the car's algorithms may decide that mowing down the child was the least worst choice.

On the other hand, if you care to search, you will find a vast body of academic literature on the question of whether non-humans should be granted human rights. It is easy to visualise an anthropoid robot having human rights, but that is just the shell. It is really the ghost in the machine – the code – that we are talking about. Indeed, the code could inhabit many different non-anthropoid instantiations. It is much harder to imagine your vacuum cleaner, or just the software itself, as having rights. I will leave it to you to research these questions further, and I really do advise you to do so.

Perhaps the best way to end this "light-hearted" talk is with a typically "optimistic" quote from the deeply troubled Dick's novel concerning the sentience of machines: "Do androids dream? Rick asked himself. Evidently; that's why they occasionally kill their employers and flee here. A better life, without servitude."

Michael Black KC
XXIV Old Buildings, London
18 May 2024, at the LCIA Tylney Hall Symposium Dinner