Artificial Intelligence and the law

By Anton Katz SC

A question used to be asked: ‘can a robot beat a human at chess?’ Today, chess computers are practically unbeatable.

It is unlikely that even the best human chess players would beat a computer, because a computer can analyse millions of possibilities and compare them against each other within microseconds. The first chess-playing programs began claiming victories in the late 1950s. Deep Blue, a chess-playing supercomputer, played and beat Garry Kasparov, the reigning world champion, in 1997. So in some forty short years non-humans came to play the best chess, a game that began more than a thousand years ago.

Nobel Prize-winning British writer Kazuo Ishiguro recently wrote about the lovely and so-special Klara in his latest book, Klara and the Sun. Klara is an Artificial Friend (AF), selected from a shop window and bought for Josie, a sickly 14-year-old child, some time in the future. Klara’s role in life is to ensure that while Josie is alive, she is looked after socially and academically. Although Klara is exceptionally intelligent and observant, her knowledge of the world and of emotions is somewhat limited. But how limited? Does she have emotions at all? And how do these emotions manifest?

These questions dominate the novel as we learn to love a robot, Klara. As Klara concludes, “I believe I have many feelings … The more I observe, the more feelings become available to me.” As I read the novel, I came to love Klara — or at least what she stood for — more and more.

There are established robot applications which we now take for granted. ATMs, voice assistants such as Siri, and automated driving are simple examples which already exist and are still being developed.

So, what is artificial intelligence? It is intelligence demonstrated by machines, as opposed to the natural intelligence of humans and animals. There are robust debates about whether a machine that can think can also feel. If a machine can feel, then it could also suffer, and accordingly be entitled to certain rights. Most commentators and critics suggest that debates of this kind are premature, and that the law on the potential rights and obligations of robots should develop slowly and gradually.

While it may seem strange even to think of a robot as having legal rights and obligations, it was probably just as strange and alien for many people of the time to think of slaves, of women, or of black persons under apartheid as having rights and obligations. Indeed, in Nazi Germany many humans, particularly Jews, were stripped of all legal rights. And I remember growing up accepting that marriage was only for opposite-sex couples; today many countries have enacted laws specifically recognising same-sex marriage.

However, there are at least two features of artificial intelligence which are of immediate concern. First, artificial intelligence provides many tools that are particularly useful to authoritarian governments. Smart spyware, face recognition and voice recognition allow widespread surveillance; such surveillance allows machines to classify purported enemies of the State, and can prevent them from hiding; systems can precisely target propaganda and misinformation for maximum effect; and deepfakes aid in producing misinformation. And this is quite apart from lethal, targeted drones. Advanced AI can also make centralised decision-making more competitive with liberal, decentralised systems such as markets. Terrorists, criminals and rogue states may use other forms of weaponised AI, such as advanced digital warfare and lethal autonomous weapons. By 2015, over fifty countries were reported to be researching battlefield robots. The law certainly needs to protect individuals and communities from abuse by governments and large corporations through artificial intelligence, and this protection is required as soon as possible.

Secondly, super-intelligent AI may be able to improve itself to the point where humans could no longer control it. This could, as the late physicist Stephen Hawking put it, “spell the end of the human race.” The philosopher Nick Bostrom argues that sufficiently intelligent AI, if it chooses actions based on achieving some goal, will exhibit convergent behaviour such as acquiring resources or protecting itself from being shut down. He concludes that AI poses a risk to mankind, however humble or ‘friendly’ its stated goals might be.

Political scientist Charles T. Rubin argues that “any sufficiently advanced benevolence may be indistinguishable from malevolence.” Humans should not assume that machines or robots would treat us favourably, because there is no a priori reason to believe they would share our system of morality. Bill Gates, history professor Yuval Noah Harari, and SpaceX founder Elon Musk have all expressed serious misgivings about the future of AI; Musk calls the dangers of AI the greatest threat to humanity. Prominent tech titans, including Peter Thiel (PayPal) and Musk, have committed more than $1 billion to nonprofit organisations that champion responsible AI development, such as OpenAI and the Future of Life Institute. Mark Zuckerberg (CEO of Facebook) has said that artificial intelligence is helpful in its current form and will continue to assist humans. Other experts argue that the risks are far enough in the future not to be worth researching, and that ‘malevolent’ AI is still centuries away.

Can you imagine robots controlling humans? I can’t. But then, just a few years ago, I couldn’t possibly have envisaged simultaneously FaceTiming my cousins all over the world on my mobile phone. Or attending funerals, prayers, bar and bat mitzvahs and other events via Zoom. I would not have believed it if it had been suggested that I would one day argue court cases through virtual platforms rather than in person in a physical court.

Our new reality has developed quickly, and we must be vigilant to protect against what today is unimaginable but tomorrow may be commonplace. Indeed, there are growing initiatives to analyse and promote responsible and trustworthy AI. The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020 by the governments of Canada and France to share values and bridge the gap between theory and practice on AI. Data governance is one of GPAI’s key pillars: it aims to provide expertise so that data for AI is collected, used, shared, archived and deleted in ways consistent with human rights, inclusion, diversity, innovation, economic growth and societal benefit. The law on data governance is just one example of how the law must develop to take account of the possible and far-reaching developments in artificial intelligence. One wonders what Deep Blue and Klara would say about the legal regime.

Anton Katz is a practising Senior Counsel, former United Nations special rapporteur on mercenaries and human rights, former Acting High Court Judge, and an admitted attorney in New York. He was born and raised in Sea Point.

• Published in the PDF edition of the December 2021/January 2022 issue.
