RLDG Discussion 2 on Artificial Intelligence in Warfare

Discussion of 2021 Reith Lecture 2, AI in Warfare

Lecture date: 8th December 2021

Discussion date: 8 April 2022

PRESENT: NO, NB, CA, TB, HS, SJ, AB

--- Contents

[AB hosted this discussion, recorded it and then transcribed it (3 May 2022), inserting links, notes and comments. AB adopted two roles in doing this: (a) editor "[Ed: ...]", e.g. giving links to other material, adding "***" to important points, explaining something, or attaching unique labels for future reference; (b) contributor "[AB: ...]", inserting responses to what had just been said, which are added in order to further the discourse, especially in ways that could contribute to understanding AI with the help of Dooyeweerdian and Christian perspectives. Sometimes he will even criticise himself for what was said on the day! ]


[Recording starts]
[00.00]

# AB: So, Stuart Russell, Reith Lecture on Artificial Intelligence in War.
# Would anyone like to give me some general comments on that, that they have, to get going? Or anything else that they'd like to say.

[In this Discussion we allowed discussion of AI in war to extend to some general issues about AI, but always came back to AI in warfare.]

----- Impacts of AI in War [za201]

--- AI and War: Scary [za202]

# NO: I'd like to make a comment. I'm not sure it will be universally appreciated.
# I thought (and this is rather ironic) that all the people who are worried about climate extinction should sense some relief after reading this. Because clearly we're probably gonna wipe out about one or two billion people with some of this weaponry, over the next ten or twenty years. So I don't think we'll have to worry about degrading the planet. [za230]
# It's very terrifying. [za231]
# AB: What was terrifying about it for you? # NO: It's terrifying - and I do keep up with military aspects through friends and some publications I get. And there were a few things there that I hadn't really heard of before, particularly the fact that they can use some of these quadcopters to target people by facial recognition. And yknow, assassination drones and things like that, which / I'm familiar with some of the drone capabilities. But I thought that (facial recognition) was particularly disturbing. [za232]
[02.10]

--- Effect on Politics [za203]

# AB: Anyone else on that?
# HS: Interestingly, Stuart Russell brings up aspects / I think it's rather utilitarian. It means that he claims that AI in warfare can minimize the casualty [rate] in war because it's more para???effect so it can avoid civilians. So it creates something like / [za233]
# It makes people hesitate to go to war because the cost is very high. [za234]
# I don't think he can propose that it can minimize the casualty [rate], but I think it's rather utilitarian because, of course, /
# First: how can we make sure that this power is controlled by the good guys? The government is not always the good guy. Sometimes it can become the evil one. [za235]
# [Second] The other thing is, if the cost is too high, every country hesitates to go to war. The problem is, a big country can bully a smaller country, because they have the stronger weaponry and can destroy a nation.

# It's not a useful solution. It's like pulling / I don't think it's a good solution for warfare. # AB: What is not a good solution?
# HS: Stuart Russell claims that it will prevent war because the cost is too high, right. Because if you go to war, the problem is, your enemy will use AI and you will have efficient ???target and the war will end very fast, because everything will be destroyed in a moment. The problem is, OK that makes sense, but it just says that a stronger country which has AI capability can ???delete a smaller country that does not have AI capability. [za236]

--- Different Perspectives on AI and War [za204]

# I think, from the balance of power point of view: / I don't think it's / We can see this problem from several perspectives:

But what about other perspectives?
# AB: Were you thinking about aspects there, Dooyeweerd's aspects?
[05.45]

# NO: So, the troubling issue again for me, that if it becomes more economically feasible for bigger and smaller countries [to go to war], then, are we looking at a future that is only going to have more and more conflicts all over the world? [za237]

# NB: That depends to some extent [on] to what extent are economic barriers the main thing that keep us from war now? [za238] ***
# Certainly to some extent I'm sure that plays a part of it.
# [But] I'd like to hope that's not the main reason we try to not go to war - not because it's expensive but because it's bad.
# But I'm not disagreeing with you in any way, NO. I think that lowering the cost barrier, both economic and political, is not a great idea. [za239]
[06.50]

# TB: But even then, maybe it's not actually much of an economic gain. Because there will be failed cases, and you'll lose a whole system - maybe it's blown and you have to rebuild it again. [za240]
# So, I think at a certain point, that would probably be less feasible economically - and you still have to employ people at the other end. [za241]
# I think probably the other measure of cost there is risk to life. That's probably what it's taking away actually. [za242]
# Potentially the greater things they might be able to do without risk to life /

# [mumble from someone else] Well, it's effectively sending out autonomous weaponry, missiles and whatever and, I dunno, UAVs [Unmanned Aerial Vehicles]. And so it's giving responsibility over to the machines a bit more. [za243]
# And of course, that was the main con of this - that I remember him pointing out - is how the intelligence of the machine learning would potentially start to even backfire and score an own goal - well, that's an understatement. It would be fighting back against the group it's meant / it's meaning to defend. [za244]
# That would be a great danger, wouldn't it, in terms of how it's using information to try and discern an action - and how that, from a Dooyeweerdian perspective, starts to then influence the different aspects of what the component of decision-making there is on forming an attack. [za245]
# That's kind of getting quite deep, isn't it.
[08.35]

--- Predicting Impacts of AI in Warfare? [za205]

# NO: Again, this troubling thing is that it's all part of the technological issues of our new era, which is bleeding and blurring many lines that were more siloed in the past. [za246]
# In the military, they talk about what's called the "centre of gravity". It's a concept that came out of Clausewitz some years ago. It's basically, "What is the essential and the pivot point that drives your army?" What's happening today is that the technology and the digital information seem to be becoming a new centre of gravity for military operations. [za247]
# And what that's gonna mean is again I think something that we really don't know a lot about now, which is again kinda scary. [za248]
[09.55]

# NB: One prediction we can make with fairly good certainty is that the most important effects of whatever we implement will be the unintended ones. # [laughter] Unintended consequences almost always overshadow the intended consequences. [za249] ***
# And that leaves us kind of helpless. To make good decisions, we have to be able to make sort of reasonable predictions of what the future might be as a result of our decisions, and we know that we cannot. [za250]

[AB: Dooyeweerdian Comment: Is this because when we predict, we do so according to what's meaningful in aspects that everyone already acknowledges as being important, whereas the unintended consequences are those meaningful in aspects that everyone currently overlooks? Maybe they "overshadow" them not by being actually more important but because they surprise us, drawing our attention to different aspects that we had overlooked. ] [za251]

[10.30]

# AB: We laughed at that, because it is ironic. But is it actually true? It's true sometimes, but how true is it? Any ideas? I'm asking that [rather than contradicting it] because I think it's a very important point. [za252]
# NB: In the sense of, for example, someone who makes an assassin drone might be thinking in terms of saving lives, saying "Well, I don't have to kill all of the enemy soldiers; I only have to kill the few ones that are the decision-makers, the leaders, and we will have achieved our military objective with fewer lives lost." [za253]
# And so they may start with what, from a utilitarian perspective, is a legitimate argument, but then, by not recognising the way in which it will change how we think about war, the long-term consequences of a country adopting assassin bots remain opaque to them. And so that becomes / the fact that more wars might get fought would be an unintended consequence of trying to make the wars that do get fought less lethal. [za254]
[12.00]

# AB: So that's an example of an unintended consequence. It changes how we think about war. [AB: pistic aspect] [za255]

[AB: Dooyeweerdian comment: Is this a good example of the formative aspect of technological development and objective-achievement 'targeting' (forgive the pun!) aspects that make the reason for doing it meaningful: the biotic of life, and the social aspect of leadership? This might help us think through the problems with this, in that we could ask, "Which other aspects might be targeted? And is targeting these aspects valid and good? And might those aspects bring about the unintended consequences?" ] [za256]

--- What is the Difference in War with AI? [za206]

# NO: Again, the thing that is scary because: remember war / let me put it this way: the weaponry of war is the weaponry that anyone has. We all have guns and bows and arrows and knives, and some people that I know have automatic rifles and they go shoot them and hunt with them, and that sort of stuff. # NB: Remember there are only two Americans on this call, NO! [Laughter] # HS: ??? # NO: Well, sorry, there's 330 million of us over here! Some of us are well-armed. [za257]
# But the issue is: even drug cartels, even people who are having tribal disputes over water. What happens if they have a drone and decide "I'm going to take this son-of-a-bitch out"? [za258]
[13.15]

# NB: I think it's important to try to have clarity on where the distinctions are.
# The fact that I as an individual may have the capability to kill someone is not new and different in the era of AI and autonomous weapons. But there is something new and different, and I'm trying to put my finger on what it is, and I don't know exactly what it is. It's not as though I couldn't go and get a handgun right now and take someone out. But there is something different and I don't know what it is. [za259] ***

# NO: I would say that there are two things that are different perhaps.

# AB: Interesting that you say Stealth. Yeah.

# NB: And perhaps plausible deniability: that no-one has to know who sent the drone. [za262]
# You get a lot more unsolved murders.
# So in an area with autonomous drones and assassin bots, you could have a lot of unsolved murders, basically.
# NB: Which / Notice that we've suddenly switched a little bit from talking about war to talking about crime. It wouldn't shock me if AI weaponry might blur that line, some. [the line between war and crime.] [za263] ***

# AB: That's interesting, especially from a Christian point of view. [za264]
[16.07]

[AB: Dooyeweerdian and Christian Comment: Both war and crime are meaningful in the juridical aspect. Killing someone in each is a juridical dysfunction. Even though maybe sanctioned, it is not what God intended in the Creation, but is from sin, the Fall. Does it, perhaps, throw into sharp relief the flimsiness of the reasons for killing in war? A very few wars might be for justice (as meant in God's eyes) but most wars are for reasons of pride, competition, hatred, etc. ]

--- On Legitimacy [za207]

# NB: One thing I found about his talk (and I have to admit I didn't read the whole way to the end, so correct me if he got to this later) is that the whole discussion was not grounded in a larger philosophical discussion of what makes war legitimate or not legitimate. [za265] ***
# In Christianity there are the two broad streams of Just War Theory and Christian Pacifism. And there are a lot of different flavours in Just War Theory, of course. [za266]
# But I think we kind of have to go back to those basics of Christian thinking about what / in what ways war is fought legitimately, in order to ask questions about "Could AI weaponry be legitimate or not?" [za267] ***
[16.50]

----- Aspects of AI in War [za208]

# AB: I was about to say: what I've noticed in our discussion so far, and this is interesting, is that quite a lot of aspects have come up. There's the formative aspect of technology, there's the juridical aspect, there's various other aspects. [za268]
# And I wonder whether, if we applied the whole suite of aspects, they might bring up issues that are not normally talked about. [za269] ***

--- The Pistic / Faith Aspect of AI and War [za209]

# For example the faith aspect. One of you said, "We see war in a different way" or "We have a new idea of war." Well, basically, that's the faith aspect - what we believe about war.
# But also the faith aspect is whether war is good or not. I mean, Christian Pacifists especially, for example, believe / they are pacifists because they believe, not because they fear. (There might be some who fear. Or don't want economic loss or something.) But the basic idea behind Pacifism is belief. And the basic idea behind Just War Theory is belief - which is the pistic aspect. [za270]

--- The Ethical Aspect of Self-giving Love [za210]

# NB: I think that an argument could be made that the ethical aspect of love is also there.
# Certainly in Christian Pacifism, Christ's command to "Love your enemies" is one of the underlying principles of a lot of Pacifism. [za271]
# And even in Just War Theory, I would say that sometimes the most loving thing I can do for my enemy is to forcibly stop them from committing acts of evil. [za272]
# AB: Yeah.
# NB: And we don't usually think of love when we are talking about war. But if we're going to be serious as Christians, we have to. That is still central to who we are. Even in war.

# AB: And it has its place in Dooyeweerd's aspects. Yknow: so we can understand this. Not just a mystical command. It [love] is something solid and it has its place among the aspects.
# NO: Love is nothing solid. It's an abstraction. Would you agree with that?
# AB: No. Well, yes and no. I would say that love is solid, in the sense that it actually works. My belief is that the fabric of Creation - the way God created Creation to work - includes the ethical aspect of self-giving love. And when we have self-giving love, then Creation works better than if we go against that norm. [za273]
# And usually in ways that we don't expect. [za274]
[20.38]

# AB: I've come across lots of stories, and not just about war but about / Well, I'll tell you a story. In one of the Gulf Wars, the American forces killed about 13 civilians by some sort of mistake. And this American Christian went over / (it was one of his friends that told me this, and I think it's actually on Youtube.) This guy recounts his story. He went over and he was really devastated about what these American forces had done. So he went to the place where this atrocity had been committed. And he went to see the men and he said, "I'm an American, and I'm really really sorry." And various other things; I cannot remember what else. Because he felt the Lord had sent him. And they were just stone-faced: they'd lost thirteen people and here was an American coming and saying Sorry. And he [feeling utterly at a loss] said to the Lord, "What shall I do?" And Jesus said to him, "Pray!" So he got down on his knees and started praying. And one of the men said to him, "What are you doing?" And he said to them, I think unrehearsed, "In my back pocket is a loaded pistol. You can take it out and shoot me in the head. I give my life. I can only give one life; you've lost thirteen at our hands, but I'm willing to give my life for at least one of those." And then the men just picked him up and said, "You are family!"
# And that, to me, is self-giving love. It was not actually a romantic idea of love, but it was very solid. It was this ethical aspect, willing to sacrifice. It was the Lord that did it through him.
# And if I can remember what the Youtube is, I'll send it round. But I've always remembered that.
[23.36]

# NO: By the way, I asked that question not necessarily because I believe love is an abstraction, but because I wanted to hear what the answer or thinking is. # AB: Well, so that is the way I see it as solid; I don't mean solid in a physical way, but ??? it works.
# NB: It's [love is] as much part of reality as the laws of physics are. # AB: Absolutely. I think that's Dooyeweerd theory and Christian experience. [za275]
# NO: That's helpful; thank you.

# So, there is some kind of / for example, in war, we know of lots of examples of willingness to sacrifice. And I don't mean just for the country, but love even for the enemy. That action works.
# In the First World War, that famous Christmas Truce. A lot of the people there weren't Christians fired by Jesus, but the ethical aspect is not just purely for Christians. I believe it works as part of Creation.
# So that [love/ethical] aspect needs to come into war, as / as much as AI weaponry and so on.

--- AI and 'Emotion' [za211]

# AB: And AI, especially Machine Learning AI, is probably not going to recognise that love; it will be surprised by it. [za276]

# NO: That's an interesting point. Because the issue with AI is that we cannot program something like love into it, can we. It doesn't have emotions. So it will never autonomously by itself be able to make that distinction. Is that true?
# AB: It doesn't have emotions of itself. But data about human loving and doing the surprising sacrificial thing, that data could still be [programmed and] processed [in principle]. [za277]
# NO: But it cannot make an emotional decision. [several people spoke together!] # NO: In the Lecture, Stuart Russell said that. [za278]
# HS: If we train it on enough data about self-sacrificial decisions, maybe it will pick up the patterns. [za279]
# AB: But we are unlikely to have enough training data to train it in this surprising sacrificial behaviour. That's why it won't operate according to it; it's nothing to do with whether computers have emotions. [za280]

[AB: Explanation of what was in AB's mind. AI works by having, in the algorithm by which it runs, 'rules' in its knowledge base (the total collection of such rules) about how to respond to input patterns.

In Machine Learning (ML) AI, these 'rules' are gleaned from masses of training data. But one typically needs 100,000 records of training data to properly train an ML AI system. Since sacrificial behaviour is rare, there is unlikely to be enough to properly and reliably generate rules about self-sacrificial behaviour.

However, the other kind of AI, involving Knowledge Representation (KR), does allow such rules to be included. In KR AI, the rules are manually edited into the AI's knowledge base, having been gleaned from human expertise. If the source expert is aware of such rare exceptions then they will mention them and so they can be programmed in as rules. Good knowledge elicitation engineers take pains to seek out exceptions. Dooyeweerd's aspects have been proven to help unearth exceptions of this kind. See MAKE - Multi-Aspectual Knowledge Elicitation.
] [za281]
***
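[Ed. To make the ML/KR contrast above concrete, here is a minimal hypothetical sketch in Python. It is not any real system; all behaviours, labels and numbers are invented for illustration. The point it shows: a KR rule elicited from an expert can encode a rare exception explicitly, whereas an ML learner can only induce 'rules' for patterns that actually occur in its training data, so behaviour too rare to appear there yields no rule at all. ]

```python
# Hypothetical sketch contrasting KR (hand-written rules) with ML (rules
# induced from training data). All behaviours, labels and numbers are invented.

from collections import Counter, defaultdict

# --- KR: an expert's rare exception is written in explicitly as a rule ---
def kr_decide(obs: dict) -> str:
    if obs.get("behaviour") == "self-sacrificial":
        return "do_not_engage"      # the rare exception is guaranteed to be honoured
    if obs.get("behaviour") == "carrying_weapon":
        return "engage"
    return "do_not_engage"

# --- ML: 'rules' are whatever patterns the training data happens to contain ---
def ml_train(training_data):
    counts = defaultdict(Counter)
    for behaviour, label in training_data:
        counts[behaviour][label] += 1
    # one induced 'rule' per behaviour actually seen in the data
    return {b: c.most_common(1)[0][0] for b, c in counts.items()}

# Self-sacrificial behaviour is so rare that it never appears in the training
# data, so no rule about it is induced and the learner has nothing to apply.
training_data = ([("carrying_weapon", "engage")] * 5000
                 + [("unarmed", "do_not_engage")] * 5000)

model = ml_train(training_data)
print(kr_decide({"behaviour": "self-sacrificial"}))        # -> do_not_engage
print(model.get("self-sacrificial", "no rule learned"))    # -> no rule learned
```

[Ed. The sketch is deliberately crude: real ML does not tabulate labelled behaviours like this, but the underlying limitation - no rule without enough examples - is the same. ]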

[27.05]

# AB: That's what I believe. But the reason I say that is because I think there's a kind of unclarity in the thinking that yknow, "Computers have to have emotions in order to make an emotionally correct decision." And I would question whether that's the case. And it's very widely believed. And I think Russell believes that. [za282]
# NO: I don't think you can make that statement without giving us a definition. You cannot say "They cannot have emotions because they cannot have emotions." You have to say, "OK, This is what emotion is, and this is why I don't think they can have it." [Ed: Good point. ] [za283]
# What we seem to regard as emotion is something that requires a very discriminating and involved intellectual decision that you cannot possibly program into a computer. That's what that man has said.
[28.13]

--- Q1: Empathy, Emotion, Consciousness, etc. [za212]

# AB: That comes onto our first Question.

Russell addresses the question of unintentional killing of humans in warfare and mentions several things that inhibit human killing of others unintentionally, including empathy, emotions and consciousness. He also mentions codes of conduct etc. So, what do we think of that approach? Can Dooyeweerd's aspects help? And which Christian values help? [za284]

# So, thanks, NO, you've brought us to that question. [laughter]

# HS: I think we need to differentiate between emotion and the ethical aspect, right? Emotion is more of the sensitive aspect, and the ethical is a totally different realm. [za285]
# AB: But I doubt if Stuart Russell meant just the psychical / sensitive aspect. He probably intended to mean something of the ethical aspect, especially when he said "empathy". [za286]
# AB: So there's some kind of quotes-emotion, I think, yknow what is loosely called "emotion", I think we can / So the question is / [za287]
# AB: That's a good point, though, to be clear what we mean by "emotion".

# NB: The psychic-sensitive aspect, of feeling emotion - whether or not an AI can feel emotion - is perhaps something we cannot know the answer to. [za288]
# It almost certainly wouldn't feel the same way as humans feel it. But when we are talking about the ethical aspect, what we're talking about is behaviour. The internal state of some AI is not really relevant to whether or not its behaviour shows love. [za289]

----- On Training ML Systems [za213]

# NB: So, in a ML context, what matters is the data you train it on [Ed: See AB's explanation above]. If you train military bots on data that shows examples of ethical soldiers self-sacrificing, I think that the question of emotion is a bit of a distractor. The question really is "What does it do?" # AB: Interesting. Any response to that? [za290] ***
[30.50]

[silence!]
# AB: Good conversation stopper, NB! [laughter] But thank you, because I think it's a very important point.
# We could do with training data on instances of love.

--- How Do We Understand Data and Algorithms? [za214]

# NO: So we can program things by establishing rules, which are algorithm things, right, which are code, and we can put code in to operate on?
# NO: But all of that is something that is operating as bits and bytes. Even if we get into nanotechnology, almost down at the molecular level.
# But the problem with love or emotion is that it has to operate at a macro level; it has to operate in a very wide context. So, the issue is: if you have a machine that operates at a very micro level, how do you give it enough of that universal information to empower [zoom mumble]? [za291]
# HS: [zoom not clear:] Micro can be macro I think, including the relationship like network data and other things, and ??? data also. [za292]
# So it doesn't matter whether it's micro or macro in the machine learning context. [za293] # AB: Why not? # HS: Because the data is not always / I think how we humans learn the bigger information can be treated the same as in the context of machine learning, because we use the same data as / like how we give ??? our children the procedure of right and wrong in the situation, in the context of ??? family. And the same data can be input into the same machine learning context. [za294]
[34.10]

[AB: See the file "pfuis-noc.pdf" on the RLDG Google Drive, in the AI directory. It explains the various 'levels of description' of a computer and its operation, from physical materials like molecules, to "bits and bytes", to information. In fact, the levels are different aspects of the computer's functioning. This is why computers might, in principle, be able to make decisions relevant to love etc.]

--- The Data Needed To Train Machine Learning [za215]

# CA: I think that one of the things that they were saying about the data is that it has to capture all elements so that it doesn't become biased. [za295]
# So this is a big problem, especially for [example] banks. We are trying to predict the movement of the stock or we are trying to predict good things. So, we can predict the good things better than the bad things, like the financial crisis, simply because we don't have enough data for all the bad things. We have a lot of data for good things, like how many times we are doing very well, the economy is booming and so on. We have more data for that. So the precision is better compared to the one for bad data. For example, when is the next global financial crisis gonna happen? We cannot predict that properly, because we don't have much data on how many times global financial crises have happened. So that - the prediction - becomes inaccurate as a result. [za296]
# And also they talk about people in a minority. For example, people like me. How many people like me apply for a bank loan? Just for example, if I'm the only one to apply for a bank loan, and I always default in paying up my loans, then that would be taken as a proxy for the next person like me who is going to take up a loan. So that's why they said it becomes biased. It's biased because there are not enough people like me taking up a loan. [za297]
# AB: Yeah, that's how machine learning works, isn't it.
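[Ed. A minimal hypothetical sketch of the imbalance problem CA describes, with entirely made-up numbers: when 'bad' outcomes (crises, defaults) are rare in the training data, a naive learner that simply predicts the majority class looks highly accurate overall while never predicting the rare events that actually matter. ]

```python
# Hypothetical illustration of the imbalanced-data problem, with invented
# numbers: 990 'normal' periods versus 10 'crisis' periods in the training set.

from collections import Counter

training_labels = ["normal"] * 990 + ["crisis"] * 10

# A naive learner that just predicts the most common outcome it was trained on.
majority_class = Counter(training_labels).most_common(1)[0][0]

def predict(_features):
    return majority_class                      # always "normal"

test_labels = ["normal"] * 99 + ["crisis"] * 1
accuracy = sum(predict(None) == y for y in test_labels) / len(test_labels)
crisis_hits = sum(predict(None) == y for y in test_labels if y == "crisis")

print(f"overall accuracy: {accuracy:.2f}")     # 0.99 - looks excellent
print(f"crises detected:  {crisis_hits}")      # 0    - misses every rare event
```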
[36.13]
# NO: That's an interesting point, that we can program based on the good but it's hard to program based on the bad. Back to what HS was saying, the point I was trying to make: so we programmed a computer, Big Blue [Ed: presumably Deep Blue], to play Chess, and we actually have one that plays Go. But yknow, if you try to program the computer to tell whether the room feels too hot or too cold to one or two people, you are trying to input so much data that, at this point, it cannot operate. [za298]
# So, I think the same problem of trying to put emotion, or even ethics into a computer, I don't know that we could program it to operate out on a battlefield somewhere. [za299]
[37.15]

--- Three Questions [za216]

# NB: I think we've got the debate going on at two levels [za2a0] ***

# I don't know that we have to answer them separately or not.

# HS: I think that we start with: OK, suppose it's theoretically possible and practically possible [that] we have general purpose AI. I don't know whether it's theoretically possible, but

# NO: that's the third question.

[Ed. Note: NB's "two levels" are not like the levels of description of the computer, above, but rather entirely different issues, of use and development. ]

--- Data and Values [za217]

# CA: We are struggling with the value as it is. If you remember our conversation about the value, saying "How do we put that into perspective? Some values cannot be measured." [za2a4]

[Ed. Economics discussions on Measuring Value and Non-measurement of Value]

# CA: We are having this debate. Now, only if we solve this debate can we then try to put it into a machine. But we cannot even solve this debate.
# How are we going to put what data into the machine? [za2a5] ***
# AB: Good point.


[AB: That's a question that needs addressing. Allow me here, in transcribing, to posit an initial answer based on Dooyeweerd; I will try to make each nugget of the argument clear.

To answer the question "What data do we put into the machine?" :

Now the problematic part, which is related to the question of how far we can 'measure' value:

]

----- How to Understand Artificial Intelligence as Such [za218]

--- How Smart Need AI Be? [za219]

# NO: Let me throw another analogy out here. I don't know whether this will confuse the issue or not. It reminds me of a situation where you have (how can I say this?) a person of diminished capacity. Years ago there was an amazing book, very prescient at the time, called Brave New World, where, in this future state, they generated the alpha, beta and delta people. And the delta people did all the drone work, and everything. So, what I'm wondering is, "Can AI as weaponry become just smart enough to be that dumb person, that you say, 'Go over there and charge those machine guns'?" So, it never has to really get totally intelligent; it just has to get smart enough to do the job. [AI for specific tasks] [za2a6]
# What is scary about this guy is, again, some of the things they are doing in the military, which you may object to. [za2a7]

# One thing I had some big issues with is taking animals, like dolphins, attaching mines to them and training the dolphins to attack ships and stuff like that. [za2a8]
# That's an aspect of using some degree of AI that he didn't really get to in the talk. But again, it's something that, "I've got something I can go use that doesn't have to be really smart. It just has to be smart enough that I can go use it." [za2a9] ***
[40.50]

# NB: Military philosophy, military leadership, wants things intelligent enough to obey without question. [laughter] Intelligence [Ed. Presumably human intelligence rather than military intelligence?] is an issue of morality, but a soldier that does exactly what he's told without ever asking is exactly what an AI would be. So I can understand why the military might find that attractive.
# CA: I think that humans have used animals to put bombs and things on machines as well, during Hitler's time. And used dogs to do that. [za2b0]
[41.30]

--- A Background Understanding of AI [za220]

# AB: What I think is worth keeping in mind is what is going on in machine learning.

# So I'm really going back to my old experience in AI, which is not machine learning but was knowledge representation. [za2b1]
# So there are two kinds of AI: [Knowledge Representation (KR) and Machine Learning (ML)] [za2b2] ***

# In both, what you want is to be able to embody in the machine an algorithm that will behave and decide in an 'intelligent' way, in a way that takes account of a lot of the complexity of the laws of certain aspects. So, whether it's Chess, or marketing, or whatever it is; it doesn't matter.

# [Ed. On the source of knowledge.] In the 1980s, what we were doing with KR was going and interviewing experts and saying [asking] "What do we take into account here?" [Example] So, with Chess, we would go and interview Chess experts, and say "What are the rules of Chess?" I don't just mean the basic rules, but "What are the rules of winning at Chess? What are the rules of strategy?" And so on and so forth. [za2b3]
# Two or three of the expert systems that I developed [Ed. Expert System is a kind of AI, in which good expertise was embodied].

[Examples] One was to do with rules of which fungicides to spray in farming - [laughs] I was an environmentalist and didn't like it! But I can talk about that later. Another was about stress corrosion cracking in metals. How can you predict stress corrosion cracking, given all the / it's a very complicated, unpredictable thing. And there were one or two others. One was about trying to understand, in the company I was working in, ICI, the potential profitability of a business sector in the company.

# And we used expert systems in all those.

# Now, the latter one, about potential profitability: we didn't try to predict; we put in as many rules as we could, gleaned from experts, and deliberately said [deliberately designed the expert system to say to users, once they had entered details about their business sector], "Well, this is what the machine thinks; what do you think, as a human being?" And it was very useful because what that did was help the human, who would actually make the real [business] decisions - help the human refine their knowledge and perhaps make sure that they hadn't overlooked something. [za2b4]
# But the stress corrosion cracking was more about physical laws, mostly, so its own prediction could be more reliable. [za2b5]


[AB: Dooyeweerd helps explain why some AI systems have been more successful than others: the successful ones deal with the earlier aspects, where the laws are simpler and also more determinative. ] [za2b6]

# Now, the big enemy of that [KR] approach is tacit knowledge, "We know, but we cannot tell what we know." So, if I were to say, "Can you tell me how you do cycling?" or something like that, then you cannot, because it's bodily knowledge. Or "Can you tell me how to be successful in making friends?" - no, you cannot because it's social knowledge, tacit knowledge. [za2b7]

[AB: It seems that each aspect yields a different kind of tacit knowledge, and the reason it is difficult to express in explicit rules is different in each aspect. ] [za2b8]

# Now, the ML kind [of AI], what it did was just take data of, yknow, the various things - let's say, being successful in a company, or making friends - it would take data of the / biometric data of the person, their facial position, and all sorts of things - and you can choose which [kinds of] data you choose - religious background, anything like that. And get data on that, like [records in] a database, and, at the end, whether they were successful in making friends or not. And put that in, and it learns, "This combination of data is successful in making friends, and this combination is not." Kind of thing. [za2b9]
# And you don't have to elicit knowledge from experts and it automatically takes care of some of the tacit knowledge. [za2c0]

# But it doesn't know / It cannot tell you why it made that decision, whereas the KR one can [in principle]. It can tell you "Well, I made the decision because of this rule and this rule", whereas the ML one cannot tell you on what it made the decision. [za2c1]
# So, where am I going with this? I'm going here, because most [AI today] is ML. But one of the problems that was mentioned was the opacity of ML algorithms. So, for example, you want to somehow be able to get what he called a "white box". ML gives a black box; you cannot open it; it's just a box: input, output. A white box is where you can see what's inside and see some of the knowledge. [za2c2]
# And that is what they realise they need.
# I haven't heard that they are going back to KR. In some ways, I think they probably are, in these cases where it really matters what decision is made. [za2c3]
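[Ed. A small hypothetical sketch of the 'white box' point: a KR-style rule engine can keep a trace of which named rules fired and so report why it reached its conclusion, whereas a trained numeric black-box model typically returns only an output. The rules and thresholds below are invented, loosely echoing the stress-corrosion-cracking example mentioned above. ]

```python
# Hypothetical sketch of KR-style explainability ('white box'): each rule has a
# name, so the system can report which rules led to its conclusion. The domain
# and thresholds are entirely invented.

RULES = [
    ("high_chloride_and_high_stress",
     lambda f: f["chloride_ppm"] > 50 and f["stress_mpa"] > 200,
     "risk_of_stress_corrosion_cracking"),
    ("low_temperature",
     lambda f: f["temperature_c"] < 40,
     "low_risk"),
]

def decide(features: dict):
    fired = []                      # trace of the reasoning
    conclusion = "no_conclusion"
    for name, condition, result in RULES:
        if condition(features):
            fired.append(name)
            conclusion = result
            break                   # first matching rule wins in this toy engine
    return conclusion, fired

conclusion, trace = decide({"chloride_ppm": 80, "stress_mpa": 250, "temperature_c": 60})
print(conclusion)   # risk_of_stress_corrosion_cracking
print(trace)        # ['high_chloride_and_high_stress'] - the 'why' a black box cannot give
```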

# And, as I said, some of the expert systems that we made in the late 1980s, early 1990s, were not relied on to make decisions, but were relied on to help the human make decisions. And say "Have you forgotten anything?" [za2c4]

# So, there's a kind of overview of the technologies of AI. And especially the ones that had been forgotten for a couple of decades. [za2c5]
# But it's still the same principle: "How do we get into the machine the knowledge to make decisions or actually say something?" [za2c6]
# And, in that view, I believe, for example going back to ethical, we can actually program in with KR, laws of what it is to be ethical. To some degree, because I think there's a limit on that. [za2c7]
# So, similarly, if you get the right sort of data, it's in principle possible to program an ethical predictor / Not, not, no! A predictor of ethicality, not an ethical predictor, a predictor of ethicality. [za2c8]

[Ed. But see above about the fundamental limits of training data. ]

# But, as CA says, if you don't have enough data, all you get is, what you do have a lot of data with [about], and it's biased towards that. [za2c9]
[49.10]

# AB: Does that help?
[long silence! So obviously not! :-( ]

--- Explainability of AI: Black and White Boxes [za221]

# HS: I think [some] ML has explainability.
# Some types of ML cannot be explained. So, ??? told me, to give more ones on / whether ML is truly a black box or not. # NB: Grey box! # AB: Do you know anything about that, HS, about black and white boxes? # HS: Yeah. Usually, ML is more inspired by how neurones send electrical signals. Like, Neural Networks are more like a black box, because it just keeps updating the signals and weights. [za2d0]
# But some ML, like ??? based on statistical integration, is based more on statistics, right. [za2d1]
# So this can be /

# NB: In either case, the control that we have over what the system does is largely put in place by the data we train it on rather than by the rules we lay out kind of a priori. [za2d2]
# In a sense, I think it's like: As I'm raising my kids, I'm trying to raise good kids. I teach them things that are important, and I try to model living a good life. I'm well aware that the modelling is probably more important in how they turn out than in the things that I say that are important.
# And I think that in AI, what we say - the rule-based stuff is sort of the KR side - and the modelling that I do is sort of the data that we train it on. [za2d3]

[Ed. Is that a simile / metaphor? In a metaphor, the aspectual laws that apply are not those of the metaphor but those of the domain to which it is applied. Raising kids by setting an example is probably of the ethical aspect, whereas ML and KR training is of lingual and earlier aspects. ] [za2d4]

--- Quality of Knowledge Embodied in AI Systems [za222]

# NB: And CA's point, that there exist biases in the data that we have for training / If I train my assassin bot on a thousand people that look like the enemy, and 100 people that look like the allies, one thing I'm implicitly assuming [is] that you can tell an ally from an enemy by looking at them. And that's probably not a true statement. [za2d5]
[51.40]
# AB: You've gone into a different thing. If I train / depending on looking at, what they look like - and that brings me to aspects.

# AB: There's a kind of decision: "Which aspects do we choose to get meaningful data?" [za2d6]
# "What they look like" is a different aspect from "How they behave" or "What altruistic things they've done in the past" or "Where they went to school" or whatever. Which is probably: some of these things are really important in knowing whether someone's an enemy or a friend. [za2d7]
# Knowing [deciding] whether someone's an enemy or a friend, if we rely only on the psychical aspect of what it looks like visually, then the thing's going to make a lot of wrong decisions because it's not bringing in the other aspects. [za2d8]

# Where ML has been really successful is where it has needed to take into account only the early aspects, such as the spatial aspect, as in Chess or Go. And maybe there are some other aspects that make it difficult for Go, but it's mainly the spatial aspect. [za2d9]
# And the physical aspect.
# AI has been successful in detecting cancers in x-rays; well, that's basically the spatial aspect. # NO: And psychical. # AB: Maybe there are a few biotic things in there, but it doesn't have to take into account [for instance] the social aspect.
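[Ed. A tiny hypothetical sketch of the point about choosing aspects: the same toy 'model' gives different answers about the same people depending on which feature-set (aspect) we decided was meaningful when preparing the data. Every name and value below is invented for illustration only. ]

```python
# Hypothetical sketch: which aspects we encode as features largely determines
# what the system can 'know'. All people, features and values are invented.

people = [
    # visual/psychical aspect     behavioural aspect         ground truth
    {"appearance": "uniform_A",   "past_aid_given": True,    "ally": True},
    {"appearance": "uniform_B",   "past_aid_given": True,    "ally": True},
    {"appearance": "uniform_A",   "past_aid_given": False,   "ally": False},
]

def classify(person: dict, feature: str) -> bool:
    # A stand-in for a trained model: it can only use the feature we chose to record.
    if feature == "appearance":
        return person["appearance"] == "uniform_A"    # visual proxy only
    if feature == "past_aid_given":
        return person["past_aid_given"]               # behaviour in the past
    raise ValueError("unknown feature")

for p in people:
    print(classify(p, "appearance"), classify(p, "past_aid_given"), p["ally"])
# Judging by appearance alone gets two of the three wrong; the behavioural
# feature happens to match here - which aspect is chosen changes the answer.
```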
[53.35]

# CA: I've a point to raise: about mistakes that AI makes. 3 examples. There are more but I'll just give three [AB: CA sent web links for these, and I'll comment on aspects for each of these]:

# CA: So, there's a lot of / a list of things AI is supposed to do, but it did them wrongly. And this caused people's lives to be at stake. AI is supposed to kill the enemy, but it turns around and kills everybody else [laughter]. What happens then? All these people are dead. [za2e0]
# NO: That's exactly why biological and chemical warfare has really not been deployed a lot, because you cannot control the dispersion very well. [za2e1]
[56.10]

# NB: I've got two responses to that, CA.
# 1. That mistakes happen is not in and of itself necessarily problematic. The question is, "Does the AI make more mistakes than a human would?" The Tesla hit someone and killed them, but 100,000 people have been hit and killed by human drivers this year. So the utilitarian in me at least wants to compare with the alternative: If I didn't use the AI would I have avoided that mistake or would I have had more of them? [za2e2]
# But I also think that the question of what happens if the AI doesn't work, is the less important question. [za2e3] ***
# 2. The important question is, "What happens to us if the AI does work exactly as planned?" What it's going to do to warfare is gonna have a lot more to do with how it behaves when it works than how it behaves when it doesn't. [za2e4] ***
# AB: Can you give a little bit more on that, because it's something that hasn't been discussed properly. [za2e5]
# NB: I think that it comes back to unintended consequences. For example, if I make an assassin bot and send it out to kill an enemy commander, it's possible that it fails at the task and kills someone else instead, perhaps a civilian, and that would be an atrocity and of course we need to work hard to avoid that.
# But I think, if we spend too much time on that side, we might ignore the question of what happens if it does successfully kill the enemy commander, and no-one else, exactly as we had in mind. There are also gonna be consequences to the way we fight war, which are going to come about because of that. [za2e6]
# Perhaps the larger impact overall than if it screws up and kills someone else by mistake. [za2e7]
# AB: So are you saying that an assassin bot goes and kills an enemy commander, and that changes the nature of warfare? [za2e8]
# NB: Well, I have no idea how it would change the nature of warfare, but I have to assume that the military strategists who are coming up with their new way to fight a war are going to take into account the fact that commanders are in a good deal more danger than they used to be. And I don't know what the implications of that will be in military strategy. We might be able to make some good predictions of what might happen, but even those predictions will be highly uncertain.

# CA: I've been thinking about the self-driving car. You said that self-driving cars basically / if they kill people, and people are getting killed every day by bad drivers on the road, but if the self-driving car is doing a better job by not killing people, will the future be nothing but self-driving cars, to stop killing people, perhaps?
# NB: I think that's a strong argument, that I think there will be pressure to get human drivers off the road because they will eventually be proven to be safer. And then you are going to get all kinds of / you're going to have the recreational drivers who don't want to stop driving, and you're going to have issues about justice for people who cannot afford a self-driving car but already have a manual one; are they not allowed to drive any more? It's going to be a mess.
# AB: Well, with every technology it's a mess, isn't it. # NB: exactly that.
# HS: Perhaps the caveat??? is, if self-driving cars kill people, who takes the responsibility? If the driver kills people, the driver takes the responsibility. [za2e9] [Ed. That sounds like it relates to Q1 above, about failure.]
# Perhaps that's a philosophical question we need to think. [za2f0]

[AB: Even more serious, perhaps, is that, if the AI in SDCs is a success, suppose we increase safety with SDCs, then will that encourage more and more driving, first in the affluent cultures and then worldwide, and since all driving will generate climate change emissions and destroy nature for decades to come, will those problems become worse rather than better? ]

[AB: NB's two questions: Q1 is about the development of AI technology. Q2 is about the use and societal impact of using such technology. The latter, especially, requires holistic (multi-aspectual) treatment, and especially taking seriously the aspirations of individuals and society in the cultures that lead the world. This is functioning in the pistic and ethical aspects - and currently those are dysfunctional (idolatry, stubborn refusal to change, and self-centredness). ]

[1.00.25]

# AB: Well, folks, it's gone over the hour. I think the conversation is going very well; it's very interesting. We could go on for hours and hours but I guess that some people might need to get away, and so on.

----- Some Philosophical Matters on AI [za223]

--- Ethics of AI [za224]

# AB: [invited SJ to comment]
# SJ: I have been just listening. I can see that, in discussions like this, from what I've seen so far from the ethical perspective, most of the discussion ends up going through opposing arguments from three angles: utilitarianism, deontology, and ethical relativism. I have not seen a discussion beyond those three angles. And I think Dooyeweerd could be helpful in taking the discussion beyond these; beyond deontology and utilitarianism. [za2f1] ***
# HS: We can also see it from the perspective of the juridical [aspect], whether it's lawful or not. [za2f2]
# Because maybe the ethical [aspect] is too difficult to discuss, because so many / but if we go from the juridical angle, whether it's a crime or not, it's more clear, right? For the missing???aspects. [za2f3]

# NO: I found it ironic that, towards the end of the lecture and questions, he [Stuart Russell] was talking about how the world is trying to come up with some rules around limiting some of the weapons. But the rules that people most seemed to agree to were about how to limit the size of the explosive at that low, per-person level. But everybody was OK with the fact that if it blew up a hell of a lot of people, we're not going to limit that. But if it killed ??? individuals autonomously like an assassin, that's not good. But the fact that we have these weapons already to kill a whole city, nobody's going to limit that. Don't you find that a little ironic? [za2f4]
# AB: I noticed, NO, as well, and I thought, "That is really horrible!"
# NB: But the concern of / it's going to change the way we fight war, is on the anti-personnel side. The possibility of killing 100 people at once isn't new. We maybe should get rid of that possibility, but it doesn't bring up new unsolved philosophical questions the way the anti-personnel ones do. [za2f5]
[1.04.23]

--- Conscience: The Forgotten Topic [za225]

# AB: [invited other comments]
# NB: TB?
# TB: I think, where I was thinking, there was something useful in the points raised.
# The one bit I got caught on there, the one thing that's not been mentioned, is conscience. [za2f6] ***

# That's an interesting question, isn't it. If you are getting the machine to go and plot the attack, whose conscience is that on? You have instigated the machine and the machine goes off and does its job [e.g. assassinating someone], but is that actually linking it to any humans involved, and their conscience? [za2f7]
# There is a kind of removal factor there, isn't there. It's a bit like paying for something electronically: it doesn't feel as painful as when you pay it in cash. # NO: That's a great find, I like that. There is a sort of analogy there; the transaction is happening in another place and you don't see it. [za2f8]
# NB: A very similar conversation has taken place with the whole idea of bombing: that you can kill a lot of people without ever looking at their faces. It is an active debate within the military world, I think. [za2f9]
# TB: And it's the same problem as mentioned before with the dogs and dolphins. [za2g0]
# TB: The conscience has been handed on to them. [za2g1]
# And does it actually then become a crime that is cruelty to a robot, as cruelty to animals or mammals? Big question, isn't it. [za2g2] ***

# SJ: Would it be the conscience of the developer, conscience of the regulator who allowed the rollout of the / or could it be the conscience of the society who accepted it and kept quiet? [za2g3] ***
# All of this could be considered. And it doesn't stop after the rollout of the machine. Human conscience continues. [za2g4]
# HS: Consciousness [conscience] of the people who capture the [ML] data.
# TB: Actually we are touching here the pistic aspect of Dooyeweerd quite deeply. Not any other aspect, is it, really?

[AB: Dooyeweerd's idea of subject and object might help us here. We could say that conscience and also responsibility is related to subject-functioning, i.e. an agent responding to laws of aspects. Object-functioning, as an object in some other agent's subject-functioning, incurs no responsibility and no conscience.

Now, humans can function as subject in all aspects, but computers only up to the physical aspect. However they can function as proxy-subjects, proxy to humans - the developers, the appliers, the users, and maybe also society that encourages it. Conscience and responsibility need not be confined to one agent, but spread across many agents. (Despite many legal systems restricting responsibility to a single agent.)
] [za2g5]

# SJ: Sometimes it's society who keeps quiet that can create the biggest crime. [za2g6]

[AB: c.f. Ezekiel 16:49, one of the three parts of why God destroyed Sodom is "unconcern". ] [za2g7]

# NB: Responsibility is not only an individual thing, it's a corporate thing, and we're not very good at taking that seriously, that as a society we are corporately responsible. I'm a voter; I'm a little bit responsible for what the armies, that my tax-payer money funds, do. Not entirely responsible, obviously, but a little bit. [za2g8]
# And we don't have a good way to think about that. [za2g9] ***
[1.07.50]

[AB: Actually, Dooyeweerd's juridical aspect can help us, because it does not presuppose individual responsibility, but responsibility as a whole. ] [za2h0]

# NO: That's a great point. Something that probably should have come out of this conversation, is: "That responsibility and accountability, where does it end up?" [za2h1]

[Ed. What is the link between conscience and responsibility? ]

--- Cultures [za226]

# CA: I think that different countries have got different cultures about these types of things. [Example] There are some countries, like Japan. The question was asked, "Why, during Covid-19, were there fewer Japanese people trying to help people who had Covid?" And they said that it was because of their beliefs in the community; like, they said that everybody has to have the same belief, and stand together. So if somebody is stronger or weaker they get kicked out. So it's like a different culture as well. [za2h2]
# SJ: It would be nice to see how these are received in different / these discussions, how these are received in these different cultures and countries. [za2h3]

----- End of Meeting [za227]

# AB: Well, the conversation has really livened up. Folks, do you want to continue or shall we bring it to a close?
# NB: I encourage anyone to continue but I at least have to step away. This has warmed my heart. Thank you very much, everyone.
# TB: Me too. Thanks.
# NO: I probably need to get a couple of things done.
# SJ: Me too.

# TB: Closed in prayer, thanking for space to think wider about the world and technology and a wider perspective, in protecting the world. Praying for what this might lead to, and contribution to knowledge that we are making through these discussions, that we produce from it. May be important to draw to the world's attention.

# AB: I'll plan the next one for the first Friday in May. Presumably for everyone here this time is right? [agreement]
# Friday 6th May provisionally, the same time. [za2h4]
# Next Reith Lecture will be on AI and economics.

# HS: I owe you a draft, on aspectual ML. Are you familiar with LaTeX? # AB: Yes. # HS: The paper is in LaTeX.

[1.14.11]