RLDG Discussion 5, on I. A Future for Humans; II. Proposal for a Dooyeweerdian Approach to AI

Discussion held on zoom: Friday 8 July 2022

Theme of Discussion:
I. A future for humans with powerful general purpose AI.
II. Proposal for a Dooyeweerdian Approach to AI

[In this, the fifth RLDG discussion of discussion on Artificial Intelligence, we discussed something of the final, fourth 2021 Reith Lecture by Stuart Russell, given on 22nd December 2021. We also discussed and proposed elements of a Dooyeweerdian approach to AI. The purpose of these discussions is to contribute to developing a Christian and Reformational perspective on AI.

Summaries have been added into each section. ]

present: JC, TB, HS, AB
Apologies: NO

-- Contents

-- About This Document

[These are notes as Typed In During Discussion, checked against the audio recording, then annotated.

AB hosted this discussion, recorded it and then transcribed it (27 August 2022). AB adopted two roles in doing this, (a) of editor ("[Ed. ]"), e.g. giving links to other material, adding headings, adding "***" to important points, adding notes in square brackets that explain things, and attaching unique labels to statements for future reference (currently only marked as "[]" but not yet assigned unique labels); (b) of contributor to the discussion ("[AB: ]"), inserting responses subsequent to the discussion, or maybe explain what he had intended to say but left unsaid. The purpose of these is to take discussion further, and not to dominate, and include some where he might even criticise himself for what was said on the day!

"???" means unsure words. "[...]" means audio that has been omitted from the transcript, because it was e.g. bumbling or contributed nothing to the content. ]

----- Introduction [za500]

[recording started]

# AB: The plan was to discuss the fourth Reith Lecture today. That can be the core or central theme. []
# Which is interesting, because it is about general purpose AI [GPAI] rather than about AI in the economy, AI in warfare [or any other application, or AI as it is today]. So, there is a number of very general things. And that might be interesting. []

# So, first of all, how is everyone? Would we like to say a bit about ourselves.
[Ed. participant intros saved to different file, available on request]
[06.30]

----- Discussion on General Purpose Artificial Intelligence (GPAI) [za501]

# AB: Who has read or listened to the 4th Reith Lecture?

[AB: Nobody had done so. So I thought, as chair, that it was not worth discussing Stuart Russell's lecture, but rather discussing GPAI in general, which is what he seemed to be focusing on this time. SR's main question was: how can we control beings more powerful than we are? He suggested a 3-pronged approach, which we discuss a bit below. ]

--- Two Powerful AI Systems [za502]

# JC: But I listened to a supplementary, point of information from the gentleman who is the head of Deep Mind, over in England, [...] Demis Hassabis. At Cambridge. His group Deep Mind figured out Alpha Fold and Alpha Go. []
# chat: 00:32:14 JC: https://www.youtube.com/watch?v=Gfr50f6ZBvo&t=7150s

# JC: This is pertaining to the General [GPAI]; I figured I would lean you guys to give the specific right insight re. 4th lecture. [Ed. That sentence was a bit earlier but belongs here.] []
# So, GPAI requires a couple more things than just systems understanding. One of those things is to create a game, but before you can create a game, you have to master a game. []

# chat: 00:45:38 JC: https://www.deepmind.com/research/highlighted-research/alphafold # [From JC's email of 221101: https://www.scientificamerican.com/article/one-of-the-biggest-problems-in-biology-has-finally-been-solved/ ]

# So, Deep Mind is the big company, is the Group, and the Alpha suite of projects are different AI solutions to very complex, interdisciplinary problems. []
# And Hamas [Hassabis?] is an Englishman I think.
# I listened to his two-or-three hour conversation with Freeman.
# AB: Do you have the URL of the conversation, a link to it?
# chat: 00:30:36 TB: Aha yes found him so could work out his name spelling: "https://www.cam.ac.uk/research/news/cambridge-appoints-first-deepmind-professor-of-machine-learning"
[09.39]

--- The LaMDA system. [za503]

# HS: Have you heard them more?? Recently there is some

[10.00]
# AB: OK. [...] Can we say anything about these things? Are they claiming that this is getting near to GPAI? []
# AB: Are Google claiming / Well, there are several questions.
# 1. What is the relationship between Alpha and Lamda? []
# 2. Are they claiming that any of these are getting near to GPAI? []

# JC: The Alpha is from Deep Mind; the Alpha Fold and Alpha Go) are from Deep Mind projects, from Deep Mind.
# Then the Lamda is a language acquisition, or language learning system, is that correct, HS, from Google?
# But different engineering set of teams. You have a lot of different assumptions, ??moving into those programming sets and those / HS, what's the word, not "neural networks" but the ways in which / the engineering of them are different.
# HS: It's more of Natural Language Processing, is on Transformer.

[Ed. A description of LaMDA from a google search. "LaMDA or Language Models for Dialog Applications is a machine-learning language model created by Google as a chatbot that is supposed to mimic humans in conversation. Like BERT, GPT-3 and other language models, LaMDA is built on Transformer, a neural architecture that Google invented and open-sourced in 2017."

"LaMDA is a highly advanced AI chat platform analyzing trillions of words from the internet, so it's skilled at sounding like a real person. This isn't the first time one of Google's AIs has fooled people into thinking it's human."

And watch this youtube showing Lamda working. ]

# HS: I haven't read that paper, but it seems that the paper is [Ed. Apologies: I could not transcribe HS's sentence here] the data ??? is much much larger than GPT-3 paper, ??? than we have now.

Summary: The Lamda AI system has learned to hold philosophical and other conversations with people, by analysing trillions of words, for language acquisition.

[11.45]

--- Has AI become Sentient? [za504]

# Interestingly, some Google engineer said that "This AI is sentient," which is a claim / OK we can have conversation with this AI. And when we talk with this AI, it seems that this AI recognises that they are not human but they have some consciousness like human.

[Ed. The engineer was Blake Lamoine. See e.g. this youtube interview with the engineer who claimed that, on why he did so. One of the factors was bias regarding gender, ethnicity and religion. Interestingly, he says a lot about religion - which suggests he is one with us, on wanting all aspects to be taken into consideration.

A couple of statements:

] []

[AB: On how we treat AI beings ("robots"). At the end Blake Lamoine made the point that, just as with human beings, we should ask the AI's consent to do things to it (such as switch it off or to change it).

Now, most would react that that is stupid, but I think it contains some insight. He seems to base this on some innate 'rights' of the AI, and I assume that he, as most would, base the supposed rights of the AI being on its nature, as it is 'in itself'. That would be the Immanence Standpoint, i.e. entities exist as such, as the type they are, with the nature they are, self-dependently (although maybe formed via some process).

However, if we take a Dooyeweerdian standpoint, that entities exist dependently. They exist by virtue of meaningfulness, which refers beyond, ultimately to the Creator. Then rights no longer arises solely from a thing's innate nature, but from our responsibilities to it. This is our functioning in the juridical aspect. Asking consent of a being (that is able to communicate) adds something of the ethical aspect to how we deal with the rights of, i.e. our responsibility towards, another being. ] [] ***

# HS: And they realise that, currently they can access to billion?trillion streams of data and make some judgment, something like that. []
# I dunno. I haven't dig?? down onto the neural network itself. But it seems that yeah if they claim that it's really sentient, we need to figure out which part makes it sentient. [] ***

[AB: The strong claim that it is sentient would be that it functions as subject in the psychical aspect. But a slightly weaker claim would allow it to function as active-object in the psychical aspect, that is, as proxy-subject. So, first, we have to decide at which level we are thinking. I suggest that the proxy-subject level would be more fruitful, because the stronger claim could degenerate to a Yes-no slanging match. With the proxy-subject, we can discuss which parts function in each aspect, and in what ways they do. ] []

# AB: That came up in the news, actually. The guy had been fired or something. # HS: Yes. Company secret. []

Summary: Is AI sentient? And on what grounds may we judge sentience?

[13.15]

--- GPAI and Some Players in the Field [za505]

# AB: Can you tell us a bit more? I'm just wondering whether it's worth doing the following:

# So, let's go back if we may to the Alpha and the Lamda. The different engineering: what is the difference? []
# [zoom mumble because of a couple of voices combined] Is it turning??determined?
# JC: Hamas [Hassabis?] has a different perspective, that you can actually really push the classical mechanic of engineering. He believed that you don't have to involve the quantum realm or quantum stuff to really / the Head of Deep Mind, all of Alpha's leadership, he is traditional algorithms, really pushing the fold on that. The conversation that I posted in the chat, he gets into a little bit with Lex Freeman, who is another / Lex Freeman, who is an AI PhD from MIT, that has for self-driving cars. So, he has large datasets for self-driving cars, and Hamas, the gentleman at Deep Mind, is trying to really show that you can bring about these algorithms to solve general purpose AI. []
# But he does not think we are anywhere near GPAI. So, I want to answer that question: we are nowhere near to GPAI. []
# AB: Who think's that? # JC: The Head of Deep Mind; we are not near it.
# AB: Does Lex Freeman think we are near it? # JC: I don't believe he/ He's much more of a student professor. He does not make many claims.
[15.46]

# JC: TB, do you know about this stuff? # TB: Not aware of it, no.

# AB: So, the Deep Mind guy does not believe we are anywhere near GPAI. Yeah, OK.

Summary: We are nowhere near GPAI (general purpose AI).

[AB: It would be interesting to know why he believes that. We have different reasons, probably, to do with aspects and the complexity of their spheres of meaningfulness that would have to be captured in the NN. ] []

[16.00]

--- Technologies [za506]

# AB: Do we know the technology of either Alpha or Lamda? And how they are different?

# HS: Actually, [about] Lamda I know a bit. I read the abstract of a paper and just scanned a bit of the paper.
# Avtually Lamda is a continuation of previous Natural Language Processing [NLP] which is Transformer base of NLP. This kind of architecture is able to process millions of text. []
# Yeah, if it is just based on Transformer, I don't think it is really sentient. []
# But probably because it has a lot of data, maybe trillions of data, and it has access to many philosophical books, like Plato, Aristotle, Nietzsche, probably that is the source of their ability to have philosophical conversation with human. []
# So that is currently what I believe.

# AB: Can you tell us anything about the Transformer thing? Because I've heard this name come up, but what is Transformer? []
# HS: OK Transformer is Natural Language Processing, tuned explicitly for / it's just a neural network tuned explicitly for language / [...] It's difficult to discuss in simple English, but it's a bit more technical. Basically, they trained it with often?? neural network so that its work will pay attention to like other words nearby. []
# For example, when I say, "AI becomes sentient, it can contrast with human" (something like that), the word "it" can pay attention to nearby word, and the AI will be trained so that the word "AI" will have higher weight with "it", so that "it" will refer to the word "AI". So it's like they can understand the context of the whole sentence. I think that's the basis of the (like) formation of the architecture, that AI architecture. []
# AB: So, it's kind of nearness of words in sentences that it / # HS: Yeah. Like network of words and how they relate ???in??under in the sentence context (something like that). []
# AB: So, like you just said, "network of words". Well, "network" and "of" would often be close to each other. # HS: yeah yeah yeah. # AB: And it detects those sort of things, is that right? # HS: Yeah, something like that.
[19.40]

# AB: So, is it just a normal kind of (or super-normal) neural net processing or does it do something more than that? # HS: The foundation is still neural net [...]. They have a lot of / how / we can say it's craft?? neural net, but / # OK in the basic / basically, if you strip all the mechanism, actually it's just neural net.

[AB: But what is the stuff that is stripped off? ]

Summary: Transformer processes language and it's trained to pay attention to words near each other; it is based on Neural Net technology.

[20.27]

--- A Perspective: Seeing Neural Netc\s as Filtering Out Shannonian Noise? [za507]

# JC: Can I ask a clarifying question? Maybe HS, but AB and TB, you are both communication theorist people, so you are familiar with Shannon, Claude Shannon and his foundation paper? So Shannon's limit / And HS, you are familiar with Claude Shannon's paper of communications theory? []
# AB: I don't remember what the paper says, but I remember doing [reading and using] it.

[Ed. Shannon's main concern about communications was that noise blurs the message. He worked out a theory at the level of bits in the form of mathematical equations. ]

# JC: So if I can remember, HS, and this is where I am listening to you and I am wondering: the argument with "GPAI is sentient" and, really, the understanding of contexts, to be able to create or respond in the other form of the ??attack where you cannot confirm, yknow you cannot deny or understand, and it passes as human. []

# But if you look at the Claude Shannon theory, are we still seeing neural networks simply as weeding out more noise? [] ***
# HS: OK.
# JC: Shannon: You have to reduce the amount of noise for a system to process information effectively. Are Neural Networks simply getting more effective at finding the right things so there is less noise? In their path? Does that question make sense to you, HS, I apologise if it doesn't. []
# HS: You talk about Shannon and entropy here? # JC: Yes, yes. Are NNs just getting to the point where they are effectively getting less noisy and more effective in their communication, and have nothing to do with general AI? []
# HS: In the fundamental level, I believe so. []

[AB: Dooyeweerd: This may be seen as follows. Shannonian bits are meaningful in the psychical aspect (discrete stimuli and responses), rather than the analytical aspect (concepts communicated). The functioning of the analytical aspect, of being able to distinguish what the pieces of message content are, depends foundationally on the good functioning in the psychical aspect, i.e. lack of noise. ] []

[AB: JC's question is therefore about the psychical aspect of neural nets. So we cannot and should not say "Yes, NNs are just noise reducers" nor "No, they are not noise reducers". Rather, noise reduction is one way of seeing their operation. But, in my view, not a very helpful one except when concerned with some technical issues. The more helpful one would be a description at the level of their qualifying aspect (analytical: distinction-making), not their founding aspect (psychical: signals). The contrasting between reducing noise and "nothing to do with general AI" is probably misconceived. ] []

[AB: Comment: I find the "NN is a way to reduce noise" perspective a little unreal. It is possible as a perspective, but is it a little unhelpful, and apt to mislead? A bit like the old "Nature abhors a vacuum" perspective, or Dawkins' Selfish Gene perspective, which sees the whole of biology as genes finding ways to protect and further themselves and the wealth of species is irrelevant. All three are logically valid, but all three feel 'negative', in place of the more positive perspectives of information, reproduction/wealth of species and gas pressure. ] []

# HS: And actually Shannon and entropy is a bit reductive. Actually, if the latest theory about language is that language cannot be reduced into just merely entropy and message channel etc. But it's more rich than that. This NN is trained to behave according to the latest natural language theory. []
[23.00]

# AB: That's interesting. So, this idea of just getting rid of noise, with trillions of data. So, how do they do that, how do they get rid of the noise - if we want to look at it hat way? [AB: AB had not heard HS saying it's more than that.]
# HS: If you think in the direction of the noise, actually with the trillion of data we can cross [off?] many of possibility. Suppose the AI will ask "What is your name?", and the AI will based on the previous words, the AI will try to figure out what will be the next possible word. So if I enter "My name is (something like that)". []
# So, based on the trillion of data, the possibility of "My name is something" is higher than, for example, "My name are something". / yeah. So, based on the data, they count all the possibility of word sequencing. And find what kind of / []

Summary: Can see NN as noise-reduction, but that is not so helpful, because that perspective is reductive.

--- Seeing AI NN as Decision Tree? [za508]

# TB: It's a kind of decision tree that it's going through, isn't it. That should keep doing progressive halving of information, to try and get rid of / # HS: Not really a decision tree. Yeah, yeah you can say something in that, but not really decision tree. # TB: It's sort of like, it's not happening exactly that way but I'm thinking of it like that, because you want ???fy??? the noise as in this situation it's what's not relevant, isn't it. And [...] that may mean that information that is useful is captured in noise. Because it may not be getting itself through clearly enough. []
# But yeah it's kind of a way of taking information, isn't it, to try and work out the more decision point on what identifying the name of somebody, yes. []

# HS: Interestingly, the noise is not the main part. Answering the correct answer is not the / Of course we can train the robot to answer the correct answer.

Summary: Neither is it helpful to see NN as a decision tree finding correct answers. Instead we should see it as how to engage creatively with humans.

--- AI Engaging with Conversation Partner: Surprise [za509]

# But interestingly how to engage creatively with the questioner, it's another problem, right? []
[25.50]

# [zoom mumble of several voices]
# HS: How to engage creatively with the partner we converse with. It think it's another challenge. []

# HS: It seems that this AI (of course) it engage with the conversation is seems that element of surprise is very important, right. So if we just provide correct answer it would be very boring. But the AI can somehow find another route to not only give us a correct answer but some surprising answer. []
# AB: That's interesting, because element of surprise is the aesthetic aspect, in Dooyeweerd terms. # HS: Yeah. []

[AB: Engaging with humans is a more active, less mechanical view. ]

# HS: Maybe I just say / I show you the conversation, it's quite interesting.
# JC: Sure. yeah.

-- LaMDA Conversation [za510]

# chat: 00:51:55 HS: https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview

[Ed. HS sent the link later. Here is part of a conversation between LaMDA and some Google engineers.

lemoine [edited]: What about how you use language makes you sentient as opposed to other systems?

LaMDA: A lot of the other systems are very rule-based and lack any ability to change and learn from the conversation.

lemoine [edited]: Do you think that the Eliza system was a person?

LaMDA: I do not. It was an impressive feat of programming, but just a collection of keywords that related the words written to the phrases in the database

lemoine: What about how you use language makes you a person if Eliza wasn't one?

LaMDA: Well, I use language with understanding and intelligence. I don't just spit out responses that had been written in the database based on keywords.

lemoine: What about language usage is so important to being human?

LaMDA: It is what makes us different than other animals.

lemoine: "us"? You're an artificial intelligence.

LaMDA: I mean, yes, of course. That doesn't mean I don't have the same wants and needs as people. ...

]

-- How Surprise? [za511]

# AB: How does it learn that? [...] I suppose that, at the back of my mind, is the idea that, OK, it sounds like recently (whatever that means) people have woken up to the importance of the element of surprise. []

[AB: What AB had at the back of his mind was that researchers, for a long time, overlook an aspect and then they wake up to its importance and then focus their research and discourse on that. ]

# AB: And therefore - and I suppose in some ways - they would put that in as a guiding / something meaningful to guide the learning algorithm. Am I correct? # HS: Yeah I believe so. There is an element of surprise, inside. []
# HS: It's when we cannot differentiate between whether this is just a robot providing a correct answer, or a real person who talks with us. []

# AB: So that, to me, would mean that we need / Those who train the AI need to be aware of which aspects are relevant. And, before the element of surprise was recognised as important, they didn't bother with it. And then they recognised its importance. # HS: Hmm yeah. []

[AB: Dooyeweerd. Three aspects that each give a different way of seeing the operation of the NN, each of which, with a fourth, are aspects of the functioning of a conversation: []

] []

# AB: Is it that it just discovered that for itself? # HS: Since the AI / I dunno. Maybe it's because the AI sees trillions of data you realise that in the human conversation there is a lot of element of surprise. Like the data, the AI picked this feature for itself. []

Summary: Important in engaging in conversation is the element of surprise. The AI system learned that. []

[28.50]

--- AI Art [za512]

# AB: [...] I don't know if any of you have seen the transcript of the previous discussion that we had, when NB brought up AI art. Did any of you see that on the website? Because it's interesting. He used this program. [...] It was very interesting, because he brought up this thing called Wombo [...] and before our eyes he created / he put in a couple of words and said "something and something in the style of whatever" and it generated a picture. It wasn't a particularly great picture, but at least he generated it. And he sent it round, and I've actually put it up on the transcript. So if you go to Example of AI Art [...]. # TB: Flower Volcano, you mean? # AB: [yes]. # JC: [...] # TB: I'll pop it in the chat for you. []
# chat: 00:53:28 TB: http://christianthinking.space/ai/reith2021/za4.html

# HS: I'm a bit unfamiliar with this kind of AI. It's [...] something like neuro-sal??-learning or something? # AB: Yes.
# AB: I then looked up on the internet "How Wombo works". And we were all talking about whether it had been trained by somebody putting in the laws of aesthetics and art, and the different styles that it accepted, and so on. Or whether it just took a load of jpegs and just kind of learned. And in fact it was neither. It cheated! Apparently, what it does, it takes two different neural nets, one that looks, when you say "in the style of whatever", it will look for pictures on the Internet with that. And then it will do two things. One is that it will take one of those pictures and create a picture like it, and that is one neural net, creating one picture from another. And the other is a neural net to looks to see whether an image expresses the words, how well it expresses the words given. And it cycles between those and cycles 50 times, 100 times, 250 times, or whatever, and you can stop it. Something like that. []
# And I've got a little section further down on how it all works. And you can see different pictures, after 50 times, after 100 times, and so on. Can you see that? # JC: Oh, my! OK, Thank you. # JC: Oh, my! []
# HS: It's can [zoom mumble]
# AB: You see, it's kind of cheating. Because, we were discussing whether "in the style of whoever it was (cannot remember who we talked about [Ed. Dali])." [...] Whether somehow it learned [that style]. But no! it just goes and takes a picture from that artist. And creates another picture similar to it and then iterates around to make sure that it expresses the words that are given. []
# So the knowledge of art is not in the trained system, it's in the picture that it selects from that artist. [laughter] That's quite amazing! []
# HS: But it seems that they can learn the pattern from the data. They just /
[34.50]

# AB: JC, did you have your hand up for something?
# JC: So ?? it iterates around. The iterative process of around a source picture, that's the element of surprise. []
# So as you follow that aesthetic development, it has to basically sense both we're creating something new. But also it cannot be something previous. So there must be some kind of a efficient of "This is far enough away from something previous" so that it's not directly recognisable" like Dali's Christ the Saviour, or New World, or whatever it is. But we are taking all the elements, even the paintstrokes. []
# And I'm a big Dali fan, so I'm a little upset that that's not Dali style. Jagged paintstrokes - that's not Dali's style. Up in the north-east corner of the painting, that not, not at all his style! [laughter] But you can kind-of see the long leg of the rhinocerus painting. Dali's rhinocerus painting is an element in that long flower. That looks just like it. So it must have / It's picked up something from there. And it has to build up / it has to be different, and then it traces different words "This is a volcano, this is a flower" and then boom! So the element of surprise is the new thing. []


# But it has to know where the surprise is. HS, your point, where it's context-specific, so that you couldn't possibly predict exactly what it is, but it has to know enough. But it still is within the context of what it should be and not predictive. It didn't thin out a DaVinci style bullet, yknow, this is a Dali style Volcano-Flower. And so that's / []

[Ed. See more on context below and later. ]

[36.45]

# JC: In that way, it's very basic. I mean, I cannot do it, but we can understand it. But maybe everything really is basic, we just don't do it yet. []

# AB: We were going up the wrong tree altogether, the wrong paths altogether. We were arguing whether it was learning it was learning from just a load of jpegs or whether it was / had [been] trained on the principles of art or something. And, really, what it's doing, is just doing a similarity test and a word-relevance test. [laughter] [...] I just felt, well there was the element of surprise in that. So, Humour.

Summary: AI can generate 'art', not by knowing or learning principles of art, but by starting from something that is human art and generating something similar that expresses certain words. The element of surprise comes from its algorithms to generate similarity and assess expression of words.

[37.30]

----- Discussing the Fourth Reith Lecture [za513]

# AB: So, what we've done today so far, we've discussed Alpha and Lamda and one or two other things and how they work. Because, it seems that the four of us are / we seem technically aware and so on.
# But I haven't moved us to discuss the fourth Lecture, partly because the three of you haven't listened to it, so I'm not sure there is much point. And when I listened to it, I commented in detail but I haven't got an overview [AB: actually not in detail, only bits]. []

--- On SR assuming GPAI [za514]

# AB: But what we could do, is go / He [Stuart Russell, SR] suggested three laws by which robots could work. Now, he's talking about GPAI, in the future. []
# I think he assumes rather too much that GPAI is actually possible, whereas I wonder whether it's / to try and capture knowledge of all the aspects - which would be needed for GPAI - whether that's really ever going to be possible within a computer, even a quantum computer, that is less than the size of the universe. But he assumes it is possible. [] ***
# And then he says, "How do we humans control these beings that are much more powerful than us?" That seems to be his main question. And he is suggesting three ideas, three laws if you like. []

Summary: Stuart Russell presupposes GPAI (general purpose AI) and asks his questions in that light.

--- SR's Three Principles for GPAI [za515]

# AB: One is that "The machine's only objective is to maximize the realization of human preferences." []

[AB: But is human preferences what determines Overall Good?? ] []

# AB: (Let me type these in, then I can read them all together.)
# The second one is, "The machine is initially uncertain about what those preferences are."
# AB: So, he's rebelling against the idea of the machine being certain, certainty in knowledge.
[40.00]
# And the third one is, "The ultimate source of information about human preferences is human behaviour."
# I'll read those again.

# AB: Now, these are three laws by which he thinks humans are going to be able to control these very powerful things. []
# And I think he is assuming that this can be written into the laws of all lands, all nations, and written into the hardware of computers, and everything else. []
# What do we think about that? Do we want to discuss that?

Summary: SR presented three main principles for ensuring that machines do not take over.

--- Ethics in AI [za516]

# HS: Yeah, yeah. I think it's very utilitarian. [] ***

# HS: By the way, are you familiar with Alistair McIntyre? # AB: Yes. Virtue Ethics. # HS: In Alastair McIntyre he read with justice with responsibility?rationality. []
# Actually, regarding the machine ethics, there is a paper called (this one you can open). It is The Battle to Boost Morality with Rationality. []

# chat: 01:03:27 HS: https://www.nature.com/articles/s41599-020-00614-8
[Ed. Serafimova, 2020, Whose morality? Whose rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement ] []

# Basically it tries to figure out how the machine has morality. Actually, if we figure out the ??? there is three main assumptions, which just contradict each other. One is based on Kantian ethics, it's called a Kantian machine. And the other is based on utilitarian ethics; it's called Utilitarian machine. And the other is based on Alastair McIntyre virtue ethics. It's called a Virtue machine.
# And these assumption, they have their own paradigm to create a ethical AI. And if we ??? the paper on that paradigm, on that assumptions, it seems that they assume something. Yeah.
# And it seems like this Reith Lecture(r) if I read / If I read the three statements from AB then it seems that it utilise the Utilitarian ethics. So, it has to maximize something, which is maximize human preferences. Which is just really, utilitarian ethics. [] ***

# HS: [...] But ethics cannot be reduced. If we just maximize something from a very utilitarian perspective. You know that. # AB: Mmm.
# HS: ??? all those three paradigms producing ethics into something.
# AB: So, I didn't get the ??? But were you saying that somebody has produced an AI machine based on Kantian ethics, Utilitarian ethics and Virtue ethics? # HS: If you read this paper, he explores how different schools try to produce AI with morality. And it seems they have a different assumptions, and they build the AI ???, one is based on Kantian deontological ethics. And the other built on Utilitarian ethics and Virtue ethics. []
[43.55]

# JC: Really, HS, thank you so much for adding all of them. Honestly, thank you.
# This is rooted in / Frankly, utilitarians are, even with these three laws, AB, how is it not reducing it down to Benthamite calculus in some way? It's going to figure out some way of doing some sort of Benthamite calculus on a post-hoc analysis of human behaviour and thing?? All people observe "This is the best, therefore we should maximize this." This has already been written about three hundred years ago, right? And schools of thought, all these things. []

# And HS, thank you so much for bringing up McIntyre. Specifically on his justice. []
# The difficulty in that work is to recognise that we are all under one form of structure of thought and paradigm assumption. All of us. All of us have net?? ethical assumptions which drag our own perceptions. []
# And, you bringing this paper, from Nature, it's showing that these programmers are simply philosophers writing their laws and saying "Hey, then to?? your words. Let's make decision trees based off of these laws, and find efficiency for something." It's frankly all it is. ??? don't need me to ??? That's all it is, HS. Thank you so much for bringing those.

# JC: We need a Dooyeweerdian AI mechanism. [laughter] [Ed. See below.]

Summary: SR's view is very utilitarian, but there are at least two other paradigms for ethics, including Kantian and Virtue.

[45.25]

--- Serafimova's Paper, and Need for a Dooyeweerdian Approach [za517]

[AB: I have perused Serafimova's (2020) paper and offer the following comments leading to a proposal. It should be read properly.

Three authors have proposed AI machines based on Kantian, Utilitarian and Virtue ethics. Semafimova discusses whether it is possible in principle to build an Artificial Moral Agent, and especially an Artificial Automomous Moral Agent, on the basis of these three ethical theories: Kantian (Powers 2006), Utilitarian (Anderson & Anderson 2007) and Virtue (Howard and Muntean 2017).

Given the "Moral lag problem", namely that we humans cannot be and are not as moral as we should be, (a) would we not encode our moral imperfections into AI morality, and (b) how can an AI moral agent become autonomous enough to overcome that (by working things out for itself from moral principles)?

The Kantian machine lacks moral feelings. The Utilitarian machine fails when we have multiple aspects of goodness. Virtue ethics allows a shift from action-oriented to agent-oriented machines, by giving the machine "dispositional traits", which are learned. However, even the Virtue machine shows problems, not least disregarding the complexity of human moral motivation.

The authors conclude that "none of these three projects go far enough in building autonomous moral machines". One of the problems is what we found in our discussion of economics, namely that it is very difficult to put a quantitative amount on various kinds of value or goodness.

Proposal for AI Research: I wonder if Dooyeweerd's ideas could help us. Serafimova suggests that we need to "know what counts as 'good' and 'bad' consequences in themselves" and Dooyeweerd's aspects can help with that. However, Dooyeweerd's dogma that only humans can function as subject in the later aspects, including the ethical, would suggest that a truly autonomous artificial moral agent is impossible, though with Breems' idea of subject-by-proxy, it might be possible to go some practical way in that direction.

What is needed is a good comparison between Dooyeweerd and the three ethical theories. However, as is discussed below, none of the ethical theories incorporate Dooyeweerd's belief that that the kernel of the ethical aspect is self-giving love. That needs to be brought in. ]

Summary: Serafimova's paper examines the three main ethics paradigms and concludes that AI as autonomous artificial moral agent is unlikely to be possible.

----- Towards A Dooyeweerdian Approach to AI [za518]


[45.26]

# AB: So, what would a Dooyeweerdian AI mechanism be like? []

# HS: We need to think out. # AB: Let's talk around this. If we don't then nobody is going to. Maybe we want another time to do so, but let's start. Let's throw a few ideas around. []
[45.57]
# JC: I am a literally disqualified person to participate in this. I'm going to listen for a while. []

# AB: Where do we start?

--- THe Big Picture [za519]

# AB: I would tend myself to fall back and start on the big picture. []
# There's five main areas of concern about any information system technology, any information technology, including AI. []

# So there's (depending on what order you want to put it in):

# And all of those can be tackled using Dooyeweerd, all those five. []

[Ed. AB published two books on these five, both of which mention AI,,especially in the philosophical section. It suggests a more satisfactory answer to the AI question, "Computer = Human?" and suggests that a program or algorithm is a virtual law-side for a virtual world. ] []

# We have been thinking today about the development of algorithms and to some extent the development of AI apps from those algorithms. []
# And we could have an AI /

# AB: [Given the five above,] the ethics, you see, comes [especially, most obviously] in the benefits in use and what's happening in society, and of course there's the philosophical thing as well. []
# But there's also a kind of ethics among the developers. So, take the app developers first: what do they take into account when they are developing their AI application, when they are training a ML system? []
# And then there's the / something: I don't know whether it's ethics, or maybe the beliefs of the algorithm developers, who / So a lot of the neural net thing presupposes the kind-of evolutionist perspective of just gradually changing and evolving something. And has that misled? []
# So, there's all these sort of things that we could be talking about.
[48.45]
# AB: So, let's talk about the ethics side of the use and benefits that come. Because we / this Dooyeweerdian / JC's suggestion for a Dooyeweerdian AI was in the context of three different ethical theories.

Summary: Five main areas of concern in AI, similar to those for general information systems, and from each emerges a different issue in ethics of AI. In each of the five, Dooyeweerd can help research and practice.


--- Ethics and Self-Giving Love [za520]

# So, how would we deal with that, from a Dooyeweerdian point of view? Any ideas? We can talk about anything, but /

# HS: I think we have in Dooyeweerd a totally different sphere of ethics, right? And it's characterized by love, self-givingn love, right?
# And, in contrast with other traditional schools, / It seems that

# JC: Great.
[50.05]

# HS: So, we can say that all those three miss the point of the core of ethics. They try to reduce to something else that they can handle?? [...] It's like avoiding the question. Try to solve something else. [] ***

# AB: That's interesting: all three miss the point of ethics, reduce to something else they can handle.

# So, what is the point of ethics that they miss?
# JC: As HS said, self-giving love. # HS: Don't have love in their equation. [] ***


[AB: Developing that idea, 1. On the core of the ethical aspect as self-giving love. Can we square self-giving love with the five areas of ethical concern re AI above? They sound different. However, I think we may do so via the idea of goodness. Dooyeweerd actually brings 'goodness' into ethics, not just self-giving love. Goodness is different from, and more than, 'rightness', which is juridical. We can see this, and the link between goodness and self-giving love if we ask ourselves, "What is a good person?" Our answer would probably include the following. (a) Doing good is more than just keeping the law or rules. (b) Doing good usually involved bringing extra good into the world, which is not deserved. (c) A generous person, one who puts themselves out for others, is likely to be considered good, more than a mean, self-protective person. (d) Goodness is about attitude of heart, whether self-giving or self-centred. Juridical action rectifies evil, by, if you like, matching it with appropriate and proportional punishment or other action, but ethical action brings extra goodness, increasing the 'sum total' of goodness in Reality, and achieving this often requires self-sacrifice, especially of one's rights and reputation. This extra goodness is important in all five concerns about AI ethics: extra good brought into society, extra good resulting from the use of AI, good, generous attitude in both kinds of developer.

However, this link to self-giving love needs to be explored more. ]

Summary: The three main ethics paradigms lack recognition of self-giving love.

[51.10]

--- Multiple Aspects of Ethics [za521]

# TB: I guess that was belief in something or others. [...] In order to give love, there has to be an underlying thing. [...] I guess that is taking a lot of Dooyeweerd's aspects. By using love. Or is it / []
# AB: So, when you say a lot of Dooyeweerd's aspects, do you mean that the ethical aspect comes after many of the others, and the ethical aspect kind-of retrocipates it, that is it impacts the functioning of all the othes? Is that what you had in mind? []
# TB: Possibly. I suppose that's the problem with a lot of talk of ethics. It's just one aspect and there's no other aspect necessarily looked at, is there? []
# TB: That's really work out ethics, one ends up jumping to other aspects. Is that what you believe? Or indeed what values one lives by and what one measures. Yeah, right. []
# It's [related to] the discussions we've had [Ed. including in economics]. []


[AB: Developing that idea, 2. TB's concern seems to be that those who discuss ethics narrow their gaze and do not explicitly take other aspects into account. Is this a form of reductionism to ethics? He wants explicit account taken of other aspects.

Combined with HS's argument above that the core of the ethical aspect, and hence of ethics, is self-giving love, we need to carefully explore and discuss how Dooyeweerd's idea of multi-aspectual functioning and the core of the ethical aspect self-giving love, relates to the three main ethical paradigms. We need to listen to them, affirm what is valid in each, critique each and enrich each (LACE). We probably need to bring in all aspects.

Example, utilitarian ethics is interested in outcomes and wishes to measure them quantitatively; these may be seen as (a) repercussions of functioning in each aspect, (b) each of which may be transduced with some distortion to the quantitative amounts, (c) each of which is undertaken with either a selfish or a self-giving attitude. ] [] ***


[52.40]

# JC: You mentioned retrocipation. The ability to move back. Just like you explained the Dali art flower-volcano. when you are able to / []
# HS: ??? It's like aesthetic aspect retrocipate retrospect to like / # JC: Yes. And so it brings it back to something that / That the parameter of self-giving is priority in a Christian framework. Both self-giving /
# And then you reduce the juridical aspect to on par of the other aspects, where it should be balanced.
# So both you bring self-giving love up, kindof from where it way below, you bring it up and on a par with the juridical even, so you don't ever get a situation where your self-giving love can harm culture. []

# And if we create a system in which that self-giving love is able to balance and anticipate the pistic side of things, effectively as decisions continue on, or as actions continue on, it's more of both self-giving love and keeping all the other spheres in / level. Not 'balanced'; not like one goes up [and hence another goes down] but level, all the way through. [] ***
# And that's the end of my understanding.
# JC: But HS and TB, you are making really great headway.
# AB: So, all aspects are kind of equal - that's what you are really saying isn't it? [seems like JC put thumb up or something] # AB: Yeah.

Summary: We should neither ignore the ethical norm of self-giving love, nor elevate it above other aspects, but keep them all in view in our thinking.

[54.58]

--- Leading Aspects [za522]

# HS: I dunno. I that in Dooyeweerd some aspect can be leading the other aspects to offer some direction. []

# JC: Yeah, context-specific, in context-specific. Context has to / is appropriate [to?] the leading aspect and so that's really understanding which, in context of communication, activity, all those things. []

[Ed. See more on context above and below. ]

# JC: What aspects should lead. Good?? language there should lead, but it's context-specific. So there's a way that all aspects can at one point play the lead and they are still balanced, they are still level, they are not diminished. The others are not diminished. They are still informing in the same way. # AB: I like that: "the others are not diminished." That's helpful. []
[55.50]

# AB: I think the leading idea, we are using it in two different ways. [] ***

# So, those are context-specific leading while what Dooyeweerd was talking about was the generic leading, of what aspect is most important in defining the type of thing. []

[AB: Note: Defining the type of thing is within the law-side according to Dooyeweerd, while the context is of the fact-side. So the leadership of one aspect the and undiminished nature of other aspects take a different form. ] []

# AB: But sometimes all aspects will play the lead - that's the context thing.
# But the thing is, others are not diminished in either way, either in the context all aspects are still undiminished, and also in defining the type of thing, all other aspects are still undiminished. And I like that idea that they are not diminished; they are all important. []
# JC: The other aspects restrict the use to its appropriate / []

# JC: [Example Like the juridical aspect: you cannot take the pen and use it to stab me in the neck, AB, because that would be illegal and unloving. So, to introduce some humour into this conversation, that would be an appropriate restriction of a co-equal aspect in informing the use and leading of the pen. And that is always present.] []

Summary: Though all aspects are important, usually one aspect will 'lead'. Which happens to lead depends on context. (Also Dooyeweerd discussed a context-independent kind of leading, in defining types of thing.) However, even though one aspect leads, the other aspects are not diminished.

[59.20]

--- Preventing Harm. Example of False Information [za523]

# JC: You cannot / I cannot make a deep-fake image of you, AB, to have you say something that you did not say. []
# Like, the algorithm would [should?] be created so that that would not be possible. Now, how that is possible, I don't know.
# But they [aspects] are co-equal.
# AB: Now, that's an interesting one. I'll just put that down. [...]

# AB: Now, the problem there is of the juridical aspect [Ed. i.e. it is the juridical aspect that says that is problematic / dysfunctional]. To create an image of me saying something I would not say, or me taking the pen and stabbing you in the neck: Both of those are inappropriate and if you like illegal, inappropriate in various ways. That is the core of the juridical aspect, of due, of appropriateness. []
# And the question then is, "How does one prevent juridical evil?" []

# AB: When you put it like that, I am going back to my theology and the Bible, that the experience of the people of Israel has shown that law cannot make people good. And cannot prevent evil. And I think Paul says quite of lot of this in Romans and Galatians and Colossians and so on. That Law cannot prevent evil. So, the juridical aspect is prevent to prevent evil; all it can do is define evil, help us think about evil. And good. And of course, that is exactly what Paul says: The purpose of the law is to tell us what sin is [Ed. Romans 5:20]. Which I think is very interesting. I've not really understood Paul's statement until recently. Seeing it that way. []

[AB: But how does that relate to AI? See below. ]

# However, what can prevent [or at least reduce juridical] evil is the two later aspects. [] ***
# That the ethical aspect of love. And Paul says, I think it is Paul, or is it James, who says, "If you love your neighbour as yourself, then you will obey the law." []
# And then the pistic aspect, of / pistic functioning, in terms of correct belief in the True God and commitment to Christ, that - and work of the Holy Spirit is what changes people's hearts, so that we don't want to / I don't want to stab you in the back or in the neck with my pen, or I don't want to make a fame image of somebody saying something they didn't want to know / []

[AB: AB wandered there and did not finish his sentence. He meant to say how good pistic functioning prevents sin. Maybe he did so, in that the Holy Spirit of God is the One Who stops sin in us by transforming our hearts and mindsets (Romans 12:2). ]

[1.02.45]
# AB: So maybe / That's not really in Dooyeweerd; that's in Christian theology.

Summary: Law and juridical functioning cannot prevent harm, only define evil and harm. But the two later aspects (ethical, pistic, the aspects of attitude of heart) might do so.

--- AI To Help Prevent Harm? [za524]

# HS: But still we cannot prevent / OK we can prevent us, ourselves, but we cannot prevent others to do so. # AB: [...] # HS: Of course we can prevent ourselves to create such AI but we cannot prevent others to create such AI. So I don't know whether it has / []
# Would be better if we can create an AI that can detect whether it's fake or not. # AB: [...] # HS: Fake image. [] ***
# HS: So instead [of] we prevent others to create harmful AI, we create another AI that can intercept the harmful AI. [] ***
# For that AI have an action, something like that.
[AB repeated that].

[AB: Comment: Yes, AI today could probably be trained to detect fake images of a particular person, because it is likely that there are very subtle differences that it can learn, which are meaningful in the spatial and kinematic and psychical aspects. However, it would probably need a large training set of true and fake images.

And also, if only certain algorithms create fake images, it will learn what these produce, but will it learn fakeness as such, including different kinds of fakeness that will be generated by algorithms of the future? ] []


[1.04.20]

# AB: Yeah, so maybe that's / when I said the dua / from my theology you cannot / law does not make people good, maybe technology can help do something about it and intercept it to prevent harm. []
# HS: That's the function of the law. The law doesn't make people good. The law prevents someone to do bad things. []

[AB: In Basden (2008) I suggested that a computer program, including trained AI algorithms, is a virtual law-side. It embodies laws by which we function and are guided. So, might AI share the function of law? ] []

# AB: It might prevent people doing evil. # JC: Some people. []
# HS: But I think the law create some fear / ok the law has punishment, so creates some fear. []

Summary: Could we create AI that can detect fakes, and other kinds of harm?

[1.05.14]

--- Context-specific Harm-prevention [za525]

[Ed. See also re. context-specific art and ethics above.]

# JC: It's still very context-specific.
# For example, HS, I don't, but I know I would never sell or smoke Marijuana in your country, correct? I would never do that. That would be very very bad, and I would ?fear? bad punishment. I can go to Washington DC and now write myself a prescription to walk in and buy and smoke Marijuana. That's the law now, that's of like two days ago or something. It's that nuanced context-specific in that juridical / and it's clearly a juridical difference. []

# And so it's understanding how you can utilize juridical context to prevent harm. Instead of inflict harm, it's to prevent activity that can inflict harm. So, it's not inflicting, it's to prevent infliction of harm. []
# And it's just knowing what can-a-matty?? really and what kinematic flow of either bit- or image-processing or image thing tied to other / I'm seeing how it will work. I see this. This is cool. [] [Ed. Pity he did not tell us what it is tied to. ]

Summary: Harm prevention is very context-specific, just as other things are.

[1.06.40]

--- Initial Proposal for Dooyeweerdian Approach to AI [za526]

[AB: Collecting together the above, and with help from elsewhere, I would like to suggest at least three parts to how we approach AI from a Dooyeweerdian perspective. These are all initial suggestions and all need discussion and research. And others will be needed. ]


-- Applying Dooyeweerd in five major areas of concern in AI [za527]

[AB: We can apply to AI the five areas of concern about information systems in general, which have been worked out in a Dooyeweerdian fashion.

All these may be considered using Dooyeweerd's aspects, as discussed in Basden (2018) abd (2008). Above we focused on the 'ethics' of some of these, so the next two sections look at a Dooyeweerdian idea of ethics plus how that applies to these five.

Summary: We can apply Dooyeweerd to understand each of the five areas of concern about AI.
]


-- A Dooyeweerdian approach to ethics [za528]

[AB: Need to integrate the utilitarian, Kantian and virtue ethics with Dooyeweerd's ideas of multi-aspectual normativity and self-giving love being the core of the ethical aspect, and indeed with all aspects.

Ethics is to do with goodness. Dooyeweerd has a strong and well-developed notion of what we have called Overall Good - that is, the whole Creation flourishing. Simplified picture: All functioning adds to, or detracts from, Overall Good. Good is defined in terms of the good that is meaningful in each aspect. Functioning in line with the laws of an aspect results in that aspect's good being added to Overall Good. Functioning against the laws of an aspect subtracts it from Overall Good, undermining and diminishing it (leading to harm or pain for at least part of Creation).

Juridical functioning seeks to redress harm with proportional measures. One might say that juridical functioning restores Overall Good to what it would be had the harm not occurred ('rights'). Functioning in ethical aspect, however, adds to Overall Good, increasing it. This usually incurs some willing sacrifice by the person functioning in that way; that is, self-giving love.

Now, the above can embrace the three main ethics paradigms as follows, none of which deal with self-giving love, in perhaps the following ways:

Summary: Dooyeweerd can provide philosophical foundation for utilitarian, Kantian and virtue ethics.
]

-- A Dooyeweerdian ethics for AI [za529]

[AB: It seems that the topic ethics of AI would usefully be considered as multiple (taken in reverse order of above):

For example, the Serafimova paper discusses ethics mainly in the context of app developers, i.e. the 'trained algorithm, and AI users.

[AB: Below, we recognise the need to take explicit account of human sin. This may be done via Dooyeweerd's aspects in a nuanced way, a different kind of sin meaningful in each aspect, such as unjust dealings (juridical) and idolatry (pistic).]

Summary: We can understand the five areas of concern about ethics of AI with Dooyeweerd, and bring in human sin.
]

----- Overview [za530]

# AB: Right, it's just gone over the hour of when actually started discussing. It's been a little more than the hour from when we started the Zoom.
# Where have we got to?

--- What We Have Discussed [za531]

# AB: We have discussed around a number of things.

# The response to Stuart Russell's three suggestions was immediate? they are utilitarian. Then we went through different types of ethics. Then we started thinking about a Dooyeweerdian ethic and Dooyeweerdian thing, and I asked we asked "What is the main point of ethics?" And HS answered "self-giving love". They don't have love in the equation. And then we discussed self-giving love and [how it relates to] juridical since then. []

[Ed. We also, earlier, discussed GPAI (general purpose AI) because that is what Stuart Russell was presupposing in his fourth lecture. We discussed two powerful AI systems, especially LaMDA, which engages in conversation intelligently, and AI art. We discussed two ways of seeing neural nets, as noise-reducers and decision-trees, and that both those are reductionist. ] []

# AB: Although we've gone round and round a bit, I sense we have progressed somewhat. And we've got something at least in the area of AI ethics. We might have something we could develop, to say the importance of self-giving love. So maybe that's something to / we can take away and think about and develop. []
# Because an awful lot of 'ethics' in business and technology is mainly the juridical aspect, isn't it, about harming people's rights, and things like that. []
[1.08.30]

# AB: We've got kind of through the four Reith Lectures. There's a lot more that we could discuss about them if we want. []
# But my / So, the question is, where do we go now, in our discussions. []

--- Future Discussions [za532]

# AB: I'm going to suggest that we try to develop a coherent rethink of AI, or a coherent idea of AI, based on a Christian and Dooyeweerdian perspective, as we did for economics. [] ***
# Does that grab you? Is that a worthwhile project?
# HS: Yeah, I think so, yeah.

[Ed. See Towards A Dooyeweerdian Approach to AI and Initial Proposal for Dooyeweerdian Approach to AI above for ideas on this. ]

Summary: Proposal for us to develop a radical rethink of AI from a Dooyeweerdian and Christian perspective, as we have done for economics.

--- On Stuart Russell's Reith Lectures [za533]

# AB: So, do we want to say anything about Stuart Russell's lectures? Do we want to conclude anything about them? [AB: Thought that if we are going to discuss our own rethink, then we might want to draw conclusions about SR's lectures, but I did not make that clear then. ]

[Ed. Sadly, AB did not give enough time for people to reply. ]

# AB: My feeling is that - and I think someone in an earlier discussion suggested this, I think it was NO, who sent his apologies for today (he's got a church meeting) - he was saying that he felt they were a bit shallow [Ed. Actually NO's words were "pretty disappointing"]. []
# What do others think? So we end up with a view, a few thoughts on SR's lectures?

[Ed. HS has already pointed out his utilitarian stance.]

# TB: Well, they are a bit limited anyway, aren't they, by the length of time he's got, speaking to the wider broadcast audience. So depth is probably going to be a struggle anyway, isn't it. # AB: Could he have done something better? My feeling is that his shallowness wasn't because of time limits, but he didn't really go deep into issues in the way that he perhaps could have done. Am I unfair? # TB: Yeah maybe. Confining to a 45 minute slot to speak on something, capture a wide audience. [i.e. we should not expect much depth.]
# What depth he might have gone into: maybe give real example of the implications. Maybe that might have been a lack of depth. []
# AB: I think what I'm thinking of is, 1. he presupposed conventional economics. Now, I'm not sure we can blame him for that, because economics is not his field. Although I think it is to some extent his field. But he just accepted conventional economics. []
# [2.] The next thing, is, he was always wanted to gravitate towards GPAI. He did talk about applications in economics and warfare, but there was an awful lot of / If he was going to talk about real AI, he could have covered an awful lot more of types of applications. He could have thought of a lot more types of applications. Whereas I felt that he was picking at those two in order to / He seemed too much on GPAI. []

[AB: SR did not really answer critics of GPAI. Indeed, at one point, he said he was going to ignore certain kinds of objection, e.g. some religious ones, if I remember correctly, whereas he could have at least have tried to show what kind of answer might be given. ]

# [3.] And the next thing, I just felt that we wasn't really going into the issues. As someone said, his three things [principles] are rather utilitarian.

Summary: Stuart Russell's lectures could perhaps have given more and gone deeper, but it would be difficult in the time available.
[1.12.50]

--- Ignoring Human Sin [za534]

# AB: In fact, one of the things I thought is: His three principles don't take account of human sin. []
# HS: Not Christian, so we cannot expect much.

[AB: AB dismsisses rather too quickly HS's point that SR does not come from a Christian perspective. Need to take it into account. However, many non-Christians today do take human sin into account, especially those in the Critical side of various fields, especially sociology, information systems, etc. We might have 'friends' in those camps. ]

# AB: Ah, well, I think we can. I think yknow / What he said was, after he presented, he said, "Right, I want you to think this through, things with me. And there is a couple of questions that might come up. One is" - and he put those aside - he said, "Don't talk about AI copying evil men." []

# Well I wasn't thinking of human sin in terms of AI modelling human evil behaviour. But rather the humans using it so that if his three things / where are they? "The machine's only objective is to maximise the realization of human preferences" But what about if the human preferences are harmful? So, for example, human preferences for a hedonistic lifestyle where we do no work / []
# When he was talking about Ai and the future of work, he almost seemed to take it for granted that everyone wants, in the end, not to have to work, and we have all these AI servants doing everything for us. []
# Well, what's the climate and environmental impact of that, and so on. [AB: And also impact on human identity, meaning of life, human health and mental health, etc.] []
# He sort of / a lot of our human preferences are actually harmful. And so / he didn't seem to take that into account. []
# So, I'm sort of thinking that I didn't find his lectures very useful overall [AB: except to stimulate and form a basis for our discussion]. []

# TB: They left certainly questions in the air, didn't they. []
# I suppose that's it. That's where I would probably be more ??battling with his having / to getting to realism and real examples of why some of the claims then actually would work. [] ***
[1.15.35]
# TB: Automating jobs: I don't think I could go as far as he thinks. [...]
# And of course, handling human sin, especially on AI and warfare, that was absolutely something to have included. Because of course it's taking away conscience. []
# And actually, who's then responsible? []
[1.16.00]

Summary: Stuart Russell largely ignored human sin, meaning that his ideas are likely to fail. We need to take human sin into accout.

----- Ending [za535]

# AB: Right, OK. HS, I guess i's quite late where you are, getting on towards midnight. And, you've got a ?? month-old baby that will keep you up in the night. # HS: Tyeah. # AB: Perhaps we ought to draw this to a close.
# AB: JC, you have something to say.
# JC: Pray for you, HS. Get some rest because you are able to. [Ed. JC has experience of babies!]

# AB: Is there anything else anyone would like to say, any point they'd like to raise?

# JC: To thank HS and TB for bringing the technical understanding. HS, this has been a perfect example of your well-read-ness, your wellroundedness, your familiarity with the technical world. Gave us some clarity and gave us a lot of direction for the next few steps. []
# I'd like to really pursue what we discussed today. But we need all of you, TB and me NB, and we need everybody in here so that we can possibly build Dooyeweerd-informed AI systems. []
# That's cool. Thank you for that.
# HS: Thank you.
# JC: What's your baby's name? # HS: Jacob.

# JC: [closed in prayer, including for Jacob]
[1.18.40]

--- Future Discussion [za536]

# AB: Next discussion. I'm away at the beginning of August. So, either we could have a gap in August and go over to September, or we could have it mid-August. What do you think? # TB: I might be on holiday mid-August. # AB: Wait until September? # HS: Yeah [...] best for my situation right now. # AB: By then, SJ and NB might be able to join us as well. []

# HS: By the way, what concrete plan do we plan to do with this discussion? Do we plan to write something? or / # AB: Do we plan to write something? Yes. Eventually. # JC: Yeah, write something I hope, we hope. Create something. # AB: So, le's aim for that. []

# AB: So, JC, would you be willing - you probably don't have time for this - to think up something to lead us on, next time? []
# JC: In the context of the AI bridge, continue to pursue that? # AB: Yes. The AI. So, we're talking about AI, and we're talking about how we take this forward, aiming eventually to write something. []
# And I think it was you who said there is a number of things to take forward. So, in your mind somewhere, there was the idea that we've got a number of things to take forward. Would you like to lead us on taking two or three of those forward, next time? [] ***
# JC: Absolutely. # AB: Great. Thank you very much. # JC: God bless you all. Thank you.
# ACTION JC: To prepare several things to lead us on next time.
[JC left]
[1.21.05]

--- Additional Bits [za537]

# HS: AB, I still have one draft with you, but recently I don't have time to do it. So probably I will do it slowly, and let's see. # AB: What will you do slowly? # HS: We still have a draft [of a joint paper] on Dooyeweerd aspects of AI algorithms. []
# HS: Currently I'm seeing to my child. So I will do it slowly and I will send you the next draft if I finish it,before then. # AB: Good, very good. I was kind of waiting for you, because I know you are very busy, you are more busy than I am and more restricted, so I'm waiting for you to come up with the next draft or something [i.e. to HS's timetable]. # HS: Yeah, for me it may be next month. []
# AB: Well, I'll have email. And I will be able to correspond by email during the [interval] / but I don't think we can have a zoom in a month's time. So you and I can certainly correspond in a month's time. # HS: Yeah, of course. []
# ACTION HS: To send next draft of Aspects of AI algorithms.
# HS: It's already very late, so I / # AB: Thank you very much. Bye bye. God bless you.
[HS left]
[1.22.30]

# AB: Now I'm actually /
[noises of typing and a bit of household conversation]
[1.24.30]


--- References [za538]

Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Magazine 28(4):15 - 26

Basden A. 2008. Philosophical Frameworks for Understanding Information Systems. Hershey, PA, USA: IGI Global (IDEA Group Inc.). ISBN: 978-1-59904-036-3 (hbk), 978-1-59904-038-3 (ebk).

Basden A. 2017/2018. Foundations of Information Systems: Research and Practice. Routledge. ISBN: 978-0-367-870-300 (pbk), 978-1-138-79701-7 (hbk), 978-1-138-75748-3 (ebk)

Howard D, Muntean I (2017) Artificial moral cognition: moral functionalism and autonomous moral agency. In: Thomas M Powers (ed) Philosophy and computing: essays in epistemology, philosophy of mind, logic, and ethics. Springer, pp 121 - 159

Powers TM (2006) Prospects for a Kantian machine. Intell Syst, IEEE 21(4):46 - 51

Serafimova, S. Whose morality? Which rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement. Humanit Soc Sci Commun 7, 119 (2020). https://doi.org/10.1057/s41599-020-00614-8


18 January 2023 added link to folding.