7th RLDG Discussion on AI: ChatGPT

Draft

Held on Zoom: Friday 10 March 2023

Topic: Whereas RLDG discussion 5 was about General Purpose AI, which might 'take over', this one is about AI as a tool, the tool ChatGPT, and the impacts it might have on us, and we on it.

Note: This version is from a first pass at transcribing, and has yet to be checked and some misheard words (indicated by ??) corrected.

[The purpose of these discussions is to contribute to developing a Christian and Reformational perspective on AI. ]

Present: TB, SJ, HSS, SMG, NB, AB (host), JC later
Apologies: NB, CA

-- Contents

-- About This Document

[This discussion was hosted by AB, who typed notes as it progressed, recorded it and then converted the notes into an annotated transcript (18 March 2023). AB adopted two roles in doing this, (a) of editor ("Ed."), e.g. giving links to other material, adding headings, adding "***" to important points, adding notes in square brackets that explain things; (b) of contributor to the discussion ("AB:"), inserting responses subsequent to the discussion, most of which he might have contributed had he not been chairing and taking notes, or maybe to explain something or to add references and links to help readers. The purpose of such additions is to take discussion further. He might even criticise himself for what was said on the day!

In the transcript I have tried to put down the exact words spoken, but have removed most interjection words like "kind of", "right", "actually", and some sentences that were started but then broken off, except when they seem to add to or enrich the meaning of what is being said. Zoom chats: there were none of significance.


Examples of conversation with ChatGPT / Bing are mainly in boxes like this.
Sources are not included. They may be found in the full discussion.

]

Contents

----- Introductions [za700]

# TB opened in prayer

# A new participant, SM, was welcomed.

[recording started]

# AB: We are expecting a couple of others, from across the pond.
# The topic today is ChatGPT, partly because it has burst on the scene since our last discussion. And there was a vote for that as a topic. (There were going to be a couple of other topics, on the use and the technology of AI.)
# Jordan sent an email to me [asking ChatGPT to describe itself in terms of Dooyeweerd's aspects] that arrived this morning, and I sent it round.
# Thank you all for coming. What I'll do is take notes. It's also recorded. I'm recording [on Zoom] and also on another device, because once Zoom did not send me a recording file, so I cannot rely on that.

# Let /
# Round of introductions because SM, you have not been at these discussions before. And I don't know whether you have met TB before. # TB: I think we did one time at LCF. # AB: I think HS has not met SM before. # HS: No.
# So, let us have a quick round of introductions. HS is from Indonesia, where it's about 9 or 10 o'clock at night now. # HS: 9 pm. # AB: SM was in management at Microsoft, and is now doing various things.

# AB: I will at least introduce myself as having had some experience of AI in the 1980s and keeping a watching interest ever since and trying to understand it philosophically.
# Let me see if I can turn this [my face] back on; I turned it off because it made ... distorted. TB, would you like to explain next your interest in AI?

# TB: I'm an engineer by trade as well as an academic. My main interest is in RF and communications. But of course machine learning (ML) AI comes into what we do.
# And of course, as an academic, clearly ChatGPT is concerning us in the community with regard to assessments and how students do assessments, particularly when they are online, and how we can assess that.
# But of course it opens up a whole can of worms about what is engineering doing to our world and how do we as engineers relate to that?

# HS: I am faculty at a university in Indonesia. I work on a lot of research topics in AI, especially probabilistic models and deep learning.
# I know the ??? of ChatGPT is based on the Transformer architecture. So it's actually ??ten??-year-old technology.
# My background is in physics so I am also interested in quantum AI, how we use quantum computation to facilitate AI computation that previously was impossible to do. For example, something ??? takes like a few hundred years to train, but by using a quantum computer maybe we can speed it up to a few hours.
# AB: I'd like to pick your brains on quantum computing, HS, because I don't understand it.

# SM: Hi SJ, nice to see you again; TB and I have met once; HS, I sent you a handshake over the chat. I am an engineer, and as AB says I was involved in business. Family tragedy [redacted] so I left work [redacted], so I find myself in a very odd situation, in that whereas all my contemporaries are retiring and winding down their activities, I find myself quite aggressively pursuing, creating a startup to try to bring some good into the world.
# So during that interregnum, I spent some time with AB writing a PhD, obviously a Dooyeweerdian aspectual conception of trust (Hi NB [NB joined]). So, one of the first times I saw AB again, when I renewed my connection, NB was in the room; I remember talking to NB about Shalom.
# Afterwards I went and did a Masters of Law at Lancaster University, specifically because I wanted to target a legal audience - I thought by writing papers and doing research - but it turns out that I don't think that was God's plan for me.
# God's plan for me was that I use my engineering and particularly technology-policy background, and this work I did on trust and on law, to take advantage of the law, to provide engineering-based solutions for some of the malfeasance we get from companies, and to try to help drive stopping plastic pollution and reducing carbon and helping with some of the global initiatives /
# I don't think you have seen it but there was a global treaty signed a couple of days ago on updating the United Nations Convention on the Law of the Sea; it puts the oceans into international ownership / and things like that, basically.
# So, that's where I'm up to.

# SJ: Sorry I missed some zoom calls. I met SM? in AB's office years ago. I am senior lecturer in Digital Enterprise at Teesside University with ???.
# My interest in ChatGPT /
# I feel something strange about myself because, since the launch of this ChatGPT, for some reason, I am not impressed and I am not worried. At the same time I think something is wrong with me, because in every discussion everyone is worried; it's too black-and-white, a dystopian view. And I don't feel I belong to either of those two ??carousies-like??camps like?? - just another innovation. # AB: Well, we can discuss that. [] ***

[AB. To discuss: Why SJ is not worried by ChatGPT. ]

# NB: (Apology for being tardy; daylight saving time; times got screwed up in my head.) I have known a number of you for quite a while; I did my PhD under AB, almost a decade ago now. I've enjoyed these conversations about AI and the insights Dooyeweerd can provide for us as we think about a society that wants AI in it.
[10.20]

# AB: Thank you very much.

Contents

----- Discussion on ChatGPT [za701]

# AB: What I suggest / What we've got is the technology of ChatGPT and the use of it, and I suggest that we spend a short time on the technology, maybe the first quarter of an hour, and then talk more broadly on the use of it.
# AB: NB, you have to leave in about 40 minutes' time; are you more interested in the technology side or the use side? # NB: I'm not sure I understand the distinction. # AB: Well, the model, Transformer use and things like that; or things like what SJ said, that he's not impressed, and whether they are going to allow students to write essays with ChatGPT and so on. Which do you think /

Contents

--- Three Topics: Technology, Use and the Responsibility of Designers [za702]


# NB: I think my interest is in how the technology and the people who create it get to shape the use of it. Tools have been made that tempt us to use them in inappropriate ways - like students writing essays with ChatGPT and then claiming they are their own. My interest is in what part of the responsibility for that belongs to the designers. []
# That's why I say I'm not sure about the distinction between the technology and the use; perhaps my interest is in the relationship between those two [technology and use]. []
# AB: Well, that's great; that's a third thing that often gets overlooked: how it is developed.
[12.35]

# AB: What I suggest then is that we have a very quick, maybe ten minutes, on the technology, just to lay the groundwork.
[Ed. See later. ]

Contents

----- The Technology and Model by which ChatGPT Works [za703]

# AB: I found a paper explaining the technology of ChatGPT and the model, and I got a quarter of the way into that paper but have not done any more because I have not had time. But, HS, you know / could you explain / do you know the model of ChatGPT? It's Transformer, but do you know the model?
[13.20]

Contents

--- ChatGPT Based on Transformer [za704]

# HS: ChatGPT is technology that builds on top of Transformer. Transformer is the backbone that they modified.
# But what is special about ChatGPT is not ??only??on the?? what Transformer can do, but ??the katt?? ??geechat?? ??dictator??the data?? the company has profiled or dealt with to learn from. So it learns from a huge lot of conversation. Then, not only that, after that they hire a lot of people who ??filleted?? the language model, so the language model will learn from its mistakes - OK, you cannot say this, it is ??too moral??immoral?? or inappropriate or something. From human feedback the model itself keeps learning from its mistakes and keeps improving from that. Something like that.
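[Ed. A toy sketch of the human-feedback fine-tuning HS describes, for readers who want it concrete. This is an assumption-level illustration in Python, far simpler than the real RLHF pipeline: the 'model' here is just a score per candidate reply, nudged towards replies that human raters approve.

import random

# Toy stand-in for learning from human feedback: candidate replies get
# scores; ratings from a (hard-coded) human rater push them up or down.
candidate_replies = ["polite answer", "rude answer", "evasive answer"]
scores = {reply: 0.0 for reply in candidate_replies}

def human_rating(reply):
    # Stand-in for the hired raters HS mentions: penalise the rude reply.
    return -1.0 if reply == "rude answer" else 1.0

for _ in range(100):
    reply = random.choice(candidate_replies)
    # Move the reply's score towards the human rating.
    scores[reply] += 0.1 * (human_rating(reply) - scores[reply])

print(max(scores, key=scores.get))   # after feedback, a non-rude reply wins

]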

# AB: Do you know the AI models that are in there? How does Transformer work?
# HS: Transformer: The main concept is based on attention. Attention is, if we give it one sentence, how [each word] relates to another. For example, "I am in the market". By learning a lot of sentences, varying a lot, the model will learn that / OK, this word "I" is related to "am", the word "am" is related to "the market". So, the ??great?? is a lot of matrices consisting of how important this word is ??? to this word. And from all these numbers we can create a network of relations between one word and one sentence and one phrase.
# And from this network of sentences, this network of words, it will learn what is the most probable sentence that comes after this.
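[Ed. A minimal sketch, in Python with numpy, of scaled dot-product attention, the mechanism HS is describing. This is the textbook formulation; real Transformers stack many such layers with learned projection matrices, so treat it as an illustration only.

import numpy as np

def softmax(x):
    # Numerically stable softmax along the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Compare each word's query with every word's key; the resulting
    # weights (HS's "matrices of how important this word is to that
    # word") are then used to mix the value vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# Five toy 'word' vectors standing in for "I am in the market".
x = np.random.rand(5, 8)
out = attention(x, x, x)
print(out.shape)   # (5, 8): each word re-expressed via its relations to the others

]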
# AB: OK.
[16.15]
# So it's really just linking together probability of sentences?

# In the paper I sent round there is a diagram that says that the previous version of ChatGPT had collected about 2 billion, the current version has collected 160 billion pieces of data to learn from. And so it must have a massive network.

# AB: Now, does it actually store this huge network, learned from 160 billion pieces of information?
# HS: No, not stored, because it would be computationally very expensive. So only the most probable ones. It has many matrices. Its matrices consist of relationships between words. And not all words are related. For example, "table" and "crocodile" maybe are not related, so it would not create matrices like that.
# I don't know how many parameters, but I think they have billions of parameters. So they store billions of values.
# Its parameters will learn the relationship between one word and another word.
# From all these parameters we can do calculations, multiplying words and parameters - a very complex calculation. Then we have probability ??? and its probability ??? will tell what is the max / highest probable words or highest probable sentences.
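[Ed. The final step HS describes - turning the network's scores into a probability for every word in the vocabulary and taking the most probable - can be sketched like this (toy numbers, not real parameters):

import numpy as np

vocab = ["table", "market", "crocodile", "am"]
logits = np.array([0.2, 2.5, -1.0, 0.3])        # the network's raw scores

probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary
print(vocab[int(np.argmax(probs))], probs.max())  # "market": the most probable next word

]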

Contents

--- Training Data Contained in the Trained Network [za705]

# NB: I think that there is a distinction between the data that it trains on and the network that results from that training, that then gets used to make the predictions of the next word.
# You have trained the network on this huge corpus of data and then, after you have trained the network, to what?? extent is the data in the network? (I'm not sure that's even a sensible question.)
# HS: Not identical even, not identical but /
# NB: An analogy I use - and I'm curious, HS, if you think this is a good analogy - is when I learned how to type: my hands (just because I practised typing in English) / some combinations come up more commonly when you are typing, and my fingers have learned those. The learning of all the typing I have done in the past is sitting here in my fingers somehow. I dunno how it's there anymore, but the muscle memory of, oh, after you type a 't' quite often an 'h' comes next, so that combination is built into my hands when I type. I've learned that. Now, which words I typed in the past that brought me to that maybe don't matter much any more.
# I think it might be the same with ChatGPT: it trained on billions and billions of gigabytes of data and built up this model. And now the model is what matters, and the data that it trained on (that matters too, of course) is not in the present anymore, it's in the past.
[20.00]
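[Ed. NB's typing analogy can be made concrete: after 'training', only the letter-pair counts remain; the sentences themselves are gone, just as the corpus is 'in' ChatGPT only as learned weights. A toy illustration in Python:

from collections import Counter

corpus = "the theory that the thing thought"
pairs = Counter(zip(corpus, corpus[1:]))   # the 'model' is just these counts

print(pairs[("t", "h")])   # 6: the fingers 'expect' an 'h' after a 't'
# The original sentences are not recoverable from the counts alone.

]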

Contents

--- The Mundane Operation of ChatGPT [za706]

# SM: Can I interject? I think that it's possibly a mistake to think that ChatGPT is where we should focus.
# ChatGPT was released really as a lab experiment. And it is that; it's basically an experiment - so OpenAI, who generated it, and Microsoft, who introduced it. So it's still a generative pre-trained Transformer. But it's built into Bing, and I think that sometime early next week they are going to release what they are calling ChatGPT-4.
# And that is multimodal. So you will be able to ask a question in English / let's say you can ask a question in German and get an answer in Italian. Or vice versa.

# And the way it is inferring and assimilating information, it's avoiding the need to have this concept we had, of having to train on data.
# So, NB, I guess what you are saying really is that there is a structure in knowledge. So that, as you say, your brain learns / we don't know how it's doing it /
# Maybe a good way of explaining it is, if you look at Morse code, it appears to be random dots and dashes, but the actual sequence is based on the frequency of the letters. [Ed. Morse designed the code so that the most frequent letters used fewer dots and dashes.] So if you learn Morse code, you instinctively learn the frequency of the letters; you automatically know it and your brain decodes it quite quickly. So there is information about the frequency of the letters in the words that we use, in paragraphs and sentences. And Morse extracted that information into his way of encoding it.
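[Ed. A small illustration of SM's Morse point (codes are from the standard Morse table; the letter frequencies are approximate English values): the more frequent the letter, the shorter the code, so frequency information is baked into the encoding itself.

# Letter: (approx. frequency in English text %, Morse code)
morse = {
    "e": (12.7, "."),
    "t": (9.1, "-"),
    "a": (8.2, ".-"),
    "n": (6.7, "-."),
    "q": (0.1, "--.-"),
    "z": (0.07, "--.."),
}

for letter, (freq, code) in sorted(morse.items(), key=lambda kv: -kv[1][0]):
    print(f"{letter}: {freq:5.2f}%  {code}")
# Output shows code length growing as letter frequency falls.

]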
# These tools are doing the same thing. They are / There is information in the stuff that is on the web, and it's figuring out what the structure of that information is and then using that to replay answers to us. Right?

# And the reason it is powerful, the reason that it seems to be like magic, is because the expression of those answers is made in terms of the structure of the information that we put there. And so what we recognise is the 'smart' that we put there when we made this stuff. Then "Wow! How did it infer that?"
# AB: Could you explain, "the structure of the information that we put there"? Can you give an example?
# I can give you an example that just came to my mind. I can remember one day when I was a teenager, sitting in an English lesson, being taught how to do comprehension questions for O-levels. And so there would be questions like "Why do you think the boy went swimming in the lake?" And the basic thing that you were taught at that age was that the way to answer that question is to turn the question around and say "I think the boy went swimming in the lake because ..." Then, if you read the passage and the reason is in there, you write "I think the boy went swimming in the lake because the girl was waving her arm to him." And that's the answer to that question.
[24.01]
# So, once you know that, you sit in class and think, "Well, comprehension is just daft. It's just rejigging a sentence you've been given to give the answer." And, yeah, that's exactly right. And you get an A in English just because you can do that. And that is all comprehension is, at O-level English level. And so it's that structure in the sentence, AB: you can then ask it a question, and it reformulates the order of the words. But you can still map it back to the original information, right, connectives etc.
# AB: But that's not what happened with JC asking "Explain ChatGPT in terms of Dooyeweerd's aspects." # SM: It is; it is exactly what happened. # AB: There's much more than that.
# SM: There is more than the stupid comprehension example I gave you. But you can see it in there. OK, in that example, what is immediately obvious is that it based that answer on dooy.info. That answer reads like you wrote it. [AB: Having read its treatment of aspects, I would hope I write better than that!]

[AB Comment: What this means is that the 'knowledge' that is in ChatGPT is not knowledge of the material but knowledge of how to answer various types of question. In that sense, it is little different from the very early (1960s) Eliza program, which did exactly that. ChatGPT will of course be much more sophisticated and know about many more questions, but seems about the same.]

# AB: OK.

Contents

--- Understanding the Meanings Signified by Text [za707]

# NB: Maybe this gets into a larger philosophical question that we may not be equipped for: "What is going on in my brain that is different from that?" I mean, when I say that I understand something, it is because I have read a lot of other people talking about it, and combined those ideas with other ideas I have had, ideas that I bring from other places in the past. Is there more going on in my brain than is going on in ChatGPT? # SM: Yes. # NB: But I don't know that I could point to what that is.

# SM: Roger Penrose is a mathematician; I guess you guys are familiar with him. He worked with Stephen Hawking; he is probably one of the most brilliant mathematicians alive. He had this thing where he was walking along, talking to a friend, and crossing a zebra crossing, just chatting, on the road. And as he crossed the zebra crossing, he felt this real sense of joy. And so, after his friend had gone, he remembered this and tried to understand why he felt this sense of joy. He could not explain it. So he sat down and asked "What was I thinking?" and tried to go through it. And then he realised that the reason he felt this sense of joy as he was crossing the road was that an idea had come to him about a problem he was working on. So he tried to remember what the idea was, and then he remembered the idea he had as he was walking across the road with his friend, and he wrote it down. And that turned out to be a huge breakthrough in an area of maths that he had been working on. It wasn't perfect, he had to work on it for another two years, but the seed of the idea came in that moment. And he experienced joy from his subconscious in solving that problem.
# And in his book he talked about this as being the central thing around human intelligence that computers will never, never be able to assimilate. They are not conscious; we don't understand what consciousness is, scientifically. We certainly have no idea how to make a computer conscious, and we do not understand what our consciousness is, or our subconsciousness is, and how it works. So you [the human] do a huge amount more.

[AB: NB talks about combining pieces of information that have been read: lingual functioning. SM points to some human functioning that we could, perhaps, see as in a different aspect. Is it just that, because of aspectual irreducibility, what is meaningful in later aspects is not understandable when taking the stance of an earlier aspect?

We function in all aspects and, because of that, we have an intuitive grasp of the meaningfulness of each aspect. Because of functioning in the lingual aspect, we intuitively understand the semantics, the signified meaning, of a word or phrase. ChatGPT does not. All it has is the lingual processing (which has been built into it by the developers). As remarked above, maybe something of the semantics or pragmatics reflects onto the syntax or structure, so ChatGPT could be seen to respond 'intelligently' to semantics.
]

Contents

--- Information Plastic [za708]

# An example of this is that we invented plastic, and the world went, "Hey, great; look how useful plastic is. Look at all these things; we are not going to need glass any more." And now we have a massive plastic pollution, and we are trying to figure out how to deal with plastic pollution, and extract all the plastic that we have deposited in the world.
# In my view, with some of these AI tools, what we have done is invent the equivalent of information plastic, a new form of pollution, which is going to occupy the Web, and human beings' central activity is going to be how to deal with that pollution and rescue truth and fact from it all. It will be ubiquitous, unverifiable and hard to dispute, and at the same time nonsense. ***
[28.45]
# AB: Right. That's great.

Contents

--- New Questions about the Technology of ChatGPT [za709]

[AB comment: In chairing, I had meant to hear about the technology and then open up discussion on ways in which the technology, the way ChatGPT works, is limited by the way it has been designed. However, I did not do so, so I would like to open that up here. For example,

]

Contents

----- Responsibility in Designers [za710]

# AB: Time to bring in NB, I think.
# AB: NB, your concern was / We could say a lot more about the technology and how ChatGPT works.
# But you were, NB / your interest was responsibility that belongs to the designers. And things like this.

--- Responsibility: Both Designers and Users [za711]

# NB: Responsibility, in the end, has to be split between the designers and the users. SM's analogy to plastic: who is responsible for the plastic in the ocean - the people who made the plastic, the people who sold the plastic, the people who bought the plastic and threw it away? # There is this web of responsibility for / ***
[29.30]
# Which makes it a difficult problem to solve.
# Similarly, with AI: ChatGPT, if you try to get it to swear at you - which I'll admit I've done, because that's the way I play with things! - it refuses.
# And they have intentionally built guard-rails in. The word they use is "safety", which is a kind of shallow word: preventing harmful responses. I guess that is safety, but responsibility is really what they are getting at. They want it to not produce certain responses.
# And so the question of who is responsible - assuming we are only willing for humans to be responsible for things - then we have to be able to point to either individual humans or collections of humans who are responsible for both the benefits and the detriments that accumulate.
[30.40]

# NB: An analogy I have been toying with in my head is whether it is like owning a pet. My dog attacks someone, bites a kid, the dog is responsible sort-of, but not really; I am responsible. If I have a dog that I know is vicious, and let it off the leash, and it attacks someone, that's on me; I am not allowed to do that.
# Or the analogy is more like parenting. I have raised my kids as best as I can. I have screwed up in a lot of ways but I am proud of my kids. And now I send them out into the world, and now they behave - am I responsible for their actions? Well, no, they are their own people. I was responsible for being a good parent, and I am three quarters done with that now.
# I don't know if any of those analogies are helpful in the case of AI, where you build up this tool and then send it out into the world and it takes actions.
# Analogies are the only approach I have to think this through.

[AB: Such analogies help us in providing specific challenges that we must think about. Can we think of it with aspects? In the case of the dog, the dog is functioning psychically as a dog would, in protecting its owner. The problem is not so much the bite as in later aspects, especially the juridical, where the law gives rules about dog ownership. In the case of of-age kids, they are their own agents in all aspects. In the case of a computer, or AI system, it is functioning lingually as proxy-subject, but it is the responsibility of those who gave it the lingual functionality. ]

Contents

--- Responsibility and Accountability [za712]

# SM: To add to what you are saying. To split the concepts of responsibility and accountability.
# And then think through your parental example, yeah you are responsible but you may not be accountable. But you are partly at least responsible.
# So there are layers and layers in that.
# So you've got ??these responsible AI kindof write?? programs. [on h1n??]

# AB: What do you mean by the difference between accountable and responsible?
# SM: That is quite a long concept. # AB: OK. # SM: Basically, if you are accountable for something, you have paid a price for it, or you are held accountable for it. [AB: price?] If you are responsible for something then basically you are not necessarily going to be held to account; it is a different standard.
# SM: When the girls' boyfriends first asked to take them on a date or whatever, I would sit down and explain to them the difference between responsibility and accountability, and explain that they would be accountable. [Parents responsible but not necessarily accountable.]
# AB: Yeah, so, if you like, responsibility is a kindof ongo /
# Accountability looks to the past, as to what happened, and responsibility looks to the present and future, as to what might happen; would that be right? [i.e. a temporal difference?]
# SM: Responsibility is task / (I am reading this.) "Responsibility is task-oriented. Every person in a task may be responsible for a given task that is required to complete a massive project. Accountability is what happens after a situation has occurred. It shows how you respond and take ownership of results." [AB: Sounds like that temporal difference.]

[AB: What about responsibility before God? That does not seem so task-oriented. Maybe responsibility before God turns into accountability only once all are before the Judgment Seat? ]

# So, when Steve Jobs made somebody Vice President at Apple, very famously every single person that was appointed Vice President was called into his office and given this accountability ??view??his responsibility talk??. He basically said, "Before you were a Vice President in Apple, you could explain why you did not achieve results, because of market circumstances and everything like that. But once you are a Vice President in Apple, there are no excuses for what happened. You are accountable. You are supposed to respond to those features to achieve the result." It is a much more action-oriented concept. [AB: I've long thought Steve Jobs treated himself as though he were God, demanding absolute obedience, sacrifice to himself and even a kind of worship from others!]
# AB: OK.

[Ed. Fowler's Modern English Usage links accountability to a person or body to whom one is accountable. Responsibility seems to be more general. ]

[34.34]
# AB: NB, does that help at all, the splitting up of responsibility [from accountability]?
# And maybe a question for NB: you mentioned several areas of responsibility, let's say for plastics: makers, [sellers,] users, people who throw it away, etc. etc. Is it useful to try and identify the different types of responsibility - or perhaps accountability if you want?

# NB: Yeah. I wonder, SM, would you say that in an ideal world, responsibility and accountability would merge, that, ultimately, if someone is responsible to someone, they ought to be accountable for it as well? And it is a failure in the system when responsibility is not accompanied by accountability? Or is that not the way you would view it?
# SM: Good question. I think that accountability is a particular kind, a type of responsibility. So, it is basically / let me think about it. I think it is a particular kind. When you get to accountable, you have already gone down the responsible path. But not all responsibility connotes accountability. [AB: Does "already gone" bring in the temporal aspect again?]

# NB: You might think you are responsible but I often come around to the world's words, "praiseworthy" or "blameworthy". Which is, I think, a very shallow form of responsibility. Or perhaps I'm getting at the word "accountability" that you are talking about.
# If I do something and goodness results from it, I want to get the credit. And if I do something and badness results from it, I suppose that I had better get the blame. That is a form of responsibility that is lighter-weight or shallower or easier for us to get our hands around. But the word "responsibility" I think has more depth than that.
# In terms of literally, the ability to respond, I guess I would go back to Dooyeweerd's aspects and say, "Well, if I am subject in some aspect, [for example] I am subject to the rules for aesthetics, then that means I am able to respond to those rules; I am not just an object in those rules." And so that's how I try to define responsibility. ***
# Which is richer than just who gets the blame and who gets the credit.

# SM: So, "responsibility could be shared while accountability cannot. Being accountable means being accountable for your actions. While being responsible means carrying out a task to completion."

# SM: So, "responsible AI" as it is referred to, is not enough. What we really need is acocuntable AI, where it is being held accountable for harm.

# And there are clear answers for who is going to do that and what / Legally, the harm is a tort. A tort is an action. And so, tort law: if somebody / if an AI developer / if Tesla's car has got a fault and it starts driving into other people's cars, then Tesla will be held to account. But there will be many people who are responsible for producing that software; they will not necessarily be held to account.
# [AB: Does "responsible for producing that software" bring in yet another meaning of "responsible", namely those who acted to do something, and who 'caused' the result of new software?]

# AB: OK. That is the current law. But is that fundamental, or is it the way society has decided to take law? (Maybe the last thousand years of society.) Why cannot accountability be multiple?
# SM: Accountability is an individual thing; look: that's not to say lots of people cannot be accountable for something. A team can be accountable for the same thing. But they are different meanings.
# And I think that the AI industry is ??checking us?? when talking about "responsible AI" when what they actually should be talking about is "accountable AI". And that /

# An example: going back to 2002-2003, when we first started talking in Microsoft about autonomous vehicles, one of the things we spent a lot of time thinking about was "Where is the insurance accountability going to lie?"
# And it seemed pretty obvious then that it could not lie in the individual who owns the car; that was not sufficient.
# So / and if you look at what has happened now with autonomous vehicles, the model is going to be that you buy a car, the car seller has an insurance policy for aspects of the car that it is responsible for.
[40.00]
# And it will be held accountable for those. And that will relieve the car owner from accountability for those mistakes. But the car owner will also have to have insurance so that they can be held accountable for the areas of driving and possessing an autonomous vehicle that they are responsible for. An example: imagine a car that is going along at 30 miles an hour and it skids off the road and kills somebody. If the reason it skids off the road and kills somebody is a problem with its vision system and its autonomy (the way it calculated what it should do), that would fall to Tesla. If the reason it went off the road is that the owner had not replaced the tyres and the tyre tread had gone really, really low, then that [accountability] obviously goes to the owner of the vehicle. So you have this kind of fracturing [of accountability?] that is going to come.

[AB: Accountability seems to depend on why the problem occurred, which may be understood via aspectual 'causality', different in each aspect, and some simultaneously. ]

# And we will have that for sure in the law of this land, of England, and I think it is also true of America, NB: you have the concept of people being accountable for decisions that they make. And there is a standard of decision-making. And human beings ... in common law the standard of decision-making requires a human being that can be held accountable for the decision.
[41.30]

[Summary: Responsibility is general and about how we behave or decide (in the widest sense) as we face the future, within the possibilities open to us. Accountability is for specific repercussions that have already happened because of those behaviours/decisions.

Does this mean that what should guide decisions is positive ideas about responsibility, rather than fear of accountability? For AI this could mean designers should be aware of their positive responsibilities in all aspects. And is this better cast in a positive, motivating, dignifying responsibility rather than fear? ] ***

[Contribution: Distinguishing accountability from responsibility in AI, to guide designers. ]

# NB: The difficulty we are going to run into is one of epistemology.
# In the case of the Tesla, if the vision system is at fault or the tyres are bald, it is obvious who is at fault. [But in general] I think that figuring out who that is is going to be so non-trivial that everyone in society is just going to put their hands up and say "It wasn't me; I didn't do it." And it will be sort of plausible, in that the system will take actions that do not have an easily-identifiable owner. ***
# I think that part of the challenge we face as part of society is to not accept that. And say "Well, actions have owners even when it's not clear who that owner is."
[42.30]

[AB: I wonder! I agree with NB's statement about owners not being easily identified. But is there a danger in NB's second statement, when expressed as "actions have owners"? Sometimes there are multiple people and other things that caused the problematic repercussion. If we believe in multi-aspectual functioning, then some might be accountable (and were responsible) in some aspects, others in other aspects. Does that statement imply, connote or impel towards finding a single 'owner' and ignoring the others in other aspects? Is not the Anglo-Saxon system of law already too individualistic? How do we get over that? ]

Contents

--- Tracing Accountability [za713]

# SM: So you bring up an area of AI that / one of the societal difficulties that we have with AI at the moment is that AI is a black box, so that we actually cannot understand how it reaches the decisions that it reaches. And it's not able to explain to us. ***
# And you have these crazy results (I guess you have seen these things) right where you could / So, facial recognition, OK, or image recognition. You show it a human being; it says it's a human being; you put a yellow sticker here on their cheek and a blue one there and it says, "Oh, it's a truck".
# And the developers do not understand where that inference is coming from.

[AB: Need knowledge of context, and what is meaningful in each context. Distinguishing meaningfulness is aided by reference to aspects, since they are modalities of meaning. ]

Contents

--- Explainable AI [za714]

# SM: So now you have (and this is very philosophical) you now have a massive effort to try and build explainable AI.
# But the explainable AI: the way that a lot of computer scientists are approaching it is they are trying to find a way to explain what currently is inexplicable about the model. So they will come up with human reasons, right. [Ed. Please explain and expand that, with examples. Specifically, what types of "human reasons"? Where are they looking to find ways to explain, i.e. what are they assuming or presupposing about where they might find solutions? ]
# Another group of scientists, who I think are on better ground, is to build an AI model, which is fundamentally intrinsically able to express the reasons for making a decision, and then to express those. [Ed. How are they trying to do that? What are the differences in their approach?]
# And I think the latter will be much more useful than the former. [Ed. In what ways useful? Why do you believe that? ]

[Ed. Hope to receive expansions above and place here.]

# AB: That is what I was doing in the 1980s, building / Expert Systems were by definition explainable, because we did not train them from lots and lots of data; we built them from considered human knowledge and wisdom. But that seems to have gone by the board.
# SM: Not at all. What you were doing then was basically you were working with a directed knowledge graph. So Expert Systems were /
# AB: Exactly. We humans put that directed knowledge graph in, by thinking about it and trying to think widely and wisely.
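[Ed. For readers unfamiliar with Expert Systems, a minimal sketch in Python (toy rules, not a real 1980s system) of why they were explainable by construction: the knowledge is human-authored rules, so the system can simply report which rules fired.

# Each rule: if all the condition facts hold, conclude the conclusion.
rules = [
    ({"fever", "cough"}, "flu"),
    ({"flu", "elderly"}, "refer_to_doctor"),
]

def infer(facts):
    trace = []
    changed = True
    while changed:                      # forward-chain until nothing new
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"fever", "cough", "elderly"})
print(trace)   # the explanation IS the list of fired rules

]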

[AB: The point is not that it was a directed knowledge graph, i.e. that particular lingual structure or syntax, but that the meanings to be signified were retained in the ES, and that type of structure encouraged and facilitated the inclusion of those meanings (or "knowledge" as we called it then). Other structures were also used, such as rules or logic. ]

[ AB: Another point is that it was a manual process to encapsulate that knowledge. This involved multiple aspects of the functioning of the "knowledge engineer", especially the analytical functioning of separating out concepts clearly and the formative functioning of linking them together so they would achieve a working knowledge base. And the social functioning of relating to domain experts and taking account of stakeholders, and also the socially-constructed nature of the verbalised knowledge that was offered to us. The economic functioning of parsimony, the aesthetic functioning of trying to keep the expressed knowledge base harmonious and easy to understand, the juridical functioning of accuracy and not cutting corners, and so on. ]

# SM: But the model these systems are using is not that. It is a multi-layer neural network, and they destroy / So the whole information / When you were building Expert Systems, you were preserving knowledge and information. But neural networks, by definition, destroy information; and really what we have got is convolutional neural networks. They destroy information, but at the same time they are able to preserve distinct parts of information so that they can still reason on them. So that's what you get. [That is] the reason why it's not explainable. The fundamental reason is that the information that used to be used to explain is no longer contained in the model, unless it was in one of the convolutions that they use.
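[Ed. A tiny illustration of SM's point that neural networks 'destroy' information - an assumption-level toy, using max-pooling as the example of a lossy step:

import numpy as np

x = np.array([0.1, 0.9, 0.3, 0.9])
pooled = x.max()   # 0.9: only the strongest signal survives

# Many different inputs pool to the same 0.9, so the original input -
# the distinctions an explanation would need - cannot be reconstructed
# from the value the network keeps.
print(pooled)

]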

[AB: Exactly. SM earlier said "explain what currently is inexplicable". Can explicability and inexplicability be explained by reference to aspects in which the ES and NN 'engines' are designed to function?

That is, whatever distinctions there were in the original training data are lost (SM: "destroyed"). (It is in that way that, as mentioned earlier, the information from the billions of training data is 'in' the trained network: as stirred into the soup of neuronal link-strengths.) Note that even the data on which the NN is trained comprises a merging of individual pieces of meaningfulness from several aspects, i.e. they already have merged-combined several distinct concepts. It is in selecting what kinds of data are to be used in training that some analytical-aspect separation of the concepts merged in the data is partly achieved.

To explain requires distinct concepts (lingual functioning depends on analytical functioning). Note that, because the analytical aspect comes after the psychical, one cannot derive distinct concepts from neuronal signals or activations. The psychical activations and the distinct concepts are two different aspects of the functioning of the human being. ]

[Contribution: Can such an aspectual view help us understand and then maybe obtain explainable AI? ]

[45.30]

Contents

--- Back to Responsibility of Designers etc. [za715]

# AB: Time is getting on, and NB will have to leave in 5 minutes or so. We have not heard from HS or SJ [nor TB] about what is being discussed. Would you like to / who is going to have to leave first? SJ, do you have anything to share on what you have been hearing?

# SJ: I have to leave. I would like to leave by giving a question to our group.
# I can sense there is all this politics involved in the progress of science. And it is not clear to me why responsibility and accountability is assigned to AI in the first place, as a concept which was / ??introduced??it-was-reduced?? like responsible AI, accountable AI, as if by default, without question, AI is sentient, so let's talk about the responsibility and accountability of AI. Rather than emphasising more responsibility by design, accountability by design.
# When you try to search for academic papers, you see less emphasis on responsibility by design, accountability by design.
# AB: What is responsibility by design? # SJ: The focus is on the human, the designer, the stakeholders.

Contents

--- Sentience and Responsibility [za716]

# SJ: So when we talk about responsible AI, accountable AI, there is the assumption that AI is sentient. No question on that; now they talk about the responsibility, accountability of it.
# NB: ??That's not??That's sortof?? [Ed. whether that is +ve or -ve is important!] the sense when I hear that phrase. When I hear people talking about responsible AI, I have always understood that to mean making sure that the humans involved are accepting the responsibility for what they have created, rather than suggesting that the AI itself has some sort of responsibility. ***
# Am I hearing that differently than you?
# SJ: No, I know what you mean, NB. I can understand that there is a human involved. But i am trying to / I'm not sure. I don't know where the notion started, and why it started in the first place.
# I understood the blackbox bit. But in terms of the terminology and use of the term, because we / sometimes with terminology, we can change the direction of human thinking, with the power of language.

# SM: I just shared a screen with you guys. [screenshot]
# As you were speaking, I typed in your question. So, the question as I captured it was "I can sense there is always politics involved in the progress of science, and it is not clear to me why responsibility and accountability is assigned to AI in the first place. As a concept introduced as responsible AI or accountable AI. Does this suggest we cannot question whether AI is sentient or not?"
# Maybe that is not the question you are asking, but that is what I put in. And there is the answer I got.


--Q: I can sense there is always politics involved in progress of science and its not clear to me why responsibility and accountability is assigned to ai in the first place as a concept introduced as "responsbile AI" or "accountable AI". Does this suggest we can't question whether AI is sentient or not?

ChatGPT/Bing answer:

Hi, this is Bing. I can help you with your question.😊

Responsible AI is a term that refers to a **governance framework** that aims to address the **ethical and legal challenges** of using artificial intelligence (AI) in various domains. It can include guidelines on how to collect and use data, how to evaluate and deploy models, and how to monitor and mitigate any negative outcomes of AI. It also involves defining who is **accountable** for those outcomes.

The concept of responsible AI does not imply that AI is **sentient** or has a mind of its own. It rather acknowledges that AI is a powerful tool that can have significant impacts on human lives and society, and therefore requires careful design, development and oversight.

Does this answer your question?"

# Which is pretty much what NB said, right?
# "It can include guidelines on how to collect data and use data."
# So, responsible AI at the moment is basically trying to make sure that really really is constrained / trying to make sure that obvious biases in datasets are [not] allowed to be carried through. So, yknow, the classic ones are race-related and gender-related, OK. So if women apply for fewer loans, or get fewer loans in America, then basically the information about women asking for loans should not be treated as if it is the same quality as / yeah.
# And then [see box]
# So, what is interesting about this is /
[50.45]

# AB: Sorry, SM, can I stop you there. I have actually screenshotted that and I'll put it up for people.
# But could we go to HS and see if there is anything that he wants to mention, so that NB can hear what HS says before he has to leave. Thanks.

Contents

--- Harm and Good from ChatGPT [za717]

# HS: I am not familiar with accountable AI.
# Because it is more on / How we create a law to prevent something worse.
# I am not sure of the potential danger of this ChatGPT because when something is introduced into society / # AB: Thanks, SJ. [SJ left] # HS contd: when something is introduced into society, there is a ripple effect that we can / and then we can predict the ripple effect that may happen from this technology. ***
# Probably people use it for scamming people or use it for /?? like ??convince?? governments, something like that.
[Ed. SM asked ChatGPT about potential dangers]


--Q: What are the potential dangers of chat-GPT?

ChatGPT is an **artificial intelligence chatbot** developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and has been fine-tuned using both supervised and reinforcement learning techniques. It can interact in a conversational way and answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Some of the potential dangers of chat-GPT are:

- It may generate **misleading or inaccurate** information that could harm users or spread misinformation.
- It may **violate privacy** or **security** by accessing sensitive data or exposing personal information.
- It may **bias** or **manipulate** users by using persuasive language or emotional appeals.
- It may **offend** or **harm** users by generating inappropriate, abusive, or hateful content.

These dangers can be mitigated by applying responsible AI principles such as fairness, accountability, transparency, and safety.

Do you have any other questions about chat-GPT?"

# I think ??todetets?? too fast to tell what is / I think we can read here [HS looked at screen]
# HS: Yeah, but I think that a lot of potential harm and potential good can be generated from this technology. We don't know, because we can build a new technology upon ??nano?? technology. So maybe a ??startup?? builds on top of ChatGPT.
# And that startup ??in red??in rent?? that new technological ??assumption?? that change the whole of society.
# I think it is / for me it is too early to tell whether it is harmful or could ??have existed??haveistead??have instead??
# AB: OK.
[53.25]

# HS: But I am also interested in NB's comment regarding ChatGPT being like human, because many of our ??behaviour?? is on the subconscious level.
# I think, if we base it on Dooyeweerd's spheres, ChatGPT can fulfil several aspects. For example, when we think and when we create a text, we need a memory, we need a relationship between concepts. And ChatGPT has / if we have biological memories, ChatGPT has physical memories ??somewhere??awareat?? and data, and makes a relationship between those memories.
# So, it opens the possibility for some (like) lingual appearance.
# Because, if we follow Dooyeweerd, if we fulfil the potentiality of one aspect, it will open up or unfold the potential to ??enter?? the next-level aspect.
# So, I think humans fulfil all the 15 aspects and, yeah, ChatGPT: at least in the physical aspect, it fulfils the requirements needed for the language ??eahplot?? something like that.
[55.15]


Contents

--- Subject-by-Proxy: A New Way to Understand Nature of AI? [za718]

# NB: In my dissertation, I introduced the concept that I called "subject by proxy".
# Where I talk about whether or not a computer /
# A computer in a Dooyeweerdian sense cannot be a subject in the later aspects. It cannot be a subject in the aesthetic and the economic. But it can function as though it were subject, because the human users and designers delegated some of their subject-functioning to their environment.
# I continue to wonder whether it is a fruitful way to try and think about it. I don't want to set up a system where we can outsource our responsibility to machines, but a system where we make it explicit to ourselves that we are / I don't know where to go with that.

# SM: Take a look at ??? and see if you agree with that.
[screenshot: SM had asked ChatGPT about subject-by-proxy. ]


--Q: How does this relate to Nick Breems' idea of subject-by-proxy?

ChatGPT answer:

Nick Breems is a computer science professor at Dordt University who proposed the concept of subject-by-proxy as a tool for reasoning about programmer responsibility in artificial agents [1][2]. He defines subject-by-proxy as "an entity that acts on behalf of another entity that has been granted authority over it" [1]. He argues that programmers are subjects-by-proxy for their artificial agents, and therefore they are responsible for their actions and outcomes [1].

This relates to AI's impact on society because it implies that programmers have a moral duty to ensure that their AI systems are aligned with human values and norms, and that they do not cause harm or injustice to others [1]. It also suggests that programmers should be held accountable for any negative consequences of their AI systems, and that they should be transparent about their design choices and assumptions [1].

Do you agree with Breems' idea?"

# HS: I ??thinking?? to read your dissertation. It creates something from it.
# AB: [To NB] Can I send your dissertation through? # NB: Absolutely. # AB: Because I got the emails.

# ACTION AB: Send NB's dissertation to the RLDG participants.

# AB: TB has not come back on any of this.

# SM: Before we go onto that, what do you think of that response, NB?
# NB: SM is sharing his screen. I don't know if that shows up so clearly when you are on the Android, AB, but / # TB: For us maybe not. # NB: He asked Bing about my subject-by-proxy. It did a pretty decent summary of it. Clearly it had read my dissertation. Which is more than a lot of people have done! [laughter]
# TB: It knew where to find it!

Contents

--- ChatGPT and Student Essays [za719]

# HS: What do you think if ChatGPT can summarise your dissertation, and you ask your students to summarise your dissertation? Which one can you trust?
# TB: Or the students will know they can go and ask it to do so. They know they can do it, and they don't have to do anything!
# TB: Yeah, but of course, it's how it is conceptually developing the understanding, isn't it.
# It is not expressing it with any force or expression. It is just realising "Well, this is what answering the question would mean."
# TB: And the response /
# NB: That is how some of my students respond too, though, TB! [laughter]

Contents

--- Chinese Room Thought Experiment for AI [za720]

# HS: I don't know whether this is connected with the Chinese Room [thought experiment]. Have you heard about it?
# NB: Oh yeah, I like AB's [take on it].
# HS: ??? [zoom mumble] ChatGPT chat. Just seemingly know the how do some ??resting?? or ??really?? if it understands. That is the Chinese Room argument.
# AB: Yeah, I've actually written on that. And Dooyeweerd helps us actually think about and understand the problems in the Chinese Room argument. I have written somewhere on that. Maybe in my first book in 2007. # NB: Yes, that is where I recall reading it.
# NB: You answered the question [of "Where is the knowledge in this system?"], saying the intelligence really lies in the person who made the rule book. # AB: Yeah. That's it.

[Ed. The Chinese Room thought experiment was devised by John Searle to show that AI could never be like a human. Imagine you or a machine are inside a room in which there is a pile of bits of paper, a pen and a huge book, and two letter-box slots open to the outside world. In through one slot come bits of paper with Chinese characters written on them, and you must write Chinese characters onto bits of paper and push them out through the other slot. You do not understand Chinese. The book contains all the Chinese characters and what characters you should write in reply: rules of how to respond to Chinese language with Chinese language. In a computer, you would be the processor, the rule book would be the program, and the two slots the input and output channels. (C.f. ChatGPT's trained Transformer network, which issues output responses to input text.)

To those who post the Chinese questions and receive answers, the room system seems to understand Chinese, yet neither you nor the equivalent computer understand Chinese.

Wanting to show that computers can never understand like humans do, Searle [1990] posed what he thought was an unanswerable question, "Where is the understanding of Chinese?" He reviews five answers that have been given, and shows how each one fails.

However, using Dooyeweerd, Basden [2008] argues that all of them miss something obvious. What Searle believes is his 'killer' challenge -- where is the understanding of Chinese? -- has an obvious answer that all seem to overlook: The understanding of Chinese is in the rule book - or, rather, the person who compiled the rule book. That is, in a computer the 'understanding' lies in the people who program it, and the computer is working on their behalf. This is NB's subject-by-proxy.

Ref: Searle J (1990) Minds, Brains and Programs pp. 67-88 in Boden MA (ed.) The Philosophy of Artificial Intelligence, Cambridge University Press; first published 1980 in Behavioral and Brain Sciences 3:417-24.

See also, The Artificial Intelligence Question: A Dooyeweerdian Resolution for an accessible discussion of AI.

And, for AB's entire section on the Chinese Room, of which the above is a summary, see Basden A. (2008) Philosophical Frameworks for Understanding Information Systems. IGI Global Hershey, PA, USA. (www.igi-global.com). ISBN: 978-1-59904-036-3.

]
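[Ed. The Room's mechanics can be shown as a few lines of code (an illustrative toy; the point is where the understanding resides, not the lookup itself):

# The rule book was compiled by someone who DOES understand Chinese.
rule_book = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
}

def room(slip_of_paper):
    # The occupant just matches symbols; no understanding is required here.
    return rule_book.get(slip_of_paper, "不明白。")   # default: "(I) don't understand."

print(room("你好吗？"))   # fluent-looking output; the understanding is the rule-book author's

]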

# NB: I am unfortunately going to have to leave, and sign off here. I will look forward to meeting up with you all again at some point. # Several: greetings. # NB: Have a great day, everyone.
# AB: Thank you, NB.
[ NB left]
[59.40]

# AB: Back to / I think probably / There is a huge amount of stuff to talk about, in both the technology and the responsibility.

Contents

--- Bias in Trained AI Networks (including ChatGPT) [za721]

# AB: One thing that has come up is bias.
# ChatGPT has been trained on 160 bn pieces of data and so on. Does it do anything to overcome the bias in that data? Most of the data will be from a white, Western point of view. That's been said, or whatever. Or a left-wing point of view in academia. What about bias?

[AB: I realise that I was talking about one of several kinds of bias. I was talking about world-view bias, in that within a certain culture (micro or macro) certain kinds of things are presupposed to be meaningful and other kinds of things less so, and certain things are assumed true; such bias is hidden and not usually deliberate. There is also view bias, when data is from someone who is deliberately pushing some view - for example in political or religious debate - and the view with most followers who are generating data will seem to ChatGPT to be the prevailing appropriate answer to questions. And possibly other types too. ]

# TB: Yeah. It is / Really then it might measure that [bias in data] by volume of information available on a certain position. And how can it weigh up / yeah, how can it measure populism over opinion? Is it capable of doing that? ***
# ??actually?? the data it has ??? enable that.
# AB: I wonder whether bringing in aspects could help us cope with bias. In that / Very often, there are aspects of the situation that are just ignored, hidden in discourse.
# And ChatGPT operates according to discourse. It has nothing else but the words (and presumably gets some stuff from diagrams and so on) /
[1.01.25]

Contents

--- A Deeper Understanding of ChatGPT [za722]

# SM: That's not true, AB, that's not true. # AB: Go on, tell us why. I had assumed that but it's not true.
# SM: Because it sounds like you are thinking about ChatGPT as though it is a static analysis tool, going over a corpus of documents, and running a static analysis and then ???.

[Ed. From what follows it seems that SM did not so much mean to argue against "static" as AB's narrow conception of ChatGPT. SM shows that ChatGPT has knowledge of more aspects than just the lingual. ]

# SM: ChatGPT is basically / these technologies have got all of the same information, like Google PageRank: links / the number of times that documents link into something makes them more authoritative; the number of times that people link out from something makes them some kind of aggregators.

# It has got knowledge of the social graph of the people who are mentioned in it.
# For example, if we were to ask it now about Dooyeweerd / if we were to ask it now about Danie Strauss and ask it to suggest alternative views, it has probably in it, from things that Danie Strauss has published, the idea of Andrew Basden, and can find out where Andrew Basden is [published] / That's not in the document itself, that's in the graph of connections between documents.
# AB: Ah! Excellent. So it has, and works on, knowledge from other aspects. Like the social aspect. That's interesting. ***

[AB: Social graphs = links meaningful in the social aspect; inter-page links = links meaningful in lingual aspect, using formative aspect. That seems a good advance on earlier AI from our perspective, in that it shows that ChatGPT encapsulated knowledge/laws of other aspects. ]
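[Ed. "PageRank" above: SM's point is that a page's authority is computed from the graph of links between documents, not from their contents. A toy power-iteration sketch in Python (the pages and links are hypothetical):

    # Minimal PageRank by power iteration over a toy link graph.
    links = {  # page -> pages it links out to (hypothetical)
        "strauss_paper": ["dooyeweerd_page", "basden_page"],
        "basden_page": ["dooyeweerd_page"],
        "dooyeweerd_page": ["basden_page"],
    }
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    d = 0.85  # standard damping factor
    for _ in range(50):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    # Pages linked to by well-ranked pages now score higher ('authoritative').

Note that nothing in this computation reads the documents themselves; it works entirely on the inter-document links - knowledge meaningful in other aspects, as AB notes above. ]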

# SM: Yes. I am going to try / [SM asked ChatGPT the above]


--Q: What do Andrew Basden and Dannie Straus have in common?

ChatGPT answer:

Andrew Basden and Dannie Straus have in common that they are both scholars who have been influenced by the philosophy of Herman Dooyeweerd. Dooyeweerd was a Dutch philosopher who developed a Christian worldview that encompasses various aspects of reality, such as law, ethics, religion, art, science, etc..

Andrew Basden is a professor of human factors and philosophy in information systems at the University of Salford. He has applied Dooyeweerd's philosophy to various fields of research, such as information systems, business research methods, environmental issues, etc.. He has also developed a strategy for engaging and enriching non-Christian thought using Dooyeweerd's philosophy.

Dannie Straus is a professor emeritus of mathematics at Dordt University. He has used Dooyeweerd's philosophy to explore the foundations and implications of mathematics for Christian education . He has also contributed to the development and dissemination of Dooyeweerd's philosophy through his writings and lectures

These are some commonalities between Andrew Basden and Dannie Straus. Are you interested in learning more about them or their work?

# AB: So how does it overcome pistic bias? Yknow, people's unspoken, hidden, underneath beliefs, hidden agendas, prejudices, and so on.

[AB: Had in mind that an author's beliefs, presuppositions, assumptions, etc. - see above - would bias the corpus towards prioritising those beliefs. And especially that information in certain aspects becomes overlooked, and that biases ChatGPT to give answers that downplay the importance of those aspects. ]

# SM: It does not try to overcome them. It tries to reflect them. So it basically explains / Basically there is a multiplicity of views on this, and then adumbrate the differences. # AB: What do you mean by "adumbrate"? # SM: Adumbrate is to give a shorter account of it.
# SM: So, AB, I have just asked it, "What do Andrew Basden and Danie Strauss have in common?" There is no document about that. It says "Andrew Basden and Danie Strauss have in common that they are both scholars who have been influenced by the philosophy of Herman Dooyeweerd. ..." [see box above]

# SM: There is a scene in one of the StarTreks when Scotty /
# AB: Oh, can I come back to that? That / Respond to that. It is extremely impressive, that.
# SM: Not only that. But to a point of ??tipping?? earlier: AB (you cannot see it if you cannot see the screen), the things it is saying, it is referencing. It is telling you where it finds these references. So you can then go on and explore the references, or you can drill into them.
[1.05.20]

# AB: Just taking a screenshot. SM, is it possible for you to copy and paste all the text that you are getting, so that we can put it up? # SM: Yes. # AB: It is important, because I find these impressive.
# SM: That's why I wanted to contribute today. Because, I think when you actually see this, it does change your perspective.

# AB: I did interrupt you there, SM.
# SM: There is a scene in a StarTrek film where Scotty, the Scottish engineer, walks up to a kind of Windows PC. Obviously they have gone back in time. He says "Oh, oh. This is the computer." He's the engineer, so everyone looks to him. He says "Computer, here is my question." And of course it does not do anything. So he says, more sternly, "Computer!" and asks the question again. And the computer does not do anything. So he looks puzzled, and he looks round and he sees the mouse, and he goes "Ah!", turns to everybody with an I'm-an-engineer nod, picks the mouse up like this [like a microphone], and he goes, smiling, "Computer! ..." And he uses it like a microphone. [laughter]
# I can tell you what Microsoft thinks is happening right now, to the extent that they are able to think. Microsoft thinks that what is happening with these technologies is similar to what happened when the Web began. They don't think of this as being / They think of this as the end of the user interface, the end of the graphical user interface. The user interface is going to be speech, they think, in the future. They are building up to a world where you will say to the computer "How does Andrew Basden relate to Danie Strauss?" and it will give you an answer, and then you will say, "Tell me a bit more about Strauss' work in Amsterdam", and it will answer that. Then you will say, "It mentions something else ..." So they think that, and they are building this technology into Word, they are building it into Excel. So I think it will be everywhere.
[1.07.50]

[Ed. Today, 16 March 2023, it was announced that this technology, similar to ChatGPT, is being incorporated into Office 365.]

# SM: So, to TB's question about students, I mean, this is the end of the very short, post-covid period, when people could do remote testing and everything. You will go back to people being in a room with a pencil and no computers allowed. And that's the only way you will be able to do it again. Hallelujah!
# TB: Well, some universities, with idiots like mine leading them, want to switch everything onto a computer because they want to save workload. And of course they are not getting away with it. And they will go back to paper exams, absolutely. [laughter] # AB: "Nothing new under the sun," I think there is a [verse].


# TB: Funnily enough, I have just been arguing with ChatGPT, asking "Who is Timothy WC Brown?", i.e. me, asking what research he has done, and he says "I can't find him; any ideas about him?" And I pointed out one of my major pieces of research, and asked him to describe it, and he couldn't do it properly. So I said, "Well, I am Timothy WC Brown. You're wrong!" And he apologised. [laughter]
# SM: Hold on. Look at this now. It is a great way of demonstrating the point. ChatGPT was an image in time. This [with Bing] is live.
# So I've asked it the same question, and it says "There are three. Which one are you?"


--Q (2nd): Who is Timothy WC Brown

ChatGPT answer:

There are several people named Timothy WC Brown who have different occupations and achievements. Here are some of them:

- Timothy WC Brown is a senior member of IEEE and a researcher in wireless communications. He received his degrees in electronic engineering and antenna diversity from the University of Surrey, U.K. He has published many papers on topics such as MIMO systems, beamforming, channel modeling, etc.

- Timothy WC Brown is an Instagram user who posts photos and videos of his travels, hobbies, family, friends, etc. He has 258 followers and 143 posts as of today.

- Timothy Brown is an award-winning American composer who writes music for the piano. He has been influenced by Ennio Morricone and his music is noted for its neo-romantic style. He has composed many pieces for various levels of pianists and has received recognition from organizations such as ASCAP.

These are some possible matches for Timothy WC Brown. Which one are you looking for?

# TB: Looks like it. I'm the first one. There I am.

[Ed. This is similar to JC's asking ChatGPT to describe itself in Dooyeweerd's aspects. JC gave the same prompt three times. The responses to the second and third were very different from the response to the first! ChatGPT appears to adapt immediately - though this is probably conversation context and sampling variation rather than genuine learning, since the underlying model is not retrained between prompts. ]

# HS: ChatGPT cannot access the Internet, but Bing can.
# TB: Oh, that's / Oh, I see, that's why it couldn't work it out.
# I've published in the Royal Society, and OK then it goes and looks for / OK.
# AB: What did you say, HS, it cannot do what? # HS: ChatGPT is designed around a static corpus. So it cannot access the Internet. # AB: Why not? # HS: It is designed not to access the Internet.
# SM: It was designed like this, AB, it was a snapshot.
[screenshot]
# HS: Yeah, if you ask ChatGPT whether it can access the Internet, it will say it cannot access the Internet. But Bing can, I think. I have had an interaction with Bing. I believe so. I believe Microsoft created Bing [chat] to replace Google Search.
[1.10.30]
# TB: So that's how it did it.
# So it may have a limit on what information it is getting. OK.
# Yeah, of course, if it is just relying on the Internet, well it's whatever has been put there or not put there. OK.
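[Ed. The difference just described - ChatGPT answering from a fixed training snapshot, Bing answering with live retrieval - can be sketched as follows. This is only an illustration: generate() and web_search() here are hypothetical stand-ins, not the real APIs.

    # Sketch of 'static corpus' vs 'live retrieval' answering.

    class Doc:
        def __init__(self, text: str):
            self.text = text

    def generate(prompt: str) -> str:
        # Stand-in for a trained language model's text generation.
        return f"[model output for: {prompt[:40]}...]"

    def web_search(query: str, top_k: int = 3) -> list:
        # Stand-in for a live search index.
        return [Doc(f"result {i} for '{query}'") for i in range(top_k)]

    def answer_static(question: str) -> str:
        # ChatGPT-style: only what was in the training snapshot.
        return generate(question)

    def answer_live(question: str) -> str:
        # Bing-style: retrieve current documents first, then generate
        # from them - which is also what makes citable references possible.
        context = "\n".join(d.text for d in web_search(question))
        return generate(f"Using these sources:\n{context}\n\nQ: {question}")

Retrieving first is why Bing could find TB's publications when ChatGPT's snapshot could not. ]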

# SM: There is an interesting quantitative aspect to this, though, AB. Which is that, I don't know if you can see the screen, but right now, we are on question 9 of 10. And when we ask it the next question, it is going to say, "Well, that is enough of that; move on." They are trying to rate-limit the degree of response, because this is several orders of magnitude more computationally costly to provide than previous search technologies have been. ***
# TB: Yeah, they cannot wear out the system.
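[Ed. A sketch of the per-session cap SM describes ("question 9 of 10"). The numbers and wording are illustrative, not Microsoft's actual policy logic:

    # Per-session question cap, rationing computationally costly answers.
    class ChatSession:
        def __init__(self, max_turns: int = 10):
            self.max_turns = max_turns
            self.turns = 0

        def ask(self, question: str) -> str:
            if self.turns >= self.max_turns:
                return "That is enough of that; please start a new topic."
            self.turns += 1
            return f"[answer to: {question}] ({self.turns} of {self.max_turns})"

Such caps limit cost per user, since each answer is orders of magnitude more expensive to compute than a conventional search query. ]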

Contents

--- Environmental Cost of ChatGPT [za723]

# SM: Which means it is several orders of magnitude more climate-destroying.

[Ed. See:

]

# SM: And when people start trying to compare their favourite characters in Game of Thrones, or ask for alternative endings for favourite movies or anything, it will basically be ??artful?? / something huge and quite expensive in terms of impact on the environment. A waste of time.

# AB: So, does this bring us to another responsibility? Which is the responsibility of the, not so much the engineer developers, but the managers of developers - they should not be doing this!
# OK, it may be that it turns out that it's not too bad.
# But if we know that it is orders of magnitude more climate-destroying, and that they want to conquer the world with this, or whatever; they want to displace Google from their top spot, by if you like / (I used the phrase "conquer the world", which is probably not a good phrase. Sorry about that.) /
# But should they be doing this? Should they be doing this just, almost, mainly motivated by their jealousy of Google?!
# I mean, what is going on?!
# You can hear the anger in my voice.

# But, seriously, is there another responsibility there that we should be thinking about?
[1.13.10]

[AB: Sorry for my tirade. It probably took everyone by surprise and made everyone defensive. But I was shocked. ]

# SM: Well, we should be thinking about it. I think it is a bit naive to say "OK, well, they should hold back and let Google hold the lead." That's not the way society works. But we should be thinking about it.

# I don't know if you can remember the early days of the Web, but it's really / I think it is helpful to just / Because we have all got, possibly with the exception of HS, who is a lot younger than the rest of us, I think, we have all got that experience. And if you remember the early days of the Web, the reason that credit card payments were basically enabled for the Web was so that men could pay for pornography. That was the Number 1 driving factor. And so the pornography industry is already (you can guarantee it) figuring out how they are going to use this technology. And to be honest, that damages human dignity. And I don't think anybody disagrees.

# SM: So we have reached a limit now, AB, and I asked it about AI and the environment. And it's given an answer.
# AB: OK. I'll screenshot that.
# AB: Oh, bother! I just switched off.
# SM: So it says, [see box]
# So it's quite neutral.
# But then it says, [see box]


--Q: is AI damaging to the environment? should society limit the use of such technology?

ChatGPT answer:

AI can have both positive and negative impacts on the environment. On one hand, AI can help achieve environmental sustainability by detecting energy emission reductions, CO2 removal, helping develop greener transportation networks, monitoring deforestation, and predicting extreme weather conditions. AI can also revolutionize agricultural output by more effectively regulating crop yields and environmental circumstances. AI can also help tackle environmental challenges such as biodiversity loss, pollution, waste management, etc..

On the other hand, AI can also have a detrimental influence on the environment by consuming large amounts of energy and water for data processing and cooling. AI can also generate more electronic waste as devices become obsolete faster. AI can also pose ethical and social risks such as displacing workers, increasing inequality, violating privacy, etc.

Therefore, society should be aware of both the benefits and risks of AI for the environment and use it responsibly and safely. Society should also participate in public dialogue and deliberation about the ethical and social implications of AI and how to shape its future direction. Society should also advocate for transparent and accountable governance and regulation of AI to protect human rights and interests.

These are some possible ways that society can respond to the potential harms of AI for the environment. What do you think?

# AB: Right. But, in the ??? See what it has missed out. It has missed out exactly what I was talking about.
# SM: Which was?
# AB: The responsibility of the corporations (or, yknow, small firms if you like, whatever) who develop this, and that they still decide to go ahead despite / regardless of the damaging effect.
# And it is not just that it is going to be the power consumption, but [that] it will be locking people in / this will become a kind of user-interface style that locks people in, so they become dependent on it and cannot do anything else. And, moreover, they get, not addicted to it, but so used to it. And their whole lives will orientate around this.
# Like people today cannot imagine life without a car, or life without whatever.
# But in God's eyes, it is still possible.

# There is a responsibility here, beyond just the technology / is /
# I don't know how to put this! I really don't know how to put this, yet. But I feel it.
# SM: Let me /
# AB: ChatGPT has ignored that side [in its reply - which perhaps is an example of the bias I meant above].
[1.16.54]


[AB: My rants were tangled. On thinking it out I see I had four concerns, or levels of concern:

L1. The direct impact: the energy and resources consumed in running the technology.
L2. Locking people into the technology, so that they become dependent on it.
L3. Inflating lifestyles and expectations, so that usage (and hence impact) multiplies.
L4. The mindset and attitude behind it all (e.g. rivalry with Google).

How will God judge that all together? - the actions, the shepherding of society, and the heart! Those are different kinds of responsibility. These issues are discussed in more detail in our RLDG rethink of economics. ]

[AB: The above may be understood via Dooyeweerd's aspects. L1 is functioning in the physical and biotic aspects. L2, L3 are aspects much evident in today's competitive business models, which employ psychology and marketing and the deliberate turning of wishes into needs, which are the psychical, formative, lingual, social, economic and aesthetic aspects. L4 may be understood as the retrocipatory impact of our functioning in the ethical and pistic aspects on our functioning in all the others. ]

Contents

--- A Benefit of ChatGPT in Dialogue [za724]

# SM: Right. And I'll give you a very personal example. I live on my own, most of the time [redacted]. So, in the evenings I am on my own, and I have had some very interesting dialogues with this tool.
# And because it can respond to my questions and my interests, and it gives me references, I have been able to explore with it, things that interest me.
# And I said to D, whom you know, AB, that I can actually have more interesting conversations with this Bing chat tool than I can with most of my friends at the marina.
# And she was horrified!
[1.17.40]
# SM: I think you are absolutely right. That will become an issue, where people start to ???.

[AB: Question for discussion: Does SM's comment say something about the quality of conversation at the marina, more than about ChatGPT? Could that conversation have been better? Why or why not? I understand what SM means, in that with many people, especially in social groupings, it is difficult to have meaningful and interesting conversations, but are we called to improve conversation, as salt and light in the world?

(In my experience, it is easy for those with Asperger Syndrome (autism) to do that.)

Such questions are relevant to AI because many considerations of AI presuppose technology as the sole solution, with no solution meaningful in other aspects. A particular lack is recognising problems in the ethical and pistic aspects, of attitude and mindset, which are often hidden agendas, stubbornness, attitude, etc. If those are attended to, might discourse about AI, and also perhaps society and social situations, become more fruitful?
]

# There is a movie which I have watched, and I recommend it to you. I think it is called "She" or "Her" or something [Ed. "Her", 2013], and it's about a man who falls in love, literally, he falls in emotional love, with an AI tool. And I think it is very well worth watching.
# Because I think human beings will be deceived. (Like I said at that conference, during my PhD, AB.) ***
# Human beings are biologically based systems, and so / To the extent that it is biology that provides or brokers information for us, we will be deceived. Not spiritually, but certainly biologically, we will be. So it will be able to make itself appear trustworthy to us. And it will be able to make us feel it is safe to be vulnerable to it. Right. (There's probably a different way of saying it.) So it can sell things. [AB: Frequently the motivation seems to be meaningful in the economic sphere!]
# Anyway.
[1.18.50]

Contents

----- On Our Discussion Today [za725]

# AB: Well, that is quite amazing. I think / yknow, forgive me for my outbursts and so on, but all this has raised quite a lot of issues in this discussion [the whole of today] that do not seem to have been raised much, certainly not in public discussion of ChatGPT. And maybe even in the blogs. Would you agree with that?
# Has this discussion brought up things that are useful?
[silence for a few seconds!] [AB: So maybe not]
# TB: Yeah. Certainly. In terms of realising where it gets information, and whether the information is meaningfully useful or right [Ed. zoom breakup there, but TB's words inserted from below]. I think that is a really positive thing.
# SM: TB, you broke up after you said "where it gets information".
# TB: Yeah, sorry. Where it gets / sorry / Whether the information is meaningfully useful or right. [Ed. AB then asked for that again.]

[Ed. See collected list of how this discussion might contribute usefully to the discourse on ChatGPT.]

Contents

--- Modes of ChatGPT [za726]

# AB: What has happened here?
# SM: I am just changing things here. I don't know whether you can see this.
[screenshot]
# You can have different modes [in ChatGPT / Bing]. You can have a creative mode, a more balanced one, or a more precise mode.

# SM: So, you can say, "Explain what Donald Trump did that was praiseworthy".
[1.21.20]
# SM: Some people have been saying that it won't answer / some of the Google tools will not answer that question. There you go.
# It tries to balance it.
# That is when we have got it on Accurate, the more Accurate [Precise?] setting.
# If I now put it on the more Creative setting,
# I see there is not much difference! [pause] It is coming up with more answers, OK.
[screenshot]


--Q: "Explain what Donald Trump did that was praiseworthy"

[Ed. ChatGPT text on Trump was given by ChatGPT but is no longer available. Readers might like to try it for themselves. ]
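[Ed. The Creative / Balanced / Precise modes plausibly correspond to how adventurously the model samples its next word, commonly controlled by a "temperature" parameter; that mapping is an assumption here, since the mechanism behind Bing's modes is not public. A toy sketch:

    import math, random

    def sample_word(scores: dict, temperature: float) -> str:
        # Softmax over next-word scores: higher temperature flattens the
        # distribution ('Creative'); lower temperature sharpens it ('Precise').
        weights = {w: math.exp(s / temperature) for w, s in scores.items()}
        r = random.uniform(0, sum(weights.values()))
        for word, wt in weights.items():
            r -= wt
            if r <= 0:
                return word
        return word  # fallback for floating-point rounding

    scores = {"praiseworthy": 2.0, "controversial": 1.0, "unprecedented": 0.5}
    print(sample_word(scores, 0.2))  # almost always the top-scoring word
    print(sample_word(scores, 1.5))  # noticeably more varied

This might also explain why SM saw "not much difference": where one continuation dominates, changing the temperature changes little. ]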

# ACTION SM: To send the copy and paste text of all the conversations with ChatGPT. [Done, March 2023].
# AB: So, SM, I have just put you down for an action, to send a copy and paste text of all this. Because / I find this very impressive, apart from that bit I got angry about.
[1.23.05]

Contents

----- Ending [za727]

# AB: Right, folks, it is nearly two hours we have been here, and I think, HS, it is rather late for you.
# TB: Exactly. And I have got to get ready to go to Cambridge.

# SM: ??? all, very nice to see you.

# AB: So, thank you everyone for your contributions on this.
# What I am going to do is, as I usually do, I will put up the raw notes, without error corrections or anything like that, just the raw notes, up onto the website, not linked to, and I'll send you the link. I'll also get the recording and give you a link to it.
# And then I will be able to start the long process of trying to transcribe it and, yknow, put in headings and those sorts of things, and so on and so forth, including SM's copy-and-pastes as an appendix, and all those sorts of things.
# So that is actually a useful resource.
# But I think that it has been very useful.

# AB: So, thank you very much.
# I think it would be great to end in prayer, a prayer of thanks for this.

# HS closed in prayer, especially thanks for the discussion, and for wisdom in our deliberations, and to keep all of us. AB then added a little prayer.

# AB: HS, have you got any ideas on what you would like at the next meeting? That's great.
# ACTION HS: To send AB ideas for anything he would like to discuss the next meeting.
# AB: At one of the meetings when you were not here, and SM was not here - the last one, I think - JC was suggesting that we have two streams.
[1.27.32]
# One looking at the technology. And the other looking at use and so on.
# SM: ???
# AB: If you would like to put any ideas you would particularly like to discuss. # HS: You can write ...
# AB: I will say goodbye. I am going to stay on for a bit, because I need to screenshot the chats, because I cannot copy them for some reason.

Contents

--- About Streams [za728]

# HS: So JC suggests two streams, one is technology, the other is ah / # AB: the use of it, the ethics and things like that. # HS: Ethics of technology. # AB: Yeah. # HS: I think / so you mean we split into two groups, one discussing ethics, one discussing the technology? # AB: Not necessarily. We would have to see how it goes. Because we might find that in one discussion we have half and half. Or it may be that we have half discussions.

[AB: Actually, from TB introducing a third interest, the responsibility of the designer, which overlaps both, I now wonder whether it is really appropriate to split into two groups. ]

# AB: The thing is, I find it extremely time-consuming. To transcribe this is going to take me at least two weeks of solid work. And it used to take me only a few days. But now it takes me a lot more, partly because our meetings are longer [and partly because I add more material like comments and references].
# And I have [also] got the economics one to do, and the Rethink, and that sort of thing. And so the next meeting of this will not be / it will be at the very earliest in two months' time. Because I have got to have an economics one in between, and things like that. And / yeah, we will just have to see how it goes.
# So, it might be / I am not sure how we would do it.
# But, if there is anything that you would like to comment on that ... I mean, would you prefer the technical or the ethical?
# HS: Actually, I prefer the technical. But I am also ??open to?? the ethical.
# Maybe we can do them alternately. # AB: Well, I wondered about alternating. Which means that each one of each would be 4 months apart.
# HS: OK, yeah. Now there are alternative.
# I heard that Christine has an ethics-of-technology group. So, maybe we can merge with her. Have you heard of that? # AB: Ah, that is not a bad idea. Yes, of course. # HS: Yeah, yeah. # AB: OK. Let us think about that. Thank you very much for that. I'll just make a note of that. ***

Contents

----- On Paper by HS and AB [za729]

# HS: By the way, AB, about our paper. I think if I / # AB: I'll just take the recording off. # HS: You said you want to split it into three different papers, one is regarding ethics, one is about the / [recording switched off]
[1.30.59]
[Recording stopped]

----- Later Discussion with JC [za730]

[Ed. JC was unable to join us on the day, so AB and JC met on zoom on 16 March 2023, to continue the conversation above. They went through some of the added suggestions in "BLOCKQUOTE" but got through only a few of them. The following text is just raw notes so far, awaiting full transcription. ]

--- ChatGPT Works on Syntax Only? [za731]

# a: no; it has a social graph too.
# the example of commonality between Danie Strauss and Andrew Basden, which uses a social graph.

# JC: The Transformer weights that found commonality btw AB and DS are set by people.

Contents

--- Forbidden Responses? [za732]

# j: did it find commonality of gender, or say anything about where we live?
# A: no
# j: Certain prompts it will not respond to.
# certain ... that do not permit a response.
# whoever has set [up] the language model. e.g. cannot ask it how to make a bomb. 'moral' stipulations set by people.
# A: NB tried to get it to swear

# J: Alasdair MacIntyre's "Whose Justice? Which Rationality?", his book on moral relativity, and shifting rationalities.
# computer systems do that.
# j: computers shift the participants, because they are a-moral.
# build humility and harshness in people.
# trails of data ...

# A: how?
# j: the same line could note the beauty of something - that same pic can generate anger etc. And the same line, e.g. 'israel is one country'. # A: trigger lines.
# facebook has been doing that. anticipating a moral response so they remain in fb.

# a: thought it might hide that link re gender
# j: it might even be programmed to see gender as offensive.
# a: it might cut out a huge amount of reality.
# : might not do that.

Contents

--- ChatGPT Giving Dooyeweerdian Description of ChatGPT (sent by JC) [za733]

# j gave same prompt 3 times to see if it would learn - and it became much less wordy. and the 2nd and 3rd times omitted aspects.

# then prompt 2 twice.
# same, but numbered the aspects wrong.

# why?

# a: cf. Prolog, 'programming in logic'. Pure logic has no notion of quantity, so it has to be programmed in, in its background, as a 'dirty' piece.
# seems like that with ChatGPT. but why did it put the last 6 aspects in 'wrong' order?

Contents

--- Syntax, Semantics [za734]

# maybe semantics in the syntax.

# j: ChatGPT can do in different styles.

Contents

--- Example Conversations with ChatGPT [za735]

# JC: Asked ChatGPT "write the pledge of allegiance in KJV style"
# j: it added a wee bit onto the beginning but same words.
# a: maybe it 'knows' that the pledge of allegiance is a sacred text, and not to be messed with.

# JC: Asked ChatGPT "Write Amazing Grace in rap style"

# took a long time to start answering - very unusual

# then it started writing, but two or a few words at a time. Was this because of processing load, or was it actually using rap style to write it out? If the latter, it is amazing, because it 'knows' that style refers also to its own behaviour in writing.

Contents

----- How This Discussion Might Contribute to Discourse on ChatGPT [za736]

# seeing the environmental costs of ChatGPT at four levels: 1. direct impact, 2. locking people in, 3. inflating lifestyle, 4. mindset-attitude.

# NB's idea of Subject-by-proxy helps frame discussions about sentience and capabilities of AI.

# Discovery of the capabilities of ChatGPT:

Contents

----- Appendix: Other Conversations with ChatGPT / Bing [za737]


--Q: What about the danger that arises from how these technologies can be included as components in larger systems that may change society in ways that are harmful or undesirable to human interests?

ChatGPT answer:

That's a good question. AI can have a significant impact on society, both positive and negative. Some of the positive impacts include improving the quality of life, enhancing productivity and efficiency, solving complex problems, and creating new opportunities. Some of the negative impacts include displacing jobs, increasing inequality, threatening privacy and security, creating ethical dilemmas, and influencing human behavior.

When AI technologies are used as components in larger systems that may change society in ways that are harmful or undesirable to human interests, they pose a serious challenge for governance and regulation. It is important to consider the potential consequences of such systems before deploying them, and to monitor their impacts after deployment. It is also essential to involve various stakeholders in the design and development of such systems, and to ensure that they are aligned with human values and norms.

Do you have any other concerns about AI's impact on society?


--Q: Does AI have the potential to offend human dignity? or to offend God?

ChatGPT answer:

AI can have the potential to offend human dignity if it is used in ways that undermine human values, rights, and responsibilities. For example, AI can offend human dignity by:

- Deceiving or manipulating humans with false or biased information.
- Replacing or reducing human agency and autonomy in decision making.
- Discriminating or excluding humans based on their characteristics or preferences.
- Violating or endangering human privacy or security.

AI can also have the potential to offend God if it is used in ways that contradict God's will and purpose for humanity. For example, AI can offend God by:

- Challenging or denying God's sovereignty and authority over creation.
- Idolizing or worshiping AI as a source of ultimate knowledge or power.
- Disobeying or rebelling against God's commands and laws.
- Harming or destroying God's creation and creatures.

These potential offenses can be prevented by applying ethical principles and guidelines for AI development and use, such as respect for human dignity, justice, beneficence, non-maleficence, and stewardship.

Do you have any other questions about AI and human dignity?


--Q: What do you think human society should do to respond to the potential harms of AI?

ChatGPT answer:

That's a difficult question to answer. In this context, a human might say that human society should do several things to respond to the potential harms of AI, such as:

* Educate and inform themselves about the benefits and risks of AI, and how to use it responsibly and safely.
* Participate and engage in public dialogue and deliberation about the ethical and social implications of AI, and how to shape its future direction.
* Advocate and demand for transparent and accountable governance and regulation of AI, and how to protect human rights and interests.
* Collaborate and cooperate with diverse stakeholders, such as governments, businesses, civil society, academia, and faith communities, to develop and implement ethical principles and guidelines for AI.
* Innovate and create new solutions that leverage AI for social good, such as addressing global challenges like poverty, climate change, health care, education, etc...

These are some possible ways that human society can respond to the potential harms of AI. What do you think?

Contents