RLDG Zoom Discussion 1 on Artificial Intelligence

This is a new series of Reith Lectures discussions, arising from the 2021 Reith Lectures on Living with Artificial Intelligence, given by Prof. Stuart Russell in December 2021.

Discussion of Lecture 1, The Biggest Event in History

Lecture date: 1st December 2021

Discussion date: 4 March 2022

PRESENT: AH, CA, SJ, JC, NO, NB, HSS, TB, AB (host, transcriber, annotator)

Contents

See Themed Overview.

[AB decided not to contribute much to the actual discussion, because of the difficulty of writing notes while speaking, and also to give others more time; instead he has added what he would have said here in the notes. These contributions are introduced in square brackets as "[ABC ...]" (AB contribution). Editor comments are marked "[Ed ...]".]

[Recording time: 1:08:13. Transcribed by AB, 8-11 March 2022, in role of Editor. Editor inserted links, notes and comments. Also, AB inserted material he would have said as "ABC". AB as contributor and as editor should be treated as different personages. Unclear words, usually because garbled by Zoom, are indicated by "???". Chat messages have also been inserted, though not necessarily in order because we could not work out how they fitted. Times on recording are inserted every few minutes so that readers can listen to that part of the recording. ]

----- THE DISCUSSION [za100]

# AB: Opened in prayer
# AB: Welcome everybody. AB did some admin.

--- Introduction [za101]

# AB: Stuart Russell has given four Reith Lectures on Artificial Intelligence (AI). That is a good starting point for our discussions. So what I propose is that we discuss each lecture once, then see where we go from there. [za130]
# Have we been able to listen to the first lecture?
# NB: "Not currently available". # AH: Was ok. # NO: Clicked on Reith Home and then I could download the pdf. [za131]
# AB: Will put the audio up on Google Drive. ACTION AB. **
[04.20]

# Introduced each other. [listen to audio for introductions]
INTRODUCTIONS: AB, NB, AH, CA, JC, NO, SJ, TB, HSS [za132]
[10.08]

# AB: How many have prepared notes or questions?
# Questions indicated from: NO, SJ, JC.

--- On the Nature / Definitions of Intelligence [za102]


[11.10]

# NO: I only looked at the 1st lecture. But I found it to be pretty disappointing in a couple of regards. [za133]

[ABC: So did I!]


# One is that it didn't have a whole lot of detail in it that, in general terms, we have not heard already. I think the guy speaks like a fool, because he has often contradicted himself in what he's laid out here at different points. He first tells us that AI cannot possibly replicate human intelligence, then he goes on to define intelligence by saying that humans are intelligent to the extent that our actions can be expected to achieve our objectives, then he takes that same statement and applies it to AI. [za134]
# Well, that sort of limits the whole scope of our intelligence to some very causal terms, something similar to what Kant did. [za135] **
# That sets up a nice scenario for discussion but it sets up false boundaries around the whole thing. It ignores a lot of metaphysics. [za136]
# Then (I won't go through some of my other notes here) but then he goes on and has this tremendous leap. So, after he tells us that it doesn't do much, then he goes on to say that "inevitably general purpose AI [GPAI] could far exceed human capabilities in many important dimensions, and this could be an inflection point in civilization." [za137]
# Well, that somewhat contradicts what you were telling me before. [za138]

[ABC: See reflection on intelligence below.]

# Then he seems to - y'know, again - have this, what he calls, rather prosaic goal of GPAI: to do more at far less cost and greater scale. [za139]

[Ed. Does this link to our rethink of economics? ]
[za140]

# The problem with this is that it ignores the complexity of the world that we built at this point in time. And that I think is a fundamental issue that we ought to talk about. [za141] **

[Ed: Did we discuss this?] **

[ABC: Complexity: multi-aspectual functioning. So intelligence as multi-aspectual; see below.] [za142]

[14.00]


[-- The following contribution introduced by NB actually occurred later, at 15:40 on the recording]
# NB: Haven't watched the video, but the fact that he contradicts himself need not be a show stopper, simply in that there are some / [za143]
# We either run into actual paradoxes - that I'm comfortable with. [za144]
# But also the terms that we use we have not really nailed down the meaning of. [za145]
# A term like "intelligence", he described it as "the propensity of our actions to achieve our objectives." That's a slippery concept to define. So if we talk about whether computers can achieve human levels of X (fill in the blank for X) we probably would come up with good examples of failure of them doing that. [za146]
# That's a separate question from do they have the same moral weight as human beings? To which of course the answer is "No". [za147]
# And so the fact that he grapples with things where he seems to be saying the opposite of what he said earlier I take as a symptom of the fact that we don't really have a great vocabulary for a lot of this yet. [za148]
[-- return to 17.30]
[Ed. From there, NB continued with the idea of subject-by-proxy]


[-- The following contribution introduced by HSS actually occurred later, at 20.55 in the recording]
# HSS: One is the definition of artificial / intelligence itself.
# Because as the field of AI moves to the future, the definition keeps changing.
# The first one, as we know, is from Alan Turing himself: if the AI can produce language that cannot be distinguished [from a human's] by humans, it is already called intelligent.
# But this definition keeps being revised.
# I think the most recent one is by a Pragmatist philosopher, Prondom??, who says that if the AI can commune in the community, can interact in the community as a human, then we say this AI is truly intelligent.
# If it's just communication, it's not, because communication can be interpreted in different ways, right. [That's] my first remark.

[-- Ed. In the actual discussion, at time 22.00, HSS introduced his second remark at this point, on the topic of Morality in AI; see that section below. ]


[ABC: On understanding and defining intelligence. Using Dooyeweerd, could intelligence be defined as the harmony of the rationalities of all aspects? Would that offer a useful, workable definition? It would take into account all kinds of rationality. It would also take into account everyday experience in that this involves harmony of aspects (as multi-aspectual functioning). We could not of course achieve this in a machine (in the near future) but at least it might offer a yardstick, or a guide to research, assessment, standards and decision-making. It also shows why and in what ways other definitions are false. e.g. ability to meet objectives restricts it to the rationality of the formative aspect. ] [za149] **

--- Problems from AI in Use [za103]

# NO: I do like his statement that social media has created a lot of problems. [za150] **
# I think that is a similar issue for AI as we walk through this thing.
[14.22]

# NO: He goes on to make some other comments at the end, where he said something very interesting,

"I wish the entire information technology industry had a different structure. If you take out your phone and look at it, there are are 50 or 100 corporate representatives sitting in your pocket just sucking out as much money and knowledge as they can. None of these things on your phone really represent your interests at all." **

# NO: To me, that's really the issue that I have concerns about. It goes into some of our economic discussion, the fact that AI can be used as an instrument to do evil as much as good. [za151] **
# That's where again I have real concerns and interests: what can be done, and what Christian viewpoints can do to connect with this. [za152] **

[ABC: Here we are talking not about the nature of intelligence, which is a philosophical topic, but about use in the real world, bringing benefits or detriment, and even its relationship with society. See below on areas of interest. ] [za153]

[15.36]

# AB: Thank you [NO] very much. There's a lot there. Anyone like to respond?


[-- Ed. NB remarked on contradiction and nature of intelligence again; his contribution has been moved earlier. ]

--- Subject-by-proxy [za104]


[17.30]
# AB: A bit more about how you would tackle that [NB's definition ].
# I'm giving you an opening for your idea of subject-by-proxy.

[Ed. SOME BACKGROUND: In Dooyeweerd, the idea of being a subject brings together both meanings of the English word "subject" - being an active agent rather than a passive object, and being subject to laws. Dooyeweerd suggests that being an active agent is made possible only by being subject to the laws of the various aspects (which he calls law-spheres). Usually manufactured things function in any aspect as object, but only humans can function in the analytical to pistic aspects as subject. This is obviously so with things like pens or paper (lingual aspect: I am the lingual subject, they are lingual objects), or when I play hide-and-seek with friends (subject-subject relationship in the aesthetic aspect). But with computers it is not so clear. Unlike a pen, the computer can write prose or poetry without my giving it the words. In a computer game, I play hide-and-seek with the NPCs (non-player characters), whose actions are controlled by AI. Yet the computer itself is 'merely' a physical thing, in that, without humans, it functions as subject only up to the physical aspect. And while a computer used as a word processor could be seen, like pen and paper, as a lingual object, it does not seem right to treat the poetry-writing program or the computer game characters as mere objects. How do we understand this? NB came up with an idea that might help. ... ]

# NB: The way in which we assign moral weight, I said earlier - probably responsibility is the most commonsense word. [za154]
# The way we assign responsibility for actions in an AI era is really going to challenge us because we, I think rightly, suggest that only humans can be responsible, in the literal sense of being able to respond ultimately to God's word for reality. [za155]
# And yet we are going to see examples of computers taking actions where there is not a nice obvious human we can point to as the one who is ultimately responsible. [za156]
# And I've been struggling with a way to come to grips with how to assign responsibility in a way that maintains a meaningful sense of that word, at the same time as being able to reason about the actions of machines. Because machines are going to be taking actions that we do need to, one way or another, assess. [za157]

# So I've come up with a concept called subject-by-proxy. [za158] **
# Where I say that only humans are subject in the later aspects. At least in orthodox Dooyeweerdianism, only the human is subject for the rules of aesthetics, economics, and social, juridic, pistic, and ethical. [za159]
# And yet whenever we make technology, we are taking some part, some values of ours and putting it out in the world. Setting it up to run by itself, living on as some subset of who we are, becoming embedded in our environment. [za160]

# And AI is going to turn the knob up on that propensity. So can we say that an artificial intelligent agent is acting as a proxy for me? [za161]
# And therefore is in some sense subject to the rules and the norms of the later aspects, not truly subject but subject in the way that I as a human programmer have delegated my own responsibilities to this external environmentally embedded technology that I have made. [za162]
[20.15]

# NB: BUT every time that I describe it, I feel deeply uncomfortable. I don't like where I'm coming, and I don't see a good alternative. [za163] **
# And so, that's one of the things I'm excited to get from this group is envisioning what other answers to that conundrum there could be. [za164]
# Because I'm not willing to say that the AI is subject in the modalities, is truly responsible, but it seems inevitable that the AI will act in a way that we have a difficult time not treating it like that. [za165] **

[Ed. I don't think we got any discussion on that. So, maybe this is a TOPIC TO DISCUSS in future. ] [za166] **

[20.50]

# AB: Any responses to that?

[20.55]
# HSS: I have a few remarks on these topics.
[-- Ed. HSS said something about definitions of intelligence, which have been moved earlier.]
[22.00]

--- On Morality / Responsibility of AI [za105]

# HSS: The 2nd remark:
# I think the responsibility of AI, the moral responsibility or moral compass of AI, especially of contemporary AI, [is] realised on the data, and data is generated by society. So if the society / [za167]
# So I think the AI behaviour is a reflection of society that produced the AI. So [for example] if society is racist, the AI will become racist. [za168] **
# So it's like the moral of AI will reflect the society that generated the data. [za169] **
# I think that's my opinion.
[22.50]

# NB: I'm curious HSS in your analysis, what is role of the distinction between special purpose AI and general purpose AI? [za170]
# Most of the examples we are seeing right now are very special purpose AI. For example, an image classifier or a fraud detector in a banking system, where the data we've given it is really quite constrained; it really does only one thing.


[Ed. BACKGROUND: Note that there are two roles of "data" in AI, at least in machine-learning AI. 1. Training data: data used to train the AI system so that it 'learns' the knowledge it needs to work well when it is run / used in real life (e.g. 100,000 past cases of fraudulent and non-fraudulent transactions, so that the AI can learn to distinguish them faithfully). 2. Running data: data given to the AI system when we run it (e.g. data about a single banking transaction that we want to assess for fraud). I suspect that here, by "data", both HSS and NB are referring to the training data, and thus to the data that determines how the system will operate, not the data supplied when it is in use. Indeed, research has shown that AI systems, e.g. for selecting candidates for jobs, reflect the biases of recent years. ]
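[Ed. A minimal sketch (assuming Python and scikit-learn; the feature names and figures are invented for illustration) of these two roles of data: the training data shapes how the system will behave, while the running data is what the deployed system is asked about.]

```python
from sklearn.linear_model import LogisticRegression

# 1. Training data: past transactions, each labelled fraudulent (1) or not (0).
#    In reality this would be e.g. 100,000 cases, not four.
X_train = [
    [250.0, 1, 0],   # [amount, foreign_country, night_time]
    [12.0,  0, 0],
    [900.0, 1, 1],
    [40.0,  0, 1],
]
y_train = [1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)   # the 'learning' step

# 2. Running data: one new transaction the deployed system is asked to assess.
new_transaction = [[600.0, 1, 1]]
print(model.predict(new_transaction))   # e.g. [1] = flagged as fraudulent
```

[Ed. The biases mentioned above enter at step 1: whatever patterns, good or bad, are in the past cases become the system's way of operating.]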

# NB: The dreams that a lot of the AI people are talking about when we bring these topics up, is GPAI, where it can handle any task you throw at it. Does your analysis change between those two? [za171]
# HSS: First we need to go into the detail of what mechanism generates this AI. Because currently I cannot find any mechanism that can create GPAI. [za172]
# Maybe the closest one is the evolutionary algorithm. But it's still very in the early stages. So I'm not sure about that.
# Currently we have only spatial AI, we don't have GPAI yet. [za173]
# NB: I think that's one of the challenges we face, is we are seeing examples of the special purpose AI and starting to get a picture of both the magic and the dysfunction that could come. And we're trying to prepare for a world where there might be GPAI. We don't see the path from here to there yet, but we don't doubt that there are people looking for that path ??in our yard??. [za174]

[ABC: Continuing from above, would it be useful to see GPAI as fully multi-aspectual AI and special purpose AI as single-aspect AI? GPAI may be understood as incorporating the full rationalities and laws of all aspects, while special purpose AI incorporates some part of the rationality and laws of one aspect. When that one aspect is within the first four (quantitative, spatial, kinematic, physical) then the laws and rationalities are relatively simple and most could be transduced to quantities with reasonable accuracy and processed in a computer. As HSS remarked, most SPAI is spatial - e.g. examining X-ray photographs to find cancers. Note that to incorporate into an AI system the laws of aspect X, one must also incorporate those of aspects X-1, X-2, i.e. all the earlier aspects on which its laws depend. So, as we attempt AI in later aspects, will we find not just an exponential but a permutational increase in the volume and complexity of the total rationalities and laws that must be incorporated into the AI? (c.f. the difference between pre-theoretical and theoretical thought in Dooyeweerd, the abstraction and Gegenstand relationship.) ] [za175] **

[24.35]

--- Infatuation with AI [za106]

# NO: It troubles me that we as humans get so infatuated with what we can do. I forget who it was that said we are homo faber, man the maker. I reflect on the last century, because that's what I came out of. The 100 million innocent deaths of people who died because of ideology, which said "Everyone can be equal, everybody can be well-fed, everybody can have the same kind of opportunities and money."
# All of that is also concerning when we have people who say, "AI is going to be able to solve all our problems."
# And then what. /
# We don't have GPAI now, I guess. Will we ever have it?

# And how much resource should we throw into it? Is that a good question? [za176] **
[25.55]

--- What Makes a Good Future? What is Responsibility? [za107]

[Ed. This is SJ's question indicated earlier.]

# SJ: What would make a good future? [za177] **
# And I think that was a question that I found very interesting and because there is no answer to that yet.
# GPAI is not really coming. And I suppose, because there is no common answer to that, that is one reason behind the delay on progress in GPAI. [za178]

# Second, my question for the group is: How can we define "responsible"? [za179] **
# Which is related in my mind to "What makes a good future" as well.
[27.09]

[Ed. Have we discussed that adequately? A question for FUTURE DISCUSSION. ] [za180] **

--- AI in Existing and New Sectors [za108]

# TB: An interesting thing there, I was thinking, is: I would say that AI has probably not progressed quite so fast on [the basis of] what already exists, because it is quite hard to adapt to what is already there.
# But in new enterprise, and certainly e-business - it keeps reforming itself - and new e-business can change itself any time it wants to. The AI is certainly there; we don't see it but it's certainly developing. [za181]
# I would say that that is what I would call general purpose AI, GPAI. [za182] **

[Ed. A different view of definition of GPAI? ]

# Because these businesses are getting consultants in, or getting recruitment in, graduates basically, to come and generate that kind of general AI that they need to have for the business to operate and to be able to promote themselves, and help target their audience in a way that the AI will enable them to do so. Through the many channels that they have now got to do that. [za183]
# So, I think that's really where it's coming about.
# Would there be a consumer AI: can you sell AI to the consumer? To get them to use it to their benefit. Probably that's less likely. [za184]
[28.40]

# Chat 01:09:05 NO: Don't we as Christians have an answer to what is the better future? [za185]
# Chat 01:09:36 JC: where should we apply our ability to respond to be responsible in that sphere? [za186]
# Chat 01:11:47 NO: To Jordan - yes where?

# AB: Say something a bit more, TB, to explain what you mean by: It is GPAI because business keeps getting graduates in to do AI. In what way is that general purpose?
# TB: I would say, in that it is helping e-business be competitive, and to do that, they need to know how AI can reach its target audiences through social media, following patterns of visiting websites, cookies, etc. It's playing with all those information tools, that are generally there, and there for the whole world. And it's how each business will compete in learning how to /
# AB: Why call that general purpose AI?
# TB: I suppose it's because it's going to a general area of cyberspace. [za187] **


[Ed. So it seems we can define GPAI in two ways. (a) that it incorporates, in its engine (from its training), the rationalities of all aspects. (b) that it obtains as input data when running, information from a very wide range of sources. That seems to link with two meanings of "data".
]

# TB: But I can see that; but yes, still there are lots of ways in which, for different reasons, you can specifically do AI to get different results. [za188]
# Elections would be another example of how AI could influence elections; And it can. [za189]
# And how it can proliferate news. And journalism. That would be another aspect, as well as online e-business. [za190]
# So I suppose all of those are a plethora of statistics, aren't they, er specifics, rather, I should say.
[30.28]

--- The AI Tools [za109]

# NB: I think perhaps that one of the first steps we are seeing towards GPAI is a set of tools that can be used to generate a special purpose AI. [za191]
# But they are fairly general tools that can be used to generate a special purpose AI. The tool itself becomes a step towards a GPAI. [za192]
# The statistical analysis and machine learning tools are pretty general. What [training] data you feed them then affects what their purpose is, but the tool itself in some sense is AI-ish. [za193]

[ABC: Each kind of tool itself incorporates the rationality and laws of a single or few aspects. A tool / program may be seen as a virtual law-side. e.g. statistical tools: quantitative aspect, neural net: psychical aspect. But in statistics of some topic, the quantitative aspect reaches out (targets) the aspect that makes that topic meaningful. Similarly, learning that topic via neural net. ] [za194]
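[Ed. A small hand-rolled illustration of NB's point (hypothetical code, not any particular product): the learning tool itself is quite general, and only the training data fed into it makes the resulting AI special-purpose.]

```python
def train(examples):
    """A very generic learner: simply remembers labelled examples."""
    return list(examples)

def classify(model, item):
    """Nearest-neighbour: return the label of the most similar remembered example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: distance(ex[0], item))[1]

# The same tool, fed two different training sets, yields two different special-purpose AIs.
fraud_detector = train([((900, 1), "fraud"), ((12, 0), "ok")])
spam_filter    = train([((30, 5), "spam"),  ((2, 0),  "ham")])

print(classify(fraud_detector, (750, 1)))   # -> "fraud"
print(classify(spam_filter,    (25, 4)))    # -> "spam"
```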

[31.00]

# JC: Was going to mention something. [but too much noise]
# TB: ??Youanme?? is already doing that. You can learn the general ...

# AB: The two examples you gave, TB, are both informational AI, that is AI that deals with information, to change people's information and knowledge and so on. And I wonder if that's relevant or not. But if information is the lingual aspect, the aspect that deals with meaning, meaningfulness, and meaning is, if you like, what all aspects are [Ed. aspects are "modes of meaning", ways in which things can be meaningful], then maybe it covers all aspects. [za195]

[ABC: So, I tended to think that such AI was limited to the lingual aspect. However, TB might have a point. Since all words etc. signify something meaningful in any aspect, the AI whose running-data is words etc. will somehow have to cater for all aspects of signified meaning. If GPAI is defined as covering all aspects, then this could be classed as GPAI, even if it does not cover every aspect fully. That, it seems to me, is what TB had in mind. ] [za196]

# NO: By the way, there's an interesting book that came out not long ago, called "The Ethical AI" I think, written by a couple of scientists involved with it. I don't know whether you guys have read that or not, but if not, it really speaks to the issues of this embedded complex AI, that makes decisions inside of the / I'm not describing this properly / but they say that these algorithms. They say basically that the problem gets so complicated, you cannot really tell what kind of biases are resulting.
# One thing that they recommended is that we develop independent committees that audit these complex routines. Pretty interesting book.

# Chat 01:15:34 NB: The *opacity* of AI is a real difficulty, epistemologically.

[ABC: On Opacity of AI Systems. I used to work in AI in the 1980s, when many of us built Expert Systems. Instead of relying on machine learning, we used to elicit knowledge from experts and represent it explicitly in a knowledge representation language, as a kind of network of inference and semantic relationships. When it was run by users, the 'engine' of the system would traverse the network, seek information from the user, make inferences according to what is in the network, and then give results. One of the major advantages of this was that the reasoning by which it came to a result was transparent rather than opaque - the very opposite of machine learning AI today. In fact, I built Expert Systems that were not only transparent, but even stimulated users to think more critically about their problems. See Basden & Hibberd (1996); Basden et al. (1996); Basden & Brown (1996). That was because, unlike today's AI, which has the limited role of replacing the human, Expert Systems had several other, human-friendly roles (Basden 1983). ]
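[Ed. A toy sketch (illustrative only, not AB's actual Expert System software; the rule content is invented) of the knowledge representation style described above: knowledge is held as explicit if-then rules, an engine chains over them, and the reasoning trace remains fully inspectable - transparent, in contrast with the opacity of machine-learning models.]

```python
# Each rule: (set of conditions, conclusion).
rules = [
    ({"damp_walls", "cold_bridging"}, "condensation_risk"),
    ({"condensation_risk", "poor_ventilation"}, "mould_likely"),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"damp_walls", "cold_bridging", "poor_ventilation"})
for step in trace:   # the kind of 'explanation' an Expert System can give its user
    print(step)
```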

# Chat 01:17:15 NO: The Ethical Algorithm: The Science of Socially Aware Algorithm Design [za197]
# Chat 01:22:28 SJ: I read the book Neal. It is interesting. My understanding from their book is that there is no solution at the moment to ethical issues from Mathematical perspective. It is a matter a creating a good trade-off between conflicting goals. which make me go back to the importance of Human in the loop and ask what good means to the HIL? [za198] **
# Chat 01:23:56 NO: Sina - I agree - Humans control the box, nicht wahr?

[ABC: I would approach that "no solution at the moment to ethical issues from Mathematical perspective" from an aspectual perspective. "Ethical" often means a mix of both ethical and juridical aspects. But those are so late in the aspectual sequence as to make it very inappropriate to try to mathematize them, or even to think they can be reduced to analytical logic. The "trade-off between conflicting goals" I would see as between different aspects, the norms of which cannot be directly compared, so we need to "go back to the importance of Human", as the one entity type that can function as responsible and responsive subject in all aspects together.
] [za199]

[34.00]

--- Where Should We (RLDG) Apply Our Time? [za110]

# Chat 00:52:37 JC: i have one questsion
# Chat 00:52:40 JC: question

# JC: Where should we apply our time? [za1a0] **

# Chat 00:56:08 JC: Should our priority to address the impact of AI on employment (automation eliminating many jobs) or on the meta ethical assumptions informing AI/machine learning and where the AI system/Machine learning should/ought be deployed? Should we be luddites (:)) or should we be gatekeepers on informing the public on where the systems should/ought or not be applied?
# Chat 00:57:02 NB: Answer to Jordan's question: "Yes". :-)
# Chat 00:59:53 NO: Good point Nick!

# JC: I know this is a theoretical conversation, and NB's answer in the chat is appropriate, but should our focus be on understanding the ethical hurdles that should be in place for the application of GPAI? Assuming that it will come. (And thank you for the conversation round this.) Or [on] something else? [za1a1]
# The ethical considerations: is that where we pursue this, or just something more parochial? e.g. Do we protect jobs, being luddites?
# I don't think we should be luddites. I don't think that's very / I mean, dominion, meaning servicing / providing the opportunity for the flourishing of creation. [Ed. "Dominion": Christian jargon here; see 'Radah', which shows it is service not dominating.] [za1a2]
# But that is an aspect that we should be concerned with ... justice and other spheres. That's my question. I listen.
[35.20]

--- One Priority: Danger to Humanity [za111]

# AH: I apologise for my silence up to now. This is AH. I'm ignorant about so many of these.
# My interest in this discussion was spurred by what I've read, especially from the bulletin of the atomic physicist, I think it was [Ed. the Bulletin of the Atomic Scientists]. They said that the Doomsday Clock was so close to midnight for so many reasons, but one of those reasons was the threat to humanity of AI. [za1a3]
# Chat 01:20:43 AH: doomsday clock: https://thebulletin.org/doomsday-clock/
# Chat 01:22:17 NO: Thanks Andrew, was not aware of this

# I say that because it was sort of ???-quoted in the Lecture that we were all supposed to listen to for this discussion. So those two things amplify together, and accentuate each other, for me.
# From the point of being a threat to humanity and what happens when these machines are smarter than humanity and cannot be controlled any more by humanity.

# That affects the way I kind of answer the question of "What are our priorities?" [za1a4]

--- Short-term and Long-term Concerns [za112]

# AB: So, that sounds like both a question and a response to JC's question. Can I just clarify, AH, that your concern is a long-term concern, about possibility of some big disaster? Is that right? Whereas I think JC's concern is somewhat different. [Ed. Short- and long-term concerns] [za1a5]
# AH: Yes, they may be different, but I see them as complementary rather than as conflicting. [za1a6]
# It seems to me that JC's concern is more short-term, where AI and automation too, globalization - many of these developments - are threatening jobs, and therefore human wellbeing. But mine is more long-term: what happens in, say, 30 or 40 years from now, when we've created machines that are 'smarter than humans'. [za1a7]
[38.04]

# AH: I use that term "smarter" of course loosely. But we're talking about what that itself means: intelligence.
# AB: I've got a lot to respond to there. I think that what I'll do is to add in my responses when I go through these notes.
# Definition of intelligence as "intelligent ... their actions can be expected to achieve their objectives."
# NO: It's a vacuous statement.
# Chat 01:20:33 NO: From the Lecture "Machines are intelligent to the extent that their actions can be expected to achieve their objectives." ALMOST A VACUOUS STATEMENT [za1a8]
# AB: Yes. # SJ/AH: Truism? # NO: It's a tautology. Really, when you look at it, it doesn't say anything. If you assume that the word "intelligent" has the idea that there is a logical outcome to it, then you are just saying "they are intelligent because they are logical." # What does that mean? It does not mean anything. # AH: Maybe it's question-begging; it begs your question. [za1a9]
[39.30]

--- Other Priorities [za113]

# AB: Can we go back to JC's concern, of where should we apply our time? Should the focus be on the ethical or something else? On "parochial" things like protecting jobs. Or what?
# It seems to me that what JC's getting at is, "Which issues should we be focusing on?" Probably the answer is "All of them."
# But it's things like /

--- Five Main Areas of Interest in AI [za114]

# I tend to think of it as five things: [Ed. the five areas are listed in the Themed Overview at the end of these notes.]

[41.20]

# Those five are five areas of interest in information systems in general, and I / [ABC: AI is one kind of information system.] [za1b0]
# From my book, my information systems book. [za1b1]
# I always had in mind as I was writing it AI as an example. So I tend to think of all.

--- Focus on Development and Use of AI [za115]

# AB: JC, are you thinking of any one, two or three of those?
# Chat 01:23:14 JC: development of ai and use
# CA: I think JC is saying the development of AI.
# JC: the development of AI and use.
# And why I say that - where we should use our time - is because, when we review the lectures, [we should ask] where we can use the skills and the unique giftedness of everyone on this call.
# In a month or two's time, when we go through the other lectures, where do we want to apply our hearts, our minds and our worship to help shape the impact of these things? [za1b2]
# I think it really is in the development and use [of AI].
# But I'm putting a way-too-early signal-player, for "Let's go this way, once we get a grounding for where we are and where these Lectures put us, so we can use what we all have, for some sort of beneficial purpose." [za1b3] **
# So, development and use.
[43.05]
# AB: Development and use; Thank you. That means that if we put our emphasis there, that means that things like the nature of AI and whether we can have GPAI, by when, and what intelligence means, is somewhat less important as an issue. What do people think about that?
# NB: I think that the questions in that will naturally come up, as we discuss the development and use part. In an unavoidable sense. But I think that JC is right, that we can choose where to put our emphasis, so that even when those questions come up, we can avoid getting lost down too-deep rabbit holes. [za1b4] **

[ABC: Indeed, all five work together in practice, and each informs the other. However, development and use are the two that most directly require us to think of our individual responsibility for AI. ]

# CA: I think that we need to put our emphasis on objective. Because he talks about objective and that is, for me, the main thing. He was saying about the soccer game: if we come up with a plan and we decide like who the players are, and we lose, then it's basically our fault. So he was saying, don't blame the AI if the input - you have to be careful about the input that you put in, because that will create the objective, the output. So it's just like when you bake a cake. You make sure that the ingredients you put inside are good ingredients so that, when the cake comes out, it's ??? cake and you can eat it.

# And the other thing that I was thinking about and was reading on, was that, if you have a vacuum cleaner the job of the vacuum cleaner is to clean floors. Now, the vacuum cleaner cannot wash dishes, simply because it was not programmed to do so. So, what is the program you are putting into AI? What is the input, what is the data you are putting into AI? [Ed. CA there is talking about training data, not the running data.] That will determine the objective and the output.

# NB: I disagree a little bit. Perhaps with the vocabulary. But the data that you put in will inform how it solves the problem, but it does not define what the problem is. [za1b5]
# If the problem is a dirty floor, then give it some data that creates a robot that cleans floors; a vacuum cleaner. If the problem is dirty dishes, then the vacuum cleaner robot isn't the right thing.
# And so, at some level, the humans ???users are the ones that define the objective, which is the problem to be solved. The data that we choose to feed into the algorithm deeply informs how it goes about solving the problem, but the humans were the ones who decided what's problematic in the first place. [za1b6] **
[46.45]
# CA: So that's what they said: We as humans are the ones who control the input. Which means that we need to check that the data is not biased. [za1b7]
# And the data [should?] represent the whole population. [za1b8]
# So the output that comes out is going to be the objective that we want.
[47.10]

--- The Importance of Heart, Mindset, Ideology [za116]

# SJ: It's not just / I mean, the human defines the algorithm in the first place.
# But then again, who is that human? What type of frame of mind, ideology and mindset [they have] is very key nowadays. [za1b9] **
# That's why I am interested, when it comes to responsible AI, in what makes a Good Future. [za1c0]
# We need to know what are these different categories of human in the loop. What do they like the future to be? Because they design the algorithm, but what is in their heart? That's the question for me.

[ABC: Heart is a very important issue, which I had not thought about. But (as Jesus says) from an evil heart come evil things, and from a good heart come good things. Including choice of objective, choice of training data, choice of parameters during training, etc. (And, if not a machine learning but a knowledge representation kind of AI, then choice of sources from which to obtain expertise, choice of method for elicitation, choice of which pieces of knowledge to include and which to exclude.) ] [za1c1]

[48.00]

# CA: So we were talking about the person who programs and designs the algorithm to work closely with the company, the CEO of the company, so that they can align what is the output that they are looking for. That's basically what they were talking about. [za1c2]

# AB: Can we go to SJ's question, because we have not adequately discussed it, the idea of what is a good future, what makes a good future.
# Someone mentioned Christian Values, I think.

# Chat 01:30:16 JC: God bless you all. got to run for my lunch meeting. thank you for letting me listen and join in on this important conversation
[-- JC left --]

# Chat 01:30:26 NO: [Ed. Re. Stuart Russell] HE HAS NO IDEA HERE....

CLAIRE FOSTER-GILBERT: Claire Foster-Gilbert from Westminster Abbey Institute. Thank you very much indeed for your lecture. I wanted to ask you if you had any wisdom to share with us on the kinds of people we should try and be ourselves as we deal with, work with, direct, live with AI? [za1c3]

STUART RUSSELL: I'm not sure I have any wisdom on any topic, and that's an incredibly interesting question that I've not heard before. I'm going to give a little preview of what I'm going to say in the later lecture. The process that we need to have happen is that there's a flow of information from humans to machines about what those humans want the future to be like, and I think introspection on those preferences that we have for the future would be extremely valuable. So many of our preferences are unstated because we all share them. [za1c4]

# NO: So I just posted in the chat one of the questions of the lecture, that went almost to that question. They asked, "What kind of people should we be, to deal with this AI thing?" To me it is very close to the question of "What is the better world that we want to live in?"

# Again, his answer / his answers are very troubling for a guy that's supposed to be super-intelligent. [laughter]
# So then he says he's going to talk about it some more.
# NO: What CA's talking about and what SJ said: The whole issue is that AI is a kind of instrument. So it's going to depend on / it's like a hammer; it's going to depend on how we use the hammer. [za1c5]
# Chat 01:32:32 NB: "When your tool is a hammer, all of your problems start to look like nails." -- Maslow [za1c6]
# Chat 01:32:58 AH: NB, I see that all the time, in my discipline of statistics

# Now, the issue that these great thinkers are telling us is that some day the hammer will be able to come pick us up and pound us on the head. That, I guess, is this GPAI that people are talking about, that has so much intellectual capability. [za1c7]
# So, if that's the case, then I still think that Christian values and having a society that can reflect Christian values, would be something that we would try to add some value on, because hopefully, if the hammer picks itself up, and it has some Christian values in it, it won't be pounding us all on the head so ???. [za1c8] **
# AB: I like that, the hammer thing! [ABC: Christian values in a hammer! LOL ]

# AB: SJ's thing about heart is ??? to what kind of people we are wanting to be. [za1c9]
[51.20]

# NB: The future that we are creating any time we make any kind of technology, is perhaps something we don't think enough about. [za1d0]
# [ABC: This is the fifth area: impact of AI on society and vice versa.] [za1d1]
# NB: I think that the norms, the aspectual norms, that we get from, y'know, the social aspect, and the aesthetic aspect and the economic aspect, can really guide us. [za1d2] **
# Even if we haven't painted a picture, we have at least chosen our paint brushes when we are painting that picture of what that future could be. [za1d3]
# And so I think that is actually a very profound contribution that Dooyeweerd can make to this, is: It can give us a shared language for arguing about which future is preferable through the use of aspectual norms. [za1d4] **

# NB: Unfortunately, I gotta run. I've got 2 minutes to get to. Thank you all for the rich conversation. I look forward to joining you again.

# AB: Next time, in a month's time but listen to the second one. If you find difficult NB then email me and I'll see what I can do.
[-- NB left --]
# Chat 01:34:44 AH: I too should bow out at this time. bye
[-- AH left --]
[52.33]

# AB: 6 of us left. Just about on time [1 hour] but we can continue. Any other comments? Two people on my screen have not said much so far. Any other issues?
# TB: Not so far.
[53.45]

--- Assumptions and Reductionism in AI Technology [za117]

# HSS: Actually we can also analyse idolatry in / [za1d5] **
# OK, because AI is not neutral; its content has certain assumptions. [za1d6] **
# I think that the Transcendental Critique that Dooyeweerd proposed really can expose the ideological ??? behind the algorithm's assumptions. [za1d7] **
# AB: Any ideas how? Have you any ideas?
# HSS: I sent you my paper actually, so I put my ideas there.
# AB: Would you like to summarise it for the five of us on screen?

# HSS: Basically, certain algorithms / [some unclear zoom-garbled words here]. It is [that] they use specific modal aspects as their Archimedean Point (if you want to use that term) to make sense of everything else. For example,

# So, I saw that different algorithms tend to have different idols. They idolise something, a modal aspect, and try to reduce everything to that kind of modal aspect.
# And this kind of idolatry really sometimes creates a problem in reading the data, because we force the data in a certain way. [za1e2] **
# It causes certain biases, I think. And we can analyse the bias, and show that this is because we want to reduce everything into the spatial aspect. We have this kind of problem.
# I think Dooyeweerd / Reformational Philosophy can have a substantial critique, especially through the Transcendental Critique, of the / of the algorithms of AI. I think.
[57.10]
# AB: That's very helpful actually.

[ABC: I think that's great! It is about the AI technology area of the five areas above. It helps us separate out the different types of AI technology and understand why and how they differ, what norms guide their algorithms, and also their limitations. I argued below about which aspects but that's a quibble to ignore for now. The principle itself is very important. ] [za1e3] **

# NO: Very interesting. This "reduces it to a spatial aspect", what does that mean? # HSS: Certain machine learning algorithms, [for example] if you are familiar with Support Vector Machines (SVM) or even Embeddings. They try to reduce every concept to a geometrical space and find the 'distance' between / If two concepts are similar, the distance is closer. [za1e4]
# NO: Ah, so, it's linear progression, it's sort of the calculus type of example, right? # HSS: Yeah, like calculus. # NO: They try to put it in the quants?? so they can find the variances, the similarities and the variances, right? # HSS: Yeah, something like that. # NO: So that's spatial? Yeah, now I get it.
# NO: I like what you've said.
# AB: I think it's very helpful, actually.
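[Ed. A small sketch of what HSS means by reduction to the spatial aspect: in embedding-style AI, each concept becomes a point in a geometric space, and 'similarity' becomes nothing more than distance between points. The vectors below are made up for illustration.]

```python
import math

embedding = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.7, 0.2],
    "banana": [0.1, 0.2, 0.9],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(distance(embedding["king"], embedding["queen"]))    # small: 'similar' concepts
print(distance(embedding["king"], embedding["banana"]))   # large: 'dissimilar' concepts
```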
[59.00]

--- Aspects of Knowledge Representation [za118]

# AB: One of the things I was particularly interested in, especially in the 1980s, was the different, if you like, logic languages or computer-?? languages. There seemed to be several different aspects they were based on. So [for example],

[ABC: AB called these "languages", but that is only half the story. Each is a way of trying to represent the world in a computer. Each tried to embody in a computer algorithm system the laws of certain aspects, irreducibly to others, which the computer would emulate when it ran any programs, but most also formulated languages with which 'knowledge engineers' could express concepts in that aspect.]

[ABC: This was in the 1980s, when AI was using a knowledge representation approach, not a machine-learning approach. So it might be slightly different from HSS's machine learning algorithms today - but probably not that different, in that each ML technology will require

]

# AB: So, that links with what HSS said, links with my old interest in knowledge representation languages, and so on, and maybe there's some link there [with HSS].

[ABC: The following is the quibble I mentioned above.]

# AB: Neural nets, I saw, was trying to reduce it to the biotic aspect, rather than the / what did HSS say?
# HSS: Physical causality.
# Actually, depending on neuroscience, trying to reduce intelligence or consciousness into electrical or chemical interaction. So I tend to see that as a physical aspect.
# AB: You're right. I think it's because especially when it looks like chemistry and so on.
# AB: No. In fact, I thought Neural Nets were reducing it more to the psychical aspect of brain stimul / brain signals.
# HSS: I think there is something of an overlap here.
# HSS: I remember one more algorithm, Reinforcement Learning. So, usually they train the model by stimulus and response, which is very similar to psychology, stimulus and response. By the condition[ing] the AI is likely to become ???path-of-??? yes. Somehow receptive to the word. Somehow similar to the sensitive aspect in Dooyeweerd. [za1e5]
# AB: I agree. [Ed. sensitive aspect is also called psychical aspect]
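[Ed. A minimal sketch of the stimulus-response character of Reinforcement Learning that HSS describes: behaviour is shaped by reward, much as in psychological conditioning. The states, actions and rewards here are invented, and this is only a toy version of the Q-learning update.]

```python
import random

actions = ["left", "right"]
q = {("start", a): 0.0 for a in actions}   # learned value of each stimulus-response pair
alpha = 0.5                                # learning rate

def reward(action):
    return 1.0 if action == "right" else 0.0   # the environment 'conditions' the agent

for _ in range(100):
    a = random.choice(actions)                                  # try a response
    q[("start", a)] += alpha * (reward(a) - q[("start", a)])    # reinforce according to reward

print(q)   # after training, "right" carries the higher learned value
```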
[1.03.00]

# AB: What you've said, HSS, it opens up another area of discussion for us, along with several of the others, like what sort of world we want to be, and NO's original one. Then JC's question. [za1e6]
# I think yours is another area of discussion, and they have all to come together somehow. [za1e7]

----- ENDING

# So, it's gone the hour.
# AB: I understand, HSS, it's around midnight where you are. # HSS: Already past midnight, 09 past. # AB: So I should let you go.
# AB: Does anyone else want to say anything?
# NO: AB, are you going to transcribe some of this, coming up with the list of those questions? It would be helpful to see at least the list of the questions, so we can then focus in on where we can tackle more of the problem.
# AB: With the Economics one on Wednesday, my notes were more faithful to what we said; this one has been all over the place and I missed a lot, so I will focus on transcribing this one [first], listening to the recording. The other one, I'll probably just correct spellings etc. add a few comments and save it as text. Might be two or three weeks before I do. I'll send round the plain text before I do, so you can at least have a look, and I'll be making available the recordings.

# So, thank you very much every one. I suggest we meet again on the first Friday of April if that's OK, either 1st or 8th April. [omitting answers from folks]

# TB: Closed in prayer.

[1.08.13]

----- THEMED OVERVIEW [za119]

JC suggested that we should focus our discussion on those areas in which we have most expertise, and the ensuing discussion about this may be found in the following:

We expected our main expertise to be in development and use of AI. However, from the distribution of topics among all five areas of discussion below, it looks as though we have interest and/or expertise in all five areas. Use the following as an index to our discussion in each area:

1. On the nature of AI

2. On the technology of AI

3. On the development of AI

4. On the use of AI

5. On AI and society

These five areas are those discussed in Foundations of Information Systems: Research and Practice and may be applied to AI because it is a kind of information system.

So, where should we focus our discussions? Let us listen to the other lectures and then decide.



----- RESPONSES AFTER THE DISCUSSION

--- Responses from MM, 14 March 2022

[Ed. Numbers in square brackets identify comments, linking to the answers below.] Dear Andrew,

I have read the discussion, thank you for sending it to me. I am wondering if a lot of what is currently being discussed is a self-inflicted problem?

Is the term "artificial intelligence" itself problematic? [1]

How do we define / meaningfully distinguish artificial compared to natural or human? How do we define intelligence? [2]

Is there not a hidden threat which folks perceive AI offers to us as humans? I think we need to de-couple the perceived threat and reform the question / problem. [3]

[AB: I asked MM what he had in mind by "perceived threat", and he clarified as follows:]

"To clarify, I was not thinking about some religious people seeing a threat similar to 'Imago Dei' - but rather a general feeling I have got from talking with many folks. They see / are afraid of AI "taking over and humans loosing control".

That could be

This sort of concern is emotional rather than necessarily rational."

How do we learn? In part through the data we collect, called experience. So what is the difference when machines learn by analysing data (apart from the obvious point that machines can process more digital data whereas humans are better at processing analogue data)? [4]

My first job after leaving University was as a marketing statistician with Grattan. When I joined, they had a credit scoring system based upon data provided by customers on an application form, plus data obtained from debt collection agencies. It was developed intuitively by senior management. Subsequently the company purchased a revised scheme from an American company who analysed our data. The difference was that the American company had a computer which evaluated substantially more data than senior management were able to do manually. So why would we call one 'natural' and the other 'artificial'? [5]

Similarly, some AI algorithms are based upon neural networks similar to the way our brains work. So why is one artificial and the other natural? [6]

Given that there is a common method - the only difference being the amount of digital data being processed - why is one termed 'artificial' and the other 'human' or 'natural'? [7]

Is the computer method more 'intelligent' than the senior management method referred to above? How could one be more intelligent than the other when essentially they are using the same methods? [8]

The discussion of the moral implications is surely no different whether the decisions were made by human data analysis or computer analysis? (In fact one might argue that the 'artificial' method was more moral because it was based on more data and therefore more accurate?) [9]

I hope that something in this might help the discussion?

[AB: It already has, in that my answers above are clearer than any I gave before!! So, thank you.]

Bye for now

MM.

-- Some initial responses by AB:

1. AB: Yes; it was coined in the 1950s, at a time when many assumed that what set us apart from animals was 'intelligence'.

2. AB: Definition is always a problem. It is almost the wrong question because it presupposes precision, which many aspects of reality do not offer. It is probably to do with the challenge of types-of-thing. Dooyeweerd offers a different approach, a multi-aspectual structure of individuality, in which each aspect contributes something to 'defining' a type of thing in various ways. e.g. lingual aspect is the main aspect that makes a pen meaningful, but there is also physical aspect of materials, kinematic aspect of flow of ink, formative aspect of manufacture, etc.

3. AB: Not sure what you mean by that. What do you have in mind? Is it what Stuart Russell was talking about, of AI taking over, or is it what some religious people talk about, of threat to 'Imago Dei', or is it practical e.g. removing jobs? If you can clarify what you have in mind, I will put the clarified version up.

4. AB: I think that can be answered again by aspects. I tried to deal with this in my original book "Philosophical Frameworks for Understanding Information Systems" in 2008, in chapter V. The key is to ask at which aspect the various differences are meaningful. Analogue v. digital is meaningful in the psychical aspect of types of activation, but is not meaningful in the analytical aspect of distinction as a 'bit'. Both analog and on-off can be interpreted as a bit value of 0 or 1, regardless of whether they are analog or digital. For more, see

"http://kgsvr.net/private/pfuis/pdf/readme.txt"

and then download for your private use,

"http://kgsvr.net/private/pfuis/pdf/f2.pdf"

which is a chapter on the nature of computers, including AI. See Table 5.2. Note that it has a wee section on analog computers. (For other chapters just delete back to "pdf/" and you will see a list of files.)

The difference between computers and humans lies not primarily in the way it functions, but in how the two can function as responsive subjects to the laws of the aspects, rather than just to stimuli from their environment.

5. AB: I would say that both apps are forms of artificial intelligence, but that the difference lies in the AI technology used. Earlier AI used knowledge representation, requiring the human-intensive process of knowledge elicitation then representation in a computer language, while recent AI uses machine learning, trying to bypass the need for that. What is 'natural' intelligence is the functioning of the humans themselves. So there are three things with 'knowledge': humans, AI built by knowledge representation, AI built by machine learning.

6. AB: NN are artificial. However there is a more nuanced answer, which I have in my chapter f2.pdf above, in section 5-5.7, table 5-6. This gives two equally valid answers to whether the computer is intelligent, which come from Dooyeweerd, with two other answers from the two opposing AI camps.

7. AB: See above, about the three things. There is another difference, being that between KR AI and ML AI. ML AI needs a huge amount of training data as a substitute for human knowledge elicitation.

8. AB: see above, re. table 5-6.

9. AB: Agreed, but the responsibility lies in different places, and always goes back to the humans who develop and who use the AI app. Even in ML, because it is human developers who decide which data variables, and which data sources, to employ in training the ML AI.


REFERENCES [za120]


Basden A. 1983. On the application of Expert Systems. Int. J. Man-Machine Studies 19:461-477. (Significance: Introduces a non-technical way of looking at the application of a technology like expert systems, focusing on its role or meaningfulness, rather than its functionality or technical prowess.)

Basden A, Brown AJ. 1996. Istar - a tool for creative design of knowledge bases. Expert Systems 13(4):259-276. (Significance: Expert system software that is much more flexible than most, with a Proximal User Interface.)

Basden A, Brown AJ, Tetlow SDA, Hibberd PR. 1996. Design of a user interface for a knowledge refinement tool. Int. J. Human Computer Studies 45:157-183. (Significance: Introduces the notion of Proximal User Interface, which is based on Polanyi's ideas.)

Basden A, Hibberd PR. 1996. User interface issues raised by knowledge refinement. Int. J. Human Computer Studies 45:135-155. (Significance: Shows why Proximal User Interface is needed when ICT is used to stimulate ideas and thinking.)

Basden A. 2017/2018. Foundations of Information Systems: Research and Practice. Routledge. ISBN: 978-0-367-870-300 (pbk), 978-1-138-79701-7 (hbk), 978-1-138-75748-3 (ebk). See description of book.

Funt BV. 1980. Problem-solving with diagrammatic representations. Artificial Intelligence 13(3):201-230.


Created 15 March 2022; last updated 23 March 2022 (MM comments).