Discussion held on Zoom 6 May 2022.
This discussion is the third discussion on Artificial Intelligence by the Reith Lectures Discussion Group. It is in two main parts.
Part I comprises an excellent explanation of AI and machine learning by HS, for the benefit of those who come from different disciplines, and it served to bring us all into the same understanding, followed by discussion that brought out other points about AI, including its benefits, limitations, etc. This Part is worth reading just for its own sake.
Part II begins discussion of Stuart Russell's third 2021 Reith Lecture on AI in the Economy. This discussion will continue next time.
PRESENT: NO, NB, CA, HS, SJ, AB (host), and TB via email.
[AB hosted this discussion, recorded it and then transcribed it (19 May 2022). AB adopted two roles in doing this: (a) that of editor, e.g. giving links to other material, adding headings, adding "***" to important points, adding notes in square brackets, starting with "Ed: ...", that explain things or link to material, and attaching unique labels to statements for future reference (actually only marked as "[]" but not yet assigned); (b) that of contributor, inserting responses in square brackets to what had just been said, starting with "AB: ..." The latter are added in order to further the discussion, especially in a way that could contribute to developing a Christian and Reformational perspective on AI. Some are responses he would have made at the time had he been able to. In others, he might even criticise himself for what was said on the day! ]
# NB opened in prayer
[recording started]
# AB: How many people have listened to the third Reith Lecture on AI and Economics? [Ed. Knowing how many listened gives a useful context for the discussion.] NO, CA, HS; NB had read a quarter of the transcript. # AB: I tend to listen to the audio while I'm doing the washing up, etc. - though that makes it difficult to write notes!
# AB: I sent through a number of questions, that we might discuss, but would anyone like to 'kick off' with any comments that they have on SR's approach to AI and Economics.
# NB: Were we going to start with a brief explanation of AI from HS? # AB: Yes we were. Great.
# AB: HS kindly offered to give a brief explanation of AI and machine learning (ML) and so on. Over to HS; the floor is yours.
# HS: It is better if I use slides to organise my thinking.
# I think most of us already know AI, so I will just clarify some different terms, so that when we go to discussion, when SR talks about general AI we know what he means.
# First, we define what AI is. Actually it is very difficult, because this research area is moving and it keeps changing. And because it's moving, it's difficult to define.
# So, basically, according to ???, there are four definitions.
# I think current ??? research tends toward the definition [of] whether the system can act like humans.
# Because humans do not always think and act rationally.
[03.50]
# And also, OK, maybe I move to next slide.
# There are several types of AI, which is in / actually in the [Reith] lecture.
# When they talk about AI, it's [usually] more artificial general intelligence. And actually current progress is a different kind of AI, which is widely used now, but still not general intelligence.
# So, maybe I describe them one by one.
# [AI type 1.] The old approach, the early approach of AI, or the classical AI, is called Reactive AI, which cannot learn anything. It just automatically responds to a limited set of combinations of inputs. So it's like, if you remember, the AI that played Garry Kasparov. All the rules are predefined. The AI does not learn anything. It just automatically responds. []
# [AI type 2.] The current AI that they use is Data-driven, or maybe we also call it Artificial Narrow Intelligence. And this kind of AI is capable of learning from the observed data, to make decisions. So if the data changes or we input new data, the decision of the AI will be changed accordingly. And this is currently what we use in much research and many industries. We call it "contemporary AI". []
[05.30]
# NB: Can I ask a quick question? Do modern approaches to AI combine the first two, where they start with a set of programmed rules and the learning goes beyond that, or does the ML approach really not get programmed with any rules at all, and only starts with the data and learns what it needs from the data? # HS: I think the current approach tends towards fewer rules. Of course, there is a combination of rules and data, but they try to have fewer rules and more data - something like that. # NB: That was the picture I had as well, that the ML approach really just starts with an empty slate and starts filling it in as you dump in gigabytes of data. # HS: Correct. []
# Later I will explain why.
# [AI type 3.] And this is a Theory of Mind type of AI, which is described by Stuart in the Lecture, which is called artificial general intelligence. And this kind of AI is still in progress. We don't have this kind of AI yet. And this kind of AI is expected to be capable of better understanding the entities [that] it is interacting with, whether it is interacting with things or interacting with humans or interacting with other things. So it shows understanding what. / It should be capable of discerning their needs, emotions, beliefs or ??? processes. So if AI is capable of doing this, we have reached artificial general intelligence. []
[07.15]
# [AI type 4.] And this is the final type of AI, which is Self-aware, which we call Artificial Super Intelligence. And this kind of AI is still hypothetical. We do not know whether we can achieve this kind of state, whether the AI has emotions, beliefs and potentially desires of its own. So this kind of AI is / If this exists in the future, this kind of [zoom mumble] should be able to create another AI, ??? on its own desire, without any human guidance or something like that. []
[08.00]
# OK, maybe we discuss first of all ??? because now we have only Reactive and Data-Driven [AI]. Maybe I describe them as Classical AI and Contemporary AI. []
# So this is some descriptions that I created, for us to better understand.
# In Classical AI, the approach is knowledge engineering, which is: all the knowledge is predefined by human experts. And in Contemporary AI, usually we use machine learning (ML): the machine can learn from the data, because nowadays we have billions, trillions of gigabytes of data. []
# So we can ???turn all the data, and the machine can learn from the pattern of the data.
# And in terms of the knowledge: all the knowledge in Classical AI is hard-coded through preconceived rules by human experts, and in Contemporary AI (or we call it Machine Learning), everything is learned through observed data by the ML model.
# Of course, we still need some modelling to understand the data.
# And the attitude. Usually, Classical AI is closed to what reality they have. They have their own world ??? (we can say like that). And Contemporary AI has, like, an open attitude towards new insights from reality. "Reality" here means observed data. So, the knowledge of the AI can change according to the new data given.
# And the style is usually: In Classical AI, it is more deterministic and explainable, and in Contemporary AI, it is more fuzzy and tacit. We can call it tacit knowledge because sometimes we don't understand why the AI behaves like this. It's like tacitly understood.
[09.50]
# OK. In Contemporary AI, usually we call it ML plus Big Data (BD). We have reality, which is the world that we live in. And from this reality we abstract the / we call it "Big Data". We observe as much data as possible and we abstract it into BD. Then from this BD the model will observe this data. And through certain ML paradigms this model will create a certain representation, so that this AI can react back to the reality.
# So, it's like / This AI can adjust accordingly to the reality that it observed.
# And actually, how does the AI learn? It's through ML, which is quite interesting because, in traditional programming, usually we give the data and we give a program [describing] how the data behave, then we get the output. But in ML, we input the data and we also input the output; then we get the program. So the program is learned through the data, which is the input and the output data.
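[Ed. A minimal Python sketch of this inversion (illustrative only; the data and the linear rule are invented for the example): in traditional programming a human writes the rule and the machine produces outputs; in ML the machine is given inputs and outputs and derives the "program" - here just a slope and an intercept found by least squares.]

```python
# Traditional programming: data + hand-written program -> output
def program(x):          # the rule is written by a human expert
    return 2 * x + 1

outputs = [program(x) for x in [1, 2, 3]]    # -> [3, 5, 7]

# Machine learning: data + outputs -> program
# A tiny least-squares fit "learns" the slope and intercept
# from (input, output) pairs instead of being told the rule.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]                        # outputs observed from reality
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def learned_program(x):  # the 'program' the machine produced
    return slope * x + intercept

print(slope, intercept)  # 2.0 1.0 - recovered from the data alone
```

If the data changed, refitting would change the learned program accordingly, which is exactly the "data-driven" character HS describes.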
[AB: Dooyeweerd. From the mass of big data, the trainer of the ML system will extract data records that are meaningful - that is, meaningful in certain aspects of reality that interest them. The extraction process is abstractive, which Dooyeweerd investigated in his Transcendental Critique. This means that the data that is extracted is not a full representation of reality. That implies a fundamental limit on any AI. ]
# And actually, in ML there are two types of learning.
# The first is supervised. I will explain briefly here. Suppose we have three images, and we label these images as "cat". So, the machine will understand that this data is cat. So, when we give the machine new data, for example, cats and dogs, the machine will somehow distinguish between cats and dogs, because we already labelled the data and the machine will learn how to classify the different characteristics of the data.
# Some problems that are more suited to this supervised ML [include] classification, which is sorting items into categories, or regression, like identifying real values of specific objects. []
# And there is another type of ML, which is unsupervised. Actually, [in] unsupervised [ML], we don't give any labels at all. We don't supervise the machine. We give all the data and the machine will learn how to cluster the data, cluster different data based on the similarity of the data. It's used for anomaly detection and clustering usually. [Examples: "Has a hacker entered our network?" "Are there patterns in the data to indicate certain patients will respond better to this treatment than others?"] []
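[Ed. A toy Python sketch of the two types (the 2-D points and the cat/dog labels are invented, standing in for real image features): supervised learning uses the given labels to classify a new point; unsupervised learning groups the same points with no labels at all, here by a few k-means-style passes.]

```python
import math

points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1),    # hypothetical 2-D features
          (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

# --- Supervised: labels are given, so build one centroid per class ---
classes = {}
for p, lab in zip(points, labels):
    classes.setdefault(lab, []).append(p)
centroids = {lab: centroid(pts) for lab, pts in classes.items()}

def classify(p):   # nearest-centroid classifier
    return min(centroids, key=lambda lab: math.dist(p, centroids[lab]))

print(classify((1.1, 0.9)))   # cat

# --- Unsupervised: no labels; group by similarity (k-means, k=2) ---
seeds = [points[0], points[3]]
for _ in range(10):           # iterate assignment and re-centring
    groups = [[], []]
    for p in points:
        groups[min((0, 1), key=lambda i: math.dist(p, seeds[i]))].append(p)
    seeds = [centroid(g) for g in groups]

print([len(g) for g in groups])   # [3, 3] - two clusters found, unnamed
```

Note that the unsupervised run recovers the same two groupings but cannot say which is "cat" and which is "dog" - the labels are human-supplied distinctions, which connects to AB's aspectual question below.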
[AB: Dooyeweerd: What distinguishes supervised from unsupervised ML aspectually? Is it that unsupervised is purely on distinction-making, while supervised is also on the target (application) aspect (e.g. biotic for cats, dogs)? I don't think so, because the laws of the target aspect do not come into force, only the distinction between cats, dogs, etc. This distinction is purely analytical, because the labels could be anything that distinguishes the classes. So both analytical. Moreover, in both types, the data types input are also analytical distinctions. The difference seems to lie in whether the human provides distinctions or not re. groupings. Can that be understood aspectually?
] []
# I think that this kind of information can help us understand maybe what Stuart Russell is explaining. []
[13.05]
# HS: I think I will stop here. Maybe if you have a question /
# SJ: I have a question, HS.
# It was a very good presentation, and thank you. It clarified and put things very nicely and simply for us, to see how things sit with regard to each other. []
# Just one question (I have many questions, but will ask this one): Where do deep learning and reinforcement learning ??? fit in? []
[Ed. Deep learning is ML by neural networks with three or more layers. Reinforcement learning is a way to sharpen up ML by using 'rewards' and may be seen as half way between supervised and unsupervised ML.]
# HS: When we talk about ML, we need to distinguish between different kinds of ML. Because actually ML has hidden presuppositions, and they come from different scientific traditions. []
# Actually I have an appendix that contains a few slides, anticipating [laughter] that you would ask this question.
# OK, actually, different kinds of ML stem from different scientific traditions. For example, different scientific traditions interpret intelligence differently. [AB: I add aspects to each in square brackets; see below.]
# HS: So, deep learning - when we say neural network or deep learning, it's more like a physical reduction of reality. So we, like, reduce reality into causal relations between ??? interacting. It's like a physical reduction of reality. []
# And when we talk about reinforcement learning, it's like a scientific reduction of reality which stems from psychology, because the intelligence is just stimulus-and-response conditioning. So the learning process is reward and punishment. So we have an AI agent that reacts accordingly to the environment. Something like that. []
# So when we say ML, it depends on what scientific tradition you are from. [] ***
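[Ed. A toy Python sketch of the reward-and-punishment idea HS describes (a two-armed bandit; the actions and payoffs are invented for illustration): the agent's only "intelligence" is an estimate of each action's value, nudged up or down by the reward each response brings from the environment.]

```python
import random

random.seed(0)
payoffs = {"A": 1.0, "B": 0.2}   # hidden rewards of two actions (invented)
value = {"A": 0.0, "B": 0.0}     # the agent's learned value estimates
alpha = 0.1                      # learning rate

for step in range(200):
    # explore occasionally; otherwise exploit the best-looking action
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    reward = payoffs[action]                 # environment's response
    value[action] += alpha * (reward - value[action])   # reinforce

print(max(value, key=value.get))   # the agent has learned to prefer A
```

Nothing here resembles rules or explanations: the "knowledge" is just two numbers shaped by stimulus-response conditioning, which is the psychological reduction HS is pointing at.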
[15.40]
# NB: HS, would you share the slides, the slide file, with us? The slide where I am right now is a way of looking at it that I have never seen before, and it's extremely explanatory. That's very very helpful. # HS: Yes, OK, later I can send it to you. # NO: Yes, I would love to get these slides and think about some of this.
[AB: Dooyeweerd: I wonder if it is explanatory because each focuses on different aspects, which I have added above. Each science focuses on a different aspect. Each of the above kinds of ML has an algorithms that performs the learning according to the laws of different aspects. Basden 2008 suggests that a program or algorithm is a "virtual law side", i.e. set of laws (expressed of course in computer code) by which the algorithm operates. That is why I add aspects above. ]
# HS: OK. Maybe I reply to AB and AB send to you? # SJ: [zoom mumble] that would be helpful, yeah.
# ACTION HS, AB: HS to send slides file to AB; AB to send it round.
[16.20]
# NB: The takeaway that I have from all this is that modern ML is not programmed in the sense that we usually think of the word. There's no human who knows why the computer is doing what it does. []
# SJ: And I think - yknow, the slide that you had about reality, big data and ML, I think the reality is the problematic thing there. Depending on the context and project. [] ***
# From experience of a couple of knowledge-exchange projects, I can see how those in charge of the project, the funders of the project, and then the industry supervisor and the academic team, make a lot of compromises. And then reality is /
# In Dooyeweerd's sense, [reality] is just reduced to a couple of aspects, in order for the project to be fulfilled. And then at the end you can see there is too much focus on data, which is idolisation of data, and "Look what AI can do!" []
# No-one is interested in the sustainability, and the consequences that come out of it, because the Manager just wants to join the bandwagon of "I should digitalize or die." And this is the kind of political environment that consultants are pushing every company towards. []
# And so, the context [of AI?] is very polluted, to be honest. It does not really help us to develop a good understanding of reality that can help us in these types of projects.
[AB: This links to our Rethinking of Economics, which recognises the multi-aspectual reality of human functioning, including dysfunction of this kind, especially idolatry of the firm and its financial survival and growth.
]
[18.30]
# NB: That's precisely the problem hidden behind the word "abstracts", in that diagram that you had (Screen 4), in that we're taking reality abstracting it into Big Data. And that's not a neutral process. It's always shaped by our values. []
# HS: Correct, yeah. That's why I think that intelligence in the contemporary AI is like reduction of reduction. [] ***
# Because ML is not neutral and objective. It's driven by hidden presuppositions behind the scientific tradition that developed that type of ML.
[AB: I think that would make a good academic paper in the AI field. ]
[AB: If we wished, we could explain it with Dooyeweerd as follows. We have multi-aspectual reality reduced to a single aspect in two ways: (a) The multi-aspectual reality the ML is learning about is reduced in collecting the data with which to train the ML, in that we select limited aspects as meaningful about which to collect data. (b) The multi-aspectual reality of the learning process is reduced to a single kind of learning. ] []
# NB: And that does not make it useless, it just makes it dangerous. [] ***
# HS: Yeah.
[19.20]
# NO: I have a question: How would this, as this develops into the super machine, or (and this was very helpful; I love this information), as this develops in this Theory of Mind [AI] category / []
# How will this type of AI ever deal with innovation? []
# In other words, those paradigm shifts that happen, where (I hate to use this example but), for example, when Einstein came out with the theories of relativity that were so different from Newtonian physics. Or, going back to when Louis Pasteur found out that there are germs. If that's not in any of the experience or data, how would the machine ever come to deal with an innovation? [] ***
# HS: Interestingly, because we can say that, in ML, there is some kind of taciticity in the learning process, some processes can discover something new that humans did not learn before. []
# For example, if you are familiar with protein folding research: this kind of research started a long time ago, until ML came in and ??? aspects.
# I think it can be helpful in a certain sense, of course. []
# But the innovativeness of the machine is different from the human's. I think humans are still much, much better in terms of innovativeness and creativity. []
[AB: Comment: Two kinds of innovativeness? To discover new facts like new protein foldings, is a variation that emerges from taking laws of existing known aspects into possibilities not yet realized. That could be product innovation. But the innovativeness that humans exhibit often involves incorporating the possibilities that are meaningful in a different aspect, and usually one that had not much been considered. ] []
[21.30]
# HS: The taciticity of the ML may be the ingredient for creative ??? or something like that. []
# AB: Taciticity or Plasticity? Tacit knowledge? # HS: Tacit knowledge is correct. # NB: Things that you know but you don't know how you know that you know them. []
# AB: Maybe I'll make a comment on this new paradigm thing.
# The ML discovering new things sounds to me so far to have been, like a new protein or something, within the current paradigm. Just discovering new facts, or even new theories or something - new rules. Something like that. But (correct me if I'm wrong) a new paradigm means shifting to a new type of meaningfulness, almost a new aspect. []
# SJ and I have published a paper where there's a section on paradigms as linked to aspects, and so on. Whether a ML algorithm can somehow shift to a new aspect that others are not thinking about / []
# When you put it like that it might be possible - but far away down the line. []
# HS: Because the current paradigm cannot do that, because it's a reduction of reality. So, if you see some ML that performs well, it is very, very specific - for example the spatial data, and the algorithm is very focused on ??? aspect. Yeah, it can perform well and even discover things humans did not discover until now, like Go rules or ??? chess rules. []
# However, this kind of method is not general. We call it narrow, because it is very narrow in specific aspects. It cannot generalize to many different aspects.
# That's why I think Reformational Philosophy may contribute if you want to create a really genuinely general intelligence that is multi-aspectual and non-reductionist. [] ***
[24.15]
# AB: Could you explain that?
# HS: OK. Current ML is very reductionist because we reduce reality into specific aspects when we observe the data, and we reduce it again by using a specific paradigm of ML [Ed. which again is meaningful in one aspect]. []
# And I don't think this kind of thinking / because this kind of thinking is very constrained to specific aspects, and cannot generalise well into a very different aspect (for example, if the AI has a speciality in spatial facial recognition, it cannot generalise into, like, social interaction - something like that). That's why I think Reformational Philosophy can not only engage critically with current paradigms but also offer to the AI researcher [the insight] that reality is not a single aspect, but is multi-aspectual. And this is the reason why current AI cannot generalise: because the AI doesn't realise the existence of other aspects. [] ***
[Ed. HS's "reduction of reduction" seems an important contribution. Two reductions to one or a few aspects:
- reduction in the ML engine, to learn according to the laws of a single aspect,
- reduction in abstracting data to just one or a few aspects.
This sounds like a good academic paper. It might also be framed as opening up a possible route to general-purpose AI. ]
[25.44]
# AB: This is blue skies thinking, but if, in your gigabytes of data, you decided to /
# I see data as a record, like N inputs and one output, yknow [like] "cat" or "dog", and the N inputs might be, like: how many legs, how many whiskers, what type of ears, and how it moves, and so on. And those are kinda biologically, spatially and kinematically meaningful inputs, and the output is whether it's cat or dog, which is biologically meaningful, i.e. species. But if we expand the types of inputs to include data about [or meaningful in] the social aspect, data about the juridical aspect, data about the lingual aspect, data about the yknow etcetera aspect and, in principle, data about all fifteen aspects, then - maybe in chunks of data - presumably ML AI could in principle cope with multiple aspects. Would that be correct? []
# HS: OK, ah / # AB: Do you understand what I mean? # HS: There is a limitation, because when you say that we describe it in database columns, with different aspects, it means that we reduce those aspects into a certain number, right? It's like numerical reduction. And we can also do spatial reduction, when we reduce the ??? into, more like, an image of ??? into different spatial locations; it's like a spatial reduction of the data [Ed. I think HS means that image-processing algorithms often work on what the colour or brightness is at each spatial location, and the locations spatially around it]. We can also have temporal reduction - kinematic reduction I mean - and ??? reduction. I don't think we can achieve biological reduction unless we can record everything into, like, our genetic ???, genetic memory, or something like that. I think [that, in] current methods of recording, the data is reduced into certain aspects. And I think this is mostly numerical right now.
[AB: Here HS is talking about the abstraction process, abstracting 'pieces of meaningfulness' from reality into 'codes'. These he calls numerical, but in fact many of them are more like analytical concepts and classifications (even though numerals are ostensibly used, the important thing is not their quantitative amount but their discrete difference). ]
# HS: And I can say that the characteristics of other aspects are not clearly manifested in good data structure ??? unless [they are the] numerical, spatial and lower?? aspects that we have right now. [Ed. If "lower", this probably refers to what Dooyeweerd called the "earlier" aspects, i.e. quantitative to kinematic.]
[AB: That makes sense because the laws of those aspects are simpler and also fairly deterministic. ]
[28.40]
# AB: I wonder if it's right that it reduces it to numbers, i.e. quantities, or whether it reduces it to classes, categories. # HS: Category is analytical reduction, right? # AB: Yeah.
# AB: But anyway, I get the point, that there is the issue of reducing meaning in any and every aspect to numbers or categories. # HS: Yeah.
# NB: In the end, if computers are dealing with it, you're dealing with zeroes and ones. One of the things I am amazed by is how well we are able to reduce wide swathes of meaningfulness into zeroes and ones. [laughter] I maybe shouldn't be shocked, because we have been doing it with text for millennia. Yknow, when you write something, you are taking this whole panoply of human meaningfulness and encoding it in a pretty small little alphabet of possibilities. And yet we've gotten very very good at doing that!
[29.55]
# AB: I don't think it's zeroes and ones, actually. I think it's bit patterns.
# The way I see it, it's bit patterns. To call it "zeroes and ones" is actually a misnomer, actually a metaphor. It's kinda "ons and offs", whatever. Two states; you can call them "X and Y", "On and Off". [AB: two distinct states in digital electronics seen not as voltages - meaningful in electronics - but as states; the "seeing as" implies meaningfulness in a different aspect]. []
# NB: As many combinations of those two states as you care to have. As long as you have enough combinations of those states, we can embed a lot of meaning. # AB: Yeah.
# AB: If you are coding - say you take eight bits, forming a byte [Ed. each bit in its own state of X or Y, on or off, 0 or 1, so we might have, for example, "01000101"] - then you can have the bit pattern representing: [] ***
# And that's kind of a transduction [or interpretation] between a bit pattern and a meaningful symbol. And I see that as transforming [interpreting] from the psychical aspect of signals to the analytical aspect of actual data, yknow distinct data. []
# AB: I don't know whether that's helpful, whether it's understood or not.
[AB: On Coding Symbols with Bit Patterns. Maybe a better way of putting it is: seeing the bit pattern as a set of signals is seeing it from the perspective of the psychical aspect; seeing it as a number, letter, colour, etc. is seeing it from the perspective of the analytical aspect, as a distinct symbol - which is then a component part of a full symbol such as a word. ] []
[AB: Notice how numbers, letters, colours, truth values are concepts meaningful in the quantitative, lingual, psychical and analytical aspects respectively. In principle, and in similar ways, a bit pattern could represent something meaningful in any aspect, e.g. spatial position as Cartesian coordinates, e.g. spatial pattern such as an image, and so on.
] []
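[Ed. A small Python illustration of this point, using AB's example byte "01000101": the same eight two-state signals can be read as a number, a letter (via the ASCII coding system), or eight truth values - the interpretation, not the bits, carries the meaning.]

```python
bits = "01000101"                  # eight two-state signals (AB's example)
value = int(bits, 2)               # read as a quantitative amount -> 69
letter = chr(value)                # read via the ASCII coding system -> 'E'
flags = [b == "1" for b in bits]   # read as eight distinct truth values

print(value, letter, flags[1])     # 69 E True
```

The bit pattern never changes; only the agreed coding system by which we "see it as" something changes - which is the aspectual shift AB describes.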
[Ed. This leads nicely into what was discussed next: on the applicability of coding reality into data for ML. ]
# [several people responded together.] # NB: NO had a question he was trying to ask about three minutes ago.
[31.45]
# NO: This is all great, so no worries.
# My question is: As we build this more intelligent AI capability, there would seem to be also the possibility that the machine could fool itself or could fool us. []
# Let me give you an example. Here in the USA, we have what's called the DSM: the Diagnostic and Statistical Manual. This is what defines psychiatric conditions; it's used by all the clinicians. Now, in 2000, the manual had 365 clinically-identified psychiatric conditions. Today the latest release has 158. They eliminated some, consolidated others, etc. This is a progression. []
# So, my issue is: if the machines are built on a certain dataset and the dataset changes, if it's not continually refreshed all the time, then the machine, the AI, could also give us an incorrect conclusion, right? [Ed. It's not the training data set as such that changes, so much as the coding system by which we encode reality to produce that dataset, using agreed codes.] So, to me, some of this becomes very subjective to "What is the refresh rate of what data is going to be in that AI calculation?" Is that a fair question? # AB: Excellent. [] ***
[33.55]
[AB: Dooyeweerd re Data Abstraction: To do with abstraction of knowledge and coding it. We humans abstract what is meaningful to us in reality and, in order to input it to the machine, we encode it as symbols (which are the analytical-aspect interpretation of bit patterns, as above), and to do that we employ a coding system that is agreed between those who do the coding and those who set up the AI learning. Often, both would agree on a publicly agreed coding system. What NO seems to be talking about is changes in the publicly agreed coding systems. Those changes come about because of movement in scientific theoretical beliefs about the nature of (in this case, psychical) reality. This could occur because of either (a) progress according to genuine scientific opening of the psychical aspect and/or (b) a change in the community's prevailing beliefs (paradigms, worldviews) about the nature of psychical reality. Those two factors need to be taken into account in all coding; hence the importance of understanding coding in ML. (Indeed any programming must take account of the currency of codes in data abstraction.)
] []
# HS: Currently, that is the state of AI. I think that most of the AI used by industry, they refresh it every month, something like that - from my experience in industry previously. []
# AB: You mean / when you say "refresh", do you mean that they go through all the codes and, if the codes have changed, they change all the data? # HS: No, no. But if that changes, the learning process also changes. The code is the same, but the new ML, responding to the new data, updates the different parameters, so they have new rules. []
[34.40]
# NB: The analogy might be: if you had a psychiatrist who is using Version 4 of the DSM and now Version 5 comes out, they will start reading Version 5 and they start diagnosing things differently, because they are using a new diagnostic manual. As the data gets updated, the ML that is built on that data also gets updated. []
[Ed. Clarification needed: Does NB mean that the ML automatically adjusts itself to the new coding, as the new data comes in that is based on the new coding? If so, how long would such adjustment take? How long would it take to 'clear out' decisions based on the old coding where that has been replaced? And what happens with psychiatric conditions for which there is now no code? The higher-level question is: during what period can we expect errors in the AI system's decisions? We need to clearly distinguish between "dataset" as the training data, the dataset as input when running the trained system, and "dataset" as the set of input and output variables used to train it, for which a certain coding system is expected. I think we are talking about the latter in terms of changes in codes, but the first and maybe the second in terms of data.
] []
# That does not mean there's no problem with out-of-dateness, but it's not a new problem. []
# NO: But my point is, we have this issue now. There's a book that I read on the algorithm (cannot remember the name), The Honest Algorithm or whatever. But the problem is that some of these things are so complex, particularly in social media, that the engineers don't even necessarily know the decisions that are being made. So, my point is: the degree of reliance we put on these things could be problematic, because at some point we cannot tell what was updated, or if a particular dataset was not updated that should have been updated, and the machine then will fool itself. It will give us an answer that is not necessarily correct. [] ***
# AB: Yeup. I used to work in medical records, medical coding. I don't know what's happened in the DSM, but going from 365 down to 158 / presumably some of the 365 are no longer there; for some it will have been said "That combines with that to produce /" Yknow, X and Y combine to produce Z, and it's just a redefinition. That's fairly simple from a machine point of view: you just make a rule. But sometimes X might be split: part of it goes into Y and part of it goes into Z, and / []
# It's always the problem of coding. Again, I think it's a problem of meaningfulness, which Dooyeweerd is very good at. []
# So, am I right in thinking that that's part of the problem: it's not just a 1-to-1 or a many-to-1 transposition from 365 to 158, but there might be a 1-to-many or even a many-to-many, where one thing becomes split into many things. Am I right? # SJ: I think so. []
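[Ed. A small Python sketch of the remapping problem AB describes (all codes are invented for illustration, not real DSM codes): a many-to-1 merge is a simple lookup rule, but a 1-to-many split cannot be resolved from the old code alone - extra information is needed, which is why such recoding is not mechanical.]

```python
# Many-to-1 merge: two old codes collapse into one new code.
MERGE = {"A365": "N12", "A201": "N12"}     # hypothetical mapping

def remap_merged(old_code):
    # A simple rule suffices: every old record maps unambiguously.
    return MERGE.get(old_code, old_code)

# 1-to-many split: old code "B017" was divided into "N31" and "N32".
SPLIT = {"B017": ["N31", "N32"]}           # hypothetical mapping

def remap_split(old_code):
    # The old record alone cannot say which new code applies;
    # all candidates are returned - resolution needs re-assessment.
    return SPLIT.get(old_code, [old_code])

print(remap_merged("A201"))   # N12
print(remap_split("B017"))    # ['N31', 'N32'] - ambiguous
```

So data coded under the old system can be mechanically carried forward only for the merge cases; the split cases leave a residue of ambiguity in the training data, which bears directly on NO's point about the machine fooling itself.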
# HS: Perhaps, to respond to NO's question: what happens to the machine can also be what happens to humans. So, it's like a new situation, because you know humans can change their minds. But then you realise that the thinking process of the previous generation and this generation is very different. So, it's like the ML learning a new thing, or something like that. []
[38.05]
# SJ: HS, you know, on this slide Type of AI: Reactive and then Data driven, would you say that Expert Systems (ES) fall under Reactive, and current ML under Data Driven? # HS: Correct. []
# SJ: My question is: Why for Expert System there is always / I mean, I understand for ES there is a focus on human knowledge, but for ML there is a focus on data, and human knowledge is less emphasised, even in the literature. []
# HS: Can you repeat the question, please?
# SJ: During ES (AB has actually done this, in the 1970s-1980s) the domain expert and developers sit together and they develop the system. # HS: Yes, correct. []
# SJ: With ML, there is so much excitement about what ML does that, from researchers' and developers' perspectives, it seems there is no need to go to the domain expert as much as they used to during the ES development period. Is my question clear? # HS: OK, you are asking why there is a / # SJ: Why there is such an atmosphere, a trend at the moment, that because ML has a certain capability, let's just get data and forget [the] domain expert, or there is less emphasis on the domain expert. []
[40.04]
# And even if you look at the data science methodologies, like CRISP-DM, and the other one, domain expertise / they don't even mention the domain expert. There's one stage in the methodology called "Business Understanding". []
[AB: Two kinds of knowledge or expertise here. They may usefully be understood in terms of the framework for understanding information systems in Basden 2018. One ("business understanding") is knowledge about the application context and requirements which falls into the area of Use of IS. The other is domain expertise, which falls into the area of "Developing the IS for application". The kinds of knowledge elicited by each is different. Knowledge about the application context is specific, while domain expertise is general, because it is about the laws by which the domain operates - although in any given application there will be an overlap. ] []
# HS: Actually, the reason is because the ML intelligence is like aggregate intelligence from the entire [set of] individuals. An expert is just one individual, right, and there is much that one individual doesn't understand. But if you gather the knowledge not only from one expert individual but from billions of users, your knowledge is much more than one individual's, and that is why ML can perform much better than one single expert, something like that. [] ***
[AB: AB disagrees with that below, but HS has a very good point that should not be missed. ]
[41.05]
# AB: I would disagree with that. I'd like to come back on that, if I may.
# AB: As SJ said, I was doing this expert systems [work] in the 1980s. []
# Now, I did take a different line from most of my colleagues. Most of my colleagues wanted to just get some rules and put them in [to create an Expert System (ES)] and that was it. What I did was a kind of inference net. I asked the experts for inferences between different probabilities - using Bayesian probability and so on. []
# And the first expert I worked with was in Stress Corrosion Cracking (SCC) in metallurgy in the chemical industry.
# And what he did / he was actually a world expert, so it's a bit unfair to call him just one individual. Yes he was one individual; I will come back to that. But his expertise was developed over 30-40 years, in interacting with other individuals with lots of books and papers read. So he was taking in what other engineers were saying. And he was also learning from experience. [AB: Hence the expertise from a good expert is a kind of aggregate from many sources.] []
# SCC is very unpredictable. It's a combination of stress in a metal and corrosion. Corrosion starts a crack and stress opens it up, and you can get explosions in chemical works because of SCC, and things like that. And it's very unpredictable. []
# And he wanted to develop an Expert System that would advise on the likelihood or probability of SCC happening, and the reasons why it happened. And he had this long world-class expertise. []
# AB: And the challenge for me, going to him, was to get him to express it [his expertise]. And, because it was physical expertise, it was not too difficult to express in terms of, like / the very simple final thing is:
Probability of crack initiation, and the probability of crack propagation gives the probability of stress-corrosion cracking. []
# And it was all things like that. And there were things like what chemicals were used, stresses in metals, whether there were welds, and all sorts of things, occlusion pockets where chemicals got trapped, and all sorts of things that he was talking about. []
# And his ES functioned fairly well against reality.
# Now the individualness that I discovered, was that there was another expert, and they disagreed over a certain set of rules. And I discovered that one of them worked at temperatures below 300 degrees Celsius, and the other worked with temperatures above 300 degrees Celsius. And at that boundary, 300 degrees, different physical conditions apply, and each was assuming their own physical context [with which they were familiar]. So that's the individualness of it. []
[45.00]
# Now, the thing is, if we did it with ML, we could get a lot of / in principle, we could get billions of records of machines of chemical plants and whether SCC happened. And all the data. And the machine would work it out for itself. In principle. []
# The advantage of ML is that if there was something not yet discovered, some physical-chemical thing not yet discovered, that the expert had not properly discovered (they were always researching things in the lab to discover new things) - if there was some knowledge not yet discovered [but which actually happens in reality] the ML would probably detect it, although it wouldn't know it had. It would give correct results [AB: whereas, with an ES in which all the knowledge was included except for that undiscovered bit would probably give poorer results]. []
# The problem with ML, and the benefit of the knowledge representation point of view, was explanation. The ES could explain its knowledge base and say "I believe that this is likely because of that and that, and that is likely because of this and this." And so on - and you could trace the inference net backwards, in principle. Which is very difficult in ML. []
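[Ed. The kind of inference net AB describes might be sketched in code as follows. This is a minimal illustrative sketch only: the node names, probability values, and combination rules are invented for illustration and are not taken from the actual SCC knowledge base.

```python
# Minimal sketch of an inference net of the kind AB describes: leaf evidence
# feeds intermediate probabilities, which combine into an overall P(SCC).
# All node names, numbers and combination rules are illustrative assumptions.

class Node:
    def __init__(self, name, children=None, combine=None, prob=None):
        self.name = name
        self.children = children or []   # nodes this one is inferred from
        self.combine = combine           # function of the child probabilities
        self.prob = prob                 # set directly for leaf evidence

    def p(self):
        if self.prob is not None:
            return self.prob
        return self.combine([c.p() for c in self.children])

    def explain(self, indent=0):
        # Trace the inference net backwards from a conclusion, as an
        # ES explanation facility does.
        lines = ["  " * indent + f"{self.name}: p={self.p():.2f}"]
        for c in self.children:
            lines += c.explain(indent + 1)
        return lines

# Leaf evidence (illustrative values).
chloride = Node("aggressive chemical present", prob=0.8)
weld = Node("residual stress from welding", prob=0.6)

# Intermediate conclusions (illustrative combination rules).
initiation = Node("crack initiation", [chloride],
                  combine=lambda ps: 0.9 * ps[0])
propagation = Node("crack propagation", [weld],
                   combine=lambda ps: 0.7 * ps[0])

# P(SCC) from P(initiation) and P(propagation), as in the simple
# final rule AB quotes above.
scc = Node("stress corrosion cracking", [initiation, propagation],
           combine=lambda ps: ps[0] * ps[1])

print("\n".join(scc.explain()))
```

The explain() method walks from the conclusion back down through its supporting nodes, which is the "trace the inference net backwards" capability AB contrasts with ML. ]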
# So, I think it's /
# AB: And what I developed (and this is my final thing), I recognised that, in reality, in the chemical industry, we didn't use Expert Systems just to predict (put input and get an output "Will SCC happen?"). What we did - the real benefit of it - we would put the input in and then explore with the user whether they had overlooked something. And then [a later project] we used ES in business analysis for this sort of thing. []
# It wasn't to predict, which most ML seems to do, it was this sort-of conversation between the knowledge base and the user. It was a different paradigm [of the roles and benefits of expert systems].
# And that was the one that was successful, especially when I did an expert system for the Construction Industry. []
[AB: Papers written on all that:
- Basden A, (1983). On the application of Expert Systems. Int. J. Man-Machine Studies, 19:461-477. Introduces a non-technical way of looking at the application of a technology like expert systems, focusing on its role or meaningfulness, rather than its functionality or technical prowess. Sets out eight 'roles' that ES could play with users, most of which involve more than prediction.
- Attarwala FT, Basden A. 1985. A methodology for constructing Expert Systems. R&D Management, 15(2):141-149. Presents a perspective on expertise and how it can be elicited for expert systems. Knowledge analysis technique that separates out understanding from context-dependent problem-solving and arrives at much better knowledge.
] []
[47.40]
# NO: I think that's fascinating, and a really critical distinction.
# The problem, the issue with some of the AI that is being used in business is: A lot of it is predictive because they are dealing with transactions, and they want to know, e.g. Did more people buy the yellow box as opposed to the blue box? Did they buy it on Sunday? Did they buy it at $2 or $5? All of this AI / a lot of business AI is transaction oriented. It's not necessarily research oriented??? At least in the commercial side of it, / What I've been exposed to: Even in medicine, a lot of what we used AI for is predictive. If this patient has this many characteristics and symptoms when it presents at the ER, and he's got this kind of profile, what does this tell me, how should I probably treat this guy. It's all predictive. []
[AB: Just like the 1980s? Most others just assumed that AI was to predict rather than e.g. help refine the user's knowledge. The preponderance of prediction applications might be because the AI research and development community is almost all on that one track of prediction AI. There is a gross presupposition there. What I found in the 1980s was that it was the 'human' oriented roles that were beneficial, not the 'automation' oriented ones. ] []
[49.05]
[Ed. From here on we began discussing the topic of Stuart Russell's lecture, AI and the Economy. Our main thrust was that this is a multi-aspectual issue, whereas SR treated it in conventional, narrowed ways in which only the economic and technological aspects are considered. ]
# SJ: I think there are cases / I know that for ML it comes from thousands of data points, from thousands of users. But I think those data are still not representative of human knowledge. [] ***
# SJ: In current data science practices, there is really a need to sit at least with a representative of the different demographic groups. []
# I mean if the project is client-based, customer-facing like a B2C [business to customer] project, to really understand human knowledge. []
And if it is like internal, say it's to do with stock management within a company and they need to / [Ed. Example is of a chain of retail shops with a warehouse in which it is decided which stock should go to which branches.] []
# Five staff sitting in one room, they are in charge of stock optimization [asking] "Who we should send pair of shoes to, which of the shops in downtown (sorry, city centre in UK terms), we should send, yknow, certain type of brand or not?" They decide based on experience with those shop managers. Some shop managers are difficult to deal with. []
# And they [the staff in warehouse] know that they should say. If the shop manager says "I know that I want certain brand, all the sizes" they know from experience that shop manager is just / they like to think, for example, that the shop manager is getting greedy now. But shop manager really has the knowledge that "Oh no, I really need [that], because I know that the customer walk into my shop." []
# SJ: These types of thing, these are not captured as data. []
# It is their attitude versus attitude: the attitude of the stock optimizer versus the attitude of the shop manager. And, based on that, the stock optimizer says "Oh no, I'm not going to send that!" and decides differently. Manipulates the data on an Excel sheet or whatever happens. So this is human-human, human versus human. []
# And this is just one example. So, this is / yeah
# SJ: And then these type of companies, they heavily rely on machine learning for stock optimization. []
# And then when you talk to the data scientist ("Will you go, would you like to go and talk to the staff? Because the output of your ML solution is going to be presented in an interface for them, to help them with decision-making. It's affecting their job; part of their job is getting automated. Will you go to that office and park?") they avoid [going], because they know they are going to have a difficult conversation - five staff with 5 different opinions! []
# SJ: This is the reality of ML practices at the moment. []
# (I'm not talking about Google, Amazon, Apple, Facebook, etc. because they are perfect! They don't have this problem! [laughter]) I'm talking about the majority of businesses, who fancy ML. []
[53.05]
# TB: [by email] I did get a chance to listen to Reith. My inputs were that such a topic covers what is generally encountered when talking about AI and business: first, that there is anticipation that robots will be capable of almost anything.
# He addressed painting as one example but didn't really address the real problem that first one has to take everything out of a room or clear an area before painting it.
# I would question the lecturer on that as to whether it really is viable that the full operation can be done without human interaction and then really if there are any parts of the job actually useful to do without a human. Also I've seen spray paint 'machines' used by humans that actually do the job as fast as I think a robot could do it if not faster so I'm not convinced.
# Also they inevitably talked about autonomous cars and that is one area that's just not getting traction. The biggest problem with car autonomy is that it's most effective at top speed on the motorway, but that's the one place nobody would want to be a passenger.
# I've always argued driver assistance is the way forward to help with parking and other awkward things in driving as well as better safeguarding to reduce accidents.
# NO: Yknow, that brings up another issue here, and I think, one of the questions in the discussion by Stuart, was "Are we going to get to the point where AI can do all these wonderful things for us," e.g. manage the climate and all these resources and things. []
# But there's an issue with liberty. Example: my Dad was upset that the utility company was going to put a device [smart meter] in his meters that would tell how he was using the power, when he's turning the lights on and off, and how many devices he had that were connected that were requiring so much power, and the reason (I explained to him) the reason they do this is so they can estimate how much they need to build, how much capacity, where they need to put the energy, where they put the transformer stations and everything like that. So ultimately that is a benefit. But his view is, "They have no business knowing this stuff."
# And as we build this data society out, are we putting ourselves into the belly of the machine? []
[AB: Dooyeweerd. The company thinks only about information (lingual aspect) for purposes of distributing resources (economic aspect), but in reality all aspects apply. NO's Dad exhibits the pistic aspect, which makes dignity meaningful, and juridical aspect about a feeling of right and wrong. Might one also detect a slight selfishness there, which is a dysfunction in the ethical aspect? NO's own action involves the aesthetic aspect of harmonizing, in seeing the wider picture. But, what is needed by the company and all involved is to recognise the multi-aspectual nature of such situations.
] []
# SJ: Someone said: For workers on the shop floor in manufacturing companies, who work with robots - a robotic space, robots that have an ML engine [that] learns from them - it is like, for them, shooting themselves in the foot. Very soon they are not needed. []
# And for managers who invest quickly in those types of robotic arms with the intention of increasing productivity: where to move these staff when 80% of their job is automated? Where to move them? How to design a new job role for them - if they are really good managers, who don't want to get rid of / don't want to sack these people, but design new roles for them. They call it politically "value-added roles". That only happens after investment in robotic arms. []
# SJ: So, always the economic aspect is elevated at the cost of other aspects, at the expense of other aspects, at the expense of the wellbeing of the human on the shop floor. [] ***
# So, this is one of the things that /
[56.30]
# HS: That is not really economics. Because economics is about distributing resources fairly, right? Maybe it's reduced to numbers, the numerical aspect, probably. []
[Ed. In our discussions on Economics, we came to the important conclusion that economics must not isolate itself but must take all other aspects into account, at its heart, not just at the periphery - including, as HS mentioned, fairness, which is of the juridical aspect. See, for example, 3.1 From Detached Economics to Multi-Aspect Embedded Economics.
] []
# SJ: What it really is: People like us in this group, we know about Dooyeweerd. But we are not mainstream. No-one hears our voice. When we go to workshops or presentations, we talk about Dooyeweerd - AB, NB knows, we've done that - they always like Dooyeweerd, because you give them a perspective on how to think holistically, how to think / And they like it. But rarely does anyone hear our voice. Rarely anyone knows about / [] ***
# NB: In this discussion we are having here, I think HS just said it, that Stuart Russell (Russell Stuart? Two first names! I don't know which direction!) / # Anyway, in his lecture, he was talking about, if we have this explosion in general AI we are going to need a new economic system. And he talked about the economists and the science fiction writers lock them together in the same room. []
# But HS is right, in that we are not really talking economics; what we are talking about is the juridical aspect: of what do we owe to people who used to have an economically valuable task and no longer do, and, as a society, how do we make sure that the benefits accrue to everyone rather than just those with the capital? And that's fundamentally not an economics question but a juridical one. [] ***
# AB: Yeup; that's good thinking.
[58.20]
[Discussion of AI and the Economy continues below.]
# AB: Well, folks, it's been very interesting. I have been typing away and we haven't officially started on the questions that I had. # NB: I noticed that too, AB. []
# AB: However, we have covered an awful lot of those questions. []
# AB: One of the things that I thought it would be / Because a lot of this group has actually discussed economics quite deeply - and I don't know whether others got the impression that Stuart Russell's view of economics was rather shallow and rather conventional, I had a couple of things to ask CA about his views. []
# But because it has now gone over the hour, I don't want to prolong things too much, unless people want to stay / # NB: I have a student appointment that was supposed to start 5 minutes ago.
# Let's formally close but continue if we wish.
# But, look at the questions I sent through. I think an important one is: "AI and Economics: to what extent is SR presupposing conventional economics and (that's almost a rhetorical question), how would his arguments change or how would we want his arguments to change, if we presupposed instead, either (a) the recent thinkers that we have been discussing, such as Mazzucato, Raworth, Dasgupta and so on, bringing in environmental responsibility, [unpaid] household [activity], volunteering, and non-necessity of economic growth, and (b) how would they change if we brought in our multi-aspectual rethink of economics, and all the various parts of that. Would anybody like to actually discuss that? []
# NO: I would like to think about it, now that I have some definitional framework. []
# AB: The question really is, The next discussion, should we continue in AI and Economics and discuss that, or should we go for his final thing and discuss his final [Lecture] ?
# NB: I would like to cover yknow, Fundamentally, his lecture is about AI and jobs, in some sense, and I think there is a lot of rich material that we need to mine a little more before we move on. []
# NO: I would agree. I think / And I hope HS can be there, because he is the expert on the intelligence AI side of the equation. []
# AB: Well, thank you very much everyone. What we will do, is we will continue with AI and Economics next time and perhaps with the understanding of AI that HS gave us we can then begin to address some of the questions that are sent through - and there might be other questions. []
# If you have any / What I would like to suggest, offhand??: Each person look at the question that I sent through and see if you wish to add to those questions, any other questions that come up, either related to those or in addition to those. I thought of one or two others. Send them through, and I'll send them all back for discussion next time.
# ACTION ALL: See if there are any other questions you would like to ask. See Questions Prepared for Discussion.
# NO: Will HS send his slides over? Is he comfortable with that? # HS: I will check that I have your emails and will send it.
# AB: HS, actually there is a Google Drive opened up. I sent the link in one of the emails a couple of weeks ago, but you can send it to me if you like, and I will put it on Google Drive and send it round. []
# NB: A very helpful deck of slides. # AB: Yeah, very useful, very helpful.
# AB: Shall we close in prayer. Would anyone particularly like to close in prayer otherwise I would. # NB: I haven't heard from CA today. # NO: She should say a prayer. # AB: Would you be willing ...?
# CA: Well I had actually prepared for the session, and I brought some notes with me, but was just waiting for AB?? to finish. And ... we have finished. # AB: I am sorry. Some of us can continue after some of us left, but we can continue next time. []
# NB: I think CA's economic insights here are going to be very important to us. []
# CA closed in prayer. Thanks for bringing us together. Prayed for fruitful, meaningful discussion. []
[1.05.00]
# NB: Thank you everyone. Every time I do this it warms my heart, so I have a better day after this. I look forward to the next one. []
[NB, NO left. SJ]
# SJ: So, what are we going to discuss. # CA: If everyone is leaving then ...
# AB: What I suggest, CA, is that you set out your questions / []
# I've actually got one or two technical questions for you [CA], if I may, which is that:
# AB: SR mentioned Bessen's Inverted-U curve - productivity increases and then decreases with technology. Do you know about Bessen's Inverted-U Curve, and can you say whether SR is right or whether he has oversimplified? Yes or no. []
# CA: When we talk about labour and capital, the capital could be the same machinery. OK, they always say "Now we have excellent money, so what are we going to do with it?" So some argue that, if we are going to put more money towards labour, then there will be less money for capital. So, that's the argument. And some argue that we need to put money into both, because they come together. So, when we put more of our money into one area, then we have less money to go into the other area. If that makes any sense. []
[AB: But our Rethink of Economics questions money as owned commodity, seeing it rather as enabling more human functioning that can bring Good. ]
# AB: And there was also this "great decoupling" he talked about, of productivity from median real income - Erik Brynjolfsson and Andrew McAfee. Do you know about that? []
# CA: When we talk about productivity, we talk about all sorts of productivity, ???reaching?? to GDP itself. There is a list of productivity measures that we talk about. So we need to visit all of them. []
# AB: So, basically, he was oversimplifying? []
# CA: I think that there is a different way to look at things. He's putting economics in a very bad place, and I don't understand why. So, there's a different way to look at things. []
# AB: OK. So would you like now to itemize the points or questions that you had; we don't need to know what they are. You can put them over now if you like, but, because there's few of us, it might be better to bring them next time. What do you think? It would be nice to know what they are, though. [to send them round]
# CA: Yeah, OK.
[1.09.00]
# AB: So, which do you want to do? Do you want to give us a summary now, then fill them out later, next time?
# CA: So, one of the things that I was thinking about, is let's talk about what is actually happening in the world today. We talk about the Great Resignation that is actually happening. []
# The Great Resignation is happening because of lot of stresses. One of the stresses is because people are doing all sorts of things, when all they want to do is just be creative in their field. []
# AI in that sense can actually help us, because AI can take over all the jobs ??? that are boring, which nobody wants to do. And then people can move to a stage where they can work on creativity. So then they are not bothered with all those repetitive boring jobs that, yknow, now we have to do.
# So, now that AI is good at doing repetitive boring jobs, why can we not just let the AI do that, because AI is faster at it??; AI can do it quicker, better, with fewer errors and things. So, we're there.
# So, if that is really going to happen, [with] more people doing more creative work, like / Think about this / [Example] We are academics and we are focused on just doing / just coming up with research or coming up with better teaching pedagogy, or something which is the creative kind of thing, instead of doing marking and yknow boring things like this; then the work is going to be more meaningful. []
# AB: Yeah, I think all three of you academics would love that: Get rid of the marking to AI!!
# CA: Yeah!
[AB: Interestingly I remember hearing very similar things in the late 1970s: let automation do all the boring jobs so that we can have huge amounts of leisure time. Nowadays, I would hear pundits ask whether the above is a very middle-class view. ;-) ]
# AB: That's one. I think it opens up a lot of issues that we can discuss next time. Have you got a second one? Or do you want to say more on that, at the moment? Or do you want to expand on it next time?
# CA: OK. There are certain jobs that we would feel insecure to let AI do things.
# [Example] One of the things that we talk about in the banks is when people have large sums of money. Now, if they have large sums of money and they are bringing it to the bank and they want to deposit it, they want to give it to some human; they don't want to give it to the machine. Because what if the machine deposits it and then the machine says "I didn't receive any money from you", and doesn't come up with any receipt. Yknow, these types of things. []
# So there are certain areas, shall we say focus points in certain professions, where there really needs to be the presence of people to oversee the job being done. []
# I think that people in the medical line would agree that as well, because these are very / you cannot let the machine do this kind of job because it's very voguish. []
[1.12.45]
# AB: OK. I'm just making a note to add these [to the questions to pass round]. []
# CA: Then we have what we call it as people on / know we have to hireity??? of it, the people who do the boring and repetitive jobs. Then we say "What happens to these people, when these boring and repetitive jobs have now been taken over by AI? What happens to these people? These people will not have a job." Now, the thing is: What is wrong with these people moving up the ladder to do a more creative job, instead of doing a boring and repetitive job? []
# AB: Any comments?
# HS: Yes, it means we need to change the education system. Because, at least in my country, the education system doesn't prepare people for more creative jobs, I think. []
# TB: [by email - so not directly in response to the above] The other typical thing [discussed by Stuart Russell] is that it's questioned what it is to be human, since that has also I think always been the other big question as to whether humans need to interact with humans to live and what lack of value it would give to life if they didn't and then interacted with robots that could only impersonate love and relationship.
# So that would then matter for how a business ran.
# It seems to boil down to the same questions of what 'monkey jobs' can usefully be covered by AI; yet still we need people behind those AIs, but it might give them some benefits like a four-day working week.
# AB: [continuing from CA's and HS's comments above] I wonder whether there is a presupposition there [in what CA and HS said]? That we middle-class academics look at those people doing 'boring' jobs and assume that they are not being creative, whereas, actually, a lot of people doing what we think of as boring jobs maybe don't find them so boring. And certainly there is some creativity there. Even / [Example] I remember once immediately after I got married, I was on an agency and I went to sweep floors in a technical college. It was a boring job, and so on, but actually I was able to put some creativity into it, e.g. how do you get the dust [AB: actually it was more like wood shavings which are more interesting than dust!] out of a corner under a table, and things like that. []
# CA: That's what I'm trying to say: You can do that creative part of it, then let people know that part, and then they will implement it. This is what I meant. So, there are two sides of the job in the ???. One is thinking through the process and coming up with the fact that there is a more interesting way of doing it. And then doing the job itself based on the process. So you do that thinking part, and the job itself will be done by the machine. []
[1.15.50]
# AB: But sometimes it's actually nice to do that actual job. Satisfying.
[1.16.00]
# AB: Do you have another issue? For discussion next time.
# CA: He also talked about what people would do when they have all the time in the world and then they get bored and all of that. []
# Now, he's wrong.
# Because at this point in time, if you are just doing the ??? job, it means you are just thinking about the creative ??? things, it means you have fewer working hours, you can spend more time with your family, and exercise, and do all the other things you ??? do as a human being to be physically and mentally healthy, which people are not doing because they don't have time. []
# AB: OK. I had wondered about this leisure time thing, myself, actually.
[AB: What I had wondered but did not want to take up time saying that day, was about how all activities have an aesthetic aspect, which is the aspect that makes leisure meaningful and possible. And, some at least (including I) find that if they raise their sights to 'do the good' rather than just be dominated in our thinking by perceived lack of time, then things go better and we do in fact find we have time. Though the 'economic equation' says I don't have time to take the slower and more ecological means of transport (for example train to Europe rather than flying), when I do, I find rest, fun, interest, and also time to get more done. As a Christian I interpret that as: if I obey what God wants then I get unexpected bonus blessings / benefits. The logic that bullies current thinking in AI and economics does not recognise that.] []
# AB: HS or SJ, do you have any comment on that?
# HS: I think I agree that there are some problems with operations in the current work system. But I think more about creating a new economic system. OK, we realise there is some problem, but I don't think ML will solve that. Perhaps it alleviates some problems and creates other, different problems, especially in resource distribution. [] ***
[AB: Yes, one of the questions to discuss next time is about if we presuppose our rethink in economics. ]
# Of course, if we let the machine do all the boring and dangerous jobs, all the money that previously went to those families will disappear. We need to think more about how to distribute this kind of income in a more balanced way. [] ***
# AB: That's a good point, actually. It might solve some problems but create other, unforeseen problems. []
# AB: CA, you look as though you have something to add.
# CA: Everything has a problem. Nothing is perfect. Everything has its advantages and disadvantages. []
# So, this is what I'm trying to say. People have to elevate themselves so they are in the creative part of things. And then the machine can do other things.
[AB: Just thought: I wonder whether we could affirm that but widen it, in the spirit of our rethink of economics as multi-aspectual. Maybe see the boring-to-creative opposition not as the whole issue, but as one bad-to-good opposition among several. Boring-creative is an opposition that is directly meaningful in the aesthetic aspect. If AI enables that change, then it may be seen as good w.r.t. that aspect. However might AI also be good w.r.t. all the other aspects too in which good and evil are possible? ]
# It's just like when we were in the factory: the guy presses the buttons and the machine does the job. He's not chopping things and cutting things up; the machine is doing it. What I'm trying to say is: you be the brains and do the top thing, and the machine will do the ??? repetitive ??? thing. That's what I'm trying to say. It's like the machine and labour, ??? working the machine. []
# HS: I agree that productivity will increase. My problem is that / because previously you needed, for example, 100 labourers to do those things, now you need only 10 people. Then you will / Of course the middle management will sack the rest.
[Ed. cf. what SJ said earlier. ]
[1.19.50]
# AB: Is that an issue of the meaning of life? []
# Yknow, because even the supposedly creative things / I've come across people who at the end of their careers think "OK, I've been very creative, I've done this and this and this. But so what! What's it all been for?" [AB: On the day I transcribed this part, I heard of a musician who had committed suicide.] So, I wonder if we ought to bring the meaning of life into our discussion. []
[1.20.20]
# AB: Right, how are we doing? Shall we close there, or is there any other issues you would like to bring up for next time?
# CA: Yeah, I was just thinking about the "substitute and complement" [theme on the role of AI]. The words that they are using, "substitute" and "complement".
# I think that we can use both. # HS: Yeah yeah. # CA: Substitute and complement [humans with AI].
# Substitute in the area that is boring and things, dangerous jobs.
# But why do we need to have yknow, why do we need to have soldiers at all? I mean war - why do they just go there and get themselves killed? There shouldn't be such a thing! # AB: So, what we should do, of course, is to turn all the soldiers over to designing and manufacturing weapons that robots can use to kill each other!! [AB was joking there!] # CA: We shouldn't even have all these things.
# AB: That's a very important thing, because it's / I was just making some notes on the Economics Rethink, and the thing about non-essential economic activity. Yknow, what should we be expending human effort and creativity and resource on? I mean, from God's point of view. Yknow, we shouldn't have soldiers, we shouldn't have an arms industry, for example. (Of course it's easy to say we shouldn't, and maybe it's a bit more complicated than that.) But maybe, even if we allow some soldiering, the leaders of society should have a direction about what is meaningful in life.
# CA: In those days they were conquering each others' countries and ... anthem. So they need to have soldiers to ... extent. But ... we've come to a certain like em / we are not barbarians any more. We have kind of understanding and religion and things like this.
# So, I don't understand ...
# HS: ... Neighbours like Russia and Ukraine. # CA: Do you know what led to that? What is the psychology behind that, what led to that happening? That's the thing we need to look at. # AB: I would suggest that it's / there is something of the pistic aspect there.
[1.23.30]
# AB: OK, folks. I think that, when we get into an hour and a half like this, it takes me a lot longer to transcribe these things. So I'd like to close now so that I'll have less to transcribe. :-u [tongue in cheek!]
# Thank you very much for those who remained.
# I suggest that the next one - and we will continue in AI and Economics - will be on the first Friday in June same time and same place and so on, unless we need to move it. Wait a minute! No, the first Friday in June I'm not available.
# Shall we bring it forward to the last Friday in May?
# HS: Yeah last Friday in May OK. # CA: Last Friday in May is good. # AB: Yeah, OK.
[Next AI discussion: Friday 27th May 2022]
# Then we seem to have a lot to discuss. We are inserting a new one, so bringing it forward a week might be good.
# Thank you very much everybody.
# Thank the Lord for your thinking. May the Lord bring blessing out of this. And as CA prayed, may the Lord bring real good out of this, so we can actually impact the world in some ways. []
# Every blessing!
# HS: By the way, AB, these slides - I will put them in the Drive. # AB: Yes.
# HS: Actually, there is a longer version of the paper that I sent you today.
# By the way, because it uses LaTeX, do you have an Overleaf account?
# AB: Write that down for me in an email, because I've only used LaTeX once. But I've taken an interest in TeX because it was designed by a committed Christian, Donald Knuth. It's actually beautiful, so I support it. Not just because he was a Christian, but because he did such a wonderful job. I almost see TeX as expressing some of the laws of the lingual aspect that he wanted to express. And he had the versions of TeX, and he didn't say Version 1, Version 2, Version 3 - but do you know what he did? # HS: He changed the name? # AB: No, he used pi. He recognised that you will never get to the perfect, so he had Version 3, then Version 3.1, then Version 3.14, then Version 3.141, and so on; I think he has got to about 10 digits of pi. [Ed. Knuth is still living; he has said that on his death the version number will become exactly pi, and that will be the final one.]
# AB: So, I'd love to use TeX more and better, so if you were asking me about a software facility or something, put it in an email.
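[Ed. The version-numbering scheme AB describes can be sketched in a few lines of Python. This is purely illustrative of the pattern (each TeX release appends one more digit of pi); the function name and release indexing are hypothetical, not anything from TeX itself.]

```python
# Successive TeX version numbers: each release after "3" appends one more digit of pi.
PI_DIGITS = "3.14159265358979"

def tex_version(release: int) -> str:
    """Version string for the nth release since Version 3 (release 0 -> '3')."""
    # release 1 -> "3.1", release 2 -> "3.14", release 3 -> "3.141", ...
    return PI_DIGITS[: release + 2] if release > 0 else "3"

versions = [tex_version(n) for n in range(5)]
# -> ['3', '3.1', '3.14', '3.141', '3.1415']
```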
# HS: Yes, I already put it in an email, with the link. So we can edit together.
# AB: OK. Thanks very much. # HS: OK, see you. # AB: See you next time, and we can be in email contact as well. OK. Bye.
[1.28.29]
[The following are questions sent by the host before the discussion. They avoid merely repeating standard questions about AI and concentrate instead on questions that a Dooyeweerdian and/or Christian perspective might be able to address.]
# Q1. Preamble: Stuart Russell discusses two questions:
» Will AI take over our jobs?
» Is that a good thing?
# Question: What presuppositions lie behind those questions, in SR's approach? On what basis does he try to address them?
[Example: He presupposes that jobs are necessary, and unpaid activity can be ignored.]
# Q2. Preamble to Question: Stuart Russell, after a short time, discusses the impact of GPAI (general purpose AI) on the economy. Given that the development of GPAI is still in doubt, can we think about the impact of AI for specific tasks on the economy? We have e.g. AI advert targeting, which will impact which kinds of things are sold.
# Question: Can we think imaginatively about applications of AI not currently in use that might affect the economy for good or for ill?
# Q3. To what extent is SR presupposing conventional economics? How would his arguments change if we presupposed instead:
(a) recent thinkers in economics
(b) our multi-aspectual rethink of economics
# Q4. Where might aspects help us in tackling the questions that SR or others posed? For example:
# Q5. Where might the idea of ground-motives help us?
» Example: Can they help us understand the root of the opposition of the two camps he mentions, namely Strive v Enjoy? In the three dualistic ground-motives discussed by Dooyeweerd, might Enjoy side with Form, Grace, Freedom, and Strive side with Matter, Nature, Nature?
# Q6: Where might Dooyeweerd's understanding of theoretical thought (his "transcendental critique") help us?
[Example: We may understand that AI systems overlook much tacit knowledge because they work mainly by explicit (conceptualized) knowledge. cf. SR's saying that, for e.g. human skills, we don't yet know how to do them well.]
# Q7. Where might Dooyeweerd's theory of history / progress help us?
# Q8. Where might elements of a Christian perspective alter the discourse?
# Q9. What is the role of leisure in the future (assuming AI does all the work and we have leisure)? [For example, leisure is meaningful in the aesthetic aspect, which depends on the economic, and superfluity (dysfunction in the economic) harms aesthetic functioning. Hence too much leisure is bad.]
Part of Christian Thinking Space