RLDG Discussion 27 May 2022 on AI and Economics, Part 2

Discussion held on Zoom 27 May 2022

Notes as Typed In During Discussion
(with errors, and missed bits marked as "...").
AB's own contributions are mostly omitted. This, the fourth RLDG discussion on Artificial Intelligence, continues the topic, AI and Economics, which was started last time.
This discussion was prompted by Stuart Russell's third 2021 Reith Lecture on AI in the Economy, but this time, we make more reference to our RLDG Rethink of Economics, which continues in parallel with this.

This time I have started adding summaries of each section.

Present: NO, NB, TB, AB
Apologies: HSS, CA

Contents

--- About This Document [za400]

[AB hosted this discussion, recorded it and then transcribed it (27 June 2022). AB adopted two roles in doing this: (a) of editor, e.g. giving links to other material, adding headings, adding "***" to important points, adding notes in square brackets, starting with "Ed: ...", that explain things or link to material, and attaching unique labels to statements for future reference (actually only marked as "[]" but not yet assigned); (b) of contributor, inserting responses in square brackets to what had just been said, starting with "AB: ...". The latter are added in order to further the discussion, especially in a way that could contribute to developing a Christian and Reformational perspective on AI. Some are responses he would have made at the time had he been able to. In others, he might even criticise himself for what was said on the day!

"[...]" means audio that has been omitted from the transcript, because it was e.g. bumbling and contributed nothing to the content. ]

----- DISCUSSION STARTS

[Recording started]

# AB: Looks like it might just be the three of us. So, we'll see how we get on. [omitted stuff]
# NB: Sometimes make more progress with a small group than a large one.
# AB: Just looking for the questions [sent out beforehand].
# NO: [maybe get interrupted]

# AB: What I thought we might do is look at a couple of the questions in depth, and see where we go from there.

# AB: [opened in prayer]
[02.55]

----- Initial Issues [za401]

# AB: Any feedback or comments you wanted to make before we start?

--- The Politics of Unhappiness [za402]

# NO: There is an article that I would like to send around, called The Politics of Unhappiness. In it, he develops / goes through three waves that have shaped a lot of public attitudes in the modern world. He goes back to how psychoactive drugs became so prevalent in the 60s, 70s, 80s. He goes into Cognitive Behavioral Therapy, how that came up, how that became the New Age treatment to make people happy. []
# Then he identifies AI as this third wave that is just beginning to capture people, and how for example young girls who don't get enough Likes on Facebook become depressed. And how social media functioning, which is sponsored?? by a lot of AI, is beginning to dictate happiness. Or unhappiness. []
# One of the other things he mentioned, I'll just read here. He says,

"The third wave's [which is AI] economic effects are only beginning. VR and AR [which is virtual augmented reality] contribute $46 bn to the global economy. That number is expected to rise to $1.5 trillion by 2030. AI is expected to contribute an additional 14% or $16 trillion to the global GDP by 2030. In 2019 in the United States the market for chat bots, the robots that keep people company, was $400m. By 2027 $it is expected to rise to $2 bn, and sex robots alone may double the global market for sex devices by 2027." []

# So he says, "This will be part of the political effects that are going to either keep people happy or stupefy them, which makes them easier to govern."
# NO: This is a very interesting article, that goes to the issue of "What is AI doing? What is it going to do?" []
# Not necessarily like Stuart Russell, who wants to talk about what AI is going to do mechanically and to produce things. But I'm much more concerned about what AI is going to do to mental perspectives. [] ***

[AB: An important difference: the impact of AI on the economy, and on mental health and identity. We can see these as the impacts of AI in the economic, psychical and pistic aspects. ] []

[06.10]

# AB: Wow. That's interesting. [AB: The horror of it!]

Summary [za403]: AI impacts psychological perspective as well as economies.

[continued below]

----- AI and Economics: Conventional and Rethink [za404]

[AB: In the back of his mind, AB saw a link from that to the following, especially in the author's dwelling on growth of GDP as a measurement of this horror.]

# AB: One of my questions, that I suggest we really look at - because I think we [RLDG] can do it - is "What would be the role of AI, or how would we see AI, if we presupposed our Rethinking Economics?" Or, at least, "if we presupposed the recent thinkers in economics, like Mark Carney, or Mazzucato or Kate Raworth or others." [AB: which questions the validity of GDP as measure of good, and the need for growing GDP.] [] ***
# Because an awful lot of the AI-and-economics people presuppose conventional economics. []

# AB: [Referring to one section of RLDG rethink of economics] So I can immediately think of a riposte to this article, this third wave, in that / We can hold up our hands in horror. But if we take our view of economics as doing Good, Harm or Useless, then perhaps that can bring some light on it. [Also the mandate of economics as contributing towards Overall Good.] []
# I don't know about you, but chat bots, sex robots, looking for a lot of Likes, VR and AR, are all stuff that I would argue is 'non-essential'. And so, what would Mariana Mazzucato say about that, and what would David Graeber, with Bullshit Jobs, say about that? []
[08.20]

# AB: Presumably, the article is mainly trying to bring the horror of it?

Summary [za405]: In considering AI and the Economy, take a new understanding of economics, not the old growth-GDP-oriented one.

--- Essential and Nonessential [za406]

[Ed. it seems the conversation then went onto the issue of essential-nonessential for some minutes.]

# NB: ?? you're talking about the banality of it. # AB: Yeah, banality would be a good word. []
# NO: He does talk about how it acts to stupefy people. Which, again I think, goes right to this nonessential issue. []

# NB: You very quickly cannot distinguish the economics from the psychology. But there's some felt need in the human heart that AI appears to address. (Notice I'm using my language very carefully: I'm not saying it's a real need; I'm not saying that AI actually addresses it.) But there is something attractive to humans about chat bots, because otherwise they would not get used. And so why / how have we reconstructed a society that leaves these holes in our soul, that AI is coming in and filling in a very unfulfilling way? [] ***

[AB: But what about human sin? ]

# AB: Go on and say a bit more about that, NB.
[10.00]

# AB: If you want, think in terms of which aspects are in play, and so on?
# NB: I'll have to think about this. Certainly I would say the psychic and aesthetic are the two that I would home in on. Probably overlooking the social.

# NB: That we have been built with needs. I don't know if Maslow is the most useful way to think about those needs in the Biblical sense, but we have things we need from this world and AI is trying to meet them, because they are not being met in some other way. []
# NO: I think that is indeed part of what this guy is trying to speak to in the article. []

[AB: Maslow's hierarchy of needs may be seen as needs meaningful in a subset of Dooyeweerd's aspects. Moreover, while Maslow focuses on the slightly negative idea of needs, Dooyeweerd focuses on the positive-and-negative idea of meaningfulness, functioning and Good, which includes the idea of needs, but also the idea of possibilities not yet conceived. (However, maybe the focus on needs is appropriate while we are trying to differentiate essential from nonessential.) While Maslow puts them in a hierarchy, Dooyeweerd sees aspects as all equal, though with inter-aspect dependency placing them in sequence. See a brief comparison between Maslow and Dooyeweerd in Tabular Comparison of Suites of Aspects. ] []

Summary [za407]: AI being used for many things that seem non-essential. Should that be?

--- Spiritual needs [za408]

# NO: It occurred to me - I was looking at one of the slides in HS's presentation. [...] He is comparing the contemporary to classic views of AI on the slide, and he talks about it as rational AI and then he talks about it as behavioural AI. []

# And, again, to my thinking, this is the problem we continue to have: we always seem to want to walk down this binary view of things and leave out the spiritual and the soul. That's where this need comes in that I think NB is talking about: we have a spiritual need for hope and for a fulness of our life, a meaning in our life. (We could go to Viktor Frankl's work that he did on the meaning of life.) [] ***

# So, this AI in this sense is not fulfilling a rational need really but it's a behavioural response that is sort of taking over a spiritual need that we have. []
# AB: Now, how do we see that, then?
# NO: Well, I would like to suggest that, as we rethink the economics - again, in a much bigger way - we got to, in some way, factor in this idea of the spiritual or the do-good aspect of our behaviour. [] ***

# That may work with this idea you have of essential-nonessential. But again I think we have to broaden that, because it needs to be essential and nonessential wellness, or something like that, that can bring in the idea that it's addressing that we are human beings in a different way. []

Summary [za409]: Remember spiritual needs when thinking about the usage of AI; they are important too.

[14.10]

--- Essential-nonessential not binary opposites [za410]

# NB: I think that the essential-nonessential distinction might also in some ways be too binary. []
# I think that if we apply the concept of essential-nonessential to Maslow's hierarchy, things that are essential at one layer are nonessential at the next layer down. [] ***
# My needs for social interaction and respect in camaraderie - yknow, those are absolutely needs up at that level, but at the level of physical air and water and food, they are not needs.
# AB: yeah, good point. [Ed. The non-binary nature of essential-nonessential was made by RG early on in the Economics RLDG discussions.]

Summary [za411]: Essential and nonessential are not binary opposites, so thinking about them requires more nuanced thinking. [AB: Can Dooyeweerd aspects give that?]

--- Responsibility of AI: Embedded AI? [za412]

# NO: [...] Why couldn't we say that AI has a responsibility, as any of the other technologies? And particularly where it deals with the psychic nature of people, their interactions and social wellbeing. That it has to address this broader sense of what it is contributing. []
# AB: How do you mean? Just as we are thinking of economics as embracing all others [aspects, spheres] or embedded with all others, that AI should be as well? # NO: Yeah. [] ["Embedded AI"] ***

[AB: Egbert Schuurman [1980] argued from philosophy that technology should be governed, not by the norms of its own qualifying aspect, the formative, but by the norms of all other aspects. That is, it should 'serve' other aspects. AI is a technology. That may be a seminal work in arguing for "embedded AI".

Schuurman E, (1980), Technology and the Future: A Philosophical Challenge, Wedge Publishing, Toronto.
]

Summary [za413]: AI has responsibility.

--- Which kind of technology is AI? [za414]

# NO: Yknow, I am still working on something that I worked on here a bit for a while. I think there are three technologies in a broad sense. There's: []

# AI technology is informational technology. []
# NB: Yeah, but it [AI] reaches down into the other two. # NO: Oh, sure. They can all end up ??fying?? in the different categories. But fundamentally, what it is with AI is information. []
# AB: What do you mean by "transformative"? # NO: A transformative technology is something that changes things. Its primary forces are chemistry, biology, genetics, it's secondary orders are nano-technology, additive manufacturing, applied nuclear energies. Those are transformative technologies. []
# AB: OK. So, by transformative, you mean kind of chemistry, and so on? [AB: I now think AB misunderstood in focusing on chemistry; rather, it's formative; see below.] # NO: Well, I'd kinda have to lay out the whole matrix for you guys, which I'm just kind of finalising. # NB: Work in progress. []

[AB: Sounds like technologies that focus on different aspects: kinematic, formative, lingual. And NO's "fundamentally" sounds like Dooyeweerd's idea of qualifying aspect. But then transformative technologies (formative aspect) are founded in various aspects like the physical, biotic, etc. including even the formative (additive manufacturing, applied xxx). ] []

Summary [za415]: Contribution of AI is founded in information.

[17.40]

--- Responsibility of AI, continued [za416]

# NO: So, again, if this AI is going to do all these wonderful things, like Stuart [Russell] says it is, and impact our society, to me, there should be measurement for it: which says, "Is it contributing to the good of mankind, or is it not?" [] ***
# AB: Ahah! That sounds very useful.

Summary [za417]: Challenge AI to contribute to the Overall Good.

[Ed. AI and Economics in the light of our rethink, continued below.]

----- On the Capabilities of AI [za418]

# NB: I have a thought that I want to try out and see what you think. I'm just thinking out loud. I'm thinking in terms of Maslow's hierarchy, and I'm curious for your help on more translating into kind-of the aspectual framework. []

--- AI Benefits More at Lower Levels, in Earlier Aspects [za419]

# NB: Would it be fair to say that we would be more comfortable with using AI to meet our needs at lower levels of Maslow's hierarchy? Meeting our physical needs, our security needs, are better than meeting our social needs and our self-actualization needs? [] ***

# AB: First [to compare Maslow with Dooyeweerd], go to the Dooyeweerd Pages, "dooy.info/compare.asp.html" and you'll see a comparison of Dooyeweerd's aspects with Maslow there. # NO: Very nice.

# AB: The second thing, is that I've always thought that AI is going to be most useful in the very early aspects, the first four aspects, because the laws are simpler, the laws are more determinative, and there's not the human or animal subject interfering with these things. [] ***
# And whether AI can ever move up the aspects or not is a discussion to have. []
# But certainly the first four aspects, AI can be reasonably successful in. Maybe the first six, up to psychical. []
# NO: AI in the first 4-6 aspects? # AB: Yes.

Summary [za420]: Machine learning AI seems more useful and successful in contributing to applications in the quantitative, spatial, kinematic, physical aspects, and maybe biotic and psychical. Because their laws are simpler, more determinative.

[21.00]

--- Using AI in art? [za421]

# NB: In a sense, AB, if an AI produces 'beautiful' artwork - and now I use the word "if" and I'm putting "beautiful" in quotes there - is that / Let's say that I have an AI that has produced what I consider to be beautiful artwork - is that a form of reductionism, where, by me accepting it as beautiful, I am willing to say that what I previously had thought was aesthetic functioning is really just an amalgamation of a whole bunch of lower-aspect functioning? [] ***

# AB: I think, OK, let's take that one. Are you talking about visual art or are you talking about music? Which do you want? # NB: Visual art. []
# AB: Well, what AI is doing, I think, is probably doing a bit of both. Remember, my view is - and to some extent your view of subject-by-proxy. So, there are humans involved in the design of the AI, and even if it's machine learning, we decide what columns to put into the data records to train it. []

[Ed. "Columns" as in a spreadsheet record, each column contains data with a particular signification. See On Coding Reality into Data for ML/AI in previous discussion. ]

# And so let's imagine / Let's go through this in detail. Let's assume it's machine learning, not knowledge representation. []
# I realise that I've not seen / that's better, isn't it. [...] [laughter]
# We would train / We would take as columns for the training record, of which the output is beautiful or not beautiful, or degree of beauty, or beautiful in different ways, or something. We would take into account colour. And we would probably / what we would probably do, is we would put in the red, blue and green components of colour as columns of the thing, right? And / it may be that we would put in something else, like the names of colours, but certainly the red, green and blue components, let's say. []
# Which I would say is a kind of physical thing, it's the physical makeup of the wavelengths of a particular colour. []
# Let's say we also take into account the spatial proportionality of the thing - so the Greeks had the idea of the Golden Ratio, so we would put that in, the spatial proportionality [width/height]. Let's say we put into it the aesthetic law that things are strong on the four points of the thirds and two-thirds, and weak in the middle. So we do that. []
# Then / You see, what we've done, is we've already translated in our minds that beauty, from the Greeks, there's this theory that beauty is best on the thirds and two-thirds. []
# NB: ??? by the examples that we have selected.
# AB: What we do, is we take a million or a billion examples of paintings or pictures and say "beautiful" or "not beautiful", and we code the qualities, the aesthetic qualities, into various things like that. Or where it / spatial arrangement. []
# We might also code into it the use of colour and occasional use of complementary colour. Now, complementary colour is probably coded in terms of an RGB translated into a complementary RGB.
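[Ed. A minimal, hypothetical sketch in Python of the "columns" approach AB describes above. The column names, the example values and the choice of a simple logistic-regression learner are illustrative assumptions, not any actual system; the point is only that the human judgement of beauty has already been transduced into numeric columns, meaningful in early aspects, before any learning happens.

    # Hypothetical hand-coded columns for each picture:
    #   mean_r, mean_g, mean_b - average red/green/blue components (physical aspect of colour)
    #   ratio                  - width/height proportion, cf. the Golden Ratio (spatial aspect)
    #   thirds                 - how much of the detail sits on the thirds/two-thirds lines
    #   compl                  - a score for use of complementary colours
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([
        [0.61, 0.45, 0.30, 1.62, 0.80, 0.70],   # a picture a human judged "beautiful"
        [0.20, 0.21, 0.19, 1.00, 0.10, 0.05],   # a picture a human judged "not beautiful"
        # ... in reality, a great many more such rows
    ])
    y = np.array([1, 0])   # the human judgement: 1 = beautiful, 0 = not beautiful

    model = LogisticRegression().fit(X, y)   # the machine 'learns' from the already-transduced aesthetics
]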

Summary [za422]: Building an AI system involves entry of human expertise, in aesthetic or any other aspect, in machine learning (ML) AI as well as knowledge representation (KR) AI.

[25.30]

--- Minimal Human Coding in ML AI [za423]

# NB: I don't think / There's no humans involved in that coding, though. That, with machine learning, at least, if you give it enough examples, and it / if you give examples of things humans think are beautiful, it might be able to figure out, "Oh, when humans think something's beautiful, there often will be these complementary colours in it." But I don't think any human decision went into telling the computer that. The human decision went into choosing which things were beautiful in the first place. []
[26.00]

# NO: Well, there's the different kinds of AI. There's the AI where you supply the examples, and then there's the AI where you tell it / yknow, "Go tell me what people like." Then it does this statistical regression that brings it all in. So, in some sense, you are still doing some definition in this whole process, right? []
# NB: Human values ??exist? in one way or another in the examples that have been selected, yes. []

# AB: Let's assume, then, that we don't put these laws of aesthetics in, but what we do is we just present it with a load of jpeg files. And we say "This one's beautiful; that one's not beautiful." []
# Well, what we're doing there, is we're essentially doing a kind of - jpeg is RGB values over a spatial distribution. So we're essentially presenting it with physical and spatial information [those two aspects]. []

[AB: I suspect that doing that would require many many more examples, because telling the computer some of the components of beauty in the pictures would give it a helping hand. ]
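[Ed. And a minimal, hypothetical sketch of the second route AB describes, where we hand the learner only jpegs: each file becomes RGB values over a spatial grid, and the only explicit human input is the beautiful/not-beautiful label on each file. The file names and the tiny network are illustrative assumptions.

    from PIL import Image
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def jpeg_to_vector(path, size=(64, 64)):
        """Resize a jpeg and flatten its grid of RGB pixel values into one long vector."""
        img = Image.open(path).convert("RGB").resize(size)
        return np.asarray(img, dtype=np.float32).flatten() / 255.0

    beautiful = ["art_001.jpg", "art_002.jpg"]   # hypothetical files humans judged beautiful
    plain = ["snap_001.jpg", "snap_002.jpg"]     # hypothetical files humans judged not beautiful

    X = np.array([jpeg_to_vector(f) for f in beautiful + plain])
    y = np.array([1] * len(beautiful) + [0] * len(plain))

    # With only pixels to go on, far more examples are needed than with hand-coded columns.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
]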

Summary [za424]: But sometimes, in principle, cannot the human input be minimal?

[27.30]
[AI in aesthetics continues below.]

--- Unhappiness, contd [za425]

# NO: Yknow, there's no way out of this conversation. What we were trying to say at the beginning is, what is beautiful artwork? Well, how? That is going to be subject to whoever is in the driver's seat. [] ***

# And anybody that it leaves out will have an issue with it. [] ***

[AB: This is, of course, a juridical aspect of developing the AI system. ]

# NO: [...] There is AI that achieves mechanical purposes; it helps us build a better car. But AI that has to do with behavioural issues is where we are I think running into serious problems with our psychic [functioning]. []
# AB: How do you mean "running into serious problems with our psychic"? # NO: [...] We are finding, if you look at a variety of studies (unfortunately there are academics that are getting paid on the back end to find findings that all of this screen time doesn't hurt people []) / But just by the fact that we have things like Facebook (that have an economic value that is larger than most companies producing normal things) you've got evidence that people are tied up in this. We have people who are becoming very unhappy, based on what they see being shown to them and interacted with, all driven by a lot of this intelligence. And not even the intelligence of the self-aware intelligence that we're supposedly going to get to.

Summary [za426]: AI for applications meaningful in the human aspects is likely to generate unhappiness.

[AB: That sounds like a hypothesis, which we could test in some ways. ] []

[29.45]

--- Back to aesthetic aspect: Reductionism [za427]

# NB: AI is useful /
# AB: [...] Trying to answer NB's thing about "What about the aesthetic aspect?" And so on. # NB: Is it reductionism? []
# NO: What does that mean? # AB: Is it really helpful [...] to call it that? []

# AB: It's more / What I've done is: I've suggested that there's a mechanism by which, if we look at the mechanism by which we train the AI, whether it's columns saying what is beautiful or whether it's just jpeg files [Ed. arrays of pixel colours], that most of it is translated into the early aspects, the first four or six aspects. []
# Now, I put that up as a hypothesis. That that's what current AI is doing even when it chooses something beautiful or tries to make something beautiful. []
# Actually, the first one, where it has columns, I was thinking of an AI robot that actually draws a beautiful picture. The second one, in which we present jpeg files, I was thinking it's more just saying whether something's beautiful or not, and, if you like, how much it might fetch in an art auction or something. []

[AB: Are there two tasks that AI might do, and AB and NB might have been initially assuming different tasks? (a) AI evaluates something, (b) AI generates something. In (a) can we usually translate into early aspects? In (b) do we usually need to embody/encode rules of the aspect in which the generation of something is meaningful? Or, rather, when in each? ] []

# Either way, I've argued that it's really boiling down to the first four aspects. []

# Now, how do you see that argument? Can you think of (a) strengths of that argument, (b) the weaknesses? [] ***
# I'm not saying that I necessarily believe that argument. But it's an argument that I would use from Dooyeweerd, whether it's right or wrong.
[31.50]

# NB: If that's correct, then, in a sense, we're saying that a whole lot of what we mean by the concept of aesthetic functioning really can be boiled down to those first few aspects. Which is antithetical to what we are claiming. []
# NO: Ah!
# AB: No, I'm not saying that. I'm saying that we do that when we give it records. []
# Because even if we only give it jpegs, our coding of beautiful/not-beautiful, or any spectrum in between, our judgement that it's beautiful, in reality depends on the social aspect, but we're not giving it information about the social aspect. We probably don't even know it. Yknow, that someone in Africa might think something is beautiful that we don't and something non-beautiful that we do. And so, I'm definitely not saying that we are reducing it, but that in the process of trying to train the AI we are transducing it, and in our transducing we are very narrow. So there's a reductionism. []
[33.20]

[AB: The fact that we have earlier aspects is because the aesthetic functioning depends foundationally on those earlier aspects. For example every artwork requires a medium, which is physical and either spatial (painting) or kinematic (music). ] []

# AB: Does that make sense? I'm not saying [that] this is what I believe; it's just an argument. I want it picked apart and so on. []

Summary [za428]: Hypothesis: What AI is doing is working with material from early aspects, and we translate/transduce other aspects into that.

--- Example of AI Art, 1 [za429]

# NB: Can I share my screen and show you a ?? demonstration a sec? # AB: Please do. [NB shared screen]
# NB: OK, this is a website called Wombo. I do not know where they get that word from. But anyway, I can give it a couple of prompts and I could say "clock" and - what's another noun? - "cat". I can pick a style and say "Do clock and cat in the style of Surreal" and click "Create". And it's going out and doing a Google search for clocks and cats and now it's going to draw some art.
# And so, there's my artwork. And you can see there's some clockness to it and some catness to it, and / I dunno that I would call this beautiful, but if I don't like it, I could try again.

[Ed. See below for how Wombo actually does work. Most of our discussion about this, from there to there is actually mistaken! But it can be useful as discussion. ]

# AB: Well the question of course is not whether it's beautiful but whether it's art.

[AB: Actually, Dooyeweerd's insight into the aesthetic aspect was neither, but whether it has harmony. ]

# AB: But I could see that, yeah it could be. # NO: It's not art. # NB: It may ... I like to hang on my wall. # NO: It's colourful. But art is an imitation of reality; that's the traditional definition. # And this abstraction phase that we're going through is far from ?? That's my personal opinion. # NB: [laughs] I was going to say, that might not be widely agreed on. # NO: It was at one time. []
[35.20]

# NO: That's symptomatic of the problems we have. It's symptomatic of the delusion we're going through as a society, a culture and a nation. We're so detached from reality. Which is the point that I'm trying to make about AI. It is detaching us from reality. And, as this author pointed out, it's increasing stupefaction. [] ***

Summary [za430]: There are current apps for AI art. But is it real art?

--- Example of AI Art, 2 [za431]

# AB: NB, could you do two things. One is post the URL of that thing. And also, could you take a screenshot of that cats and clocks thing? # NB: Sure, I can put it back on screen.
# NB: By the way, Hi, TB.
[TB joined]
# AB: I can put it up on the website if you're happy with that.
# NB: It went away, but I'll try again. Do we want different words this time? Clocks and cats didn't look so good. # AB: Well I thought it looked very colourful.
# NB: "Flower" and "Volcano" in the style of Dali. # NO: Dali [laughter].


Wombo created this from 'flower', 'volcano', 'Dali'

# NO: Wow, look at this.
# NB: There's no human / I shouldn't say there's no human aesthetic functioning that went into it; that would be incorrect. The human aesthetic functioning that went into it is of a very different nature than of a traditional work of art. [] ***

[AB: Very good point. I think Dooyeweerd would see the difference as: In art, the human is functioning aesthetically pre-theoretically, i.e. without thinking about it. In training an AI system, the human is functioning theoretically, selecting things according to their aesthetic aspect, of which the human is analytically aware. What do you think? ] [] ***

[Trying to get the picture saved.]
# I wonder if there's a way for me to get just this picture on my / Can I / # AB: Just take a screenshot, then we can crop it. # NO: You using Clip-tool? Shift-control-Windows. # NB: Alt-Windows-PrintScreen. Need a new app to get this. # AB: On the PC it puts the screenshot into the clipboard, but on Android it's wonderful: it puts screenshots into a file, so you can just keep doing screenshots. # NB: Alright, I'll get this / and I'll email to AB when I got this figured out. # AB: [...] []
# NB: OK, I'll ??start?stop??cheering?? now because we've seen the Dali version of a flower volcano.
# AB: Or even upload it to the Drive, see if that works for you. But email it to me as well.
[38.30]

# NB: By the way, AB, did you see that TB is here? # AB: No. [...] Did you see the flower volcano? []
# TB: Yes.
# AB: [Summary for TB] What we are talking about is, which aspects AI might be successful in. And I was suggesting the first four aspects or the first six or something. And then NB said, "Well, what if AI produces a beautiful picture?" and I went into the ML algorithms that I thought would do it, either by columns of saying rules of aesthetics that we are putting in, or by a whole load of jpegs, and we were going through a little bit of that. But then NB got this software that does AI art in various styles. []

# AB: Presumably, NB, those styles: there are some laws there that / well, they'll have put some laws in. Presumably in terms of ??? of RGBs. []
# NB: By examples. The laws are there but they are by example rather than concretely stated. The human has to know / No human was involved in saying "This is what a painting by Salvador Dali looks like." We showed the computer a whole bunch of Salvador Dalis and it came up with a SalvadorDaliness score. []
# AB: So, in that case, we just put in jpegs? # NB: Yeah.
# NO: Well, how was that possible? I mean, if something / all those styles that were there, they had to feed it something to say "This is what that style was like." # NB: Yes, jpegs: examples of Salvador Dali. # NO: Yeah, that's human.

[AB: So, selecting the Salvador Dalis is human input?]

# TB: In terms of the computer, Salvador Dali is something decent. It's going to take quantity. It's how it develops its metrics. Sort of make its interpretation of human perception. And that might be, I dunno, number of downloads or usage or / But does that really tell us a lot of useful information about aesthetic aspect of understanding? [] ***
# TB: It could be ?? imitating the discipline, isn't it.

Summary [za432]: NB showed a piece of artwork generated by AI. How does it do this?

[41.30]

--- How Wombo Actually Works [za433]

[Ed. I went and investigated how Wombo works. This is what I found - very different from what we are all surmising above! Except for TB's hint that it is imitating the discipline, which is partly correct. :-O ]
# The following is from How does the Wombo Dream app work?.

"Wombo Dream's app - like many other apps that create generative art - is basically based on two artificial neural networks that work together to create the images. The names of these two networks are VQGAN and CLIP.

VQGAN is a neural network used to generate images that look similar to other images. CLIP, on the other hand, is a neural network trained to determine how well a text description fits an image.

CLIP provides feedback to VQGAN on how best to match the image to the text prompt. VQGAN adjusts the image accordingly and passes it back to CLIP to check how well it fits the text. This process is repeated a few hundred times, resulting in the AI-generated images.

...

I would like to use an example to show how the procedure works. The text input for the following project was "nether portal rendered in Cinema 4D". A total of 250 iterations were run and I saved a screenshot every 50 runs. Here is the result:

Image 1: Everything always starts with a rather inconspicuous "seed" - a colored area with some light structures.



Image 2: After 50 iterations, a lot has happened and you can see a kind of nether portal from Minecraft.



Image 3: After the first 100 iterations, the roughest structures, the colors and the main motif have in principle already formed.



Image 4: In the last calculations, the focus is mainly on subtleties.



Image 5: We've now gone through 200 iterations and the bright flames at the top of the portal are still doing something.



Figure 6: The result is there after 250 iterations. Basically a pretty work of art for Minecraft fans.



There are some people who run up to 2000 iterations, but I think that's more of a special case, rather overkill for amateur artists like me."
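[Ed. A toy, runnable Python sketch of the feedback loop the quote describes. It does not use the real VQGAN or CLIP networks: a simple 'generator' stands in for VQGAN, a distance score stands in for CLIP's text-image fit, and an accept/reject rule stands in for the gradient feedback. Only the shape of the loop - propose an image, score its fit to the prompt, adjust, repeat a few hundred times - reflects the description above.

    import numpy as np

    rng = np.random.default_rng(0)
    prompt_code = rng.random(64)   # stands in for CLIP's encoding of the text prompt

    def generator(latent):
        """Stands in for VQGAN: maps a latent code to an 'image' (here just a vector)."""
        return np.tanh(latent)

    def fit_to_prompt(image):
        """Stands in for CLIP: higher means the image fits the text prompt better."""
        return -np.sum((image - prompt_code) ** 2)

    latent = 0.1 * rng.standard_normal(64)   # the rather inconspicuous "seed"
    for _ in range(250):                     # roughly the 250 iterations in the example above
        candidate = latent + 0.05 * rng.standard_normal(64)   # propose an adjusted image code
        if fit_to_prompt(generator(candidate)) > fit_to_prompt(generator(latent)):
            latent = candidate               # keep adjustments that fit the prompt better
    final_image = generator(latent)
]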

Summary [za434]: Wombo 'cheats': it takes existing pictures of a given style and uses AI to generate something similar to those that fits the text typed in. There is indeed true aesthetic functioning in what Wombo generates, from the original artist.

--- Fear that All Is Math in the End [za435]

# NB: My fear, that I have at the back of my head, that I'm afraid of in the back of my mind, is that this exposes all my high, mighty thoughts about the importance of the aesthetic aspect, as just a mirage, and [that] really what I thought was beautiful, wonderful, rich aesthetic functioning, has really just been math all along. []
# Now, I don't really believe that. []

[AB: I can think of two things that might lessen NB's fear.

1. Since the aesthetic aspect is post-social, its functioning depends on, and is affected by, social functioning. So what we deem beautiful etc. might be socially constructed and thus liable to vary across cultures. The thing about aesthetics that is not socially constructed, which transcends us and is independent of cultures, is the meaning-kernel of the aesthetic aspect, which Dooyeweerd argued is harmony.

2. The way Wombo works is not (as I think we all assumed) to generate pictures from a neural net trained with lots of examples of art, but it 'cheats'. It goes and gets a suitable picture from the Internet of the chosen style (Salvador Dali or whoever), and then generates a picture similar to it (using VQGAN) that matches the other words given (CLIP). This means that there is a huge amount of human aesthetic functioning in the final result - that of Salvador Dali! So, there has been "beautiful, wonderful, rich aesthetic functioning" even in Wombo's artwork. ]

Summary [za436]: Might it all be just numbers in the end? No, at least not in Wombo.

--- Aesthetics and Number [za437]

# AB: Well, let's talk about that. Because, the Greeks tried to not reduce, well maybe reduce, but certainly explain aesthetics in terms of number. []
# NO: If you were a disciple of Pythagoras, you would say, "Yes. The numbers are real and that's what you are experiencing." []
# NB: Sure. But I'm not, and I don't want to be. I want to be a disciple of Dooyeweerd. But it just feels like evidence to the contrary. [Ed. That was said without the knowledge above.] So I've got this internal tension in myself, that I don't know how to explain how this software works without pushing back on Dooyeweerd. []

[Ed. See above, on how Wombo works. It no way pushes back on Dooyeweerd!]

# AB: How do you mean?
# NB: Because in the /
# [zoom mumble]
# AB: Because Dooyeweerd might have been wrong about the aesthetic aspect. And [in principle] it may well be that we got / He said "These aspects are gonna change." And this might be one where it has to change. []
# TB: Well, change it.
# AB: What he would argue. And the people in aesthetics would say that, yes, mathematics does align with aesthetics quite a lot, and spatiality and so on. But it's not the whole thing of aesthetics. It's got a very strong aspectual link [with the mathematical aspects], possibly stronger than most aspectual links. But there's something more. Especially when Dooyeweerd calls it [the central meaningfulness of the aesthetic aspect] "harmony" rather than "beauty". []
# So, I dunno. We'd have to think.
[43.55]

# NO: That makes some sense, because I think if you / even if I say, "Make a beautiful picture" and the computer can create this wonderful picture, realistic picture as opposed to an abstraction, it's in a sense that it's only telling that art again is an imitation of reality. []
# It's not the whole harmony. It's not the universal truth. [] ***
# NO: And so, if a machine does it, should I really be surprised? Well, maybe not. []
# AB: Good point.

[AB: Interesting point. Art is not the kernel of aesthetics. Rather, art is produced multi-aspectually, led by the aesthetic aspect. But harmony (let us say) is the real kernel. And mathematics probably has no clear idea about harmony. ]

Summary [za438]: Aesthetics has strong link to mathematics, so machines can do a lot, but there is something more.

[44.55]

----- AI and Economics: Jobs [za439]

# AB: Now, can we go back a little bit to [AI and] economics?
# Because, what we've been doing is really about AI itself, trying to understand AI from an aspectual point of view. []
# Well, actually, maybe that's what we should be doing, not following AI and Economics at the moment. []

# NO: We'll never get to this discussion! We'll never get to Stuart's wonderful comment. []
# AB: Well, in some ways it doesn't matter, because it's important for us to try to understand what we can say about AI from a Dooyeweerdian and Christian point of view, without being reactive and taking sides, and so on. []

--- Meaningful Jobs [za440]

# NB: Where it intersects with economics, particularly in terms of the world of work, really does get to some of the same questions we've been talking about. []
# In terms of "What is human meaningfulness?" That the meaning we get from having a job is not so different from the meaning we talk about when we talk about aesthetic functioning, like with this artwork that we've been examining. # AB: Good. [] ***
# NB: Putting your finger on why having a job seems so important to living out my calling as a human being, is a little bit vague. []
# But that's my fear about where AI and Economics are going to intersect, is that, if things that used to make people feel useful get done by machines now / I hardly need to finish the sentence: that seems Bad. []
[47.00]

# AB: Well, let's look at that. I think that's a really helpful thing, you've just said. Because I've been thinking about that. []

# AB: [...] You get people that put an awful lot of effort into their careers. Even to the extent of sacrificing their families. Then, at the end of their careers, they think, "So what? What was that all worth?" []
# Example: I understand that Bill Gates [...] made a lot of money [...] wanted to leave a legacy. [...]
# Now, why am I saying this? Just as an example of getting to the end of your career, the thing that made you meaningful for 30-40 years, that you thought was meaningful, turns out not to be meaningful. []

# AB: And I would suggest that an awful lot of human life that seems to be meaningful, if it's not grounded in Christ, if it's just there for its own sake, turns out not to be meaningful in the end. []
# And the meaningfulness of a thing is part of our pistic aspect, our pistic functioning: what we idolise, what we commit to, what we believe to be most important. And, if that's not linked to the Living God, then it's ultimately meaningless. []

Summary [za441]: Jobs are meaningful, not just earning money.

# Does that make sense?
# NB: I agree [to some extent, but]
# NO: Yes and no.
[49.45]

--- Unconscious Meaningfulness (in Everyday Life) [za442]

# NB: I'm not sure that the link needs to be conscious. # AB: No, it doesn't.

# NB: [Example] I think of my grandfather, who was a farmer his whole life, and at the end of his career he looked back at 40 years spent pulling weeds out of his Soybeans, with great satisfaction, with a rich / pride is not quite the right word, but "I did what I was called to do." # AB: Yeah. NB: And he was a Christian, but I don't know that he would ever use the words, "God called me to pull weeds", so much as "I worked hard and that's what I was supposed to do." []

# NB: Maybe I'm not making much sense here. # NO: You make sense to me. # AB: [zoom mumble]
[50.45]

# NO: [...] You have to go back and also say that God's command was to be fruitful and multiply, and to be stewards of the Earth, you could say. And to love one another. []
# That doesn't mean that, at the end of the day, everything we create is a new form of something. [AB: NO seems to be meaning that the supposed humdrum can be good and meaningful. c.f. Foundational Economics. ] [] ***
# NO: After all, what you [AB] seem to be saying is "We should all be monks." # AB: No, I wasn't really meaning that. I had a grandfather who was a farmer as well. And there's some / []

# AB: I think it's a very helpful example, NB, this grandfather farmer of yours, because it modifies what I just said. I fully accept that. It's not whether we are consciously aware of being called. It's whether we are - well, he was a Christian, and so he had a high / []

[AB: (Towards understanding AI?) All we do is meaningful in some way, especially in the sight of God, even if not in the sight of economists or technologists. Maybe a sound understanding of AI application might be grounded in meaningfulness - which of course aspects help us think about. ] [] ***

# AB: Meaningfulness must refer beyond, Dooyeweerd said, ultimately to the Creator. So, there was that: Because he was a Christian, there was a Creator, and a loving Creator, and because he just enjoyed working within his Heavenly Father's Creation. # NB: He just loved to see those plants come up every Spring. It warmed his heart. # AB: It warms my heart. You're right. So, thank you for that. []

Summary [za443]: Treat all everyday life as meaningful, not lightly to be replaced by AI.

[52.25]

--- AI and Meaningful Tasks [za444]

# NB: So then, where AI comes in is, if an AI robot was pulling those weeds instead of my Grandpa, that's not bad as long as we have an answer to the question, "OK, but what would my Grandpa do?" To replace human labor without replacing the task for the human doesn't get us anywhere. [] ***

[AB: Presumably NB means "meaningful task", "task worth doing" not just "any task". This corroborates our, and Dooyeweerd's, emphasis on meaningfulness not just activity. ] []

[AB: Meaningfulness and value imply each other. Value is what economics concerns itself with. So, maybe this is where the link with economics lies. As NB seemed to intuit above. ] []

# NO: Well, [...] the robot would need lubrication, it would need new gaskets, it would need to be programmed for the field, it would need somebody to put the seeds in it. []
# NB: So, now you're being a servant to a robot. That's not better. [laughter] # NO: Well, we are today! We are all servants of stuff. []
# AB: Well, think of car mechanics over the last 100 years. Someone who is a good car mechanic, and spends their life servicing cars. # NO: And thank God for that because otherwise I would have worn out my knees much earlier. [laughter] []

[AB: Car mechanic tasks become a satisfying skill. Just as growing food had been. How do we see this? Maybe as satisfyingness, meaningfulness, in different aspects: formative and biotic, here. The real meaningfulness problem, therefore, in a farmer using robots might not be lack of meaningfulness per se, but rather that it is forced on them. That is an evil in the juridical aspect and probably the ethical too. (Towards understanding AI?) ] []

# NO: So, I think that's the march of technology. And the issue is, what if that robot is growing poppies to make heroin? Now we've got the issue that comes back to "How does this affect our human state?" []

Summary [za445]: Humans need meaningful tasks once robots replace their existing tasks.

--- Good, Harmful and Useless [za446]

[Ed. "Useless" seems to cover nonessential, wasteful and pointless. But the word "nonessential" is often used as a synonym below in a possibly unhelpful way. ] []

[Ed. Note: This is a major issue in our Rethink of Economics, and has been discussed in the Economics discussions, z905, za05, za06, za08, zc28, zeg13
]

# AB: There [in talking about growing heroin] we are coming into the issue of Good, Harm and Useless [which we discuss in Rethinking Economics]. []

# NO: Good, Harm and Useful is very similar, it seems, to Essential, Not-essential and Spiritual. []
# AB: [Indeed so.] Well, what it is, is that [AB: I had to clarify my thinking about this in economics, and came to the following]

So it [the Useless] is a different kind of thing [from the duality of Good and Harm]. And thereby [we] idolise it and also sacrifice the good that could have been done in that time. []
[55.20]

[AB: NO's linking to the Spiritual seems very perceptive, if the idea of idolising is valid. And also if spirituality and meaningfulness are linked. ] []

# AB: [Reiterating with clearer statement:] So the Good and Harmful are a duality in each aspect, and the Useless or the Wasteful is when we idolise something and sacrifice to it the good that could have been done. [] ***

# NB: "The Good that could have been done" is in a different aspect from the one being idolised? Is that what you are saying? []
# AB: ???ly yes. Because let's say somebody idolises, well not the economic aspect, and / no let's say the social aspect, Graeber's Bullshit Jobs, jobs that are created just to fill roles in organisations and so on [over-elevating the social aspect]. Well, those people could have been employed in, for example, enhancing justice for, let's say, the developing nations of the world. But because they are in this bullshit job their [...] 8 hours' work a day, 5 days a week, is not being used fruitfully for good. [...] Opportunity costs. []
# NB: Sure.

# NO: So, let me make sure I got this right. So, Good and Harmful is the duality of each aspect, right? So there's a good aspect to it and there's a less-good aspect. # AB: versus evil or/ []
# NO: So then you say that useful is the good that could have been done. Are you saying that is Essential versus Nonessential, or are we talking about something else? []
# AB: [AB: Basically, my answer to NO is "Yes", but what I said following is, I think, rather confused. ] []
# [AB: AB's confusion starts here, because he went rather defensive, having had the idea of the Essential-nonessential distinction challenged early on in the Economics discussion. So he began to think it out more, and is still doing so. And he brings that defensiveness and still-thinking into his reply to NO. Also, he goes off on the track of the non-binaryness of the Essential-nonessential distinction.]
# AB: I'm still thinking about this, trying to / on the basis of what we've been talking about.
# AB: But the Nonessential: a point was made earlier that what is Nonessential in one aspect is Essential in another, or seems essential, for a later aspect.

# NO: So, how does this differ from Good and Harmful?
# AB: Good and Harmful. [AB: It might have been helpful here if AB had mentioned they are two different dimensions, at right angles to each other. You can have good and harmful essentials, and good and harmful non-essentials, such as:

2 by 2 matrix of essential-nonessential, good-harmful.

] []

# AB: [AB: I now find the following confuses this issue, though it might be helpful to some.] Let's say growing a crop brings biotic good, and painting / composing a beautiful piece of music is aesthetic good - two different aspects. Now, if someone elects to use their whole life just composing and not contributing anything else, and sort-of demanding that the next-door farmer grows enough food for both of them, then (in some ways that is Good because it is specialization of tasks and so on) - but if it's overdone, then it probably starts to become harmful, or wrong, let's say. And especially in examples of this, when someone decides, "Well, I'm going to do my aesthetic thing of leisure reading or art or something, whereas really what I should have been doing was to go and help in a soup kitchen to feed the poor. God called me to do that but I disobeyed." []

# AB: See what I mean? It's not an either-or, this Essential-Nonessential, but it's a matter of what could we have done? []

[AB: Essential-nonessential is at least a spectrum, but, better, it's multi-aspectual. ]

Summary [za447]: When considering AI and the Economy, we must recognise that some economic activity that AI might encourage or enable is harmful rather than good, and also that, orthogonal to this, much is nonessential or useless rather than essential and useful.

[AB: If that is so, should proponents and developers of AI be more cautious, less keen, about their proposed applications? Are too many AI applications developed solely for fun, challenge or competition? ]

# AB: [AB: Do I sound like a puritan in saying that?] The Puritans were too much for only looking at the essentials and cutting down on the non-essentials. But God said, "OK, on the 7th day I rest." So there's a proportion there [it's proportion, not binary either-or]. []
[59.45]

--- Two Steps in Understanding AI Use [za448]

[AB: The following was not in the discussion, but it might add something to help clarify and understand.

Here, in the AI being used to pull weeds in a crop that is either Soybeans (good) or heroin (harmful), I can see that we have two links of usage/benefits/harm: [] ***

AI -- for growing X -- the X is for Y

where the Y is some human functioning, good or bad. At both stages, for both X and Y, we need to consider the good and bad (and the useless?) of the human-and-AI functioning involved. For X:

For Y:

Summary [za449]: The use of AI has two "for" steps, each of which can be good, harmful or useless.

Towards understanding AI?]

--- Assessing Essentiality [za450]

# NB: The difficulty is that it / this / it's so difficult to make sure judgments about the value of things, because people will disagree. # AB: yeah, of course. # NB: That doesn't /

# AB: I would say that's a different issue. Related. But then that's the issue of judging (and God calls us to be wise in our judgment), and it's not whether we get it right or wrong but whether (because I see this as a training ground for the Next Life) it's whether we are humble to learn, and be corrected, and let the Holy Spirit work in us, and maybe convict us, when the Holy Spirit says, "you did wrong there; you were selfish, self-indulgent, and so on" and we either justify it or we say "I'm sorry, Lord. Next time, please, I want to do it differently." []
# There's the pistic aspect coming in, and the ethical aspect. []

Summary [za451]: Judging value (essentiality) is not easy, and requires wisdom. [AB: Aspects can help.]

[1.01.00]

--- Jobs, Identity and AI [za452]

# NB: And I think that, when we are talking about AI and the Economy, and particularly about AI and jobs, that pistic aspect is one that is so frequently overlooked, in that "What does my job / what does my view of my job say about my view of the world as a whole?" [] ***
# AB: Very good point. Can you expand on that, please?

# AB: [Noticed that an hour had gone] Now, bear in mind, that if anyone had to go after the hour, they should / # NB: I'm afraid that I do need to bow out fairly soon because I've got a couple of things waiting for my attention.

# AB: OK. Can you expand on that?
# NB: Yeah. When someone says "What are you?" at least in an American context (and I think when I was in England it was similar) the answer is "I am a professor" or "I am a plumber" or "I am a farmer". Our job and our identity, who we see ourselves to be, are very related in a way that, in my mind, that's pistic functioning. []
# And so, perhaps we ought not to do that. Perhaps when someone says "What are you?" my answer should be "I'm a father" or "I'm a husband, and I'm a husband who happens to do some teaching at a college once in a while, but mostly I am a husband." I don't answer that way; I say "I'm a professor." []
# AB: Yeah.
# NB: And so, that may be an illness, but I think that may be a case of /

# If AI starts dramatically changing or reducing the nature of employment, we are not ready for that as a society, at that pistic level of who we see ourselves to be. []

# TB: But wouldn't we still have an expertise per-se? With AI there is still going to be a human expertise or specialism, isn't there. []
# And that might not necessarily be our paid job. It may be our / well, it's vocation, I suppose, and that may be something we do voluntarily. And of course, for some, to be a housewife, or househusband even, is a vocation. # NB: Yeah. []

[AB: c.f. the issue of unpaid household work in our Rethink of Economics. ]

# TB: So, yeah. And of course, AB, you ??taught me what you say about yourself, there, isn't it. Outside of work I'd probably never talk about the fact that I'm an academic to everyone, because it's so boring to them??. [laughter]

Summary [za453]: In the Economy, jobs often provide a person's identity, but sometimes the unpaid activity provides our identity. Using AI should take account of that.

[1.03.33]

--- Four Questions on AI and Jobs [za454]

# NO: Yeah, that's something interesting, that I want to think about some more. The change in the nature of work, which we have here in the States and in most of the developed countries, moved into a service economy basically. Back in the 1800s and earlier, 70% of the people were farmers. And then factory workers. And now, that's only thirty summat % of the population. []

# NB: So there's the question [Q1], "If AI reduces the number of employees needed, will other new jobs come into existence to take up the slack?" [] ***
# After the Industrial Revolution / it's not as though no-one lost their job in the Industrial Revolution, or later when automation came in. But, over time, new types of jobs came into existence and we still had full employment. []
# NB: I don't know that we're promised that will happen with AI again. [] ***

# NB: So, we've got the question [Q2] of, "Can we make sure that the benefits of AI are shared evenly rather than, yknow, aggregating wealth in a small area?" [] ***

# NB: And we have the question [Q3] of, "Even if we could find a way to share the economic benefits of AI evenly, would we still be impoverished by not having meaningful things to do with our lives?" [] ***

# NO: Those are two separate issues. These are great questions to dwell on. []
[1.05.50]

# NO: I'm going to dismiss the question of whether we can achieve some kind of equality of stuff, because I think that's a fantasy. []
# NO: But I think the more important question / I would rephrase this a little bit. The issue I think is the fact that AI is allowing us more leisure time.
# The work week has diminished. It used to be 60-70-80 hours. Used to start at 16. We've now got a society, again, that's involved with AI. People are working ???in-house/health??, people now are working at home. They don't even want to go in to work per-se. They're just going to jump on and off the machine whenever they need to. []

# NO: So, we have a great deal of leisure time. My issue [Q4] is, "Is AI filling that in a non-meaningful way?" [] ***

Summary [za455]: Four important questions:

[AB: These four questions require serious consideration. I believe Dooyeweerd can help us answer them.]

----- Usage of AI [za456]

--- 'Delivery' of AI: What is AI Used For? [za457]

# That's my second point that I think that you are asking. And I think that's a real problem. I think that's an issue with the nature of /
# It's not the AI per-se, it's the delivery of it, it's the fact that it's coming like this and not like this. [] ***

# That's the problem with AI. It all goes back to the eye. I've said it many times. Aristotle said "The eye is the organ of desire; the ear is the organ of instruction." In all we're doing, we're creating more desire, which is idolatry. []

[AB: John: "Lust of the eyes, lust of the flesh and the pride of life" ]

# AB: Can I just ask you to expand on the delivery. What do you mean by delivery of it?
# (Then I've got something to suggest about jobs, but what about the delivery thing?)
# NO: So, how does AI in the non-industrial sense, how do we engage with AI? We do it from an information standpoint, from a visual standpoint. []
# NB: An awful lot of where AI gets used in today's world, you're saying, is like through our social networking feeds, where AI is selecting which things to show us, to keep our attention. []

Summary [za458]: AI, an information technology, is used to increase our desire and capture our attention.

--- Two Purposes of Using AI [za459]

# NO: There's this funny dichotomy. Because a whole lot of AI is focused on producing things. Like, for example, in our grocery stores: we can go to our grocery stores and we know there will be so many potato chips there, because algorithms have decided that the store in this area needs this kind of potato chips and the store in the Mexican area needs Fritos. []
# I'm exaggerating, but the point is, there is a lot of useful productive AI that's not dealing with [aimed at altering?] our leisure mental state. []

Summary [za460]: Two kinds of purpose of AI: (a) "productive" AI, to reduce waste and increase efficiency e.g. by tailoring delivery to suit the customers, (b) "mental state" AI, which tries to change our mental state in our leisure.

--- How to Understand that with Dooyeweerd [za461]

# NO: And that's the concern I have. I don't know how to put that in Dooyeweerdian terms, but that's where I fear that the AI /

[AB: Dooyeweerd might help us drill down to the root of the problem of mental-state AI. Look for the primary purpose or motivation of those who design its features, which, NB suggests, is things like "to keep our attention". Then ask, "Which aspect makes that purpose or motivation meaningful? And is it with or against the norms of that aspect?" The answer is almost certainly: the ethical aspect and, since it is a selfish motive, it is against the laws of that aspect. Hence it will do harm in the long run. Then, having done so with a primary purpose, do similar for secondary ones. For example, one might be to give people what would interest them rather than what would not. This again is the ethical aspect but, this time, it is with the norm. Note: Declared reasons are often the secondary ones, with hidden reasons being the primary ones. ] []

# NO: I'm worried about my grandkids watching their little video-games all the time. They are going to have a different reality than the reality I ??grew up with??. []

[AB: Dooyeweerd can help here too. "Watching ... video-games" involves functioning in the formative and aesthetic aspects, and often today the social aspect, in line with their norms, but also the economic aspect, of consuming time, usually too much time, which is against the norm of that aspect. "Different reality" - what is wrong or right about that? On the plus side, a different reality can stimulate thinking (analytical aspect, distinctions). On the negative side, the different reality can be misleading, which is juridical dysfunction. ]

Summary [za462]: Dooyeweerd can help us understand the issues relevant to use of AI by asking which aspect makes things meaningful, and whether it is good or bad in that aspect.
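[AB: A minimal sketch, in Python, of the aspectual-analysis procedure suggested in the note above. It is purely illustrative: the aspect names follow Dooyeweerd, but the example purposes, the norm judgements and the class and function names are hypothetical assumptions of mine, not anything discussed on the day.

    from dataclasses import dataclass

    @dataclass
    class Purpose:
        description: str   # what the designers intend the AI feature to do
        aspect: str        # the aspect that makes this purpose meaningful
        with_norm: bool    # True if it works with that aspect's norm, False if against
        primary: bool      # primary (often hidden) purpose, or secondary (often declared)

    def assess(purposes):
        """Print a simple aspectual assessment of an AI feature's purposes, primary ones first."""
        for p in sorted(purposes, key=lambda p: not p.primary):
            role = "primary" if p.primary else "secondary"
            verdict = "with the norm (likely good)" if p.with_norm else "against the norm (likely long-term harm)"
            print(f"{role:9} | {p.aspect:8} | {p.description}: {verdict}")

    # Example: a social-media feed recommender, with hypothetical values
    assess([
        Purpose("keep users' attention to maximise advertising revenue", "ethical", False, True),
        Purpose("show people items that genuinely interest them", "ethical", True, False),
    ])

The point of the sketch is only to show that the procedure can be made systematic: name the purpose, name the aspect, judge it against the aspect's norm, and treat hidden primary purposes before declared secondary ones. ]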

[1.09.36]

----- AI and the Economy in the Light of Our Rethink [za463]

[Ed. We now turn back to how to think of AI and the Economy in the light of our Rethink, rather than conventional economics.]

--- Presupposing Paid Jobs? [za464]

# AB: Can I go back to jobs, because from our Rethink in Economics, we are recognising the need for, if you like, human fulfilment in the Creation. And we can see that from Dooyeweerd as multi-aspectual if we want. []
# But that does not necessarily mean / need paid jobs. There's an awful lot of unpaid work that is fulfilling and is not inferior to the paid work done in jobs. [] ***

# So traditional economics of both left and right assume - they make a presupposition - about the need for paid jobs. And I think we are questioning that. []
# We are not saying "reject jobs" but we are standing back and saying, "What's really important about jobs?" []
# And it's the pay, because in our society as it is structured in the Global North, the way you survive is by having a paid job, and then letting that money flow through you to the supermarkets to get the stuff you need, or think you need, that is 'essential' for each aspect of your life. []

[AB: Is this a good understanding of the importance of paid jobs? The emphasis is on money as flow rather than static possession, in that it only releases its value when it flows, i.e. is exchanged for goods or services. People use money to obtain essentials, and they get money from doing a paid job. So money is only a means to obtain (what people see as) essentials, or maybe some nonessentials too. When the economic function of jobs is seen like that, then it helps us incorporate unpaid activity into our ideas.] []

# But the idea of getting money from a paid job and letting it flow through you to a supermarket is not an Eternal Absolute. So take that out, and instead have us using our time to get things, the necessities in various aspects, [in some other way]. []
# NB: To serve others. [zoom mumbles] The purpose ???of money is?? service. []

# AB: But even subsistence farming /
# When I was in Uganda I was speaking to a guy and he said, "Well, we don't need money to live. We grow our own food. We build our own houses. We bake our own bricks. All we need money for are school fees and hospital fees." [] ***
# TB: And even then you might not even need school fees, because you might be able to educate yourself. Health may be a bit harder, yeah. []
# AB: And so, that has never left me. []
# TB: And you have the case of the South Sea Islands. They have, yknow, so few people on the island they don't, I think, use any money. They just all live together and keep each other going, and serve each other, literally. []
[1.12.45]

# AB: So, that's made me stand back from the need for jobs.
# Now, in the affluent cultures, we 'need' jobs. But it's only a relative need. And I think we can / this standing-back from it helps us think, "Will AI get rid of our jobs?" And so on and so forth. []

# AB: I remember the 1970s, and people were talking about AI taking all our jobs and giving us huge leisure time then. And it didn't happen. Now, it might be that it will happen this time, or it might be that, again, it won't happen. []
# Because of the nature of human functioning [AB: multi-aspectual, and occurring whether paid or not]. []
[1.13.30]

[AB: This wider perspective might help us understand better, especially with the aid of Dooyeweerd, by drawing attention to human functioning, whether paid or not, in every aspect. The question of whether AI will replace our jobs might be replaced by the question, "What human functionings can be replaced, aided or generated by AI?" ]

Summary [za465]: Do not presuppose paid jobs; there is much human functioning that does not need money but is important. That puts AI-taking-our-jobs in a different light.

[AB: But what is the new light? Needs to be worked out. ] []

--- (NB Leaves) [za466]

# NB: I'm afraid that I need to step out here. This has again been very stimulating. I look forward to the next one.
# AB: NB, thank you very much for /
# NB: I've posted the file in the chat, and I will also try to get that uploaded to our shared Drive and email the file to AB. []
# AB: That's great. Thank you very much.
# NB: Have a great day, everyone. # TB: ??? # NO: Blessing.
[NB left]
# AB: OK, shall the three of us continue, or shall we call it a day there?
# NO: I've probably got about 10 more minutes myself, so, however we want to do that.
# TB: Is there anything to round off I suppose.
# AB: My mind is / what questions? [AB: was thinking about whether any of the questions sent round before the discussion could be useful to discuss.]

--- AI and Our Rethink of Economics [za467]

# AB: I think, if we are talking about economics and presupposing our Rethink, then in some ways it would be a good exercise to go through each of the major categories of our rethink, and see if that says anything about economics [AB: I meant AI!]. So,

# AB: Somehow, maybe this is not /

[AB: So, we seem to have been using our ideas on economics usefully in AI. Each of those should be worked out in a way relevant to 'AI and the Economy'. Some of the working-out has occurred above; more is needed. (Also, we could say the same about AI without linking to the Economy: AI should be embedded, should do good and be neither harmful nor useless, has multi-aspectual value and incurs responsibility.) ]

Summary [za468]: All our major ideas in Rethinking Economics are relevant to AI and the Economy.

[1.15.30]

--- Issues Common to AI and Economics [za469]

# NO: I think that AI / It's very similar to economics, in that it's somewhat hard to get a grasp on because it has so many dimensions to it. []

# The way I like to think about these things is geometrically.
# What happens is / The reason we have so many discussions about economics and how it should be constructed or viewed is because economics, and AI is part of this, is like a pivot. Economics is what pivots between our nature, the things that are out there, and how we translate those things or transform them.
# Economics is this dial that's going round all the time. We look at it one way and it looks like one thing; we look at it another way and it looks like another thing. ??You get it here?? it's that, it does that. []

# What I think we're trying to do is bring out the fact that economics is not just about money and the GDP and the idea that, yknow, the only good is your ROI (Return on Investment); that this economics, because it's this critical wheel that pivots between these functions, has to be imbued with everything that's human. It has to be imbued with a soul or a spirit. Otherwise, we are going to continue to dial up the wrong answers. []

# AB: You said economics is the pivot but then earlier you said AI is the pivot. Are you saying that the same arguments apply to both? # NO: I think the same arguments do apply to both. []
# NO: Because yknow /
[1.18.20]

# AB: So, TB, do you know what / can you express NO's argument that applies to both AI and economics? I could ask NO to express it, but TB, what do you think? Are you able to? # TB: I'm not sure I can. # AB: NO, can you put your argument about why it applies to both in one sentence? []

# NO: Well, I could say that economics and AI resolve as measurements. They are measurements. # TB: Or metrics. []
# NO: And as measurements, they must communicate. You wouldn't have a measurement that makes no sense, right? # TB: yeah. # NO: So, as measurements that communicate, they pivot. Depending on what you look to measure, you are going to get a particular answer. []
# AB: What do you mean by "pivot"? # NO: As I tried to describe before, they pivot like a - are you familiar with the Wankel Engine that was designed? Instead of the normal piston, they developed a rotating wheel that / # AB: Oh, I remember that, yes [Note: Wankel]. As that pivoted, the different cylinders would fire. []
# So what I'm saying is that economics and AI, as measuring informational technologies, are always hard to get a grasp on, because they are pivoting all the time, depending on what they are trying to do. [AB: Maybe the metaphor of the Wankel is that its chambers have different functions?]
# NO: The point that I'm making is that it has [both have] to have a Good-versus-Bad, an Essential-versus-Nonessential; it has to have a spiritual component that goes with it. # AB: [spoke what he was typing in memory of what NO just said.] []
# AB: So both of them have to have that? []
# NO: It seems like in some way, well / In other words, [...] maybe that's an element of outcome. We say, "Well what is the outcome of this application of AI? What is its utility? What is its spiritual impact?" So in some sense there's an accounting. []
[1.21.50]

# AB: OK, we can do those with aspects. We can talk about outcomes; aspects can help us talk about the outcomes. So, there's spiritual impact, there's utility, which is formative impact, probably a social impact, a justice impact, and so on. # NO: Yes. []

[AB: The above seems to refer to the various aspects' norms, and a judgement of or responsibility for the repercussions of our functioning in those aspects, with or against the norms. ]

# NO: I mean really this is what we discussed when we talk about widening or Embedded Economics. We're trying to bring in the humanity, which the pure measurement doesn't account for. [] ***

[AB: NO's metaphor of pivoting might refer to the encircling impact of other aspects on the economic or the AI. ]

Summary [za470]: Both economics and AI are multi-aspectual and both should be 'embedded' among other 'humanity' spheres of life.
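[AB: A minimal sketch, in Python, of the kind of multi-aspectual 'accounting' of outcomes that NO suggests above. Again it is purely illustrative: the list of aspects, the example scores and the function name are hypothetical assumptions, not anything agreed in the discussion.

    # A toy multi-aspectual scorecard for the outcomes of an AI application.
    # Scores run from -2 (strongly against the aspect's norm) to +2 (strongly with it).
    ASPECTS = ["formative", "social", "economic", "aesthetic", "juridical", "ethical", "pistic"]

    def scorecard(name, scores):
        """Print per-aspect outcome scores for an AI application, and flag the worst one."""
        print(f"Outcome accounting for: {name}")
        for aspect in ASPECTS:
            score = scores.get(aspect, 0)  # aspects not assessed default to neutral
            print(f"  {aspect:10} {score:+d}")
        worst = min(scores, key=scores.get)
        print(f"  Most negative aspect: {worst} ({scores[worst]:+d})")

    # Example: a feed recommender assessed with hypothetical scores
    scorecard("social-media feed recommender", {
        "formative": +1,   # utility: helps people find material
        "social": -1,      # can displace face-to-face contact
        "economic": -1,    # consumes too much of people's time
        "juridical": -1,   # can mislead
        "ethical": -2,     # designed around self-interested attention-keeping
    })

Such a table is not the 'accounting' itself, but it illustrates how utility, social, justice and spiritual impacts could be recorded side by side rather than collapsing everything into ROI. ]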

[1.22.30]

----- End of Discussion [za471]

# AB: NO, ten minutes have passed since you said you had ten more minutes.
# NO: Yea, I'd better take care of something in the family here. I'm gonna sign off.
# This has been very valuable. I'm going to go back to that transcript. []

# NO closed in prayer.
# AB: Thank you. That was great.
# NO: I'll try to get that article that I discussed and post it in the Google Docs as well. TB, you didn't hear about it earlier: an article called The Politics of Unhappiness, in which the author talks about AI as one of the issues we have to confront. # TB: Interesting.
# ACTION NO: will get the article Politics of Unhappiness.
[1.24.30]

# AB, TB: [some stuff discussed about Christian Academic Network meeting.]

----- Questions Sent Before Discussion [za472]

Q1. What would be the role of AI if we presupposed our rethinking of economics? Stuart Russell presupposed a conventional economics (narrow, detached, growth-oriented, ignoring unpaid household activity, ignoring environment, etc.). We presuppose: taking all aspects of reality fully into account, including environmental issues, unpaid household work, attitude, recognising the difference between the "Good, Harmful and Useless" in the economy, recognising responsibility, etc.
[We discussed some of that.]

And, in discussing the above,

Q2. In what ways might Dooyeweerd's aspects, or other parts of his philosophy, help us understand AI more fully?
[We did not discuss that explicitly, but showed some of these things implicitly.]

Q3. In the light of those, how do we now address the specific question of the possibility suggested by Stuart Russell, and introduced by Charmele, that AI will do all the mundane work and leave us with loads of leisure time? "Substitute or complement human beings?"
[We discussed some of that, and occasionally looked at aspects of it.]

Q4. Add any other question you wish.

--- Notes and References [za473]


Graeber, D. 2018. Bullsh*t Jobs: The Rise of Pointless Work and What We Can Do About It. Penguin Books.


Note on the Wankel Engine. It was like an internal combustion engine in that it ignited fuel at the right time to put pressure on the moving part but, instead of a piston going up and down, causing much vibration, it had a rotor of roughly triangular shape whose corners always touched the sides of the housing, giving three different chambers.