Humanist Discussion Group, Vol. 38, No. 157.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org


  [1]  From: Tim Smithers <tim.smithers@cantab.net>
       Subject: Re: [Humanist] 38.154: a paradox (?) commented (122)

  [2]  From: Tim Smithers <tim.smithers@cantab.net>
       Subject: Re: [Humanist] 38.145: a paradox (?) discussed (89)

  [3]  From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
       Subject: Re: [Humanist] 38.154: a paradox (?) commented (32)

  [4]  From: Willard McCarty <willard.mccarty@mccarty.org.uk>
       Subject: thinking 'as we do' (24)


--[1]------------------------------------------------------------------------
        Date: 2024-09-24 21:45:21+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.154: a paradox (?) commented

Dear Jim,

Sorry, but I couldn't resist replying like this.

What do you mean by "Calculators do math"? They don't. Calculators do
calculations. All the math needed to specify the calculations that can be
done using a calculator is done before, or as part of, the designing and
building of the calculator. A calculator merely implements instances of the
calculations a user is able to specify using the user interface of the
calculator.

Even when done by a person, proper implementation of a well-specified
calculation needs no math, just correct application of the operations in the
specified calculation. You don't even need to understand what the operations
are, or do, mathematically. (This is a state of understanding we still like
to push children into when they are at school, so they can do their sums
right, but have no understanding of the math involved. This is an example of
human unintelligent behaviour, I think.)

It is, I suggest, confusions like saying a calculator does math, when what
it really does is calculations, that lead us to mistake certain kinds of
machine behaviour for kinds of intelligent behaviour, which doing math is, I
would say.
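[The distinction drawn above can be made concrete with a minimal sketch: a
toy postfix calculator in Python whose every arithmetic "decision" was fixed
in advance by its designer, so that at run time the machine only matches
tokens and applies rules. The postfix convention, the `OPS` table, and the
function name are illustrative choices for this sketch, not anything from
the original post. -- ed.]

```python
# A toy calculator: it implements calculations that were fully
# specified by its designer, and embodies no mathematical
# understanding of its own at run time.

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def calculate(postfix_expression):
    """Blindly apply the operations of a postfix expression.

    e.g. "3 4 + 2 *" means (3 + 4) * 2.
    """
    stack = []
    for token in postfix_expression.split():
        if token in OPS:
            # The machine recognises a token and applies a rule;
            # it has no notion of what "+" means mathematically.
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(calculate("3 4 + 2 *"))  # 14.0
```

All the math happened when the designer decided what the four rules should
be; the running program, like the schoolchild doing sums by rote, merely
applies them correctly.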
Machine generation of text, for another example, is now often mistakenly
thought of as writing. (More human unintelligent behaviour.)

Human languaging, it seems to me, is the best way we've come up with to fool
ourselves and others a lot of the time. ChatGPT doesn't get fooled like
this, but that's 'cos it only deals in text tokens, and does loads of tensor
arithmetic calculations. It doesn't deal in words. Thinking it does is just
us fooling ourselves again.

-- Tim

PS: If you want a good example of some real AI, take a look at the Wolfram
Mathematica system. This does do math. Lots of different kinds of math, and
lots of hard-to-do math: it knows and understands lots of math and does lots
of mathematical reasoning. And it's been doing this for serious math-doing
users for a long time, me included. But notice, there are no Connectionist
[alias Artificial Neural Network] systems of any kind inside it, and the
designers and builders do know and understand how the Mathematica system has
all this mathematical knowledge and understanding and mathematical reasoning
ability.

> On 24 Sep 2024, at 07:32, Humanist <humanist@dhhumanist.org> wrote:
>
>
> Humanist Discussion Group, Vol. 38, No. 154.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
>
>        Date: 2024-09-24 03:50:14+00:00
>        From: James Rovira <jamesrovira@gmail.com>
>        Subject: Re: [Humanist] 38.151: a paradox (?) commented
>
> Willard -- thank you for the elaboration.
>
> RE: your first paragraph, I think you're being a bit vague. What do you
> mean by "the machine thinking as we do"? We do math. Calculators do math.
> Calculators do math better and faster than most humans do math, maybe
> better than all humans do math. But that's not what you mean. You're not
> asking about machines doing math, I don't think.
>
> Do you mean to refer to "consciousness" or "sentience" by the phrase
> "thinking as we do"?
> I try to discuss the difference between machine consciousness and human
> consciousness here:
>
> https://medium.com/@jamesrovira/ai-and-talking-heads-part-ii-why-sentient-machines-will-never-exist-5ae276a559fb
>
> I argue that there are two conceptualizations of machine sentience or
> consciousness: gnostic and organic.
>
> Gnostic conceptualizations make the body irrelevant: consciousness resides
> in electrical patterns that could be sustained in a completely inorganic
> environment - a metal box.
>
> Organic conceptualizations make the body essential to consciousness.
> Machines can't attain consciousness without a physical body that interacts
> with its environment as a human body does. I'm not talking about picking up
> objects. I'm talking about things like breathing and having skin and ears.
> Nonstop sensory input that is an essential and inescapable part of
> cognition on a moment-by-moment basis.
>
> I suggest that bodies are essential to consciousness, so something that's
> just a computer in a box will never attain consciousness. But I think it
> may address some of the ways that the cognitive "it" of the machine is
> fundamentally different from the human "it."
>
> Jim R
>
>
>> The first, because we're confronted with it daily, is the notion that
>> the (reachable) end-point of artificial intelligence begins once the
>> machine can think as we do--and then goes on to do that cognitive 'it'
>> better, faster etc. Frustrating to me is the lack of discussion of how
>> different the artificial mode of being intelligent is, and the absence
>> of interest (as far as I can tell) in developing smart machines in that
>> other direction. Perhaps I am simply ignorant of the research and
>> engineering which are doing precisely that; if so, kindly let me know.
>> But still the loud public droning on will continue, of course, since it
>> is so good at keeping the funding flowing.
>>
>>
> - *Women in Rock, Women in Romanticism
>   <https://www.routledge.com/Women-in-Rock-Women-in-Romanticism-The-Emancipation-of-Female-Will/Rovira/p/book/9781032069845>*,
>   Routledge, 2023


--[2]------------------------------------------------------------------------
        Date: 2024-09-24 21:36:56+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.145: a paradox (?) discussed

Dear François,

Yes. It's "Model, modeled & modeling" except I prefer to put it like this.

   Model, modeled, and the modeller(s) [who do the modelling]

To make more explicit that we, the modellers, are there too, and have to be
there for there to be any model, and anything modelled.

When people, like the PhDers I teach, tell me about some model, I like to
ask, whose model is it? And, what did they build it for? And, how well did
it work for these people? It's the modellers who have the needed purpose,
not, of course, the model or the modelled.

-- Tim

> On 19 Sep 2024, at 06:58, Humanist <humanist@dhhumanist.org> wrote:
>
>
> Humanist Discussion Group, Vol. 38, No. 145.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
> <snip>
>
> --[1]------------------------------------------------------------------------
>        Date: 2024-09-18 15:15:39+00:00
>        From: scholar-at-large@bell.net <scholar-at-large@bell.net>
>        Subject: Re: [Humanist] 38.143: a paradox?
>
> Willard
>
> I wonder if the zone you are seeking to access might be approached via a
> triangulation.
>
> Model, modeled & modeling
>
> By analogy with translation where the translation and the translated are
> instances of the (matrix) of translating.
>
> Are you seeking to access a zone of potentials?
>
> François
>
>
>> On Sep 18, 2024, at 1:12 AM, Humanist <humanist@dhhumanist.org> wrote:
>>
>>
>> Humanist Discussion Group, Vol. 38, No. 143.
>> Department of Digital Humanities, University of Cologne
>> Hosted by DH-Cologne
>> www.dhhumanist.org
>> Submit to: humanist@dhhumanist.org
>>
>>
>>        Date: 2024-09-18 05:08:06+00:00
>>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>>        Subject: side by side
>>
>> Here's a question I am pondering and would like some help with.
>>
>> Much is written about modelling, a bit of it by me. But I am bothered by
>> the built-in assumption that the role of the machine in this instance is
>> to imitate the modelled object or process as closely as possible or
>> practical. If, however, we juxtapose the computational machine as we
>> know it to a human process or practice, neither to model the latter by
>> the former nor to do a point-by-point comparison but to hold the two in
>> mind in order to see what happens, what happens then? Where might one
>> find a way to think about this situation?
>>
>> Comments welcome.
>>
>> Yours,
>> WM
>>
>>
>> --
>> Willard McCarty,
>> Professor emeritus, King's College London;
>> Editor, Interdisciplinary Science Reviews; Humanist
>> www.mccarty.org.uk


--[3]------------------------------------------------------------------------
        Date: 2024-09-24 10:04:41+00:00
        From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
        Subject: Re: [Humanist] 38.154: a paradox (?) commented

Willard (and Jim),

I copy you two on this because my earlier response to Willard’s initial
query somehow never got posted, so far as I’m aware. Here I merely want to
endorse Jim’s comment. No machine can have the peculiar sentience of the
human body (or for that matter, of any biological organism).

“Consciousness” – see Genesis, see Keats, see see see. . . – is a blessing
and a curse, an equivocal gift. Its best students have been theology,
philosophy, and science.

There is in all organisms a “common sentience”. In human beings its best
student is “Common Sense”. And while there are clear similarities between
machine and human “consciousness”, the two are very different.
A brief bibliography:

Delmore Schwartz, “The Heavy Bear that Goes with Me”

A. N. Whitehead: “The taint of Aristotelian Logic has thrown the whole
emphasis of metaphysical thought upon substantives and adjectives, to the
neglect of prepositions”. [see his discussion of “with”]

Laurie Anderson’s exhibition The Withness of the Body

“Experience outruns conception” (my take from axial thought)

Jerry


--[4]------------------------------------------------------------------------
        Date: 2024-09-24 06:11:27+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: thinking 'as we do'

In response to Jim Rovira's note about the vagaries of my "machine thinking
as we do", I can only agree. There are two problems I was intending to point
out (actually one, in two versions). The first is that the interrogative
marker, which makes that phrase into a good research question, is commonly
forgotten in the rush to 'deliver' an undeliverable promise rather than
exploit it. The second is that the failures of the machine we have are
overlooked instead of valued as a source of its particular mode of
intelligence. Good ethology does not denigrate a monkey for its inability to
be as we are but illuminates its unique mode of being. The situation with
smart machines is of course quite different in that they co-evolve with us.
But perhaps I'd better leave it there, as the waters are getting rather too
deep for me.

Comments?

Yours,
WM

--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews; Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php