              Humanist Discussion Group, Vol. 38, No. 305.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2025-01-06 16:20:37+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.302: AI, poetry and readers: Calvino, neuroscience & intention

On 5 Jan 2025, at 08:36, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 38, No. 302.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>    [1]    From: James Rovira <jamesrovira@gmail.com>
>           Subject: Re: [Humanist] 38.301: AI, poetry and readers: Calvino's cybernetics (25)
>
>    [2]    From: Gabriel Egan <mail@gabrielegan.com>
>           Subject: Re: [Humanist] 38.300: AI, poetry and readers (37)
>
>    [3]    From: William Benzon <bbenzon@mindspring.com>
>           Subject: GPT in the Classroom, Part 2: Escape to America (11)
>
>

Hello

I'll go backwards.

Bill: that you see the text you call "Escape to America,"
generated by your FredTheHeretic GPT system, as "...  dripping
with human intention" is fine.  This does not mean, however,
that there was any intention involved in its generation.
There wasn't.  There's nothing in the way GPT systems,
including your FredTheHeretic system, are built and work --
and we do know everything there is to know about all this --
that gives them anything that can demonstrably, explainably,
and thus reasonably, be called human-like intention.  (In part, of
course, because we don't know how what we call intention
relates to brain functioning.  Our current folk psychology has
no completed neuroscience to ground it in actual brain
workings, and it may never have this.  Intentions may turn
out to be like the Aether.)

No amount of "seeing dripping human intention" gives these
Generative AI systems any intentions.  Not even if the
dripping turns to a torrent.  Just like no amount of looking
like real flowers makes an artificial flower a real flower.
And artificial flowers can look incredibly like real ones.  My
favourites, by a long way, are in the Ware Collection of
Blaschka Glass Models of Plants in the Harvard Museum of
Natural History.  The human creativity displayed in the making
of these is some of the most remarkable I know of: they each
"drip" with human knowledge, understanding, reasoning, skill,
painstaking persistence, and remarkable achievement.  (If
you're not near enough to Cambridge, MA, to visit the museum,
find a copy of "The Glass Flowers at Harvard," by Richard
Evans Schultes and William A. Davis, with photographs by Hillel
Burger, E P Dutton, Inc, New York, 1982.)


Gabriel: You ask ...

    "...  Do we understand the brain well enough to discount
     the possibility that our AI machines work like human
     brains?"

Yes, we most certainly do know AI machines do not work like
human brains do, despite remaining unknowns, perhaps more
unknowns than we currently suppose, about how brains are built
and function.  Why?  Because the two do not both use "neural
networks."  Brains use what we can reasonably describe as
neural networks: massive collections of highly interconnected
cells of various types we can, and do, reliably identify as
particular kinds of neurones.  Just because the Connectionist
people use the same term, 'neural network,' to describe what
they build does not mean their systems and brains are built
and work the same way, though many Connectionists appear to
think this, or wish this to be the case.  No amount of calling
things by the same name makes them the same.  Nor, I would
insist, is ignorance of difference evidence for no difference;
i.e., sameness.  Demonstrating and explaining that and how two
(interestingly complicated) things are the same takes a great
deal more than calling them by the same name.

I'll repeat.  We do know and understand how today's so-called
Generative AI systems are built and work.  We wouldn't be able to
build and operate them if we didn't.  It is only people who
claim these systems magically know, understand, and reason
about things in the real world, who also claim we don't
understand how these systems do this.  This view may be good
for hype, marketing, and business, but it's no good for doing
empirical rational research, which is, I think, what's needed
to do real work in AI.


Jim: I will try to fold your "...  what AI does is produce a
bunch of permutations just like a mediocre human poet would,"
into my reply to your latest reply, which is still in
preparation.  Though what follows responds, in part, to this.


Willard: thank you for giving us the Calvino quotation.  It
is, I agree, relevant and interesting for discussions of
Generative AI things.  An extensive treatment of combinational
creativity a la Calvino, which I like, is

    Elizabeth Scheiber, 2016.  Calvino's Combinational
    Creativity, Cambridge Scholars Publishing.

Probably you and others here know of this, but I still find
Calvino's notion and use of combination somewhat superficial.
It doesn't easily pick out what I see as importantly different
ways of using combination to do things, both interesting and
not so interesting.

For example, we might first build a large set of different
atomic pieces each of which can be placed after any one other
piece to form sequences of any length.  And then specify the
probability of each atomic piece being placed after each other
atomic piece, and thereby have lots of probabilities for all
the possible ordered pairs of our atomic pieces.  Combination,
in this case, might then be done by picking an atomic piece, or
perhaps a short sequence of atomic pieces, and then adding,
one piece at a time, the atomic piece with the highest
probability of going next after the last piece in the
sequence.  To make things a little more interesting we may
introduce some random choice over a small set of the next most
probable pieces, so that we don't always get the same thing
every time we start with the same atomic piece.  A further
complication might be to extend the probability relationships
to cover what comes next after sequences of atomic pieces, not
just one piece, and sequences of different lengths.  Combination in
this case is a kind of simple probabilistic adding to the end
of what we've already generated.  Even if we think we get
interesting things out of this combinational procedure, notice
that the atomic pieces themselves, however many different ones
there are, play no part in the combination and the result.
Only the probability relationships between ordered pairs of
atomic pieces do, and these are all pre-fixed, somehow.
I call this a kind of simple adding-on use of combination.
It's a kind of generative grammar; one with probabilistic
grammar rules.  It can sometimes generate interesting things,
but not all the time, I would say.
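
To make this adding-on kind of combination concrete, here is a
minimal sketch of it in Python.  The atomic pieces, the pair
probabilities, and the small random choice over the two most
probable next pieces are all invented for illustration; this
is not a description of any particular Generative AI system.

    import random

    # Invented pair probabilities: for each atomic piece, the
    # probability of each piece that may be placed after it.
    next_probs = {
        "jaguar": {"toucan": 0.6, "river": 0.3, "jaguar": 0.1},
        "toucan": {"river": 0.5, "jaguar": 0.4, "toucan": 0.1},
        "river":  {"jaguar": 0.7, "toucan": 0.2, "river": 0.1},
    }

    def add_on(sequence, k=2):
        # Look only at the last piece, take the k most probable
        # followers, and pick one of them at random (weighted),
        # so the same start need not give the same sequence.
        followers = sorted(next_probs[sequence[-1]].items(),
                           key=lambda pair: pair[1],
                           reverse=True)[:k]
        pieces = [p for p, _ in followers]
        weights = [w for _, w in followers]
        sequence.append(random.choices(pieces, weights=weights)[0])

    sequence = ["jaguar"]      # start with one atomic piece
    for _ in range(8):         # add on, one piece at a time
        add_on(sequence)
    print(" ".join(sequence))

Extending the table so its keys are short sequences of pieces,
rather than single pieces, gives the further complication
mentioned above, but the combining is still only an adding-on
to the end, driven by the pre-fixed probabilities, not by the
natures of the pieces themselves.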

Here's a different example.

Writing is the putting down in text, using some particular
[shared] alphabet including punctuation glyphs, of the words
we decide to form together to say something we want to, or
need to, or are trying to say.  But this is not a simple
linear process, not all the time, at least.  It doesn't always
go: think of what to say; work out which words to say this
with; then write down these words in the order we decided to
use them, using the alphabet to generate the text.  Only
writing simple things to say goes like this.  More usually,
the putting down of words in text is an integral, and
necessary part of working out what to say, and of working out
which words to use to say this, and of discovering what there
is to say.  Trying out words, by writing them down and then
reading them back, shows us other possibilities, and,
sometimes better possibilities, or weaknesses, or errors, in
what we've written.  And, seeing our choice of words written
down, and therefore needing to read what we wrote, thus
putting them back in our head, often shows us we could say
something different, perhaps quite different, or say it in
quite a different way.  It can also show us relationships
between things we know and understand which we had not seen
before: writing is a discovery by combination activity, not
just a text production process.  This is why writing is a way
of thinking, a powerful and effective way of thinking, just
like drawing or sketching is another way of thinking, also
powerful and effective.  Thinking is not something we do just
in our heads which results in some kind of outcome we then
turn into words, and then write down, just as a drawing or
sketch is not some external version of the final internal
outcome of forming in our head a needed image.  A drawing is
not an externalisation of a mental sketch.  Doing the drawing
is the only way of getting to a satisfactory final sketch.
The drawing actions are an integral part of doing the
thinking, and the thinking would not happen, and could not
happen, without doing the drawing.  It's the same with words,
I think.

So, when we are writing we are doing thinking; knowledgeable
reasoning.  And, as we write, what comes into view, because we
read what we've written, are possibilities and discoveries of
where to take what we want to say, and possibilities and
discoveries of how to say what we want to say, and of what we
may think.  We do not write by turning our thoughts into
words, then turning these words into text, and then deciding
what is the most probable next piece of text we could add to
the text we have already.  That is not how constructive
combination processes work; just adding on the most probable
next piece is not constructing, it's being procedural.  Writing is
constructive because on reading the text we have written so
far, we "see" where we may take what we're saying, we discover
new things to think, to ask, we discover things we don't
understand, don't know how to say.  Writing is a working out
of what to think, and of how to think about and understand,
the things we are working on.  It's not just a text generation
procedure.
Writing is a conversation -- literally literal -- between us
and what the words we read from our own text say to us when we
read them, and re-read them, and change them, and start again
with them, and thereby discover what we are saying, not saying,
can say, can't say, and more.

What's important here, I would say, is that what we see can
be combined with where we have arrived at, and how it might
be combined in different ways, in different places in what we
have, to say the same thing or something different: this is
not a simple add-on-to-the-end kind of combination.  The kind
of combination
that happens in writing is a highly nonlinear re-entrant
combining of ideas, meanings, feelings, expressions, and other
such mental things, and the natures of each of these things
matters; it's what drives the combining; not simple
probabilities or other such simple notions.  Calvino does talk
about this kind of combinational creativity, but, for me,
doesn't dig into the details enough to try to get to a better,
more detailed, view and understanding of what is actually
going on.  But, perhaps it's my lack of literary skills and
training that blinds me to seeing this in Calvino's texts.  As
always, seeing things does depend upon where we are looking
from, not just what we are looking at.

-- Tim



> --[1]------------------------------------------------------------------------
>        Date: 2025-01-04 14:20:54+00:00
>        From: James Rovira <jamesrovira@gmail.com>
>        Subject: Re: [Humanist] 38.301: AI, poetry and readers: Calvino's cybernetics
>
> thanks for posting that quotation from Calvino, Willard. One thing I've said
> throughout the course of this discussion is that I believed AI can produce
> interpretable poems, but I also said I didn't think it could produce a great
> poem.
>
> Human beings are like that too. They may write a lot of poetry, but seldom if
> ever write great poetry.
>
> So here is the relevant quotation to me:
>
> "To return to the storyteller of the tribe, he continues imperturbably to make
> his permutations of jaguars and toucans until the moment comes when one of his
> innocent little tales explodes into a terrible revelation: a myth, which must be
> recited in secret, and in a secret place."
>
> He's describing a storyteller who starts out reciting the usual sort of stuff -
> permutations of jaguars and toucans - but then continues until he hits on
> something great finally - myth and revelation.
>
> So what AI does is produce a bunch of permutations just like a mediocre human
> poet would. But I don't think it would ever produce anything great. It would
> need that self reflective, embedded consciousness in a specific historical
> context to go beyond the permutations that it is literally producing.
>
> Jim R
>
> --[2]------------------------------------------------------------------------
>        Date: 2025-01-04 11:49:13+00:00
>        From: Gabriel Egan <mail@gabrielegan.com>
>        Subject: Re: [Humanist] 38.300: AI, poetry and readers
>
> Dear Humanists
>
> James Rovira wrote that "the machine
> [an AI] does not model the mental goings
> on of any human being".
>
> I am wondering how we might be able
> to know that. Do we understand the
> brain well enough to discount the
> possibility that our AI machines
> work like human brains?
>
> Both use neural networks. Both
> hold knowledge and are inscrutable
> about how they do that. That is,
> we can be sure that both know that
> London is to England as Paris is to
> France -- because both will complete
> that four-term homology if given three
> of the terms -- but we cannot see
> where in their neural networks this
> knowledge is held.
>
> So why rule out the possibility that
> in making our AIs we are unintentionally
> modelling an aspect of the mental goings
> on of human beings?
>
> On the topic of what it means to understand
> a computer system and a brain, I recommend
> Jonas & Kording "Could a Neuroscientist
> Understand a Microprocessor?"
> (https://doi.org/10.1371/journal.pcbi.1005268)
>
> Regards
>
> Gabriel Egan
>
> --[3]------------------------------------------------------------------------
>        Date: 2025-01-04 08:57:21+00:00
>        From: William Benzon <bbenzon@mindspring.com>
>        Subject: GPT in the Classroom, Part 2: Escape to America
>
> Here’s a recent blogpost that puts some “pressure” on thinking about computer-
> generated poetry: https://new-savanna.blogspot.com/2024/12/gpt-in-classroom-
> part-2-escape-to.html. The words were generated by FredTheHeretic, a GPT based
> on the poetry of Frederick Turner. The subject matter of the sonnet comes from
> Miriam Yevick’s memoire, "A Testament for Ariela." I selected three separate
> paragraphs from that book and directed FredTheHeretic to use each as the basis
> for one quatrain in a sonnet. When the first draft had problems, I requested
> that FredTheHeretic fix them. The way I see it, that sonnet, “Escape to
> America,” is dripping with human intention.
>
> Bill Benzon



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php