              Humanist Discussion Group, Vol. 38, No. 313.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Tim Smithers <tim.smithers@cantab.net>
           Subject: Re: [Humanist] 38.300: AI, poetry and readers (255)

    [2]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 38.312: AI, poetry and readers: Calvino, neuroscience & intention (85)


--[1]------------------------------------------------------------------------
        Date: 2025-01-10 16:35:00+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.300: AI, poetry and readers

Hello

First

Gabriel: Thank you for your reply.  I will respond but
probably not soon.  I start some PhD teaching in about a week,
with plenty still to prepare.  Then follows four weeks in
which I'll have little time, and even less mental energy, for
other serious matters.  After that I'll likely have time to
climb back up on the wall here and attempt a reply.

-- Humpty Dumpty, standing in for [but not modelling!] Tim


Next

Jim: Thank you for your further thoughts: once again,
interesting and usefully pressing.

Here I've tried to do as you asked ...

   "...  keep in mind I'm only thinking about -external
    product only without regard for the production of the
    text.-

I agree: if some text, no matter how it was produced or
generated, is readable, then words, I will say, will result,
and thus meanings will occur, in the head of the reader.  But,
we should note, this does require a reader, which, I would
want to say, is someone or something for which the read words
render meanings.  And ChatGPT is not a reader; it does no
reading of the text it generates, nor of the prompts given to
it.

You say ...

   "The textual output of an LLM on a computer screen is a
    presentation of *human language*.  So in a practical,
    material, observable sense, LLMs model human language in
    exactly the same form, with the same kind of material
    output, that a human being would: text on a computer
    screen.  If the output is the same, that to me makes it a
    model."

I don't disagree with this, but, for me, there is a lot packed
into your last sentence here: "If the output is the same, that
to me makes it a model."

What does it take to "be the same"?  How is sameness assessed
here?  Which differences, if there are any, matter and which
don't, and why?  I think you may not like this, but, for me,
for something to be a model it needs to be shown to be fit for
purpose, and that purpose needs to involve using what is to be
the model in place of what this is supposed to be a model of.
We can, I accept, occasionally come across something that can
serve some modelling purpose, but which we have not explicitly
designed and built to be a model that satisfies our purpose,
but we still need to show that this borrowed thing does
satisfy our modelling purpose well enough.  In other words,
just looking the same is not sufficient to satisfy the
Modelling Relation, and, of course, it's not a necessary
property either: similarity is not, I think, a useful notion
when it comes to designing, building, using, and understanding
models, not in research, at least.  Demonstrated sufficient
equivalence in at least some observable aspects is what's
required.  (Think of the Lotka-Volterra model of predator-prey
populations -- a system of two coupled nonlinear differential
equations -- and population of two real animal species, one
being predators and the other being prey animals.)  So, how
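
For reference, the standard Lotka-Volterra system, in its
usual textbook notation, is

   \[
     \frac{dx}{dt} = \alpha x - \beta x y, \qquad
     \frac{dy}{dt} = \delta x y - \gamma y,
   \]

where x is the prey population, y the predator population, and
alpha, beta, gamma, and delta are positive parameters.  The
pair of equations counts as a model because its trajectories
can be checked, quantitatively, against counts of the real
animal populations.
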
So, how could we usefully assess sufficient equivalence
between automatically generated text and text from someone
writing some [meaningful] words?  Is just being [sufficiently]
grammatical and readable enough?  If so, why?  Not all
artificial flowers are models, though they can look very like
real flowers.  The Ware Collection of Blaschka Glass Models of
Plants, in the Harvard Museum of Natural History, is a good
example of plant models because the glass plants were made,
and verified, to be sufficient models of the real plants,
using assessable criteria that were decided upon and declared
in advance.  Looking like something is, I think, at best a
weak basis for having something be a model of something else.
Looking like something is a variable quality that depends upon
the seeing ability of the looker, just as being readable text
depends upon the reading capacity of the reader.

But, you go on to say ...

   "Truthfully, it's far harder to write a grammatically
    correct, completely nonsense sentence than it is to even
    accidentally create meaning with one."

Yes!  And I think this could be an interesting way to assess
how well some text generator does, and what we could use to
assess how well it models human text produced by writing.
Nonsense sentences, despite what we call them, do render
meanings in people's heads, just not meanings that go
together, as we usually expect, to make some overall sense.
They render, on reading, ambiguous, unexpected, strange,
never-before-felt meanings.  Which is why I like this kind of
stuff.  It makes different interesting things happen in my
head each time I read the same verses, the same text.  They
are a way for me to
poke around at my ways of thinking about things.  [This is, I
suppose, at least in part, what they are intended for.  But,
we're not supposed to be talking about this kind of inside
stuff here.]

So, let's take ChatGPT, or some other automatic text
generating system of the same kind, as a model of human text
production from writing.  (I don't want to call this a
language model, or a model of languaging, because text from
writing is only one aspect of human languaging, albeit an
important and useful one.)  Then, with the sonnets we have
seen, we can say, I think, our text-production model has been
shown to generate readable text, and text in the [generally
accepted] form of [what we call] sonnets, but, in our
judgement, or, better said, your judgement, not particularly
interesting sonnets.  [I'm not qualified to do this kind of
judging.]  What, I wonder, does this use of the model then tell us?
Anything?  We already know very few people are able to write
interesting or original sonnets, but plenty more people can
write uninteresting but correctly formed sonnets.  What more
do we learn about sonnets, or sonnet writing, from getting
things like ChatGPT to generate text in sonnet forms?  And,
importantly, I would say, what can we learn from using ChatGPT
as a model of sonnet writing which we cannot learn from
studying sonnet writing in human poets?  It's not enough to
just look similar to be a model.  A useful, and sensible,
purpose needs to be served by using the model.  Models don't
come free of good purposes.

But this is beginning to take us back towards inside things,
which I have here tried to avoid, as you asked.  So, as you'll
have noticed, I've not responded to your observation that some
of my argument seems circular.  I accept that it does tend
towards circular, but that's 'cos I've [deliberately] tried to
keep certain things simple, too simple, like what it is that
happens in "people's heads," what it is for "words" to exist
in "people's heads," what it is that's doing the existing
here, what we mean by existing when we can't observe it
directly, and what the "intentions to say something" are that
I've talked about forming in people's heads.  Grounding these
things would take a whole lot more explanation and discussion,
and would almost certainly never be made complete enough,
given current neuroscience and cognitive science.  So, we may
best
leave this issue aside for now, I think.

-- Tim



> On 4 Jan 2025, at 09:10, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 38, No. 300.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2025-01-03 15:44:50+00:00
>        From: James Rovira <jamesrovira@gmail.com>
>        Subject: Re: [Humanist] 38.298: AI, poetry and readers (or inaugurating the new year)
>
> Thanks for your response, Tim -- I'm really enjoying the discussion, and I
> would like to second your appreciation for Willard's support of open
> discussion on these forums. It's also useful to me because I may be
> developing an anthology with a co-editor on AI and Human Consciousness.
>
> I would like to simplify my initial claim: *language itself* is the product
> of human minds, so any representation of language in any form, however
> produced, resembles the product of human minds and is interpretable as the
> product of human minds.
>
> Here's what I'm not saying:
> -I'm not saying that the text output from LLMs models human minds. I'm saying
> LLMs model text that is in fact a human language. I will stick to your
> distinction between words and text for now.
> -I didn't say the sonnets were in some original form. They were fairly
> unoriginal as sonnets. But human beings write unoriginal sonnets *all the
> time*. Ha.
>
> In response to your last post, I would say that you're arguing in a circle:
> you say that LLMs aren't actually models of human language because there's
> no human mind behind them. To me, that's just a reassertion of your initial
> point, that for something to count as words, not just text, there needs to
> be a human mind behind it. Here is where I think you do so:
>
> "So, we need to ask, I'd say, does a machine built by [automating the]
> digging out, from massive amounts of human made text, huge numbers of
> detailed statistical relationships between the mostly unreadable text
> tokens all the text is first broken into, model well enough the mental
> goings on when a person forms words to say something with, and then writes
> these words down?  I would say no, it definitely doesn't."
>
> This is the key clause: "does a machine. . . model well enough the mental
> goings on when a person forms words to say something with?" Phrasing the
> question that way assumes the point you're trying to support, so it looks
> like arguing in a circle to me.
>
> Either way, though, I agree with you, the machine does not model the
> mental goings on of any human being. But that was never my question or my
> claim. I never said that machines model "mental goings on." I said that
> machines model textual patterns of human language, and the language itself
> is the product of human minds.
>
> The textual output of an LLM on a computer screen is a presentation of *human
> language*. So in a practical, material, observable sense, LLMs model human
> language in exactly the same form, with the same kind of material output,
> that a human being would: text on a computer screen. If the output is the
> same, that to me makes it a model.
>
> What you do consistently do, and what I agree with, is locate meaning in
> the reader of the text apart from our access to the mind (or not) that
> composed the text. But again, that is only dependent upon output. In this
> case, if the mind generating the meaning of the texts resides in the
> reader, then the mind of origin is irrelevant. The reader can create
> meaning out of the text on the screen that the origin of the text did not.
> That is true of both humanly written words and machine generated text.
> Whatever meaning readers get out of text on a page or screen isn't
> dependent upon a human mind intending those meanings at the moment of
> composition. We actually can't get to the mental goings on of a person just
> by reading their words. Words, especially something like a sonnet, can mean
> too many different things, so intentions can vary widely. But we can have
> our own mental goings on when we read someone else's words. That's why we
> read.
>
> Readers are able to have their own mental goings on because they're reading
> a language invented by and used by human beings, one that actually makes up a
> good bit of human consciousness. I'm not saying that LLMs represent textual
> patterns already present in their database. I know they're just producing a
> statistically probable textual output. So I'm saying they are generating
> text in a language already used by human beings, and that human beings
> think in, so human beings as readers of the text can transform this output
> from text into words. We do that all the time with humanly written words.
> We make meaning out of them without any consideration of what the author
> may have been thinking, as we both agree.
>
> Truthfully, it's far harder to write a grammatically correct, completely
> nonsense sentence than it is to even accidentally create meaning with one.
>
> Caveat: I would reaffirm that we can't have a real conversation with an
> LLM. I'm only talking about a discrete, limited output: one and done, like
> a sonnet. A sonnet isn't a conversation between the reader and writer even
> if it imitates one. The conversation that exists in a sonnet exists
> completely in the reader's head, and different readers can have different
> conversations with the same sonnet. A real conversation between two people
> is fluid and exists in some kind of real time, and I think the goal of all
> such conversation is, or at least should be, two people having the same
> conversation with each other.
>
> Jim R


--[2]------------------------------------------------------------------------
        Date: 2025-01-10 10:40:59+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 38.312: AI, poetry and readers: Calvino, neuroscience & intention

Responses below: to Gabriel and then Bill.

> Surely, Jim, you agree that your brain is
> also, at root, purely mechanical and works
> with units smaller than, and representative
> of, words.

Not sure how you're using the word "mechanical." The human brain isn't
mechanical. It's electrochemical. It's organic. Being physical and material
is not the same as being mechanical.

> The fact that a computer's internal
> representations are binary is not
> a defining characteristic. The
> world's oldest working digital computer
> -- the WITCH at the National Computing
> Museum in Bletchley UK -- is not a
> binary machine.

Ok, but it’s not running AI either, which is what we were talking about.

> The first artificial neural networks
> built from perceptrons were electrical
> but not digital: they were analogue
> devices. We agree I hope that brains,
> like computers, are electrical
> devices. But even this is inessential.

Human brains are electrical, yes, but they aren't "devices."

> The fundamental units of computing
> devices have been implemented in,
> amongst other things, the toppling
> of upended dominoes, the flowing of
> water through valves, and the falling
> of marbles through bagatelle boards.
> (I can provide references to YouTube
> videos of computing devices made from
> these materials if anyone is interested.)

None of these is running AI either, nor would any of them be accused of being
anything like the human brain. None of this resembles the "underlying logical
elements" of the human brain, if those even exist.

Overall, you seem to be guilty of the fallacy of the undistributed middle: two
things can have the same predicate without being alike at all. A blue car
doesn't otherwise resemble a blue carpet.
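
Spelled out in the standard syllogistic schema, where the middle term B is
never distributed, the fallacy runs:

   \[
     \text{All } A \text{ are } B; \quad \text{All } C \text{ are } B;
     \quad \therefore \ \text{All } A \text{ are } C. \qquad (\text{invalid})
   \]

Here, roughly: the brain works with small underlying units, a computer works
with small underlying units, therefore the brain is like a computer.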

A question for Bill below.

>> Bill -- a poem made up of another poet's lines from the other poet's poetry
>> is called a "cento." I believe that AI could generate centos. But in that
>> case, the lines come from another source, so any "intention" would be from
>> the human source of the original poetry, not the AI that assembled the
>> lines. Computers don't have intention. Even if it were thematically based
>> on another poet's poems, I would say the same thing.
>>
>> Jim R
>
> You misunderstood the procedure. I, me, a human being, I chose paragraphs from
> a text, and gave them to FTH, and FTH, in turn, derived a poem from them. The
> intentionality that put those things together is mine, not FTH’s. BTW, FTH
> didn’t quote anything. Perhaps we could say that it transformed the text it was
> given, though ‘transform’ seems rather a weak idea for what happened. Anyhow,
> once it did what it did, I told it to make some changes. The intention that
> called for those changes, that was my intention, not FTH’s. The actual procedure
> is more complex than you’re implying, and I don’t see how my intentionality can
> be completely discounted, as you are doing.
>
> Bill Benzon

I said the intention was in the words themselves, so I'm not completely
discounting intention. You said just now the intention is in your own use of the
tech. I agree with that as well.

I only said the intention was not in the computer itself, which you didn't
address. What intention am I completely discounting?

Jim R


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php