              Humanist Discussion Group, Vol. 38, No. 300.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2025-01-03 15:44:50+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 38.298: AI, poetry and readers (or inaugurating the new year)

Thanks for your response, Tim -- I'm really enjoying the discussion, and I
would like to second your appreciation for Willard's support of open
discussion on these forums. It's also useful to me because I may be
developing, with a co-editor, an anthology on AI and Human Consciousness.

I would like to simplify my initial claim: *language itself* is the product
of human minds, so any representation of language in any form, however
produced, resembles the product of human minds and is interpretable as the
product of human minds.

Here's what I'm not saying:
- I'm not saying that the text output from LLMs models human minds. I'm
saying that LLMs model text that is, in fact, in a human language. I will
stick to your distinction between words and text for now.
- I didn't say the sonnets were in any way original. They were fairly
unoriginal as sonnets. But human beings write unoriginal sonnets *all the
time*. Ha.

In response to your last post, I would say that you're arguing in a circle:
you say that LLMs aren't actually models of human language because there's
no human mind behind them. To me, that's just a reassertion of your initial
point, that for something to count as words, not just text, there needs to
be a human mind behind it. Here is where I think you do so:

"So, we need to ask, I'd say, does a machine built by [automating the]
digging out, from massive amounts of human made text, huge numbers of
detailed statistical relationships between the mostly unreadable text
tokens all the text is first broken into, model well enough the mental
goings on when a person forms words to say something with, and then writes
these words down? I would say no, it definitely doesn't."

This is the key clause: "does a machine . . . model well enough the mental
goings on when a person forms words to say something with?" Phrasing the
question that way assumes the point you're trying to support, so it looks
like arguing in a circle to me.

Either way, though, I agree with you: the machine does not model the
mental goings on of any human being. But that was never my question or my
claim. I never said that machines model "mental goings on." I said that
machines model textual patterns of human language, and the language itself
is the product of human minds.

The textual output of an LLM on a computer screen is a presentation of *human
language*. So in a practical, material, observable sense, LLMs model human
language in exactly the same form, and with the same kind of material output,
as a human being would produce: text on a computer screen. If the output is
the same, that to me makes it a model.

What you do consistently do, and what I agree with, is locate meaning in
the reader of the text apart from our access to the mind (or not) that
composed the text. But again, that is only dependent upon output. In this
case, if the mind generating the meaning of the text resides in the reader,
then the mind of origin is irrelevant. The reader can create meaning out of
the text on the screen that the originator of the text did not intend. That
is true of both humanly written words and machine-generated text.
Whatever meaning readers get out of text on a page or screen isn't
dependent upon a human mind intending those meanings at the moment of
composition. We actually can't get to the mental goings on of a person just
by reading their words. Words, especially something like a sonnet, can mean
too many different things, so intentions can vary widely. But we can have
our own mental goings on when we read someone else's words. That's why we
read.

Readers are able to have their own mental goings on because they're reading
a language invented and used by human beings, one that actually makes up a
good bit of human consciousness. I'm not saying that LLMs merely reproduce
textual patterns already present in their training data. I know they're just
producing a statistically probable textual output. So I'm saying they are
generating text in a language already used by human beings, and that human
beings think in, so human beings as readers of the text can transform this
output from text into words. We do that all the time with humanly written
words. We make meaning out of them without any consideration of what the
author may have been thinking, as we both agree.
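
(For anyone who wants "statistically probable textual output" made concrete,
here is a minimal toy sketch in Python -- my own illustration, not anything
drawn from an actual LLM, and the tiny corpus is invented for the example.
It samples each next word in proportion to how often that word followed the
previous one in the corpus. Real LLMs do this over subword tokens with
enormously more elaborate statistics, but the principle of sampling from
learned probabilities is the same.)

    import random
    from collections import defaultdict

    # A tiny stand-in for the "massive amounts of human made text".
    corpus = "the reader makes meaning and the reader reads the text".split()

    # Record which words follow which (a bigram table); repeated entries
    # in the lists act as frequency weights.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    # Generate by repeatedly sampling a statistically probable next word.
    word, output = "the", ["the"]
    for _ in range(8):
        candidates = follows.get(word)
        if not candidates:
            break  # no recorded continuation for this word
        word = random.choice(candidates)  # frequency-weighted sample
        output.append(word)

    print(" ".join(output))  # e.g. "the reader reads the text"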

Truthfully, it's far harder to write a grammatically correct but completely
nonsensical sentence than it is to create meaning with one, even accidentally.

Caveat: I would reaffirm that we can't have a real conversation with an
LLM. I'm only talking about a discrete, limited output: one and done, like
a sonnet. A sonnet isn't a conversation between the reader and writer even
if it imitates one. The conversation that exists in a sonnet exists
completely in the reader's head, and different readers can have different
conversations with the same sonnet. A real conversation between two people
is fluid and exists in some kind of real time, and I think the goal of all
such conversation is, or at least should be, two people having the same
conversation with each other.

Jim R


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php