Humanist Discussion Group

              Humanist Discussion Group, Vol. 38, No. 280.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-12-16 10:28:34+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.275: AI, poetry and readers

Dear Jim,

Thank you for continuing the conversation.  Your further
remarks, for me, usefully bring out more of what's involved
here.  And, my thinking coincides almost completely with what
you say ...  that is, with what I understand you to say from
my reading of your text -:)

The notion of 'word' I use is different from yours, and from
the [more] usual notion, and, I admit, mine is idiosyncratic, but
not quite for the reasons you suggest.  I don't want to side
with Plato (nor Socrates), and say writing is bad for us.  I
see writing as a brilliant invention.  I do see a difference
between verbal conversation and written conversation, like
this one, which is to do with the way spoken conversation
[more easily] allows swifter clarification, and perhaps
correction, of differences occurring in the heads of the
people involved in the conversation.  (But I don't want to say
this makes verbal conversation better.)

I like your

   "...  "text" is any kind of interpretable product, and
    "words" are only one kind of text.  Images (say, photos or
    paintings) are also texts, as are film, music (performed
    or on paper), architecture, the design of a city,
    clothing, etc.: these are all social texts that are
    interpretable."

though I prefer to keep the term 'text' for the marks left by
writing words, and not for all your other things.  But that's
me being picky, and not me being different, I'd say.

I like, too, your expansion on Semiotics, and happily agree with
what you say here.

    "...  In other words, when you say "tree," you picture a
     different tree in your head than I do.  Your tree is
     based on your memory and experience and mine is based on
     my own.  So my signified is to an extent different from
     your signified."

Yes! Exactly.

Your remark on Derrida preferring the written version of a
triangle over the spoken version is interesting, and new to
me.  Does Derrida explain somewhere how we should understand
"better approximates the reality" here?  Where should I go to
find this?

Where I do still differ shows up here.

    "Now back to AI. I believe that with AI we are in the
     same situation as with Shakespeare: we just have the text
     with no accessible mind present. ..."

No, I don't think we're in the same situation.  From
Generative AI systems that produce text we only have text,
yes.  But, there was no mind involved in the generation of
this text; there were no words written down; there was no
Shakespeare forming the words and writing them down for us to
read, and interpret, long after Shakespeare's mind is gone.
It is this lack of any mind, and thus lack of any words [in my
sense], in the generation of this text that I think makes a
difference, a big one.  Mind talk is too mysterious for me --
it's too easy to load it up with lots of abilities and
capacities -- so I prefer just to talk about what goes on in
people's heads, somehow, and, in this case, what doesn't
happen: no words were formed and then written to produce the
text we get from these Generative AI systems.  And, it's the
same, I would insist, for the systems that generate images,
video, sounds (speech and music), and combinations thereof.
We can, and do, "read" all these generated forms, and
interpret and understand things from our "reading," as if they
were made by thinking, reasoning, intending people, and do
this easily, but there was no thinking, reasoning, intending
involved in the generation of this text.  So, all and any
understanding we get from our reading and interpreting is new
understanding, originated in each of our heads.  It's not that
our understanding from our interpreting results in a different
understanding from that of the original author of what we
read, look at, view, or listen to.  It's that our
interpretations and understandings are the only
interpretations and understandings involved.  And this, I want
to insist, is different from Shakespeare no longer being with
us.  It matters, I think, that there was, once, a head in
which words were intentionally formed and written down, even
if we have no way of knowing what the intention and thinking
that formed those words was, nor any way of knowing what the
person wanted to say with the words they formed and wrote
down.  Text does not have a natural meaningful existence
without there first being words formed [in somebody's head] to
say something with, and then written down.

You end with

    "...  We read and interpret almost all texts as if the
    author didn't exist or was anonymous."

Yes, I agree, we do read and interpret almost all texts -- and
texts understood in your broader sense to include photos,
paintings, drawings, music, film, architecture, clothing, etc.
-- as if the author didn't exist or is unknown to us.  But
we do think there was an author.  And this, I think, strongly
influences how we do all this reading of human made marks.

This is, I would say, both natural and reasonable, because all
this text was made by thinking, reasoning, intending people,
albeit no longer present, or known to us.  However, this
kind of reading and interpreting is not reasonable for the
automatically generated text we get from today's Generative
systems.  These Generative systems may give us a new way for
each of us to form our own new words -- by reading some
generated text and deciding we like, or want to use, the words
we get in our heads on reading the generated marks as text --
but this is not the same, I think, as reading the text that
resulted from someone else, forgotten or unknown, writing
words they formed to say something with, and us then deciding
we understand and like or don't like what we each think the
author said.  Shakespeare's words, made accessible to us by
being written down, tell us things because Shakespeare was
trying to say something when they wrote the words, even if
what we understand from our reading of this text is completely
different from what Shakespeare wanted to say.  Generative AI
systems have no capacity or mechanisms for saying things, only
for generating the marks we call, and can read as, text.  They
are like slide rules.  Slide rules don't know and understand
anything about numbers.  They are designed and made to
manipulate, and display for reading by us, numerals in certain
[well defined] ways.  They are designed and built to implement
the grammar -- if you'll let me call it this -- of numerals
used to represent numbers to us, and for us to then properly
manipulate numerals in doing calculations with numbers.  A
slide rule can thus be used by us as an aid to doing numerical
calculations.  It's us who read the numerals we get to with
our slide rule operations, and who have to put the decimal
point in the right place to get the number we arrive at, and
are trying to calculate.  Slide rules are useful tools when
used well, but not when they are not used well.  Thinking, or
believing, they know and understand things about numbers is
not a good basis for this good use.  It's plainly a silly way
to think of slide rules.  Similarly, thinking of automatic
text generators as knowing about and understanding words is a
silly, and thoroughly mistaken, idea.  Which, I think, we
agree on.
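
The slide-rule point can be put in computational terms with a
minimal sketch in Python (the function name and the numbers
here are only illustration): like the rule, the program only
shuffles digit strings by way of logarithms; it is the person
using it who places the decimal point and says what the result
means.

    import math

    def slide_rule_multiply(a: float, b: float) -> str:
        """Return only the significant digits of a * b, the way a
        slide rule does: add the fractional parts of the base-10
        logs and read the result off the scale.  Placing the
        decimal point is left to the person using the 'rule'."""
        mantissa = (math.log10(a) + math.log10(b)) % 1.0  # position on the scale
        digits = 10 ** mantissa                           # a value in [1, 10)
        return f"{digits:.3f}".replace(".", "")           # digit string, no decimal point

    # The "rule" reads 2.34 x 5.67 as, roughly, the digits 1327;
    # it is the user who knows the answer is about 13.3,
    # not 1.33 or 133.
    print(slide_rule_multiply(2.34, 5.67))   # -> '1327'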

So, one last question.  Would this text-mediated conversation
be different if one of us turned out to be an automatic text
generator?  Would it be like me talking to my slide rule?
[Which I don't do, by the way.]

-- Tim




> On 12 Dec 2024, at 08:18, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 38, No. 275.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2024-12-12 02:35:42+00:00
>        From: James Rovira <jamesrovira@gmail.com>
>        Subject: Re: [Humanist] 38.271: AI, poetry and readers
>
> Thanks so much for your long and thoughtful reply, Tim. We are using words
> a bit differently, but since you define your terms well and use them
> consistently with your definition, that worked for me.  I feel that I could
> easily understand you.
>
> We both agree about what AI is and is not and that it is
> fundamentally different from what goes on in a human mind even when both
> are producing text.
>
> My usual definitions from my prior reading in linguistics, semiotics, and
> literary theory are these: "text" is any kind of interpretable product, and
> "words" are only one kind of text. Images (say, photos or paintings) are
> also texts, as are film, music (performed or on paper), architecture, the
> design of a city, clothing, etc.: these are all social texts that are
> interpretable.
>
> But your emphasis on the mental product at the expense of the physical
> medium is a bit idiosyncratic on your part unless you really want to go far
> back, to, say, Plato. Plato believed (well, Socrates) writing itself was
> bad, and communication only occurred between two people in physical
> proximity to one another who are talking to one another directly. What
> you're saying about AI generated text Plato said about physical writing in
> books, because that writing is separate from its author and originary
> context, and you can't really talk back to it. There's no dialog possible.
> There is no mind present.
>
> Semiotics, as I recall, identifies the "sign" as the vehicle of communication,
> which could be either a spoken or written word, and the "signified" as the
> mental object. The signified is made up of a sound image and a mental image
> combined, say, the verbal expression "tree" and the mental representation
> of a tree in the mind. This is from Saussure's Course in General
> Linguistics. Very old. The sign then creates a new signified in the
> recipient's mind, which may not (and probably doesn't) match the signified
> in the mind of the speaker. In other words, when you say "tree," you
> picture a different tree in your head than I do. Your tree is based on your
> memory and experience and mine is based on my own. So my signified is to an
> extent different from your signified. Later linguistic theory gets more
> complicated; say, Chomsky, and it has been extensively studied. The problem
> is that we can't directly observe the mind at work forming words. We can
> read electrical impulses. That's it.
>
> The mind's role in interpretation has been thoroughly discussed, though.
> Plato started a way of thinking that associated a person's state of the
> soul, so to speak, with the person's interpretive habits, which was carried
> forward by the church fathers into Biblical interpretation. Gadamer
> believed that interpretation was as natural as breathing; Lacan that
> thinking itself is a syntax of sorts, etc. It goes on and on.
>
> Anyway, the mental objects in every person's head are independent of every
> other person's head. The only thing we really have in common is the medium
> itself, which are the soundwaves produced by spoken words or the written
> words on a page. That means the physical medium is not nothing: it is
> everything. It is the only thing. We agree that it doesn't mean anything
> until it is recovered by a mind and interpreted, though. A musical score is
> something very different from your other examples. A trained musician can
> read a musical score and hear music in his or her head the same way we can
> read words on a page and hear a voice in our heads. And you're right in
> that both things are subject to interpretation, which is because the
> content of the author's head is *not* communicated via the text, but rather
> formed by the recipient's mind, a point on which we both agree. If somehow
> the author's actual signified was directly communicated mind to mind, say
> through a hive mind, or telepathically, no interpretation would be
> necessary. But that's never the case.
>
> So if I were to read Shakespeare today, Shakespeare's mind no longer
> exists. We no longer have access to whatever it was he was thinking while
> writing, and he wasn't very good about telling us either. The mind of
> origin is irrelevant to interpretation because it is now non-existent. All
> that we have is the physical medium containing his words and our own minds.
> That is why Plato disliked writing and preferred speech. Then the mind of
> origin is present, in the present, as is the mind receiving the
> communication, and through dialog closer approximations of shared meaning
> are possible. But even Plato didn't believe in direct mind to mind
> communication.
>
> We could really get into Derrida here and discuss why he preferred writing
> over speech. It begins with, of all things, his translation of a book about
> geometry and triangles in 1958: the written version of a triangle better
> approximates the reality than the spoken version. This was a great
> innovation in western philosophy, a development that he classified as
> phenomenology (it was Husserl's Geometry he was translating, and that work
> on geometry was the basis of his phenomenology), not some kind of
> specifically literary theory. Phenomenology was an early 20thC attempt to
> explain the workings of the mind in relationship to language as well.
>
> Now back to AI. I believe that with AI we are in the same situation as with
> Shakespeare: we just have the text with no accessible mind present. I
> started not by studying formalism or literary theory, but by studying
> hermeneutics, which is the practice of Biblical interpretation. That
> traditionally does try to recover authorial intent because the author is
> the source of the authority of the text. Hermeneutics works through an ever
> expanding series of interpretive circles: the author him or herself as
> context (biography, other writings), the author's immediate context (circle
> of people around him or her, books read and referred to), the author's
> social context, the genre in which the author is writing, etc. But I
> realized that I wasn't constructing the author necessarily, but a person
> from the author's own time period during the writing of the text that is
> external to the text. I am creating an imaginary primal *reader*, in other
> words, making the author out to be the first interpreter of his or her own
> text. But even this author's contemporaries could interpret the author's
> text differently and not necessarily be wrong. A text always exceeds the
> meanings that are intended for it.
>
> Does AI "win"? Not unless we're stupid enough to forget it is literally
> mindless. Ha... then I guess it does win in many cases. But that's the
> fault of the person, not AI. We're in control of the blank, stupid,
> mindless objects we choose to venerate. It used to be statues. Now it's
> computers.
>
> Anyway, the end result: AI generated text is always interpretable as if it
> were humanly written text, except in the case of hermeneutics, in which
> we're trying to reconstruct a specific human being in a specific time and
> place. But when we so interpret AI generated text, we are interpreting it *as
> if* it were written by a human being. And, as we both know, it's not. It's
> just the textual representation of a bunch of number crunching. Does that
> distinction make a difference? Not for most acts of interpretation, because
> we almost never study authors deeply before we read their texts. We read
> and interpret almost all texts as if the author didn't exist or was
> anonymous.
>
> Jim R



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php