Humanist Discussion Group

              Humanist Discussion Group, Vol. 38, No. 275.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-12-12 02:35:42+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 38.271: AI, poetry and readers

Thanks so much for your long and thoughtful reply, Tim. We are using words
a bit differently, but since you define your terms well and use them
consistently with your definitions, that worked for me: I could follow you
easily.

We both agree about what AI is and is not, and that it is fundamentally
different from what goes on in a human mind, even when both are producing
text.

My usual definitions, drawn from my prior reading in linguistics,
semiotics, and literary theory, are these: a "text" is any kind of
interpretable product, and "words" are only one kind of text. Images (say,
photos or paintings) are also texts, as are film, music (performed or on
paper), architecture, the design of a city, clothing, etc.: these are all
social texts that are interpretable.

But your emphasis on the mental product at the expense of the physical
medium is a bit idiosyncratic unless you really want to go far back, to,
say, Plato. Plato (well, Socrates) believed that writing itself was bad,
and that communication only occurred between two people in physical
proximity, talking to one another directly. What you're saying about
AI-generated text Plato said about physical writing in books, because that
writing is separate from its author and originary context, and you can't
really talk back to it. There's no dialog possible. There is no mind
present.

Semiotics, as I recall, identifies the "sign" as the vehicle of
communication, which could be either a spoken or written word, and the
"signified" as the mental object. The sign is made up of a sound image and
a mental image combined: say, the verbal expression "tree" and the mental
representation of a tree in the mind. This is from Saussure's Course in
General Linguistics. Very old. The sign then creates a new signified in
the recipient's mind, which may not (and probably doesn't) match the
signified in the mind of the speaker. In other words, when you say "tree,"
you picture a different tree in your head than I do. Your tree is based on
your memory and experience, and mine is based on my own. So my signified
is to an extent different from your signified. Later linguistic theory
gets more complicated (Chomsky, say), and it has been extensively studied.
The problem is that we can't directly observe the mind at work forming
words. We can read electrical impulses. That's it.

The mind's role in interpretation has been thoroughly discussed, though.
Plato started a way of thinking that associated a person's state of the
soul, so to speak, with the person's interpretive habits, which was carried
forward by the church fathers into Biblical interpretation. Gadamer
believed that interpretation was as natural as breathing; Lacan that
thinking itself is a syntax of sorts, etc. It goes on and on.

Anyway, the mental objects in every person's head are independent of every
other person's head. The only thing we really have in common is the medium
itself, which consists of the soundwaves produced by spoken words or the
written words on a page. That means the physical medium is not nothing: it
is everything. It is the only thing. We agree that it doesn't mean anything
until it is recovered by a mind and interpreted, though. A musical score is
something very different from your other examples. A trained musician can
read a musical score and hear music in his or her head the same way we can
read words on a page and hear a voice in our heads. And you're right that
both things are subject to interpretation, because the content of the
author's head is *not* communicated via the text, but rather formed by the
recipient's mind, a point on which we both agree. If somehow the author's
actual signified were directly communicated mind to mind, say through a
hive mind or telepathically, no interpretation would be necessary. But
that's never the case.

So if I were to read Shakespeare today, Shakespeare's mind no longer
exists. We no longer have access to whatever it was he was thinking while
writing, and he wasn't very good about telling us either. The mind of
origin is irrelevant to interpretation because it is now non-existent. All
that we have is the physical medium containing his words and our own minds.
That is why Plato disliked writing and preferred speech. Then the mind of
origin is present, in the present, as is the mind receiving the
communication, and through dialog closer approximations of shared meaning
are possible. But even Plato didn't believe in direct mind to mind
communication.

We could really get into Derrida here and discuss why he preferred writing
over speech. It begins with, of all things, his translation of a book about
geometry and triangles in 1958: the written version of a triangle better
approximates the reality than the spoken version. This was a great
innovation in western philosophy, a development that he classified as
phenomenology (it was Husserl's *Origin of Geometry* he was translating,
and that work on geometry was the basis of his phenomenology), not as some
kind of specifically literary theory. Phenomenology was an early
twentieth-century attempt to explain the workings of the mind in
relationship to language as well.

Now back to AI. I believe that with AI we are in the same situation as with
Shakespeare: we just have the text with no accessible mind present. I
started not by studying formalism or literary theory, but by studying
hermeneutics, which is the practice of Biblical interpretation. That
traditionally does try to recover authorial intent because the author is
the source of the authority of the text. Hermeneutics works through an
ever-expanding series of interpretive circles: the author him- or herself
as context (biography, other writings), the author's immediate context
(the circle of people around him or her, books read and referred to), the
author's social context, the genre in which the author is writing, etc.
But I realized that I wasn't necessarily constructing the author, but
rather a person from the author's own time period, contemporaneous with
the writing of the text yet external to it. I am creating an imaginary
primal *reader*, in other words, making the author out to be the first
interpreter of his or her own text. But even this author's contemporaries
could interpret the author's
text differently and not necessarily be wrong. A text always exceeds the
meanings that are intended for it.

Does AI "win"? Not unless we're stupid enough to forget it is literally
mindless. Ha... then I guess it does win in many cases. But that's the
fault of the person, not AI. We're in control of the blank, stupid,
mindless objects we choose to venerate. It used to be statues. Now it's
computers.

Anyway, the end result: AI-generated text is always interpretable as if it
were humanly written text, except in the case of hermeneutics, in which
we're trying to reconstruct a specific human being in a specific time and
place. But when we so interpret AI-generated text, we are interpreting it
*as if* it were written by a human being. And, as we both know, it's not.
It's just the textual representation of a bunch of number crunching. Does
that distinction make a difference? Not for most acts of interpretation,
because we almost never study authors deeply before we read their texts.
We read and interpret almost all texts as if the author didn't exist or
was anonymous.

Jim R


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php