Humanist Discussion Group


				
              Humanist Discussion Group, Vol. 36, No. 468.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-03-22 02:38:18+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 36.463: followup: agency & intelligence

Many thanks to Tim, Mauricio, and Willard for their replies. I feel a bit
silly needing to be reminded again of Milton's Areopagitica, but that is
perhaps the key sentence in the key text for our discussion. I like
Mauricio's comments as well. Since the AI is a human product, how does human
agency relate to the product of AI? We can say that humans programmed the
AI to produce text, and to produce text in a certain way, but not
necessarily to produce any *specific* text. Had it been programmed to
produce a specific text, it wouldn't be an AI, but a kind of advanced
photocopier.

The real issues come out in Tim's distinction between intentional and
unintentional agency. But these aren't new issues. It seems to need no
defense to say that an author writes with intentional agency. The problems,
historically (and the issue has been critically beaten to death since the
1940s, I think), are several:

   - First, it's not always true that authors write with intentional
   agency. Sometimes (perhaps rarely) authors don't really know where they're
   going with a piece of writing. Does that mean the text has no meaning?
   - Next, intentional agency would seem to be relevant only at the moment
   of composition. It is not unusual for authors to forget, change their minds,
   or, worse, lie about whatever it was that they were thinking when they wrote
   the text down. Yes, they do lie. They say one thing in letters around the
   time of composition and then something else entirely in interviews or
   autobiographies. We're talking about people who make up stories for a
   living, after all.
   - Other times, authors simply don't ever say. They write, they die, they
   leave no record of what they were thinking. What do we do in that case? How
   do we read an author's mind retroactively? My own thinking about this
   subject goes along these lines: the work that we do to recover an author's
   probable intent is not at all an act of mind reading. What we're really
   doing is recreating a reading community -- imagined readers who have the
   author's background, interests, ideas, past reading, etc. -- and ascribing
   likely interpretations to that reading community.
   - Finally, there's the fact of polysemy, which has long been recognized.
   It was described in Plato's dialogues, repeated by the early church fathers
   (Origen and Augustine), codified and systematized by Aquinas, reaffirmed by
   Dante, and then extended to literary interpretation rather late in this
   history. This fact alone, that a sufficiently complex text is capable of
   producing a number of different, and sometimes conflicting, meanings
   simultaneously, means that textual meaning can't be limited to authorial
   intent. Authorial intent, if it *could be* determined, would only consist
   of a limited range of these possible meanings, which aren't infinite, but
   are more than any one author could reasonably be expected to plan into
   the work.

That's what we do with texts produced by *living authors*. AI-generated
text? I would say its meaning would be entirely dependent upon the reading
community that receives it because, indeed, it has no intentional agency.
So a lack of intentional agency does not mean we have to ascribe
meaninglessness to a text. Whatever a text's origin, the text's *meaning(s)*
reside in its words, and those words are communal property.

But in this way, we're putting AI-generated text on an interpretive level
equal to that of humanly generated (written) text. The living human author
does not own his or her language exclusively any more than the machine does,
the human being does not express intention except through words, and we have
no access to the author's intention except through his or her words. So at
the receiving end, we simply and always interpret the words, not the author.
The communal property of language is what makes communication possible.

I would say that honesty and charity require that we ascribe meaning to
the text and intention to the author, but that we never equate meaning with
intention except with the author's permission. I can say what I think this
text means, but I can't always say with certainty that the author so intended.
In the case of AI, we can discard the need for charity and concern
ourselves only with meaning, not intention.

Jim R

--
Dr. James Rovira <http://www.jamesrovira.com/>

   - *David Bowie and Romanticism
   <https://jamesrovira.com/2022/09/02/david-bowie-and-romanticism/>*,
   Palgrave Macmillan, 2022
   - *Women in Rock, Women in Romanticism
   <https://www.routledge.com/Women-in-Rock-Women-in-Romanticism-The-Emancipation-of-Female-Will/Rovira/p/book/9781032069845>*,
   Routledge, 2023


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php