              Humanist Discussion Group, Vol. 38, No. 208.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-10-27 13:42:44+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 38.204: what chatbots chat you into

Tim -

Thank you again for your response, always worth reading. We both agree, of
course, that we human beings continually use the same words to describe two
or more different things, and I agree that the key terms -- knowing,
understanding, and reasoning -- vary even within human use, let alone
between human and machine use. I think you did a beautiful job describing
how these terms apply to machines. They may be somewhat contested even in
that field, but you limited it to a specific machine use, so I could run
with your definitions.

Yes, equivocation is a natural fallacy in human language use, and is
sometimes intentional, so we have to work to avoid it. I don't know that
the human sciences are developed enough to provide a consensus on those
terms, which I suspect would be specific to the disciplinary approach. What
I think we can do is identify the broad range of possibilities for those
terms as they apply to humans and then point out similarities (some) and
differences (many) when applied to machines. But in the meantime,
equivocation always makes headlines and generates entertaining science
fiction. My last response pointed in this direction: all human cognitive
activities include affect -- motives, feelings, history, memory, ambition,
etc. -- even when they are engaged in raw calculation. The machine never
asks why it is calculating or has a motive for it. Reasons for doing it are
not motives for doing it.

I would like to respond to your response to Willard below, however. In
theory and interpretation in the humanities, on a very simple level
communication is divided into three parts: the communicator, the medium
(text, images, etc.), and the recipient. Different theories of
interpretation locate meaning differently on that scale: intentionalists
say the meaning lies with the communicator, formalists with the text, and
reader response theory (for example) with the recipient, while some
theories, say structuralism or historicist approaches, expand the text to
include a social context that may embrace the communicator but not the
recipient, or the recipient and not the communicator, or all three, and
then people like Derrida might leverage formalism or, better,
phenomenology, against structuralism. And the whole thing gets increasingly
complicated as it comes to involve sociology, philosophy, psychology,
linguistics, etc.

So, I don't think Willard was out of line in saying the "message" can
influence people's thinking. He can say that without attributing agency or
meaning to the source (the chatbot). That sentence can locate meaning
formally in the arrangement of words on the screen or, in a reader-response
way, in the recipient's consumption of the words, or of course both. But I
also think you make a good point about our use of the term "chatbot": it
inherently creates a false impression. No one "chatting" with a chatbot is
actually chatting with it in any way comparable to a chat with a human
being.

Jim R


> And, Willard, this is what I see going on when we, humans,
> read text from automatic text generators, such as ChatGPT, as
> if we are reading writing.  Illustrated by your quotation of
> James Vincent [Humanist 38.188]
>
>   "...  messages generated by a chatbot have the potential to
>    change minds, as any form of writing does."
>
> To attribute any such "mind changing" to messages from a
> chatbot is, I think, seriously mistaken.  In this case it is
> the mind owner who does any mind changing, not messages from a
> chatbot.  This confusing of artificially generated text with
> writing, which, as we know, is easy to do, is terminological
> mush in action, I would say.  It's real conversation that can
> change our minds.  Chatbots don't chat.  We don't have
> conversations with chatbots.  Thinking we do is yet another
> example of McDermott's natural stupidity.  It's the cause of
> what I call Weizenbaum's "ELIZA mind trap."  It's mistaking
> Artificial Flower type AI for Artificial Light type AI.


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php