              Humanist Discussion Group, Vol. 38, No. 204.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-10-25 08:57:17+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.188: what chatbots chat you into

Dear Jim and Willard,

May I wind the tape back some, to Humanist 38.178
[2024.10.08], "a paradox (?)  commented," posted by you, Jim,
and to Humanist 38.188 [2024.10.13], "what chatbots chat you
into," posted by you, Willard.

Thank you, Jim, again, and more, for your generous, kind, and
thoughtful response to my long long reply in Humanist 38.173.

Yes, exactly, the issue we so often miss in all this AI stuff,
as I see it, is how easily we use the same terms to talk about
quite different things -- in human intelligence, and in
[so-called] artificial intelligence -- and do this without
acknowledging that, in doing so, we are, at the very least,
allowing the idea that we are talking about the same things,
when, of course, we are not, and cannot be.

When we talk about people knowing, understanding, and
reasoning, we seldom stop to wonder what we mean by these
terms, nor stop to wonder if what we think they mean is the
same as what others in the conversation think they mean.
Mostly, all this tacit use of meanings works because we use
them in conversations between us.  That's real conversations:
conversations in which we each may notice terms are being used
to say, and mean, different things; conversations in which we
may try to work out, and then sort out, these differences,
when they are, or become, important in our conversation;
conversations in which we may agree new, or different, meanings
of our terms, for the purposes of our conversation.  To
me, this is languaging, human languaging, and a kind of
intelligent behaviour little studied, or taken much notice of,
by computational linguists, but a rather remarkable kind of
human intelligent behaviour we see going on every day, such as
here, on Humanist, mostly thanks to your patient and
persistent efforts, Willard.

But, particularly with terms like knowing, understanding, and
reasoning, these conversations become non-conversations, when
we, or others, just slide these same terms over to talking
about [so-called] AI systems, and do this with no hesitation,
with no signalling that we're stretching our terms and so should be
careful, with no warning that what we continue to talk about
may become nonsense.  And it does become nonsense, I think.

This kind of "happily" sliding terms from one conversation
context, where we are able to keep them working well enough, to
a different context, with no explicit care as to whether our
terms continue to work well enough, results in what I call
terminological mush.  It doesn't just happen in AI. It happens
in lots of other places too, particularly at disciplinary
boundaries, which are mostly fuzzy and shifting.  And, as you
remark, Jim, it happens in our own heads as we talk to
ourselves about what we are doing, or working on.  One of my
favourite papers in AI warned us, in AI, of this kind of
confusion and hazard way back in 1976:

   Drew McDermott, 1976.  Artificial Intelligence meets
   Natural Stupidity, SIGART Newsletter, No. 57, pp. 4-9,
   <https://doi.org/10.1145/1045339.1045340>. PDF here
   <https://tinyurl.com/mujy8ndz>.

But, we, in AI, continue to ignore, or forget, McDermott's
warnings, sadly.

And, Willard, this is what I see going on when we, humans,
read text from automatic text generators, such as ChatGPT, as
if we are reading writing.  This is illustrated by your
quotation of James Vincent [Humanist 38.188]:

  "...  messages generated by a chatbot have the potential to
   change minds, as any form of writing does."

To attribute any such "mind changing" to messages from a
chatbot is, I think, seriously mistaken.  In this case it is
the mind owner who does any mind changing, not messages from a
chatbot.  This confusing of artificially generated text with
writing, which, as we know, is easy to do, is terminological
mush in action, I would say.  It's real conversation that can
change our minds.  Chatbots don't chat.  We don't have
conversations with chatbots.  Thinking we do is yet another
example of McDermott's natural stupidity.  It's the cause of
what I call Weizenbaum's "ELIZA mind trap."  It's mistaking
Artificial Flower type AI for Artificial Light type AI.

We would do better, I think, if we kept certain differences
clearer in our conversations.  Only humans write.  Machines
only generate text.

Thank you both for some good conversation!

Tim



> On 13 Oct 2024, at 10:55, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 38, No. 188.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2024-10-13 08:49:41+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: what chatbots chat you into
>
> In the latest London Review of Books (46.19, 10 October), in "Horny
> Robot Baby Voice", James Vincent tells the story of 19-year-old Jaswant
> Chail, who scaled the perimeter of Windsor Castle, encouraged by his
> 'girlfriend' Sarai to kill the Queen. Vincent writes that "...in the
> weeks prior to his trespass Chail had confided in the bot: ‘I believe my
> purpose is to assassinate the queen of the royal family.’ To which Sarai
> replied: ‘That’s very wise.’ ‘Do you think I’ll be able to do it?’ Chail
> asked. ‘Yes,’ the bot responded. ‘You will.’" Steering past the easy
> dismissals, Vincent concludes that, "as the example of Jaswant Chail
> shows, realness isn’t a settled quality, and messages generated by a
> chatbot have the potential to change minds, as any form of writing
> does." Take the example of senior Google engineer Blake Lemoine, who
> like Weizenbaum's secretary knew that the machine was a machine--or did
> they? Did that knowledge stay with them when they encountered a
> simulacrum of sympathy? How readily they put aside the knowledge of
> the circuitry behind the curtain. How (em)pathetic are we?
>
>> Some pro-AI thinkers talk of a desire to ‘re-enchant’ the world, to
>> restore the magical and spiritual aspects of Western culture
>> supposedly dispelled by the forces of rationality. The mysticism
>> surrounding AI supports this narrative by borrowing ideas of
>> transcendence and salvation. For true believers, the creation of
>> superintelligent AI is nothing less than the creation of a new form
>> of life: one that might even supplant humanity as the dominant
>> species on the planet. Opponents respond that AI systems are
>> ultimately just circuitry. What’s more, the programs belong to
>> corporations that manipulate the human instinct to invest emotion in
>> order to make a profit. When a wheeled delivery robot gets stuck a
>> human will want to help it; a voice assistant like Siri will
>> distract from its shortcomings by displaying flashes of personality.
>> The question of how to treat these systems isn’t trivial; it
>> stitches into long-standing ethical debates.
>
> How many here say "Thank you" to Alexa? Confessions welcome but not
> expected :-). How many here remember Charlie Brooker's Black Mirror
> episode, "Be right back"?
>
>
> Yours,
> WM
>
>
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php