              Humanist Discussion Group, Vol. 37, No. 525.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-04-04 08:47:57+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: talking to & from smart machines

This is mostly a question for those who have had the chance recently to
try out 'conversing' with one of the Large Language Models (LLMs).
Looking over a long transcript of an exchange between one of these and a
well-educated friend in computer science, at first I was amazed at the
agility of the LLM. (We've ceased to be surprised by such a reaction,
though we continue to marvel.) The longer I read, however, the stronger
my impression became that the LLM was behaving like a very eager and
adept student, or a very able ping-pong opponent. Had I not known that my
friend's partner in this exercise was an LLM, I might have been fooled,
but had I thought it a person, I would also have been baffled as to its
personality--blank, flat, dull. We know that a human can indeed appear
to us as having no personality, no life behind the mask, so to that
extent the LLM is a brilliant success.

In Truth and Method, Chapter 5, the philosopher Hans-Georg Gadamer
writes as follows:

> We say that we "conduct" a conversation, but the more genuine a
> conversation is, the less its conduct lies within the will of either
> partner. Thus a genuine conversation is never the one that we wanted
> to conduct. Rather, it is generally more correct to say that we fall
> into conversation, or even that we become involved in it. The way one
> word follows another, with the conversation taking its own twists and
> reaching its own conclusion, may well be conducted in some way, but
> the partners conversing are far less the leaders of it than the led.
> No one knows in advance what will "come out" of a conversation.
> Understanding or its failure is like an event that happens to us.
> Thus we can say that something was a good conversation or that it was
> ill fated. All this shows that a conversation has a spirit of its
> own, and that the language in which it is conducted bears its own
> truth within it—i.e., that it allows something to "emerge" which
> henceforth exists.

I'm not out to establish the inferiority of the machine, however smart,
but rather to ask what would need to be done to give an LLM the ability
to engage in a "genuine conversation", as Gadamer says--one in which the
user is not so much in control, and the LLM not so eager to please by
flattery (and so to help keep the research funding flowing).

Comments please.

Yours,
WM
--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews; Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php