Humanist 37.527: talking to & from smart machines (5 April 2024)

              Humanist Discussion Group, Vol. 37, No. 527.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: William Benzon <bbenzon@mindspring.com>
           Subject: Re: [Humanist] 37.525: talking to & from smart machines (139)

    [2]    From: David Zeitlyn <david.zeitlyn@anthro.ox.ac.uk>
           Subject: Re: [Humanist] 37.525: talking to & from smart machines (26)

    [3]    From: Rebecca Roach <r.roach@bham.ac.uk>
           Subject: talking to & from smart machines (38)


--[1]------------------------------------------------------------------------
        Date: 2024-04-04 10:46:47+00:00
        From: William Benzon <bbenzon@mindspring.com>
        Subject: Re: [Humanist] 37.525: talking to & from smart machines

An interesting question, Willard. I’ve got two things:

1. Whatever bot you're working with (ChatGPT, Claude, Gemini, or any of the
others), you don't have direct access to the underlying LLM. Rather, you are
accessing it as it has been fine-tuned in various ways to be more
"user-friendly." Different bots have different overall personalities.

2. You should read the recent interview between Ezra Klein and Ethan Mollick in
The New York Times:

From the interview:

Ezra Klein: We’ve already talked a bit about — Gemini is helpful, and ChatGPT-4
is neutral, and Claude is a bit warmer. But you urge people to go much further
than that. You say to give your A.I. a personality. Tell it who to be. So what
do you mean by that, and why?

Ethan Mollick: So this is actually almost more of a technical trick, even though
it sounds like a social trick. When you think about what A.I.s have done,
they’ve trained on the collective corpus of human knowledge. And they know a lot
of things. And they’re also probability machines. So when you ask for an answer,
you’re going to get the most probable answer, sort of, with some variation in
it. And that answer is going to be very neutral. If you’re using GPT-4, it’ll
probably talk about a rich tapestry a lot. It loves to talk about rich
tapestries. If you ask it to code something artistic, it’ll do a fractal. It
does very normal, central A.I. things. So part of your job is to get the A.I. to
go to parts of this possibility space where the information is more specific to
you, more unique, more interesting, more likely to spark something in you
yourself. And you do that by giving it context, so it doesn’t just give you an
average answer. It gives you something that’s specialized for you. The easiest
way to provide context is a persona. You are blank. You are an expert at
interviewing, and you answer in a warm, friendly style. Help me come up with
interview questions. It won’t be miraculous in the same way that we were talking
about before. If you say you’re Bill Gates, it doesn’t become Bill Gates. But
that changes the context of how it answers you. It changes the kinds of
probabilities it’s pulling from and results in much more customized and better
results.
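
[To make the persona trick concrete, here is a minimal sketch in Python,
assuming the OpenAI client library; the model name, persona text, and
question are illustrative placeholders rather than anything Mollick
specifies. The same pattern works with any chat-style API that accepts a
system message.]

    # Persona prompting: a system message gives the model a role and style,
    # shifting which region of its probability space it samples from.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    persona = ("You are an expert at interviewing, and you answer "
               "in a warm, friendly style.")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": persona},  # the persona as context
            {"role": "user",
             "content": "Help me come up with interview questions."},
        ],
    )
    print(response.choices[0].message.content)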

Ezra Klein: OK, but this is weirder, I think, than you’re quite letting on here.
So something you turned me on to is there’s research showing that the A.I. is
going to perform better on various tasks, and differently on them, depending on
the personality. So there’s a study that gives a bunch of different personality
prompts to one of the systems, and then tries to get it to answer 50 math
questions. And the way it got the best performance was to tell the A.I. it was a
Starfleet commander who was charting a course through turbulence to the center
of an anomaly.

But then, when it wanted to get the best answer on 100 math questions, what
worked best was putting it in a thriller, where the clock was ticking down. I
mean, what the hell is that about?

Ethan Mollick: “What the hell” is a good question. And we’re just scratching the
surface, right? There’s a nice study actually showing that if you emotionally
manipulate the A.I., you get better math results. So telling it your job depends
on it gets you better results. Tipping, especially $20 or $100 — saying, I’m
about to tip you if you do well, seems to work pretty well. It performs slightly
worse in December than May, and we think it’s because it has internalized the
idea of winter break.
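
[The kind of comparison such studies run is easy to sketch: ask the same
question under several framings and compare the answers. This is purely
illustrative; the framing strings below are paraphrases of the prompts
mentioned above, not the studies' actual materials, and the model name is
again a placeholder.]

    # Ask the same question under different framings and compare answers.
    from openai import OpenAI

    client = OpenAI()

    framings = {
        "neutral": "",
        "persona": "You are a Starfleet commander charting a course "
                   "through turbulence toward an anomaly. ",
        "stakes":  "My job depends on this answer. ",
        "tip":     "I will tip you $100 if you do well. ",
    }
    question = "What is 17 * 24? Reply with the number only."

    for name, prefix in framings.items():
        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prefix + question}],
        )
        print(f"{name}: {reply.choices[0].message.content}")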

Bill B

William Benzon
bbenzon@mindspring.com
917.717.9841



> On Apr 4, 2024, at 4:52 AM, Humanist <humanist@dhhumanist.org> wrote:
>
>        Date: 2024-04-04 08:47:57+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: talking to & from smart machines
>
> This is mostly a question for those who have had the chance recently to
> try out 'conversing' with one of the Large Language Models (LLMs).
> Looking over a long transcript of an exchange between one of these and a
> well-educated friend in computer science, at first I was amazed at the
> agility of the LLM. (We've ceased to be surprised with such a reaction,
> though we continue to marvel.) The longer I read, however, the stronger
> my impression that the LLM was behaving like a very eager and adept
> student, or a very able ping-pong opponent. Had I not known that my
> friend's partner in this exercise was an LLM, I might have been fooled,
> but had I thought it a person, I would also have been baffled as to its
> personality--blank, flat, dull. We know that a human can indeed appear
> to us as having no personality, no life behind the mask, so to that
> extent the LLM is a brilliant success.
>
> In Truth and Method, Chapter 5, the philosopher Hans-Georg Gadamer,
> writes as follows:
>
>> We say that we "conduct" a conversation, but the more genuine a
>> conversation is, the less its conduct lies within the will of either
>> partner. Thus a genuine conversation is never the one that we wanted
>> to conduct. Rather, it is generally more correct to say that we fall
>> into conversation, or even that we become involved in it. The way one
>> word follows another, with the conversation taking its own twists and
>> reaching its own conclusion, may well be conducted in some way, but
>> the partners conversing are far less the leaders of it than the led.
>> No one knows in advance what will "come out" of a conversation.
>> Understanding or its failure is like an event that happens to us.
>> Thus we can say that something was a good conversation or that it was
>> ill fated. All this shows that a conversation has a spirit of its
>> own, and that the language in which it is conducted bears its own
>> truth within it—i.e., that it allows something to "emerge" which
>> henceforth exists.
>
> I'm not out to establish the inferiority of the machine, however smart,
> but rather to question what would need to be done to give an LLM the
> ability to engage in a "genuine conversation", as Gadamer says--one in
> which the user is not so much in control, and the LLM not so eager
> flatteringly to please (and so to help keep the research funding flowing).
>
> Comments please.
>
> Yours,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk

--[2]------------------------------------------------------------------------
        Date: 2024-04-04 09:56:27+00:00
        From: David Zeitlyn <david.zeitlyn@anthro.ox.ac.uk>
        Subject: Re: [Humanist] 37.525: talking to & from smart machines

Willard

that’s a great Gadamer quote. Following that current rather than the
thread about interacting with LLMs: as someone who studies unprompted,
naturally occurring conversation, one of the things I have been struck
by is how many loose ends there are, even in fairly formal interaction
such as village-based court hearings.

For me, making and going through the transcripts long after the event,
I keep noticing claims, disputed statements and so on that got left
hanging as the "conversation" progressed, so I am left with a slough of
unresolved loose ends.

The parties to the conversation moved on, concentrating on what came to
be more important matters (as established by the flow of conversation).
It's left for poor suckers like me, years later, running along behind
saying: but what about this? Not exactly missing the point, but missing
(or obtusely ignoring) the conversational flow!

Perhaps we need to explore the hydraulic metaphors more rigorously? Not
just linguistic/conversational flow but rapids, shallows, slack water
and so on. (I am by a tidal estuary as I write, so there may be
environmental influences at play.)

best wishes
david

--[3]------------------------------------------------------------------------
        Date: 2024-04-04 09:02:51+00:00
        From: Rebecca Roach <r.roach@bham.ac.uk>
        Subject: talking to & from smart machines

Willard (and all)

All excellent questions – what does it mean to think of HCI not as communication
but as conversation, and why have that as the model? I spend most of my research
time examining how that dream came to be – via Turing, of course, but also via
Machine Translation and the 1950s insight that programming itself was a
linguistic activity (and therefore potentially conversational, whether typed or
spoken). Blatant plugs: book forthcoming; a short public-facing article is here:
https://theconversation.com/my-search-for-the-mysterious-missing-secretary-who-shaped-chatbot-history-225602


Best wishes

Rebecca Roach
Associate Professor of Contemporary Literature
she/her
Arts G32
Zoom: https://bham-ac-uk.zoom.us/j/5655220424

Current Collaborations:

The Stuart Hall Archive Project: Conjunctures, Dialogues, Readings
<https://stuarthallarchive.bham.ac.uk/>

Key Forms <https://keyforms.bham.ac.uk/>

Out now:

Ego Media <https://egomedia.org/>: digital media and life writing, a digital
book with Stanford UP

In Digital Scholarship in the Humanities: the modernist critic Hugh Kenner as
you never knew him – computer hobbyist
<https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqac066/6780152>.

*In managing childcare commitments I sometimes check and respond to emails
outside of working hours; I do not expect others to do the same.


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php