Humanist Archives: March 5, 2022 - Humanist 35.569: ELIZA and conversation

              Humanist Discussion Group, Vol. 35, No. 569.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Dr. Herbert Wender <drwender@aol.com>
           Subject: Re: [Humanist] 35.567: considering ELIZA: technical improvements & consequences (30)

    [2]    From: Willard McCarty <willard.mccarty@mccarty.org.uk>
           Subject: beyond ELIZA (37)


--[1]------------------------------------------------------------------------
        Date: 2022-03-04 21:46:42+00:00
        From: Dr. Herbert Wender <drwender@aol.com>
        Subject: Re: [Humanist] 35.567: considering ELIZA: technical improvements & consequences

Willard

In the postings today by Simon Rae and Gioele Barabucci I see two different
perspectives, a psychological and an engineering point of view. I would add
a third: an epistemic one.

If we use yesterday's CfP [35.564] to characterize the task in question, maybe
we can - following Simon's focus on the user interface - reformulate "How do
we calibrate or modulate our (dis)trust when it comes to sources of information"
from a psychologist's viewpoint: what kind of features in the interaction with
the addressees are best suited to induce delusional thinking?

An engineer's viewpoint - as Gioele's listing shows - focuses on the hardware
and software options in configuring the system. The focus on technological
developments is surely the right one if we ask 'What is AI, and to what ends
do we use it?' and answer: like most technical inventions, it is a means to
make life easier.

Additionally, I would like to recall that initially the NLP questions raised in
the context of experimenting with 'block worlds', expert systems and QA systems
were epistemic and not psychological. Surely the latter is nearer to Turing's
test situation, but in those times, when Chomsky's model of deep structure,
transformation rules and surface appearance dominated the field, the efforts
to build realistic QA systems were not least directed toward understanding
how natural languages function by building production systems in a similar
way. IMHO it is to be regretted that, given the alternative between the so-
called 'intelligent' and the 'power' approach, the latter always predominates.

Herbert


--[2]------------------------------------------------------------------------
        Date: 2022-03-04 06:34:47+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: beyond ELIZA

Thanks to Simon Rae and Gioele Barabucci for spelling out differences
that a new ELIZA could take advantage of. Perhaps on that basis we can
dive into more difficult questions.  But in case I have the wrong end of
the stick, I'll ask further questions and hope for better ones.

Would it be correct to say that all we've accomplished with these
improvements and changes is to become better at manipulating character
strings and doing it faster with a greater supply? In the act of writing, we
summon contexts and histories of use of words that we all share, more
or less. (I realise this is a very poor description of what we do;
perhaps someone here could supply a better one?) This is not, I think,
what GPT-3 and its diverse kin are doing; they're doing what digital
machines can do.
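
(To make 'manipulating character strings' concrete, the following is a
minimal sketch - in Python rather than Weizenbaum's original MAD-SLIP - of
the kind of keyword-spotting, pronoun reflection and template substitution
ELIZA performed; the rules and wording here are invented for illustration,
not drawn from the original script.)

    import re

    # ELIZA-style string manipulation: spot a keyword, reflect pronouns
    # in the matched fragment, and slot it into a canned template.
    # Illustrative rules only; not Weizenbaum's original script.

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        """Swap first-person words for second-person ones."""
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # fallback when no keyword matches

    print(respond("I am worried about my exams"))
    # -> How long have you been worried about your exams?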

I would assume that as these machines (and the mathematical techniques
used by software) get better at this imitation game, more people will
more often be fooled by their performance. Impressive but, from a
non-technical perspective, both boring and dangerous.  Far more
interesting, I'd think, would be if this game were to reveal more about
our use of language than we knew, thus restarting the game. Furthermore,
the surfacing of machinic anomalies would give us an opportunity to
explore a different mode of intelligence more fruitfully.

For all ELIZA's primitive nature, and despite its maker's outrage at what
people did with it, Weizenbaum stumbled on the conversational relation
with machines. So, my question is, what sort of conversation do we want
to have, and to what end?

Comments?

Yours,
WM
--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews; Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php