
Humanist Discussion Group


Humanist Archives: Aug. 8, 2020, 7:47 a.m. Humanist 34.218 - on GPT-3

                  Humanist Discussion Group, Vol. 34, No. 218.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                Submit to: humanist@dhhumanist.org

    [1]    From: Mark Wolff 
           Subject: Re: [Humanist] 34.216: on GPT-3 (41)

    [2]    From: Gabriel Egan 
           Subject: Re: [Humanist] 34.216: on GPT-3 (33)

    [3]    From: Jim Rovira 
           Subject: Re: [Humanist] 34.216: on GPT-3 (38)

        Date: 2020-08-07 19:45:45+00:00
        From: Mark Wolff 
        Subject: Re: [Humanist] 34.216: on GPT-3

On Aug 7, 2020, at 2:41 AM, Humanist  wrote:

> Ferdinand de Saussure's famous structuralist model of language is also
> relational, but it is heterogeneous: all signifiers -- "sound images" for
> Saussure -- are of the same kind, and within the homogeneous set of signifiers
> -- all of them "sound images" --, each signifier is defined precisely by being
> different from all others. The same holds for all signifieds, concepts for
> Saussure. A sign is formed when a signifier is connected to a signified -- a
> sound image to a concept -- and thus when *categorically different* units,
> defined differentially within their own homogeneous systems, are brought
> together. Signification arises out of a *heterogeneous* system.

> (1) What happens when a large machine learning algorithm is fed two or more
> _different_ sets of inputs with the model tasked to build not one (as GPT-3),
> but two or more homogeneous relational systems which are categorically
> different from each other, and to connect them together, creating
> relationships between
> heterogeneous units and thus a structure of signification?

This is an interesting way to frame the question. The relationships between
signifiers can be mapped using neural networks and the relationships between
signifieds can be mapped using topic modeling. These are different approaches to
machine learning and therefore could, if combined, instantiate a heterogeneous
system for signification.
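Wolff's proposal could be sketched roughly as follows. This is my own toy illustration, not his implementation: one relational system built over signifiers (words defined differentially by their co-occurrences) and a second, categorically different system built over signifieds (here crudely approximated by document topics), with a "sign" formed by connecting a word in the first system to its profile in the second. The corpus, topic labels, and function names are all invented for the example.

```python
# Hypothetical sketch of a two-system, heterogeneous model of signification.
from collections import Counter, defaultdict

# Toy corpus: each document carries a (hand-assigned) topic label.
docs = [
    ("botany", ["tree", "leaf", "root", "branch"]),
    ("semiotics", ["word", "sign", "meaning", "concept"]),
    ("botany", ["tree", "branch", "leaf", "green"]),
    ("semiotics", ["sign", "signifier", "signified", "meaning"]),
]

# System 1 (signifiers): each word defined differentially by co-occurrence.
cooc = defaultdict(Counter)
for _, words in docs:
    for w in words:
        for other in words:
            if other != w:
                cooc[w][other] += 1

# System 2 (signifieds): each word's distribution over document topics.
topic_counts = defaultdict(Counter)
for topic, words in docs:
    for w in words:
        topic_counts[w][topic] += 1

# A "sign" connects the two heterogeneous systems for a given word.
def sign(word):
    return {"signifier_relations": dict(cooc[word]),
            "signified_profile": dict(topic_counts[word])}

print(sign("tree")["signified_profile"])  # {'botany': 2}
```

In practice the two systems would of course be a trained word-embedding model and a trained topic model rather than raw counts; the point of the sketch is only that the two relational structures are built independently and then connected.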

Mark B. Wolff, Ph.D.
Professor of French
Chair, Modern Languages
One Hartwick Drive
Hartwick College
Oneonta, NY  13820
(607) 431-4615


        Date: 2020-08-07 10:21:27+00:00
        From: Gabriel Egan 
        Subject: Re: [Humanist] 34.216: on GPT-3


Brigitte Rath wrote:

 > Within GPT-3, words only ever connect to other
 > words, they cannot connect to concepts or objects

The proof that GPT-3 does indeed encode concepts and
can use them in something like reasoning is surely its
performance at arithmetic. It is not surprising that
if you enter the string "2 + 2 =" into GPT-3 it
responds with the string "4", since as an answer to
the question "what comes next?" the "4" is predictable
because the training data doubtless contains some
examples of the string "2 + 2 = 4".

But if you enter into GPT-3 a three-digit addition or
subtraction such as "543 + 298 =" the correct answer (in
this case "841") comes back about 80-90% of the time. These
results are not the effect of the model memorizing all the
possible sums in its dataset -- demonstrably, these sums are
not present in the training data -- but rather the result
of the model embodying the principles of arithmetic,
including place-value and carrying.

This, surely, qualifies as the encoding of concepts.
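The place-value-and-carry procedure Egan invokes can be written out explicitly. This is an illustration of the arithmetic itself, added here for clarity, not a claim about how GPT-3 internally represents it:

```python
def add_by_place_value(a: int, b: int) -> int:
    """Add two non-negative integers digit by digit, carrying as needed."""
    da = [int(d) for d in str(a)][::-1]  # digits, least-significant first
    db = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        result.append(s % 10)   # digit for this place
        carry = s // 10         # carry into the next place
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in result[::-1]))

print(add_by_place_value(543, 298))  # 841, the example from the post
```

A model that answers unseen three-digit sums correctly must be doing something functionally equivalent to this carry procedure, which is the force of Egan's argument.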


Gabriel Egan

        Date: 2020-08-07 13:47:17+00:00
        From: Jim Rovira 
        Subject: Re: [Humanist] 34.216: on GPT-3

Willard --

Does the word mimesis apply to machine activity? Imitation is a function of
consciousness. Calling machine behavior mimesis answers the question ahead
of time; it's a kind of question begging. Aristotelian mimesis referred to
the creation of art forms (poetry, music, dance, art) by conscious beings.
We can only begin to talk about machines engaging in mimesis when they
imitate observed activity outside of their programming, and when the
imitation isn't rote but creative: a kind of interpretation or
re-presentation of an object or being that shows us not only the object but
communicates a person's understanding or experience of the object. So a
camera might produce an image, but the image doesn't communicate the
camera's individual experience of the object captured on film.

I think questions about consciousness are eminently worth asking, but I
think they can only be asked of organic beings. I think we need different
words to discuss machines: maybe processing? The act of executing a program.

Brigitte --

Very interesting bringing Saussure into this discussion. About this:

"What happens when a large machine learning algorithm is fed two or
more _different_ sets of inputs with the model tasked to build not one (as
GPT-3), but two or more homogeneous relational systems which are categorically
different from each other, and to connect them together, creating relationships
between heterogeneous units and thus a structure of signification?"

Do machines ever receive more than one different kind of input? It's all
code, isn't it? Voice commands are code, keyboard input is code, RFID is
code, and it's all the same kind of code: the language with which the
machine is programmed. I don't think the human brain processes vision the
same way it processes sound or touch (although it occurs to me I don't
know), so we can say that our brains have different inputs and outputs, but
do computers?

Jim R

Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.