Humanist Discussion Group

Humanist Archives: 28 October 2021 - Humanist 35.327: psychoanalysis of a digital unconscious &c.

              Humanist Discussion Group, Vol. 35, No. 327.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: John Wall <jnwall@ncsu.edu>
           Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious? (75)

    [2]    From: Tim Smithers <tim.smithers@cantab.net>
           Subject: Re: [Humanist] 35.316: psychoanalysis by a digital doctor? (98)

    [3]    From: maurizio lana <maurizio.lana@uniupo.it>
           Subject: Re: [Humanist] 35.325: psychoanalysis of a digital unconscious, design of systems and affective computing (188)


--[1]------------------------------------------------------------------------
        Date: 2021-10-27 13:32:09+00:00
        From: John Wall <jnwall@ncsu.edu>
        Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious?

Willard,

I know absolutely nothing about the subject under discussion, but I am
reminded of a very old computer program called ELIZA, in which the program
interacted with a human user through a series of formulaic responses loosely
modeled on a particular style of psychotherapeutic practice.
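
For anyone curious about the mechanics, a minimal sketch of that kind of
keyword-reflection loop in Python might look like the following (the
patterns here are hypothetical stand-ins, not Weizenbaum's actual DOCTOR
script, which used a much larger keyword table with ranked rules):

import re
import random

# A few reflection rules in the spirit of the DOCTOR script; each pairs
# a keyword pattern with templates that echo the match back as a question.
RULES = [
    (r"I need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"I am (.*)", ["How long have you been {0}?",
                    "Why do you tell me you are {0}?"]),
    (r".* mother .*", ["Tell me more about your mother."]),
]
DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(utterance: str) -> str:
    # Try each rule in order; reflect the matched fragment back.
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"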

The discovery was that users were likely to respond in ways
suggesting that they were imaginatively creating an "other" with
whom they were interacting. They often found the interaction
helpful, much as if the program were a therapist.

So the project was yet one more testimony to the creative powers of the
human imagination. Or as Duke Theseus puts it, the human imagination "bodies
forth/ The forms of things unknown . . . /Turns them to shapes, and gives
to aery nothing/ A local habitation and a name."

JNW

On Thu, Oct 21, 2021 at 2:11 AM Humanist <humanist@dhhumanist.org> wrote:

>                   Humanist Discussion Group, Vol. 35, No. 312.
>         Department of Digital Humanities, University of Cologne
>                                 Hosted by DH-Cologne
>                        www.dhhumanist.org
>                 Submit to: humanist@dhhumanist.org
>
>
>
>
>         Date: 2021-10-20 06:19:16+00:00
>         From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>         Subject: a digital 'unconscious'?
>
> We know that a computing system, hardware and operating system software,
> is many-layered, from the hardware circuitry, firmware and the many
> abstraction layers up to the user interface.
>
> For purposes of argument, let's call what the user sees and can know
> from a running machine its 'consciousness', i.e. that of which we can be
> consciously aware. Let's also call everything that the user cannot know
> directly the machine's 'unconscious'. In the former, we can easily spot
> design choices, perhaps construable as prejudices, e.g. in favour of
> right-handed people, or those who demand bright colours and active
> movement in the interface. In the latter, let us say in the role of a
> systems psychoanalyst, I assume we can find unhealthy quirks, a.k.a.
> prejudices.
>
> Here is my question. In principle how deep, down through the abstraction
> layers, can there be such quirks? Prejudice-hunting is these days in
> full swing, so I expect this question may have been considered at
> length. But critically speaking, under what conditions, at how deep a
> level can choices recognisable as cultural biases be found?
>
> Comments?
>
> Yours,
> WM
>
>
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk

--
John N. Wall
Professor of English Literature
NC State University
Principal Investigator for
The Virtual St Paul's Cathedral Project
https://vpcathedral.chass.ncsu.edu/
The Virtual Paul's Cross Project
https://vpcross.chass.ncsu.edu/

--[2]------------------------------------------------------------------------
        Date: 2021-10-27 09:30:47+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 35.316: psychoanalysis by a digital doctor?

Dear Willard,

When we see or hear someone mimicking or imitating someone
else, we may find it convincing, perhaps entertaining, but
this does not, I think, usually dissolve our understanding
that the one doing the imitating has not become another
instance of the one being imitated: mimicry does not result in
replication; it's only a look-alike.

So why, I keep wondering, do we think that systems built using
so-called Deep Learning techniques, with massive amounts of
data, that imitate, often convincingly, some things people can
do, are replications of what people do?

Did Deep Blue (II) play chess or just imitate chess playing?
Did it just look like it played chess?  I'm serious.  Garry
Kasparov had to play chess to engage with Deep Blue in the
intended way, for sure.  Deep Blue moved its chess pieces in
legal ways, and in ways that made it hard, and sometimes
impossible, for Kasparov to win the chess game.  Did Deep Blue
know it had won, in the way Kasparov knew he had won, when he
did?  Deep Blue could detect the legal end of a game, sure,
and which colour had won, sure, but this is not winning as it
was for Kasparov.  Could Deep Blue explain its chess moves
as Kasparov could, and did, in ways that other chess players
could understand and appreciate?  No!
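
To make that contrast concrete: detecting the legal end of a game
takes only a few library calls and involves nothing one could call
knowing.  A minimal sketch, assuming the python-chess library (Deep
Blue's own code is, of course, not public):

import chess  # pip install python-chess

# Fool's mate, the shortest possible checkmate.
board = chess.Board()
for move in ["f3", "e5", "g4", "Qh4"]:
    board.push_san(move)

# The program can report that the game has legally ended and which
# colour won, but this is pattern detection, not Kasparov's
# understanding of what winning means.
outcome = board.outcome()
print(board.is_checkmate())  # True
print(outcome.winner)        # False, i.e. chess.BLACK: Black wins
print(outcome.result())      # "0-1"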

Deep Blue imitated chess playing, well.  And, by extension,
GPT-3-like systems and their relatives mimic other kinds of
human behaviour, usually highly skilled behaviour.  They
don't, in my humble opinion, replicate it.  Nowhere near.  I
don't understand why we persist in not seeing this, not saying
this, and not questioning the purpose and uses of this
imitating.  I don't want to say there are no good uses.  There
are.  I know a few.  But it's still the use of an imitation,
not of a real replication.  I thought this was what Joseph
Weizenbaum showed us with ELIZA.

We need to remove our Deep Blue glasses and see the world for
what it really is, I suggest.

A last thought.  Have you noticed that convincing mimicry is
often built upon identifying, capturing, and performing, often
with added exaggeration, the biases and prejudices of the one
being mimicked?

Off to get my big (multi-layer) shield :)

Best regards,

Tim



> On 23 Oct 2021, at 07:35, Humanist <humanist@dhhumanist.org> wrote:
>
>                  Humanist Discussion Group, Vol. 35, No. 316.
>        Department of Digital Humanities, University of Cologne
>                               Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2021-10-22 08:27:51+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: psychoanalysis continued
>
> My thanks to Henry and Hartmut for better informed responses to my
> question on "psychoanalysis of a digital unconscious". More of that kind
> would be very welcome. But let me change the question slightly.
>
> Let's imagine an updated version of Weizenbaum's ELIZA in its
> psychoanalytic application combined with something like an expert system
> (remember those?) covering the subject area of the human interlocutor's
> field of interest. And since we're imagining all this, let's assume
> whatever computational power might be required. Let's say we have an
> artificial doctor that learns dynamically from conversation with the
> human, then responds by rephrasing the conversation, producing examples
> --or writing up a summary conclusion.
>
> So far, I take it, systems have been designed to mimic, based on their
> training sets. Is it conceivable given what we know now that such a
> GPT3-like system could be designed not to mimic based on learned biases
> but to deviate or extend 'beyond the information given' so as to
> illuminate these biases?
>
> Comments?
>
> Yours,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk


--[3]------------------------------------------------------------------------
        Date: 2021-10-27 06:45:41+00:00
        From: maurizio lana <maurizio.lana@uniupo.it>
        Subject: Re: [Humanist] 35.325: psychoanalysis of a digital unconscious, design of systems and affective computing

Hi Tim,
interesting message.
When I read

It is mostly difficult, sometimes very difficult, if not
impossible in practice, to anticipate the consequences of all
our design and construction decisions, especially in
complicated systems like computing systems.  Suggesting, as
you seem to do, that when we discover some kind of unjust,
unfair, unacceptable discrimination happening when our system
is used we can properly attribute this to some design
decision, at some "level," seems to me to presume a rather
simplistic idea of how these complicated systems work.

and particularly the words "when we discover some kind of unjust,
unfair, unacceptable discrimination happening when our system is
used", I wonder whether it has ever happened that a system produced
some kind of "positive" discrimination: let's say that it granted
life insurance to ill persons as well, or that it granted conditional
release to black people more often than to white people, and so on.
Because what we usually see in the discrimination produced by AI
software systems is that it matches the worst discrimination humans
practise, never its contrary.
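
A toy sketch of why the mirroring goes only one way: a model fitted to
historically biased decisions reproduces the correlation it is given,
because nothing in the data points the other way (synthetic data and
scikit-learn here, purely as an illustrative assumption):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # protected attribute (0 or 1)
merit = rng.normal(size=n)      # what should decide the outcome
# Historical decisions penalise group 1 regardless of merit.
label = (merit - 0.8 * group + rng.normal(scale=0.5, size=n) > 0)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, label.astype(int))

# The learned weight on `group` comes out negative: the model
# reproduces the bias it was shown, never its contrary.
print(model.coef_)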

You write that you "also think it's unfair to load the cause of such
prejudices on the designers and makers of these systems" and suggest
that the discrimination must be attributed "to thoughtless,
ill-considered, or untested use of complicated systems, due to
ignorance or lack of understanding of how they really work, or have
been designed and built".

But could we say that, if a car has weak brakes and more people are
killed when it is involved in accidents than when other cars are
involved, this is not a direct responsibility of its designers?

What I mean is that one is responsible not only for the consequences
of one's actions but also for the consequences of one's omissions.
Maurizio


On 27/10/21 07:59, Humanist wrote:


Humanist Discussion Group, Vol. 35, No. 325.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org


--[1]------------------------------------------------------------------------
Date: 2021-10-26 17:42:42+00:00
From: Tim Smithers <tim.smithers@cantab.net>
Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious?

Dear Willard,

Here, some thoughts on

1 Systems as Layers,

2 Prejudice-Hunting,

then a

3 Postscript

It's a bit long, I'm afraid.


1 Systems as Layers

I don't think thinking of computing systems as being
many-layered gives us a good understanding of how these systems
really work.  I know we present computing systems, and many
other kinds of systems, as if they are made of layers.  It's a
good way to describe and explain their design and functioning,
but it is, I believe, a fiction; albeit a useful one.  In AI,
Allen Newell's 1982 "The Knowledge Level" is built upon a
story of computing system layers, and he uses this to arrive
at a useful concept of knowledge.  David Marr's theory of
visual processing was also built upon a story of levels, and
this usefully influenced a lot of research in Artificial
Vision and Neuroscience back then, but we now know this is too
simple a theory.

It's a while ago now, but I, with others, designed and built
some computing systems: hardware + operating system +
application code, and used these.  And, as usual, these were
documented, presented, and explained using a story of levels,
but my own understanding of these systems would be better
described as like a cloud of many connections, loads of them
criss-crossing the supposedly neatly separable levels.
Resolving a hardware issue might, for example, be made
possible by a [top-level] design change and, usually, a chain
of other needed changes through the connections to where the
hardware issue resides.  The chains of "this is like this
because" thus formed this cloud of connections, and you needed
to know and remember them, else you'd break something with the
next design change.  Yes, yes, I know, you're supposed to
encapsulate functionality, but functionality does not "talk
to" efficiency and usability, and these latter issues must be
addressed successfully, sometimes at the expense of nicely
packaged functionality.  Once we learn what works, when, and
where, all this tends to become manageable.  So we, designers,
don't worry too much about the boss asking for clear,
transparent, functional encapsulation.  We tell them they've
got this, by showing them a picture of all the "levels" in our
design, that keep things nicely separated and organised.

It's hard work to document a cloud of design decision
connections, and difficult for people to see what they're
looking at, and understand how things work.  So we don't try
to do this, usually.

The abstractions we use in making our layer stories are our
abstractions; they are not properties of, or somehow also
possessed by, the systems we design and build, I would say.  I
think it is a category mistake to attribute to real systems
abstractions we make and use in their design, construction,
and use, even when we have no other way of doing this
designing and building.  You only think you see these layers
in the systems we build because this is how we think about
them.  This does not mean this is how they really are.
Abstractions have no traction or force in the real world, no
matter how good they are for designing it and understanding
it.

A cloud-of-connections understanding came in very handy, I
discovered, back then, when we had to diagnose faults and
failures in the systems we designed and built, and that users
uncovered.


2 Prejudice-Hunting

So, I don't think your idea of going "down through the
abstraction layers," looking for prejudices, makes sense.

It is mostly difficult, sometimes very difficult, if not
impossible in practice, to anticipate the consequences of all
our design and construction decisions, especially in
complicated systems like computing systems.  Suggesting, as
you seem to do, that when we discover some kind of unjust,
unfair, unacceptable discrimination happening when our system
is used we can properly attribute this to some design
decision, at some "level," seems to me to presume a rather
simplistic idea of how these complicated systems work.

I also think it's unfair to load the cause of such prejudices
on the designers and makers of these systems.  Of course,
designers have important professional and moral obligations to
avoid bad outcomes from the use of their designs.  And those
designers who fail to do this should face the consequences.
But this does not cover, and, I think, cannot properly be made
to cover, thoughtless, ill-considered, or untested use of
complicated systems, due to ignorance or lack of understanding
of how they really work, or have been designed and built.  A
simple layers story is probably not going to be enough to
judge this kind of thing well, and, I think, we should not
expect it to.

Black-box use of any complicated system, without good real-use
testing and validation, is, I think, bound to lead to tears
and distress, sometimes at least.


3 Postscript

These are, as ever, just my thoughts and experiences.  I don't
expect others who have done similar things to think the same,
but I'd sure be interested in how others here do think about
these things.  How we humans relate to the things we design,
build, and use, is a part of the [Digital] Humanities, I'd
say.

I'm off to get my hard hat. It's made of many layers :)

Best regards,

Tim


Maurizio Lana
Dipartimento di Studi Umanistici
Università del Piemonte Orientale
piazza Roma 36 - 13100 Vercelli
tel. +39 347 7370925


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php