              Humanist Discussion Group, Vol. 35, No. 325.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Tim Smithers <tim.smithers@cantab.net>
           Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious? (176)

    [2]    From: Willard McCarty <willard.mccarty@mccarty.org.uk>
           Subject: affective computing (29)


--[1]------------------------------------------------------------------------
        Date: 2021-10-26 17:42:42+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious?

Dear Willard,

Here, some thoughts on

  1 Systems as Layers,

  2 Prejudice-Hunting,

then a

  3 Postscript

It's a bit long, I'm afraid.


1 Systems as Layers

I don't think thinking of computing systems as being
many-layered gives us a good understanding of how these
systems really work.  I know we present computing systems, and
many other kinds of systems, as if they are made of layers.
It's a good way to describe and explain their design and
functioning, but it is, I believe, a fiction, albeit a useful
one.  In AI, Allen Newell's 1982 "The Knowledge Level" is
built upon a story of computing system layers, and he uses
this to arrive at a useful concept of knowledge.  David Marr's
theory of visual processing was also built upon a story of
levels, and this usefully influenced a lot of research in
Artificial Vision and Neuroscience back then, but we now know
this is too simple a theory.

It's a while ago now, but I, with others, designed and built
some computing systems: hardware + operating system +
application code, and used these.  And, as usual, these were
documented, presented, and explained using a story of levels,
but my own understanding of these systems would be better
described as like a cloud of many connections, loads of them
criss-crossing the supposedly neatly separable levels.
Resolving a hardware issue might, for example, be made
possible by a [top-level] design change, and, usually, a chain
of other needed changes through the connections to where the
hardware issue resides.  The chains of "this is like this
because" thus formed this cloud of connections, and you needed
to know and remember them, else you'd break something with the
next design change.  Yes, yes, I know, you're supposed to
encapsulate functionality, but functionality does not "talk
to" efficiency and usability, and these latter issues must be
addressed successfully, sometimes at the expense of nicely
packaged functionality.  Once we learn what works, when, and
where, all this tends to become manageable.  So we, designers,
don't worry too much about the boss asking for clear,
transparent, functional encapsulation.  We tell them they've
got this by showing them a picture of all the "levels" in our
design, which keep things nicely separated and organised.
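
To make this a bit more concrete, here is a small made-up sketch, in
C, of the kind of cross-layer connection I mean.  The names and
numbers are invented for illustration, not taken from any real system
we built.  The "application-level" routine only behaves correctly
because of what the hardware and firmware underneath it happen to do,
so a change at either end ripples right across the supposed layer
boundary:

    /* Hypothetical example of a cross-layer connection. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define CACHE_LINE 64   /* hardware detail, baked into "application" code */

    /* Sensor record sized to fit exactly one cache line, so a lower
       "layer" (the driver) can fetch it in a single burst. */
    typedef struct {
        uint32_t id;        /* written by firmware, little-endian on the wire */
        uint32_t timestamp;
        uint8_t  payload[CACHE_LINE - 2 * sizeof(uint32_t)];
    } sensor_record_t;

    /* "Application-level" routine: reads the id straight out of the
       raw bytes.  It is only correct while the host CPU is
       little-endian, like the firmware -- a hidden cross-layer link. */
    uint32_t record_id(const uint8_t *raw)
    {
        uint32_t id;
        memcpy(&id, raw, sizeof id);   /* note: no byte-swap */
        return id;
    }

    int main(void)
    {
        uint8_t raw[sizeof(sensor_record_t)] = { 0x2A, 0x00, 0x00, 0x00 };
        /* prints 42, but only on little-endian hosts */
        printf("id = %u\n", record_id(raw));
        return 0;
    }

Nothing in the layer diagram records that missing byte-swap; it lives
only in the cloud of "this is like this because" connections, and you
had better remember it before the next design change.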

It's hard work to document a cloud of design decision
connections, and difficult for people to see what they're
looking at, and understand how things work.  So we don't try
to do this, usually.

The abstractions we use in making our layer stories are our
abstractions; they are not properties of, or somehow also
possessed by, the systems we design and build, I would say.  I
think it is a category mistake to attribute to real systems
abstractions we make and use in their design, construction,
and use, even when we have no other way of doing this
designing and building.  You only think you see these layers
in the systems we build because this is how we think about
them.  This does not mean this is how they really are.
Abstractions have no traction or force in the real world, no
matter how good they are for designing it and understanding
it.

A cloud-of-connections understanding came in very handy, I
discovered, back then, when we had to diagnose the faults and
failures that users uncovered in the systems we designed and
built.


2 Prejudice-Hunting

So, I don't think your idea of going "down through the
abstraction layers," looking for prejudices, makes sense.

It is mostly difficult, sometimes very difficult, if not
impossible in practice, to anticipate the consequences of all
our design and construction decisions, especially in
complicated systems like computing systems.  Suggesting, as
you seem to do, that when we discover some kind of unjust,
unfair, or unacceptable discrimination happening when our
system is used, we can properly attribute this to some design
decision, at some "level," seems to me to presume a rather
simplistic idea of how these complicated systems work.

I also think it's unfair to load the cause of such prejudices
on the designers and makers of these systems.  Of course,
designers have important professional and moral obligations to
avoid bad outcomes from the use of their designs.  And those
designers who fail to do this should face the consequences.
But this does not cover, and, I think, cannot properly be made
to cover, thoughtless or ill-considered or untested use of
complicated systems, due to ignorance or lack of understanding
of how they really work, or have been designed and built.  A
simple layers story is probably not going to be enough to
judge this kind of thing well, and, I think, we should not
expect it to.

Black box use of any complicated system, without good real-use
testing and validation, is, I think, bound to lead to tears
and distress, sometimes at least.


3 Postscript

These are, as ever, just my thoughts and experiences.  I don't
expect others who have done similar things to think the same,
but I'd sure be interested in how others here do think about
these things.  How we humans relate to the things we design,
build, and use, is a part of the [Digital] Humanities, I'd
say.

I'm off to get my hard hat. It's made of many layers :)

Best regards,

Tim




> On 21 Oct 2021, at 08:11, Humanist <humanist@dhhumanist.org> wrote:
>
>                  Humanist Discussion Group, Vol. 35, No. 312.
>        Department of Digital Humanities, University of Cologne
>                               Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2021-10-20 06:19:16+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: a digital 'unconscious'?
>
> We know that a computing system, hardware and operating system software,
> is many-layered, from the hardware circuitry, firmware and the many
> abstraction layers up to the user interface.
>
> For purposes of argument, let's call what the user sees and can know
> from a running machine its 'consciousness', i.e. that of which we can be
> consciously aware. Let's also call everything that the user cannot know
> directly the machine's 'unconscious'. In the former, we can easily spot
> design choices, perhaps construable as prejudices, e.g. in favour of
> right-handed people, or those who demand bright colours and active
> movement in the interface. In the latter, let us say in the role of a
> systems psychoanalyst, I assume we can find unhealthy quirks, a.k.a.
> prejudices.
>
> Here is my question. In principle how deep, down through the abstraction
> layers, can there be such quirks? Prejudice-hunting is these days in
> full swing, so I expect this question may have been considered at
> length. But critically speaking, under what conditions, at how deep a
> level can choices recognisable as cultural biases be found?
>
> Comments?
>
> Yours,
> WM
>
>
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk

--[2]------------------------------------------------------------------------
        Date: 2021-10-26 12:14:54+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: affective computing

I've only recently become aware of the interesting and, more
importantly, consequential work in the (relatively) new field of
affective computing, thanks to a fine interview with Rosalind Picard (MIT)
by Lex Fridman on YouTube, "Rosalind Picard: Affective Computing,
Emotion, Privacy, and Health", at
https://www.youtube.com/watch?v=kq0VO1FqE6I -- which I strongly
recommend. Her 1997 book, Affective Computing (MIT Press), and articles
such as "Affective Computing: From Laughter to IEEE" (IEEE Transactions
on Affective Computing) are very much worth reading.

All this, you may already have guessed, is connected with my recent
probing for work on AI and psychoanalysis. You may be aware that
clinical psychology and computing have a long history, dating back to
the early 1960s, but much of it seems to me severely shackled by
dependence on models of psychopathological conditions, such as paranoia,
and the text-analytic techniques of the time. The more recent technical
work based on 'machine learning' gives the construction of
intersubjective dialogue between machine and human a much, much bigger
world to explore.

Comments?

Yours,
WM
--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews;  Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php