              Humanist Discussion Group, Vol. 35, No. 336.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2021-10-29 09:31:45+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 35.327: psychoanalysis of a digital unconscious &c.

Dear Maurizio,

Thank you for your questions after my post on "psychoanalysis
of a digital unconscious."  I'll respond with how I see
things.

I'll start with cars with poor brakes, and add that the
question of where responsibility lies in this kind of thing
is, I think, a can of worms.

But, I have no doubt that all designers and makers of things
used by others have a professional responsibility, and/or a
moral obligation, to be as sure as they reasonably can be that
what they design and build will work as intended, and
_only_ as intended [notice!], in the conditions and situations
it is designed for, and in ways that are safe, effective,
efficient, and usable for those who will make use of it.

So, yes, the designers and builders of a car that turns out to
have inadequate brakes in the use conditions it is designed
for, which results in fatal accidents, are responsible for
these deaths.  That's how I see it, at least.  Not everybody
does.  I have heard different positions on this: that the
buyer of the car takes all responsibility, for example, from
the idea that Governments shouldn't be babysitters.

Now, say the brake design and manufacture and assembly is
fine, and has been shown to be adequate by industry standard
testing, but it turns out that, in intended use conditions,
these brakes wear more quickly than the brakes in other cars.
What if a driver fails to realise this, and continues to drive
the car with worn, and thus weakened, brakes, and this results
in a fatal accident?  Who is responsible for the death in this
case?  When you bought your last car -- assuming you have
bought a car -- did you ask how long the brakes last on the
model you chose?  Maybe not.  It's an unusual question to ask
when selecting a car, I understand, having talked to people
who sell cars.

In this situation, which is, I think, nearer to the realities
we see than straight-off poorly designed brakes, attributing
responsibility becomes more difficult.  Is it
the designers and makers, who developed brakes that wear less
well than those in other cars?  Is it the driver for not
keeping their car in good working order?  Is it the car sales
company, for not telling the person who bought the car that
the brakes don't last so long on this model?  Then, what if,
as is usually the case, this faster wearing issue only becomes
evident over time, perhaps years?  Who is responsible for
collecting the use data needed to detect this faster wearing?
Who then becomes responsible for making drivers aware of this?
And, who is then responsible for properly dealing with it?

In the civil air transport industry, where similar things
happen, typically with more fatalities, we have official
accident investigations by independent investigators, whose
job is to try to work out what happened, and what didn't
happen [!], why, and whose fault it was.  In the case of car
accidents we don't do this, not for all accidents, at least.

Mostly I suspect we are in agreement on this, but I'd be
interested to know how you, and others here, see this,
particularly in the expanded scenarios I outline.

This is already long, but on to the unanticipated "positive"
discriminations you ask about first.

First, I have not seen any reports of things like 'giving life
insurance to ill people' or 'giving conditional release to
people typically discriminated against', in the use of
decision support systems built using today's Machine Learning
techniques.  But, I suspect this is because nobody looks for
these cases, not because they don't, or can't happen.  We
should also be clear here, these systems are supposed to be
decision support systems, not decision making systems, so if
we find bias or prejudice in the decision outcomes, when these
systems are used, it's not just the system that should be
questioned.  It's the people or person who used the system
too!  Still, because of this ever-present human component in
this kind of decision making, I would guess there probably are
some cases of (inadvertent) "positive" discrimination, though
not many.  Humans are, as we know, biased and prejudiced in
much of our thinking and decision making, and not only in bad
and negative ways.

Second, there's nothing new in the idea that if you use biased
training data, your system will display this bias in its use.
It can't do otherwise.  So, for the cases where this happens,
and it's because the designers and builders failed to check
the training data they use for biases, the designers and
builders are at fault; serious fault.  Not knowing how to
check training data for biases, or finding this difficult and
costly to do, are not excuses.  (Though I have seen these
kinds of excuse attempted.)
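
To make the kind of check I mean concrete, here is a minimal
sketch, in Python (my choice of language, not anything a
particular design team necessarily uses).  It assumes a tabular
training set in which each record carries a group attribute and
a binary outcome label; the field names, the disparity
threshold, and the tiny made-up data are illustrative
assumptions on my part, not a standard tool or method.

    # A minimal, illustrative check of per-group outcome rates in a
    # training set, done before any model is built.  Field names and
    # the disparity threshold are assumptions for the example.

    from collections import defaultdict

    def outcome_rates_by_group(records, group_key="group",
                               label_key="label"):
        """Return the fraction of positive labels for each group."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for row in records:
            g = row[group_key]
            totals[g] += 1
            positives[g] += 1 if row[label_key] else 0
        return {g: positives[g] / totals[g] for g in totals}

    def flag_disparities(rates, max_ratio=1.25):
        """Flag pairs of groups whose positive-outcome rates differ
        by more than the given ratio -- a crude prompt for human
        review, not a verdict on fairness."""
        flagged = []
        groups = list(rates)
        for i, a in enumerate(groups):
            for b in groups[i + 1:]:
                low, high = sorted((rates[a], rates[b]))
                if low == 0 or high / low > max_ratio:
                    flagged.append((a, b, rates[a], rates[b]))
        return flagged

    # Example use with made-up data:
    data = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1},
        {"group": "A", "label": 0}, {"group": "B", "label": 0},
        {"group": "B", "label": 0}, {"group": "B", "label": 1},
    ]
    rates = outcome_rates_by_group(data)
    print(rates)                    # {'A': 0.666..., 'B': 0.333...}
    print(flag_disparities(rates))  # [('A', 'B', 0.666..., 0.333...)]

Nothing in such a check says what counts as an acceptable
disparity, or which groups matter; those remain human
judgements, which is rather the point.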

Third, what you point to is a more subtle question, as I see it,
and an interesting one.  Let's suppose we have designed and
built a decision support system, to be used by others in some
real world decision making task, and that we have taken good,
demonstrable, and well documented, care, to be sure there are
no biases or prejudices, negative or positive, in the decision
support workings of our system; is it still possible that real
use will result in biased or prejudicial decisions, negative
and/or positive?  Yes, it is, and, in my view, no amount of
good designing and building will avoid this possibility,
simply because designers cannot anticipate all future real
world use; they cannot know enough about all these future
cases, situations, conditions, and the people who will use
their system.  And, it's not reasonable, in my view, to expect
designers to do this.

So, we need to watch what happens when we use these kinds of
systems, especially in real world settings.  This is not easy
to do.  Most users don't know how to do this, and it wouldn't
be reasonable to expect or require them to do this.  It still
needs doing though.  So who will do this?

This is, I think, one of the more central unasked questions in
the expanding use of so-called AI in our human doings and
goings on.  It isn't as simple as all AI is bad, and going to
kill us humans off.  That's as silly as it sounds.  Nor is it
that AI is just good for us.  That too is silly.

One part of responding to this question is for the designers
and builders of these artificially clever systems -- sorry, I
just can't call them "intelligent" when they evidently ain't --
to design and build in the functionalities needed to support
sufficient, transparent, and effective monitoring of their
use and performance in real world settings.  Once again, in
the aircraft industry we do this, and have done for a long
time.  All aircraft used in civil aviation have Black Box
flight data recorders and cockpit voice (i.e. sound) recording
boxes, and these are designed to be used to support the work of
investigators when accidents happen, and are built to be robust
enough, and easily detectable enough, that they (mostly)
survive accidents and can be recovered afterwards.  In the case
of the artificially clever systems we are using more of, we
need, I would humbly suggest, something similar, but not just
for the case of accidents.  We need these to help us look at,
and understand, what happens when we use these systems in our
real world activities all the time.  More of this, together
with more reporting, and public discussion, of this kind of
monitoring would, I think, help more people have a better idea
of how these systems work, why they work the way they do, what
we can reasonably expect from them, and what we should and
shouldn't do with them.
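
To give a flavour of how modest a first step could be, here is
a small sketch, again in Python, of a "flight recorder" wrapped
around a decision support system: every recommendation the
system makes, and the human decision that follows it, gets
appended to a log that can later be reviewed.  The function and
field names, and the simple file-based log, are illustrative
assumptions on my part, not a description of any existing
system.

    # An illustrative "flight recorder" for a decision support
    # system: each recommendation, and the human decision that
    # follows it, is appended to a log file as one JSON line, so
    # later review of real-world use is at least possible.
    # All names here are assumptions.

    import json
    import uuid
    from datetime import datetime, timezone

    LOG_PATH = "decision_log.jsonl"

    def record(event_type, payload, case_id):
        entry = {
            "case_id": case_id,
            "event": event_type,  # "recommendation" or "human_decision"
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "payload": payload,
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

    def support_decision(case_features, model):
        """Wrap a model call so its inputs and outputs are logged."""
        case_id = str(uuid.uuid4())
        recommendation = model(case_features)  # a suggestion only
        record("recommendation",
               {"features": case_features,
                "recommendation": recommendation},
               case_id)
        return case_id, recommendation

    def log_human_decision(case_id, decision, followed_recommendation):
        """The person using the system records what they decided."""
        record("human_decision",
               {"decision": decision,
                "followed_recommendation": followed_recommendation},
               case_id)

    # Example use with a stand-in model:
    def toy_model(features):
        return "grant" if features.get("score", 0) > 0.5 else "refer"

    cid, rec = support_decision({"score": 0.7}, toy_model)
    log_human_decision(cid, decision="grant",
                       followed_recommendation=True)

The interesting work, of course, is not the logging itself but
the regular, independent, and sufficiently public review of
what such logs show.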

I've gone on lots, and I'm unsure I've responded well to your
questions.  Still, do ask more, if you'd care to.  And, to
everybody here, please don't let the length of this post make
you feel it's not your conversation.  It is.  It'd be really
good to hear from many more here.  This is, I think, a
Humanities topic, and most often, a Digital Humanities topic.

Thank you again for your questions, Maurizio.

Best regards,

Tim


> On 28 Oct 2021, at 09:02, Humanist <humanist@dhhumanist.org> wrote:
>
>                  Humanist Discussion Group, Vol. 35, No. 327.
>        Department of Digital Humanities, University of Cologne
>                               Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>

<snip>

>    [3]    From: maurizio lana <maurizio.lana@uniupo.it>
>           Subject: Re: [Humanist] 35.325: psychoanalysis of a digital
>           unconscious, design of systems and affective computing (188)

<snip>

>
> --[3]------------------------------------------------------------------------
>        Date: 2021-10-27 06:45:41+00:00
>        From: maurizio lana <maurizio.lana@uniupo.it>
>        Subject: Re: [Humanist] 35.325: psychoanalysis of a digital
>        unconscious, design of systems and affective computing
>
> hi Tim
> interesting message.
> when i read
>
> It is mostly difficult, sometimes, very difficult, if not
> impossible in practice, to anticipate the consequences of all
> our design and construction decisions, especially in
> complicated systems like computing systems.  Suggesting, as
> you seem to do, that when we discover some kind of unjust,
> unfair, unacceptable, discrimination happening when our system
> is used, that we can properly attribute this to some design
> decision, at some "level," seems to me to presume a rather
> simplistic idea of how these complicated systems work.
>
> and particularly the words "when we discover some kind of unjust,
> unfair, unacceptable, discrimination happening when our system is
> used", i wonder if it ever happened that a system produced some some
> kind of "positive" discrimination: let's say that it gave life
> insurance also to ill persons, or that it gave conditional release
> to black people more that to white people and so on. because what we
> usually see in the discrimination produced by AI software system is
> that it corresponds to the worse discrimination the humans do, never
> to its contrary.
>
> you write that you "also think it's unfair to load the cause of such
> prejudices on the designers and makers of these systems" and suggest
> that the discrimination must be attributed "to thoughtless or
> ill-considered or untested, use of complicated systems, due to
> ignorance or lack of understanding of how they really work, or have
> been designed and built".
>
> but could we say that if a car has weak brakes and when involved in
> accidents more people are killed than when other cars are involved,
> this is not a direct responsibility of its designers?
>
> what i mean is that one is responsible not only for the consequences
> of one's actions but also for the consequences of one's omissions.
> Maurizio
>

<snip>


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php