              Humanist Discussion Group, Vol. 36, No. 59.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2022-06-09 07:43:49+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 36.52: AI: a shifting moral agent?

Dear Willard,

Starting from Burrow's observation that, in Cavell's writing,
"The world of art ...  is ethically charged", you continue and
ask

   "...  Remembering that 'artificial' means literally 'made
   by art', what does this say about artificial intelligence?
   If the moral agent of a painting, say, is the painter, then
   who is the moral agent of an automaton?"

First, as you make clear, 'art' here means the art of the
maker, and thus the art in the making, not the art sold in the
auction room or hung in a gallery.  Then, I see the answer as
being quite clear: the moral agent of the [AI] automaton is
the maker of the automaton.

Agency is the capacity to bring about change in the world.
Moral agency, I submit, is the capacity to bring about morally
acceptable change in the world, which, I would further
suggest, requires a capacity to take responsibility for the
consequences (intended and unintended [*]) of the changes
made.

This means, I think, bringing about change that is known, by
the change-making agent, to be at least neutral for others, if
not beneficial to them in some, possibly indirect, way.  This
necessarily requires, it seems to me, that moral agents act
with sufficient awareness of, knowledge of, and understanding
of, these others; all of them.  (The longer story, which I'll
not tell here, is that this requires autonomy -- self-law
making -- not just automation -- self-law following, which is
what automatons do.)

This awareness, knowledge, and understanding of others is
something some of today's AI builders don't display much
evidence of, but it also explains why the automatons they
build cannot be moral agents: [AI] automatons have no
awareness of, knowledge of, nor understanding of, other
agents, in particular, of humans.  Agency, and, in particular,
the agency of others, is not modelled by these automatons;
only things happening are.  Which is, of course, inevitable.
Evidence of things happening can be found in, and modelled
from, data of what is happening, but this data does not
contain anything recoverable about any of the agents that may
be making happen what is detected as happening.  That, I would
say, takes being in the world in a way like we humans are; a
way of being that's rather different from the way of being of
current AI automatons, especially those programmed by data of
what's happened, often massive amounts of it -- typically
misnamed Machine Learning.

So, I differ somewhat from what Maurizio offers.  Users do need to
be well trained to make good use of the [AI] tools they use,
and are necessarily responsible, I would say, for the outcomes
and consequences of their tool use.  But this does not, I
think, make them necessarily moral agents in the making of
the tools they use.

Best regards,

Tim

[*] Colin Burrow, in the LRB piece you cite, says something
interesting about intentions which I think is relevant to this
discussion, near the end of paragraph five.

     "...  the fact that we say 'I didn't intend to give
     offence' gives us an understanding of what 'intending' in
     normal usage means.  (We normally use it to reduce our
     responsibility for something unintended, hence we might
     infer that 'intending' is not a distinct psychological
     manoeuvre performed in advance of speech or writing [or,
     I would add here, acting] but, usually, a retrospective
     construction of one's own or another's behaviour, often
     as a way of explaining what went wrong with it.)  ..."

A kind of intending no AI automaton can yet do, I think.



> On 05 Jun 2022, at 07:46, Humanist <humanist@dhhumanist.org> wrote:
>
>        Date: 2022-06-05 05:36:00+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: a shifting moral agent?
>
> In his review of a collection of Stanley Cavell's essays, Colin Burrow
> notes that in Cavell's writing "The world of art, in particular, is
> ethically charged", then quotes his author: "The creation of art, being
> human conduct which affects others, has the commitments any conduct
> has."* Remembering that 'artificial' means literally 'made by art', what
> does this say about artificial intelligence? If the moral agent of a
> painting, say, is the painter, then who is the moral agent of an automaton?
>
> Developing some clarity for this question (revising it as need be) would
> be worthwhile, don't you think?
>
> Any comments?
>
> Yours,
> WM
> --
> *Colin Burrow, "Paraphrase me if you dare". Rev. of Stanley Cavell, Here and
> There: Sites of Philosophy, ed. Nancy Bauer, Alice Crary and Sandra Laugier.
> London Review of Books 44.11 (9 June 2022).
> <https://www.lrb.co.uk/the-paper/v44/n11/colin-burrow/paraphrase-me-if-you-dare>
>
>
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php