Humanist Discussion Group, Vol. 36, No. 63.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org


Date: 2022-06-13 13:07:43+00:00
From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
Subject: Re: [Humanist] 36.59: AI: a shifting moral agent

A thought about “making” and “made by art” in computational contexts.

When it comes to poiesis, the distinction between “author” and “user” (say,
scholar, reader, audience, etc.) is not as transparent as it is assumed to
be here. Everyone involved in the materials being exchanged in poiesis is a
maker -- that is the horizon of the actions we track in composition and
reception histories. Even “the original author” should not be thought a
godlike maker ex nihilo. Those histories expose moments/agents who have been
more or less authoritative. Both histories shapeshift over time because
agencies (intentionalities) carry on.

In that conceptual framework, users of computational tools they have not
themselves had a hand in making -- tools they may therefore have a more or
less diminished capacity to understand in certain crucial respects --
nevertheless have a hand in the remaking of those tools (their use).

From that to this particular thought: between the 1980s and today a set of
tools has become the institutional standard for modeling and representing
poietic works. Because the tools were designed for informational, not
poietic, works -- i.e., for marking, extracting, and organizing certain
specified, self-identical conceptual entities -- they radically fail to
achieve what the far more flexible systems of oral and paper/print
machineries are capable of.

And one further thought. In using those (oral and textual) systems it may or
may not be advantageous to have an expert understanding of how they work.
Their radical maturity ensures that they can be called upon by anyone with a
determination to use them.

The moral: the tools are apt for expressive and transactional purposes, but
-- so far -- not nearly so apt for reflexive purposes.

Also sprach Zarathustra.

Best,
Jerry

From: Humanist <humanist@dhhumanist.org>
Date: Monday, June 13, 2022 at 4:29 AM
To: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
Subject: [Humanist] 36.59: AI: a shifting moral agent

Humanist Discussion Group, Vol. 36, No. 59.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org


Date: 2022-06-09 07:43:49+00:00
From: Tim Smithers <tim.smithers@cantab.net>
Subject: Re: [Humanist] 36.52: AI: a shifting moral agent?

Dear Willard,

Starting from Cavell's "The world of art ... is ethically charged", you
continue and ask

  "... Remembering that 'artificial' means literally 'made by art', what
  does this say about artificial intelligence? If the moral agent of a
  painting, say, is the painter, then who is the moral agent of an
  automaton?"

First, as you make clear, 'art' here means the art of the maker, and thus
the art in the making, not the art sold in the auction room or hung in a
gallery.

Then, I see the answer as being quite clear: the moral agent of the [AI]
automaton is the maker of the automaton.

Agency is the capacity to bring about change in the world. Moral agency, I
submit, is the capacity to bring about morally acceptable change in the
world, which, I would further suggest, requires a capacity to take
responsibility for the consequences (intended and unintended [*]) of the
changes made.
This means, I think, bringing about change that is known, by the
change-making agent, to be at least neutral for others, if not beneficial to
them in some, possibly indirect, way. This necessarily requires, it seems to
me, that moral agents act with sufficient awareness of, knowledge of, and
understanding of, these others; all of them. (The longer story, which I'll
not tell here, is that this requires autonomy -- self law making -- not just
automation -- self law following, which is what automatons do.)

This awareness, knowledge, and understanding of others is something some of
today's AI builders don't display much evidence of, and it also explains why
the automatons they build cannot be moral agents: [AI] automatons have no
awareness of, knowledge of, or understanding of, other agents, in
particular, of humans.

Agency, and, in particular, the agency of others, is not modelled by these
automatons; only things happening are. Which is, of course, inevitable.
Evidence of things happening can be found in, and modelled from, data of
what is happening, but this data does not contain anything recoverable about
any of the agents that may be making happen what is detected as happening.
That, I would say, takes being in the world in a way like we humans are; a
way of being that's rather different from the way of being of current AI
automatons, especially those programmed by data of what's happened, often
massive amounts of it -- typically misnamed Machine Learning.

So, I differ somewhat from what Maurizio offers. Users do need to be well
trained to make good use of the [AI] tools they use, and are necessarily
responsible, I would say, for the outcomes and consequences of their tool
use. But this does not, I think, necessarily make them moral agents in the
making of the tools they use.

Best regards,

Tim

[*] Colin Burrow, in the LRB piece you cite, says something interesting
about intentions which I think is relevant to this discussion, near the end
of paragraph five.

  "... the fact that we say 'I didn't intend to give offence' gives us an
  understanding of what 'intending' in normal usage means. (We normally use
  it to reduce our responsibility for something unintended, hence we might
  infer that 'intending' is not a distinct psychological manoeuvre performed
  in advance of speech or writing [or, I would add here, acting] but,
  usually, a retrospective construction of one's own or another's behaviour,
  often as a way of explaining what went wrong with it.) ..."

A kind of intending no AI automaton can yet do, I think.


> On 05 Jun 2022, at 07:46, Humanist <humanist@dhhumanist.org> wrote:
>
>
> Humanist Discussion Group, Vol. 36, No. 52.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
>
> Date: 2022-06-05 05:36:00+00:00
> From: Willard McCarty <willard.mccarty@mccarty.org.uk>
> Subject: a shifting moral agent?
>
> In his review of a collection of Stanley Cavell's essays, Colin Burrow
> notes that in Cavell's writing "The world of art, in particular, is
> ethically charged", then quotes his author: "The creation of art, being
> human conduct which affects others, has the commitments any conduct
> has."* Remembering that 'artificial' means literally 'made by art', what
> does this say about artificial intelligence? If the moral agent of a
> painting, say, is the painter, then who is the moral agent of an
> automaton?
>
> Developing some clarity for this question (revising it as need be) would
> be worthwhile, don't you think?
>
> Any comments?
>
> Yours,
> WM
> --
> *Colin Burrow, "Paraphrase me if you dare". Rev. of Stanley Cavell, Here
> and There: Sites of Philosophy, ed. Nancy Bauer, Alice Crary and Sandra
> Laugier. London Review of Books 44.11 (9 June 2022).
> <https://www.lrb.co.uk/the-paper/v44/n11/colin-burrow/paraphrase-me-if-you-dare>
>
>
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews; Humanist
> www.mccarty.org.uk

_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php