              Humanist Discussion Group, Vol. 36, No. 490.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-03-28 23:50:00+00:00
        From: Michael Falk <michaelgfalk@gmail.com>
        Subject: Re: [Humanist] 36.487: agency & intelligence

Hi James,

What I write below is inspired largely by the work of R. Stuart Geiger and Nick
Seaver, who have both written beautiful articles on the ethnography of bots and
algorithms.

From one Romanticist to another, I would also suggest revisiting Goethe’s “The
Sorcerer’s Apprentice”! The apprentice intends to make the brooms do his work –
does he intend all the other consequences? Is he the ‘author’ of the brooms’
actions? When we interpret the brooms’ actions, does it matter that we don’t
know how to cast the spell he uses? Or can we see perfectly well what is going
on by interpreting the interaction of human and artificial agents in the
situation?

The position you articulate in your post might be called the ‘sockpuppet’ theory
of bot authorship. For a bot to come into being, a human programmer has to write
its source code. The source code expresses the programmer’s intentions, and the
bot blindly executes those intentions. Thus the bot is just an expression of the
programmer.

The problem with this model is that a program is not like a literary text. A
literary text is *relatively* inert: it doesn’t change (much) unless the author
changes it. But a program’s behaviour changes when its inputs change, and the
programmer might have very little knowledge of those inputs. When Derek Ramsey
wrote Rambot, did he know what was contained in those 33,000 census records?
More extremely, could the ‘authors’ of ChatGPT have any idea what is contained
in the billions of words of text on which the model was trained? Can they be held
responsible for text generated by the model? In a very real sense, they can’t –
it is impossible for them to manually alter the parameters of the model in order
to prevent certain outputs from appearing. An author can amend their text if it
is defamatory, inaccurate or offensive. Now of course, OpenAI could hire content
moderators to check ChatGPT’s outputs, or design another system to sanitise the
outputs, but I think we’re getting a long way from authorship as the expression
of human intention…
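
To make the contrast concrete, here is a minimal Python sketch of a
Rambot-style generator. The file name, field names and template are invented
for illustration; this is not Rambot’s actual code.

    import csv

    # The programmer authors the TEMPLATE and the loop -- but not the
    # records that fill the template. (Hypothetical field names.)
    TEMPLATE = (
        "{name} is a city in {county} County, {state}, United States. "
        "As of the 2000 census, the population was {population}."
    )

    def make_article(record):
        """Render one encyclopedia stub from one census record."""
        return TEMPLATE.format(**record)

    def main():
        # Hypothetical input file standing in for the census data.
        with open("census_records.csv", newline="", encoding="utf-8") as f:
            for record in csv.DictReader(f):
                # Whatever the record contains -- including errors the
                # programmer never read -- flows straight into the output.
                print(make_article(record))

    if __name__ == "__main__":
        main()

The programmer fixes the form of every sentence but not its content: a
miscount in the census file passes straight into the ‘authored’ article, with
no intention on anyone’s part.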

I could go on, but I’d just be rehashing points that were made better by Goethe,
Geiger and Seaver.

Geiger, R. Stuart. “Beyond Opening up the Black Box: Investigating the Role of
Algorithmic Systems in Wikipedian Organizational Culture.” Big Data & Society
4, no. 2 (2017). https://doi.org/10.1177/2053951717730735.
Seaver, Nick. “Algorithms as Culture: Some Tactics for the Ethnography of
Algorithmic Systems.” Big Data & Society 4, no. 2 (2017).
https://doi.org/10.1177/2053951717738104.
Goethe, Johann Wolfgang von. “Der Zauberlehrling” (1798).
https://de.wikisource.org/wiki/Der_Zauberlehrling_(1798)

Cheers,

Michael


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php