              Humanist Discussion Group, Vol. 36, No. 501.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-03-30 15:25:47+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 36.495: agency & intelligence

Many thanks to Michael for his response, and it's always a privilege to
hear from JM. And I always love a post with links, especially links to
Goethe. It's been a while since I've read the Sorcerer's Apprentice, and I
look forward to reading it again, especially since I'm reading in
18th/19th-century German criticism, philosophy, and musical writings.

I love the paragraph quoted below because it provides a great occasion for
thinking about the subject.

I would agree that a literary text (really, any text of any kind, visual
or written) does not change as a printed, painted, or filmed artefact.
These are fixed objects. The same words physically appear on the page
every time we open the book; the same progression of scenes and shots
appears in every film. But these fixed objects also generate "output" in
the form of readers' responses, which are in some sense always acts of
interpretation, and these outputs can vary widely. Some are immediate
emotional reactions that the reader or viewer can't describe in words, at
least not at first. Others are sophisticated interpretations that can be
put into words, and that perhaps don't fully exist until they are. But all
of these are "outputs," in a sense, generated by the text when it enters a
human mind. The results are indeed dynamic, not static, even from one hour
to the next. We have both no doubt often had the experience of seeing
different things in the same text from one reading to the next, even
though the same words are always there.

So I would say that the program doesn't change with its inputs. The
program's outputs change with its inputs, but what the program is able to
do with those inputs is, I think, more fixed and limited than what the
human mind can do. I agree that the programmer has little or no knowledge
of these inputs, and only has varying degrees of knowledge of the possible
outputs, depending upon the sophistication of the program. Some programs,
of course, are very valuable precisely because they produce fixed,
limited, and predictable outputs.
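
To make the distinction concrete, here is a minimal sketch in Python (my
own illustration, not anything drawn from the posts under discussion). The
program text below never changes; only its outputs vary with its inputs,
and the range of what it can do was fixed the moment it was written:

    # A fixed program: its source never changes, though its outputs
    # vary with its inputs. What it *can* do was bounded in advance
    # by its author.
    def respond(text: str) -> str:
        # The only "interpretation" this program can perform was
        # decided at authorship time: counting words, nothing more.
        return f"Your input contained {len(text.split())} words."

    print(respond("What a piece of work is a man"))  # 8 words
    print(respond("To be, or not to be"))            # 6 words

However varied the inputs, this program can never do anything its author
did not already write into it, which is the sense in which I mean "fixed
and limited."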

But have we really eliminated intention? In the case of ChatGPT,
programmers intended to produce a program (or group of programs) that
could produce different kinds of outputs. They set its parameters and
trained it on a corpus, a process that included providing it with true and
false results. The program didn't create itself. ChatGPT generates text
because it was made to generate text, so programmers intentionally
developed a program that could do so. Of course, we both agree they can't
predict what kind of text will be generated for any given input. But they
built the thing that does that work. That is their intent.
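
To illustrate how intention survives unpredictability, here is a toy text
generator in Python (again a sketch of my own, vastly simpler than
ChatGPT, but the point carries). Every line below is a deliberate,
authored decision, even though the programmer neither writes nor can fully
predict the sentences it produces:

    import random
    from collections import defaultdict

    # A toy text generator, "trained" on a tiny corpus. The programmer
    # never authors its output, but every design choice here -- what
    # counts as a word, which statistics to collect, how to sample --
    # is intentional.
    corpus = ("the broom carries water and the water rises "
              "and the broom carries more water")

    # Collect bigram statistics from the corpus (the training step).
    successors = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        successors[prev].append(nxt)

    # Generate text the programmer did not write; the result varies
    # from run to run.
    word = "the"
    output = [word]
    for _ in range(10):
        if word not in successors:
            break
        word = random.choice(successors[word])
        output.append(word)
    print(" ".join(output))

The output varies from run to run and the programmer never writes it, yet
the machinery that produces it is intended in every detail. That, I think,
is where responsibility attaches.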

Do readers intend to react to a text ahead of time, or do they just react?
Are they responsible for their reactions, or is the text, or the author?
Or are their personal histories and experiences?

All of these questions lead up to a sense of responsibility, legal or
ethical, but by themselves they don't give us answers.

I will say that if a company produces a program, makes it publicly
accessible, and continues to update and release new versions of that
program, then the company is responsible, legally and ethically, for its
product, even if it can't predict all outcomes. It's one thing to build a
thing to see how it works. It's another to set it loose on the public
without knowing what's going to happen. It's like animating a broom,
telling it to carry water, and then falling asleep on the job. Ha, no:
that person is absolutely responsible for what happens when he initiates a
process he doesn't understand and can't control, even if he can't predict
those outcomes.

If we really want to think like engineers, we should remember that
engineering is all about controlling outcomes.

Yes, they are responsible.

Jim R

On Thu, Mar 30, 2023 at 1:35 AM Humanist <humanist@dhhumanist.org> wrote:

>
> The problem with this model is that a program is not like a literary
> text. A literary text is *relatively* inert: it doesn’t change (much)
> unless the author changes it. But a program will change when its inputs
> change, and the programmer might have very little knowledge of these
> inputs. When Derek Ramsey wrote Rambot, did he know what was contained
> in those 33,000 census records? More extremely, could the ‘authors’ of
> ChatGPT have any idea what is contained in the billions of words of text
> on which the model was trained? Can they be held responsible for text
> generated by the model? In a very real sense, they can’t – it is
> impossible for them to manually alter the parameters of the model in
> order to prevent certain outputs from appearing. An author can amend
> their text if it is defamatory, inaccurate or offensive. Now of course,
> OpenAI could hire content moderators to check ChatGPT’s outputs, or
> design another system to sanitise the outputs, but I think we’re getting
> a long way from authorship as the expression of human intention…
>


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php