              Humanist Discussion Group, Vol. 36, No. 309.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 36.308: death of the author 2.0 continued (67)

    [2]    From: William Benzon <bbenzon@mindspring.com>
           Subject: Re: [Humanist] 36.307: death of the author 2.0 continued (95)


--[1]------------------------------------------------------------------------
        Date: 2022-12-18 18:36:28+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 36.308: death of the author 2.0 continued

It's reductive to attribute the observation of an absurdity, or a series of
absurdities, or of a claim or series of claims with literally no evidence
to support them, to some "need" to "keep machines in their pre-assigned
boxes."

Mystification at the output of a program isn't sufficient justification to
call anything "life." We have no reason from the start to ascribe life or
consciousness to anything that isn't biological. There is literally no
empirical basis for doing so, or even thinking it's a possibility. I
realize this goes against the grain of our best science fiction, but at the
same time, in our non-fiction lives most of this discourse is trolling for
headlines. We don't have a hard time keeping machines in their pre-assigned
boxes. ChatGPT will never grow arms and legs and walk out the door, or run
for Congress. It has no meaningful autonomy except to produce specific
kinds of outputs. What we have instead is attention-getting discourse
paired with sloppy thinking.

I don't see any reason to define "life" as anything other than biological.
But if we want to do that, shouldn't we do better than blandly observing
the fact that definitions change, and get into the details of how the
discussion has developed over time? I don't think "what is life?" is the
issue, though. We could start with the current biological definition. Why
isn't it being mentioned? Does science not matter? The real question, to
me, is that of consciousness, will, or personhood. I don't think we're even
asking the right questions yet. Maybe a machine will one day achieve
personhood, or some state in which we would want to ascribe personhood to
it. We would not need to call it "life" in order to do so.

Jim R


> With a number of the issues stirred up in this discussion--one e.g.
> awakened by the charge of 'anthropomorphism'--I prefer to note the
> tendency of thinking that such usage signals, rather than its
> truth-value. Is it not interesting that to an increasing degree it
> becomes more and more difficult to keep machines in their pre-assigned
> place, as it were? Some, as we all know, are always wanting to move the
> goalposts of sentience or whatever closer to what the latest gadget can
> do, others further away, e.g. to preserve human dignity. Rather than
> join in, I prefer to watch and ponder. Consider, for example, Evelyn Fox
> Keller's response to the many efforts to create life artificially:
>
> > Should we call these newly formed categories by the name life? Well,
> > that depends. It depends on our local needs and interests, on our
> > estimates of the costs and benefits of doing so, and of course, on
> > our larger cultural and historical location. The notion of doing so
> > would have seemed absurd to people living not so long ago--indeed, it
> > seems absurd to me now. But that does not mean that we will not or
> > should not call these categories Life. It only means that the
> > question What is life? is a historical question, answerable only in
> > terms of the categories by which we as human actors choose to abide,
> > the differences that we as human actors choose to honor, and not in
> > logical, scientific, or technical terms. It is in this sense that the
> > category of life is a human rather than a natural kind. Not unlike
> > explanation.
>
> Evelyn Fox Keller, "Marrying the Premodern to the Postmodern:
> Computers and Organisms after World War II", in Mechanical Bodies,
> Computational Minds: Artificial Intelligence from Automata to
> Cyborgs, ed. Stefano Franchi and Guven Guzeldere. (MIT Press,
> 2005), p. 221
>
> Comments?
>
> Yours,
> WM
>

--[2]------------------------------------------------------------------------
        Date: 2022-12-18 08:55:46+00:00
        From: William Benzon <bbenzon@mindspring.com>
        Subject: Re: [Humanist] 36.307: death of the author 2.0 continued

>
> --[2]------------------------------------------------------------------------
>        Date: 2022-12-16 09:58:36+00:00
>        From: Tim Smithers <tim.smithers@cantab.net>
>        Subject: Re: [Humanist] 36.301: death of the author 2.0 continued, or
> the opaque & the different
>
> Dear Bill,
>
> Pray, tell us, if you would be so kind, what is it you would
> say ChatGPT, and its ilk, know and understand?  (Knowing and
> understanding are widely accepted common characteristics of
> intelligent behaviour, I would say.)
>
> Before telling us, if you'd care to, I think it'd be good to
> have from you what you want us to understand from your use of
> the terms 'to know' and 'to understand.'

Thanks for bringing this up, Tim. I know very well how “to understand” and “to
know” are ordinarily used. And I understand that using them in the context of
ChatGPT isn’t quite right. However...

Imagine that you are having a conversation with a student and you ask them to
define justice. They do so. The prose is nondescript and the thought routine,
but as a short statement, sure, why not? Then you ask the student to say a few
words about Plato’s treatment of justice in the Republic. The student gives a
reasonable answer. Not ready for the Stanford Encyclopedia, but they’ve got
the general idea. Then you give them a short story that contains an injustice.
You ask: In the following story, do we see justice being served? The student
takes a couple of minutes to read it and replies, “No, it is not, and here’s
why…” You ask the student to take the story and turn it into one where justice
IS served. The student takes some time and produces an acceptable version.

Would you say that that student has some knowledge of, some understanding of,
justice? I’m not talking about profound understanding, but just a basic
workmanlike understanding, something adequate for enjoying, say, TV dramas
about crime and justice.

Well – and you know where I’m going with this – I did just that the other day.
But not with a student, with ChatGPT. I wrote about it here: “Abstract concepts
and metalingual definition: Does ChatGPT understand justice and charity?”,
https://new-savanna.blogspot.com/2022/12/abstract-concepts-and-metalingual.html
By what behavioral criterion are you going to say that ChatGPT doesn’t
understand what justice is?

I say behavioral criterion, because that’s all we’ve really got in any case, no?
We can’t see what’s happening in people’s brains when they’re thinking,
speaking, or writing about justice. All we can do is observe their behavior and
draw conclusions based on that. I’m asking that you grant ChatGPT the same
courtesy.

However we may philosophize and psychologize about “to think” and “to
understand,” it’s not at all clear to me that our thoughts on those matters will
take us to a strong criterion for denying thought to ChatGPT. At the moment I
believe that working with ChatGPT, extensively, is a much better way for me to
begin to understand what it’s doing than is philosophizing.
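
In fact, anyone can replicate the exercise programmatically rather than in the
chat window. Here is a minimal sketch, in Python. ChatGPT itself had no public
API at the time, so this assumes the OpenAI Python library with GPT-3
(text-davinci-003) as a stand-in; the story and the API key are placeholders:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

story = "..."  # any short story that contains an injustice

probes = [
    "Define justice in a few sentences.",
    "Say a few words about Plato's treatment of justice in the Republic.",
    "In the following story, do we see justice being served? Explain.\n\n" + story,
    "Rewrite the following story so that justice IS served.\n\n" + story,
]

for prompt in probes:
    completion = openai.Completion.create(
        model="text-davinci-003",  # GPT-3-era completion endpoint
        prompt=prompt,
        max_tokens=400,
        temperature=0.7,
    )
    print(completion.choices[0].text.strip())
    print("---")

Whether the answers that come back count as understanding is, of course,
exactly the question at issue.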

Later in your note you say:

> So, just asserting that looking at the source code of ChatGPT “won't mean
> anything to you," may describe your reaction to looking, but it doesn't
> describe everybody's reaction.


That is not correct. There is of course code of the kind you are talking about
and, at least I assume, the source code also has comments that make it easier
for us to understand what’s happening. When that code is compiled and the
resulting object code is executed, it is executed against a huge database of
text, a significant fraction of the internet. I’ve seen estimates that the
first training run of GPT-3 cost over 10 million dollars. The result of
execution is a large language model (LLM). That model consists of a bunch of
numbers in some arrangement; those numbers are the weights of each of 175
billion parameters. THAT’s what is unintelligible upon inspection, not to me,
to you, or even to the people at OpenAI who coded the engine. We don’t know
how to look at a parameter or set of parameters and identify them with
ChatGPT’s output.
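
To make that concrete, here is a small sketch of what inspection actually
yields. GPT-3’s weights are not public, so this assumes GPT-2 from the
Hugging Face transformers library as a stand-in:

from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Count the parameters: roughly 124 million numbers for this small model.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")

# Pull one weight matrix out of the first attention block and look at it.
w = model.transformer.h[0].attn.c_attn.weight
print(w.shape)    # torch.Size([768, 2304])
print(w[0, :5])   # five raw numbers; nothing in them says "justice"

The numbers are all there in plain sight; what they collectively do is not.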

As an analogy, consider DNA, which is some kind of code. When “executed,” a
living thing, plant, animal, or something else, is created. We don’t know how to
identify most of the DNA strands with morphological or behavioral features in
phenotypes. We know that some of the DNA codes for specific proteins. But most
to the DNA isn’t like that. We do know that some of it regulates the development
process, but what most of that material is doing, we don’t know. A lot of it has
been termed “junk” DNA. Maybe it is, maybe it isn’t. It is not unusual to
discover that this or that chunk of junk DNA actually does something.

Now, there are people working on the problem of figuring out how these models
work, of figuring out what the DNA, if you will, is doing. But that work has only
just begun. So, no, it is not at all like reading source code. It is something
else, something very strange to us.

Best,

BB


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php