              Humanist Discussion Group, Vol. 36, No. 315.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2022-12-21 07:47:47+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 36.309: death of the author 2.0 continued

Dear Bill,

I'll make this my last go.

On Artificial Flowers

Describing an instance of A using a close analogy to an
instance of B does not make, and cannot make, the A identical
to the B, or somehow essentially the same as a B, as you seem
to want us to understand from your imagined student/ChatGPT
exercise.

In the context of AI, I call this the Artificial Flower
mistake.  Artificial light, artificially generated, is the
same as Natural light from the Sun, for example.  Both are
instances of real light, i.e. photons, and we can demonstrate
this by careful comparative study of both.  However,
artificial flowers are not the same as Natural flowers, no
matter how like real flowers the artificial flowers look.  It
takes more than looks to demonstrate you have real flowers.
Making artificial flowers more real looking does not, and
cannot, make them more real as flowers, not really.

The same, I think, goes for intelligent behaviour; it takes
more than looks to have real intelligent behaviour artificially
generated.  This is something some people working on, or
making use of, Large Language Models like ChatGPT, and its
ilk, seem to have forgotten, but, to be fair, they would not
be the first people in AI to do this kind of forgetting.

What it takes to reliably demonstrate the presence of real
intelligent behaviour, in the artificial and in the Natural,
remains an open question; a question to which AI used to make
more explicit efforts to contribute, together with efforts in
neighbouring disciplines such as Cognitive Science,
Psychology, Ethology, Biology, Anthropology, Sociology, and
the Humanities.  Turing may have made a start on this question
from the AI side, but he did not provide a final answer.


On the Unintelligent Unintelligible

You assert that the "weights of each of [the] 175 billion
parameters" are "unintelligible upon inspection" by you, me,
and anybody else who tries to "identify them with ChatGPT's
output."

I don't know if the folks at OpenAI know how to do this or
not, and, as far as I know, they don't let others attempt to
do this.  However, it is possible to do just this for these
kinds of very large, sophisticated, programmed-with-data
systems.  I've seen it done.  Of course, it's not a trivial
job.  It requires plenty of work, imagination, ingenuity,
powerful specially built tools, and team work.  But, this is
no different, as far as I can see, from the usual efforts
needed to know and understand the execution behaviour and
performance of any complicated computational system.  In my
experience, always with others, this can take the most effort
in a project to design, develop, efficiently implement,
verify, validate, calibrate, test, demonstrate, and document
a powerful software system, AI or some other kind.  Basically,
you have to be able to do this in the world of professional
software engineering.  The scale of things like ChatGPT does
pose new difficulties, but nothing that somehow pushes them
into a realm of impossible-to-inspect-and-understand systems,
or into a realm of never-before-seen kinds of computational
systems.
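
By way of a small illustration only, here is a sketch of my
own (not anything from OpenAI) of how one can at least
enumerate the parameter tensors of a model of this general
kind.  It uses Python, the Hugging Face transformers library,
and the publicly released GPT-2 weights; ChatGPT's own weights
are not published, so GPT-2 stands in here by assumption.

    # Minimal sketch: list every parameter tensor of a small,
    # publicly released language model (GPT-2) and count them.
    # Assumes the Hugging Face "transformers" library is installed.
    from transformers import GPT2LMHeadModel

    model = GPT2LMHeadModel.from_pretrained("gpt2")

    total = 0
    for name, tensor in model.named_parameters():
        total += tensor.numel()
        print(name, tuple(tensor.shape))

    # Roughly 124 million for GPT-2 small; GPT-3 is reported
    # to have about 175 billion.
    print("total parameters:", format(total, ","))

Listing the numbers is, of course, the easy part.  The real
work, of the kind I describe above, lies in building the tools
and analyses that relate these tensors to what the system
actually does.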

Best regards,

Tim


> On 19 Dec 2022, at 09:33, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 36, No. 309.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
<snip>
>
>    [2]    From: William Benzon <bbenzon@mindspring.com>
>           Subject: Re: [Humanist] 36.307: death of the author 2.0 continued (95)
>
<snip>
>
> --[2]------------------------------------------------------------------------
>        Date: 2022-12-18 08:55:46+00:00
>        From: William Benzon <bbenzon@mindspring.com>
>        Subject: Re: [Humanist] 36.307: death of the author 2.0 continued
>
>>
>> --[2]------------------------------------------------------------------------
>>       Date: 2022-12-16 09:58:36+00:00
>>       From: Tim Smithers <tim.smithers@cantab.net>
>>       Subject: Re: [Humanist] 36.301: death of the author 2.0 continued, or the opaque & the different
>>
>> Dear Bill,
>>
>> Pray, tell us, if you would be so kind, what is it you would
>> say ChatGPT, and its ilk, know and understand?  (Knowing and
>> understanding are widely accepted common characteristics of
>> intelligent behaviour, I would say.)
>>
>> Before telling us, if you'd care to, I think it'd be good to
>> have from you what you want us to understand from your use of
>> the terms 'to know' and 'to understand.'
>
> Thanks for bringing this up, Tim. I know very well how “to understand” and “to
> know” are ordinarily used. And I understand that using them in the context of
> ChatGPT isn’t quite right. However...
>
> Imagine that you are having a conversation with a student and you ask them to
> define justice. They do so. The prose is nondescript and the thought routine,
> but as a short statement, sure, why not? Then you ask the student to say a few
> words about Plato’s treatment of justice in Plato’s Republic. The student gives
> a reasonable answer. Not ready for the Stanford Encyclopedia, but they’ve got
> the general idea. Then you give them a short story that contains an injustice.
> You ask: In the following story, do we see justice being served? The student
> takes a couple of minutes to read it and replies, “No, it is not, and here’s
> why…” You ask the student to take the story and turn it into one where justice
> IS served. The student takes some time and produces an acceptable version.
>
> Would you say that that student has some knowledge of, some understanding of
> justice? I’m not talking about profound understanding, but just a basic
> workman-like understanding, something adequate for enjoying, say, TV dramas
> about crime and justice.
>
> Well – and you know where I’m going with this – I did just that the other day.
> But not with a student, with ChatGPT. I wrote about it here, Abstract concepts
> and metalingual definition: Does ChatGPT understand justice and charity?,
> https://new-savanna.blogspot.com/2022/12/abstract-concepts-and-metalingual.html
> By what behavioral criterion are you going to say that ChatGPT doesn’t
> understand what justice is?
>
> I say behavioral criterion, because that’s all we’ve really got in any case, no?
> We can’t see what’s happening in people’s brains when they’re thinking,
> speaking, or writing about justice. All we can do is observe their behavior and
> draw conclusions based on that. I’m asking that you grant ChatGPT the same
> courtesy.
>
> However we may philosophize and psychologize about “to think” and “to
> understand,” it’s not at all clear to me that our thoughts on those matters will
> take us to a strong criterion for denying thought to ChatGPT. At the moment I
> believe that working with ChatGPT, extensively, is a much better way for me to
> begin to understand what it’s doing than is philosophizing.
>
> Later in your note you say:
>
>> So, just asserting that looking at the source code of ChatGPT “won’t mean
>> anything to you,” may describe your reaction to looking, but it doesn’t
>> describe everybody’s reaction.
>
>
> That is not correct. There is of course code of the kind you are talking about
> and, at least I assume, the source code also has comments that make it easier
> for us to understand what’s happening. When that code is compiled and the
> resulting object code is executed, it is executed against a huge database of
> text, a significant fraction of the internet. I’ve seen estimates that the first
> run of GPT-3 cost over 10 million dollars. The result of execution is a large
> language model (LLM). That model consists of a bunch of numbers in some
> arrangement; those numbers are the weights of each of 175 billion parameters.
> THAT’s what is unintelligible upon inspection, not by me, not by you, nor even
> by the people at OpenAI who coded the engine. We don’t know how to look at a
> parameter or set of parameters and identify them with ChatGPT’s output.
>
> As an analogy, consider DNA, which is some kind of code. When “executed,” a
> living thing, plant, animal, or something else, is created. We don’t know how to
> identify most of the DNA strands with morphological or behavioral features in
> phenotypes. We know that some of the DNA codes for specific proteins. But most
> of the DNA isn’t like that. We do know that some of it regulates the development
> process, but what most of that material is doing, we don’t know. A lot of it has
> been termed “junk” DNA. Maybe it is, maybe it isn’t. It is not unusual to
> discover that this or that chunk of junk DNA actually does something.
>
> Now, there are people working on the problem of figuring out how these models
> work, of figuring out what the DNA, if you will, is doing. But that work has only
> just begun. So, no, it is not at all like reading source code. It is something
> else, something very strange to us.
>
> Best,
>
> BB



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php