Humanist Discussion Group

Humanist Archives: Dec. 17, 2022, 9:41 a.m. Humanist 36.307 - death of the author 2.0 continued

              Humanist Discussion Group, Vol. 36, No. 307.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                Submit to:

    [1]    From: William Benzon <>
           Subject: Re: [Humanist] 36.304: death of the author 2.0 continued (44)

    [2]    From: Tim Smithers <>
           Subject: Re: [Humanist] 36.301: death of the author 2.0 continued, or the opaque & the different (211)

    [3]    From: Ken Friedman <>
           Subject: Re: [Humanist] 36.304: death of the author 2.0 continued (13)

--[1]------------------------------------------------------------------------
        Date: 2022-12-16 11:46:55+00:00
        From: William Benzon <>
        Subject: Re: [Humanist] 36.304: death of the author 2.0 continued


>        Date: 2022-12-15 23:55:47+00:00
>        From: James Rovira <>
>        Subject: Re: [Humanist] 36.301: death of the author 2.0 continued, or the opaque & the different
> Thanks very much, Bill, for the detailed (and very clear) response -- but I
> believe this kind of thinking involves the confusion of ontology and
> function. The workings of a monkey's or a dog's mind are just as opaque as a
> human's, but we don't equate monkeys, dogs, and humans.

We do, however, recognize our kinship with monkeys and dogs, at least those of
us who accept Darwinian evolutionary theory. But they don’t have language, and
we do. So, in some non-trivial fashion, does ChatGPT. That’s what holds my
attention.

> Just about two days ago I was doing some reading in Jacobi, and in it he
> used the analogy of a knitted sock to describe the ego. He asked his
> readers to imagine a highly detailed knitted sock, one with floral and
> other patterns knitted into it. But no matter how complicated the knitting,
> if you pull on the end of the thread long enough you'll wind up with the
> same single piece of yarn. Ontology vs. form. I believe you've described a
> new kind of functionality, but I don't believe you've described a different
> kind of being. It's a computer program. It's still made out of the same
> kind of yarn.

Nope. To continue on, it’s made of a very different kind of “yarn.”

Osamu Tezuka’s “Metropolis”, an early manga of his from the 1950s, centers
around Michi, an “artificial being” (“jinzo ningen” in Japanese) constructed of
artificial cells. But the story also features ordinary electro-mechanical
robots. Moreover, Michi is sometimes male, sometimes female. It’s a different
kind of yarn.

> Had some fun on the ChatGPT website the other day asking questions about
> Romanticism. Good, well written, Wikipedia level answers, and accurate
> enough for that level.

Yes, indeed.

Thanks, Jim.


--[2]------------------------------------------------------------------------
        Date: 2022-12-16 09:58:36+00:00
        From: Tim Smithers <>
        Subject: Re: [Humanist] 36.301: death of the author 2.0 continued, or the opaque & the different

Dear Bill,

Pray, tell us, if you would be so kind, what is it you would
say ChatGPT, and its ilk, know and understand?  (Knowing and
understanding are widely accepted common characteristics of
intelligent behaviour, I would say.)

Before telling us, if you'd care to, I think it'd be good to
have from you what you want us to understand from your use of
the terms 'to know' and 'to understand.'

To illustrate what I'm asking for, here's how I characterise
knowledge and understanding:

    Knowledge is a capacity for rational action;

    Understanding is a capacity for forming rational
    explanations.
The first, for knowledge, comes from Allen Newell, and was
first published in "The Knowledge Level," Artificial
Intelligence Volume 18, Issue 1, January 1982, Pages 87-127.
The second is my extension of Newell's characterisation of
knowledge to cover understanding.  (Newell's knowledge as a
capacity for rational action formed the basis for subsequent
work on Knowledge Modelling, designing and building Knowledge
Based Systems, and some work on Knowledge Management, when we
did that kind of stuff.)

I'm not asking you to use my characterisations, but please do
if you want to.  If you don't, an explanation of what you want
us to understand from the terms 'knowledge' and
'understanding' when you use these, is, I think, needed to
move on from empty assertions.

I would say things like ChatGPT can reasonably be said to know
only in a rather superficial way, and not understand anything.
They are more accurately described as remembering systems, of
some dubious quality: they appear to be able to remember lots
of things, but not always well, given what they are programmed
with: massive amounts of text collected [some would say
ripped-off] from the Web.  But why do we need something like
this?  Intelligent use (by humans) of the Web seems to do as
good a job, and often a better job.  This does, of course,
require that people have and use some critical reading and
reasoning skills -- including the kind of careful
understanding and use of our words that Jim and Maurizio point
us to -- but what's the harm in this?  Why would we want to
offer things to people that tend, in their use, to displace
and erode these kinds of generally useful skills, not to say,
needed skills?

As Jim explains, ChatGPT, and its ilk, are computer programs,
and they are not a new kind of program.  They differ in scale,
and some important technical details that result in better
computational performance, but as programs they happily sit on
the technological path that started with Minsky and Papert's
perceptrons.

Also, just like in those early days of making computers do
things by programming, the 'transparency' of the code they are
programmed with is always relative to the knowledge and
understanding of the programmer looking at the code.  So, just
asserting that looking at the source code of ChatGPT "won't
mean anything to you," may describe your reaction to looking,
but it doesn't describe everybody's reaction.  OpenAI is no
longer an Open Source company, as it was when it started, nor
is it a not for profit company any more, so we can't actually
look at the source code of ChatGPT, but we could for earlier
versions, when it was GPT-1 and [I think] GPT-2.  From such
looking it was quite possible to understand how the code
worked, and what it did, if you understand enough about this
kind of programming.  So called Machine Learning techniques
are properly understood as kinds of computer programming
techniques: programming with data.  This too is not new.  Just
the amount of data used, and the [large] number of parameters
programmed this way, are new[ish].  Humans, and other animals
learn, not computers.  Computers are programmed ...  by people
who learn how to do this programming.
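Tim's point that machine learning is "programming with data" can be sketched minimally. The example below is an illustration, not anything from ChatGPT's codebase: a perceptron, the line of work Minsky and Papert analysed, whose behaviour (here, the logical AND function) is fixed not by hand-written rules but by the examples fed to it.

```python
# A perceptron "programmed with data": its behaviour (the AND function)
# is not written as rules but induced from the example list below.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Return (weights, bias) fitted to (inputs, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The "program" is this data, not the training loop above.
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_examples])  # -> [0, 0, 0, 1]
```

Change the example list and the same code yields a different behaviour; in that narrow sense the data, not the code, is doing the programming.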

One last thing, since we're on about this here.  Computer
programs, such as the "neural net" kind you talk about, do not
recognise things.  That's what humans and other animals do.
These programs do pattern matching ...  this image to that
name, for example.  Anthropomorphising what these programs do,
by saying they 'recognise,' does not make them like us, nor
necessarily intelligent, even if you can make them look like
they are like us with the way you talk about what they do.
The 'learning' in the term 'machine learning' is used by
analogy to learning seen in humans, not because it is learning
like we see in humans.  We should not forget this, I think, if
we want to avoid fooling ourselves.
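The "pattern matching ... this image to that name" point above can likewise be made concrete. The sketch below is invented for illustration (the feature vectors and names are made up): a nearest-neighbour matcher assigns an input the name of its closest stored template, which is matching by distance, not recognition in the human sense.

```python
# "Recognition" as pattern matching: an input vector is assigned the
# name of the nearest stored template. Vectors and names are made up
# for illustration; nothing here perceives or understands anything.

def match(templates, vector):
    """Return the name whose template is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(templates[name], vector))

templates = {
    "cat": [0.9, 0.1, 0.2],
    "dog": [0.2, 0.8, 0.1],
}

print(match(templates, [0.85, 0.15, 0.25]))  # -> cat
print(match(templates, [0.10, 0.90, 0.05]))  # -> dog
```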

Best regards,


> On 15 Dec 2022, at 08:37, Humanist <> wrote:
>              Humanist Discussion Group, Vol. 36, No. 301.
>    [1]    From: William Benzon <>
>           Subject: Re: [Humanist] 36.297: death of the author 2.0 continued, or swinging on a star (74)
> --[1]------------------------------------------------------------------------
>        Date: 2022-12-14 20:23:30+00:00
>        From: William Benzon <>
>        Subject: Re: [Humanist] 36.297: death of the author 2.0 continued, or swinging on a star
> James,
> With a computer program of the ordinary kind, the vast majority of them, you
> examine the source code and see how things operate. You can’t do that with the
> model at the heart of ChatGPT. Well, sure, you can look at it, but it won’t mean
> anything to you. There’s stuff there, but what it does, that’s opaque. From an
> old blog post:
> The only case of an intelligent mind that we know of is the human mind, and the
> human mind is built from the “inside.” It isn’t programmed by external agents.
> To be sure, we sometimes refer to people as being programmed to do this or that,
> and when we do so the implication is that the “programming” is somehow against
> the person’s best interests, that the behavior is in some way imposed on them.
> And that, of course, is how computers are programmed. They are designed to be
> imposed upon by programmers. A programmer will survey the application domain,
> build a conceptual model of it, express that conceptual model in some design
> formalism, formulate computational processes in that formalism, and then
> code that implements those processes. To do this, of course, the programmer
> also know something about how the computer works since it’s the computer’s
> operations that dictate the language in which the process design must be
> encoded.
> To be a bit philosophical about this, the computer programmer has a
> “transcendental” relationship with the computer and the application domain. The
> programmer is outside and “above” both, surveying and commanding them from on
> high. All too frequently, this transcendence is flawed, the programmer’s
> knowledge of both domain and computer is faulty, and the resulting software is
> less than wonderful.
> Things are a bit different with machine learning. Let us say that one uses a
> neural net to recognize speech sounds or recognize faces. The computer must be
> provided with a front end that transduces visual or sonic energy and presents
> the computer with some low-level representation of the sensory signal. The
> computer then undertakes a learning routine of some kind, the result of which is
> a bunch of weightings on features in the net. Those weightings determine how the
> computer will classify inputs, whether mapping speech sounds to letters or faces
> to identifiers.
> Now, it is possible to examine those feature weightings, but for the most part
> they will be opaque to human inspection. There won’t be any obvious relationship
> between those weightings and the inputs and outputs of the program. They aren’t
> meaningful from the “outside.” They make sense only from the “inside.” The
> programmer no longer has transcendental knowledge of the inner operations of the
> program that he or she built.
> If we want a computer to hold vast intellectual resources at its command, it’s
> going to have to learn them, and learn them from the inside, just like we do.
> And we’re not going to know, in detail, how it does it, any more than we know,
> in detail, what goes on in one another’s minds.
> Such things are new to us. They didn’t exist 20 years ago.
> Bill B
>> --[1]------------------------------------------------------------------------
>>       Date: 2022-12-13 18:26:43+00:00
>>       From: James Rovira <>
>>       Subject: Re: [Humanist] 36.295: death of the author 2.0
>> I'm curious how Bill can say a thing is "completely new in the universe"
>> AND "opaque to us." It's unclear how we can make any claims about it at all
>> until we know what it is.
>> "The resulting model is opaque to us; we didn’t program it. The resulting
>> behavioral capacities are unlike those of any other creature/being/thing
>> we’ve experienced, nor do we know what those capacities will evolve into in
>> the future. This creature/being/thing is something fundamentally NEW in the
>> universe, at least our local corner of it, and needs to be thought of
>> appropriately. It deserves/requires a new term." - Bill B.
>> Is it made up of circuit boards? Run on electricity? 1s and 0s? Are we
>> confusing ontology with functionality?
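Bill's quoted description of training, that a learning routine produces feature weightings with no obvious relationship to the program's inputs and outputs, can be sketched in miniature. The task and all names below are invented for illustration: a tiny linear classifier is fitted by gradient descent to a hidden rule, and the "knowledge" it acquires ends up as bare numbers whose connection to the rule is not evident from inspection.

```python
import math
import random

# Fit a tiny linear classifier by gradient descent on a made-up task,
# then inspect the learned weights. The hidden rule the program must
# learn from examples: label is 1 when x[0] + x[2] > 1. That rule
# appears nowhere in the code below.

random.seed(0)
examples = []
for _ in range(200):
    x = [random.random() for _ in range(4)]
    examples.append((x, 1 if x[0] + x[2] > 1 else 0))

w, b, lr = [0.0] * 4, 0.0, 0.5
for _ in range(300):
    for x, y in examples:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))   # sigmoid
        g = p - y                    # gradient of the log-loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# The learned "knowledge" is just these numbers:
print("weights:", [round(wi, 1) for wi in w], "bias:", round(b, 1))

accuracy = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in examples
) / len(examples)
print("training accuracy:", accuracy)
```

The classifier ends up reproducing the rule almost perfectly, yet nothing in the printed weights announces "x[0] + x[2] > 1"; at scale, with billions of such parameters, this is the opacity at issue.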

--[3]------------------------------------------------------------------------
        Date: 2022-12-16 08:19:42+00:00
        From: Ken Friedman <>
        Subject: Re: [Humanist] 36.304: death of the author 2.0 continued

Dear James,

A mild correction to Jacobi.

To knit socks with a floral pattern, one requires several kinds of yarn. They
are different pieces, and different colors. If you unravel the sock, it will not
be a single piece of yarn, but several pieces of yarn, generally of several
colors.


