Humanist Discussion Group

Humanist Archives: Dec. 18, 2022, 8:09 a.m. Humanist 36.308 - death of the author 2.0 continued

              Humanist Discussion Group, Vol. 36, No. 308.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 36.307: death of the author 2.0 continued (70)

    [2]    From: Willard McCarty <willard.mccarty@mccarty.org.uk>
           Subject: how our thoughts go (42)


--[1]------------------------------------------------------------------------
        Date: 2022-12-17 15:53:18+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 36.307: death of the author 2.0 continued

Some responses below.

On Sat, Dec 17, 2022 at 4:41 AM Humanist <humanist@dhhumanist.org> wrote:

>
> > Thanks very much, Bill, for the detailed (and very clear) response --
> but I
> > believe this kind of thinking involves the confusion of ontology and
> > function. The workings of a monkey's or a dog's mind is just as opaque
> as a
> > human's, but we don't equate monkeys, dogs, and humans.
>
> We do, however, recognize our kinship with monkeys and dogs, at least
> those of
> us who accept Darwinian evolutionary theory. But they don’t have language,
> and
> we do. So, in some non-trivial fashion, does ChatGPT. That’s what holds my
> attention.
>

It's not clear to me that ChatGPT "uses" language in a way comparable to
humans, and I'm not sure how you can make that claim while saying its
processing is "opaque." How can you ascribe intention to a black box until
you see into it? We ascribe intention to one another because we see into
ourselves, but what if something is completely foreign to us? It's also not
clear to me that dogs and monkeys do not use language in some forms, and I
agree with you that dogs and monkeys are not as completely foreign to us as
any machine.


> > Just about two days ago I was doing some reading in Jacobi, and in it he
> > used the analogy of a knitted sock to describe the ego.
>
> Nope. To continue on, it’s made of a very different kind of “yarn.”
>

Can you explain how? Programming isn't "yarn" in the analogy. What the
programming is -made of- is the yarn. What the device is physically made of
is the yarn. Everything else is function and output. We can know what cells
are made of before we know how they work.


> Dear James,
>
> A mild correction to Jacobi.
>
> To knit socks with a floral pattern, one requires several kinds of yarn.
> They
> are different pieces, and different colors. If you unravel the sock, it
> will not
> be a single piece of yarn, but several pieces of yarn, generally of several
> kinds.
>
> Yours,
>
> Ken
>

Thanks much for the reply, Ken. Apologies if I misremembered Jacobi, but
just for the sake of the analogy, a pattern by itself doesn't need
different kinds of yarn. A sock can have floral patterns all of the same
color from the same single piece of yarn. It's also possible to use a
single, multicolored piece of yarn to create different colored patterns,
and it's possible to use the same kind (material and thickness) of yarn in
different colors. And, of course, it's possible to knit a sock out of yarn
and sew patterns into it using thread. But I think the analogy holds up in
all cases. Cut pieces of thread from the same spool? Wouldn't philosophers
say that color is a literally insubstantial difference -- accidens?

Jim R

--[2]------------------------------------------------------------------------
        Date: 2022-12-18 08:03:08+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: how our thoughts go

With a number of the issues stirred up in this discussion--one e.g.
awakened by the charge of 'anthropomorphism'--I prefer to note the
tendency of thinking that such usage signals rather than its
truth-value. Is it not interesting that to an increasing degree it
becomes more and more difficult to keep machines in their pre-assigned
place, as it were? Some, as we all know, are always wanting to move the
goalposts of sentience or whatever closer to what the latest gadget can
do, others further away, e.g. to preserve human dignity. Rather than
join in, I prefer to watch and ponder. Consider, for example, Evelyn Fox 
Keller's response to the many efforts to create life artificially:

> Should we call these newly formed categories by the name life? Well,
> that depends. It depends on our local needs and interests, on our
> estimates of the costs and benefits of doing so, and of course, on
> our larger cultural and historical location. The notion of doing so
> would have seemed absurd to people living not so long ago--indeed, it
> seems absurd to me now. But that does not mean that we will not or
> should not call these categories Life. It only means that the
> question "What is life?" is a historical question, answerable only in
> terms of the categories by which we as human actors choose to abide,
> the differences that we as human actors choose to honor, and not in
> logical, scientific, or technical terms. It is in this sense that the
> category of life is a human rather than a natural kind. Not unlike
> explanation.

Evelyn Fox Keller, "Marrying the Premodern to the Postmodern:
Computers and Organisms after World War II", in Mechanical Bodies,
Computational Minds: Artificial Intelligence from Automata to
Cyborgs, ed. Stefano Franchi and Guven Guzeldere. (MIT Press,
2005), p. 221

Comments?

Yours,
WM


--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews;  Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php