Humanist Archives: Jan. 10, 2025, 9:28 a.m. Humanist 38.312 - AI, poetry and readers: Calvino, neuroscience & intention

              Humanist Discussion Group, Vol. 38, No. 312.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Gabriel Egan <mail@gabrielegan.com>
           Subject: Re: [Humanist] 38.308: AI, poetry and readers (49)

    [2]    From: Gabriel Egan <mail@gabrielegan.com>
           Subject: Re: [Humanist] 38.305: AI, poetry and readers (156)

    [3]    From: William Benzon <bbenzon@mindspring.com>
           Subject: Re: [Humanist] 38.308: AI, poetry and readers: Calvino, neuroscience & intention (22)


--[1]------------------------------------------------------------------------
        Date: 2025-01-08 10:51:54+00:00
        From: Gabriel Egan <mail@gabrielegan.com>
        Subject: Re: [Humanist] 38.308: AI, poetry and readers

Jim Rovira writes:

 > . . . LLMs aren't working with words.
 > It's all numbers and binary underneath
 > that. It just -renders- the numbers it's
 > working with as words. AI doesn't
 > 'understand" human language. It doesn't
 > even "think" in it. This is very rudimentary
 > to me.

Surely, Jim, you agree that your brain is
also, at root, purely mechanical and works
with units smaller than, and representative
of, words.

The fact that a computer's internal
representations are binary is not
a defining characteristic. The
world's oldest working digital computer
-- the WITCH at The National Museum of
Computing at Bletchley Park, UK -- is
not a binary machine: it stores its
numbers in decimal, using dekatron
tubes.

The first artificial neural networks
built from perceptrons were electrical
but not digital: they were analogue
devices. We agree, I hope, that brains,
like computers, are electrical
devices. But even this is inessential.

The fundamental units of computing
devices have been implemented in,
amongst other things, the toppling
of upended dominoes, the flowing of
water through valves, and the falling
of marbles through bagatelle boards.
(I can provide references to YouTube
videos of computing devices made from
these materials if anyone is interested.)

Those who distinguish human brains
from mechanical ones by the hardware
implementation of the underlying logical
elements have, I would say, already
given up on any essential difference.

Regards

Gabriel Egan

--[2]------------------------------------------------------------------------
        Date: 2025-01-08 10:33:51+00:00
        From: Gabriel Egan <mail@gabrielegan.com>
        Subject: Re: [Humanist] 38.305: AI, poetry and readers

Tim Smithers writes:

 > . . . we most certainly do know AI machines
 > do not work like human brains do, despite
 > remaining unknowns, perhaps more unknowns
 > than we currently suppose, about how brains
 > are built and function. Why? Because both
 > do not use "neural networks."

Tim goes on to explain this last remark by
saying that the things in our brains really
are neural networks but the things in our
computers are not.

There are two obvious objections to this
reasoning.

The first is that things don't have to be
built the same to work the same. Aeroplanes
work like birds in how they fly: they
generate lift by deflecting a moving
airflow over specially shaped wings. Two
things don't have to be physically
identical to be functionally similar
(that is "like" each other, in my
phrasing) The lenses in my spectacles
work like the lenses in my eyes, for
instance.

The second objection is that to say
that what computer scientists have built
are not neural networks because they are
not like brains is begging the question.
(The question being begged is "what is
a neural network?")

The analogue electrical device called the
perceptron was invented to mimic the
function of the biological device called
the neuron, and people who now connect
together layers of perceptrons -- or more
commonly digital simulations of perceptrons
-- call the things they make 'neural
networks'. There are thousands of scholarly
papers published about these networks and
using that term for them, so to object
that they are not really neural networks
is to risk sounding like Humpty Dumpty
regarding the meaning of words.
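
For readers who have not met one, a
perceptron is simple enough to show
in a few lines. Here is a minimal
sketch in Python -- the weights,
biases, and inputs are invented for
illustration and bear no relation
to any real system -- of a weighted
sum, a threshold, and the layering
that is all the term 'neural
network' commits us to:

    # A perceptron: weighted inputs summed and thresholded.
    # All weights, biases, and inputs are invented for illustration.

    def perceptron(inputs, weights, bias):
        """Fire (return 1) if the weighted sum of inputs exceeds zero."""
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0

    # Layering: the outputs of one set of perceptrons become the
    # inputs of the next. Connected layers like this are what
    # computer scientists call a 'neural network'.
    def layer(inputs, weight_rows, biases):
        return [perceptron(inputs, w, b) for w, b in zip(weight_rows, biases)]

    hidden = layer([1, 0], [[0.6, 0.6], [-0.4, 1.1]], [-0.5, -0.5])
    output = perceptron(hidden, [1.0, 1.0], -0.5)
    print(hidden, output)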

Tim goes on to say that:

 > We do know and understand how today's so
 > called Generative AI are built and work.
 > We wouldn't be able to build and operate
 > them if we didn't.

We know how they work in the sense that
we understand the principles we use
to make them, such as back propagation
and the computation of the partial
derivatives that drive gradient
descent. We understand
them at that level.
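
To make concrete the level at which
we do understand them, here is a toy
illustration, with invented numbers,
of the principle just mentioned:
nudging a single weighted connexion
along the partial derivative of the
error with respect to that weight.
Real systems do this across billions
of weights, but the principle is
the same:

    # Toy sketch of the training principle: adjust a weight along
    # the partial derivative of the error with respect to that
    # weight (the heart of back propagation and gradient descent).
    # All numbers are invented for illustration.

    x, target = 2.0, 10.0   # one input, one desired output
    w = 0.5                 # a single weighted connexion

    for step in range(50):
        y = w * x                      # forward pass
        error = (y - target) ** 2      # squared error, for orientation
        grad = 2 * (y - target) * x    # dE/dw by the chain rule
        w -= 0.01 * grad               # gradient-descent update

    print(w)  # approaches 5.0, since 5.0 * 2.0 = 10.0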

But at another level we scarcely
understand them at all. Hence there
is an entire field of research on
the 'inscrutability problem' in AI,
which I alluded to when I mentioned
that we don't know where or how a
Large Language Model stores its
knowledge that Paris is the capital
of France.

In systems built by the principles
of Good Old Fashioned AI (GOFAI),
such as the Expert Systems of the
1970s and 1980s, you certainly could
point to the part of the system that
contained each bit of knowledge that
the system held. But a computational
neural network acquires knowledge
not by having it explicitly put in
by a human creator but by ingesting
a large amount of text and using it
to tweak a large number of weighted
connexions between perceptrons,
and in this process we never
see where it stores each bit of
knowledge.
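
The contrast can be shown in
miniature. In the sketch below (all
names and values are invented for
illustration), the GOFAI-style fact
is a line one can point to, while
the network-style 'knowledge' is
nothing but numeric weights, no one
of which contains the fact:

    # GOFAI style: the knowledge is an explicit, inspectable entry.
    # (Rule base invented for illustration.)
    knowledge_base = {"capital_of": {"France": "Paris", "Italy": "Rome"}}
    print(knowledge_base["capital_of"]["France"])  # point at this line

    # Neural-network style: behaviour emerges from weights tuned
    # during training. No single number 'contains' Paris; the
    # association is distributed across all of them.
    weights = [0.213, -1.074, 0.866, 0.031, -0.442]  # invented values
    # Inspecting any one of these numbers tells you nothing about France.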

If Tim were right that "We do know
and understand how today's so
called Generative AI . . . work[s]"
(as he writes) then the field of
research into the inscrutability
problem and the drive to produce
'Explainable AI' would not exist.
That they do exist argues against
Tim's position.

Tackling the topic from a different
angle, Tim argues that human text
generation involves iterative
processes of writing and reading:

 > Writing is a working out of
 > what to think, and how to think
 > and understand, things we are
 > working on. It's not just a
 > text generation procedure.
 > Writing is a conversation --
 > literally literal -- between
 > us and what the words we read
 > from our own text say to us
 > when we read them, and re-read
 > them, and change them, and start
 > again with them, and thereby
 > discover what we are saying,
 > not say, can say, can't say,
 > and more.

I think anyone who writes professionally
will agree with Tim's account of the
iterative process by which humans
revise their text output to perfect
it, which machines do not do. But it is
possible that this iterative process
is no more than a result of the human
brain's limitations.

It would seem to be more efficient
if I could put the 'reading' bit of
my brain onto the task of checking
what is being created by the 'text
generating' bit, all inside my
head and without having to
externalize the generated text as
typed characters and words. But for
all we know, the route out of my brain
through my arms and hands into pixels
on a screen and then back in through
my eyeballs is the only possible
route because my brain has not
provided an internal route between
the requisite parts of itself. The
fact that minds do text generation
this way does not indicate some
special property that machines lack
and that makes machines inferior.
The human way may indeed be
suboptimal.

Regards

Gabriel Egan

--[3]------------------------------------------------------------------------
        Date: 2025-01-08 09:39:28+00:00
        From: William Benzon <bbenzon@mindspring.com>
        Subject: Re: [Humanist] 38.308: AI, poetry and readers: Calvino, neuroscience & intention

Comment below.

> Bill -- a poem made up of another poet's lines from the other poet's poetry
> is called a "cento." I believe that AI could generate centos. But in that
> case, the lines come from another source, so any "intention" would be from
> the human source of the original poetry, not the AI that assembled the
> lines. Computers don't have intention. Even if it were thematically based
> on another poet's poems, I would say the same thing.
>
> Jim R

You misunderstood the procedure. I, me, a human being, I chose paragraphs from
a text and gave them to FTH, and FTH, in turn, derived a poem from them. The
intentionality that put those things together is mine, not FTH's. BTW, FTH
didn't quote anything. Perhaps we could say that it transformed the text it was
given, though 'transform' seems rather a weak idea for what happened. Anyhow,
once it did what it did, I told it to make some changes. The intention that
called for those changes was my intention, not FTH's. The actual procedure
is more complex than you're implying, and I don't see how my intentionality can
be completely discounted, as you are doing.

Bill Benzon


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php