              Humanist Discussion Group, Vol. 39, No. 21.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2025-05-16 09:19:21+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 39.4: repetition vs intelligence?

Dear Willard,

I'm behind Jim Rovira's "we're not talking about the same
thing when we talk of intelligence here," and I like
Maurizio's comments on repetitions, and I greatly enjoyed
Manfred's explorations of what gets mixed up and confused in
our conversations on and around Generative AI. And, I'd like
to add to these, if I may, Manfred?

Our discussions about Generative AI systems, so called Large
Language Models (LLMs), and what these systems do and don't
do, need, I think, more precision and discipline to be useful.
Else our conversations are empty of needed clarifications and
right understandings, and are like a group of people batting a
balloon around.  Entertaining for a while, perhaps, but of no
useful consequence.

The word 'intelligence' is what I call an ice-hockey-puck
word.  It can easily be pushed and knocked around the semantic
playing surface to mean quite different things, and thus used
to score "goals" in all sorts of conversations.  This is the
utility of words like this, but we can never pin anything down
with these ice-hockey-puck words.  Languaging with
ice-hockey-puck words results in what I call Ice Rink talk:
conversation happily slides all over the place going nowhere.
[I used to call these shove-ha'penny words, and the use of
such words ha'penny talk, but people stopped understanding
what I was talking about.  Who here played shove ha'penny as a
kid?]

To get somewhere useful -- a place of greater clarity, sounder
understanding, and, perhaps, better identified disagreement --
any use of the word 'intelligence' must come with some
elaboration of what is to be meant by this word by the author
who chooses to use it, and thus, what is to be understood from
it by readers: what, for the purposes of the conversation are
we to understand 'intelligence' to consist in?  This could be,
for example, the intelligence of making winning chess moves,
or the intelligence of writing a gripping and fear inducing
ghost story, or the intelligence of building a coherent
historical account of something which happened, but for which
we don't have complete and detailed records, or it could be
the intelligence of bike riding in busy urban settings, and
mending the punctures when they happen.  Of course, there are
endless more examples like this ...  so you'll happily excuse
me for not listing them, I trust.  Intelligent of me, no?

'Creative' is another ice-hockey-puck word.  [Ha'penny word
sounds so much better, no?]  Using 'intelligence' and
'creative' together results in what I call a Snooker
conversation; these two words can be bounced off each other in
all sorts of ways, and go off in all sorts of directions on
the semantic playing surface.  Fun, perhaps.  Useful?  Hardly.

In today's super-hyped talk about Generative AI systems, other
words, which 'til now had good, strong, mostly commonly
understood, meanings, have had their semantic tethers cut.
Words such as 'knowing,' 'reasoning,' 'understanding,'
'writing,' 'hallucination,' and more.  We can now, it seems,
say whatever thing we like does knowing, reasoning,
understanding, and writing, and demand this thing really
really does know, reason, and understand, and write, just like
humans know, reason, understand, and write, only better, and
nobody can have good reason to contradict these obviously
false assertions.  This is speech acts gone mad.  Yet we seem
to be happy to accept this madness.  Why?

I'm currently teaching a PhD course on making and using models
in research, to PhDers from across all the disciplines in the
Arts, Engineerings, Humanities, and Sciences.  So, the term
'Large Language Model' (LLM) gets talked about some.

You're correct, Willard, to preface this term with "so
called."  LLMs are only taken to be models of language because
their builders call them models of language.  Which, in a less
mad world, does not make them models: naming maketh not the
named.  They are not models of language because none of these
people have shown [the rest of us] how their LLM constructions
actually model language in some useful way.  LLMs do not deal
in words, not really.  All the simple-minded explanations of
how LLMs "predict" the next word in some sequence of words are
Noddy explanations which seriously mislead people into
thinking LLMs deal in and process words.  They don't.  They
process text tokens, and the large majority of the text tokens
these systems use are not words; they are bits of text.  See,
for example, "ChatGPT’s vocabulary" here
<https://emaggiori.com/chatgpt-vocabulary/>, but read enough
to finally get to where what ChatGPT really uses is explained;
it starts off with the usual misleading story about word
predicting.
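
For anyone who would like to see this for themselves, here is a
small Python sketch using OpenAI's tiktoken library (my
assumptions: that you have it installed, and that the
'cl100k_base' encoding is the one recent ChatGPT models use).
It shows a perfectly ordinary word being chopped into several
text tokens, most of which are not words at all.

    # A minimal sketch: what ChatGPT-like systems chop text into.
    # Assumes the tiktoken library is installed (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # assumed: the encoding recent ChatGPT models use
    print(enc.n_vocab)                          # on the order of 100,000 different text tokens

    word = "antidisestablishmentarianism"
    ids = enc.encode(word)                      # one word becomes several integer token ids
    print(ids)
    for i in ids:
        print(i, repr(enc.decode([i])))         # the bit of text each id stands for

The point is only that what comes out are numbered bits of
text, not words.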

ChatGPT uses more than 100,000 different text tokens, and
these are encoded using UTF-8 and represented using very large
numerical vectors to form what is called the "embedding
space."  This is often [grossly mis-] described as the
"semantic space" of the LLM. No semantics of any kind plays
any role in the computations carried out by the LLM on these
numerical vectors.  It's just made to look like it does, if
you play fast and loose with what is meant by semantics and how
words mean anything.  Text tokens which frequently occur close
to each other in sequences of text tokens in the text used to
program these systems -- so called "train" them -- have places
in this vector space which are close together.  If you present
this using text tokens we recognise, and thus [automatically]
read as words, it can look like this "closeness" in the text
token vector space "captures" the semantic relationships of
the words involved.  But this semantic relationship is an
artefact of the statistics of text token patterns found in the
text used to program the system.  To claim this token vector
space is a semantic space of words would require showing that
this is the only kind of relationship found between the
represented tokens, not just something we can find if we pick
the right text tokens to use, which, of course, cannot be true
since most of the text tokens represented are not words.  LLMs
do capture the statistics of text token relationships found in
the original text, but given how they are programmed with all
this text, they cannot do anything else.  Claiming that these
statistics of text token patterns are the same as the semantics
of word sequences is either deliberate deception or
ignorance-induced delusion on the part of the people who claim
this.
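
To make this "closeness" business concrete, here is a toy
Python sketch of my own (a caricature, not how any real LLM is
built: real embeddings are learned over billions of tokens, and
over text tokens rather than the whole words I use here for
readability).  It builds vectors purely from co-occurrence
counts in a tiny made-up corpus, and tokens that happen to
occur near each other come out "close", with no semantics
consulted anywhere.

    # Toy sketch: "closeness" from co-occurrence counts alone, no semantics.
    # A caricature: real LLM embeddings are learned, and over text tokens,
    # not the whole words used here for readability.
    import numpy as np

    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the rug".split(),
        "the cat chased the dog".split(),
    ]
    vocab = sorted({t for seq in corpus for t in seq})
    index = {t: i for i, t in enumerate(vocab)}

    # Count how often tokens occur within two places of each other.
    counts = np.zeros((len(vocab), len(vocab)))
    for seq in corpus:
        for i, t in enumerate(seq):
            for j in range(max(0, i - 2), min(len(seq), i + 3)):
                if j != i:
                    counts[index[t], index[seq[j]]] += 1

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    # "cat" and "dog" come out close, purely because of where they occur
    # in these three sequences, not because of anything they mean.
    print(cosine(counts[index["cat"]], counts[index["dog"]]))
    print(cosine(counts[index["cat"]], counts[index["chased"]]))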

So, the best we might be able to say about LLMs is that they
are statistical models of text token patterns found in a
ginormous collection of texts that have resulted from some
human writing, after ripping out all images and other non-text
content that were integral components of the original texts,
and ripping out all the typographical formatting of these
original texts needed to make them readable by us: try reading
long sequences of UTF-8 codes!
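
If it helps to see what "a statistical model of text token
patterns" amounts to in its most naive form, here is another
toy sketch of my own (again a caricature: a real LLM uses a
transformer network over token vectors, not a lookup table of
counts).  It records which token tends to follow which, and
then generates a sequence by sampling from those counts.  It
says nothing; it echoes patterns.

    # Toy sketch: the most naive "statistical model of text token patterns",
    # a bigram table of which token tends to follow which.  A caricature of
    # an LLM, which uses a transformer over token vectors, not a table.
    import random
    from collections import defaultdict

    tokens = "the cat sat on the mat and the dog sat on the rug".split()

    follows = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        follows[a].append(b)

    # "Generate" text: no words are formed and nothing is meant; the
    # recorded patterns are simply sampled and echoed back.
    out = ["the"]
    for _ in range(10):
        options = follows[out[-1]]
        if not options:            # a dead end in this tiny corpus
            break
        out.append(random.choice(options))
    print(" ".join(out))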

Text is the marks left by some human writing, and, nowadays,
often printed or screen-rendered using suitably well-designed
font(s) and typographical designs.  Text is not the same as
words.  The words involved were formed in the head of the
author and remain there.  Writing words to say something
involves encoding the chosen words in some shared alphabet and
shared spelling and grammar.  This results in the marks we
call text.  Text is thus a sequence of signs, and it must be
read, by, of course, something that can read these signs, to
re-form the words of the author.  These re-formed words are
formed in the reader's head; they are not found and somehow
picked out of the text; the signs are not the words, they are
signs for words.  This notion of "picking up the words" is not
what reading is, though this is how it might seem to us, and
how we often talk about it being.  This confusion -- the text
is the words -- was harmless when we [just about] only had
text from human writing, but now we have, thanks to things
like ChatGPT, automated text generation systems, and lots of
text which is not the result of any kind of writing.  Just
because we can read this automatically generated text, and
form words in our heads from this reading, words which mean
something to us, and thus give us the impression that the text
is about something, does not mean, nor necessarily make, the
generator of this text a writer.  To be a writer requires the
author to be a reader of the written text, and, of course,
lots of other text.  And it requires the writer to have a mind
in which they form words to say something with.  ChatGPT, and
other Generative AI systems like it, do not read anything.
ChatGPT does no reading of your [so called] prompt.  The text
you make by writing your prompt is simply chopped into a
sequence of text tokens which are, in turn, used to build a
sequence of vector encodings, together with quite a lot of
other stuff added to your prompt text by the always hidden
prompt processing ChatGPT has to do.  (ChatGPT is not just an
LLM, it has plenty of other machinery needed to make it do
what it does.)
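
And, to show how little "reading" of your prompt there is, here
is a last schematic sketch of the front end of such a system
(invented sizes, a random embedding matrix standing in for the
learned one, and none of the hidden prompt machinery; the real
thing uses thousands of dimensions per token).  Your prompt is
chopped into token ids, and each id simply selects a row of
numbers.  That, not your words, is what gets processed.

    # Schematic sketch of the front end of a ChatGPT-like system: invented
    # sizes, a random embedding matrix, and none of the hidden prompt
    # machinery.  For illustration only.
    import numpy as np
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    prompt = "Please write me a gripping ghost story."
    ids = enc.encode(prompt)               # your prompt, chopped into token ids
    print(ids)

    rng = np.random.default_rng(0)
    embedding = rng.normal(size=(enc.n_vocab, 8))  # real models: learned, thousands of dims

    vectors = embedding[ids]               # each id just selects a row of numbers
    print(vectors.shape)                   # this, not your words, is what gets processed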

So, to mend the usual, and still needed, semantic tether the
word 'writing' used to have: ChatGPT does not, and cannot,
write; it only generates text.  It has no mind in which to
form words and then work out how to write them down using signs
we can read.  It does not, and cannot, read.  It has no mind in
which to form words; it chops text into text tokens, a
different system of signs, and not one we can read.

To take the text generated by systems like ChatGPT as writing,
and to take this writing to be the result of something that
works out something to say, and then works out which words to
use to say this with, and then writes these words for us to
read, is to hallucinate.

    To hallucinate : to experience an apparent sensory
    perception of something that is not actually present.

This is the real meaning of to hallucinate, a meaning we
clearly still need it to have.  Generative AI systems do not
hallucinate.  And, the only fabrication, the only "making
things up," they do is the fabrication of sequences of text
tokens.  They do not say anything by writing, but they are
built to make it look like they do.  This is deliberate
deception, a kind of dishonesty.

Only humans write, machines only generate text.  Let's try to
keep this simple and evident distinction in the way we talk
about these things.

-- Tim



> On 8 May 2025, at 09:38, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 39, No. 4.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2025-05-08 06:51:41+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: repetition vs intelligence
>
> This is about the current state and probable trajectory of artificial
> intelligence, I'd hope without promotional futurism. (Prominence of the
> future tense in writings about AI I find very interesting indeed, but
> here it is, at least for me, only a distraction.)
>
> My question is this: to what extent, in what ways, do the strategies of
> the so-called Large Language Models produce results that only echo back
> to us current linguistic behaviour (parole), in effect saying nothing
> new, however useful, however news to the questioner? The current term
> for the misbehaviour of LLMs when they make things up seems to be
> 'hallucination'; far more accurate would be 'fabrication'.
> Hallucinations are much more interesting, but used of LLMs lets them off
> the hook.
>
> We could say, as a friend of mine did, that saying something new in my
> sense, i.e. being truly creative, is exceedingly rare. But isn't that
> exactly what we want of intelligence? What would the artificial kind
> have to do to qualify? Or do we have examples, are they being noticed
> and investigated?
>
> Enough for now, I trust. Comments eagerly welcomed!
>
> Best,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Humanist
> www.mccarty.org.uk


