              Humanist Discussion Group, Vol. 38, No. 407.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2025-03-15 09:29:06+00:00
        From: Michael Falk <michaelgfalk@gmail.com>
        Subject: Re: [Humanist] 38.403: the strangeness of artificial intelligence?

Hey Willard,

It’s a great question, and it has been a central concern of AI research for
some time. Until recently, the main place where this question was asked was
the field of reinforcement learning. Models trained with reinforcement
learning don’t learn from human-generated data, so they often exhibit very
non-human behaviours. Chess-, Go- and StarCraft-playing models all fall
into this category. Any StarCraft fans on this list will be aware of
AlphaStar’s *very* strange predilection for Disruptor strats.

As our expectations of language models have grown, so has our sense of
their strangeness. Back when language models were unconvincing, they
didn’t seem “strange.” They simply seemed inadequate.

LLMs are very strange, if you consider them as models of human
intellection. Humans are not provided with a fixed vocabulary of arbitrary
symbols at birth, and then expected to “learn” by observing statistical
correlations between occurrences of these symbols over millions of exactly
identical iterations! I really have no idea what Geoffrey Hinton is talking
about when he says that human learning is “the same” as this.
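To make the contrast concrete, the simplest possible version of “learning
statistical correlations between symbols” is just counting which symbol tends
to follow which over a fixed vocabulary of arbitrary token IDs. A toy Python
sketch, purely illustrative and nothing like a real LLM’s architecture:

    # Toy illustration: "learning" as counting co-occurrence statistics
    # over a fixed vocabulary of arbitrary symbols (integer token IDs).
    from collections import Counter, defaultdict

    corpus = [[3, 7, 7, 1, 4], [3, 7, 4, 1]]   # sequences of arbitrary token IDs

    follows = defaultdict(Counter)
    for seq in corpus:
        for prev, nxt in zip(seq, seq[1:]):
            follows[prev][nxt] += 1            # tally which symbol follows which

    def predict_next(token):
        """Return the most frequently observed successor of a seen token."""
        return follows[token].most_common(1)[0][0]

    print(predict_next(3))  # 7: a "learned" statistical correlation, no more

Nothing in that procedure resembles how a child acquires language, which is
precisely the point.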

But in direct answer to your question, the best thing I’ve read recently on
this topic of “AI strangeness” is this wonderful paper on Artificial
Wisdom, which includes the ever-worth-reading Melanie Mitchell among its
co-authors:
http://arxiv.org/abs/2411.02478

Cheers,

Michael Falk
Senior Lecturer in Digital Studies
University of Melbourne

Sent from my mobile phone. Please excuse thumbsy clumbs.


On Sat, 15 Mar 2025 at 18:45, Humanist <humanist@dhhumanist.org> wrote:

>
>               Humanist Discussion Group, Vol. 38, No. 403.
>         Department of Digital Humanities, University of Cologne
>                       Hosted by DH-Cologne
>                        www.dhhumanist.org
>                 Submit to: humanist@dhhumanist.org
>
>
>
>
>         Date: 2025-03-15 07:27:25+00:00
>         From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>         Subject: developments of ChatGPT, DeepSeek &al
>
> Paul Taylor, professor of health informatics at University College
> London, has written a worthy article on "AI Wars" in the latest issue of
> the London Review of Books (47.5, 20 March). For those who have access,
> it is available at:
> <https://www.lrb.co.uk/the-paper/v47/n05/paul-taylor/ai-wars>.
>
> Taylor's observations on the strangeness of behaviour from these LLM
> systems are what caught my eye in particular and lead me to ask if any
> here know of intelligent work on the deviation of AI from the human
> mode of intelligence.
>
> Comments welcome, as always.
>
> Yours,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Humanist
> www.mccarty.org.uk



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php