              Humanist Discussion Group, Vol. 37, No. 67.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-06-02 08:57:34+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 37.52: studies of algorithmic prejudice?

Dear Willard,

I'm slow in responding to your request for recommendations.
I was delayed by teaching a PhD course on models and
modelling for researchers, in which we, of course, talked
about LLMs (Large Language Models). In particular, I asked:
if LLMs are models, what are they models of? It's strangely
difficult to get a good answer to this question. Most, no,
almost everybody, I've asked are mistaken, I would say.

But, to your question about the downsides and dangers of
algorithms.

Like Robin Burke, I would say

    Cathy O’Neil, 2016.  Weapons of Math Destruction, Crown.

is still a good place to go.

Here's a useful interview to be going on with.

    Interview:
     Weapons of Math Destruction: Cathy O'Neil adds up the
     damage of algorithms
    By Mona Chalabi; The Guardian, 27 October, 2016
    <https://www.theguardian.com/books/2016/oct/27/cathy-oneil-weapons-of-math-destruction-algorithms-big-data>

Something else I like, though you'll need to dig in to find
what you need, is

    Gathering Strength, Gathering Storms: The One Hundred Year
    Study on Artificial Intelligence (AI100) 2021 Study Panel
    Report, published September 2021
    <https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-study>

    In this, go to
     Standing Questions and Responses
     <https://ai100.stanford.edu/2021-report/standing-questions-and-responses>

Though the following are not academic studies, I would say
some important and worthwhile relevant analysis can also be
got from

    Interview:
     ‘There was all sorts of toxic behaviour’: Timnit Gebru on her
      sacking by Google, AI’s dangers and big tech’s biases
    By John Harris, The Guardian, 22 May, 2023
    <https://www.theguardian.com/lifeandstyle/2023/may/22/there-was-all-sorts-of-toxic-behaviour-timnit-gebru-on-her-sacking-by-google-ais-dangers-and-big-techs-biases>

and from

    OpenAI’s Altman and other AI giants back warning of
    advanced AI as ‘extinction’ risk
    By Natasha Lomas, TechCrunch+, 30 May, 2023
    <https://techcrunch.com/2023/05/30/ai-extiction-risk-statement/>

Best regards,

Tim



> On 28 May 2023, at 07:43, Humanist <humanist@dhhumanist.org> wrote:
>
>        Date: 2023-05-28 05:39:21+00:00
>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>        Subject: algorithmic prejudice
>
> A recommendation or two, if you would: for a reliable study of
> preferences built into algorithms, with emphasis on those we regard as
> socially problematic, even dangerous, unjust, wrong.
>
> Many thanks.
>
> Yours,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php