Humanist Discussion Group

Humanist Archives: June 24, 2024, 8:55 a.m. Humanist 38.51 - 'the sky is falling'

				
              Humanist Discussion Group, Vol. 38, No. 51.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-06-23 14:06:05+00:00
        From: Tanner Durant <kekpenyo@syr.edu>
        Subject: Re: [Humanist] 38.20: 'the sky is falling'

Hi everyone, I'm responding to this thread from late May, which has been on my
mind through June as I've worked on other projects.

(Response part 1) "Don't say 'the AI'."

The strongest takeaway from that thread for me was one scholar's request that
people not say "the AI" when referring to artificial intelligence software, but
instead use another term. I've been exploring some job options outside of
academia, including several AI-training "factory" work contexts where
participants engage in a method called Reinforcement Learning from Human
Feedback (RLHF) – basically a fancy name for a human comparing two different
AI-generated responses to the same prompt, evaluating and explaining which of
the two responses is better and why, and saving and submitting that feedback
for both automated review and human review.
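To make that workflow concrete, here is a minimal sketch in Python of what one
such comparison record might look like once written down; the field names and
JSONL serialization are my own illustrative assumptions, not any company's
actual schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PreferenceRecord:
    """One human comparison of two model responses to the same prompt
    (hypothetical structure, for illustration only)."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str   # "a" or "b" -- which response the human judged better
    rationale: str   # the written explanation of why

def to_jsonl(records):
    """Serialize records one-per-line, as such feedback is often stored
    for downstream training and review."""
    return "\n".join(json.dumps(asdict(r)) for r in records)

record = PreferenceRecord(
    prompt="Summarize the Pygmalion myth in one sentence.",
    response_a="Pygmalion sculpts his ideal woman and Aphrodite brings "
               "the statue to life.",
    response_b="A myth about statues.",
    preferred="a",
    rationale="Response A is specific and complete; B is vague.",
)
print(to_jsonl([record]))
```

The point of the sketch is simply that each unit of RLHF labor reduces to a
small structured judgment: two candidate texts, a binary preference, and a
human-written justification.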

Google, Facebook, OpenAI, and other companies are rapidly farming out work like
this to ramp up AI development as fast as possible. The human participant's
experience is that you're pressured and motivated to write these RLHF
comparison reports as fast as possible, and "the AI" is definitely an easy
go-to buzzword in those situations. In trying to implement the writer's
suggestion, what's worked for me as a first step is to say "the software"
instead of "the AI." That at least shows some effort toward the thread's goal
of giving the humanities a powerful say in what the future of AI, and of our
broader society, looks like. Saying "the software" instead of "the AI" is an
entryway to at least situating AI development in the context of "software
studies."

One of my independent contracting customers, when he paid me on Venmo,
humorously labeled the transaction "aiaiaiaiaiaiaiaiaiaiaiai." I realized he
was commenting on what professional life was like for him: lots of people
saying "AI," "AI," "AI" in conversation as a catchword. How grateful I am for
this newsgroup's thought space and our effort to deepen our experience of life
through suggestions like the one this scholar made.
The scholar's full quote was:

"we face a naïve enthusiasm for AI which expresses itself in most extreme
form when nearly any statistical data elaboration is now described as
made by/with AI.
as that of AI is a human project, we people of DH have the
responsibility to slow down the pace, to cool down the uncritical
enthusiasm, to study which are the effective uses where AI can enhance
our capabilities, to support and promote a secular vision of AI ("let us
examine the pros and cons of AI systems one by one and see what to do"),
and not a religious one ("our salvation is in AI")
the first step is to stop using expressions like "the AI", and starting
to speak of "AI systems", "AI software", and so on
(and uncritical enthusiasm seems to be more among humanists than among
computer scientists...)"

(Response part 2) What librarianship can contribute to the humanities'
moderation/balancing of STEM perspectives on AI

I've also had a second quote from that thread saved in my drafts since May 31,
and so I want to respond to it. The quote was:

"But would that not imply, that the Humanities do not simply comment on
the developments of the technologies, but get involved in their formation?
To be "hackers to hack for the good" in that context seems to me to ask
for a technical engagement, which is not way, but ways beyond the
mainstream in the Humanities' engagement with technology currently
occurring."

I have a cool perspective on this theme. Although I'm at the master's level, I
joined this newsgroup in 2020 and set the goal of developing both humanities-
informed and STEM-informed perspectives on AI. I learned how to code in 2020
while in a library and information science master's program, but upon getting
my first data job that year, my boss encouraged me to get a data science
master's after finishing the LIS degree. So I did: I worked on the LIS degree
roughly 2020 to 2022 and on the data science degree roughly 2022 to 2024, and
just graduated a few weeks ago.

There are different professional affects and persona norms for librarians
versus data scientists, and overall I see data science as more cutthroat,
especially for the kinds of open-minded people who tend to become librarians.
I've spent a lot of time learning how to recast myself into certain aspects of
a "tech bro" or "finance bro" mindset so as to legitimate my claim to skills
authenticity in cutthroat data science job interviews and workplaces. Erin
Cech, of the University of Michigan, describes how difficulty in legitimating
one's tech skill set is one of the systemic inequalities that subaltern
individuals in a tech workplace may face (see "Systemic inequalities for LGBTQ
professionals in STEM").

After all these months of effort to perform as a data scientist and a
competitive Python programmer, I was relieved to do something very
"librarian-y" last night: I went to the library and checked out and read a
children's book on artificial intelligence, "Artificial Intelligence and You"
by Corona Brezina (2020).

Children's literature is a very powerful space in librarian thinking – it
raises epistemological questions about how to summarize information for young
minds, how to decide what information is canonical or standard enough to be
presented to children, and how, and to what extent, to introduce controversial
questions and adult controversy to youth.

I liked some of the book's humanities suggestions: that we should watch The
Jetsons cartoon (I can see it for free on Amazon streaming with a 7-day
Boomerang free trial), and that we should read Richard Powers's 1995 novel
Galatea 2.2. As the children's book summarizes, the narrator of Galatea 2.2
experiences a modern reimagining of the Pygmalion myth, in which "the sculptor
Pygmalion carves a marble statue of the perfect woman [...] ends up falling in
love with his creation" and is blessed to see Aphrodite, goddess of love, take
pity on him and bring his statue, Galatea, to life. In the modern 1995
reimagining, "the narrator is charged with training an AI neural network
program named Helen to produce literary criticism that is difficult to tell
apart from the work of a human."

Having been born in 1991, I personally don't know much about what life was like
in 1995, and it was interesting and surprising to see a 1995 novel produce a
robust narrative model of AI that closely resembles the core thrust of today's
AI industry, like the reinforcement learning from human feedback (RLHF) jobs I
described above. The children's book overall does a great job presenting AI as
something that has been known about and discussed for a long time, since the
1950s and '60s, rather than as a new thing that began in this century. This
approach is similar to a recent article I read, "The unbearable oldness of
Generative AI," which tries to situate new AI developments in the broader
context of computing history.

Tanner

________________________________
From: Humanist <humanist@dhhumanist.org>
Sent: Thursday, May 30, 2024, 10:41 PM
To: Tanner Durant <kekpenyo@syr.edu>
Subject: [Humanist] 38.20: 'the sky is falling'


              Humanist Discussion Group, Vol. 38, No. 20.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
           Subject: Re: [Humanist] 38.18: 'the sky is falling' (71)

    [2]    From: maurizio lana <maurizio.lana@uniupo.it>
           Subject: Re: [Humanist] 38.18: 'the sky is falling' (46)

    [3]    From: Manfred Thaller <manfred.thaller@uni-koeln.de>
           Subject: Re: [Humanist] 38.18: 'the sky is falling' (62)


--[1]------------------------------------------------------------------------
        Date: 2024-05-30 18:17:54+00:00
        From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
        Subject: Re: [Humanist] 38.18: 'the sky is falling'

Dear Willard,

AI is computation and computation is essentially counting.  That basic fact
about computational “intelligence” is fundamental.

Any living or nonliving entity at any scale can be modelled computationally.
Self-organization, whether autopoietic or sympoietic, is still a computational
process.

So: what is it that “doesn’t count”?  Or: what can’t be accounted for?
Deviation?  Not hardly.  Loss? Not that either.

Nothing can’t be accounted for.

Lucretius accounted for the atoms with the swerve.  But his account couldn’t
account for the plague.

Best,
Jerry

From: Humanist <humanist@dhhumanist.org>
Date: Thursday, May 30, 2024 at 3:04 AM
To: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
Subject: [Humanist] 38.18: 'the sky is falling'

              Humanist Discussion Group, Vol. 38, No. 18.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-05-30 06:59:28+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: the continuing problem of 'impact'

I refer to that event in Oxford, 'The Impact of Generative AI on the Digital
Humanities: Disruption in Research and Education", reported just an hour
or so earlier.

No one, I suppose, could argue against the wisdom of taking
precautionary measures in light of signs indicating the explosion of a
nearby volcano, say, or the arrival of a tsunami. But this is not what
we're facing with AI. Accepting the rhetoric of 'impact' renders those
who witlessly accept it passive victims. Of course it's prudent to stay
aware of what the tech giants and their fellow travellers are up to, but
this is not the same thing as assuming its inevitability, as if it were
a force of nature rather than a very human project. Do we not have a
responsibility, as hackers to hack for the good, as scholars to keep a
clear head and write and lecture accordingly, asking Lenin's
question--"What is to be done?"--and coming up with persuasive arguments?

Comments?

Yours,
WM
--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews;  Humanist
www.mccarty.org.uk

--[2]------------------------------------------------------------------------
        Date: 2024-05-30 11:09:57+00:00
        From: maurizio lana <maurizio.lana@uniupo.it>
        Subject: Re: [Humanist] 38.18: 'the sky is falling'

On 30/05/24 09:03, Humanist wrote:

> I refer to that event in Oxford, 'The Impact of Generative AI on the Digital
> Humanities: Disruption in Research and Education", reported just an hour
> or so earlier.
>
> No one, I suppose, could argue against the wisdom of taking
> precautionary measures in light of signs indicating the explosion of a
> nearby volcano, say, or the arrival of a tsunami. But this is not what
> we're facing with AI. Accepting the rhetoric of 'impact' renders those
> who witlessly accept it passive victims. Of course it's prudent to stay
> aware of what the tech giants and their fellow travellers are up to, but
> this is not the same thing as assuming its inevitability, as if it were
> a force of nature rather than a very human project. Do we not have a
> responsibility, as hackers to hack for the good, as scholars to keep a
> clear head and write and lecture accordingly, asking Lenin's
> question--"What is to be done?"--and coming up with persuasive arguments?

i completely support your position.
we face a naïve enthusiasm for AI which expresses itself in most extreme
form when nearly any statistical data elaboration is now described as
made by/with AI.
as that of AI is a human project, we people of DH have the
responsibility to slow down the pace, to cool down the uncritical
enthusiasm, to study which are the effective uses where AI can enhance
our capabilities, to support and promote a secular vision of AI ("let us
examine the pros and cons of AI systems one by one and see what to do"),
and not a religious one ("our salvation is in AI")
the first step is to stop using expressions like "the AI", and starting
to speak of "AI systems", "AI software", and so on
(and uncritical enthusiasm seems to be more among humanists than among
computer scientists...)
Maurizio

------------------------------------------------------------------------

one of the things I really believed in is the idea of simplicity,
that life should always be moving toward more simplicity
rather than more complexity
yvon chouinard

------------------------------------------------------------------------
Maurizio Lana
Università del Piemonte Orientale
Dipartimento di Studi Umanistici
Piazza Roma 36 - 13100 Vercelli

--[3]------------------------------------------------------------------------
        Date: 2024-05-30 07:36:39+00:00
        From: Manfred Thaller <manfred.thaller@uni-koeln.de>
        Subject: Re: [Humanist] 38.18: 'the sky is falling'

Dear Willard,

I could not agree more:
> Do we not have a
> responsibility, as hackers to hack for the good, as scholars to keep a
> clear head and write and lecture accordingly, asking Lenin's
> question--"What is to be done?"--and coming up with persuasive arguments?

But would that not imply, that the Humanities do not simply comment on
the developments of the technologies, but get involved in their formation?
To be "hackers to hack for the good" in that context seems to me to ask
for a technical engagement, which is not way, but ways beyond the
mainstream in the Humanities' engagement with technology currently
occurring.

Kind regards,
Manfred


--
Prof.em.Dr. Manfred Thaller
formerly University at Cologne /
zuletzt Universität zu Köln



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php