              Humanist Discussion Group, Vol. 35, No. 116.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Alasdair Ekpenyong <kekpenyo@syr.edu>
           Subject: Re: [Humanist] 35.114: an oppositional artificial intelligence (31)

    [2]    From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
           Subject: Re: [Humanist] 35.114: an oppositional artificial intelligence (43)


--[1]------------------------------------------------------------------------
        Date: 2021-06-29 13:30:11+00:00
        From: Alasdair Ekpenyong <kekpenyo@syr.edu>
        Subject: Re: [Humanist] 35.114: an oppositional artificial intelligence

Even at my novice level, I can envision writing some "if/elif" code in Python
where, for three rounds, the AI responds to the user's suggestion with a random
alternative suggestion before finally ending the skirmish. You ask whether
arguments should be understood stochastically or as the means of persuasion. I
suppose there is value in simple argument for its own sake, as when the computer
asks "are you sure?" when you attempt to delete something, but obviously we
would want computer science to reach the point where AI can approximate more
complex forms of thought.
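
A minimal sketch of what I have in mind, where the canned counter-suggestions,
the helper name, and the three-round limit are all purely illustrative
assumptions rather than any existing system:

import random

# Purely illustrative pool of counter-suggestions; a real oppositional AI
# would generate these rather than hard-code them.
ALTERNATIVES = [
    "Have you considered the opposite approach?",
    "What evidence would change your mind?",
    "A simpler framing might fit the same facts.",
]

def oppose(user_suggestion, rounds=3):
    """For a fixed number of rounds, answer the user's suggestion with a
    randomly chosen alternative, then concede and end the skirmish."""
    for round_number in range(1, rounds + 1):
        counter = random.choice(ALTERNATIVES)
        print(f"Round {round_number}: you propose '{user_suggestion}'. {counter}")
    print("Very well. Are you sure? If so, let us proceed.")

oppose("Delete the file.")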

There’s a field of study called multimodal interaction that involves teaching
computers to recognize not only typed input but also signals from the other
senses, such as body language. For example, one can teach the computer to
discern confusion from someone’s eye movements (if a Tobii eye tracker or a
similar device is installed on the computer), or to identify humor and the
punchline in someone’s spoken words. Multimodal interaction could probably play
a role in constructing the kind of oppositional AI we are dreaming of here.
There’s an International Conference on Multimodal Interaction that one should
be aware of.
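
To make the eye-movement example concrete, here is a toy heuristic sketched
over made-up fixation data; it assumes nothing about any real eye-tracker API,
only that a device like the Tobii can report where the eyes rested and for how
long:

from dataclasses import dataclass

@dataclass
class Fixation:
    """One gaze fixation: screen position and dwell time in milliseconds.
    In practice this would come from the tracker's software; here it is synthetic."""
    x: float
    y: float
    duration_ms: float

def looks_confused(fixations, long_fixation_ms=800.0, threshold=0.4):
    """Crude heuristic: flag possible confusion when a large share of fixations
    are unusually long, i.e. the reader is dwelling rather than scanning."""
    if not fixations:
        return False
    long_share = sum(f.duration_ms > long_fixation_ms for f in fixations) / len(fixations)
    return long_share >= threshold

# Made-up data: three long dwells out of five fixations.
sample = [Fixation(0.2, 0.3, 950), Fixation(0.4, 0.3, 1200), Fixation(0.5, 0.6, 300),
          Fixation(0.5, 0.7, 900), Fixation(0.6, 0.7, 250)]
print(looks_confused(sample))  # True under these illustrative thresholds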

Cheers,
Alasdair

Sent from my iPhone

> On 29 June 2021, at 02:40, Humanist <humanist@dhhumanist.org> wrote:
>
> How do you
> program something to be oppositional without being purposefully
> contradictory, for contradiction's sake? I suppose machine learning
> techniques could construct arguments from patterns, but should arguments be
> understood stochastically or as the means of persuasion?

--[2]------------------------------------------------------------------------
        Date: 2021-06-29 11:27:56+00:00
        From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
        Subject: Re: [Humanist] 35.114: an oppositional artificial intelligence

Willard’s posting prompts me to these thoughts about HAL and what happened to
the mission in 2001.

Here is what seems to me an “improbable” reading of the film, at least one that
I’ve not seen before.

Briefly, while HAL was programmed to save the mission from human agency, the
program had an improbable outcome – as if the death in the opening sequence were
replayed and “reversed”.  But the effect/affect is not to “argue” the triumph of
“human agency” over AI.  2001 is, from first to last, the story of a massive
Faustian undertaking.  It is notable, indeed crucial, that all of the “people” in the film are
portrayed as having barely any affect except in relation to the machines they
have invented and inherited.  Roger Ebert once argued that HAL is the most human
of the agents in the film when in fact HAL is only the agent whose behavior
appears to be the most human – an appearance that emerges fully only when his
“higher cognitive functions” have been disabled.  HAL is programmed to simulate
human agency.  And he’s programmed by human beings who conceive and execute
“human agency” in an AI frame of reference.

HAL doesn’t malfunction, nor does he “act humanly”.  All of his functions are
perfectly performed.  Bowman survives because he discovers an improbable move
that could not have been and wasn’t programmed for.  But it’s strictly a
tactical move in a strategic network organized for survival, on the one hand, and
mission success, on the other (call all that “the purpose-driven life”).  The
mission continues with the patch Bowman has provided.

The mission is “saved” in a (perhaps grimmer?) sense than has been suggested
by the “dark” readings of the film.  There are no Lucretian swerves in the
actions as executed or represented in the film.  Everything, including both the
(alien) past and the projected future, is conceived under the horizon of
probabilities and of learning (enlightenment) as machinic learning.

So the only “human” response to the film’s immense seductions, both intellectual
and aesthetic, would be the Lucretian swerve of an Everlasting Nay – and it
would have to be “everlasting” because the probabilistic model of existence is
as everlasting as the death drive it sustains.

2001 seems to suggest that the only (humanly?) acceptable form of an Everlasting
Yea . . . is an Everlasting Nay, the ultimate Lucretian swerve.

Jerry



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php