Humanist Archives: June 29, 2021, 9:40 a.m.
Humanist 35.114: an oppositional artificial intelligence

              Humanist Discussion Group, Vol. 35, No. 114.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Mark Wolff <wolff.mark.b@gmail.com>
           Subject: Re: [Humanist] 35.113: an oppositional artificial intelligence (100)

    [2]    From: Dr. Herbert Wender <drwender@aol.com>
           Subject: Re: [Humanist] 35.113: an oppositional artificial intelligence (15)

    [3]    From: Michael Falk <M.G.Falk@kent.ac.uk>
           Subject: Re: [Humanist] 35.113: an oppositional artificial intelligence (38)


--[1]------------------------------------------------------------------------
        Date: 2021-06-28 19:37:19+00:00
        From: Mark Wolff <wolff.mark.b@gmail.com>
        Subject: Re: [Humanist] 35.113: an oppositional artificial intelligence

I think one challenge in developing an oppositional AI is that code is in
part rhetorical. A programmer who writes code is making arguments, and
those arguments must run in order to be valid. An oppositional AI would
somehow seek to throw exceptions and disrupt the execution of the code. How
do you program something to be oppositional without being contradictory
purely for contradiction's sake? I suppose machine learning techniques
could construct arguments from patterns, but should arguments be understood
stochastically or as means of persuasion?
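
To make that concrete, here is a purely illustrative Python sketch (every
name in it is invented) of an "opponent" that derails a chain of reasoning
steps by raising an exception, which the caller treats as a cue to
reconsider rather than as a crash:

class Objection(Exception):
    """Raised by the 'opponent' to derail the current line of reasoning."""

def opponent(step, claim):
    # Stand-in for whatever heuristic or model decides when to object.
    if claim > 100:
        raise Objection(f"{step}: does the premise really scale this far?")

def train_of_thought():
    claim = 1
    for step in ("premise", "inference", "generalisation"):
        claim *= 10
        try:
            opponent(step, claim)
        except Objection as why:
            print(f"derailed at {step}: {why}")
            return
        print(f"{step} passes unchallenged (claim = {claim})")

train_of_thought()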

mw

On Mon, Jun 28, 2021 at 1:08 AM Humanist <humanist@dhhumanist.org> wrote:

>
>         Date: 2021-06-26 11:06:03+00:00
>         From: maurizio lana <maurizio.lana@uniupo.it>
>         Subject: Re: [Humanist] 35.112: an oppositional artificial
> intelligence?
>
> Hi Willard
>
> what you are searching for ("an artificial entity that would respond
> to an articulated train of thought by derailing it in
> such a way as possibly to be helpful, provocative, enlightening")
> implies, I think, that the artificial intelligence would have to be a
> true intelligence (whatever that means), and one of a high order: how
> frequently is a true human intelligence really capable of derailing a
> train of thought in enlightening ways?
>
> Meanwhile, what many are questioning is whether "artificial
> intelligence" is an intelligence at all.
>
> best
> Maurizio
>
>
>
> On 26/06/21 at 11:37, Humanist wrote:
>
>
>
> Date: 2021-06-26 09:29:25+00:00
> From: Willard McCarty <willard.mccarty@mccarty.org.uk>
> Subject: a (crazy?) idea
>
> I've been looking for developments towards what I am calling an
> "oppositional artificial intelligence", that is, an artificial entity
> that would respond to an articulated train of thought by derailing it in
> such a way as possibly to be helpful, provocative, enlightening. I would
> think this best done by striking a balance between the intrinsically
> machinic and the recognisably human. Its objective would NOT be to
> imitate but to differ -- profoundly, perhaps, but intelligibly. It would
> (I am guessing) be somewhat like DeepMind's AlphaGo Zero
> (generating plays no human player has ever considered making)
> but beyond the constraints of the gameboard.
>
> The closest approximations I have been able to find in current work are
> on 'generative adversarial nets', negotiation mechanisms, adversarial
> AI -- and IBM's Project Debater. Observations on the embodied nature
> of conversation notwithstanding, it seems to me that talking with a
> distinctly otherwise-embodied machine would be part of the attraction.
>
> Comments? Suggestions?
>
> Yours,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist
> www.mccarty.org.uk
>
>
> Giulio Regeni, Mohammed Mahmoud Street, Cairo
>
> https://alwafd.news/images/thumbs/752/new/027f918bb62bf148193d5920ca67ded7.jpg
> https://www.bbc.com/news/world-middle-east-20395260
>
> Maurizio Lana
> Dipartimento di Studi Umanistici
> Università del Piemonte Orientale
> piazza Roma 36 - 13100 Vercelli
> tel. +39 347 7370925


--[2]------------------------------------------------------------------------
        Date: 2021-06-28 16:35:03+00:00
        From: Dr. Herbert Wender <drwender@aol.com>
        Subject: Re: [Humanist] 35.113: an oppositional artificial intelligence

Maurizio,

you asked: "how frequently is a true human intelligence really capable of
derailing a train of thought in enlightening ways?"

Maybe the suggestion carried by the metaphor - derailing a train - is too strong.
But in principle I would deny the presupposition behind your question, because
there is an honourable field of research which assumes the fruitfulness of exactly
the game Willard would like to play with a machine:

https://en.wikipedia.org/wiki/Dialogical_logic
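
For readers who want a feel for the move structure, here is a drastically
simplified Python sketch of the proponent/opponent exchange for a single
implication, using the "formal rule" that the Proponent may assert an atom
only if the Opponent has already conceded it. It shows the shape of a
dialogical game, not a faithful implementation of the full particle and
structural rules:

def play_implication(antecedent, consequent, opponent_concessions=()):
    """Proponent asserts 'antecedent -> consequent'. Opponent attacks by
    granting the antecedent; Proponent must then defend the consequent,
    and may assert an atom only if the Opponent has conceded it."""
    print(f"P: I assert {antecedent} -> {consequent}")
    print(f"O: attack -- I grant {antecedent}; now defend {consequent}")
    conceded = set(opponent_concessions) | {antecedent}
    if consequent in conceded:
        print(f"P: {consequent} -- you have conceded it. Proponent wins.")
        return True
    print(f"P: I cannot ground {consequent} in your concessions. Opponent wins.")
    return False

play_implication("a", "a")   # valid: a -> a, Proponent wins
play_implication("a", "b")   # invalid: a -> b, Opponent wins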

Kind regards,
Herbert


--[3]------------------------------------------------------------------------
        Date: 2021-06-28 09:22:43+00:00
        From: Michael Falk <M.G.Falk@kent.ac.uk>
        Subject: Re: [Humanist] 35.113: an oppositional artificial intelligence

Hi Willard,

This is a fascinating area of research. The concept of “mixed-initiative
interaction” has been a popular theme in HCI research for some decades. I
believe Eric Horvitz is considered to have written the seminal early papers.

You mention adversarial learning in your initial post (e.g. generative
adversarial networks). There is also the similar field of reinforcement
learning. These fields certainly involve “oppositional” AI, but in both cases
an AI is either opposed to another AI (in the “adversarial” case) or, through
self-play, opposed to itself (in the “reinforcement” case). For example, a
generative adversarial
network is a system of two AIs: one is trained to generate images, the other is
trained to distinguish images generated by a computer from images generated by a
person. To create the generative adversarial network, you set the two systems
against each other, and eventually (you hope!) the image-generating system
learns to fool the image-checking system into thinking that the computer-
generated images are human-generated. The reason this works so well is that the
image-checking system gets better and better at distinguishing real from fake
images as training proceeds, and so pushes the image-generating system to
produce increasingly convincing images.
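
To make the two-sided training concrete, here is a minimal sketch in
Python/PyTorch on toy data (the dimensions, architectures and hyperparameters
are all invented for illustration). Real GANs are far larger, but the
opposition between the two losses is exactly the structure described above:

import torch
from torch import nn

data_dim, noise_dim = 8, 4
generator = nn.Sequential(
    nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0    # stand-in for "human-made" images
    fake = generator(torch.randn(64, noise_dim))    # computer-generated images

    # Discriminator (the "image-checking system"): learn to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator (the "image-generating system"): learn to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()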

In self-play reinforcement learning the system is opposed to itself. The most
famous recent examples are DeepMind’s AlphaGo and AlphaStar systems, which learned to
play Go and Starcraft II at a high level by playing millions of games against
themselves.
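
A correspondingly minimal sketch of self-play, using tabular Q-learning on a
toy take-away game rather than anything on the scale of AlphaGo or AlphaStar
(every parameter here is invented for illustration; only the principle of
improving by playing against oneself carries over):

import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones_left, action)] -> estimated value
alpha, epsilon = 0.1, 0.2

def choose(stones):
    moves = [a for a in (1, 2, 3) if a <= stones]
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

for episode in range(20000):
    stones, history = 10, []    # both "players" use the same Q-table
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # Whoever made the last move took the last stone and wins (+1); the other loses (-1).
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += alpha * (reward - Q[(s, a)])

# With enough self-play episodes the greedy policy tends toward the known
# winning strategy for this game (leave your opponent a multiple of four stones).
print({s: max([a for a in (1, 2, 3) if a <= s], key=lambda a: Q[(s, a)])
       for s in range(1, 11)})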

In a sense, I think all AI is oppositional in the way you propose. As soon as we
can get a computer to learn to perform a certain task or make a certain
prediction, it completely alters our sense of what the problem is that the
computer has solved. Computers never do things the same way humans do, and
invariably surprise us when they manage to replicate some behaviour that
hitherto only our own minds could produce.

Cheers,

Michael Falk
Lecturer in Eighteenth Century Literature | University of Kent
Adjunct Fellow in Digital Humanities | Western Sydney University


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php