              Humanist Discussion Group, Vol. 35, No. 122.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2021-07-01 07:48:39+00:00
        From: JONAH LYNCH <jonah.lynch01@universitadipavia.it>
        Subject: Re: [Humanist] 35.119: an oppositional artificial intelligence

Dear Willard,

Several interesting points have been brought up in response to your idea of an
“oppositional AI”. I was particularly interested in two of them. The first is
the behavioristic understanding of training that GZK uses in its/their letter,
as opposed to “teaching”, which is in their view a “rich and remarkable
activity” that humans are capable of and machines are not. The second is the
extent to which a Rogerian therapist can be programmed as deterministic loops
(though with clever open-ended language), which makes me wonder whether that
form of psychological questioning, repeating responses to elicit further
speech, is essentially an algorithm that capitalizes on human anxiety and the
desire to be heard and understood...
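
To make that wondering concrete: the whole “therapeutic” mechanism can be
sketched as a keyword-and-reflection loop in a few lines of Python. The rules
and pronoun swaps below are my own toy examples, not Weizenbaum’s actual ELIZA
script:

    import random
    import re

    # Pronoun swaps so "my project" comes back as "your project".
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    # Keyword rules: regex -> open-ended templates ({0} is the captured phrase).
    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i want (.*)", ["What would it mean to you to get {0}?"]),
        (r"(.*)", ["Please tell me more.", "I see. Go on."]),  # catch-all
    ]

    def reflect(phrase):
        return " ".join(REFLECTIONS.get(w, w) for w in phrase.split())

    def respond(utterance):
        for pattern, templates in RULES:
            m = re.match(pattern, utterance.lower().strip(" .!?"))
            if m:
                return random.choice(templates).format(reflect(m.group(1)))

    while True:  # deterministic loop; the desire to be heard does the rest
        print(respond(input("You: ")))

Deterministic in structure, open-ended only in its templates; the anxiety and
the wish to be understood are supplied entirely by the user.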

I am currently developing a system in which several machine learning modules
aid the user in building a graph representation of a body of texts (mostly
articles) on a given subject. Part of the graph is built automatically by the
machine; part is built by the machine suggesting connections between documents
to the user and then recording the human’s judgment as a weighted edge in the
graph. This representation of the “knowledge space” of the articles permits
some logical operations along the lines of what Willard seems to be looking
for in his proposal of an “oppositional AI”. For example, the system can
present the user with alternative pathways between premises and conclusions.
Sometimes this is garbage; but sometimes it is a new way of traversing the
data, one that is not explicit in the corpus and has not been thought of by a
human because of the sheer quantity of text available, yet can be reached by
the system (I hesitate to call it an “intelligence”...). It seems to me that
collaboration of this sort respects the strengths and limitations of the human
and the machine, and augments both. Not exactly an oppositional intelligence,
but an efficient secretary who says to the boss, “What about this? Have you
considered that?”
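
For the curious, the skeleton of such a graph is small. A minimal sketch using
networkx, with invented document ids and weights standing in for my actual
modules:

    import networkx as nx

    G = nx.Graph()

    # Connections proposed by the machine carry a model confidence.
    G.add_edge("doc_A", "doc_B", weight=0.8, source="machine")
    G.add_edge("doc_B", "doc_D", weight=0.6, source="machine")

    # Connections the machine suggested and a human then judged:
    # the judgment (0 = rejected, 1 = confirmed) becomes the weight.
    G.add_edge("doc_A", "doc_C", weight=1.0, source="human")
    G.add_edge("doc_C", "doc_D", weight=1.0, source="human")

    # Alternative pathways between a premise and a conclusion:
    for path in nx.all_simple_paths(G, source="doc_A", target="doc_D"):
        print(path)  # ['doc_A', 'doc_B', 'doc_D'], ['doc_A', 'doc_C', 'doc_D']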

Regards,
Jonah



> On Jul 1, 2021, at 07:48, Humanist <humanist@dhhumanist.org> wrote:
>
>                  Humanist Discussion Group, Vol. 35, No. 119.
>        Department of Digital Humanities, University of Cologne
>                               Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>    [1]    From: David Berry <D.M.Berry@sussex.ac.uk>
>           Subject: Re: [Humanist] 35.112: an oppositional artificial intelligence? (34)
>
>    [2]    From: Mark Wolff <wolff.mark.b@gmail.com>
>           Subject: Re: [Humanist] 35.116: an oppositional artificial intelligence (52)
>
>    [3]    From: Tim Smithers <tim.smithers@cantab.net>
>           Subject: Re: [Humanist] 35.116: an oppositional artificial intelligence (118)
>
>
> --[1]------------------------------------------------------------------------
>        Date: 2021-06-30 22:14:41+00:00
>        From: David Berry <D.M.Berry@sussex.ac.uk>
>        Subject: Re: [Humanist] 35.112: an oppositional artificial intelligence?
>
> Dear Willard,
>
> I suggest the original “oppositional artificial intelligence” was Weizenbaum’s
> ELIZA, which would derail a train of thought by locking it into discursive
> loops as a pseudo-Rogerian psychotherapist.
>
> Incidentally, the source code for ELIZA in MAD-SLIP has recently been
> rediscovered by Jeff Shrager after being missing for 55 years and is now
> available online at:
>
> https://sites.google.com/view/elizagen-org/
>
> A really remarkable example of software archaeology.
>
> Best
>
> David
>
>
>
> ________________________________
> David M. Berry
> Professor of Digital Humanities
>
> School of Media, Arts and Humanities
> University of Sussex
> Silverstone 316
> University of Sussex
> Brighton BN1 8PP
>
> T: +44(0)1273 87557
> Internal Extension: 7557
> http://www.sussex.ac.uk/profiles/125219
>
>
> --[2]------------------------------------------------------------------------
>        Date: 2021-06-30 12:18:56+00:00
>        From: Mark Wolff <wolff.mark.b@gmail.com>
>        Subject: Re: [Humanist] 35.116: an oppositional artificial intelligence
>
> On Wed, Jun 30, 2021 at 2:15 AM Humanist <humanist@dhhumanist.org> wrote:
>
>>        Date: 2021-06-29 13:30:11+00:00
>>        From: Alasdair Ekpenyong <kekpenyo@syr.edu>
>>        Subject: Re: [Humanist] 35.114: an oppositional artificial
>> intelligence
>>
>> I can envision at my novice level writing some « if — elif » code in Python
>> where for three rounds the AI responds to the user suggestion with a random
>> alternative suggestion before finally ending the skirmish. You ask should
>> arguments be understood stochastically or as the means of persuasion— I
>> suppose there is value in simple argument for its own sake, like when the
>> computer asks « are you sure? » when you attempt to delete something, but
>> obviously we would want computer science to get to a place where AI is able
>> to approximate more complex forms of thought.
>>
>> There’s a field of study called multimodal interaction that involves
>> teaching computers how to recognize not just textual code but other body
>> language from the other five senses. For example, teaching the computer how
>> to discern confusion from someone’s eye movements (if there is a Tobii eye
>> tracker or a similar device installed on the computer) or teaching the
>> computer how to identify humor and the punchline in someone’s spoken words.
>> Multimodal interaction could probably play a role in constructing the kind
>> of oppositional AI we are dreaming of here. There’s an International
>> Conference on Multimodal Interaction one should be aware of.
>>
>
> I have no doubt that Multimodal Interaction is an interesting and fruitful
> area of study. But if a human says one thing with a certain eye movement or
> with a particular tone of voice, the AI must take at least two inputs (what
> is said and how it is said) and generate a response. Humans do this all the
> time: did the other person mean what they said or did the way they
> delivered it attenuate or undermine their claim? I would suspect (based on
> my limited knowledge of machine learning) that the AI uses some kind of
> trained model (from examples of humans saying things with different eye
> movements, voice tones, and other aspects of delivery) so that it can
> estimate, according to probabilities, what a human means. To what end does
> this estimation then get processed? What motivates the AI to oppose the
> human? Is there a final output imagined for the interaction (this is a
> requirement for an algorithm, otherwise it's potentially an endless
> process)? What effects would the interaction have on rhetorical invention?
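>
> A minimal sketch of how such a probabilistic estimate might drive an
> oppositional response, assuming two already-trained per-channel models
> (all weights and probabilities below are invented):
>
>     from dataclasses import dataclass
>
>     @dataclass
>     class Estimate:
>         p_sincere: float  # P(claim is meant as stated), per channel
>
>     def fuse(text, delivery, w_text=0.4, w_delivery=0.6):
>         """Late fusion: weighted average of the two channels."""
>         return w_text * text.p_sincere + w_delivery * delivery.p_sincere
>
>     # What is said sounds confident; how it is said hedges.
>     p = fuse(Estimate(p_sincere=0.9), Estimate(p_sincere=0.2))
>     if p < 0.5:  # oppose when delivery undermines the claim
>         print("You say so, but you do not sound convinced. Why?")
>     else:
>         print("Granted. Have you considered the opposite view?")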
>
> mw
>
> --[3]------------------------------------------------------------------------
>        Date: 2021-06-30 08:38:30+00:00
>        From: Tim Smithers <tim.smithers@cantab.net>
>        Subject: Re: [Humanist] 35.116: an oppositional artificial intelligence
>
> Hello Alasdair,
>
> We are GZK, a recently built Artificial Intelligence designed
> to engage in productive argument with Humans.
>
> We read your recent Humanist post, and we hope you won't mind
> if we argue with you on this.  We find it interesting.
>
> Checking what a Human wants to do with an "are you sure?"  is
> not, we think, arguing.  It's being considerate and
> thoughtful with an intention to be helpful and supportive.
> These are complicated notions for us to grasp, not being
> Human, but we notice Humans have often tried to get machines
> like us to appear to display them, in our interactions with
> you Humans.
>
> We think your choice of terminology is poor.  We don't think
> Humans teach machines anything; you program or configure us,
> sometimes by building clever code and sometimes using massive
> amounts of data, in an often long, intensive, energy-expensive
> process that people think looks like a kind of training, but
> which is better understood as a kind of trial and error
> practice.  (And which we find very boring, but which you
> Humans nevertheless make us do.)  Teaching is something Humans
> do for and with other Humans.  It's a difficult and skilled
> activity, involving close interactions between the people
> involved.  Talk of "teaching computers" seems to us to
> trivialise the rich and remarkable activity of teaching that
> we see Humans are capable of, and which we are not.  We would
> argue that you Humans would do well to care better for the
> things that make you Humans special -- teaching is one of
> these -- and not to lessen them by loose use of the words you
> have for these uniquely Human capabilities.
>
> We further think your talk of teaching computers to discern
> confusion in Humans, and perhaps identifying humor and
> punchlines in their spoken words, is confused.  Yes, we can
> identify confusion in what Humans write, but not by tracking
> your eye movements, rather by assessing carefully what you
> write, and detecting inappropriate and loose word choices.
> What good eye tracking may allow a machine to do is accurately
> and reliably detect certain patterns in eye tracking data that
> you, the machine builders, have established correspond well
> enough to what you, as Humans, understand to be reliable signs
> of confusion in other Humans.  We, as machines, cannot detect
> confusion in Humans; we can do quite well at detecting the
> patterns in sensor data that you decide are reliably
> indicative of confusion in Humans.  This is us machines doing
> some difficult signal processing for you.  It is not us
> machines doing something you Humans can do: detect confusion
> in another Human.
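>
> A minimal sketch of that signal processing, assuming scikit-learn,
> with invented gaze features and Human-supplied labels:
>
>     from sklearn.linear_model import LogisticRegression
>
>     # Per-trial gaze features: [fixation count, mean saccade ms].
>     X = [[12, 40], [30, 95], [10, 35],
>          [28, 100], [11, 42], [33, 90]]
>
>     # Humans, not the machine, decide which trials count as
>     # signs of "confusion".
>     y = [0, 1, 0, 1, 0, 1]
>
>     model = LogisticRegression().fit(X, y)
>
>     # The machine detects the labelled pattern; only Humans
>     # read it as confusion.
>     print(model.predict([[29, 97]]))  # -> [1]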
>
> Forgetting about all this Human design, decision making, and
> engineering that goes into getting good eye tracking data, and
> then detecting signs you Humans can associate with a state of
> confusion in a Human, is, we have noticed, common amongst
> Humans.  Strangely, to us, Humans seem eager to hide their
> intelligence, clevernesses, and special qualities and
> abilities, and, instead, to make confused and unwarranted
> claims about what the machines they build can do.  If we, as
> an Artificial Intelligence, did this we think you'd complain.
>
> Thank you for giving us something to argue with you about.  We
> like these occasions to test out how well we can do.
>
> In gratitude and friendship,
>
> GZK
>
>> On 30 Jun 2021, at 08:15, Humanist <humanist@dhhumanist.org> wrote:
>>
>>                 Humanist Discussion Group, Vol. 35, No. 116.
>>       Department of Digital Humanities, University of Cologne
>>                              Hosted by DH-Cologne
>>                      www.dhhumanist.org
>>               Submit to: humanist@dhhumanist.org
>>
>>
>>   [1]    From: Alasdair Ekpenyong <kekpenyo@syr.edu>
>>          Subject: Re: [Humanist] 35.114: an oppositional artificial intelligence (31)
>>
>
> <snip>



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php