Humanist Discussion Group

Humanist Archives: June 22, 2021, 6:01 a.m. Humanist 35.101 - AI-Human co-authorship?

				
              Humanist Discussion Group, Vol. 35, No. 101.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2021-06-21 10:04:41+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 35.80: AI-Human co-authorship?

Dear Jonathan,

What follows is not supposed to be, in any way, against your
idea of investigating Human and AI collaboration.  However,
some things about the way you describe this worry me a bit.

You say

    "As AI is improving rapidly, the urgency of these
     questions intensifies.  ..."

Is AI improving rapidly?  Yes, human built AI systems are
being shown to do more things than before, including beating
humans at games most people find hard to play well, and writing
texts that many humans might struggle to write.  But these AI
systems are all built using basically the same computational
Machine Learning techniques.  I see this not as improved AI,
but as an indication that we humans are getting better at
building AI systems to do more of the same kind of things.  Do
any of these more things our AI systems can do now constitute
better Artificial Intelligence, as opposed to just more AI? Is
more intelligence in humans (or other animals) marked by being
able to do more of the same kinds of things?  I don't think
so.  More AI is not the same as improving AI, I think.

What has, I think, got better in recent years is the cost --
it's gone down lots -- and the practicality -- it's got easier
-- of doing enormous amounts of computation, together with the
cost and practicality of assembling the enormous amounts of
data needed to train artificial systems.  The AI we hear about
so much today
depends upon these real improvements, but is, itself, little
different from the neurally inspired Machine Learning
techniques we had before, but without the amounts of
computation and data needed to make these techniques do
useful things.

More.  I would say what we have today is not artificial
intelligence, it is kinds of useful (and sometimes abused)
Machine Learning.  For me, and I think for you in your
project, intelligence requires a system to have a capacity to
explain what it does, why, and how, in ways others can then
understand the rationale of the [intelligent] actions or
activities the system engages in.  No effective explanation,
no intelligence, just lots of cleverness, perhaps, is how I
see it.

In your phrase "a slavish tool" you point to something
important, I think.  Tools are never slaves; not good tools;
not tools well used.

A tool is something that, in some way, enhances or extends
some human capacity, thus making possible, or easier, some
purposeful human action or activity.  Tools are, I think,
therefore better understood as means, not as substitutes.
Tools can become [kinds of] slaves to our actions and
activities, but this is abuse, I would say, not good tool use.
If you use a spelling checker system to do all your spell
checking, for example, rather than use it to help you be sure
you have all the words you chose to use spelt correctly,
according to British English, say, it would be fair to say you
are using the spelling checker slavishly.  But this is a
broken way to use a powerful tool well.

But we don't collaborate with tools, we take up tools as an
integral means to doing some purposeful action or activity.
Collaboration requires, I would say, an autonomous other.
Today's AI may be getting more and more clever at doing
certain things, but it is not yet on the road to becoming
autonomous, not in the way we use this word, and concept, when
we talk about human autonomy, and autonomy in other living
things.  Being autonomous (self law making) is orthogonal to
being automatic (self acting).  You don't become autonomous by
becoming more and more automatic, despite what many people in
AI, robotics, and now the car building, ship building, flying
drone building, and lethal weapon building industries, like to
think, and often loudly claim.

So, wouldn't studying how different people collaborate in
creative and intellectual pursuits be a way to investigate what
an AI would need to have, and need to be able to do, to be a
similarly useful collaborative partner?  This would show, I
think, how far off we really are from having AIs we could
usefully collaborate with, rather than use, or abuse, as
tools.

Best regards,

Tim



> On 11 Jun 2021, at 06:48, Humanist <humanist@dhhumanist.org> wrote:
>
>                  Humanist Discussion Group, Vol. 35, No. 80.
>        Department of Digital Humanities, University of Cologne
>                               Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>
>
>        Date: 2021-06-10 19:14:42+00:00
>        From: Jonathan Cohn <cohn@ualberta.ca>
>        Subject: AI-Human Collaboration Grant
>
> I am revising a grant proposal on the ethics of Human-AI collaborative
> writing and artistic experimentation and am looking for potential
> collaborators and new research on the topic.  Below is a brief
> description of the project, please feel free to contact me off-listserv
> if you’d like to collaborate (or just chat).  Thank you!
>
>
> The Master’s Tools: AI-Human co-authorship and collaborative research in
> the Humanities.
>
> What do we give up and what do we gain by imagining Artificial
> Intelligence as an equal partner in our creative and intellectual
> pursuits? How can we revise feminist antiracist methods of collaboration
> to, as Jason Edward Lewis et al. encourage, make kin with machines
> (2018)? As AI is improving rapidly, the urgency of these questions
> intensifies.  Our project will focus on how to artistically and
> equitably collaborate with AI in a way that does not simply treat it as
> a slavish tool, but instead imagines it as a unique subjectivity with
> its own situated knowledge. What does real mutual collaboration look
> like with a technology whose status as a subject is contentious and who
> can hardly be said to benefit from the work it contributes? Is this even
> possible?
>
>
>
> Jonathan Cohn
> Director, Digital Humanities
> Assistant Professor, English and Film Studies
> University of Alberta
> ᐊᒥᐢᑿᒌᐚᐢᑲᐦᐃᑲᐣ (Amiskwacîwâskahikan), Treaty 6/Métis Territory
>
> New Book Coming Soon: Very Special Episodes: Televising Industrial and
> Social Change
> (https://www.rutgersuniversitypress.org/bucknell/very-special-
> episodes/9781978821156)
> Newish Book: The Burden of Choice: Recommendations, Subversion, and
> Algorithmic Culture
> (https://www.rutgersuniversitypress.org/the-burden-of-choice/9780813597812)



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php