              Humanist Discussion Group, Vol. 35, No. 340.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2021-11-02 07:24:53+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 35.332: psychoanalysis of a digital unconscious &c.

Dear Maurizio,

Again, you present some interesting observations.

Suppose we take AI to be the Science that seeks to investigate
intelligent behaviour by trying to create and study it in the
artificial (as I do), as opposed to investigating intelligent
behaviour in the natural, as do Psychology and Cognitive
Science, for example.  [Not all AI researchers would agree
with this way of characterising AI research.]

Then, AI researchers have a need, and an obligation, to
identify and specify effective and practical criteria they can
use to assess and judge the intelligence of any behaviour they
artificially create and study.

With this in mind, we may ask, was Turing, in describing his
"Imitation Game," identifying such criteria?  Or was he more
concerned with explaining to his readers a way to imagine how
we might test some artificially created intelligent behaviour?
Turing, as we know, suggested that the operational criterion
in his Imitation Game should be how convinced the human who
takes part in the game is that the other party is another
human or not, on the basis of their conversation.  This, in
my view, is not an adequately well identified and specified
practical criterion to use to investigate the behaviour of the
described conversation system.  It is not strong enough, nor
reliable and robust enough.  Mere imitation is not a
sufficient criterion, no matter how convincing it is.  [To be
clear, Turing did not ask that all humans should be convinced,
only the person taking part in his 'test'.]
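
To make plain just how thin this criterion is, operationally,
here is a minimal sketch of the Game in Python.  [The names
and interfaces here are mine, purely for illustration; Turing,
of course, specified nothing of the kind.]

   # A minimal sketch of the Imitation Game's operational
   # criterion.  'judge', 'machine', 'ask', 'respond', and
   # 'verdict' are illustrative assumptions, not anything
   # specified in Turing (1950).

   def imitation_game(judge, machine, rounds=10):
       """Return True iff the judge takes the machine for a human."""
       transcript = []
       for _ in range(rounds):
           question = judge.ask(transcript)
           # The judge sees only text; identities stay hidden.
           answer = machine.respond(question)
           transcript.append((question, answer))
       # The whole measurement: one person's binary verdict on
       # one conversation.  Nothing else is recorded.
       return judge.verdict(transcript) == "human"

Notice that the only datum this procedure yields is a single,
subjective, binary verdict.  That, in short, is why I say the
criterion is not strong, reliable, or robust enough.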

So, as insightful as Turing was, his Imitation Game paper did
not, in my view, present the basis for a realistic and viable
AI research programme.  I rather think Turing was more
concerned with interesting and motivating readers, and not so
much, with seriously suggesting how some good AI research
might be done.  Turing (1950) is a motivational paper, not a
scientific one, I would say.

We may ask the same of the (1955) Dartmouth Summer Research
Project proposal, which, as you quoted, says

   "For the present purpose the artificial intelligence
    problem is taken to be that of making a machine behave in
    ways that would be called intelligent if a human were so
    behaving."

Once again, I believe, this is not, and was not intended to
be, the identification and specification of the criteria to be
used in a serious AI research programme.  Rather it was, as
the context in which it was presented would suggest, an
attempt to explain to other people what kind of work this
group of people proposed to undertake in the summer of 1956.
Again, I would characterise it as a motivational statement,
not a scientific one.

In my experience, identifying and specifying suitable, strong
enough, yet practical, criteria to use in some AI research is
a Dark Art, seldom spoken of, yet, necessarily, always
practiced, albeit tacitly, and all too often, I suspect, in
ignorance, by the researchers involved.

It often works like this, from what I have seen.  I have
worked in a subfield of AI called AI in Design, which, mostly,
has been concerned with understanding how to provide
intelligent support to designers doing some particular kind of
designing.  All to often, what designing means in this kind of
work, is what the researchers say designing is, if they say
anything at all about designing.  These researchers, who are
mostly not designers of any kind, typically make no attempt to
offer either theoretically derived or empirically informed
criteria based upon real examples of the kind of designing the
AI researchers claim to be trying to understand how to
support.  [It's mostly, "we all know what designing is, so
there's no need to elaborate on this here."]  My own
concern for how to identify and specify suitable and adequate
criteria for our own AI in Design research led me on a
years-long attempt to develop a Knowledge Level theory of
designing, an effort that remains unfinished today.

Any criteria used for assessing and judging artificially
created intelligent behaviour must, I think, pass what I call
the "Flower/Light Test."  The term 'artificial' is used in two
common ways in English, illustrated by the phrases "artificial
flowers" and "artificial light."  The former points to things
that look convincingly like flowers, but which are not real
flowers.  The latter points to real light, but light made by
artificial means.  The difference is crucial, as I am sure is
obvious.

Being indistinguishable is not, and cannot be, sufficient.
Thinking it is sufficient only leads to fake AI, the kind we
have a lot of today, in my view.

Best regards,

Tim



> On 29 Oct 2021, at 10:28, Humanist <humanist@dhhumanist.org> wrote:
>
>        Date: 2021-10-28 07:27:19+00:00
>        From: maurizio lana <maurizio.lana@uniupo.it>
>        Subject: Re: [Humanist] 35.327: psychoanalysis of a digital unconscious &c.
>
> the matter of imitation is relevant.
>
> i would like to recall that imitation has played a central role in
> AI from the very beginning of the field. (i doubt that it is
> really still so for the developers today. the focus on
> imitation could be marketing of AI, to make it suitable for
> narration, and acceptable to people: "look, it is not alien,
> it is similar to me!" "look, it writes like dylan thomas but
> also like jk rowling! wonderful!")
>
> three historical passages.
>
> in 1950 Alan Turing, in his famous article (Turing, Alan Mathison.
> 1950. "Computing Machinery and Intelligence". Mind LIX
> (236): 433–60. https://doi.org/10.1093/mind/LIX.236.433),
> titled his first section "the imitation game" and gave the
> definition of the "Turing test": the intelligence of the
> machine shows when its written answers to a human interrogator
> are indistinguishable from those of a human.
>
> in 1955 in the Dartmouth project on AI (McCarthy, John, Marvin L.
> Minsky, Nathaniel Rochester, and Claude Elwood Shannon. 1955. "A
> proposal for the Dartmouth summer research project on Artificial
> Intelligence".
> http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html)
> the objective of AI was described with these words: "For the present
> purpose the artificial intelligence problem is taken to be that of
> making a machine behave in ways that would be called intelligent if
> a human were so behaving."
>
> in 2019, about this description of AI in the Dartmouth project (and
> about the Turing test), Luciano Floridi says (Floridi, Luciano, and
> Josh Cowls. 2019. "A Unified Framework of Five Principles for AI in
> Society". Harvard Data Science Review 1 (1).
> https://doi.org/10.1162/99608f92.8cd550d1):
>
>    "The latter scenario is a fallacy, and smacks
>     of superstition. Just because a dishwasher cleans the dishes as
>     well as (or even better than) I do does not mean that it cleans
>     them like I do, or needs any intelligence to achieve its task. The
>     same counterfactual understanding of AI underpins the Turing test
>     (Floridi, Taddeo, & Turilli, 2009), which, in this case,
>     checks the ability of a machine to perform a task in such a way
>     that the outcome would be indistinguishable from the outcome of a
>     human agent working to achieve the same task (Turing, 1950)."
>
> the fact that an AI system has the syntax doesn't mean that
> it has the semantics, or that its semantics, if any, is
> similar to ours. and this is well described by your example of the
> chess play. but.
>
> but the lack of semantics is appealing in view of a terse, dry,
> techno society where compassion is absent. where no one gets in
> touch with blood, sweat, smell of fatigue.
>
> best
> Maurizio
>
>
>
> On 28/10/21 09:02, Tim Smithers wrote:
>
>
> So why, I keep wondering, do we think that systems built using
> so-called Deep Learning techniques, with massive amounts of
> data, that imitate, often convincingly, some things people can
> do, are replications of what people do?
>
> Did Deep Blue (II) play chess or just imitate chess playing?
> Did it just look like it played chess?  I'm serious.  Garry
> Kasparov had to play chess to engage with Deep Blue in the
> intended way, for sure.  Deep Blue moved its chess pieces in
> legal ways, and in ways that made it hard, and sometimes
> impossible, for Kasparov to win the chess game.  Did Deep Blue
> know it had won, in the way Kasparov knew he had won, when he
> did?  Deep Blue could detect the legal end of a game, sure,
> and which colour had won, sure, but this is not winning like it
> was for Kasparov.
>
> mural of Giulio Regeni in Mohammed Mahmoud Street, Cairo
>
> the source is
> https://alwafd.news/images/thumbs/752/new/027f918bb62bf148193d5920ca67ded7.jpg
> the meaning of the place
> https://www.bbc.com/news/world-middle-east-20395260
>
> Maurizio Lana
> Dipartimento di Studi Umanistici
> Università del Piemonte Orientale
> piazza Roma 36 - 13100 Vercelli
> tel. +39 347 7370925


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php