Humanist Discussion Group

Humanist Archives: March 3, 2021, 7:28 a.m. Humanist 34.252 - 'intelligent' vs 'smart'

                  Humanist Discussion Group, Vol. 34, No. 252.
        Department of Digital Humanities, University of Cologne
                          Hosted by DH-Cologne
                           www.dhhumanist.org
                 Submit to: humanist@dhhumanist.org




        Date: 2021-03-02 19:41:22+00:00
        From: Dr. Herbert Wender 
        Subject: Re: [Humanist] 34.249: comments on prediction without explanation in ML

As a foreigner, I sense some correspondence between the headline of the CFP and
my understanding of the difference between 'smart' and 'intelligent'
machines: 'intelligent' means that in failing cases it is hard to explain...

Nice to see you all back at (New) Humanist !
Herbert

-----Original Message-----
From: Humanist 
To: drwender@aol.com
Sent: Tue, 2 Mar 2021 9:41
Subject: [Humanist] 34.249: comments on prediction without explanation in ML

                  Humanist Discussion Group, Vol. 34, No. 249.




        Date: 2021-03-01 08:27:19+00:00
        From: maurizio lana 
        Subject: Re: [Humanist] 34.246: prediction without explanation in ML

hi willard,

the CFP we read left me dissatisfied: i think we need explanations
that allow us to trust the results not only where "in medicine,
climate science, or particle physics, an explanation may be desired",
but in every field of science where ML is adopted, that is, in every
field of science, because ML is adopted not only in medicine, climate
science, or particle physics.
otherwise we end up with STEM as high-level science, where
explanations are required, and [everything else: SSH] as low-level
science, where you can do without explanations, where supposedly
"this may not create any interesting philosophical challenges". once
again, in practice, we face an unresolved "two cultures" question,
with the irony that, to qualify the interest of the problems posed by
ML to the STEM field, one cannot but have recourse to philosophy
("philosophical challenges"), which is eminently not STEM!
best
maurizio

>    On 01/03/21 08:01, Humanist wrote:
>
>
>    This apparent lack of explanation is often also linked to the opacity of
>    ML techniques, sometimes referred to as the ‘Black Box Challenge’.
>    Methods such as heat maps or adversarial examples are aimed at reducing
>    this opacity and opening the black box. But at present, it remains an
>    open question how and what exactly these methods explain and what the
>    nature of these explanations is.
>
>    While in some areas of science this may not create any interesting
>    philosophical challenges, in many fields, such as medicine, climate
>    science, or particle physics, an explanation may be desired; among other
>    things for the sake of rendering subsequent decisions and policy making
>    transparent.
>
>    Maurizio Lana
>    Dipartimento di Studi Umanistici
>    Università del Piemonte Orientale
>    piazza Roma 36 - 13100 Vercelli
>    tel. +39 347 7370925



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php