              Humanist Discussion Group, Vol. 35, No. 314.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Henry Schaffer <hes@ncsu.edu>
           Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious? (84)

    [2]    From: Henry Schaffer <hes@ncsu.edu>
           Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious? (15)

    [3]    From: Dr. Hartmut Krech <hkrech@gmx.de>
           Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious? (17)


--[1]------------------------------------------------------------------------
        Date: 2021-10-21 14:32:28+00:00
        From: Henry Schaffer <hes@ncsu.edu>
        Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious?

Willard,

In the AI/computing arena such biases are well known. They can easily arise
in a number of ways, and a commonly encountered source is the training data
set. We've read about biases arising with regard to skin color, but more
easily understood biases arise from such things as the backgrounds of
pictures - here's a well-illustrated discussion:
https://kde.mitre.org/blog/2018/10/28/is-this-a-wolf-understanding-bias-in-machine-learning/

In the old days, AI used to mean algorithms written by programmers (e.g.
https://en.wikipedia.org/wiki/ELIZA ), so everything was explicitly written
down and could be investigated if one cared to dig into the code. Today, much
of AI is done by CNNs (Convolutional Neural Networks), which are provided
with a "training set" (very commonly many thousands of photos), and millions
of parameters are estimated to produce an "algorithm" which can distinguish
between the different categories of the training set. A lot of work has gone
into making this process run well and be more effective - and sometimes the
computation needed is impressive (e.g. a $250,000 computer cluster running
nonstop for a month, or even more). But then digging into the millions of
parameters behind the algorithm becomes an extremely difficult task. Tools
have been developed to make this (barely) possible.
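To make the contrast concrete, here is a toy sketch in Python (using PyTorch;
the rules are not ELIZA's actual script, and the network is far smaller than
any real image classifier) of the two styles: a rule-based responder whose
every decision can be read in the source, next to a small convolutional
network whose behaviour lives in hundreds of thousands of estimated
parameters that nobody wrote by hand.

import re
import torch.nn as nn

# Old-style AI: every decision is written down and can be read in the source.
# (Illustrative rules only - not ELIZA's actual script.)
RULES = [
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def rule_based_reply(text):
    for pattern, response in RULES:
        match = pattern.search(text)
        if match:
            return response.format(*match.groups())
    return "Please go on."

# Modern-style AI: the "algorithm" is a set of estimated parameters.
# A deliberately tiny CNN for 224x224 RGB photos and two categories.
class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 56 * 56, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

print(rule_based_reply("I am worried about my mother"))
print(sum(p.numel() for p in TinyCNN().parameters()))
# Even this toy network has roughly 420,000 parameters; production models have
# millions, and inspecting them one by one tells you very little about why a
# photo was put into one category rather than another.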
I suspect that this latter aspect is what you label the digital
'unconscious'. (I personally don't like that label, as it's too
anthropomorphic.) Finding out the current progress in that area can be done,
but I'm not up with the latest. It would be great if anyone is current and
can describe what's going on in that area - and, incidentally, elaborate on
what I've written above and correct me if necessary.

--henry

On Thu, Oct 21, 2021 at 2:11 AM Humanist <humanist@dhhumanist.org> wrote:

> Humanist Discussion Group, Vol. 35, No. 312.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
>
> Date: 2021-10-20 06:19:16+00:00
> From: Willard McCarty <willard.mccarty@mccarty.org.uk>
> Subject: a digital 'unconscious'?
>
> We know that a computing system, hardware and operating system software,
> is many-layered, from the hardware circuitry, firmware and the many
> abstraction layers up to the user interface.
>
> For purposes of argument, let's call what the user sees and can know
> from a running machine its 'consciousness', i.e. that of which we can be
> consciously aware. Let's also call everything that the user cannot know
> directly the machine's 'unconscious'. In the former, we can easily spot
> design choices, perhaps construable as prejudices, e.g. in favour of
> right-handed people, or those who demand bright colours and active
> movement in the interface. In the latter, let us say in the role of a
> systems psychoanalyst, I assume we can find unhealthy quirks, a.k.a.
> prejudices.
>
> Here is my question. In principle how deep, down through the abstraction
> layers, can there be such quirks? Prejudice-hunting is these days in
> full swing, so I expect this question may have been considered at
> length. But critically speaking, under what conditions, at how deep a
> level can choices recognisable as cultural biases be found?
>
> Comments?
>
> Yours,
> WM
>
>
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews; Humanist
> www.mccarty.org.uk


--[2]------------------------------------------------------------------------
        Date: 2021-10-21 17:58:54+00:00
        From: Henry Schaffer <hes@ncsu.edu>
        Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious?

A bit more, since I just got an email on this:

"GPT-3, or the third generation Generative Pre-trained Transformer, is a
neural network machine learning model trained using internet data to generate
any type of text."

"GPT-3's deep learning neural network is a model with *over 175 billion
machine learning parameters*." [emphasis added]

So my statement above about millions of parameters was quite an
understatement ...

--henry

P.S. The above is from
https://searchenterpriseai.techtarget.com/definition/GPT-3


--[3]------------------------------------------------------------------------
        Date: 2021-10-21 07:25:00+00:00
        From: Dr. Hartmut Krech <hkrech@gmx.de>
        Subject: Re: [Humanist] 35.312: psychoanalysis of a digital unconscious?

Willard,

That's a most interesting question. As a convenient entry point for such
deliberations, I try to imagine what the life of a human group might be like
if a certain trait, idea, decision, judgment, contrivance, or machine were
absent from the life of that group. As the object or purpose of a 'machine',
I take Franz Reuleaux's definition that mechanical machines are concerned
with "(restricted) movement(s) within limits" (Reuleaux, Theoretische
Kinematik, Braunschweig 1875, 39).

Formulating our question like this will open up an almost unlimited number of
alternatives in addition to the constructive possibilities contained within
the limitations of such automatisms. It will also reveal the inherent,
hidden, or 'unconscious' preconditions of any cultural trait or element. Such
reasoning may be considered part of cultural analytics.

Best regards,
Hartmut


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php