Humanist Discussion Group, Vol. 35, No. 146.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org


    [1]    From: Jez Cope <j.cope@erambler.co.uk>
           Subject: Re: [Humanist] 35.136: informed sci-fi on AI? (54)

    [2]    From: Bill Benzon <wlbenzon@gmail.com>
           Subject: NEW SAVANNA: Let’s think of GPT-3’s prose output as a form of bullshit, where “bullshit” is a term of philosophical art. (13)


--[1]------------------------------------------------------------------------
        Date: 2021-07-16 16:13:20+00:00
        From: Jez Cope <j.cope@erambler.co.uk>
        Subject: Re: [Humanist] 35.136: informed sci-fi on AI?

The thing that immediately occurred to me in response to your question
was Isaac Asimov's Foundation series, which follows the fall of an
interstellar empire as predicted by the science of "psychohistory",
which effectively uses the vast analytical power of AI to "solve" social
science on a galactic scale. The story revolves around the attempts by
the eponymous Foundation to minimise the harm and suffering caused by
the empire's collapse, also aided by the predictions of psychohistory.

Since this is Asimov, it also ties in several other of his common
themes, such as the Three Laws of Robotics (and the Zeroth Law for large
AIs, which "...may not harm humanity, or, by inaction, allow humanity to
come to harm."), which presages the ethical questions around AI that we
grapple with today.

All the best,

Jez, long-time listener; first-time caller etc. etc. ;)

--
Jez Cope | he/him | j.cope@erambler.co.uk
erambler.co.uk | gh | tw | ma

The most dangerous phrase in the language is, "We've always done it this
way" — Rear Admiral Grace Hopper (att.)

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, July 13th, 2021 at 8:20 AM, Humanist <humanist@dhhumanist.org> wrote:

> Humanist Discussion Group, Vol. 35, No. 136.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
>
>         Date: 2021-07-13 06:59:22+00:00
>         From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>         Subject: a particular kind of science fiction
>
> I'm looking for a few examples of a particular kind of science fiction
> or imaginative speculation that contributes substantially, in an
> informed way, to our thinking on an artificial intelligence of value to
> research in the human sciences. I'd be very grateful for any suggestions.
>
> Yours,
>
> WM
> -----
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews; Humanist
> www.mccarty.org.uk


--[2]------------------------------------------------------------------------
        Date: 2021-07-16 10:56:13+00:00
        From: Bill Benzon <wlbenzon@gmail.com>
        Subject: NEW SAVANNA: Let’s think of GPT-3’s prose output as a form of bullshit, where “bullshit” is a term of philosophical art.

Here’s a post I wrote a week ago that, I think, clarifies what engines
like GPT-3 are able to do. Note that I do not mean “bullshit” as a term
of opprobrium. I mean it as a term of philosophical art, where, in terms
advanced by Harry G. Frankfurt in a well-known 2005 essay (which I’ve
never read), it designates a kind of language concocted without regard
to truth. Bullshit must be coherent, and sound plausible, but its truth
is irrelevant to the speaker’s purpose.

https://new-savanna.blogspot.com/2021/07/lets-think-of-gpt-3s-prose-output-as.html

Bill Benzon
wlbenzon@gmail.com


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php