Humanist Discussion Group

Humanist Archives: Feb. 20, 2022, 7:08 a.m. Humanist 35.543 - GPT-3 to poetry & helpful agents

              Humanist Discussion Group, Vol. 35, No. 543.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Tim Smithers <tim.smithers@cantab.net>
           Subject: Re: [Humanist] 35.529: GPT-3 and generated poetry (55)

    [2]    From: Robert A Amsler <robert.amsler@utexas.edu>
           Subject: [Humanist] 35.539: from GPT-3 to helpful agents (34)


--[1]------------------------------------------------------------------------
        Date: 2022-02-19 18:23:59+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 35.529: GPT-3 and generated poetry

Hello

With thanks, and apologies, to Mark Wolff, who suggested

 "Maybe GPT-3 is referring to the method prescribed for
  writing a Dadaist poem
  (https://www.writing.upenn.edu/~afilreis/88v/tzara.html)
  ..."

in [Humanist] 35.529: GPT-3 and generated poetry, 2022.02.15.


How to make a Dadaist text (method of GPT-3)

 Take all the pages you can find on the Web

 Take a pair of scissors

 Cut out all the parts of pages in English from all the
  webpages and cut out all the words from these parts

 Make a list of word-EnglishPart pairs using the words you
  cut out of each part, for all the parts

 Repeatedly push this list at a BIG, sorry!, ENORMOUS,
  multi-layer neural network (so called) [but also misleadingly
  called a Deep Learning System, or, oxymoronically, a Machine
  Learning system] until this stops changing its parameters
  each time you push your list at it

 Type to your now programmed-with-data system some words of
  your own about what you want your text to be about, sort of

 Copy conscientiously the output [ie copy-and-paste it] to
  where you want to keep this (and use to show off to others)

 The text will be like you

 And here you are a writer (I didn't say author!), infinitely
  original and endowed with a sensibility that is charming
  beyond the wit of many, it seems


Question: How many tries would it take before GPT-3 would come
up with these instructions when started with the input "How to
make a Dadaist text (method of GPT-3)"?
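Beneath the joke, the recipe above is a cut-up caricature of the usual
language-model pipeline: gather text, chop it into word pairs, "train"
until nothing changes, then prompt and copy out. A minimal sketch of
that caricature (a toy bigram chain, nothing like GPT-3's actual code;
the pages and all names are invented for illustration):

```python
import random
from collections import defaultdict

# A stand-in "Web": all the (toy) English pages you can find.
pages = [
    "the words will be like you and the text will be like you",
    "take a pair of scissors and cut out all the words",
    "the text will resemble the bag it came from",
]

# "Cut out all the words" and keep (word, next-word) pairs -- the
# caricature of training: counting which word follows which. One pass
# suffices; the counts stop changing, so "training" has converged.
follows = defaultdict(list)
for page in pages:
    words = page.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Type some words of your own, then copy the output."""
    rng = random.Random(seed)
    word = prompt.split()[-1]
    out = [word]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The text will, of course, be like the bag it came from.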


Sorry!  All this just fell out of my fingers as I read the
recent Humanist posts on GPT-3, and I couldn't stop them
pushing the keys ...  not even the "Send" button -:(

Back to listening to the Jan Garbarek I have on.

Tim

--[2]------------------------------------------------------------------------
        Date: 2022-02-19 12:17:32+00:00
        From: Robert A Amsler <robert.amsler@utexas.edu>
        Subject: [Humanist] 35.539: from GPT-3 to helpful agents

Something does occur to me.

An anecdote comes to mind about what an electrical repair technician
is taught about fixing a problem someone is having with a home
appliance that has stopped working. The very first question to ask,
after hearing the customer's typically long and detailed description
of their situation, is "Is it plugged in?"

The merit of that question becomes clear when one realizes that the
best advice for someone having a problem is to run through a standard
set of questions of the most obvious kind, covering the simplest
preliminary steps of the task. The reason is that no entity, human or
A.I., other than the writers themselves can provide the guidance
necessary; the best approach is a standard set of the simplest
questions, asking writers to explain what they have done so far,
which leads them to recognize where they stepped forward prematurely
without completing an acceptable prior step. Thus the A.I. component
is actually just there to add natural-language fluency in asking a
very standard sequence of questions about what the author is trying
to do (your "child's questions"). The computer can't provide the
answers; it can only be persistent in seeing that authors have asked
and answered the necessary precursor questions to their own
satisfaction, leading them to the "aha" moment of realizing what they
likely knew they had forgotten to resolve before moving on to the
next step in their work.

There is an analogy to writing programming code. When your program
fails to work, other than through obvious syntactic errors the
compiler or interpreter can find, the fault is usually in the last
lines of code you wrote. Back up and look at that code again, and
rewrite it in a different way if need be.
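That "suspect the newest code first" heuristic can be put in miniature
(the function and its bug are hypothetical, invented purely to
illustrate the point):

```python
# Suppose average() worked until its final line was added; under the
# heuristic, that last-written line is the first suspect.
def average(values):
    total = 0
    for v in values:
        total += v
    # The newest line -- and, true to form, the buggy one:
    # it divides by len(values) - 1 instead of len(values).
    return total / (len(values) - 1)

# Backing up and rewriting that code in a different way:
def average_fixed(values):
    return sum(values) / len(values)
```

Here average([2, 4, 6]) wrongly gives 6.0, while the rewritten
average_fixed([2, 4, 6]) gives 4.0.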

Robert A. Amsler, retired computational lexicologist


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php