Humanist Discussion Group

Humanist Archives: Feb. 17, 2022, 5:46 a.m. Humanist 35.535 - Man a Machine . . . and AI

              Humanist Discussion Group, Vol. 35, No. 535.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne

        Date: 2022-02-16 17:54:27+00:00
        From: Mcgann, Jerome (jjm2f) <>
        Subject: Re: [Humanist] 35.531: Man a Machine . . . and AI

Dear Manfred and Øyvind,

I'm not sure that what I am about to write will be addressing the issues as you
two would like to have them addressed.  But this is what your comments have led
me to think.

My essay you quote from, Øyvind, was written/published in 2003/2004 when Johanna
Drucker, Beth Nowviskie, and a group of graduate students and I were deeply
involved in designing and testing/playing The IVANHOE Game (Manuel Portela's
Book of Disquiet project is an interesting scholarly spinoff from that effort we
were making).  I mention this for two reasons: first, by 2004 I was completely
fed up with TEI and relational databasing for humanities materials and saw that
The Rossetti Archive, begun with such starry-eyed hopes in 1993, was a DH "dead
end" (that is Willard's description of the equally dismal revelation that came
out of his equally inspiring Onomasticon Project from the '80s); second, the
essay was "in search of a method" and while I remain committed to certain of its
views, I now regard the "dimensional/dementianal" ontology as wrongheaded because
it is ultimately (technically) as static as TEI/XML and relational databasing.  That
model for online editing, now alas fairly institutionalized, still weighs like a
nightmare on the brain of DH scholars, though the dawn is red with the promise
of graph approaches.  Random and Fractal models are my current chief interest.
(And it's embarrassing to realize now that there was then available, if one knew
where to look, all kinds of CS scholarship that was already sketching the
virtues of such approaches.)

The simple truth, I believe, is that you can't mark up natural language forms or
organize them in a relational database when -- to quote Susan Hockey's decisive
comment -- "there is no obvious unit of (natural) language".  And if you resort
to standoff markup/annotation, how do you integrate those moves into the
computational field?  You have fundamentally dissevered the codependent dynamic
of all messaging.
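To make that worry concrete, here is a minimal, purely illustrative sketch of the standoff pattern (the text, offsets, and tags are all invented for the example, not drawn from any project's actual format):

```python
# Standoff markup: the base text is stored untouched, and the annotations
# live in a separate structure that points into it by character offsets.
text = "The past is never dead; it's not even past."

# Each annotation claims a span of the text by (start, end) offsets.
standoff = [
    {"start": 0, "end": 8, "tag": "phrase"},     # "The past"
    {"start": 24, "end": 28, "tag": "emphasis"}, # "it's"
]

def extract(text, ann):
    """Resolve an annotation back to the span it claims to mark."""
    return text[ann["start"]:ann["end"]]

print([extract(text, a) for a in standoff])  # ['The past', "it's"]

# The fragility: a one-word change to the base text silently invalidates
# every downstream offset, because nothing binds annotation to content.
revised = text.replace("The", "That")
print([extract(revised, a) for a in standoff])  # ['That pas', " it'"]
```

The annotations are computationally out-of-band: the machine can store them, but nothing in the data model integrates them with the text they interpret, which is the disseverance described above.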

But the interpretive moves have to be integrated in humanist documents because
their "content" is always a (changing, dynamic) result of
a particular reader's/user's/interpreter's act of measurement.  No document,
however ancient or "dead", is "information" (in Shannon's sense).  Faulkner's
famous remark is applicable: "the past is never dead; it's not even past".  Or
it "is" information only if we decide to treat it as such.

The other general point I want to make is about natural language: that it is
what AI calls an open-ended algorithmic process.  More to the point, the
vehicular forms of specific languages represent alternative (sociohistorically
determined) instantiations of such processes.  So if you reduce, say, a specific
English language document to a digital form, you will have decided to
interpret/measure it as information.  But the document ISN'T information,
though it can be usefully treated as such.  (That is as true for "historical"
documents as for poetic/literary ones, though as Manfred points out, the two --
while their flexible open-endednesses always overlap -- have different final
causes.) I add that when I speak of language documents I do not just mean "text"
(words and marks of punctuation);  I mean document in Boeckh's sense when he
laid out his program of Sachphilologie.

The complex documentary materials of the project we are working on -- the
literary, linguistic, and ethnolinguistic works of Jaime de Angulo -- are such
that they have led us to hypothesize that graph databasing offers a way out of
the dead end of the text markup cum relational databasing approach to DH and its
special documentary materials.
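A hedged sketch of what that hypothesis amounts to in data-model terms (this is not the de Angulo project's actual schema; every node, property, and edge type here is invented for illustration): in a property graph, segments, interpretations, and interpreters are all first-class nodes, and new typed edges between them can be added at any time without restructuring anything.

```python
# A minimal property graph: nodes carry arbitrary properties, and
# edges are typed links between any two nodes.
class Graph:
    def __init__(self):
        self.nodes = {}   # node id -> properties
        self.edges = []   # (source id, edge type, target id)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, source, edge_type, target):
        self.edges.append((source, edge_type, target))

    def neighbors(self, node_id, edge_type=None):
        """All targets reachable from node_id, optionally by edge type."""
        return [t for (s, e, t) in self.edges
                if s == node_id and (edge_type is None or e == edge_type)]

g = Graph()
g.add_node("seg1", text="opening lines of a story", kind="segment")
g.add_node("note1", text="a conjectural gloss", kind="annotation")
g.add_node("reader1", name="editor A", kind="agent")

# Interpretive moves become data inside the same computational field,
# rather than out-of-band markup layered over a fixed text.
g.add_edge("note1", "annotates", "seg1")
g.add_edge("reader1", "proposed", "note1")

print(g.neighbors("note1", "annotates"))  # ['seg1']
```

The point of the sketch is the contrast with the relational/markup model: an act of interpretation (the annotation, and who made it) lives in the same structure as the document material, and the schema does not have to be fixed in advance.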

Machinic computation will atomize the documents as purely electronic information
-- time stamps for audio files, pixel coordinates for textual and graphical files
-- and hand over that documentary data to human agents/users to spell out any
set of noninformational differentials that the "meaningless" data has isolated
for attention.


PS. If you like, I will send each of you a brief essay I recently wrote for
Marta Werner's forthcoming special issue of Textual Cultures -- an issue of
short essays that aim to "provoke" new ways to think about DH and online
editing.
