Markup: textual variants (53)
Willard McCarty (MCCARTY@VM.EPAS.UTORONTO.CA)
Tue, 14 Mar 89 20:03:05 EST
Humanist Mailing List, Vol. 2, No. 723. Tuesday, 14 Mar 1989.
Date: Tue, 14 Mar 89 20:52 O
Subject: TEXTUAL VARIANTS
I found the recent discussion concerning the electronic
marking of variants -- it really started with footnotes, but
quickly moved to textual variants -- very interesting.
I think it is important to distinguish between two questions:
Question no. 1: How do we get the mass of textual material --
specifically variants -- already extant in printed books into our
computers, so they can be searched, analyzed, etc.?
Question no. 2: If we had no critical editions of anything,
what would be the most productive manner of computer input?
With regard to question no. 1, there is no choice but to
develop suitable mark-up methods that utilize the lemmatization
already present in the specific edition being input. (Did
CCAT, the TLG, or anyone else do this by OCR, or was it all keyed in?)
With regard to question no. 2, though, straight text input,
combined with software that handles the full texts and not
simply the variants, would seem to be the best option. In
other words, I am suggesting that the format of the critical
edition is a second-best mode, developed of necessity by
editors and printers because there was no other feasible way to
present the reader with the necessary material.
Lemmatization, as all of us who have worked on critical
editions know, is a very time-consuming job; with the comparison
and collation software that would be developed, it would become
largely automatic. The reader of the text would have all
versions immediately available, and could browse or search at
will.
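The kind of comparison and collation software imagined here can be
sketched in a few lines of modern Python, using the standard-library
difflib module. The witness texts below are invented examples, not
readings from any real edition:

```python
from difflib import SequenceMatcher

def collate(witness_a, witness_b):
    """Align two tokenized witnesses; return (agreements, variants)."""
    agreements, variants = [], []
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=witness_a, b=witness_b).get_opcodes():
        if tag == "equal":
            agreements.append(witness_a[i1:i2])
        else:
            # replace / delete / insert all mark a point of variation
            variants.append((witness_a[i1:i2], witness_b[j1:j2]))
    return agreements, variants

# Invented example witnesses:
a = "in the beginning was the word".split()
b = "in the beginning was the deed".split()
agree, vary = collate(a, b)
print(vary)  # [(['word'], ['deed'])]
```

Given full texts of every version, such a routine finds the variation
points itself; no one has to key in a lemmatized apparatus by hand.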
Obviously such programs would need methods of
indicating relationships between words, etc., but I do not think
the word "mark-up" accurately describes these methods.
"Mark-up" denotes separating elements within one text, while here
we are dealing with many texts, with software pointers connecting
them all together at thousands of points of intersection.
(Obviously, by means of suitable programming, such texts could be
converted into a single text with the requisite mark-up.) This,
I think, is very similar to what Charles Faulhaber wrote several
days back, though I would prefer not to use the term hyper-text
in this context.
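The "software pointers" connecting many full texts at their points of
intersection can be pictured as an alignment table rather than mark-up
inside a single base text. A minimal sketch, with invented witness
sigla and readings:

```python
# Each aligned position maps witness siglum -> reading at that point.
# Sigla (A, B, C) and readings are hypothetical, for illustration only.
alignment = [
    {"A": "in",        "B": "in",        "C": "in"},
    {"A": "the",       "B": "the",       "C": "that"},
    {"A": "beginning", "B": "beginning", "C": "beginning"},
]

def readings_at(position):
    """All witnesses' readings at one point of intersection."""
    return alignment[position]

def variant_positions():
    """Positions where the witnesses do not all agree."""
    return [i for i, row in enumerate(alignment)
            if len(set(row.values())) > 1]

print(variant_positions())          # [1]
print(readings_at(1))               # {'A': 'the', 'B': 'the', 'C': 'that'}
```

From such a structure, a conventional lemmatized apparatus (a single
base text with the requisite mark-up) could be generated mechanically,
as the parenthetical remark above suggests.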
Chaim Milikowsky <F12016@BARILAN>