Humanist Discussion Group, Vol. 16, No. 355.
Centre for Computing in the Humanities, King's College London
www.kcl.ac.uk/humanities/cch/humanist/
Submit to: humanist@princeton.edu
Date: Sat, 30 Nov 2002 08:48:42 +0000
From: Willard McCarty <willard.mccarty@kcl.ac.uk>
Subject: significant deviations
In Nelson Goodman's extraordinary book, Languages of Art
(Indianapolis/Cambridge, 1976), is the clearest and most precise argument I
have encountered for the nature of algorithmically constrained thinking --
and, among many other questions, for what makes digital different from analog.
Goodman is concerned to articulate the idea of a notational language to
resolve questions in aesthetics, centrally on the nature of representation.
His prime example of a notational system is the musical score. Goodman's
argument is, as I said, clear, but it is also as difficult as it is
rewarding. So I will not attempt any summary. But I do wish to recommend
the entire book in passing, hoping to solicit comments on it from others
who know the work, while speeding to a particular question that he raises.
In discussing alternate musical notations, Goodman very briefly outlines
John Cage's (pp. 187ff): "dots, for single sounds, are placed within a
rectangle; across the rectangle, at varying angles and perhaps
intersecting, run five straight lines for (severally) frequency, duration,
timbre, amplitude, and succession. The significant factors determining the
sounds indicated by a dot are the perpendicular distances from the dot to
these lines." For the purposes of my question it's not important that you
understand much about Cage's notation, except that in Goodman's terms it is
not notational: since no limit is set on how small the differences in
position can be to distinguish one note from another, no measurement can
ever determine that any mark belongs to one note rather than to another.
Thus no measurement can determine if any given performance complies with a
given diagram.
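Goodman's geometric description can be made concrete with a small sketch. The
particular lines, angles, and parameter values below are my own illustrative
assumptions, not Cage's actual diagram; the point is only that a dot's
perpendicular distances to the five lines determine the sound, and that
arbitrarily small shifts in position always yield a (slightly) different
sound -- there is no threshold below which two dots count as the same note.

```python
import math

# Hypothetical stand-in for Cage's scheme: each of the five lines is given
# in normal form ax + by + c = 0 with a^2 + b^2 = 1, so the perpendicular
# distance from a dot at (x, y) is simply |ax + by + c|.
LINES = {
    "frequency":  (0.0, 1.0, -0.2),
    "duration":   (1.0, 0.0, -0.5),
    "timbre":     (math.cos(0.3), math.sin(0.3), -0.1),
    "amplitude":  (math.cos(1.1), math.sin(1.1), -0.4),
    "succession": (math.cos(2.0), math.sin(2.0), -0.3),
}

def read_dot(x, y):
    """Map a dot's position to the five sound parameters."""
    return {name: abs(a * x + b * y + c)
            for name, (a, b, c) in LINES.items()}

# Two dots an arbitrarily small distance apart still denote distinct
# sounds: the system is "dense", hence non-notational in Goodman's sense.
assert read_dot(0.300000, 0.4) != read_dot(0.300001, 0.4)
```

Because position varies continuously, no finite measurement precision can ever
settle which "character" a given mark belongs to -- which is exactly why
Goodman denies the system notational status.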
Now here's the point I wish to question. Goodman asks, aren't such diagrams
good enough, given precise means of reproduction? The answer is no: however
small the inaccuracy, a chain of successive reproductions can lead to
significant deviation. And one has to ensure not only a limit on
significant deviation but also disjointness of characters -- i.e. you have to
be able in principle *always* to tell two characters apart in the system.
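Goodman's argument about successive reproductions can be illustrated with a
minimal simulation. The error model here -- independent, uniformly distributed
noise bounded by a tiny epsilon at each copy -- is my assumption for the sake
of the sketch, not Goodman's; what it shows is that however small the
per-copy inaccuracy, a long enough chain of copies can drift arbitrarily far
from the original.

```python
import random

def copy_chain(value, generations, epsilon, seed=0):
    """Reproduce `value` through `generations` successive copies,
    each introducing an error no larger than `epsilon`."""
    rng = random.Random(seed)
    for _ in range(generations):
        value += rng.uniform(-epsilon, epsilon)  # each copy is "almost" exact
    return value

original = 1.0
copy = copy_chain(original, generations=100_000, epsilon=0.001)
drift = abs(copy - original)
# Each individual copy deviates by less than 0.001, yet the accumulated
# drift after many generations is far larger -- and, absent disjoint,
# finitely differentiated characters, there is no step in the chain at
# which the error could even be detected, let alone corrected.
```

A notational system blocks this drift precisely because each copy can be
snapped back to one of a discrete, always-distinguishable set of characters.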
Consider the rationale of an electronic textual edition, in particular the
argument that all one needs in such an edition are
high-definition images. The usual rejoinder is that, after all, we really do
need the investment of editorial intelligence in our editions, and in fact
we can have the best of both worlds by combining images with commentary. My
question is as follows: does Goodman's point about Cage-like systems lend
new and fundamentally more interesting support to editorial intervention
than the argument from efficiency of accumulated scholarship (we need
editors to do their thing because we want to do other things)? No matter
how good the imaging, the digital image, as it is actually seen, is not the
original: choices are made during the imaging, and the process of putting
the result before one's eyes inevitably causes changes. But the editor,
having viewed the original, can record insights obtained from direct
inspection, e.g. as markup.
So, then, if I am right, the digital "less" can be more than imaging is able
to provide. Could this be what it means to reimagine a work in the new medium?
Yours,
WM
Dr Willard McCarty | Senior Lecturer | Centre for Computing in the
Humanities | King's College London | Strand | London WC2R 2LS || +44 (0)20
7848-2784 fax: -2980 || willard.mccarty@kcl.ac.uk
www.kcl.ac.uk/humanities/cch/wlm/