making computational linguistics (252)

Thu, 7 Sep 89 22:15:13 EDT

Humanist Discussion Group, Vol. 3, No. 445. Thursday, 7 Sep 1989.

Date: 7 September 1989
From: Willard McCarty <>
Subject: making computational linguistics

The following is abstracted from a letter sent by Hans Karlgren,
the Chairman of the organizing committee of the Computational
Linguistics conference COLING 1990, to all potential
participants. It should be of considerable interest to several of
us who normally have nothing directly to do with computational linguistics.
- - - - - - - - - - - - - - - - - - - - - - - - -
Date: Wed, 6 Sep 89 18:34:00 EDT
From: Hans Karlgren KVAL <>
Subject: COLING

Dear Colleague,


After twelve well-renowned international conferences arranged by
the International Committee for Computational Linguistics and
after an increasing amount of literature and local meetings
dedicated to the topic one might assume that all of us who care
would by now know exactly what computational linguistics is
about. We don't. Not only do we differ slightly on where we want
to place the emphasis; the concept also evolves with each of us
in the vague, successive way in which human language so
intriguingly and so fruitfully keeps changing. Computational
linguistics is what we make it.

An international conference can be seen as a stimulus-response
sequence. The initiators of COLING emit a stimulus to a wide
community of people who probe human language - and such as do not
know they do - and get a response we can only partially control.
We set things in motion by announcing the conference, we can aim
at an intended target area by filtering the contributions offered
and we will not insignificantly guide the missiles underway by
giving directions and hints to the authors/speakers and
discussants. COLING is not a publishing service, impartially
recording the best, in some predefined sense, of what is going on
anyhow. It should help make things happen. As a consequence, we
may well have to turn down offers of papers which in all
objectivity are good pieces of scientific work.

My endeavour as the chairman of the program committee is to
encourage controversial presentations worthy of discussion. That
is what I tried to signal in the little space available in the
first announcement where we invite the public to present either a
topical paper on some crucial issue in computational linguistics
or else a very brief report, with software demonstration, on
some interesting ongoing project. What I tried to negate with
that formulation was extensive project descriptions. I do respect
large-scale experiments and I do support the demand that great
efforts should be given to their documentation down to minute
detail of procedure and storage format, to make repetition easier
for verification and to avoid unnecessary duplication, but such
accounts are utterly unsuited for oral discussion and should not
encroach on the few hours we have available for multi-lateral
discussion in Helsinki.

The kind of papers I hope to see less of is the kind which is so
common in many international conferences, COLINGs not excluded,
where a reasonable project, based on sound (combinations of
current) theoretical assumptions and claiming, perhaps
justifiably, five per cent better performance in some dimension
than current procedures, is described in great detail, rounding
up with a little preview of the next version of "The System".
Whatever the scientific merits of such projects - a few of them
fall in a gap between knowledge-seeking research and usable
applications - they cannot be meaning- fully presented in six
pages, summarized in 15 minutes and evaluated in the same time
quantum. It is incumbent on the author to lift up some crucial
issue, if any, which is raised by the project and which is
related to computational modeling; he should not just tell COLING
what he is working on these days.

What, then, is the core of the matter? I believe we all agree
that computational linguistics is about computation and
linguistics, with an emphasis on 'and'. The key concepts are
computation, not computer, and linguistics, not language
processing. We should therefore exclude papers, however good,
about computation applied to linguistic material unless some
linguistic insight is at issue, or about computer support for
linguistics unless the computational procedure has some
non-trivial linguistic aspect.

A great goal is to model computationally human linguistic
behaviour as a means to better understand how we speak and
listen, write and read, learn and unlearn, understand, store and
restructure information. An ultimate question is to what extent
these our most human activities can be reduced to mechanistic
operations: by teaching machines we can recognize what in us is
machine-like. Whenever we can mechanize something which seems
deeply human, we gather urgent, often painful, knowledge about
ourselves; whenever we fail, we may learn even more. It is not
only in thermodynamics that the great failures mark the great
advances.

Computational modeling of human behaviour is a great goal. Some
colleagues would say it is the goal. I think it is going too far
to require that computational models of human behaviour must
needs be valid as possible (future components of) models of the
human intellect; that is a moot point of a rather remote
philosophical nature since we can hardly ever verify claims about
the similarity or analogy between our models and human minds.

One theme which I personally see as crucial in computational
linguistics at this particular point of time is machine learning;
cf. my portion of the summing-up-and-look-ahead session at
COLING 88 in Budapest, subsequently published along with the
other statements of that session in the Prague Bulletin No. 51,
which was intended as a seed for COLING 90 and which I therefore
recommend reading.

Modeling learning is interesting in itself but modeling language
users' learning and adaptation attacks one of the most salient
features of natural languages and one which so far is
conspicuously absent from invented languages: the intriguing
feature that human users understand utterances and texts by means
of knowledge about the language system and that such knowledge is
successively acquired from the utterances and texts we encounter.

To get a relevant model for human linguistic competence we must
teach machines to learn: to update their grammar and lexicon from
the very texts on which they apply them, treating the texts as
operands for the analyzers and simultaneously as operators that
modify the analyzers. It is my belief that there are basic
procedures, as yet poorly understood, which are common to
language change over longer periods, language acquisition by an
individual and the mutual adaptation between dialogue
participants or the reader's adaptation to the author during and
possibly merely for the purpose of the current dialogue or text.
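The idea that a text is at once operand and operator can be made concrete
with a small sketch. The following toy analyzer (entirely hypothetical,
not any system from the letter) tags each token against its current
lexicon and then folds that token back in, so the lexicon is updated by
the very text it is applied to:

```python
# Toy sketch of a text as operand and operator: the analyzer reads a
# text with its lexicon, and the text simultaneously modifies that
# lexicon. All names here are illustrative, not from any real system.
from collections import Counter

class AdaptiveLexicon:
    """An analyzer whose lexicon grows from the texts it analyzes."""

    def __init__(self):
        self.counts = Counter()  # word -> frequency seen so far

    def analyze(self, text):
        """Tag each token KNOWN or NEW, then fold it into the lexicon."""
        tags = []
        for token in text.lower().split():
            tags.append((token, "KNOWN" if token in self.counts else "NEW"))
            self.counts[token] += 1  # the operand acts as an operator
        return tags

lex = AdaptiveLexicon()
first = lex.analyze("language changes as we use language")
# "language" is NEW at its first occurrence, KNOWN at its second
```

The same mechanism, run over successive texts, gives a crude picture of
the mutual adaptation the letter describes: what was NEW to the analyzer
in one dialogue turn is KNOWN in the next.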

The important successful attempts to handle very large text
corpora and huge lexical data bases might obscure this crucial
issue and postpone its solution: I feel uneasy about some
impressive analyses and syntheses based on sub-sub-
subcategorizations of words and situations in some micro-slice of
our world. Close-ups on some instances are indispensable in
serious empirical research, but continued fact collecting and
algorithm building do not necessarily bring us generalizable
insights or generalizable procedures. The conclusion when we have
succeeded in mapping some detail, which turned out to be more
complex than we could imagine, should not always be to find
resources, ours or somebody else's, for every other detail to be
mapped with equal precision, but to model the procedure for such
mapping.

Details must be seen in a context and I believe that the most
fruitful context just now is that of learning and adaptation.

Now, artificial intelligence does study machine learning. But I
expect that it is from linguistics, with its tradition of
studying change and with an object which so obviously does not
wait till the next authorized release before it changes, that
major break-through will come for linguistic adaptation and for
learning at large. Why not at COLING?

What I have said should not be taken to mean that I dismiss
applied computational linguistics as unworthy of discussion at
COLING. I certainly do not. Applications can help us ask new
questions, and the success and, even more, the failures in
practical tasks give us very valuable feedback, confirming and
disconfirming our beliefs. But it should be clearly understood
that application is not the ultimate test of the value of what we
are doing: I think it is absurd to see, say, the needs for office
automation as a justification for our study of human language.

There are many good issues which emerge in applications, and
catch our eye only there, but their implications for how we see
language and computation have to be pinpointed. Here, the program
committee can help the authors so that they dare focus on one
crucial issue rather than describe their whole project.

Thus, if somebody had constructed an automatic translator,
actually producing readable output when given arbitrary economic
or technical prose, the world would not have become a very
different place, although quite a few organizations would have
run more smoothly: the insights gathered from trying to translate
mechanically by mere dictionary and syntax provide us with
essential knowledge about translation, about language and hence about
ourselves. In the case of machine translation, therefore, I would
like to see papers illuminating some feature of the task of
translating which they claim to be (un)programmable, rather than
demonstrating how well their tool works.

One particular field of application which I think deserves more
attention from good computational linguists is that of
documentation and information retrieval. Many good linguists are
unaware of the great challenges of that field, and, needless to
say, documentalists at large, including those who are advanced in
using and designing computerized systems, are typically unaware
of the linguistic issues: they ignore procedures which linguists
know to be effective, and are oblivious to fundamental
difficulties and impossibilities where linguists could help them
channel efforts to more rewarding ends.


Hans Karlgren
Program Committee Chairman

Practical notes

1. Focus on some issue, and state which.
2. Say explicitly where you differ from predecessors (including
yourself in earlier publications) and opponents. It is for
the author to write contrastively, not for the evaluators and
other readers to run their mental compare programs to detect
possible differences.
3. Cut out all details and technicalities which are not
indispensible for your argument and add some simple examples.
(Simple to all: only a small minority of COLING participants have
anything like a native command of your language, even if that
language is English, and witty examples are typically lost on us).
4. Explain all technical terms and notations even though they
seem elementary to you and your nearest colleagues. Readers are
unlikely to recall precisely what you - or somebody else or the
reader himself - wrote last year. And do not use abbreviations -
their effect on total text length is negligible but the
alienation effect on a reader is considerable, even when the
abbreviations are explained: few "words" are so homonymous as
acronyms, not to speak of temporary abbreviations.
5. Don't be provincial: Do not restrict your readership to those
who have the same language of study or use the same scientific
jargon, formal apparatus and software tools. Certainly not all
qualified COLING participants are familiar with, say,
conversational English, Prolog or your group's favourite semantic
representation, and you must have very good reasons if you want
to spend part of your space allotment in teaching them. If you
avoid cultural provincialisms in your presentation you are likely
to find some of your best critics and future scientific dialogue
partners among those who have a very different background - that
is one of the points in addressing an international audience like
COLING.