14.0440 hypertext and the Web and XML

From: by way of Willard McCarty (willard@lists.village.Virginia.EDU)
Date: 10/29/00

                   Humanist Discussion Group, Vol. 14, No. 440.
           Centre for Computing in the Humanities, King's College London
             Date: Sun, 29 Oct 2000 06:36:51 +0100
             From: Willard McCarty <willard.mccarty@kcl.ac.uk>
             Subject: hypertext and the Web
    The following extract from the abstract of John B. Smith's opening plenary
    at Hypertext '97 (Southampton, U.K.) helps to explain the discrepancy
    between the state of hypertext research in computer science and the state
    of the Web:
    >Many members view the Web as an intrusive, unwelcome guest who insists on
    >making his or her point of view prevail. Ignoring the hard-won knowledge
    >of this community, the WWW has simplified the data model, ignored problems
    >of large-scale navigation, and declared that link integrity is irrelevant.
    >Consequently, many wish that it would go away so that they could continue
    >their studies along familiar paths. Since it hasn't, they have begun to
    >adapt their work to it, but often grudgingly and with the least
    >accommodation possible.
    >
    >I will suggest a different perspective. The WWW, along with Java and the
    >Internet, are not just new elements in the computing infrastructure. They
    >ARE the infrastructure. Most computing and communication activities in the
    >future will take place in this context. If the Hypertext community wants
    >to continue and to create value for its knowledge, it must embrace the
    >WWW, not just tolerate it.
    (See the link at <http://journals.ecs.soton.ac.uk/~lac/ht97/>, and note in
    passing that the folks at Southampton have put most of the proceedings
    freely online, for which many thanks.)
    A question to those who understand XML: to what extent will it allow us
    users of the Web to get the benefit of this CS research, which is now
    effectively out of reach? In "Open Hypermedia as User Controlled Meta Data
    for the Web", Kaj Grønbæk, Lennert Sloth and Niels Olof Bouvin (Aarhus)
    describe "a mechanism [built on XML]... for users or groups of users to
    control and generate their own meta data and structures", e.g. "user
    controlled annotations and structuring which can be kept separate to the
    documents containing the information content". If I understand the import
    of what these fellows are saying, this would mean that people like us could
    build far more adequate scholarly forms (editions, commentaries et al.)
    online. Or am I misreading?
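    For those who have not seen the XML side of this, the separation Grønbæk
    and colleagues describe can be sketched with XLink's out-of-line links:
    the scholar's commentary lives in its own file and merely points into the
    edition, which is never touched. (The filenames, element names, and
    XPointer expression below are invented for illustration, not taken from
    their paper.)

```xml
<!-- annotations.xml: a hypothetical standoff annotation file.
     The commentary is stored here, not in the edition itself. -->
<annotations xmlns:xlink="http://www.w3.org/1999/xlink">
  <note xlink:type="extended">
    <!-- locator pointing into the (unmodified) source document -->
    <target xlink:type="locator" xlink:label="src"
            xlink:href="edition.xml#xpointer(//l[42])"/>
    <!-- the annotator's own remark, kept entirely separate -->
    <comment xlink:type="resource" xlink:label="note">
      Compare the variant reading in the earlier printing.
    </comment>
    <!-- an arc declaring a traversal from the text to the comment -->
    <go xlink:type="arc" xlink:from="src" xlink:to="note"/>
  </note>
</annotations>
```

    Because the annotation file is a document in its own right, a group of
    users could maintain and exchange such files without any say-so from
    whoever controls the annotated text.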
    The Open Hypermedia movement (if it can be called that) seems quite
    interesting and promising to this outsider; see the group's Web page at
    <http://www.csdl.tamu.edu/ohs/>. Its purpose, I gather, is to open up the
    layer of software in which structural abstractions are defined so that
    users could define their own. "A natural way in which to accomplish this is
    to generalize the link server of contemporary OHS's, replacing
    this single entity with an open set of link server peers (or simply,
    structure servers)." Does this mean, as I think it does, that people like
    us could begin to modify what links and nodes do? That's certainly what we
    need if we're going to craft adequate scholarly forms online.
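    To make the "link server" idea concrete: the links themselves can be
    gathered into a separately served database, with several typed endpoints
    per link, which a structure server of one's own could interpret however
    one wished. The fragment below is my own hypothetical sketch (invented
    filenames and element names), not the OHS group's actual format; it shows
    the kind of multi-ended, typed link a plain HTML anchor cannot express.

```xml
<!-- linkbase.xml: a hypothetical, separately served link database.
     One link, three endpoints; nothing is embedded in the documents. -->
<linkbase xmlns:xlink="http://www.w3.org/1999/xlink">
  <link xlink:type="extended">
    <lemma   xlink:type="locator" xlink:label="text"
             xlink:href="edition.xml#xpointer(//lg[3])"/>
    <gloss   xlink:type="locator" xlink:label="comm"
             xlink:href="commentary.xml#n17"/>
    <variant xlink:type="locator" xlink:label="app"
             xlink:href="apparatus.xml#v3-12"/>
    <!-- typed arcs: the server, not the document, decides what
         traversing each of these should do -->
    <annotates xlink:type="arc" xlink:from="comm" xlink:to="text"/>
    <attests   xlink:type="arc" xlink:from="app"  xlink:to="text"/>
  </link>
</linkbase>
```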
    Expert comment most welcome!
    Dr Willard McCarty / Senior Lecturer /
    Centre for Computing in the Humanities / King's College London /
    Strand / London WC2R 2LS / U.K. /
    +44 (0)20 7848-2784 / ilex.cc.kcl.ac.uk/wlm/
