              Humanist Discussion Group, Vol. 35, No. 531.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2022-02-14 10:33:25+00:00
        From: Manfred Thaller <manfred.thaller@uni-koeln.de>
        Subject: Re: [Humanist] 35.524: Man a Machine . . . and AI

Dear Øyvind, Dear Jerry,

sorry for the slight delay, which gives me the opportunity also to react
to the answers Jerry has already provided.

Very broadly I agree with Jerry that the "dimensions" he conceptualizes
are different from the ones I am contemplating. My explanations for that
differ from his, however - and there are some tantalizing convergences
on which I'd like to comment.

Many, if not all, of the differences come from one very strong
disagreement I have.
> So historical documents are fundamentally no different
> from poetic documents,
Well - as documents are documents, I of course have to agree that they
are related; but that is almost completely irrelevant if we refocus on
the way in which philologists / literary scholars and historians read
them. A literary scholar comparing the narrative devices of Dickens and
Trollope and a historian trying to integrate both of their descriptions
into a consistent view of 19th-century society are doing something
seriously different.

Personal stage aside: in the unlikely event that I should ever be asked
to write a history of the historiographical relevance of Hayden White, I
would certainly select satire, or more probably straightforward ridicule.

A further difference is that for at least a substantial sub-section of
historians, different types of text are more important than they are for
literary scholars. For my stripe of historian, the administrative acts of
the poor houses of England, as far as they have survived, constitute a
better source than Oliver Twist. And if I were to look at the reasons
England had so much difficulty accepting a decimal currency, it is not
primarily the Duke of Omnium I would consult.

Deducing from that the properties of computational models for handling
texts / sources, I think that Jerry's are primarily derived from the
intention to represent a text as an entity which can be analyzed as a
text. Mine come primarily from the purpose of extracting snippets from
different sources (with considerable emphasis on non-textual ones, by the
way) and trying to reconstruct a model of some aspect of the society
that produced them.

This leads to an interesting point where we both agree; but then we don't.

> I don't think that, at the user-level, such machines should be working to
> "minimize" or even eliminate contradictions
As long as we are discussing the representation of a document for
processing, I could not agree more.
(https://www.academia.edu/43660950/On_vagueness_and_uncertainty_in_historical_data)

But during the reconstruction of some subsystem I consider an
integration that integrates n+1 data points consistently as preferable
to one that integrates just n. (Beware of rhetoric here! In a more
serious treatment I would not argue that all inconsistencies have equal
weight.)

If I understand Jerry correctly, he discusses dimensions as a vehicle to
model the conceptual space in which a document is represented; I am
using them as a device to construct a space in which algorithms can
create connections between sources.

Allow me the assumption that a researcher starts with a complete lack
of knowledge of a new field of research, encountering some n snippets
from various books and sources. If such a researcher is able to connect
some of those snippets to previous knowledge, not only do those snippets
become more meaningful, but the knowledge base for trying to understand
the remainder of them also increases. - You have of course discovered
that I am describing the classical hermeneutical circle (or rather
spiral) or, if you prefer, Peirce's abductive reasoning. As this is a
dynamic process, the amount of knowledge / information is never static;
it is permanently changing as a result of the ongoing interpretation of
snippets in the light of changing background knowledge. (And when the
background knowledge turns out to contain inconsistencies, one can
accept them, mark some of the interpretations of snippets as doubtful,
or remove an inconsistency by dropping some interpretations.)
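
If it helps to make that spiral a little more tangible, here is a
deliberately naive sketch in Python of the loop just described.
Everything in it - the names, the toy "not:" convention for
contradictions - is invented for this email and is of course not a
design: snippets become interpretable once they overlap with the
background knowledge, each accepted interpretation enlarges that
knowledge and may unlock further snippets, and a clash with accepted
knowledge flags a snippet as doubtful rather than integrating it
silently.

    def interpret(snippets, background):
        accepted, doubtful = [], []
        changed = True
        while changed:                        # the spiral: each pass may enable the next
            changed = False
            for snippet in snippets:
                if snippet in accepted or snippet in doubtful:
                    continue
                if not snippet & background:  # no connection to prior knowledge yet
                    continue
                clash = any(
                    ("not:" + claim) in background
                    or (claim.startswith("not:") and claim[4:] in background)
                    for claim in snippet
                )
                if clash:
                    doubtful.append(snippet)  # kept, but flagged as inconsistent
                else:
                    accepted.append(snippet)
                    background |= snippet     # new knowledge feeds the next pass
                    changed = True
        return accepted, doubtful, background

    snippets = [
        frozenset({"poorhouse record", "old-style date"}),
        frozenset({"old-style date", "slow administrative reform"}),
        frozenset({"old-style date", "not:slow administrative reform"}),
    ]
    print(interpret(snippets, {"poorhouse record"}))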

Now my point is that in the case of a biological cognitive agent, a.k.a.
a human being, this process never stops. Parts of it may almost stop,
being kept in limbo somewhere in the background while the finite
cognitive capabilities are dedicated to other problems. And when the
focus is directed at the background problem again, some of the network
of interpretations may have been lost. In any case: in the biological
case, there is no fixed amount of information or knowledge independent
of the state of the system as a whole.

With our current computational systems, however, we assume that there
is a clear separation between data and the algorithms operating upon
them. The state an algorithm is in is something inherently different
from the data it operates upon.

That is irrelevant as long as you see an information system / program
as a tool which performs a clearly defined finite task - count all
words with property x - the result of which is then considered by a
user. When you want to progress beyond that, creating a system which
does not perform a single discrete task but acts as an enhancement of
your ability to juggle "understandingly" the relationships between
10,000 rather than 100 observational snippets, it becomes highly
relevant. And the algorithm that does the juggling becomes central: it
IS the information in the system.

This is quite difficult to describe in depth in a short space. A trivial
example may make it clearer:
https://www.academia.edu/69323767/Can_historical_information_be_represented_outside_of_a_graph_hypergraph_network_1

My apologies for being not only late but also loquacious.

Nevertheless, three more short comments:

> In general, the move is to atomize the
> natural language materials so completely as to eliminate the need for any
> relational database,
Comment 1: That warms my heart. From the point of view of a more
technical theory of "modelling" I have always found the idea of marking
a text up in XML (with an underlying graph structure) and then
processing such data in a relational (table-based) system to be an
attempt to keep a canary in a goldfish bowl.

Comment 2: "atoms", from ἄτομος inseparable.
Well:
Source "on the Friday after St. Hilari(o)us 1763"
== Data Point 1: "astonishingly late usage of old style dating in
administrative record"
== Data Point 2: "January 14th, 1763"
ἄτομος?

Well, if the source snippet has a position on a dimension representing
time and a position on another dimension representing bureaucratic
development ...
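
As a purely illustrative data structure (Python again; the class and
field names are my ad-hoc choices, nothing more), such a non-atomic
"atom" might simply be a snippet carrying positions on several
interpretative dimensions at once:

    from dataclasses import dataclass, field

    @dataclass
    class Snippet:
        text: str
        positions: dict = field(default_factory=dict)  # dimension name -> position

    hilarius = Snippet(
        text="on the Friday after St. Hilari(o)us 1763",
        positions={
            "time": "1763-01-14",                                    # data point 2
            "bureaucratic development": "old-style dating still in use",  # data point 1
        },
    )
    print(hilarius)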

> to implement certain affordances of
> graph databasing
Comment 3: The second paper of mine quoted above probably proves that
graphs are dear to my heart. However: graphs are dimensionless, i.e.,
all edges have exactly the same length (0). In my opinion, if you want
to use them to discuss language phenomena, e.g. metaphors, you have to
embed them in one of those n-dimensional spaces. Unfortunately only
available in German:
https://www.academia.edu/61283788/Hamburg_UP_Flueh_et_al_Reading_Thaller
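
To illustrate the point about length with a toy Python sketch (the
nodes, edges and coordinates are all made up for the occasion): in the
bare graph the two edges below are indistinguishable, but once the
nodes are embedded in an n-dimensional space each edge acquires a
length of its own.

    import math

    # node -> position in an (invented) 3-dimensional space
    nodes = {
        "metaphor A": (0.1, 0.9, 0.0),
        "metaphor B": (0.4, 0.7, 0.2),
        "literal use": (0.9, 0.1, 0.5),
    }
    edges = [("metaphor A", "metaphor B"), ("metaphor B", "literal use")]

    for a, b in edges:
        # in the bare graph both edges are identical; in the embedding they differ
        print(a, "--", b, ":", round(math.dist(nodes[a], nodes[b]), 2))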

Kind regards,
Manfred


On 12.02.22 at 07:10, Humanist wrote:
> Humanist Discussion Group, Vol. 35, No. 524.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
>
>
>
> Date: 2022-02-11 16:09:19+00:00
> From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
> Subject: Re: [Humanist] 35.521: Man a Machine . . . and AI
>
> Dear Øyvind,
>
> If I understand what Manfred is arguing, his approach to the general
> design of
> computational platforms for "humanist" documents is different from mine. I
> don't think that, at the user-level, such machines should be working to
> "minimize" or even eliminate contradictions. The goal should be to
> clarify the
> differential relations that constitute the system of natural language
> communication.
>
> We're all aware of the operational "contradictions" that pervade so-called
> poetic discourse, but in my view such features are characteristic of
> all natural
> language communication, which is always an exchange between different
> (codependent) agents. So historical documents are fundamentally no
> different
> from poetic documents, although the latter operate by foregrounding
> how they
> deploy and exploit contradictions and differentials.
>
> "At the user level" the platform should be setting the material's
> codependent
> agent(s) free to expose these differential relations ("different
> interpretations") by interacting with, and adding to, the history of these
> differences ( = the "reception history" of the work(s) being
> investigated).
> Briefly, these interpretive acts would be declarative sets of stand-off
> annotations that the computational design reinvests in the reception
> history
> (which looks to me a lot like the "running processes" that Manfred
> speaks of).
> Current interpretive moves have to be reinvested because the reception
> histories of "the past" are always informing the moves of current
> agents. As
> Faulkner once shrewdly observed: "the past is never dead . . . it's
> not even
> past". Currently running processes are always re-running earlier
> processes . .
> . which is why current users' moves have to be reinvested in the system.
>
> I can't get into the technical details of how to do this -- I know
> some think it
> is impossible -- except to say that we mean to implement certain
> affordances of
> graph databasing (specifically Neo4j). In general, the move is to
> atomize the
> natural language materials so completely as to eliminate the need for any
> relational database, which as David Schloen some time ago pointed out
> runs a
> minimal form of natural language computing. As such, it makes an
> unfortunate
> compromise between the computational power of natural language
> documents and the
> power of digital-electronic documents. Our view is that if natural
> language
> documents -- oral, textual, graphical, electronic -- are more
> radically atomized
> than, for instance, is the case in the CEDAR Initiative, we could have
> computational machines that will be useful prosthetic tools for
> studying the
> differential operations of natural language materials.
>
> I recently wrote a brief sketch of my general approach for a special
> issue of
> Textual Cultures that Marta Werner is putting together for publication
> later
> this year. The topic for the issue is "Provocations for New Approaches to
> Editing" (or something like that). At present I'm trying to find the
> time to
> expand it.
>
> Jerry
>
> On 2/10/22, 9:49 PM, "Humanist" <humanist@dhhumanist.org> wrote:
>
>
> Humanist Discussion Group, Vol. 35, No. 521.
> Department of Digital Humanities, University of Cologne
> Hosted by DH-Cologne
> www.dhhumanist.org
> Submit to: humanist@dhhumanist.org
>
>
>
>
> Date: 2022-02-10 16:02:47+00:00
> From: Öyvind Eide <oeide@uni-koeln.de>
> Subject: Re: [Humanist] 35.499: Man a Machine . . . and AI
>
> Dear Jerry,
>
> your email provoked me to pick up on but one of the things you
> mention. I do
> this in gratefulness to a group of students with whom these issues were
> discussed in a colloquium last semester:
> https://lehre.idh.uni-koeln.de/lehrveranstaltungen/wisem21/digital-humanities-theorie-und-praxis/
>
> The following two articles were the basis for the comment:
>
> McGann, Jerome. “Texts in N-Dimensions and Interpretation in a New Key
> [Discourse and Interpretation in N-Dimensions].” TEXT Technology : the
> journal
> of computer text processing 12, no. 2 (2003).
>
> Manfred Thaller (2017): Between the Chairs: An Interdisciplinary Career.
> Historical Social Research, Supplement, 29, 7-109. Part 7: Next Life:
> My Very
> Own Ivory Tower, 81–93.
>
>> Date: 2022-01-28 16:59:33+00:00
>> From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
>> Subject: Re: Man a Machine . . . and AI
>>
>> I set this personal event in the context of the distributed computational
>> network of human communication and get a sober view of AI. By no means a
>> dismissive view. But the distributed network of any AI computational
>> model,
>> actual or conceivable, seems so minimal as to be all but without any
> statistical
>> or quantum relevance.
>>
>> Why? Because unlike “natural” processes, the hardware of AI as currently
>> designed has no access to its own quantum “histories”. A reply from an AI
>> visionary might be (has been?) that when AI software is designed to
>> interoperate directly (seamlessly?) with an individual’s biochemical
>> system,
>> that limitation will be overcome. Does anyone here know if such proposals
>> have been advanced and perhaps also disputed? (I know that the poet
>> Christian Bok has been working on creating what he calls a “living text”
>> (biochemically coded). No one, not even himself, has been happy with the
>> results yet.)
> The question the students and I pondered on was the relationship
> between these
> two paragraphs in the articles mentioned above:
>
>> We might begin from the following observation by the celebrated
>> mathematician
>> René Thom: “In quantum mechanics every system carries the record of
>> every previous interaction it has experienced – in particular, that
>> which created it -- and in general it is impossible to reveal or
>> evaluate this record” (Thom 16). A literary scholar would have no
>> difficulty rewriting this as follows: In poetry every work carries
>> the record of every previous interpretation it has experienced – in
>> particular, that which created it -- and in general it is impossible
>> to reveal or evaluate this record.” It is impossible because the
>> record is indeterminate. Every move to reveal or evaluate the record
>> changes the entire system not just in a linear but in a recursive
>> way, for the system – which is to say, the poetical work – and any
>> interpretation of it are part of the same codependent dynamic field.
>> Consequently, to speak of any interpretation as “partial” is
>> misleading, for the interpretive move reconstructs the system, the
>> poem, as a totality. This reconstruction corresponds to what is
>> termed in quantum mechanics the collapse of a wave-function into its
>> eigenstate. (McGann, p 15)
>> 10) An information system fit for the handling of historical sources
>> should exist as a set of permanently running processes, which try to
>> remove contradictions between tokens. Such tokens are used to
>> represent data. They do not directly map into information.
>> Information is represented by a snapshot of the state of a specific
>> subset of the concurrently running processes. [...] 11) The data in
>> the totality of historical sources, or any subset thereof, forms a
>> mutual context for the interpretation of any set of specific items
>> contained therein. It can be envisaged as a set of n-dimensional
>> configurations of tokens representing physically existing sources,
>> each of which exists in an m-dimensional universe of interpretative
>> assumptions. Information arises out of these data by permanently
>> running processes, which try to minimize contradictions and
>> inconsistencies between subsets of the data. 12) This model is both,
>> a conceptual one for the hermeneutic “understanding” of historical
>> interpretation, as well as a technical one for future information
>> systems supporting historical analysis. (Thaller, pp 89–90)
> Is ”the collapse [...] into its eigenstate” to be compared to ”a
> snapshot of the
> state of a specific subset of the concurrently running processes”?
>
> Is the interpretative move (McGann) the same as the context-based
> interpretation
> (Thaller)? Or are they analogous, parallel, or at least comparable?
>
>> Realizing that seems to me important as we try to design and build
>> digital
>> tools for investigating and sustaining human exchange in both natural and
> artificial
>> worlds, including language exchange.
>
> So to my main (and quite naive) question: Is the system suggested by
> Thaller an
> operationalisation of quantum poetics, applied to historical disciplines?
>
> All the best,
>
> Øyvind



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php