4.0414 Big Science (2/52)

Elaine Brennan & Allen Renear (EDITORS@BROWNVM.BITNET)
Thu, 23 Aug 90 21:07:29 EDT

Humanist Discussion Group, Vol. 4, No. 0414. Thursday, 23 Aug 1990.

(1) Date: Wed, 22 Aug 90 18:25:12 CDT (12 lines)
From: "Michael S. Hart" <HART@UIUCVMD>
Subject: Re: 4.0408 Big Science

(2) Date: Thu, 23 Aug 90 10:47 PDT (40 lines)
Subject: Re: 4.0408 Big Science

(1) --------------------------------------------------------------------
Date: Wed, 22 Aug 90 18:25:12 CDT
From: "Michael S. Hart" <HART@UIUCVMD>
Subject: Re: 4.0408 Big Science (1/20)

This is true of "small science" too. When I was studying System Dynamics
and Design at Dartmouth, I once cornered the head of the department in
his office, and forced him to tell me that what he was telling congress
was not what he really thought, or what anyone who had studied systems
in even a single class would think. When he finally told me the truth,
I nearly abandoned hope (all ye who enter here).

Michael S. Hart
(2) --------------------------------------------------------------------
Date: Thu, 23 Aug 90 10:47 PDT
Subject: Re: 4.0408 Big Science (1/20)

Even more insidious (I suspect) in the long run is the practice of
applying for Big Science grants, say in biology, chemistry, or
medicine, for work ALREADY SUCCESSFULLY DONE. That way, the reports at
the end of the year will show provable, testable progress. That way,
the grant will be renewed, and the referees will be able to write
letters validating the research as good. The old idea of, say,
dropping lead balls from the Leaning Tower, our Humanist (fossilized)
memory and model, is not pertinent, because testing hypotheses that
may not PAN out (no gold there) will show that the money was spent on
work that led to a dead end. Funders are not in business to keep a lab
running, with its overhead in salaries and equipment, for the sake of
having a lab running. The lab must be successful. So, you get fairly
easy projects, or you get things that will probably show positive
results, but you don't get risky things, or harebrained (as it turns
out?) results with Palladium. You don't get challenging science
either, or odd or far-out or problematic things to investigate. Or you
get, increasingly, fudged papers and results because of the pressure
to produce. Alchemists were more honest looking for the homunculus,
perhaps. On their own time and equipment. If desperate, talk to the
Black Poodle playing with its tail, and sign a long-term contract for
success. One is more "honest" with a small-time, individual grant
application to write or complete a book of poems, say. The poems may
turn out poor things. But it would be wiser to have completed the
book, or almost done it, and show that as a positive result: more
cynical, but wiser. Also, of course, for J Goldfield, a good example
of la mauvaise foi (bad faith), which is endemic to these times.
Everyone can approve success or promising results; no one wants to
say the odds were against finding those proteins in that way in that
place, because then you are betting other people's funds and sounding
irresponsible. An analysis of the bibliographies transported (by
computer) from research paper to research paper, and of citations of
names cited, which is used to measure quality too, is a
self-justifying, self-perpetuating "racket." In big sociology and
psychology too, one might suggest. But, que voulez-vous (what can
one do)?