Humanist Discussion Group, Vol. 13, No. 547.
Centre for Computing in the Humanities, King's College London
 From: Thierry van Steenberghe (41)
Subject: Re: 13.0529 early b'day greeting voiced
 From: Willard McCarty <firstname.lastname@example.org> (49)
Subject: music and a digital Ariel
Date: Tue, 18 Apr 2000 21:38:00 +0100
From: Thierry van Steenberghe <email@example.com>
Subject: Re: 13.0529 early b'day greeting voiced
RE: Message of François Lachance in Humanist 13.529:
This is intended as a response to his wondering, rather than to his
greetings, but I also want to take the opportunity to wish you, Willard, as
the Humanist list's father, a very happy list b-day!
Humanist Discussion Group wrote:
> Humanist Discussion Group, Vol. 13, No. 529.
> Date: Thu, 6 Apr 2000 11:56:24 -0400 (EDT)
> From: Francois Lachance <firstname.lastname@example.org>
> I recently had the occasion to wonder if any persons have been
> experimenting with phonetic transcription and voice synthesis software.
I have been experimenting with such things, with moderate satisfaction, some time ago.
During a project I was leading for a publishing company, I wanted to know
whether correct voice synthesis of words and sentence segments (in French)
was possible directly from their phonetic transcription (IPA), which already
existed in the book we were working on and which was to be e-published on
CD-ROM. The idea was that this should be relatively easy, and would bring
the definite advantage of faithfully respecting the pronunciation
(transcription) given by the book's author. It would avoid recording spoken
tokens by selected speakers, with all the associated concerns you can
imagine, and might also save file storage, though modern sound compression
techniques make this aspect less important.
We identified a company (actually a spin-off of a well-reputed
speech-processing university lab) which had a TTS product that used a
phonetic transcription of its dictionary, and asked them whether they could
transform our IPA-coded dictionary into the transcription used by their TTS
engine. It turned out that the process worked astonishingly well, even
though it also showed that finer tweaking than simply translating IPA into
the TTS engine's phonetic code would be required to obtain
'natural'-sounding utterances.
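As a rough illustration of the conversion step described above, the core of such a translation is a symbol table from IPA to the engine's own phoneme codes. The engine codes below are hypothetical (loosely modelled on SAMPA); a real conversion would need the full French symbol inventory plus the prosodic tweaking mentioned.

```python
# Minimal sketch of an IPA-to-engine transcription step.
# The engine phoneme codes are invented for illustration (SAMPA-like);
# they are NOT the codes of any particular commercial TTS product.
IPA_TO_ENGINE = {
    "b": "b",    # voiced bilabial stop
    "ɔ̃": "o~",   # nasalised open-mid back vowel (French "on")
    "ʒ": "Z",    # voiced postalveolar fricative (French "j")
    "u": "u",    # close back rounded vowel
    "ʁ": "R",    # French uvular r
}

def transcribe(ipa_symbols):
    """Map a sequence of IPA symbols to engine phoneme codes."""
    return " ".join(IPA_TO_ENGINE[s] for s in ipa_symbols)

# "bonjour" in IPA: /b ɔ̃ ʒ u ʁ/
print(transcribe(["b", "ɔ̃", "ʒ", "u", "ʁ"]))  # → b o~ Z u R
```

A dictionary-driven mapping like this captures why the process "worked astonishingly well" for segments while still leaving naturalness (stress, duration, intonation) to be tuned separately.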
Thierry van Steenberghe
Bruxelles / Belgium
mailto:email@example.com
--------------------------------------------------------------------
Date: Tue, 18 Apr 2000 21:38:27 +0100
From: Willard McCarty <firstname.lastname@example.org>
Subject: music and a digital Ariel
Recently I spent a week with my brother, a musician who lives in Marin County, California, and whose chief delight is in the electronic manipulation and simulation of music. Apart from the fun I had playing ignorantly with his equipment I paid attention to what he and a guitarist friend of his told me about the current state of the art as people like them can get access to it. I also read the professional magazines lying about the sprawling California house, in the light of brilliant sunshine pouring through the conifers into his almost-as-big-as-my-whole-house living room. (Ah, California....) What I heard and read about gave me pause to consider the current state of virtually real music we hear, whether from a CD or "live" in the concert hall. And to wonder when we'll be wrapping our minds around musical data.
Apparently many "live" singers now sing into a device that automatically corrects their pitch. The musicians on stage may be there partly or wholly for show (e.g. the tired-out Rolling Stones), while back-stage are the actually performing musicians and a panoply of equipment. Anyone with the dosh can purchase a device that will process the music from a CD, then transform the music he or she plays into the style of what the device has previously heard. Want to sound like Oscar Peterson? No problem....
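Such pitch-correction devices presumably work by snapping a detected frequency onto the nearest note of the scale. A minimal sketch of that core arithmetic, assuming twelve-tone equal temperament and the standard A4 = 440 Hz reference (the actual devices are of course far more sophisticated, handling vibrato, glides, and formants):

```python
import math

A4 = 440.0  # reference pitch in Hz (standard concert pitch; an assumption)

def snap_to_semitone(freq_hz):
    """Quantise a detected frequency to the nearest equal-tempered
    semitone -- the basic move behind automatic pitch correction."""
    semitones = 12 * math.log2(freq_hz / A4)  # signed distance from A4
    nearest = round(semitones)                # snap to the scale grid
    return A4 * 2 ** (nearest / 12)

# A singer drifting sharp at 452 Hz gets pulled back to A4:
print(round(snap_to_semitone(452.0), 1))  # → 440.0
```

The interesting musical question, of course, is how hard the correction pulls: snapping instantly and completely is what produces the now-familiar robotic effect.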
Musicians, my brother claimed, will hire programmers who have the skill and knowledge to construct new sounds for them; the musicians themselves haven't the skill or time to do it. What particularly intrigued me, however, was the -- what shall I call it? -- state of mind that composing on the synthesizers requires. When racks of equipment give one control over the myriad components we can analyse in, say, a note from an acoustic instrument such as a guitar, how does one mentally control them all? My brother spoke of imagining a "silver cloud" with a sparkling streak down the middle. I wonder if composers now come upon their own imagery in order to get their minds around all the possibilities? Beginners, it seems, simply come upon neat sounds by accident, more or less, then save the lucky finds.
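One way to picture that "myriad of components" in a single note is additive synthesis, in which a tone is assembled from many sine partials, each with its own amplitude (and, in real synthesizers, its own envelope over time). A toy sketch; the partial amplitudes here are invented purely for illustration, not a measured guitar spectrum:

```python
import math

def additive_note(freq, partials, sample_rate=8000, duration=0.01):
    """Build a tone as a sum of sine partials.
    `partials` is a list of (harmonic_number, amplitude) pairs -- each
    pair is one of the many components the synthesist must control."""
    n = int(sample_rate * duration)
    return [
        sum(amp * math.sin(2 * math.pi * freq * h * t / sample_rate)
            for h, amp in partials)
        for t in range(n)
    ]

# A crude bright spectrum: strong fundamental, decaying upper harmonics.
samples = additive_note(220.0, [(1, 1.0), (2, 0.5), (3, 0.25), (4, 0.125)])
print(len(samples))  # → 80
```

Even this toy has five independent numbers to set for a static tone; give every partial its own attack, decay, and detuning and the parameter space explodes, which is perhaps why imagery like a "silver cloud" becomes necessary.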
I would be very glad if one of us who understands this stuff were to tell the rest of us what's happening and what it might have to do with the chiefly analytic work in the humanities.
Later, while at the University of Georgia, I was treated to a performance of The Tempest in which Ariel was represented to Prospero as a computer-generated animation, controlled by an actress wearing motion sensors. The idea behind the performance was brilliant, the execution less so, but to be fair I think it was not entirely successful because the director lacked a few (or many?) $100K worth of computing equipment and the skills of George Lucas's crew -- or better, whatever it might take to project a 3-dimensional hologram in the air over the stage. Again, once the limits of the physical are gone, in what terms do we imagine?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dr. Willard McCarty, Senior Lecturer, King's College London
voice: +44 (0)171 848 2784 fax: +44 (0)171 848 5081
<Willard.McCarty@kcl.ac.uk> <http://ilex.cc.kcl.ac.uk/wlm/>
maui gratia
This archive was generated by hypermail 2b29 : Tue Apr 18 2000 - 20:57:32 CUT