Date: Mon,  5 Jun 95 11:30:16 PDT
From: Peter Langston <psl>
To: Fun_People
Subject: Hot New CD!

[I know that few people will be able to survive without this new CD release. 
Well, okay ... I haven't actually heard it yet (lately).  But just from its
description I could tell I would want it; so I called up and ordered three! 
You might consider getting one yourself if you like weird music (or if you're
on the list of authors).  Tell them I sent you...  -psl]

Forwarded-by: Mary Simoni <msimoni@umich.edu>
From: Stephen Travis Pope <stp@cnmat.CNMAT.Berkeley.EDU>

Sound Anthology--"Computer Music Journal" Volume 19 Compact Disc

The first-ever Computer Music Journal CD--Sound Anthology--has appeared!
It includes in its 20 selections over 60 minutes of compositions and sound
examples from composers such as Clarence Barlow, Ludger Bruemmer, Paul
Lansky, D. Gareth Loy, Mari Kimura, Jean-Claude Risset, Neil B. Rolnick,
Denis Smalley, Rick Taube, James Tenney, Barry Truax, Tamas Ungvary, and
Amnon Wolman, and from researchers such as James Beauchamp, Perry Cook,
Lippold Haken, Andrew Horner, Peter Langston, Xavier Serra, and Julius
Smith.

The complete contents are below and are also stored on the Computer Music
Journal WWW site and ftp archive under the URL
http://www-mitpress.mit.edu/Computer-Music-Journal/Contents/19.CD.toc.

The Sound Anthology costs US$ 15.00, with US$ 5.00 shipping to addresses
outside the USA, and 7% GST for Canadian residents.

To order the Sound Anthology CD, contact:
	Computer Music Journal CD
	MIT Press Journals
	55 Hayward St.
	Cambridge, Massachusetts, 02142 USA
	Tel: (+1-617) 253-2889
	Fax: (+1-617) 258-6779
	Electronic mail: journals-orders@mit.edu (include credit card information)

Computer Music Journal subscriptions and the CD can also be ordered through
various on-line services such as CompuServe's CompuBooks, or the MIT Press
on-line book-store.

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

SOUND ANTHOLOGY CD CONTENTS

CMJ Volume 19

1. Rick Taube: Gloriette for John Cage--4:20
2. Tamas Ungvary: Fingerprint #2--3:11
3. Ludger Bruemmer: Excerpt from La cloche sans vallees--3:00
4. Mari Kimura: Performance excerpts--3:00

CMJ Volume 18

5. D. Gareth Loy: Blood from a Stone--4:30
6. Barry Truax: Granular Time-shifting and Transposition Composition
    Examples--6:10
7. Andrew Horner, James Beauchamp, and Lippold Haken:
    FM Matching Sound Examples--2:10

CMJ Volume 17

8. Perry R. Cook: SPASM/LECTOR Sound Examples--2:54
9. James Tenney: Collage #1 ("Blue Suede")--3:22
10. Neil B. Rolnick: Macedonian AirDrumming (excerpt)--3:00
11. Denis Smalley: Wind Chimes (excerpt)--3:40

CMJ Volume 16

12. Amnon Wolman: FORJOHN (excerpt)--3:30
13. Jean-Claude Risset: Echo--3:15
14. Paul Lansky: The Sound of Two Hands (excerpt)--3:50
15. Clarence Barlow: OTOdeBLU--3:30

CMJ Volume 15

16. Charles R. Sullivan: Extended Electric Guitar Timbres--2:42
17. Xavier Serra and Julius O. Smith, III: Spectral Modeling Synthesis
    Examples--3:32
18. Peter S. Langston: Composition Examples: Incidental Music--3:26
19. Michael Gogins: Composition Examples: Iterated Functions Systems--3:00
20. Peter S. Langston: Reprise--0:30

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

SOUND ANTHOLOGY CD PROGRAM NOTES

Index 1--4:20

Heinrich K. Taube: Gloriette for John Cage

Gloriette for John Cage is a four minute algorithmic composition for
mechanical organ written in honor of John Cage, who died in 1992. The work
was composed for the "Busy Drone" organ at the Stedelijk Museum in Amsterdam.
This organ reads a large cardboard score similar in some respects to a piano
roll. Though the physical properties of the score limit both the length and
texture of a work, the opportunity to combine modern digital algorithmic
composition with the ancient organ (itself a technological wonder) was simply
too inviting to pass up.

In keeping with the late composer's interest in aleatoric music, the main
algorithm in the work uses chance processes in which the likelihood of the
musical notes C A G E occurring out of a background of G dorian gradually
increases as a function of time, causing the composer's name to slowly emerge
to the forefront. The rhythmic mensuration and number of voices are similarly
inspired by the composer's name.
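
As a rough illustration of such a chance process (this is not the composer's
actual algorithm; the scale spelling, the linear probability curve, and all
other details are assumptions made for this sketch), a few lines of Python:

    import random

    G_DORIAN = ["G", "A", "Bb", "C", "D", "E", "F"]   # background scale
    CAGE_NOTES = ["C", "A", "G", "E"]                 # notes spelling the name

    def gloriette_like(n_notes=200, seed=1):
        """Chance process in which the probability of drawing a C-A-G-E note
        (here rising linearly from 0 to 1) grows as the piece proceeds, so the
        name gradually emerges from the G dorian background."""
        rng = random.Random(seed)
        notes = []
        for i in range(n_notes):
            p = i / (n_notes - 1)                     # likelihood of a 'name' note
            pool = CAGE_NOTES if rng.random() < p else G_DORIAN
            notes.append(rng.choice(pool))
        return notes

    print(" ".join(gloriette_like(40)))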

Index 2--3:11

Tamas Ungvary: Fingerprint No. 2

Fingerprint No. 2 is one of a number of impromptus created out of enthusiasm
when Roel Vertegaal and Tamas Ungvary got together in August 1993 in Vienna
to try out a beta version of the Intuitive Sound Editing Environment (ISEE),
discussed on pages 21-29 of Computer Music Journal 18:2. In these impromptus
the border between the church organ and computer music domains seems to fade.
This piece was edited out of several spontaneous recording sessions. During
those recording sessions, two Sentographs (isometric joysticks with 3 degrees
of freedom, developed out of Clynes' 2-D version by the University of Uppsala
in Sweden with Alf Gabrielsson and Tamas Ungvary) were connected via a
FaderMaster to an Apple Macintosh IIci running Max and ISEE, which were
connected by Apple's MIDI Manager to one another and to a Yamaha SY99
synthesizer. Max was running an existing composition environment by Tamas,
using one Sentograph output to generate notes and chords according to a
selection of scales, a second for dynamics and a third for pitch bend
control. The remaining three Sentograph outputs were used to change the
Overtones, Brightness and Articulation parameters of a selection of Open
Labial, Lingual and Compound organ instrument spaces created by Ernst Bonis.
These organ instrument spaces were implemented on the SY99, respectively
using waveshaping, FM and complex waveform additive synthesis.

The most important ISEE parameter, Overtones, altered the phase of the
waveshaping transfer function in the Open Labial spaces, the c:m ratio in the
Lingual spaces and the complex waveforms used in the Compound space.
Brightness controlled the input function amplitude and the pitch of the
inharmonic organ 'spook' in the Open Labial spaces, the modulation index in
the Lingual spaces and the low-pass filter cutoff frequency in the Compound
space. Articulation controlled the harshness of the attack by changing the
'spook' level and the envelope attack rates. All parameter information was
processed in real-time by the SY99. The various instrument spaces could
instantly be accessed at random from the keyboard. Fingerprint No. 2
exemplifies what two finger tips, moving together in 6 degrees of freedom,
can get up to, in a well-structured way controlling over 30 parameters at
once in real-time. We hope it also demonstrates what regular MIDI hardware is
capable of, given the right tools.

Index 3--3:00

Ludger Bruemmer: La cloche sans vallees (excerpt)

The idea of a "cantus firmus"--using a Gregorian melody as the bass line for
a new composition--is the model for "La cloche sans vallees." Just as the
cantus firmus uses an already existing melody, the piece "La vallee des
cloches" from the cycle "Miroirs" by Maurice Ravel is used here as the source
for the new composition.

The intention is to place a new structure above the source piece, so that an
interaction develops between the original piece and the algorithmically
determined parameters of the composition. This results in a mixture of
different time concepts; in the process of perception, the listener switches
between the algorithmic level and the level of the source piece, depending on
which is more significant. For example, if the sound grains are long enough
and not heavily processed, the listener recognizes the source sound; but if
there is a ritardando of short grains, the listener perceives the structure
of the algorithm instead. The most important algorithmic structure is the
ritardando and accelerando (slowing down and speeding up). The first 560
seconds of the piece were completely transposed upward seven times, until
this time window, the transposition, collapses into a click. Beginning with
this click, a ritardando that opens up the small time window again is
performed, exploring its new contents. In the middle of the piece a quote
appears as a mirror (the original plus its retrograde, reflecting the time
continuity), referring to the cycle "Miroirs" and to the symmetric formal
structure of the source piece.

The signal processing techniques applied to the source sounds are reduced to
simple processes: pointer operations, forward and backward reading, and
sampling rate conversion. The piece was composed in 1993 as the last of a
trilogy using compositions of Maurice Ravel. It was generated on the NeXT
network of the Center for Computer Research in Music and Acoustics (CCRMA) at
Stanford University with William Schottstaedt's Common Lisp Music and Rick
Taube's Common Music, as well as with Paul Lansky's RT mixing program.
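
As a rough illustration of the kind of pointer operation mentioned above (not
the actual CLM/Common Music code used for the piece; the interpolation scheme
and test signal are assumptions), a variable-rate pointer gives forward and
backward reading and sampling-rate conversion in one mechanism:

    import math

    def read_with_pointer(source, rate=1.0, start=0.0):
        """Read 'source' by advancing a fractional pointer by 'rate' samples
        per output sample: rate > 1 transposes up, 0 < rate < 1 transposes
        down, and a negative rate reads the buffer backward.  Linear
        interpolation stands in for proper sampling-rate conversion."""
        out, ptr = [], start
        while 0 <= ptr < len(source) - 1:
            i = int(math.floor(ptr))
            frac = ptr - i
            out.append((1.0 - frac) * source[i] + frac * source[i + 1])
            ptr += rate
        return out

    # e.g. a test tone read an octave down, and read backward at normal speed
    tone = [math.sin(2 * math.pi * 440 * k / 44100) for k in range(44100)]
    octave_down = read_with_pointer(tone, rate=0.5)
    backward = read_with_pointer(tone, rate=-1.0, start=len(tone) - 2)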

Index 4--3:00

Mari Kimura: Two Composition Examples

U (The Cormorant) by Mari Kimura (excerpt)

In January of 1991, I saw pictures of cormorants in the Persian Gulf trying
to shake the oil off their bodies. A constant feeling of urgency about the
global environment, and probably my reflections on the subject, affected the
piece. I imagine kinds of sounds that I usually do not identify with my own
violin playing. I try to merge the timbre and the movement of the sounds of
my violin with the electronic sounds very carefully. The electronic sounds
were created using a Yamaha TG77.

Synchronisms No.9 for violin and tape by Mario Davidovsky (excerpt)

In this work, the violin part makes use of instrumental gestures reminiscent
of Romantic, heroic violinistic virtuosity, although the work's rhythmic,
harmonic, and melodic language is very much consistent with the composer's
own characteristic "contemporary language." Davidovsky was the director of
Columbia University's Electronic Music Center from 1981 to 1994. Synchronisms
No. 9 was commissioned by the Massachusetts Council of the Arts and
Humanities, and the tape part was realized at the MIT Media Laboratory and
Columbia University's Electronic Music Center.

Index 5--4:30

D. Gareth Loy: Excerpt from Blood from a Stone for Mathews electronic violin
and interactive computer-controlled synthesis system (1982-1992), violin
performed by Janos Negyesy.

Blood from a Stone is a live-performance piece for violin and interactive
computer-controlled synthesis. It began a decade ago with the building--from
scratch--of an interactive performance system around a Mathews electronic
violin, including a custom-built violin pitch-detector. This was in the days
before MIDI. At a formal level, it is an exploration of a taxonomy of
relationships between composer, performer, and interactive, real-time
computer system. The synthesizer accompaniment is generated live during
performance by transforming musical gestures captured from the electronic violin.

The piece is dedicated to Janos Negyesy, my friend--and through his
consummate artistry, the most eloquent spokesman for my music. The title is
dedicated to my brother, T. H. Loy, whose recent work includes extracting
Neanderthal DNA from the blood on ancient stone tools. Blood from a Stone
appears on the CDCM CD The Virtuoso in the Computer Age II; it was reviewed
in the previous issue of Computer Music Journal and was the inspiration for
the drawing on the cover of this issue.

Index 6--6:10

Barry Truax: Examples of pitch and time transformations using real-time
granular synthesis techniques

These sound examples accompany the article "Discovering Inner Complexity:
Time-shifting and Transposition with a Real-time Granulation Technique" that
will appear in Computer Music Journal 18:2.
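
The basic mechanism behind such granular time-stretching can be sketched as
follows; the grain size, hop, window, and stretch factor here are arbitrary
illustrative choices, not those of Truax's real-time system:

    import math

    def granular_stretch(source, stretch=10.0, grain=2205, hop=441):
        """Time-stretch 'source' by summing short overlapping windowed grains:
        the output write position advances by 'hop' samples per grain, while
        the source read position advances 'stretch' times more slowly, so the
        sound is prolonged without transposing it."""
        window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain - 1))
                  for i in range(grain)]                  # raised-cosine envelope
        out = [0.0] * int(len(source) * stretch)
        write = 0
        while write + grain < len(out):
            read = min(int(write / stretch), len(source) - grain)
            read = max(0, read)
            for i in range(grain):
                out[write + i] += source[read + i] * window[i]
            write += hop
        return out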

(1) Excerpt from the opening of The Wings of Nike (1987) using fixed
granulation of two pairs of phonemes from male and female voices--0:30 minutes.

(2) Material for the "Ocean" movement of Pacific (1990); original ocean waves
cross-faded with a time-stretched version that is gradually low-pass
filtered--1:15 minutes.

(3) Excerpt from Dominion (1991); a time-stretched train whistle leading to
three blasts from a ferry horn (the last of which is stretched), mixed with a
stretched steam whistle--0:30 minutes.

(4) Source material for Basilica (1992) and the opening of the work using
both time stretching and harmonization (adding versions one octave lower and
a twelfth higher than the original)--0:12 and 1:23 minutes.

(5) Material for Song of Songs (1992); the text "I am the rose of Sharon" is
transposed down by a fourth, an octave, and two octaves, first with a slight
degree of time-stretching, followed by a sudden jump to a 50:1 stretching
ratio--0:45 minutes.

(6) Excerpt from the "Evening" movement of Song of Songs (1992) showing
granulation and time-stretching of male and female voices accompanied by the
granulated sound of a fire crackling and a time-stretched monastery
bell--0:53 minutes.

Index 7--2:20

Andrew Horner, James Beauchamp, and Lippold Haken: Sound examples of FM
parameter matching using genetic algorithms

These sound examples accompany the article "Machine Tongues XVI: Genetic
Algorithms and their Application to FM Matching Synthesis" that appeared in
Computer Music Journal 17:4. Using this technique, the "optimal" parameters
for FM synthesis are derived to match a given source sound iteratively using
so-called genetic algorithms. For each instrument used here as an example,
the original tone is played first, followed by the FM reconstructions using
one, three, and five FM carriers, respectively. The test instruments
demonstrated here are trumpet, oboe, tenor voice, viola, and guitar.
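
In spirit (though greatly simplified compared with the multi-carrier matching
described in the article), genetic-algorithm FM matching works roughly as
follows; the error measure, population size, and mutation scheme below are
assumptions for illustration only:

    import numpy as np

    SR, N = 8000, 512
    t = np.arange(N) / SR

    def fm_tone(carrier, ratio, index, f0=220.0):
        """Simple two-operator FM: a carrier at carrier*f0 Hz is modulated
        by a sine at ratio*f0 Hz with modulation index 'index'."""
        return np.sin(2 * np.pi * carrier * f0 * t
                      + index * np.sin(2 * np.pi * ratio * f0 * t))

    def spectral_error(a, b):
        """Sum of squared differences between magnitude spectra."""
        return float(np.sum((np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b))) ** 2))

    def match_fm(target, pop_size=40, generations=60, seed=0):
        """Toy genetic algorithm: each genome is (carrier, ratio, index).
        The best half of each generation survives; children are mutated copies."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform([0.5, 0.5, 0.0], [4.0, 4.0, 10.0], size=(pop_size, 3))
        for _ in range(generations):
            errors = np.array([spectral_error(fm_tone(*g), target) for g in pop])
            survivors = pop[np.argsort(errors)[: pop_size // 2]]
            children = survivors + rng.normal(0.0, 0.1, size=survivors.shape)
            pop = np.vstack([survivors, children])
        return pop[0]                        # best genome found

    # stand-in target: in practice this would be an analyzed instrument tone
    target = fm_tone(2.0, 1.0, 5.0)
    print(match_fm(target))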

Index 8--2:54

Perry R. Cook: Sound examples from the Spasm, Lector and Singer programs

These sound examples accompany Perry Cook's article in this issue of Computer
Music Journal describing his Spasm, Lector, and Singer physical-model-based
speech and singing synthesis systems.

Example 1. Phoneme synthesis: "ahh eee ooo" (repeated).
Example 2. What happens if there's no pitch deviation (played once).
Example 3. Fricative consonants: "fff, sss, shh, xxx" (played twice).
Example 4. Diphone synthesis: "yah yoo ooee lah rah" (played twice).
Example 5. Nasal diphones: "mah nah ngah mee noo" (played once).
Example 6. Voiced plosives: "bah dah gah bee goo" (played twice).
Example 7. Glottal interpolation: crescendo Example (played twice).
Example 8. Singer code: "Sheila" (played twice).
Example 9. Lector: "requiem" spoken (played twice).
Example 10. Lector: "requiem" sung (played twice).
Example 11. Connected singing: vocal exercise (played once).
Example 12. Voice quality example: yodeling (played once).
Example 13. Putting it all in context--a duet for the original (1959)
    Kelly-Lochbaum-Mathews "Daisy" and Singer (played once).

Index 9--3:22

James Tenney: Collage #1 ("Blue Suede") (1961)

This piece, long regarded as one of the "classics" of American musique
concrete, was realized in the electronic music studio at the University of
Illinois at Urbana-Champaign in April of 1961 using a recording of Elvis
Presley's rendition of Blue Suede Shoes. It was first released on Musicworks
cassette number 36 and is taken here from the CD James Tenney: Selected Works
1961-1969 reviewed in this issue of Computer Music Journal. This version is a
remastering of the original analog tape master. The CD is available as ART
1007 from Artifact Records, 1374 Francisco Street, Berkeley, California 94702
USA. Copyright (c) 1992 by James Tenney. Used by permission.

Index 10--3:00

Neil B. Rolnick: Excerpt from Macedonian AirDrumming for MIDI performance
system and Palmtree AirDrums (1990). Performed by Neil B. Rolnick.

Macedonian AirDrumming is a solo performance piece using musical sources from
the Balkan peninsula. On a trip to Yugoslavia in 1989, my interest in the
traditional music of the region, which I had earlier explored in the
composition Balkanization, was rekindled and deepened. The samples used in
Macedonian AirDrumming include rhythmic patterns and melodic fragments played
by a Macedonian drum (tapan), flute (duduk), and fiddle (cemene). The AirDrum
MIDI controllers provide me with a whole new set of physical gestures to
transduce into musical gestures. The problem of composing physical gestures
that make sense to a performer led me to design a number of different
combinations of sounds and movements, which have evolved as I have toured and
performed the piece. This excerpt is taken from the CD Macedonian
AirDrumming, reviewed in this issue of Computer Music Journal. A complete
recording of Macedonian AirDrumming is available on the CD BCD 9030 from
Bridge Records, Inc. GPO Box 1864, New York, New York 10116 USA. Copyright
(c) 1992 Bridge Records, Inc. Used by permission.

Index 11--3:40

Denis Smalley: Excerpt from Wind Chimes

Wind Chimes was commissioned by the South Bank Centre in London and realized
at the studios of the Groupe de Recherches Musicales in Paris and the
University of East Anglia in the UK, and was completed in 1987. The piece is
based on the chance find of a set of ceramic wind chimes whose harmonies and
timbres attracted the composer. The sounds of these chimes were processed and
combined with various other natural and synthesized sounds into what the
composer calls "a strongly expressive narrative built around energies and
gestures, materials made up of different substances, and different types of
motions in space." This excerpt is taken from the CD Computer Music Currents
5, Wergo CD WER 2025-2, reviewed in this issue of Computer Music Journal and
distributed through Wergo and Harmonia Mundi. Wind Chimes is also released
(together with several other pieces of Mr. Smalley's), on the CD Impacts
Interieurs, which is available as CD IMED 9209 on the empreintes DIGITALes
label distributed by Diffusion i Midia, 4487, rue Adam, Montreal, Quebec H1V
1T9 Canada.

Volume 16:1 Four Compositions in Honor of John Pierce

The soundsheet contains four new compositions written for John Pierce. See
Computer Music Journal 15:4 "Dream Machines for Computer Music: In Honor of
John Pierce's 80th Birthday," Winter, 1991. These program notes were provided
by the composers.

Index 12--3:30

Amnon Wolman: FORJOHN (excerpt)

FORJOHN is based on five short excerpts of ud playing and singing by several
Egyptian singers and performers. These were manipulated and processed using
the Studer/Editech Dyaxis digital mixer at the Northwestern Computer Music
Studio. FORJOHN represents my ongoing interest in the use of folk material in
a stylized environment. It was written for John Pierce, celebrating our
friendship.

Index 13--3:15

Jean-Claude Risset: Echo

Echo is dedicated to John Pierce. The title alludes to the first
communication satellite--a vision which John Pierce turned into a milestone
in the history of communication. The sounds have been processed by adding
delayed echoes. If the echoes are very close in time, the process yields a
comb filter effect. In the piece, the "echoes" are often transposed in
frequency. At the end, a clarinet motive is reverberated into a series of
echoes going into nothingness. One could also say that the clarinet-like
sounds echo the harp-like sounds, and vice versa. Last, the title alludes to
the nymph Echo from Greek mythology, who, initially too talkative and
distracting, was deprived of speech; she could then only repeat, reverberate,
echo. Echo, a symbol of sound reflection, fell in love with Narcissus, who
was fond of his own reflected image.
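
The comb-filter effect mentioned above arises whenever a signal is mixed with
a closely delayed copy of itself; a minimal feedforward version (the delay
time and gain here are arbitrary choices) looks like this:

    def comb_filter(signal, delay_samples=64, gain=0.9):
        """Feedforward comb filter: y[n] = x[n] + gain * x[n - delay].
        Very short delays (a few milliseconds) color the spectrum with
        regularly spaced peaks and notches; longer delays are heard as
        discrete echoes."""
        out = []
        for n, x in enumerate(signal):
            delayed = signal[n - delay_samples] if n >= delay_samples else 0.0
            out.append(x + gain * delayed)
        return out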

Echo was realized in the Equipe d'Informatique Musicale of the Laboratoire
de Mecanique et d'Acoustique in Marseille. The sounds of the piece were
developed from three kinds of sound material: clarinet, Celtic harp, and
sounds synthesized by computer using the Music-5 program (as adapted for IBM
PC-compatible computers by my colleague Daniel Arfib). Thus, shortly after
the beginning of the piece, clarinet sounds have been transformed by sharp
resonant filtering, adding a harp-like echo, and by slowing down without
pitch transposition. Next, one hears synthetic trembling sounds and ascending
harmonic arpeggi--the latter generated with the help of a simple
compositional subroutine. The last section, where a host of echoes dwindle
away, was also realized with Music-5 (used as a processing and mixing
program). Most of the sounds in the piece were transformed using the SYTER, a
real-time digital audio processor designed by Jean-Francois Allouis in the
Groupe de Recherches Musicales in Paris.

Index 14--3:50

Paul Lansky: The Sound of Two Hands (excerpt)

Continuing my interest in applying the power of high technology to the
simple sounds of the world around us, I decided to make a computer percussion
piece from the simplest of our carbon-based percussion instruments, the sound
of two hands clapping. I used this source to create lots of different kinds
of percussive sounds, emulating the sounds of hands hitting lots of different
kinds of resonant objects (mostly those found in our kitchen). These sounds
were processed and mixed using the cmix software package on a NeXT computer
at my home and the studios of Princeton University.

Index 15--3:30

Clarence Barlow: OTOdeBLU

My friend and former student Georg Hajdu has done a lot of research into
equal temperaments of various kinds, his particular favorite being the
division of the octave into 17 equal parts. When Amnon Wolman contacted me
about making a contribution to celebrate John Pierce's 80th birthday, I found
the impulse to sit down and write a collection of short pieces for two
interleaved pianos tuned to this 17-tone scale (with the same white keys and
different black keys). It took me eight hours with my program AUTOBUSK, not
counting a couple of days of preparation and post-processing. At first I
wanted to create a harmonic grammar involving the 7th and 11th partials, but
decided for lack of time to leave it for a later 17-tone piece. The grammar
here is Grameanus-based (= Glareanus + Rameau). Also, it only uses ten of the
17 tones, all available on one of the two pianos. The title is odd--could it
be a reference to the computer program that was used in the generation of the
piece (everything but a very short quote from the song "16 Ton[e]s" is
algorithmic in origin), or perhaps to the month of its completion (October),
or to the brand new octogenarian in whose honor it was written? In any case,
otode blu is Japanese, I am told, for "colored blue by sound" and simply came
to me "out of the blue!" This recording was realized using an Akai sampler
and Atari computer at the Institute for Sonology in The Hague, The Netherlands.
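
For reference, a 17-tone equal temperament divides the octave into 17 equal
frequency ratios, so each step is a factor of 2^(1/17), about 70.6 cents. A
small sketch (the 440 Hz reference is an arbitrary choice, not Barlow's
tuning):

    def tet17_freq(step, base=440.0):
        """Frequency of a pitch 'step' 17-TET steps above (or below) 'base'.
        Each step is a ratio of 2**(1/17), about 70.6 cents."""
        return base * 2 ** (step / 17)

    # one octave of the 17-tone scale starting from A 440
    print([round(tet17_freq(i), 2) for i in range(18)])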

The editors wish to thank Amnon Wolman for his co-production, and the staffs
of the CCRMA Center in Stanford, California, and Anckarström Records in
Gothenburg, Sweden for the use of their production facilities in the
preparation of the soundsheet master.

Index 16--2:42

Charles R. Sullivan: Sound examples to accompany the article "Extending the
Karplus-Strong Algorithm to Synthesize Electric Guitar Timbres with
Distortion and Feedback" in Computer Music Journal 14(3): 26-37, 1990.

These examples demonstrate the power and flexibility of the author's
extensions to the well-known Karplus-Strong plucked string synthesis algorithm.
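
For readers unfamiliar with the algorithm being extended, here is the
classic, unextended Karplus-Strong plucked string in a few lines of Python;
Sullivan's distortion and feedback extensions are described in the article
and are not shown here:

    import random
    from collections import deque

    def karplus_strong(freq, duration, sr=44100):
        """Classic Karplus-Strong: a delay line the length of one period is
        filled with noise, and each recirculated sample is replaced by the
        average of two successive samples, so the high harmonics of the
        'string' decay faster than the fundamental."""
        n = int(sr / freq)                        # delay-line length sets the pitch
        line = deque(random.uniform(-1.0, 1.0) for _ in range(n))
        out = []
        for _ in range(int(duration * sr)):
            first = line.popleft()
            out.append(first)
            line.append(0.5 * (first + line[0]))  # two-point averaging filter
        return out

    tone = karplus_strong(110.0, 1.0)             # one second of a low string tone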

Example 1. The Star Spangled Banner (1:28) (for a score example, see Computer
Music Journal 14(3), p. 70).

Example 2. Two string tones from simplified versions of the algorithm--one
with no decay, and one with frequency-independent decay (0:08).

Example 3. Seven string tones using the full algorithm and varying
parameters (0:32).

Example 4. String tone demonstrating the effect of changing the delay line
length during the duration of a note (0:04).

Example 5. Demonstration of glissando on a bass guitar theme (0:12).

Index 17--3:32

Xavier Serra and Julius O. Smith III: Sound examples to accompany the article
"Spectral Modeling Synthesis: A Sound Analysis/Synthesis System based on
Deterministic plus Stochastic Decomposition" in Computer Music Journal 14(4)
1990.

The three parts of the examples illustrate the use of the SMS technique on
string, voice and percussion timbres.
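
At its core, the deterministic-plus-stochastic decomposition separates each
analysis frame into prominent sinusoidal components (the deterministic part)
and whatever remains (the stochastic residual). A very compressed sketch of
that idea follows; the peak picking and windowing are simplified assumptions,
not the published SMS analysis:

    import numpy as np

    def sms_frame_split(frame, n_peaks=20):
        """Crude deterministic/stochastic split of one analysis frame:
        keep the n_peaks strongest spectral bins as the 'deterministic'
        component and treat everything else as the 'stochastic' residual.
        (Real SMS tracks peak frequencies and amplitudes across frames and
        models the residual spectrally; this only shows the decomposition.)"""
        windowed = frame * np.hanning(len(frame))
        spectrum = np.fft.rfft(windowed)
        keep = np.argsort(np.abs(spectrum))[-n_peaks:]       # strongest bins
        det_spec = np.zeros_like(spectrum)
        det_spec[keep] = spectrum[keep]
        deterministic = np.fft.irfft(det_spec, n=len(frame))
        stochastic = windowed - deterministic
        return deterministic, stochastic

    # e.g. split one 1024-sample frame of a noisy test tone
    n = np.arange(1024)
    test = np.sin(2 * np.pi * 440 * n / 44100) + 0.1 * np.random.randn(1024)
    det, sto = sms_frame_split(test)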

Part 1. Guitar passage (1:24).
    Example 1. Original sound.
    Example 2. Deterministic component.
    Example 3. Stochastic component.
    Example 4. Deterministic plus stochastic components.
    Example 5. Frequency transposition by a factor of 0.3.
    Example 6. Frequency transposition by a factor of 0.7 with a stretching of
        the partials.
    Example 7. Time-varying glissando with a stretching of the partials.
    Example 8. Time-varying time scale.
    Example 9. Time compression by a factor of 2 with time-varying time scale
        and a stretching of the partials.
    Example 10. Time compression by a factor of 2 and frequency transposition by
        a factor of 0.4.
    Example 11. Time compression by a factor of 2 with a glissando down.

Part 2. Speech phrase (1:08).
    Example 1. Original sound.
    Example 2. Deterministic component.
    Example 3. Stochastic component.
    Example 4. Deterministic plus stochastic components.
    Example 5. Frequency transposition by a factor of 0.6.
    Example 6. Compression of the frequency evolution and frequency
        transposition by a factor of 0.4.
    Example 7. Frequency transposition by a factor of 0.4 with a stretching of
        the partials.
    Example 8. Cross-fade from the deterministic to the stochastic component
        during the phrase.
    Example 9. Time compression by a factor of 3, compression of the frequency
        evolution and frequency transposition by a factor of 0.4.
    Example 10. Time compression by a factor of 3 and compression of the
        frequency evolution.
    Example 11. Time expansion by a factor of 3 of the stochastic component with
        a time-varying time scale.

Part 3. Conga phrase (0:54).
    Example 1. Original sound.
    Example 2. Deterministic component.
    Example 3. Stochastic component.
    Example 4. Deterministic plus stochastic components.
    Example 5. Compression of the frequency evolution.
    Example 6. Compression of the frequency evolution and frequency
        transposition by a factor of 0.3.
    Example 7. Compression of the frequency evolution and frequency
        transposition by a factor of 2.
    Example 8. Stretching of the partials.
    Example 9. Glissando down.
    Example 10. Glissando up.
    Example 11. Time-varying mix of noise component.
    Example 12. Time-varying time scale.
    Example 13. Time-varying time scale (inverse of Example 12).
    Example 14. Time-varying time scale with a time-varying stretching of the partials.
    Example 15. Manipulation of the frequency evolution.
    Example 16. Manipulation of the frequency evolution (inverse of Example 15).
    Example 17. Time expansion by a factor of 3.

Index 18--3:26

Peter S. Langston: Music examples to accompany the article "IMG/1: An
Incidental Music Generator"

These sound examples are taken from the CD supplement to Computing Systems,
The Journal of the USENIX Association, 3(2), Spring 1990, where they are
described in the article "Little Languages for Music" by Peter S. Langston.
They are used here by permission of the author. The sounds were generated by
commercial MIDI equipment (synthesizers, samplers, and drum machines), with
the exception of the "voice" in Example 6, which was produced by a DECTalk
DTC01 speech synthesizer; all were driven by the author's improvisation
software running on a Sun Microsystems workstation.

Example 1. Samba Batucada, a classic variation of the samba generated using
the DP drum pattern description language (0:12).

Example 2. Empty Bed Blues, the accompaniment was generated automatically
from cc-format chord charts; the lead (MIDI) vibraphone voice was played by
the author (0:30).

Example 3. Two Reggae Vamps, the results of executing the same MUT language
script twice (0:32).

Example 4. Boogie woogie, samba and bluegrass accompaniments of the same
chord chart. The chord chart was generated by IMG/1 for a boogie woogie
improvisation, and subsequently re-labeled (with a text editor) for samba and
then bluegrass styles and re-interpreted by IMG/1. The harmonic structure of
the boogie woogie is luckily appropriate for the other styles (1:02).

Example 5. Drum part generated by the DDM program using the technique of
stochastic binary subdivision (0:28).

Example 6. Scat singing in an Indian scale, the drone was played by the
author and the voice part generated using DDM (and stochastic binary
subdivision) for melody improvisation (0:42).
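
The stochastic binary subdivision used in Examples 5 and 6 can be sketched
roughly as follows; the density parameter and stopping rule are assumptions
made for illustration, not the DDM program's actual rules:

    import random

    def stochastic_binary_subdivision(length=16, density=0.6, seed=None):
        """Generate a binary rhythm by recursively splitting a span of pulses
        in half: each span is either subdivided again (with probability
        'density') or filled with a single event followed by rests."""
        rng = random.Random(seed)

        def fill(span):
            if span > 1 and rng.random() < density:
                half = span // 2
                return fill(half) + fill(span - half)
            return [1] + [0] * (span - 1)      # one hit at the start of the span

        return fill(length)

    print(stochastic_binary_subdivision(length=16, density=0.7, seed=3))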

Index 19--3:00

Michael Gogins: Music examples to accompany the article "Iterated Functions
Systems Music"

This composition was generated using the author's software on an IBM
PC-compatible personal computer and a commercial wavetable MIDI synthesizer.
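
The mapping from an iterated function system to notes can be sketched roughly
as follows; the affine maps and the pitch/velocity mapping are invented for
illustration and are not those of the article:

    import random

    # Each map is (a, b, c, d, e, f): (x, y) -> (a*x + b*y + e, c*x + d*y + f)
    MAPS = [
        (0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.5, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.25, 0.5),
    ]

    def ifs_notes(n=64, seed=7):
        """Chaos-game iteration of an iterated function system: apply a
        randomly chosen affine map to the current point, then read the point
        off as a note (x -> MIDI pitch, y -> velocity, index -> onset time)."""
        rng = random.Random(seed)
        x, y = 0.0, 0.0
        notes = []
        for i in range(n):
            a, b, c, d, e, f = rng.choice(MAPS)
            x, y = a * x + b * y + e, c * x + d * y + f
            pitch = 48 + int(x * 24)                     # two octaves above C3
            velocity = 40 + int(y * 60)
            notes.append((i * 0.25, pitch, velocity))    # onsets every 1/4 beat
        return notes

    for onset, pitch, velocity in ifs_notes(8):
        print(onset, pitch, velocity)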

Example 1. HEX7, this first selection is the opening two minutes of the
seventh of the hexagonally symmetrical IFS systems described in the article (1:52).

Example 2. SQUARE5, this is the final minute of the rectangularly symmetrical
IFS system described in the article text (1:10).

Index 20--0:30

Peter S. Langston: Reprise

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

___Stephen Travis Pope, Editor--Computer Music Journal, MIT Press (and)
___Research Associate--Center for New Music and Audio Technologies, UCB
___email: stp@CNMAT.Berkeley.edu; telephone: (+1-510) 644-3881
___http://www.cnmat.berkeley.edu/~stp/  (personal WWW home page)
___http://www-mitpress.mit.edu/Computer-Music-Journal/CMJ.html (CMJ home)



[=] © 1995 Peter Langston []