Pattern Language

Program Notes
Pattern Language is a series of compositions that can be played live via laptop, shown as videos with accompanying sound, or both.

For EMM 2017, I would like the selection committee to consider a live laptop audiovisual performance based on the three example works below, each of which is composed using custom software developed by the artist and explores various approaches to generating or extracting patterns from autonomous processes.
Alternatively, one or more of the example works could be screened as stand-alone videos.

The works linked on Vimeo are lower resolution than what would be performed or exhibited; HD versions can be supplied if the work is selected.

If a live performance is considered, I can provide all needed equipment.

Below are three example works from the series; running times are indicated. The live performance runs approximately 15 minutes.


Pattern Language I
Running time: 2:38
Based on Steve Reich's composition, "Piano Phase".

Custom software created by the composer generates a single pair of identical note patterns that shift out of phase over time. This version is built up from four-voice polyphony with real-time control over individual steps within each sequence, and each discrete audio voice is in turn used as the source for generating the visual patterning elements. Each "performance" is unique and recorded live. The original Max patch was created by Akihiko Matsumoto ( http://akihikomatsumoto.com/maxmsp/pianophase.html ) and customized into an audiovisual instrument by David Fodel (with permission).
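
For illustration only, a minimal sketch of the phasing idea (written here in Python rather than the actual Max/MSP patch, and with illustrative timing values) might look like this: two identical note loops are read at slightly different step durations, so the lines gradually drift out of phase.

# Minimal sketch of the "Piano Phase" phasing process (illustrative only).
# Two identical note loops are read at slightly different step durations,
# so the two lines gradually drift out of phase, as in Reich's original.

PATTERN = [64, 66, 71, 73, 74, 66, 64, 73, 71, 66, 74, 73]  # Reich's 12-note loop (MIDI note numbers)

def note_at(t, step_dur):
    """Return the note of the loop sounding at time t for a given step duration."""
    step = int(t / step_dur) % len(PATTERN)
    return PATTERN[step]

def phase_pair(t, base_dur=0.15, drift=0.002):
    """Voice 1 keeps a steady step duration; voice 2 runs slightly faster,
    so the two identical lines slowly shift out of phase over time."""
    return note_at(t, base_dur), note_at(t, base_dur - drift)

if __name__ == "__main__":
    for i in range(12):
        t = i * 0.15
        print(f"t={t:4.2f}s  voices: {phase_pair(t)}")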

Pattern Language II
Running time: 3:22
Based on Leon Theremin/Henry Cowell's "Rhythmicon".

This live audiovisual work was generated using a custom software version of the famous pattern-generating instrument, the "Rhythmicon", commissioned by Henry Cowell and built by Leon Theremin in 1932. The software version separates the voices/timbres from the pitches, allows real-time modulation of the individual voices, and generates simultaneous real-time visual elements from the same engine. Each "performance" is unique and recorded live.
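
For readers unfamiliar with how the instrument ties rhythm to pitch, the following is an illustrative Python sketch (not the artist's actual software, and with assumed values for the fundamental and cycle length): voice n sounds n evenly spaced pulses per cycle at the nth harmonic of a fundamental.

# Illustrative sketch of the Rhythmicon's pattern logic (not the actual software).
# Voice n sounds n evenly spaced pulses per cycle at the nth harmonic of a fundamental.

FUNDAMENTAL_HZ = 55.0   # assumed fundamental (A1), illustrative only
CYCLE_SECONDS = 4.0     # assumed length of one rhythmic cycle

def rhythmicon_events(num_voices=8):
    """Yield (time, frequency, voice) triples for one rhythmic cycle."""
    for voice in range(1, num_voices + 1):
        freq = FUNDAMENTAL_HZ * voice          # pitch: nth harmonic
        for pulse in range(voice):             # rhythm: n pulses per cycle
            t = pulse * CYCLE_SECONDS / voice
            yield (t, freq, voice)

if __name__ == "__main__":
    for t, freq, voice in sorted(rhythmicon_events()):
        print(f"t={t:5.2f}s  voice {voice}  {freq:6.1f} Hz")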

Pattern Language III
Running time: 5:00

Using computer vision algorithms, the movement of the snow is segmented into individual snowflakes and tracked across the field of view of a high-speed (1000 fps) industrial camera. The vertical position of each snowflake, when it is detected, determines the pitch of a note, while its horizontal position is tracked and sends a value that controls the timbre of multiple voices over time. In this way the motion of the snow is given voice. The custom software allows for real-time modulation of the individual voices and simultaneous viewing of the source material and the computer vision overlay. Each "performance" is unique and recorded live.
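
To illustrate the mapping described above (only as a sketch; the actual computer-vision pipeline and parameter ranges are not reproduced here, and the frame size and pitch range are assumed), detected snowflake positions could be mapped to pitch and timbre roughly as follows:

# Illustrative sketch of the snowflake-to-sound mapping (assumed frame size and ranges).
FRAME_W, FRAME_H = 1280, 720    # assumed camera frame size in pixels
PITCH_LO, PITCH_HI = 36, 96     # assumed MIDI pitch range

def flake_to_note(x, y):
    """Map a detected snowflake centroid (x, y) to (MIDI pitch, timbre value)."""
    pitch = PITCH_LO + (1 - y / FRAME_H) * (PITCH_HI - PITCH_LO)  # higher in the frame -> higher pitch
    timbre = x / FRAME_W                                          # horizontal position -> timbre control in [0, 1]
    return round(pitch), round(timbre, 3)

if __name__ == "__main__":
    # Hypothetical detections (pixel centroids) from one high-speed frame.
    for x, y in [(120, 650), (640, 360), (1200, 40)]:
        print((x, y), "->", flake_to_note(x, y))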

Biographical Sketch
David Fodel is an artist, educator, curator, and writer whose eclectic installations, live performances, award-winning sound design, and video works have been exhibited, screened, and performed internationally, including at ISEA, Hong Kong; TiMaDi, London, England; Post-Screen Festival, Lisbon, Portugal; Festival ECUA-UIO, Quito, Ecuador; Future Places Festival, Porto, Portugal; and Transmediale, Berlin, Germany. His work has been written about in Wired Magazine and published by the Experimental Television Center, New Media Caucus, Post-Screen Festival, and Sekans Cinema Journal.
Residencies include the National Center for Contemporary Art, Moscow, STEIM, Amsterdam, and the Experimental Television Center and Signal Culture in New York.
He teaches Electronic Art and Electronic Performance at the University of Colorado, Denver, curates the Lafayette Electronic Arts Festival [LEAF], and is founder of the MediaLive Festival.