
1 Motion in Sound: Designing Sound for Interactive Dance Performance Dr. Dan Hosken Associate Professor of Music California State University, Northridge Presented at: ATMI 2006 San Antonio, TX September 16, 2006

2 Purpose
Present a somewhat simplified and useful approach to creating for the interactive dance medium
Facilitate collaboration between students of dance and students of music

3 Objectives:
Give an overview of the hardware and software components of a camera-based interactive dance/music system
Present a loose taxonomy of motion parameters and mapping types
Suggest some useful mappings between motion parameters and sound element parameters
Illustrate these mappings using examples of my recent work with the Palindrome IMPG

4 General System Overview
Camera trained on dancer(s) is connected to computer
Video Analysis Software abstracts motion data in realtime
Motion Data are passed to Sound Software
Sound Software maps incoming motion data to sound element parameters in realtime

5 Overview w/ bad clipart
[diagram: camera → video computer → (Ethernet) → audio computer]

6 Sound Generation Software
Max/MSP (Cycling ‘74)
PD (Miller Puckette)—free!
SuperCollider (J. McCartney)—free!
Reaktor (Native Instruments)
…and any software that can receive data and produce sound in realtime

7 Video Analysis Software
EyeCon (Frieder Weiss)
EyesWeb (eyesweb.org)—free!
Jitter (Cycling ‘74)
SoftVNS (David Rokeby)
Cyclops (Eric Singer/Cycling ‘74)
TapTools (Electrotap)
cv.jit (Jean-Marc Pelletier)
Eyes (Rob Lovel)—free!

8 Specific System Overview
B/W camera (w/ IR filter) sends analog data to video computer (w/ framegrabber board)
EyeCon (w/ custom drivers) abstracts motion data in realtime
Motion data are sent via Ethernet using Open Sound Control (OSC) to music computer
Max/MSP maps incoming motion data to suitable sound element parameters
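The OSC link in this pipeline can be pictured with a minimal sketch of how one motion value might be packed for the wire, following the OSC 1.0 encoding rules (null-padded address pattern, a type-tag string beginning with ",", then big-endian arguments). The address `/eyecon/field1/dynamic` is a hypothetical name, not one used by EyeCon:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, as OSC requires."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, value: float) -> bytes:
    """Encode one OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")                 # type-tag string: one float
            + struct.pack(">f", value))      # big-endian 32-bit float

# A normalized motion value ready to send over UDP/Ethernet:
packet = osc_message("/eyecon/field1/dynamic", 0.73)
```

In practice a library such as python-osc (or Max's `udpreceive`/OSC objects on the receiving end) would handle this framing; the sketch only shows why OSC travels easily over Ethernet.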

9 Objectives (redux):
Give an overview of the hardware and software components of a camera-based interactive dance/music system
Present a loose taxonomy of motion parameters and mapping types
Suggest some useful mappings between motion parameters and sound element parameters
Illustrate these mappings using examples of my recent work with the Palindrome IMPG

10 Definitions (1)
Motion Parameter: made up of specified data abstracted from part or all of video, e.g.,
Height
Width
Dynamic
Sound Element: a distinct, coherent sonic behavior created by one or more synthesis or processing techniques, e.g.,
A Low Drone created by FM Synthesis
Time-stretched text created by Granulation
Percussive patterns created by Sample Playback

11 Definitions (2)
Sound Element Parameter: a parameter of a synthesis/processing technique, e.g.,
Modulation Frequency of a simple FM pair
Grain Size of a granulated sound file
Ir/regularity of tempo in a rhythmic pattern
Mapping: the connection between a motion parameter and a sound element parameter, e.g.,
Height → modulation frequency of FM
Width → grain size of granulated sound file
Dynamic → irregularity of tempo
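A mapping of this kind is, at bottom, a scaling function from a normalized motion value onto a parameter range. A sketch (the ranges and names are my own, not from the talk): linear scaling suits amplitude-like targets, while exponential scaling suits frequency-like targets, since pitch is perceived roughly logarithmically.

```python
def map_linear(x: float, lo: float, hi: float) -> float:
    """Map a normalized motion value x in [0,1] linearly onto [lo, hi]."""
    x = min(max(x, 0.0), 1.0)            # clamp noisy tracking data
    return lo + x * (hi - lo)

def map_exp(x: float, lo: float, hi: float) -> float:
    """Map x in [0,1] exponentially onto [lo, hi]; better for frequencies."""
    x = min(max(x, 0.0), 1.0)
    return lo * (hi / lo) ** x

# e.g. height -> modulation frequency of a simple FM pair:
mod_freq = map_exp(0.5, 50.0, 800.0)     # geometric midpoint: 200 Hz
```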

12 Definitions (3)
Scene: a group of motion parameters, sound elements, and mappings between them
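In code, a scene can be thought of as a set of (motion parameter, scaling, sound parameter) triples evaluated once per video frame. A sketch with hypothetical names, standing in for what EyeCon plus Max/MSP do between them:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Mapping:
    motion_param: str                   # e.g. "height", "width", "dynamic"
    target: Callable[[float], None]     # setter on some sound element
    scale: Callable[[float], float]     # motion value -> parameter value

@dataclass
class Scene:
    mappings: List[Mapping] = field(default_factory=list)

    def update(self, motion: Dict[str, float]) -> None:
        """Apply one frame of motion data to every mapped sound parameter."""
        for m in self.mappings:
            if m.motion_param in motion:
                m.target(m.scale(motion[m.motion_param]))

# hypothetical use: height drives an FM drone's modulation frequency
state = {}
scene = Scene([Mapping("height",
                       lambda v: state.update(mod_freq=v),
                       lambda x: 50.0 + x * 750.0)])
scene.update({"height": 0.5})           # state["mod_freq"] == 425.0
```

Stepping between scenes, as EyeCon's sequencer does, would then just mean swapping which `Scene` receives the frame data.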

13 EyeCon Interface (1)
Field: can measure height or width or dynamic or…
Touchlines: detect crossing and position on line

14 EyeCon Interface (2)
Fields and lines are mapped to MIDI data (or OSC)
Sequencer steps through “scenes”

15 Taxonomy of Motion Parameters
Body Parameters: position independent, “attached” to body
Height
Width
Dynamic
Stage Parameters: position dependent, “attached” to stage
Left-right position
Touchlines
Extremely Narrow Fields

16 Parameter Type Examples
Stage Parameter (position): Scene 3 from Brother-Sister Solo
Julia Eisele, dancer/choreographer
Stuttgart, June 2005
Body Parameter (Dynamic): Conversation
Robert Wechsler, dancer/choreographer
Julia Eisele, dancer
Stuttgart, June 2005

17 Primary/Secondary Mappings
Primary Mapping: controls dominant sonic feature
Secondary Mapping: …is secondary…
Example: Scene 3 from Brother-Sister Solo
Primary mapping: position → position in sound “landscape”
Secondary mapping: dynamic → disturbance of drone
Secondary mapping: width → loop size/speed of segment within sound file

18 Some Useful Sound Elements
Soundfile (trigger playback)
Low FM Drone (modulation frequency)
Mid-Range Additive Cluster (frequency deviation of components from norm)
Soundfile Granulation (position in file, pitch change)
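The Low FM Drone element reduces to the classic simple-FM pair, y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where the modulation frequency f_m is the parameter a motion value would drive. A sketch generating one block of samples; the carrier frequency and index values are my own illustrative choices, not taken from the pieces:

```python
import math

def fm_drone(mod_freq: float, n: int, sr: int = 44100,
             carrier: float = 55.0, index: float = 3.0) -> list:
    """Generate n samples of a simple FM pair: one sine modulating another's phase.

    mod_freq is the sound element parameter a motion value (e.g. dynamic)
    would control from frame to frame."""
    out = []
    for i in range(n):
        t = i / sr
        phase = (2 * math.pi * carrier * t
                 + index * math.sin(2 * math.pi * mod_freq * t))
        out.append(math.sin(phase))
    return out

block = fm_drone(mod_freq=110.0, n=64)
```

In the actual system this computation lives inside a Max/MSP patch; the point of the sketch is only that "modulation frequency" is a single scalar, which is what makes it a natural mapping target.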

19 Sound Element Mappings (1)
A Human Conversation (in progress)
Scene 7-8:
Dynamic → Granulated Text (playback rate)
Scene 9:
Dynamic (left) → Granulated Text (playback rate)
Dynamic (right) → Granulated Text (playback rate)

20 A Human Conversation
Robert Wechsler (Palindrome), choreographer/dancer
J’aime Morrison (CSUN), choreographer/dancer
Dan Hosken, composer and sound programmer
Work session, CSUN, June 23, 2006

21 Sound Element Mappings (2)
Perceivable Bodies (Emily Fernandez)
Scene 3a:
Position → Granulated Text (position in file) [Primary]
Width → Granulated Text (grain duration)
Dynamic → Low FM Drone (mod frequency)
Scene 3b:
Position → Phase Voc File (position in file) [Primary]
Width → Phase Voc File (loop length/rate)
Dynamic → Low FM Drone (mod frequency)
Scene 4:
Dynamic → Granulated Noise (density) [Primary]
Dynamic → Granulated Noise (position in file)
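The granulation parameters named across these scenes (position in file, grain duration, density) can be pictured with a toy granulator that windows short grains out of a source buffer. A sketch, not Hosken's actual patch; the Hann window and buffer contents are my own assumptions:

```python
import math

def grain(source: list, position: float, dur_samples: int) -> list:
    """Cut one Hann-windowed grain from a source buffer.

    position in [0,1] picks the grain's start point in the file (the
    parameter mapped from stage position in Scene 3a); dur_samples is
    the grain duration (mapped from width)."""
    start = int(position * max(len(source) - dur_samples, 0))
    n = dur_samples
    return [source[start + i] * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

# 100 ms of a 440 Hz sine standing in for a recorded text file:
src = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(4410)]
g = grain(src, position=0.25, dur_samples=256)
```

Density, the primary parameter in Scene 4, would then be how many such grains are launched per second and overlapped in the output stream.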

22 Perceivable Bodies
Emily Fernandez, choreographer/dancer
Frieder Weiss, projections and interactive programming
Dan Hosken, composer and sound programmer
World Premiere at Connecticut College, April 1, 2006

23 dan.hosken@csun.edu
Examples shown can be found: http://www.csun.edu/~dwh50750/Papers-Presentations/
Full Pieces can be found: http://www.csun.edu/~dwh50750/Music/
Other Examples of Palindrome’s work: http://www.palindrome.de/

24 Max/MSP Screenshot

25 PD Screenshot

26 Reaktor Screenshots

27 EyeCon Screenshot

28 EyesWeb Screenshot

29 Jitter Screenshot

30 Cyclops Screenshot

