1. Background — a number of institutions, teaching in a number of subject areas, research in both music and computing
2. Singing analysis and synthesis
3. Sonic art
4. Social music making
   1. System development
   2. Enabling music
   3. Performance spaces
   4. Interfaces for commercial systems
5. Conclusions
Interest in novel interfaces for music synthesis and performance
Voice-controlled sound synthesis
cellMusic is a real-time, wireless, distributed composition and performance system designed for domestic mobile devices. It is distinguished from other wireless performance environments in that it is intended for ad hoc performances in a variety of locations, with services and performances dynamically adapting to the number of devices available. The intention is that users perform in the same casual way that they already use mobile phones to interact socially.
Who is it for? Anyone! Sonic artists in particular:
› Appeal of, and capitalising on, the lo-fi sound output, but with strength of numbers
› Natural diffusion
› Exploration of a variety of physical environments, e.g. parks, outdoor public performance spaces, impromptu performances at conferences, etc.
› Collecting sonic material while performing!
CLDC-1.1 (Connected Limited Device Configuration):
› at least 192 kB of total memory available to Java
› a 16-bit or 32-bit processor
› low power consumption, often operating on battery power
› connectivity to some kind of network, often a wireless, intermittent connection with limited bandwidth
MIDP-2.1 (Mobile Information Device Profile):
› User Interface package
› Application Life Cycle package (how apps run in their environment)
› Networking (e.g. HTTPS)
› Public Key package (secure connections)
› Persistence package (ability to read and write record stores, although no reading/writing to a file system)
Bluetooth is designed to run at distances of up to 10 metres, although high-powered devices will function at up to 100 metres. A Bluetooth device can be discoverable: it may respond to an enquiry from another device with the following information:
› Device name
› Device class (e.g. headphones, computer)
› List of services available on the device
› Technical information (device features, manufacturer, Bluetooth specification implemented, etc.)
Some devices are limited in the number of simultaneous connections they can maintain (typically up to 7). In some cases, one device may be required to pair with another in order to access its services. Every device has a unique 48-bit address. Generally these addresses are hidden, with user-defined names appearing in response to scans instead.
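The discovery response described above can be sketched as a simple data model. This is an illustrative sketch only, not the JSR-82 Bluetooth API or the cellMusic source; the class and field names are assumptions, and it is written in standard desktop Java so it runs outside a J2ME environment.

```java
import java.util.List;

// Hypothetical model of the data a discoverable Bluetooth device returns
// to an enquiry (names are illustrative, not the javax.bluetooth API).
public class DiscoveredDevice {
    final long address;        // unique 48-bit address, held in a long
    final String friendlyName; // user-defined name shown in scans
    final int deviceClass;     // e.g. major-class bits for "phone"
    final List<String> services;

    DiscoveredDevice(long address, String friendlyName,
                     int deviceClass, List<String> services) {
        // Mask to 48 bits, since Bluetooth addresses are 6 bytes.
        this.address = address & 0xFFFF_FFFF_FFFFL;
        this.friendlyName = friendlyName;
        this.deviceClass = deviceClass;
        this.services = services;
    }

    public static void main(String[] args) {
        DiscoveredDevice d = new DiscoveredDevice(
                0x0012_34AB_CDEFL, "Ian's phone",
                0x200, // assumed major-class value for illustration
                List.of("cellMusic"));
        System.out.println(Long.toHexString(d.address) + " " + d.friendlyName);
    }
}
```

A peer scanning for collaborators would filter such records by the service list, which is where the cellMusic service UUID (next slide) comes in.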
Threads are used to allow each node to act as both a client and a server. Device names, addresses and services are used to enable devices to connect with each other. Sometimes permission has to be given to pair one device with another. A UUID (universally unique identifier) is used to identify the cellMusic service. A thread monitors input from connections and stores incoming control data in a list.
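The node structure above — a service UUID plus a monitor thread draining incoming control data into a list — can be sketched as follows. This is an assumed reconstruction, not the cellMusic source: the UUID value is a placeholder, and a `BlockingQueue` stands in for the list of incoming control data (written in standard Java rather than CLDC/MIDP for runnability).

```java
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative node sketch: a fixed UUID identifies the cellMusic service,
// and a monitor thread stores/consumes incoming control data.
public class NodeSketch {
    // Placeholder service UUID; the real service would advertise its own
    // 128-bit value so peers can distinguish cellMusic from other services.
    static final UUID CELLMUSIC_SERVICE =
            UUID.fromString("00000000-0000-1000-8000-00805F9B34FB");

    // Stands in for the list that buffers incoming control data.
    final BlockingQueue<String> incoming = new LinkedBlockingQueue<>();

    // Monitor thread: waits on connections and hands control data onward.
    Thread startMonitor() {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    String msg = incoming.take();
                    System.out.println("control: " + msg);
                }
            } catch (InterruptedException e) {
                // interrupted: shut the monitor down
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("service " + CELLMUSIC_SERVICE);
        NodeSketch node = new NodeSketch();
        node.startMonitor();
        node.incoming.put("TONE:440"); // a peer acting as client
        Thread.sleep(100);             // let the monitor drain the queue
    }
}
```

Because each node runs both roles, the same queue serves data arriving from server-side accepted connections and from client-side outgoing ones.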
Four playback options:
› Simple tone
› WAV / MP3
› Sequence of tones
› MIDI
Extra data specifies the playback device or devices (current, by ID, 'next device', 'all devices', etc.)
Control instructions:
› Execute instruction from another device
› Wait for instruction
› Change tone in a note sequence
› Re-assign WAV/MP3 ID
› Iteration and selection control
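One plausible way to carry the fields above over a low-bandwidth link is a compact tagged message. The layout below is an assumption for illustration (the slides do not specify a wire format); only the playback options and target-device values come from the slide.

```java
// Hypothetical control-message layout: playback type, target device(s),
// and one numeric parameter (e.g. tone pitch or a WAV/MP3 ID).
public class ControlMessage {
    enum Playback { TONE, WAV_MP3, TONE_SEQUENCE, MIDI }
    enum Target { CURRENT, BY_ID, NEXT_DEVICE, ALL_DEVICES }

    final Playback playback;
    final Target target;
    final int param;

    ControlMessage(Playback playback, Target target, int param) {
        this.playback = playback;
        this.target = target;
        this.param = param;
    }

    // Compact text encoding suited to an intermittent Bluetooth link.
    String encode() {
        return playback + ":" + target + ":" + param;
    }

    static ControlMessage decode(String s) {
        String[] f = s.split(":");
        return new ControlMessage(Playback.valueOf(f[0]),
                                  Target.valueOf(f[1]),
                                  Integer.parseInt(f[2]));
    }

    public static void main(String[] args) {
        ControlMessage m = new ControlMessage(
                Playback.TONE, Target.ALL_DEVICES, 440);
        System.out.println(m.encode()); // prints "TONE:ALL_DEVICES:440"
    }
}
```

A receiving node would decode the message, check whether the target field names it, and either execute the instruction or relay it onward.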
Performance data is used to give structure:
› One node will invite others to join it
› Partly to overcome the problem of constantly searching for other nodes
› Future explorations of sub-networks
Live interaction:
› Manual zoning of instruments
› Attempt to allocate another part
› Deallocate a part
› Execute an event from a piece
› Execute an event live from user input
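The invite-and-allocate behaviour above can be sketched minimally: one inviter node admits joining devices and hands each the next free part, declining when none remain. This is a sketch under assumptions — the slides do not describe the allocation policy, so the first-free-part rule and all names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an inviter node: peers wait to be invited rather than all
// searching constantly; each joiner is allocated one part of the piece.
public class InviterSketch {
    final List<String> members = new ArrayList<>();
    final List<String> freeParts;

    InviterSketch(List<String> parts) {
        this.freeParts = new ArrayList<>(parts);
    }

    // Allocate the next free part to a joining device, or null to decline.
    String invite(String deviceName) {
        if (freeParts.isEmpty()) return null; // no part left to allocate
        members.add(deviceName);
        return freeParts.remove(0);
    }

    // Deallocate a part so another device can take it over.
    void release(String deviceName, String part) {
        members.remove(deviceName);
        freeParts.add(part);
    }

    public static void main(String[] args) {
        InviterSketch inviter = new InviterSketch(List.of("melody", "drone"));
        System.out.println(inviter.invite("phoneA")); // melody
        System.out.println(inviter.invite("phoneB")); // drone
        System.out.println(inviter.invite("phoneC")); // null: no free part
    }
}
```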
› User interface
› Evolution of performance data (toward a meaningful language for composers)
› Evolution of supported network topologies (and sub-networks)
› More on synchronisation (including master/slave clock sync)
› More on live data collection (images, sound)
› More on service discovery
› Explore linking to larger sound systems / lighting control
› Port to iPhone et al.
Postgraduate projects (MSc, PhD):
› Programming
› Composition
› Live diffusion of a piece
Commercial marketing of the app:
› Towards a language for sound design
Industry link programme — development of software for control of the Apollo system (iPad/iPhone):
› Successful pilot
› Configuring surround-sound installations
› Diffusion of surround-sound components
Student placements
Redevelopment of the undergraduate programming module
A research group about to launch:
› Huddersfield, MMU, Leeds, Royal Northern College of Music, etc.
› Apollo, YAMSEN, Inclusive Music, SKUG (Norway), Invention Education, Sensory Software, Sound Sculpture
› Adapting modular interactive audio for enabling-music projects
Visiting Associate Professor (2011); Visiting Professor (2012)
› Exploring singing synthesis
› Developing mobile applications for iPhone/iPad/iPod Touch
› Investigating the composer/performer/conductor relationship
› Establishing appropriate protocols
› Investigating the 'learning curve' relating to performance techniques
Dr Ian Gibson University of Huddersfield, United Kingdom