Mazurka Project Update. Craig Stuart Sapp. CHARM Symposium, King's College, University of London, 26 January 2006.

Data Entry Update. Outline: Data Entry; Data Analysis.

Source material: mazurka recordings. 1,374 recordings of 49 mazurkas; 65 performers, 73 CDs = 28 performances/mazurka on average. Chart: number of mazurka performances in each decade.

Waveform. Where are the notes? How many notes are there? Where are the beats? Reverse conducting (corrected) added to waveform.

Mz PowerCurve. Development started a few months before the first SV release. By-product of looking at how to extract note loudnesses from audio. Some notes become easy to see; some notes are obscured, mostly by beating between harmonics.
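The basic idea behind a power curve can be sketched in a few lines. This is a minimal illustration, not the Mz PowerCurve plugin code; the function name and window/hop defaults are assumptions for the example, and the input is a mono sample list with a known sample rate.

```python
import math

def power_curve(samples, sr, win=0.046, hop=0.010):
    """Windowed RMS power in dB: a crude loudness curve over time."""
    wlen, hlen = int(win * sr), int(hop * sr)
    curve = []
    for start in range(0, len(samples) - wlen + 1, hlen):
        frame = samples[start:start + wlen]
        rms = math.sqrt(sum(s * s for s in frame) / wlen)
        curve.append(20 * math.log10(rms + 1e-12))  # dB, guarded against log(0)
    return curve
```

Note onsets tend to show up as local rises in such a curve, but beating between harmonics also modulates the power, which is why some notes are hard to see here.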

Mz SpectralFlux. Implementation of Spectral Flux as described by Simon Dixon: Dixon, Simon. "Onset detection revisited", in Proceedings of the 9th International Conference on Digital Audio Effects (DAFx'06), Montreal, Canada, 18-20 September 2006. Component of the MATCH program. Similar to the power curve idea, but the measurements are done on the spectrum. Only frequency bins gaining energy are considered. Gets rid of half of the harmonic beating problem.
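The positive-difference idea can be sketched in a few lines of Python. This is a simplified illustration, not the Mz SpectralFlux plugin itself; the magnitude spectra are assumed to be precomputed, one list of bin magnitudes per analysis frame.

```python
def spectral_flux(frames):
    """Onset detection function: sum of positive bin-wise magnitude
    increases between consecutive spectral frames. Bins that are
    losing energy are ignored, which suppresses half of the
    harmonic-beating noise."""
    flux = [0.0]
    for prev, cur in zip(frames, frames[1:]):
        flux.append(sum(max(c - p, 0.0) for p, c in zip(prev, cur)))
    return flux
```

Peaks in the returned curve are candidate note onsets; ignoring decreasing bins is what distinguishes this from a plain frame-to-frame spectral distance.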

Spectral flux peak finding. Plots show: too many false positives; a few false positives. Sensitive to parameter settings.
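The parameter sensitivity can be seen in even the simplest peak picker. A minimal sketch (the function name and threshold scheme are assumptions, not the actual plugin's peak-finding logic):

```python
def pick_peaks(onset_curve, threshold):
    """Report indices of local maxima above a threshold.
    The threshold is the sensitive parameter: set too low it yields
    false positives, set too high it misses quiet onsets."""
    peaks = []
    for i in range(1, len(onset_curve) - 1):
        v = onset_curve[i]
        if v > threshold and v >= onset_curve[i - 1] and v > onset_curve[i + 1]:
            peaks.append(i)
    return peaks
```

Running the same curve through two thresholds shows how the onset count changes with one parameter.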

Spectral difference. Blue vertical lines mark automatically identified beat locations; pink vertical lines are human-identified beat locations. Notice the relation between the blue and pink lines. The Tempo Tracker plugin from QMUL C4DM uses the same technique as Spectral Flux, but calls it "Spectral Difference". The onset/difference function is not available as an output from the plugin; only beat locations are, shown as blue lines.

Mz Attack. Developed July 2006, after the last colloquium. Clear indications of note onsets. Noise peaks are difficult to separate from onset peaks, so it is usually used in conjunction with Mz PowerCurve. Allows precise manual correction of reverse conducting, going from ~6 hours/performance to ~1 hour/performance.

Mz SpectralReflux. Update of the Mz Attack technique based on studying Spectral Flux. Very little noise from harmonic beating; the only noise left is from clicks, pops, and other non-musical sounds in the audio. Slightly less sensitive to parameter settings than spectral flux. Working toward a reverse-conducting correction time on the order of ~15-30 minutes/performance (compared to the current ~1 hour/performance). Figure annotation: move taps to nearest onset.
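The "move taps to nearest onset" step can be sketched as a small snapping routine. A minimal illustration under stated assumptions: tap and onset times are sorted lists of seconds, and the tolerance name `max_shift` is hypothetical, not a plugin parameter.

```python
import bisect

def snap_taps(taps, onsets, max_shift=0.1):
    """Move each tap time to the nearest detected onset, provided one
    lies within max_shift seconds; otherwise keep the tap unchanged."""
    snapped = []
    for t in taps:
        i = bisect.bisect_left(onsets, t)
        candidates = onsets[max(0, i - 1):i + 1]  # onsets bracketing t
        best = min(candidates, key=lambda o: abs(o - t), default=t)
        snapped.append(best if abs(best - t) <= max_shift else t)
    return snapped
```

Keeping out-of-range taps unchanged matters here: a tap far from any onset usually marks a beat with no attack (e.g. a tied note), and should be corrected by hand rather than snapped.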

Peek Under the Hood (Bonnet)

Possible additions (Mz SpectralFlatness). Example: {2, 4}: arithmetic mean = (2 + 4) / 2 = 3; geometric mean = sqrt(2 * 4) = 2.8; spectral flatness = 2.8 / 3 = 0.94. Used to distinguish between noise and pitched sound. Plot labels: note, noise, note.
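The worked example above translates directly into code. A minimal sketch of the spectral flatness measure (the function name is an assumption; input is a list of positive spectral magnitudes):

```python
import math

def spectral_flatness(bins):
    """Geometric mean over arithmetic mean of spectral magnitudes.
    Near 1.0 for noise (flat spectrum), near 0.0 for pitched sound
    (energy concentrated in a few harmonic peaks)."""
    n = len(bins)
    geo = math.exp(sum(math.log(b) for b in bins) / n)  # log-domain geometric mean
    arith = sum(bins) / n
    return geo / arith
```

The geometric mean is computed in the log domain to avoid overflow when multiplying many bins together.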

Performance data extraction. Step 1, reverse conducting: listen to the recording and tap to the beats; tap times are recorded in Sonic Visualiser by tapping on the computer keyboard. Reverse conducting is the real-time response of the listener, not the actions of the performer. Step 2, align taps to beats: adjust tap times to the correct beat locations; a bit fuzzy when RH/LH do not play in sync, or for tied notes. Step 3, automatic feature extraction: tempo by beat, off-beat timings, individual note timings, individual note loudnesses.

Data Entry Update. Outline: Data Entry; Data Analysis.

Dynamics & Phrasing all at once: rubato

Average tempo over time. Performances of mazurkas are slowing down over time: Friedman 1930, Rubinstein 1966, Indjic 2001. Slowing down at about 3 BPM/decade. Laurence Picken, 1967: "Central Asian tunes in the Gagaku tradition", in Festschrift für Walter Wiora. Kassel: Bärenreiter.
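A trend figure like "about 3 BPM/decade" comes from fitting a line to average tempo against recording year. A minimal least-squares sketch (the function name and the sample data in the test are illustrative, not the project's actual dataset):

```python
def tempo_trend(years, tempos):
    """Least-squares slope of average performance tempo against
    recording year, in BPM/year (multiply by 10 for BPM/decade)."""
    n = len(years)
    my, mt = sum(years) / n, sum(tempos) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, tempos))
    den = sum((y - my) ** 2 for y in years)
    return num / den
```

A slope of -0.3 BPM/year corresponds to the 3 BPM/decade slow-down quoted above.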

Average Tempo over time (2). The slow-down in performance tempos is unrelated to the age of the performer.

Tempo graphs

Mazurka Meter. Stereotypical mazurka rhythm: first beat short, second beat long. Mazurka in A minor, Op. 17, No. 4. Plot annotations: measure with longer second beat; measure with longer first beat; blurred image to show overall structure. Figure labels: B A A A D (A) C.

Standard Deviation & Variance. Plots: tempo and duration variance; tempo and duration standard deviation. Annotations: phrasing; mazurka script; longer durations = more variability; some performers do it, others don't.
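The variability measure behind these plots is ordinary variance and standard deviation over the beat durations pooled across performances. A minimal sketch (function name assumed for the example):

```python
def duration_stats(durations):
    """Mean, population variance, and standard deviation of a set of
    beat durations (seconds). A higher standard deviation means more
    timing variability at that point in the score across performers."""
    n = len(durations)
    mean = sum(durations) / n
    var = sum((d - mean) ** 2 for d in durations) / n
    return mean, var, var ** 0.5
```

Computed beat by beat across all performances, this gives the "longer durations = more variability" pattern noted on the slide.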

Timescapes. Examine the internal tempo structure of a performance. Plot average tempos over various time-spans in the piece. Example of a piece with 6 beats at tempos A, B, C, D, E, and F: plot of individual tempos; average tempo of adjacent neighbors; 3-neighbor average; 4-neighbor average; 5-neighbor average; average tempo for the entire piece.
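The averaging scheme just described can be sketched directly: every contiguous span of beats gets its mean tempo, forming the triangle that is rendered as a timescape. A minimal illustration (function name assumed):

```python
def timescape(tempos):
    """Triangular array of mean tempos over every contiguous beat span.
    Row 0 holds the individual beat tempos, row 1 the adjacent-neighbor
    averages, and so on up to the apex: the whole-piece average."""
    n = len(tempos)
    scape = []
    for width in range(1, n + 1):
        row = [sum(tempos[i:i + width]) / width for i in range(n - width + 1)]
        scape.append(row)
    return scape
```

Coloring each cell by how its average compares with the whole-piece tempo produces the slower/faster regions visible in the plots.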

Timescapes (2). Plot annotations: average tempo of performance; average for performance; slower; faster; phrases.

Comparison of performers

Same performer

Correlation. Pearson correlation measures how well two shapes match: r = 1.0 is an exact match; r = 0.0 means no relation at all.
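For two beat-tempo curves, the Pearson correlation is computed as follows. A minimal self-contained sketch (the inputs are equal-length lists of per-beat tempos):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series:
    1.0 = identical shape, 0.0 = no linear relation, -1.0 = inverted."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

Because the means are subtracted and the result is normalized, two performances with the same tempo shape but different overall speeds still correlate at 1.0.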

Overall performance correlations. Performers: Biret, Brailowsky, Chiu, Freire, Indjic, Luisada, Rubinstein 1938, Rubinstein 1966, Smith, Uninsky. Column labels: Bi, Lu, Br, Ch, Fl, In, R8, R6, Sm, Un. Annotations: highest correlation to Biret 1990; lowest correlation to Biret 1990.

Correlation tree Who is closest to whom? (with respect to beat tempos of an entire performance). Mazurka in A minor, 68/3

Correlation tree (2) Mazurka in A minor, 17/4

Correlation network How close is everyone to everyone else? Mazurka in A minor, 17/4

Correlation scapes Who is most similar to a particular performer at any given region in the music?
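A correlation scape can be approximated by sliding a window across the beat-tempo curves and asking, per window, which other performance correlates best with the target. A minimal sketch (names `nearest_performer` and `others` are illustrative; the real scapes are computed over all span widths, not one fixed window):

```python
def nearest_performer(target, others, width):
    """For each window of `width` beats, return the name of the
    performance whose tempo curve correlates best with the target
    over that window."""
    def r(x, y):  # Pearson correlation over one window
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) *
               sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den if den else 0.0
    result = []
    for i in range(len(target) - width + 1):
        win = target[i:i + width]
        best = max(others, key=lambda name: r(win, others[name][i:i + width]))
        result.append(best)
    return result
```

Stacking this computation over every window width, and coloring each cell by the winning performer, yields the triangular scape plots shown on the following slides.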

Same performer over time 3 performances by Rubinstein of mazurka 17/4 in A minor (30 performances compared)

Same performer (2). 2 performances by Horowitz of mazurka 17/4 in A minor, plus the Biret 1990 performance. (30 performances compared)

Student/Teacher. François and Biret both studied with Cortot. Mazurka in F major, 68/3 (20 performances compared).

Correlation to average

Possible influences

Same source recording. The same performance by Magaloff on two different CD releases: Philips and Philips /29-2. Structures at the bottom are due to errors in beat extraction or interpreted beat locations (no notes on the beat). Mazurka 17/4 in A minor.

Purely coincidental. Two different performances by two different performers, on two different record labels, from two different countries.

Arch Correlation. Paderewski 1912 (17-4). Red = phrase peak; blue = phrase boundary. Make your own plots at

Phrase/Measure Level. Average performance (17-4). Section labels: A A B A C A D; C1, C2, C3. Structural boundaries.

Phrase Identification/Characterization. Phrase edges: Ashkenazy c.1980 (17-4); Chiu 1999; Rosen 1989; Paderewski 1912.

Arch Correlation (2) Rubinstein 1938 Rubinstein 1952 Rubinstein 1966 Horowitz 1971 Horowitz 1985

Ramp Correlation. Average 17-4. Arch; Ramp. Red = accel.; blue = rit.