
Slide 1/25: ACE: A Framework for Optimizing Music Classification
Cory McKay, Rebecca Fiebrink, Daniel McEnnis, Beinan Li, Ichiro Fujinaga
Music Technology Area, Faculty of Music, McGill University

Slide 2/25: Goals
- Highlight limitations of existing pattern recognition software when applied to MIR
- Present solutions to these limitations
- Stress the importance of standardized classification and feature extraction software
  - Ease of use, portability and extensibility
- Present the ACE software framework
  - Uses meta-learning
  - Uses classification ensembles

Slide 3/25: Existing music classification systems
- Systems often implemented with specific tasks in mind
  - Not extensible to general tasks
  - Often difficult to use for those not involved in the project
- Need standardized systems for a variety of MIR problems
  - No need to reimplement existing algorithms
  - More reliable code
  - More usable software
  - Facilitates comparison of methodologies
- Important foundations
  - Marsyas (Tzanetakis & Cook 1999)
  - M2K (Downie 2004)

Slide 4/25: Existing general classification systems
- Available general-purpose systems:
  - PRTools (van der Heijden et al. 2004)
  - Weka (Witten & Frank 2005)
- Other meta-learning systems:
  - AST (Lindner and Studer 1999)
  - Metal (www.metal-kdd.org)

Slide 5/25: Problems with existing systems
- Distribution problems
  - Proprietary software
  - Not open source
  - Limited licence
- Music-specific systems are often limited
  - None use meta-learning
  - Classifier ensembles rarely used
  - Interfaces not oriented towards end users
- General-purpose systems not designed to meet the particular needs of music

Slide 6/25: Special needs of music classification (1)
- Assign multiple classes to individual recordings
  - A recording may belong to multiple genres, for example
- Allow classification of sub-sections as well as of overall recordings
  - Audio features are often windowed
  - Useful for segmentation problems
- Maintain the logical grouping of multi-dimensional features
  - Musical features often consist of vectors (e.g. MFCCs)
  - This relatedness can provide classification opportunities

Slide 7/25: Special needs of music classification (2)
- Maintain identifying metadata about instances
  - Title, performer, composer, date, etc.
- Take advantage of hierarchically structured taxonomies
  - Humans often organize music hierarchically
  - Can provide classification opportunities
- Provide an interface suitable for any user
(A data-structure sketch of these requirements follows below.)
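The following is a minimal Java sketch of an instance representation that would satisfy the requirements on these two slides. All class and field names are invented for illustration; this is not ACE's actual API:

    import java.util.List;
    import java.util.Map;

    // Hypothetical instance record covering the needs listed above:
    // multiple class labels, labelled sub-sections, grouped feature
    // vectors, and identifying metadata.
    public class MusicInstance {
        String id;                           // e.g. a file name
        Map<String, String> metadata;        // title, performer, composer, date...
        List<String> classLabels;            // multiple labels per recording
        List<Section> sections;              // classified sub-sections
        Map<String, double[]> features;      // multi-dimensional features kept
                                             // together, e.g. "MFCC" -> 13 values

        public static class Section {
            double startSeconds, endSeconds; // window boundaries
            List<String> classLabels;        // labels for just this section
            Map<String, double[]> features;  // windowed feature values
        }
    }

    // A hierarchical taxonomy is naturally a tree of categories,
    // e.g. Jazz -> Bebop, Swing.
    class TaxonomyNode {
        String name;
        List<TaxonomyNode> children;
    }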

Slide 8/25: Standardized file formats
- Existing formats such as Weka's ARFF cannot represent all of the needed information
- Important to enable classification systems to communicate with arbitrary feature extractors
- Four XML file formats that meet the above needs are described in the proceedings (a purely illustrative sketch follows)
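Purely as an illustration of the kind of information such formats must carry, a hypothetical instance-label file might look like this. The element names here are invented; the real schemas are the four described in the proceedings:

    <!-- Hypothetical sketch, not the actual ACE XML schema -->
    <classifications>
      <instance id="recording_042.wav">
        <meta title="Some Title" performer="Some Performer" date="1965"/>
        <class>Jazz</class>               <!-- multiple classes per recording -->
        <class>Funk</class>
        <section start="0.0" end="30.0">  <!-- a labelled sub-section -->
          <class>Jazz</class>
        </section>
      </instance>
    </classifications>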

Slide 9/25: The ACE framework
- ACE (Autonomous Classification Engine) is a classification framework that can be applied to arbitrary types of music classification
- Meets all of the requirements presented above
- Java implementation makes ACE portable and easy to install

Slide 10/25: ACE and meta-learning
- Many classification methodologies are available
  - Each has different strengths and weaknesses
- ACE uses meta-learning to experiment with a variety of approaches
  - Finds approaches well suited to each problem
- Makes powerful pattern recognition tools available to non-experts
- Useful for benchmarking new classifiers and features

Slide 11/25: ACE architecture
[Diagram: music recordings, a taxonomy and feature settings feed the feature extraction system; extracted features pass through dimensionality reduction into classification methodologies 1 through n; an experiment coordinator and a classifier evaluator produce trained classifiers, model classifications and a statistical comparison of the classification methodologies]

Slide 12/25: Algorithms used by ACE
- Uses the Weka class libraries
  - Makes it easy to add or develop new algorithms
- Candidate classifiers
  - Induction trees, naive Bayes, k-nearest neighbour, neural networks, support vector machines
  - Classifier parameters are also varied automatically
- Dimensionality reduction
  - Feature selection using genetic algorithms, principal component analysis, exhaustive searches
- Classifier ensembles
  - Bagging, boosting
(A sketch of such a meta-learning search over Weka classifiers follows.)
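As a minimal sketch of what such a meta-learning search can look like with the Weka Java API that ACE builds on, the loop below cross-validates several candidate classifiers and keeps the best one. The file name and candidate list are placeholders, and this simple loop only stands in for ACE's actual experiment coordinator:

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.SMO;
    import weka.classifiers.lazy.IBk;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class MetaLearningSketch {
        public static void main(String[] args) throws Exception {
            // Load extracted features in Weka's ARFF format (placeholder path).
            Instances data = DataSource.read("features.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // Candidate learning schemes, as on this slide: a decision tree,
            // naive Bayes, k-nearest neighbour and a support vector machine.
            Classifier[] candidates = { new J48(), new NaiveBayes(), new IBk(3), new SMO() };

            Classifier best = null;
            double bestPct = -1.0;
            for (Classifier c : candidates) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(c, data, 10, new Random(1)); // 10-fold CV
                if (eval.pctCorrect() > bestPct) {
                    bestPct = eval.pctCorrect();
                    best = c;
                }
            }
            System.out.println("Selected " + best.getClass().getSimpleName()
                    + " at " + bestPct + "% correct");
        }
    }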

Slide 13/25: Classifier ensembles
- Multiple classifiers operating together to arrive at final classifications
  - e.g. AdaBoost (Freund and Schapire 1996)
- Success rates in many MIR areas appear to be approaching an asymptote (Aucouturier and Pachet 2004)
- Classifier ensembles could provide some improvement (see the sketch below)
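For concreteness, this is how an AdaBoost ensemble can be assembled with Weka's AdaBoostM1 meta-classifier. This is generic Weka usage, not ACE's internal code, and the ARFF path is a placeholder:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.AdaBoostM1;
    import weka.classifiers.trees.DecisionStump;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class BoostingSketch {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("features.arff"); // placeholder path
            data.setClassIndex(data.numAttributes() - 1);

            // AdaBoost repeatedly reweights the training data and combines
            // many weak learners (here, one-level decision stumps) into a
            // single, usually stronger, ensemble.
            AdaBoostM1 ensemble = new AdaBoostM1();
            ensemble.setClassifier(new DecisionStump());
            ensemble.setNumIterations(50);

            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(ensemble, data, 10, new Random(1));
            System.out.printf("Ensemble accuracy: %.1f%%%n", eval.pctCorrect());
        }
    }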

Slide 14/25: Musical evaluation experiments
- Achieved a 95.6% success rate in a five-class beatbox recognition experiment (Sinyor et al. 2005)
- Repeated Tindale's (2004) percussion recognition experiment
  - ACE achieved a 96.3% success rate, compared to Tindale's best rate of 94.9%
  - A reduction in error rate of 27.5%: the error rate fell from 5.1% to 3.7%, and 1.4/5.1 ≈ 27.5%

Slide 15/25: General evaluation experiments
- Applied ACE to six commonly used UCI datasets
- Compared results to a recently published algorithm (Kotsiantis and Pintelas 2004)

Slide 16/25: Results of UCI experiments (1)

Data Set     ACE's Selected Classifier   Kotsiantis' Success Rate   ACE's Success Rate
autos        AdaBoost                    81.70%                     86.30%
diabetes     Naïve Bayes                 76.60%                     78.00%
ionosphere   AdaBoost                    90.70%                     94.30%
iris         FF Neural Net               95.60%                     97.30%
labor        k-NN                        93.40%                     93.00%
vote         Decision Tree               96.20%                     96.30%

Slide 17/25: Results of UCI experiments (2)
- ACE performed very well
  - Statistical uncertainty makes it difficult to say that ACE's results are inherently superior
- ACE can perform at least as well as a state-of-the-art algorithm with no tweaking
- ACE achieved these results using only one minute per learning scheme for training and testing

Slide 18/25: Results of UCI experiments (3)
- Different classifiers performed better on different datasets
  - Supports ACE's experimental meta-learning approach
- Effectiveness of AdaBoost (chosen 2 times out of 6) demonstrates the strength of classifier ensembles

Slide 19/25: Feature extraction
- ACE is not tied to any particular feature extraction system
  - Reads Weka ARFF as well as ACE XML files
- Two powerful and extensible feature extractors are nonetheless bundled with ACE
  - Both write Weka ARFF as well as ACE XML (a minimal ARFF example follows)
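A minimal example of the ARFF side of that exchange; the feature names and values below are made up for illustration:

    % Hypothetical feature file produced by a feature extractor
    @relation music_features

    @attribute spectral_centroid numeric
    @attribute zero_crossings numeric
    @attribute genre {jazz, funk, rock}

    @data
    1523.4, 312, jazz
    987.1, 455, rock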

Slide 20/25: jAudio
- Reads: .mp3, .wav, .aiff, .au, .snd

Slide 21/25: jSymbolic
- Reads MIDI files
- Uses the 111 Bodhidharma features
(A toy illustration of extracting a symbolic feature from MIDI follows.)
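As a toy illustration of symbolic feature extraction from a MIDI file, the sketch below computes a note count and mean pitch with the standard javax.sound.midi API. This is not jSymbolic's code, and the feature is far simpler than the Bodhidharma set:

    import java.io.File;
    import javax.sound.midi.MidiEvent;
    import javax.sound.midi.MidiSystem;
    import javax.sound.midi.Sequence;
    import javax.sound.midi.ShortMessage;
    import javax.sound.midi.Track;

    public class MidiFeatureSketch {
        public static void main(String[] args) throws Exception {
            Sequence seq = MidiSystem.getSequence(new File("piece.mid")); // placeholder
            int noteOns = 0;
            long pitchSum = 0;
            for (Track track : seq.getTracks()) {
                for (int i = 0; i < track.size(); i++) {
                    MidiEvent event = track.get(i);
                    if (event.getMessage() instanceof ShortMessage) {
                        ShortMessage sm = (ShortMessage) event.getMessage();
                        // A note-on with velocity 0 is effectively a note-off.
                        if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
                            noteOns++;
                            pitchSum += sm.getData1(); // MIDI pitch number
                        }
                    }
                }
            }
            double meanPitch = noteOns > 0 ? (double) pitchSum / noteOns : 0.0;
            System.out.println(noteOns + " notes, mean pitch " + meanPitch);
        }
    }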

Slide 22/25: ACE's interface
- Graphical interface
  - Includes an on-line manual
- Command-line interface
  - Batch processing
  - External calls
- Java API
  - Open source
  - Well documented
  - Easy to extend

Slide 23/25: Current status of ACE
- In alpha release
- Full release scheduled for January 2006
  - Finalization of the GUI
  - User constraints on training, classification and meta-learning times
  - Feature weighting
  - Expansion of candidate algorithms
- Long-term
  - Distributed processing, unsupervised learning, blackboard systems, automatic cross-project optimization

Slide 24/25: Conclusions
- Standardized classification software able to deal with the special needs of music is needed
- Techniques such as meta-learning and classifier ensembles can lead to improved performance
- ACE is designed to address these issues

Slide 25/25: Contact
- Web site: coltrane.music.mcgill.ca/ACE
- E-mail: cory.mckay@mail.mcgill.ca

