Music Information Retrieval based on multi-label cascade classification system, presented by Zbigniew W. Ras, http://www.mir.uncc.edu, CCI, UNC-Charlotte. Research sponsored by NSF IIS , IIS

Collaborators:
- Alicja Wieczorkowska (Polish-Japanese Institute of IT, Warsaw, Poland)
- Krzysztof Marasek (Polish-Japanese Institute of IT, Warsaw, Poland)
My former PhD students:
- Elzbieta Kubera (Maria Curie-Sklodowska University, Lublin, Poland)
- Rory Lewis (University of Colorado at Colorado Springs, USA)
- Wenxin Jiang (Fred Hutchinson Cancer Research Center, Seattle, USA)
- Xin Zhang (University of North Carolina, Pembroke, USA)
My current PhD student:
- Amanda Cohen-Mostafavi (University of North Carolina, Charlotte, USA)

Goal: design and implement a system for automatic indexing of music by instruments (an objective task) and emotions (a subjective task).
MIRAI - Musical Database (mostly MUMS) [music pieces played by 57 different music instruments].
Outcome: a musical database represented as an FS-tree, guaranteeing efficient storage and retrieval [music pieces indexed by instruments and emotions].

MIRAI - Musical Database [music pieces played by 57+ different music instruments (see below) and described by over 910 attributes]: Alto Flute, Bach-trumpet, bass-clarinet, bassoon, bass-trombone, Bb trumpet, b-flat clarinet, cello, cello-bowed, cello-martele, cello-muted, cello-pizzicato, contrabassclarinet, contrabassoon, crotales, c-trumpet, ctrumpet-harmonStemOut, doublebass-bowed, doublebass-martele, doublebass-muted, doublebass-pizzicato, eflatclarinet, electric-bass, electric-guitar, englishhorn, flute, frenchhorn, frenchHorn-muted, glockenspiel, marimba-crescendo, marimba-singlestroke, oboe, piano-9ft, piano-hamburg, piccolo, piccolo-flutter, saxophone-soprano, saxophone-tenor, steeldrums, symphonic, tenor-trombone, tenor-trombone-muted, tuba, tubular-bells, vibraphone-bowed, vibraphone-hardmallet, viola-bowed, viola-martele, viola-muted, viola-natural, viola-pizzicato, violin-artificial, violin-bowed, violin-ensemble, violin-muted, violin-natural-harmonics, xylophone.

Automatic Indexing of Music.
What is needed? A database of monophonic and polyphonic music signals and their descriptions in terms of new features (including temporal ones), in addition to the standard MPEG7 features. These signals are labeled by instruments and emotions, forming additional features called decision features.
Why is it needed? To build classifiers for automatic indexing of musical sound by instruments and emotions.

MIRAI - Cooperative Music Information Retrieval System based on Automatic Indexing. [Diagram: the user's query passes through a Query Adapter to the Indexed Audio Database (instruments, durations, music objects); if the answer comes back empty, the query is adapted and resubmitted.]

Raw data - signal representation. PCM (Pulse Code Modulation) is the most straightforward mechanism for storing audio: the analog signal is sampled and the individual samples are stored sequentially in binary format. At CD quality (44.1 kHz sampling rate, 16 bits per sample) this yields 2,646,000 values per minute.
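As a minimal sketch (assuming a 16-bit mono WAV file and only Python's standard library plus NumPy), loading raw PCM and checking the per-minute figure looks like this:

```python
import wave

import numpy as np

SAMPLE_RATE = 44_100           # CD-quality sampling rate in Hz
print(SAMPLE_RATE * 60)        # 2,646,000 sample values per minute of mono audio

def load_pcm(path: str) -> np.ndarray:
    """Read a 16-bit mono PCM WAV file into floats in [-1, 1]."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16)
    return samples / 32768.0   # rescale the 16-bit integer range
```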

The nature and types of raw data - challenges to applying KDD in MIR:

Data source      | Organization | Volume     | Type                  | Quality
Traditional data | Structured   | Modest     | Discrete, categorical | Clean
Audio data       | Unstructured | Very large | Continuous, numeric   | Noisy

Feature extraction. Traditional pattern recognition transforms lower-level raw data (amplitude values at each sample point) into manageable higher-level representations; these are stored in a feature database and used for classification, clustering, and regression.

MPEG7 features. [Extraction diagram: the signal is Hamming-windowed and passed through an STFT (NFFT FFT points) to produce the power spectrum, the signal envelope, the fundamental frequency, and the detected harmonic peaks; from these are computed the Spectral Centroid, Log Attack Time, Temporal Centroid, and the Instantaneous Harmonic Spectral Centroid, Deviation, Spread, and Variation.]

Derived Database.
MPEG7 features: Spectrum Centroid, Spectrum Spread, Spectrum Flatness, Spectrum Basic Functions, Spectrum Projection Functions, Log Attack Time, Harmonic Peaks, ...
Non-MPEG7 features and new temporal features: Roll-Off, Flux, Mel frequency cepstral coefficients (MFCC), Tristimulus and similar parameters (contents of odd and even partials - Od, Ev), mean frequency deviation for low partials, changing ratios of spectral spread, changing ratios of spectral centroid.

New Temporal Features - S'(i), C'(i), S''(i), C''(i).
The changing ratios of spectral spread and spectral centroid over two consecutive frames are treated as the first derivatives of the spread and centroid:
S'(i) = [S(i+1) - S(i)]/S(i);  C'(i) = [C(i+1) - C(i)]/C(i),
where S(i+1), S(i) and C(i+1), C(i) are the spectral spread and spectral centroid of two consecutive frames, frame i+1 and frame i. Following the same method we calculate the second derivatives:
S''(i) = [S'(i+1) - S'(i)]/S'(i);  C''(i) = [C'(i+1) - C'(i)]/C'(i).
Remark: the sequence [S(i), S(i+1), S(i+2), ..., S(i+k)] can be approximated by a polynomial p(x) = a0 + a1*x + a2*x^2 + a3*x^3 + ...; the coefficients a0, a1, a2, a3, ... serve as new features.
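A minimal sketch of both constructions, with hypothetical per-frame spread values standing in for real data:

```python
import numpy as np

def changing_ratio(v: np.ndarray) -> np.ndarray:
    """v'(i) = [v(i+1) - v(i)] / v(i), applied over consecutive frames."""
    return (v[1:] - v[:-1]) / v[:-1]

# Hypothetical per-frame spectral spread values S(i):
S = np.array([1200.0, 1180.0, 1250.0, 1300.0])
S1 = changing_ratio(S)    # first derivative S'(i)
S2 = changing_ratio(S1)   # second derivative S''(i): same formula applied to S'

# Polynomial approximation of the sequence; np.polyfit returns the
# highest-order coefficient first, i.e. [a3, a2, a1, a0] for deg=3.
coeffs = np.polyfit(np.arange(len(S)), S, deg=3)
```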

Classification confidence with temporal features:

Experiment | Features               | Classifier    | Confidence
1          | S, C                   | Decision Tree | 80.47%
2          | S, C, S', C'           | Decision Tree | 83.68%
3          | S, C, S', C', S'', C'' | Decision Tree | 84.76%
4          | S, C                   | KNN           | 80.31%
5          | S, C, S', C'           | KNN           | 84.07%
6          | S, C, S', C', S'', C'' | KNN           | 85.51%

Experiment with WEKA: 19 instruments [flute, piano, violin, saxophone, vibraphone, trumpet, marimba, french horn, viola, bassoon, clarinet, cello, trombone, accordion, guitar, tuba, english horn, oboe, double bass]; J48 with a 0.25 confidence factor for tree pruning and a minimum of 10 instances per leaf; KNN with 3 neighbors and Euclidean distance as the similarity function.
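For readers who want to reproduce the setup outside WEKA, a rough scikit-learn equivalent follows; note that WEKA's J48 implements C4.5, for which scikit-learn's CART tree is only an approximation, and X and y below are hypothetical placeholders:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# min_samples_leaf=10 mirrors "minimum 10 instances per leaf";
# J48's 0.25 pruning confidence factor has no direct CART equivalent.
tree = DecisionTreeClassifier(min_samples_leaf=10)

# KNN with 3 neighbors and Euclidean distance, as in experiments 4-6.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")

# X: per-frame feature matrix [S, C, S', C', S'', C'']; y: instrument labels.
# tree.fit(X, y); knn.fit(X, y)
```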

[Confusion matrices: left from Experiment 1, right from Experiment 3; correctly classified instances are highlighted in green, incorrectly classified instances in yellow.]

[Charts: precision, recall, and F-score of the decision tree for each instrument.]

Polyphonic sounds - how to handle?
1. Single-label classification based on sound separation.
2. Multi-label classifiers.
[Sound separation flowchart: polyphonic sound → segmentation → get frame → feature extraction → classifier → get instrument → sound separation by subtraction.]
Problem: information loss during the signal subtraction.

Timbre estimation in polyphonic sounds and designing multi-label classifiers. Timbre-relevant descriptors: Spectrum Centroid and Spread, Spectrum Flatness, Band Coefficients, Harmonic Peaks, Mel frequency cepstral coefficients (MFCC), Tristimulus.

[Figure: feature extraction of Mel-Frequency Cepstral Coefficients; the sub-pattern of a single instrument remains visible within the mixture.]

Timbre estimation based on a multi-label classifier: the signal is segmented with a 40 ms window; for each frame, timbre descriptors are extracted and fed to a classifier, which outputs a ranked list of candidate instruments with confidences:

Instrument  | Confidence
Candidate 1 | 70%
Candidate 2 | 50%
...         | ...
Candidate N | 10%

Timbre estimation results based on different methods [45 instruments; training data (TD): single instrument sounds from MUMS; testing on 308 mixed sounds randomly chosen from TD; window size 1 s, frame size 120 ms, hop size 40 ms (~25 frames); Mel-frequency cepstral coefficients (MFCC) extracted from each frame].

Experiment # | Pitch based | Sound separation | N(labels) max | Recall | Precision | F-score
1            | Yes         | Yes/No           | 1             | 54.55% | 39.2%     | 45.60%
2            | Yes         |                  |               |        | 38.1%     | 46.96%
3            | Yes         | No               | 2             | 64.28% | 44.8%     | 52.81%
4            | Yes         | No               | 4             | 67.69% | 37.9%     | 48.60%
5            | Yes         | No               | 8             | 68.3%  | 36.9%     | 47.91%

A threshold of 0.4 controls the total number of estimations for each index window.
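The slides do not spell out exactly how the 0.4 threshold and the N(labels) max cap interact, so the following is only a plausible sketch of per-window label selection under that reading:

```python
import numpy as np

def window_labels(frame_conf: np.ndarray, instruments: list,
                  threshold: float = 0.4, n_max: int = 2) -> list:
    """Average per-frame confidences over the ~25 frames of one 1-second
    index window, then keep at most n_max instruments whose averaged
    confidence clears the threshold."""
    mean_conf = frame_conf.mean(axis=0)      # one score per instrument
    order = np.argsort(mean_conf)[::-1]      # strongest candidates first
    return [instruments[i] for i in order[:n_max] if mean_conf[i] >= threshold]
```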

Polyphonic sounds. [Pipeline: polyphonic sound (window) → get frame → feature extraction → classifiers → multiple labels.] Compressed representations of the signal (Harmonic Peaks, Mel Frequency Cepstral Coefficients (MFCC), Spectral Flatness, ...) remove irrelevant information such as inharmonic frequencies or partials. However, violin and viola have similar MFCC patterns, and the same holds for double bass and guitar, so they are difficult to distinguish in polyphonic sounds; more information from the raw signal is needed.

Short-term power spectrum - a low-level representation of the signal (calculated by STFT). [Figure: spectrum slice, 0.12 seconds long; the power spectrum patterns of flute and trombone remain visible in the mixture.]
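A minimal sketch of computing this representation with NumPy, using the 0.12 s frame and 0.04 s hop from the experiments (exact parameters in the original system may differ):

```python
import numpy as np

def power_spectrum_frames(signal: np.ndarray, sr: int = 44_100,
                          frame_sec: float = 0.12,
                          hop_sec: float = 0.04) -> np.ndarray:
    """Short-term power spectrum: Hamming-windowed FFT per frame."""
    n, hop = int(sr * frame_sec), int(sr * hop_sec)
    window = np.hamming(n)
    frames = [np.abs(np.fft.rfft(signal[s:s + n] * window)) ** 2
              for s in range(0, len(signal) - n + 1, hop)]
    return np.array(frames)   # shape: (num_frames, n // 2 + 1)
```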

Experiment: middle C instrument sounds (pitch equal to C4 in MIDI notation).
Training set: power spectra from 3323 frames, extracted by STFT from 26 single instrument sounds: electric guitar, bassoon, oboe, B-flat clarinet, marimba, C trumpet, E-flat clarinet, tenor trombone, French horn, flute, viola, violin, English horn, vibraphone, accordion, electric bass, cello, tenor saxophone, B-flat trumpet, bass flute, double bass, alto flute, piano, Bach trumpet, tuba, and bass clarinet.
Testing set: fifty-two audio files, each mixed (using Sound Forge) from two of these 26 single instrument sounds.
Classifiers: (1) KNN with Euclidean distance (spectrum-match based classification); (2) Decision Tree (multi-label classification based on previously extracted features).

Timbre pattern match based on power spectrum (n - number of labels assigned to each frame; k - parameter for KNN):

Experiment # | Description                                                   | Recall | Precision | F-score
1            | Feature-based + Decision Tree (n=2)                           | 64.28% | 44.8%     | 52.81%
2            | Spectrum Match + KNN (k=1; n=2)                               | 79.41% | 50.8%     | 61.96%
3            | Spectrum Match + KNN (k=5; n=2)                               | 82.43% | 45.8%     | 58.88%
4            | Spectrum Match + KNN (k=5; n=2) without percussion instruments | 87.1%  |           |

Schema I - Hornbostel-Sachs. [Tree: top-level families Aerophone, Chordophone, Membranophone, Idiophone; aerophones subdivide into Free, Single Reed, Side, Lip Vibration, Whip; example leaves: Alto Flute, Flute, C Trumpet, French Horn, Tuba, Oboe, Bassoon.]

Schema II - Play Methods. [Tree: Muted, Pizzicato, Bowed, Picked, Shaken, Blown, ...; example leaves: Piccolo, Flute, Bassoon, Alto Flute.]

Decision Table (classification attributes CA1, ..., CAn; decision attributes Hornbostel-Sachs and Play Method):

Obj | CA1  | ... | CAn  | Hornbostel-Sachs                | Play Method
1   | 0.22 | ... | 0.28 | [Aerophone, Side, Alto Flute]   | [Blown, Alto Flute]
2   | 0.31 | ... | 0.77 | [Idiophone, Concussion, Bell]   | [Concussive, Bell]
3   | 0.05 | ... | 0.21 | [Chordophone, Composite, Cello] | [Bowed, Cello]
4   | 0.12 | ... | 0.11 | [Chordophone, Composite, Violin] | [Martele, Violin]

Example (a, b, c are classification attributes; d is the decision attribute):

X  | a    | b    | c      | d
x1 | a[1] | b[2] | c[1]   | d[3]
x2 | a[1] | b[1] | c[1]   | d[3,1]
x3 | a[1] | b[2] | c[2,2] | d[1]
x4 | a[2] | b[2] | c[2]   | d[1]

[Diagram: attribute values form a two-level hierarchy, e.g. c[1] and c[2] at Level I with c[2,1], c[2,2] below c[2], and d[1], d[2], d[3] at Level I with d[3,1], d[3,2] below d[3].]

Classifiers are trained at each level of the Hornbostel/Sachs hierarchical tree, down to instrument granularity. We do not include membranophones, because instruments in this family usually do not produce harmonic sounds, so they require special techniques to be identified.

Modules of the cascade classifier for single instrument estimation - Hornbostel/Sachs. [Diagram, pitch 3B: stage confidences 96.02% and 98.94% combine to 96.02% * 98.94% = 95.00%, exceeding the 91.80% of the flat classifier.]

New experiment: middle C instrument sounds (pitch equal to C4 in MIDI notation).
Training set: 2762 frames extracted from the following instrument sounds: electric guitar, bassoon, oboe, B-flat clarinet, marimba, C trumpet, E-flat clarinet, tenor trombone, French horn, flute, viola, violin, English horn, vibraphone, accordion, electric bass, cello, tenor saxophone, B-flat trumpet, bass flute, double bass, alto flute, piano, Bach trumpet, tuba, and bass clarinet.
Classifiers - WEKA: (1) KNN with Euclidean distance (spectrum-match based classification); (2) Decision Tree (classification based on previously extracted features).
Confidence - the ratio of correctly classified instances to the total number of instances.

Classification on different feature groups:

Group | Feature description                                          | KNN    | Decision Tree
A     | 33 spectrum flatness band coefficients                       | 99.23% | 94.69%
B     | 13 MFCC coefficients                                         | 98.19% | 93.57%
C     | 28 harmonic peaks                                            | 86.60% | 91.29%
D     | 38 spectrum projection coefficients                          | 47.45% | 31.81%
E     | Log spectral centroid, spread, flux, rolloff, zero-crossing  | 99.34% | 99.77%

Feature and classifier selection at each level of the cascade system (a sketch of this per-node dispatch follows below):

Node        | Feature           | Classifier
chordophone | Band Coefficients | KNN
aerophone   | MFCC coefficients | KNN
idiophone   | Band Coefficients | KNN

Node              | Feature           | Classifier
chrd_composite    | Band Coefficients | KNN
aero_double-reed  | MFCC coefficients | KNN
aero_lip-vibrated | MFCC coefficients | KNN
aero_side         | MFCC coefficients | KNN
aero_single-reed  | Band Coefficients | Decision Tree
idio_struck       | Band Coefficients | KNN

KNN + Band Coefficients
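A minimal sketch of the per-node dispatch, assuming scikit-learn-style trained models and hypothetical dictionary names (node_models, classifiers); the original system's interfaces are not specified on the slides:

```python
def cascade_classify(features: dict, node: str, node_models: dict,
                     classifiers: dict) -> str:
    """Descend the hierarchy: each node applies its own preferred feature
    set (tables above) and classifier to choose a child; recursion stops
    when the prediction is a leaf, i.e. an instrument name."""
    feature_set, _ = node_models[node]               # e.g. ("mfcc", "knn")
    child = classifiers[node].predict([features[feature_set]])[0]
    if child not in classifiers:                     # no model below => leaf
        return child
    return cascade_classify(features, child, node_models, classifiers)

# Hypothetical wiring, following the tables above:
# node_models = {"root": ("band_coefficients", "knn"),
#                "aerophone": ("mfcc", "knn"), ...}
# classifiers = {"root": knn_root, "aerophone": knn_aero, ...}  # trained models
```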

Classification on the combination of different feature groups. [Charts: classification based on KNN; classification based on Decision Tree.]

From those two experiments we see that: 1) the KNN classifier works better with feature vectors such as spectral flatness coefficients, projection coefficients, and MFCC; 2) the decision tree works better with harmonic peaks and statistical features. Simply adding more features together does not improve the classifiers and sometimes even worsens classification results (e.g., adding harmonic peaks to other feature groups).

HIERARCHICAL STRUCTURE BUILT BY CLUSTERING ANALYSIS.
Seven common methods to calculate the distance or similarity between clusters: single linkage (nearest neighbor), complete linkage (furthest neighbor), unweighted pair-group method using arithmetic averages (UPGMA), weighted pair-group method using arithmetic averages (WPGMA), unweighted pair-group method using the centroid average (UPGMC), weighted pair-group method using the centroid average (WPGMC), and Ward's method.
Six most common distance functions: Euclidean, Manhattan, Canberra (examines the sum of a series of fractional differences between coordinates of a pair of objects), Pearson correlation coefficient (PCC, measures the degree of association between objects), Spearman's rank correlation coefficient, and Kendall (counts the number of pairwise disagreements between two lists).
Clustering algorithm: HCLUST (agglomerative hierarchical clustering) from the R package.
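The slides used R's hclust; as a sketch, one cell of the same experiment grid can be reproduced in Python with SciPy (the 2884-frame feature matrix below is a random stand-in for the real data):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# X: one row per frame, e.g. 33 flatness coefficients (hypothetical data).
X = np.random.rand(2884, 33)

# SciPy naming: 'average' = UPGMA, 'weighted' = WPGMA, 'centroid' = UPGMC,
# 'median' = WPGMC; 'correlation' is the Pearson-based distance. Ward's
# method is formally defined for Euclidean distances, but hclust-style
# tools will still run it on other metrics, as these experiments did.
D = pdist(X, metric="correlation")
Z = linkage(D, method="ward")
cluster_ids = fcluster(Z, t=46, criterion="maxclust")  # one cluster ID per frame
```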

Testing datasets (MFCC, flatness coefficients, harmonic peaks): the middle C pitch group, which contains 46 different musical sound objects. Each sound object is segmented into multiple 0.12 s frames, and each frame is stored as an instance in the testing dataset; there are 2884 frames in total. The dataset is represented by 3 different sets of features (MFCC, flatness coefficients, and harmonic peaks). Total number of experiments = 3 × 7 × 6 = 126.
Clustering: when the algorithm finishes the clustering process, a particular cluster ID is assigned to each single frame.

Contingency table derived from the clustering result (Xij = number of frames of instrument i assigned to cluster j):

             | Cluster 1 | ... | Cluster j | ... | Cluster n
Instrument 1 | X11       | ... | X1j       | ... | X1n
...          | ...       | ... | ...       | ... | ...
Instrument i | Xi1       | ... | Xij       | ... | Xin
...          | ...       | ... | ...       | ... | ...
Instrument n | Xn1       | ... | Xnj       | ... | Xnn
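A minimal sketch of building this table, assuming 0-based integer instrument and cluster IDs per frame:

```python
import numpy as np

def contingency_table(instrument_ids: np.ndarray,
                      cluster_ids: np.ndarray) -> np.ndarray:
    """X[i, j] = number of frames of instrument i that landed in cluster j."""
    n_inst = instrument_ids.max() + 1
    n_clu = cluster_ids.max() + 1
    table = np.zeros((n_inst, n_clu), dtype=int)
    for inst, clu in zip(instrument_ids, cluster_ids):
        table[inst, clu] += 1
    return table
```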

Evaluation results of the Hclust algorithm (the 14 results with the highest score among the 126 experiments); w - number of clusters, α - average clustering accuracy over all the instruments, score = α*w.

Feature               | Method   | Metric    | Score
Flatness Coefficients | ward     | pearson   | 87.3%
Flatness Coefficients | ward     | euclidean | 85.8%
Flatness Coefficients | ward     | manhattan | 85.6%
mfcc                  | ward     | kendall   | 81.0%
mfcc                  | ward     | pearson   | 83.0%
Flatness Coefficients | ward     | kendall   | 82.9%
mfcc                  | ward     | euclidean | 80.5%
mfcc                  | ward     | manhattan | 80.1%
mfcc                  | ward     | spearman  | 81.3%
Flatness Coefficients | ward     | spearman  | 83.7%
Flatness Coefficients | ward     | maximum   | 86.1%
mfcc                  | ward     | maximum   | 79.8%
Flatness Coefficients | mcquitty | euclidean | 88.9%
mfcc                  | average  | manhattan | 87.3%

Clustering result from the Hclust algorithm with the Ward linkage method and the Pearson distance measure, with flatness coefficients as the selected feature: "ctrumpet" and "bachtrumpet" are clustered in the same group, while "ctrumpet_harmonStemOut" forms a group of its own instead of merging with "ctrumpet". Bassoon appears as the sibling of the regular French horn, while "French horn muted" is clustered in a different group together with "English horn" and "oboe".

Looking for the optimal [classification method × data representation] in monophonic music [middle C pitch group - 46 different musical sound objects]:

Experiment | Classification method | Description      | Recall | Precision | F-score
1          | non-cascade           | feature-based    | 64.3%  | 44.8%     | 52.81%
2          | non-cascade           | spectrum-match   | 79.4%  | 50.8%     | 61.96%
3          | cascade               | Hornbostel/Sachs | 75.0%  | 43.5%     | 55.06%
4          | cascade               | play method      | 77.8%  | 53.6%     | 63.47%
5          | cascade               | machine learned  | 87.5%  | 62.3%     | 72.78%

Looking for the optimal [classification method × data representation] in polyphonic music [middle C pitch group - 46 different musical sound objects]. Testing data: 49 polyphonic sounds, created by selecting three different single instrument sounds from the training database and mixing them together. This set of sounds is used to test our different arrangements of [classification method × data representation]; KNN (k=3) is used as the classifier in each experiment.

Exp # | Classifier                | Method                                     | Recall | Precision | F-score
1     | non-cascade               | single-label, based on sound separation    | 31.48% | 43.06%    | 36.37%
2     | non-cascade               | feature-based multi-label classification   | 69.44% | 58.64%    | 63.59%
3     | non-cascade               | spectrum-match multi-label classification  | 85.51% | 55.04%    | 66.97%
4     | cascade (Hornbostel)      | multi-label classification                 | 64.49% | 63.10%    | 63.79%
5     | cascade (play method)     | multi-label classification                 | 66.67% | 55.25%    | 60.43%
6     | cascade (machine learned) | multi-label classification                 | 63.77% | 69.67%    | 66.59%

Auto indexing system for musical instruments + intelligent query answering system for music instruments.

User entering a query: he is looking for a particular piece of music → Mozart, 40th Symphony. The user is not satisfied and enters a new query: "Yes, but I'm sad today; play the same song but make it sadder." → modified Mozart, 40th Symphony, produced by the Action Rules System.

Action Rule. In an information system, an action rule is defined as a term [(ω) ∧ (α → β)] → (ϕ → ψ), where ω is a conjunction of fixed condition features shared by both groups, (α → β) represents the proposed changes in values of flexible features, and (ϕ → ψ) is the desired effect of the action.
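As a purely illustrative sketch, an action rule can be represented as data; the attribute names below (tempo, mode, perceived_emotion) are hypothetical, chosen to echo the Mozart scenario above:

```python
from dataclasses import dataclass

@dataclass
class Change:
    attribute: str
    before: str
    after: str          # an atomic term (attribute, before -> after)

@dataclass
class ActionRule:
    omega: dict         # fixed condition features shared by both groups
    flexible: list      # proposed changes of flexible features (alpha -> beta)
    effect: Change      # desired effect (phi -> psi) on the decision feature

rule = ActionRule(
    omega={"piece": "Mozart, 40th Symphony"},
    flexible=[Change("tempo", "allegro", "adagio"),
              Change("mode", "major", "minor")],
    effect=Change("perceived_emotion", "neutral", "sad"),
)
```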

Action Rules Discovery. A meta-actions based decision system S(d) = (X, A ∪ {d}, V), with A = {A1, A2, ..., Am}.

Influence matrix (rows are meta-actions M1, ..., Mn; columns are attributes A1, ..., Am; Eij describes the influence of Mi on Aj):

    | A1  | A2  | A3  | A4  | ... | Am
M1  | E11 | E12 | E13 | E14 | ... | E1m
M2  | E21 | E22 | E23 | E24 | ... | E2m
M3  | E31 | E32 | E33 | E34 | ... | E3m
M4  | E41 | E42 | E43 | E44 | ... | E4m
... |     |     |     |     |     |
Mn  | En1 | En2 | En3 | En4 | ... | Enm

r = [(A1, a1 → a1') ∧ (A2, a2 → a2') ∧ (A4, a4 → a4')] → (d, d1 → d1')

Candidate action rule: if E32 = [a2 → a2'], E31 = [a1 → a1'], and E34 = [a4 → a4'], then rule r is supported and covered by M3.

"Action Rules Discovery without pre-existing classification rules", Z.W. Ras, A. Dardzinska, Proceedings of RSCTC 2008 Conference, in Akron, Ohio, LNAI 5306, Springer, 2008,

Since the window diminishes the signal on both edges, it leads to information loss due to the narrowing of the frequency spectrum. In order to preserve this information, consecutive analysis frames overlap in time; empirical experiments show the best overlap is two thirds of the window size. [Figure: overlapping frames along the time axis.]
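A minimal sketch of such overlapping framing (hop equal to one third of the window, i.e. 2/3 overlap):

```python
import numpy as np

def overlapping_frames(signal: np.ndarray, frame_len: int) -> list:
    """Hamming-windowed frames with 2/3 overlap: the hop is one third of
    the window, so samples attenuated at one frame's edge fall near the
    center of a neighbouring frame and are not lost."""
    hop = frame_len // 3          # 2/3 overlap => hop = frame_len / 3
    window = np.hamming(frame_len)
    return [signal[s:s + frame_len] * window
            for s in range(0, len(signal) - frame_len + 1, hop)]
```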

Windowing. [Figure: the Hamming window and its spectral leakage.]