
A Survey for Interspeech 2013. Xavier Anguera: Information Retrieval-based Dynamic Time Warping.


1 A Survey for Interspeech 2013

2 Xavier Anguera: Information Retrieval-based Dynamic Time Warping

3 The goal of a dynamic time warping (DTW) system is to find matching subsequences in two time series. Recent DTW approaches usually require prior knowledge of the start/end times and are memory-hungry. This paper proposes IR-DTW, a matching approach based on a vector of counts, inspired by the information retrieval (IR) community. Introduction

4

5 The IR-DTW algorithm

6 Linear/Diagonal Subsequence Matching
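The diagonal case can be illustrated with a minimal sketch (the function names and toy data below are invented for illustration, not taken from the paper): slide the query along the reference at a fixed slope of one, with no warping, and keep the offset with the lowest average frame distance.

```python
# Toy sketch of linear/diagonal subsequence matching (not the paper's code).
# The query is slid along the reference at slope 1; the offset with the
# lowest average per-frame distance is the best linear match.

def frame_dist(a, b):
    """Euclidean distance between two feature frames (plain tuples here)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def diagonal_match(query, ref):
    """Return (best_offset, avg_distance) of the best diagonal alignment."""
    best = (None, float("inf"))
    for off in range(len(ref) - len(query) + 1):
        d = sum(frame_dist(q, ref[off + i]) for i, q in enumerate(query)) / len(query)
        if d < best[1]:
            best = (off, d)
    return best

# The reference contains an exact copy of the query starting at frame 3.
query = [(0.0,), (1.0,), (2.0,)]
ref = [(5.0,), (5.0,), (5.0,), (0.0,), (1.0,), (2.0,), (5.0,)]
print(diagonal_match(query, ref))  # -> (3, 0.0)
```

Because the slope is fixed, this only finds matches spoken at the same rate; the non-linear algorithm on the following slides relaxes exactly this restriction.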

7 Non-linear subsequence matching algorithm

8 (figure: alignment of query and reference frames)

9 (figure continued)

10 (figure: query/reference alignment with the maxQDist constraint)

11 Time warping constraints
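The overall idea can be sketched as follows (a hedged illustration under my own simplifications, not the paper's exact algorithm; `retrieve`, `ir_dtw_sketch`, and the toy data are invented): for each query frame, retrieve similar reference frames, then chain matches into monotonically increasing paths whose consecutive query indices differ by at most maxQDist.

```python
# Illustrative sketch of an IR-DTW-style matcher. For each query frame we
# "retrieve" similar reference frames, then chain matches into paths that
# increase in both query and reference time, requiring the query-time gap
# between consecutive matches to stay within max_q_dist. The longest chain
# is the best non-linear match.

def retrieve(frame, ref, thresh=0.5):
    """Indices of reference frames within thresh of the query frame."""
    return [j for j, r in enumerate(ref) if abs(frame - r) <= thresh]

def ir_dtw_sketch(query, ref, max_q_dist=2):
    matches = [(i, j) for i, q in enumerate(query) for j in retrieve(q, ref)]
    best = {}  # (q_idx, r_idx) -> length of the best path ending there
    for i, j in matches:  # matches are already sorted by query index
        best[(i, j)] = 1 + max(
            (best[(pi, pj)] for (pi, pj) in best
             if pi < i and pj < j and i - pi <= max_q_dist),
            default=0)
    return max(best.values(), default=0)

query = [0.0, 1.0, 2.0, 3.0]
ref = [9.0, 0.0, 9.0, 1.1, 2.0, 3.0, 9.0]
print(ir_dtw_sketch(query, ref))  # -> 4 (all four query frames chained)
```

Note that only the retrieved matches are stored, not a full distance matrix, which mirrors the memory savings the paper claims over standard DTW.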

12

13 The time sequences are constructed from 39-dimensional MFCC vectors. The MediaEval 2012 Spoken Web Search (SWS) data is used as the corpus, which contains 7.5 hours of telephone recordings. Maximum Term Weighted Value (MTWV) is used to measure performance. Experiment and Result

14 This paper presents the IR-DTW algorithm, which finds non-linearly matching subsequences while reducing memory use. Future work includes improving different parts of the algorithm to close the gap with S-DTW (subsequence DTW). Conclusion

15 Larry Heck, Dilek Hakkani-Tür, Gokhan Tur: Leveraging Knowledge Graphs for Web-Scale Unsupervised Semantic Parsing

16 Spoken language understanding (SLU) systems aim to automatically identify the intent of the user as expressed in natural language and extract associated arguments, or slots. Most SLU systems are based on statistical methods, which usually rely on supervised training instances. This paper leverages web-scale semantic graphs to bootstrap a web-scale semantic parser with no requirement for semantic schema design, data collection, or manual annotation. Introduction

17 We align the knowledge populated in the semantic graph with related documents and transfer the entity annotations. We then use these annotations to bootstrap models and further improve them by combining them with gazetteers mined from the knowledge graphs and adapting them to the target domain with a MAP-style algorithm. Introduction

18 Gazetteers, which can be seen as entity lists, are an important feature in SLU. But gazetteers often contain ambiguous, confusable, or incorrect phrases; to improve precision, the method learns from user clicks and uses the results to weigh the importance of entities. Refining Gazetteers with Web Search
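The pruning idea can be sketched with toy data (the scoring, threshold, and entries below are invented; the actual refinement in the paper is more involved): entities that attract relatively many user clicks when searched are kept, while rarely clicked, ambiguous entries are dropped.

```python
# Hedged sketch of click-based gazetteer refinement (toy data and scoring).
# Short, ambiguous movie titles like "it" or "up" attract few clicks as
# entity queries and are pruned from the gazetteer.

def refine_gazetteer(gazetteer, clicks, min_clicks=10):
    """Keep entries whose observed click count meets a threshold."""
    return [e for e in gazetteer if clicks.get(e, 0) >= min_clicks]

gazetteer = ["avatar", "it", "up", "the dark knight"]
clicks = {"avatar": 120, "the dark knight": 95, "it": 3, "up": 2}
print(refine_gazetteer(gazetteer, clicks))
# -> ['avatar', 'the dark knight']
```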

19 The unsupervised graph crawling algorithm is summarized as the following six steps:
1. Initialize the crawl: select an entity as the "central pivot node" (CPN).
2. Retrieve sources of NL surface forms: use the CPN to retrieve documents; these documents serve as sources of natural-language surface forms.
3. Annotate with first-order relations: use this gazetteer to automatically annotate the sentences from the retrieved documents; the annotations serve as "truth" labels for subsequent (statistical) training passes.
Unsupervised Data Mining with Knowledge Graphs

20 4. Extract features with large-scale entity lists: for the CPN and each of its K related properties, enumerate all possible entity instances to form large-scale gazetteers; the web-search refining method is applied here to increase precision.
5. Annotate with higher-order relations: extend to higher-order relations and repeat steps 3-4; the documents retrieved above are annotated again with these relations.
6. Crawl the graph to select a new CPN and repeat.
Unsupervised Data Mining with Knowledge Graphs
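The six steps can be sketched as a toy loop (the graph, documents, and helper names are all invented for illustration; a real crawl would run against a web-scale knowledge graph and document retrieval system):

```python
# Toy skeleton of the six-step crawl. A tiny "knowledge graph" maps each
# entity to its related entities by relation; documents are plain strings;
# annotation is substring matching against the gazetteer built from the CPN.

graph = {  # entity -> {relation: [related entities]}
    "james cameron": {"directed": ["avatar", "titanic"]},
    "avatar": {"directed_by": ["james cameron"]},
}
documents = ["james cameron directed avatar", "avatar was a hit"]

def build_gazetteer(cpn):
    """Steps 1 & 4: the CPN plus all entity instances of its properties."""
    gaz = {cpn}
    for ents in graph.get(cpn, {}).values():
        gaz.update(ents)
    return gaz

def annotate(sentence, gaz):
    """Step 3: mark gazetteer phrases found in the sentence as 'truth' labels."""
    return sorted(p for p in gaz if p in sentence)

def crawl(cpn, steps=1):
    annotations = []
    for _ in range(steps):               # step 6: repeat with a new CPN
        gaz = build_gazetteer(cpn)
        for doc in documents:            # step 2: the retrieved documents
            annotations.append((doc, annotate(doc, gaz)))
        related = [e for ents in graph.get(cpn, {}).values() for e in ents]
        cpn = related[0] if related else cpn   # crawl to a neighboring node
    return annotations

out = crawl("james cameron", steps=1)
print(out[0])  # -> ('james cameron directed avatar', ['avatar', 'james cameron'])
```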

21

22 With high-precision sentence labels, we can generate millions of automatically annotated sentences for statistical semantic parsers. We frame the entity extraction (slot filling) task as a sequence classification problem to obtain the most probable entity sequence; we use discriminative conditional random fields (CRFs) for modeling. Modeling Entities for Semantic Parsing

23 After the transition and emission probabilities are optimized, the most probable state sequence Y can be determined using the well-known Viterbi algorithm. Modeling Entities for Semantic Parsing
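The decoding step can be illustrated with a minimal Viterbi sketch (the algorithm is standard, but the toy HMM-style parameters and entity tags below are invented, not the paper's trained CRF):

```python
# Minimal Viterbi decoder. Given start, transition, and emission scores,
# it recovers the most probable state (entity-tag) sequence for a sentence.

def viterbi(obs, states, start, trans, emit):
    """Return the most probable state sequence for the observations."""
    # path maps each state to (best path ending in it, its probability)
    path = {s: ([s], start[s] * emit[s][obs[0]]) for s in states}
    for o in obs[1:]:
        path = {s: max(((p + [s], prob * trans[p[-1]][s] * emit[s][o])
                        for p, prob in path.values()), key=lambda x: x[1])
                for s in states}
    return max(path.values(), key=lambda x: x[1])[0]

states = ["O", "MOVIE"]
start = {"O": 0.8, "MOVIE": 0.2}
trans = {"O": {"O": 0.7, "MOVIE": 0.3}, "MOVIE": {"O": 0.6, "MOVIE": 0.4}}
emit = {"O": {"find": 0.6, "avatar": 0.1},
        "MOVIE": {"find": 0.05, "avatar": 0.8}}
print(viterbi(["find", "avatar"], states, start, trans, emit))
# -> ['O', 'MOVIE']
```

In practice one would work in log space to avoid underflow on long sentences; the toy version multiplies probabilities directly for readability.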

24 We leverage these patterns to induce semantic parsing grammars, or templates, and then use the templates to spot entities, e.g. ent(movie name) -> rel(directed by) -> ent(director). We use the grammars of the entity-relation-based parser as a final pass after the entity extraction parsing. Modeling Relations for Semantic Parsing
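Template-based spotting can be sketched as follows (the lexicon, template handling, and function names are invented toys, not the paper's grammar formalism): once the relation phrase and one entity in the template are recognized, the remaining slot is inferred.

```python
# Toy sketch of applying the induced template
#   ent(movie name) -> rel(directed by) -> ent(director)
# to an utterance: if the relation phrase is present and the right-hand
# side is a known director, the left-hand side is labeled as a movie name.

directors = {"james cameron"}
rel_phrases = {"directed by": "directed_by"}

def apply_template(utterance):
    """If '<X> directed by <director>' matches, label X as movie_name."""
    for phrase, rel in rel_phrases.items():
        if phrase in utterance:
            left, right = utterance.split(phrase)
            if right.strip() in directors:
                return {"movie_name": left.strip(), rel: right.strip()}
    return {}

print(apply_template("avatar directed by james cameron"))
# -> {'movie_name': 'avatar', 'directed_by': 'james cameron'}
```

This illustrates why the relation pass helps as a final step: "avatar" need not appear in any movie gazetteer, since the template infers its slot from context.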

25 Experiments and Results

26 We develop a graph crawling algorithm for data mining and two entity extraction approaches: a CRF-based method and a relation model with induced entity extraction grammars. We also investigate the impact of higher-order knowledge graph relations on semantic parsing. Conclusion

