
1 EXPLOITING DYNAMIC VALIDATION FOR DOCUMENT LAYOUT CLASSIFICATION DURING METADATA EXTRACTION Kurt Maly Steven Zeil Mohammad Zubair WWW/Internet 2007 Vila Real, Portugal October 5-8, 2007

2 OUTLINE 1. Background: robust automatic extraction of metadata from heterogeneous collections 2. Validation of extracted metadata 3. Post hoc classification of document layouts 4. Conclusions

3 1. Background Diverse, growing government document collections Amount of metadata available varies considerably Automated system to extract metadata from new documents – Classify documents by layout similarity – Template defines the extraction process for a layout class

4 Process Overview

5

6 Sample Metadata Record (including mistakes)
Thesis Title: Intrepidity, Iron Will, and Intellect: General Robert L. Eichelberger and Military Genius
Name of Candidate: Major Matthew H. Fath
Accepted this 18th day of June 2004 by:
Approved by: Thesis Committee Chair Jack D. Kem, Ph.D., Member Mr. Charles S. Soby, M.B.A., Member Lieutenant Colonel John A. Suprin, M.A. Robert F. Baumann, Ph.D.

7 Issue: Layout Classification Key to keeping extraction templates simple Previously explored a variety of techniques based upon the geometric position of text and graphics – e.g., MX-Y trees, learning machines Generally unsatisfactory in either accuracy or compatibility with the template approach

8 Issue: Robustness Sources of errors – OCR software failures – Poor document quality – Classification errors – Template errors – Extraction engine faults Need to detect dubious outputs – refer to a human for inspection & correction

9 2. Validation Exploit statistical and heuristic approaches to evaluate the quality of extracted metadata Reference Models Validation Process – tests – specifications

10 Reference Models From previously extracted metadata – specific to the document collection Phrase dictionaries constructed for fields with specialized vocabularies – e.g., author, organization Statistics collected – mean and standard deviation – permits detection of outputs that are significantly different from collection norms

11 Statistics collected Field length statistics – title, abstract, author, … Phrase recurrence rates for fields with specialized vocabularies – author and organization Dictionary detection rates for words in natural-language fields – abstract, title, …
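The reference-model statistics above (mean and standard deviation of field lengths, used to flag outputs far from collection norms) can be sketched in Python. This is a minimal illustration: the record structure and the 3-sigma outlier threshold are assumptions, not details given in the slides.

```python
import statistics

def field_length_stats(records, field):
    """Collect mean and standard deviation of word counts for one field
    across previously extracted metadata records (the reference model)."""
    lengths = [len(r[field].split()) for r in records if field in r]
    return statistics.mean(lengths), statistics.pstdev(lengths)

def is_outlier(value, field, model, k=3.0):
    """Flag an extracted value whose word count deviates from the
    collection norm by more than k standard deviations (k is an
    illustrative threshold)."""
    mean, std = model[field]
    return abs(len(value.split()) - mean) > k * std
```

In use, the model would be built once from the existing collection and consulted for every newly extracted record.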

12 Field Length (in words), DTIC collection

13 Dictionary Detection (% of recognized words), DTIC collection

14 Phrase Dictionary Hit Percentage, DTIC collection

15 Validation Process Extracted outputs for fields are subjected to a variety of tests – test results are normalized to obtain a confidence value in the range 0.0–1.0 Test results for the same field are combined to form a field confidence Field confidences are combined to form an overall confidence

16 Validation Tests Deterministic – regular patterns such as dates, report numbers Probabilistic – Length: if the value of the metadata is close to the average → high score – Vocabulary: recurrence rate according to the field’s phrase dictionary – Dictionary: detection rate of words in an English dictionary
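The individual tests can be sketched as follows. The Gaussian-shaped normalization of the length test and the particular date regex are illustrative guesses at how raw results map onto the 0.0–1.0 confidence range, not the paper's exact formulas.

```python
import math
import re

def length_confidence(value, mean, std):
    """Probabilistic length test: near 1.0 when the word count is close
    to the collection average, decaying with distance (a Gaussian-shaped
    score; one plausible normalization)."""
    z = (len(value.split()) - mean) / std if std else 0.0
    return math.exp(-0.5 * z * z)

def dictionary_confidence(value, dictionary):
    """Dictionary test: fraction of the field's words found in an
    English word list."""
    words = [w.strip(".,;:").lower() for w in value.split()]
    if not words:
        return 0.0
    return sum(w in dictionary for w in words) / len(words)

def date_confidence(value):
    """Deterministic test: 1.0 if the value matches a simple date
    pattern (illustrative regex), else 0.0."""
    return 1.0 if re.fullmatch(r"\d{1,2} \w+ \d{4}", value.strip()) else 0.0
```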

17 Combining results Validation specification describes – which tests to apply to which fields – how to combine field tests into a field confidence – how to combine field confidences into an overall confidence
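One plausible reading of the two combination steps is a weighted average at each level; the actual weights and combination rule come from the validation specification, so the equal weighting here is only a default assumption.

```python
def field_confidence(test_scores, weights=None):
    """Combine the scores of several tests on one field into a single
    field confidence (weighted average; the real rule is given by the
    validation specification)."""
    if weights is None:
        weights = [1.0] * len(test_scores)
    return sum(s * w for s, w in zip(test_scores, weights)) / sum(weights)

def overall_confidence(field_confs, weights=None):
    """Combine per-field confidences into the document-level confidence
    reported by the validator."""
    return field_confidence(list(field_confs.values()), weights)
```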

18 Validation Specification for DTIC Collection

19 Validation Specification - continued

20 Sample Output from the Validator
<metadata confidence="0.460" warning="ReportDate field does not match required pattern">
  Thesis Title: Intrepidity, Iron Will, and Intellect: General Robert L. Eichelberger and Military Genius
  <PersonalAuthor confidence="0.4" warning="PersonalAuthor: unusual number of words">Name of Candidate: Major Matthew H. Fath</PersonalAuthor>
  <ReportDate confidence="0.0" warning="ReportDate field does not match required pattern">Accepted this 18th day of June 2004 by:</ReportDate>
  Approved by: Thesis Committee Chair Jack D. Kem, Ph.D., Member Mr. Charles S. Soby, M.B.A., Member Lieutenant Colonel John A. Suprin, M.A. Robert F. Baumann, Ph.D.
</metadata>

21 3. Classification Post hoc classification Experimental Results

22 Post hoc Classification Previously attempted a priori classification – choose one layout based on the geometry of the page – apply the template for that chosen layout Alternative: exploit the validator for post hoc selection of the layout – Apply all templates to the given document – Score each output using the validator – Select the template which scored highest
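The post hoc selection loop above can be sketched directly. Here `extract` and `validate` are stand-ins for the extraction engine and the validator described in the slides, not real APIs.

```python
def classify_post_hoc(document, templates, extract, validate):
    """Post hoc layout selection: run every layout's template over the
    document, score each extraction with the validator, and keep the
    layout whose output scored highest."""
    best_layout, best_score, best_metadata = None, -1.0, None
    for layout, template in templates.items():
        metadata = extract(document, template)
        score = validate(metadata)
        if score > best_score:
            best_layout, best_score, best_metadata = layout, score, metadata
    return best_layout, best_score, best_metadata
```

Note the cost trade-off this implies: every template is run on every document, in exchange for not needing a geometric classifier up front.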

23 Experimental Design How effective is post hoc classification? Selected several hundred documents recently added to the DTIC collection – visually classified by humans, comparing to the 4 most common layouts from studies of earlier documents – discarded documents not in one of those classes; 167 documents remained Applied all templates, validated the extracted metadata, and selected the highest-confidence output as the validator’s choice Compared the validator’s preferred layout to the human choices

24 Automatic vs. Human Classifications Post hoc classifier agreed with the human on 74% of cases

25 Post hoc Classification Problem: – WYSIWYG extraction often results in extra words in the extracted data, e.g., in the author field (‘Name of Candidate’, ‘Major’) – These words are not desired in the final output; post-processing to remove them is anticipated but not yet implemented – They artificially reduce validator scores because they are not part of the phrase dictionary Solution: – Post-processing must be done prior to validation

26 Re-interpreting the experiment Subjected author metadata to simulated post-processing – scripts to remove known extraneous phrases specific to the document layouts (military ranks and other honorifics) Agreement between the post hoc classifier and the human classification rose to 99% – far exceeds our best a priori classifiers to date
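The simulated post-processing can be sketched as phrase stripping with regular expressions. The phrase list here is illustrative, drawn from the example record in the slides, not the actual DTIC-specific scripts.

```python
import re

# Illustrative extraneous phrases; the real scripts were specific to the
# document layouts and covered military ranks and other honorifics.
EXTRANEOUS = [r"name of candidate:?", r"major", r"lieutenant colonel", r"mr\.?"]

def clean_author(value):
    """Simulated post-processing: strip known extraneous phrases from an
    extracted author field before it is passed to the validator."""
    for pat in EXTRANEOUS:
        value = re.sub(rf"\b{pat}\s*", "", value, flags=re.IGNORECASE)
    return value.strip()
```

Running validation on the cleaned value avoids the phrase-dictionary misses that dragged down the scores of otherwise correct extractions.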

27 Conclusions Creating a statistical model of existing metadata is a very useful tool for validating metadata extracted from new documents Validation can be used to classify documents and select the right template for the automated extraction process

