1. The CLEF 2005 Cross-Language Image Retrieval Track
Organised by Paul Clough, Henning Müller, Thomas Deselaers, Michael Grubinger, Thomas Lehmann, Jeffrey Jensen and William Hersh

2. Overview (ImageCLEF: cross-language image retrieval at CLEF 2005, 22/09/05)
- Image Retrieval and CLEF
- Motivations
- Tasks in 2005:
  - Ad-hoc retrieval of historic photographs and medical images
  - Automatic annotation of medical images
  - Interactive task
- Summary and future work

3. Image Retrieval and CLEF
- Cross-language image retrieval: images are often accompanied by text, which is used for retrieval
- Began in 2003 as a pilot experiment
- Aims of ImageCLEF:
  - Investigate retrieval combining visual features and associated text
  - Promote the exchange of ideas
  - Provide resources for IR evaluation

4. Motivations
- Image retrieval is a good application for CLIR:
  - Images are assumed to be language-independent
  - Many images have associated text (e.g. captions, metadata, Web page links)
  - CLIR has potential benefits for image vendors and users
- Image retrieval can be performed using:
  - Low-level visual features (e.g. texture, colour and shape)
  - Abstracted features expressed using text
  - A combination of visual and textual approaches

5. ImageCLEF 2005
- 24 participants from 11 countries
- Specific domains and tasks:
  - Retrieval of historic photographs (St Andrews)
  - Retrieval and annotation of medical images (medImageCLEF and IRMA)
- Additional co-ordinators:
  - William Hersh and Jeffrey Jensen (OHSU)
  - Thomas Lehmann and Thomas Deselaers (Aachen)
  - Michael Grubinger (Melbourne)
- Links with the MUSCLE NoE, including a pre-CLEF workshop: http://muscle.prip.tuwien.ac.at/workshops.php

6. Ad-hoc retrieval from historic photographs
Paul Clough (University of Sheffield)
Michael Grubinger (Victoria University)

7. St Andrews image collection
From: St Andrews Library historic photographic collection, http://specialcollections.st-and.ac.uk/photo/controller
The same example query, "Pictures of English lighthouses", in several topic languages:
- English: Pictures of English lighthouses
- German: Bilder von englischen Leuchttürmen
- Spanish: Fotos de faros ingleses
- Russian: Изображения английских маяков
- Finnish: Kuvia englantilaisista majakoista
- Japanese: イングランドにある灯台の写真
- Arabic: صور لمنارات انجليزيه

8. Topics
- 28 search tasks (topics), each consisting of a title, a narrative and example images
- Topics more general than in 2004 and more "visual", e.g. waves breaking on a beach, a dog in a sitting position
- Topics translated by native speakers:
  - 8 languages for title and narrative (e.g. German, Spanish, Chinese, Japanese)
  - 25 languages for title only (e.g. Russian, Bulgarian, Norwegian, Hebrew, Croatian)
- 2004 topics and qrels used as training data

9. Relevance judgements
- Staff from Sheffield University acted as assessors
- Assessors judged topic pools:
  - Top 50 images from all 349 runs
  - An average of 1,376 images per pool
  - 3 assessments per image (including the topic creator)
- Ternary relevance judgements (relevant / partially relevant / non-relevant)
- Qrels: images judged relevant or partially relevant by the topic creator and at least one other assessor
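The pooling and qrel rules described on this slide can be sketched in a few lines. This is a minimal illustration, not the track's actual assessment tooling; the data layout (runs as topic-to-ranked-list dicts, judgements as assessor/verdict pairs) is an assumption made for the example.

```python
# Sketch of pooled relevance assessment (illustrative data layout,
# not the track's real file formats).

def build_pool(runs, topic, depth=50):
    """Union of the top-`depth` images from every submitted run."""
    pool = set()
    for run in runs:                       # each run: {topic: [image_id, ...]}
        pool.update(run.get(topic, [])[:depth])
    return pool

def in_qrels(judgements, image):
    """ImageCLEF rule: an image enters the qrels if the topic creator
    AND at least one other assessor judged it relevant or partially
    relevant. `judgements[image]` is a list of (assessor, verdict)
    pairs with verdict in {'relevant', 'partial', 'non-relevant'}."""
    votes = judgements.get(image, [])
    creator_ok = any(a == 'creator' and v in ('relevant', 'partial')
                     for a, v in votes)
    other_ok = any(a != 'creator' and v in ('relevant', 'partial')
                   for a, v in votes)
    return creator_ok and other_ok
```

With 349 runs pooled to depth 50, overlapping ranked lists shrink the union well below 349 x 50, which is how the average pool ends up at around 1,376 images.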

10. Submissions & Results (1)
11 groups (5 new*): CEA*, NII*, Alicante, CUHK*, DCU, Geneva, Indonesia*, Miracle, NTU, Jaen*, UNED

Dimension     | Type          | #Runs (%)  | Avg. MAP
Language      | English       | 119 (34%)  | 0.2084
Language      | Non-English   | 230 (66%)  | 0.2009
Run type      | Automatic     | 349 (100%) | 0.2399
Feedback (QE) | Yes           | 142 (41%)  | 0.2399
Feedback (QE) | No            | 207 (59%)  | 0.2043
Modality      | Image         | 4 (1%)     | 0.1500
Modality      | Text          | 318 (91%)  | 0.2121
Modality      | Text + Image  | 27 (8%)    | 0.3086
Initial query | Image         | 4 (1%)     | 0.1418
Initial query | Title         | 274 (79%)  | 0.2140
Initial query | Narrative     | 6 (2%)     | 0.1313
Initial query | Title + Narr  | 57 (16%)   | 0.2314
Initial query | Title + Image | 4 (1%)     | 0.4016
Initial query | All           | 4 (1%)     | 0.3953
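All the MAP figures reported in these result slides follow the standard definition of mean average precision. A minimal sketch (not the evaluation scripts actually used by the track, which typically relied on standard TREC-style tools):

```python
# Mean average precision (MAP), the track's ranking measure, in its
# standard form. Data layout (dicts of ranked lists and relevant sets)
# is illustrative.

def average_precision(ranked, relevant):
    """AP for one topic: the mean of precision at each rank where a
    relevant image appears, normalised by the number of relevant images."""
    hits, precision_sum = 0, 0.0
    for rank, image in enumerate(ranked, start=1):
        if image in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(run, qrels):
    """MAP: AP averaged over all topics in the qrels."""
    return sum(average_precision(run.get(topic, []), relevant)
               for topic, relevant in qrels.items()) / len(qrels)
```

Because AP rewards relevant images near the top of the ranking, a run can score well even without retrieving every relevant image, which is why early precision matters so much in the per-modality comparisons.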

11. Submissions & Results (2) – highest MAP per language
Language           | #Runs | Max. MAP | Group         | Initial query | Feedback | Modality
English            | 70    | 0.4135   | CUHK          | Title + img   | Yes      | Text + img
Chinese (trad.)    | 8     | 0.3993   | NTU           | Title + narr  | Yes      | Text + img
Spanish (Lat. Am.) | 36    | 0.3447   | Alicante/Jaen | Title         | Yes      | Text
Dutch              | 15    | 0.3435   | Alicante/Jaen | Title         | Yes      | Text
Visual             | 3     | 0.3425   | NTU           | Visual        | Yes      | Image
German             | 29    | 0.3375   | Alicante/Jaen | Title         | Yes      | Text
Spanish (Euro.)    | 28    | 0.3175   | UNED          | Title         | Yes      | Text
Portuguese         | 12    | 0.3073   | Miracle       | Title         | No       | Text
Greek              | 9     | 0.3024   | DCU           | Title         | Yes      | Text
French             | 17    | 0.2864   | Jaen          | Title + narr  | Yes      | Text
Japanese           | 16    | 0.2811   | Alicante      | Title         | Yes      | Text
Russian            | 15    | 0.2798   | DCU           | Title         | Yes      | Text
Italian            | 19    | 0.2468   | Miracle       | Title         | Yes      | Text

12. Submissions & Results (3) – highest MAP per language (continued)
Language         | #Runs | Max. MAP | Group     | Initial query | Feedback | Modality
Chinese (simpl.) | 21    | 0.2305   | Alicante  | Title         | Yes      | Text
Indonesian       | 9     | 0.2290   | Indonesia | Title         | No       | Text + img
Turkish          | 5     | 0.2225   | Miracle   | Title         | No       | Text
Swedish          | 7     | 0.2074   | Jaen      | Title         | No       | Text
Norwegian        | 5     | 0.1610   | Miracle   | Title         | No       | Text
Filipino         | 5     | 0.1486   | Miracle   | Title         | No       | Text
Polish           | 5     | 0.1558   | Miracle   | Title         | No       | Text
Romanian         | 5     | 0.1429   | Miracle   | Title         | No       | Text
Bulgarian        | 2     | 0.1293   | Miracle   | Title         | No       | Text
Czech            | 2     | 0.1219   | Miracle   | Title         | No       | Text
Croatian         | 2     | 0.1187   | Miracle   | Title         | No       | Text
Finnish          | 2     | 0.1114   | Miracle   | Title         | No       | Text
Hungarian        | 2     | 0.0968   | Miracle   | Title         | No       | Text

13. Summary
- Most groups focused on text retrieval; fewer combined runs than in 2004, but combining still gives the highest average MAP
- Translation was the main focus for many groups; 13 languages were attempted by at least 2 groups
- More use of title and narrative than in 2004
- Relevance feedback (QE) improves results
- Topics still dominated by semantics, but this is typical of searches in this domain

14. Ad-hoc medical retrieval task
Henning Müller (University Hospitals Geneva)
William Hersh, Jeffrey Jensen (OHSU)

15. Collection
- 50,000 medical images in 4 sub-collections with heterogeneous annotation
- Radiographs, photographs, PowerPoint slides and illustrations
- Mixed languages for the annotations (French, German and English)
- In 2004 only 9,000 images were available

16. Search topics
- Topics based on 4 axes:
  - Modality (e.g. x-ray, CT, MRI)
  - Anatomic region shown in the image (e.g. head, arm)
  - Pathology (disease) shown in the image
  - Abnormal visual observation (e.g. enlarged heart)
- Different types of topic, identified from a survey:
  - Visual (11): visual approaches alone expected to perform well
  - Mixed (11): text and visual approaches expected to perform well
  - Semantic (3): visual approaches not expected to perform well
- Topics consist of an annotation in 3 languages and 1-3 query images

17. An example (topic #20, mixed)
- English: Show me microscopic pathologies of cases with chronic myelogenous leukemia.
- German: Zeige mir mikroskopische Pathologiebilder von chronischer Leukämie. ("Show me microscopic pathology images of chronic leukemia.")
- French: Montre-moi des images de la leucémie chronique myélogène. ("Show me images of chronic myelogenous leukemia.")

18. Relevance assessments
- Medical doctors made the relevance judgements
- Only one judge per topic, owing to time and budget constraints; some additional judgements were made to verify consistency
- Ternary scale (relevant / partially relevant / non-relevant); for ranking, collapsed to relevant vs. non-relevant
- Image pools created from the submissions:
  - Top 40 images from 134 runs
  - An average of 892 images per topic to assess

19. Submissions
- 13 groups submitted runs (24 registered); several found the resources very interesting but lacked the manpower to participate
- 134 runs submitted
- Several categories for submissions:
  - Manual vs. automatic
  - Data source used
  - Visual / textual / mixed
  - All languages, or a single one

20. Results (1)
- Mainly automatic and mixed submissions; some additionally had to be classified as manual
- Large variety of text/visual retrieval approaches:
  - Ontology-based
  - Simple tf/idf weighting
  - Manual classification before visual retrieval

Query type | Automatic | Manual
Visual     | 28        | 3
Textual    | 14        | 1
Mixed      | 86        | 2
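One of the text approaches named above is simple tf/idf weighting. A minimal sketch of the idea, assuming raw term frequency and a log(N/df) inverse document frequency; participating systems will have used their own variants and normalisations:

```python
import math
from collections import Counter

def tfidf_scores(query_terms, docs):
    """Score each annotation against the query terms.
    docs: {doc_id: list of terms}. Returns {doc_id: score} using
    raw term frequency tf and idf = log(N / df)."""
    n = len(docs)
    df = Counter()                       # document frequency per term
    for terms in docs.values():
        df.update(set(terms))
    scores = {}
    for doc_id, terms in docs.items():
        tf = Counter(terms)
        scores[doc_id] = sum(tf[t] * math.log(n / df[t])
                             for t in query_terms if df[t])
    return scores
```

The intuition: a term scores highly for a document when it occurs often there (high tf) but rarely across the collection (high idf), which suits short, heterogeneous medical annotations.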

21. Results (2) – highest MAP
Query type | Automatic          | Manual
Visual     | I2Rfus.txt 0.146   | i2r-vk-avg.txt 0.092
Textual    | IPALI2R_Tn 0.208   | OHSUmanual.txt 0.212
Mixed      | IPALI2R_TIan 0.282 | OHSUmanvis.txt 0.160

22. Average results per topic type

23. Summary
- Text-only approaches perform better than image-only, but some visual systems have high early precision; this depends on the topics formulated
- Visual systems perform very badly on semantic queries
- The best overall systems use combined approaches
- GIFT, used by many participants as a baseline, was still the best fully automatic visual system
- Few manual runs

24. Automatic annotation task
Thomas Deselaers, Thomas Lehmann (RWTH Aachen University)

25. Automatic annotation
- Goal: compare state-of-the-art classifiers on a medical image annotation task; a purely visual task
- Task:
  - 9,000 training and 1,000 test medical images from Aachen University Hospital
  - 57 classes identifying modality, body orientation, body region and biological system (IRMA code), e.g. 01: plain radiography, coronal, cranium, musculoskeletal system
  - Class labels in English and German; classes unevenly distributed

26. Example of an IRMA code
Example: 1121-127-720-500
- 1121 (modality): radiography, plain, analog, overview
- 127 (body orientation): coronal, AP, supine
- 720 (body region): abdomen, middle
- 500 (biological system): uropoetic system
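Structurally, an IRMA code is four hyphen-separated digit groups, one per axis. A small sketch of splitting a code into the axes named on slide 25; the axis names come from that slide, while the real IRMA tables that decode each digit group into labels are not reproduced here:

```python
# Split an IRMA code string into its four axes. Axis names taken from
# the task description (modality, body orientation, body region,
# biological system); the digit-to-label decoding tables are omitted.

AXES = ("modality", "body orientation", "body region", "biological system")

def parse_irma(code):
    """'1121-127-720-500' -> {'modality': '1121', ...}"""
    parts = code.split("-")
    if len(parts) != len(AXES):
        raise ValueError(f"expected {len(AXES)} axes, got {len(parts)}: {code!r}")
    return dict(zip(AXES, parts))
```

For the 2005 task the full code was collapsed to 57 flat classes, so participants classified images into class IDs rather than predicting each axis separately.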

27. Example images
http://irma-project.org

28. Participants
- 26 groups registered; 12 submitted runs
- 41 runs submitted in total
- Groups: CEA (France), CINDI (Montreal, CA), medGIFT (Geneva, CH), Infocomm (Singapore, SG), Miracle (Madrid, ES), U Montreal (Montreal, CA), Mt. Holyoke College (US), NCTU-DBLAB (TW), NTU (TW), RWTH Aachen CS (Aachen, DE), IRMA Group (Aachen, DE), U Liège (Liège, BE)

29. Results
Baseline error rate: 36.8%
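The annotation task was scored by error rate, the fraction of the 1,000 test images assigned a wrong class. A one-function sketch of the measure (the data below is illustrative, not the track's results):

```python
def error_rate(predicted, truth):
    """Fraction of test images assigned the wrong class label."""
    if len(predicted) != len(truth):
        raise ValueError("prediction and truth lists must align")
    wrong = sum(p != t for p, t in zip(predicted, truth))
    return wrong / len(truth)
```

The quoted 36.8% baseline means the reference system mislabelled 368 of the 1,000 test images; submitted runs were ranked by how far below that figure they could get.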

30. Conclusions
- Continued global participation from a variety of research communities
- Improvements in the ad-hoc medical task: realistic topics, larger medical image collection
- Introduction of the medical annotation task
- Overall, combining text and visual approaches works well for the ad-hoc task

31. ImageCLEF 2006 and beyond …

32. ImageCLEF 2006 …
- New ad-hoc task: IAPR collection of 25,000 personal photographs, with annotations in English, German and Spanish
- Medical ad-hoc: same data, new topics
- Medical annotation: larger collection, more fine-grained classification
- New interactive task using Flickr.com (more in the iCLEF talk)

33. … and beyond
- Image annotation task: annotate general images with simple concepts, using the LTU collection of 80,000 Web images (~350 categories)
- MUSCLE collaboration: create visual queries for the ad-hoc task (IAPR); funding a workshop in 2006
- All tasks involve cross-language aspects in some way

