The Princeton Shape Benchmark Philip Shilane, Patrick Min, Michael Kazhdan, and Thomas Funkhouser.


1 The Princeton Shape Benchmark Philip Shilane, Patrick Min, Michael Kazhdan, and Thomas Funkhouser

2 Shape Retrieval Problem: 3D Model → Shape Descriptor → Model Database → Best Matches

4 Example Shape Descriptors: D2 Shape Distributions, Extended Gaussian Image, Shape Histograms, Spherical Extent Function, Spherical Harmonic Descriptor, Light Field Descriptor, etc. How do we know which is best?

5 Typical Retrieval Experiment: (1) create a database of 3D models; (2) group the models into classes; (3) for each model, rank the other models by similarity and measure how many models of the same class appear near the top of the ranked list; (4) present average results.
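The experiment above can be sketched in Python. This is a minimal illustration, not the paper's code: the descriptor representation and `distance` function are placeholders, and the score reported here is the first-tier measure described later in the talk.

```python
def retrieval_experiment(descriptors, classes, distance):
    """Leave-one-out retrieval experiment over a classified model database.

    descriptors: dict mapping model id -> shape descriptor (any type)
    classes:     dict mapping model id -> class label
    distance:    function giving the dissimilarity of two descriptors
    Returns the mean first-tier score over all queries.
    """
    scores = []
    for query in descriptors:
        # Rank every other model by ascending distance to the query.
        others = [m for m in descriptors if m != query]
        ranked = sorted(others,
                        key=lambda m: distance(descriptors[query], descriptors[m]))
        # First tier: fraction of the query's class found in the top
        # (class size - 1) positions of the ranked list.
        k = sum(1 for m in classes if classes[m] == classes[query]) - 1
        if k <= 0:
            continue  # singleton class: no relevant matches to find
        hits = sum(1 for m in ranked[:k] if classes[m] == classes[query])
        scores.append(hits / k)
    return sum(scores) / len(scores)
```

With a toy database of two classes whose "descriptors" are just numbers and `distance = lambda x, y: abs(x - y)`, every query's nearest neighbor is its classmate and the mean first tier is 1.0.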

13 Shape Retrieval Results
Shape Descriptor | Compare Time (µs) | Storage Size (bytes) | Norm. DCG Gain
LFD | 1,300 | 4,700 | +21.3%
REXT | 229 | 17,416 | +13.3%
SHD | 27 | 2,148 | +10.2%
GEDT | 450 | 32,776 | +10.1%
EXT | 8 | 552 | +6.0%
SECSHEL | 451 | 32,776 | +2.8%
VOXEL | 450 | 32,776 | +2.4%
SECTORS | 14 | 552 | -0.3%
CEGI | 27 | 2,056 | -9.6%
EGI | 14 | 1,032 | -10.9%
D2 | 2 | 136 | -18.2%
SHELLS | 2 | 136 | -27.3%

14 Outline: Introduction; Related work; Princeton Shape Benchmark; Comparison of 12 descriptors; Evaluation techniques; Results; Conclusion

15 Typical Shape Databases
Database | Num Models | Num Classes | Num Classified | Largest Class
Osada | 133 | 25 | 133 | 20%
MPEG-7 | 1,300 | 15 | 227 | 15%
Hilaga | 230 | 32 | 230 | 15%
Technion | 1,068 | 17 | 258 | 10%
Zaharia | 1,300 | 23 | 362 | 14%
CCCC | 1,841 | 54 | 416 | 13%
Utrecht | 684 | 6 | 512 | 45%
Taiwan | 1,833 | 47 | 549 | 12%
Viewpoint | 1,890 | 85 | 1,280 | 12%

19 Typical Shape Databases


21 Typical Shape Databases: 153 dining chairs, 25 living room chairs, 16 beds, 12 dining tables, 8 chests, 28 bottles, 39 vases, 36 end tables


23 Goal: Benchmark for 3D Shape Retrieval. Requirements: a large number of classified models; a wide variety of class types; not too many or too few models in each class; standardized evaluation tools; the ability to investigate properties of descriptors; freely available to researchers.

24 Princeton Shape Benchmark. Large shape database: 6,670 models, of which 1,814 are classified into 161 classes, with separate training and test sets. Standardized suite of tests: multiple classifications and targeted sets of queries. Standardized evaluation tools: visualization software and quantitative metrics.

25 Princeton Shape Benchmark: 51 potted plants, 33 faces, 15 desk chairs, 22 dining chairs, 100 humans, 28 biplanes, 14 flying birds, 11 ships

26 Princeton Shape Benchmark (PSB)
Database | Num Models | Num Classes | Num Classified | Largest Class
Osada | 133 | 25 | 133 | 20%
MPEG-7 | 1,300 | 15 | 227 | 15%
Hilaga | 230 | 32 | 230 | 15%
Technion | 1,068 | 17 | 258 | 10%
Zaharia | 1,300 | 23 | 362 | 14%
CCCC | 1,841 | 54 | 416 | 13%
Utrecht | 684 | 6 | 512 | 45%
Taiwan | 1,833 | 47 | 549 | 12%
Viewpoint | 1,890 | 85 | 1,280 | 12%
PSB | 6,670 | 161 | 1,814 | 6%


28 Outline: Introduction; Related work; Princeton Shape Benchmark; Comparison of 12 descriptors; Evaluation techniques; Results; Conclusion

29 Comparison of Shape Descriptors: Shape Histograms (Shells), Shape Histograms (Sectors), Shape Histograms (SecShells), D2 Shape Distributions, Extended Gaussian Image (EGI), Complex Extended Gaussian Image (CEGI), Spherical Extent Function (EXT), Radialized Spherical Extent Function (REXT), Voxel, Gaussian Euclidean Distance Transform (GEDT), Spherical Harmonic Descriptor (SHD), Light Field Descriptor (LFD)
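As a concrete example from the list above, the D2 shape distribution is the simplest of these descriptors: a histogram of distances between random pairs of points on the model's surface. A minimal sketch, assuming the model is given as a list of triangles (each three 3D vertex tuples); for brevity it picks triangles uniformly rather than proportionally to their area, which a faithful implementation would not do.

```python
import math
import random

def sample_point(tri):
    """Uniformly sample a point on one triangle via barycentric coordinates."""
    a, b, c = tri
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)
    return tuple((1 - s) * a[i] + s * (1 - r2) * b[i] + s * r2 * c[i]
                 for i in range(3))

def d2_descriptor(triangles, n_pairs=1024, n_bins=64, max_dist=2.0):
    """D2 shape distribution: normalized histogram of pairwise distances.

    Assumes the model has been rescaled so distances fall below max_dist.
    """
    hist = [0] * n_bins
    for _ in range(n_pairs):
        p = sample_point(random.choice(triangles))
        q = sample_point(random.choice(triangles))
        d = math.dist(p, q)
        hist[min(int(d / max_dist * n_bins), n_bins - 1)] += 1
    # Normalize so descriptors from different models are comparable.
    return [h / n_pairs for h in hist]
```

Two such histograms can then be compared with any bin-wise distance (e.g. L1), which is why D2 is among the cheapest descriptors in the result tables.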

30 Comparison of Shape Descriptors

31 Evaluation Tools. Visualization tools: precision/recall plot, best matches, distance image, tier image. Quantitative metrics: nearest neighbor, first and second tier, E-Measure, Discounted Cumulative Gain (DCG).
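The quantitative metrics listed above can be sketched over a query's ranked relevance list, where entry i is 1 if the i-th best match is in the query's class and 0 otherwise. This is an illustrative version (E-Measure omitted for brevity): the log2 discount is the standard DCG formulation, normalized here by the ideal ordering. For context, the "Norm. DCG Gain" column in the result tables appears to be each descriptor's percentage difference from the average DCG, since the column sums to roughly zero.

```python
import math

def nearest_neighbor(rel):
    """1 if the top-ranked match is in the query's class, else 0."""
    return rel[0]

def tier(rel, multiple=1):
    """First tier (multiple=1) or second tier (multiple=2): fraction of
    the query's class retrieved in the top multiple * |class| results."""
    n_rel = sum(rel)
    if n_rel == 0:
        return 0.0
    return sum(rel[:multiple * n_rel]) / n_rel

def dcg(rel):
    """Discounted cumulative gain, normalized by the ideal ordering
    (all relevant models ranked first)."""
    n_rel = sum(rel)
    if n_rel == 0:
        return 0.0
    gain = rel[0] + sum(r / math.log2(i) for i, r in enumerate(rel[1:], start=2))
    ideal = 1 + sum(1 / math.log2(i) for i in range(2, n_rel + 1))
    return gain / ideal
```

A perfect ranking (all classmates first) scores 1.0 on every metric; DCG degrades gracefully as relevant models slide down the list, which is why it is the headline metric in these slides.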

37 Function vs. Shape: functional at the top levels of the hierarchy, shape-based at the lower levels. Example hierarchy: root → {Man-made, Natural}; Man-made → {Furniture, Vehicle}; Furniture → {Table, Chair}; Table → {Rectangular table, Round table}.

38 Base Classification (92 classes): Man-made → Furniture → Table → Round table

39 Coarse Classification (44 classes): Man-made → Furniture → Table

40 Coarser Classification (6 classes): Man-made → Furniture

41 Coarsest Classification (2 classes): Man-made
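The four granularities above amount to cutting each model's path through the hierarchy at a chosen depth. A hypothetical sketch (the path is the round-table example from these slides; the depth-to-class-count mapping is illustrative, not the benchmark's exact grouping):

```python
def classify(path, depth):
    """Class label obtained by truncating a hierarchy path at the given
    depth (depth=1 is the coarsest split, the full path the finest)."""
    return "/".join(path[:depth])

# Example path from the slides above.
path = ["Man-made", "Furniture", "Table", "Round table"]
```

Here `classify(path, 1)` yields "Man-made" (the two-class man-made vs. natural split), while `classify(path, 4)` keeps the full base class.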

42 Granularity Comparison: Base classification (92 classes) vs. the Man-made/Natural split (2 classes)

43 Rotationally Aligned Models (650)

44 All Models (907)

45 Complex Models (200)

46 Performance by Property (normalized DCG gain, %)
Shape Descriptor | Rotationally Aligned | Base | Depth Complexity
LFD | 18.8 | 21.3 | 28.2
REXT | 12.3 | 13.3 | 15.0
SHD | 7.6 | 10.2 | 8.9
GEDT | 13.0 | 10.1 | 13.5
EXT | 5.0 | 6.0 | 6.1
SecShells | 5.2 | 2.8 | 2.2
Voxel | 4.7 | 2.4 | 0.2
Sectors | 2.0 | -0.3 | -1.6
CEGI | -8.7 | -9.6 | -12.7
EGI | -11.2 | -10.9 | -9.1
D2 | -19.7 | -18.2 | -19.9
Shells | -29.1 | -27.3 | -30.9

47 Conclusion
Methodology to compare shape descriptors: vary classifications; use query lists targeted at specific properties. Unexpected results: EGI is good at discriminating man-made vs. natural objects, though poor at fine-grained distinctions; LFD gives good overall performance across tests. Freely available: the Princeton Shape Benchmark (1,814 classified polygonal models) and source code for the evaluation tools.

48 Future Work: multi-classifiers; evaluating the statistical significance of results; applying these techniques to other domains (text retrieval, image retrieval, protein classification).

49 Acknowledgements David Bengali partitioned thousands of models. Ming Ouhyoung and his students provided the light field descriptor. Dejan Vranic provided the CCCC and MPEG-7 databases. Viewpoint Data Labs donated the Viewpoint database. Remco Veltkamp and Hans Tangelder provided the Utrecht database. Funding: National Science Foundation grants CCR-0093343 and IIS-0121446.

50 The End http://shape.cs.princeton.edu/benchmark
