
1 Analysis of the 2006 IPA Proofing Roundup Data
William B. Birkett, Charles Spontelli
CGATS TF1, November 2006, Mesa, AZ

2 Mission Statement
TF1 - Objective Color Matching: development of a method, based on colorimetric measurements, that will estimate the probability that hardcopy images reproduced by single or multiple systems, using identical input, will appear similar to the typical human observer.

3 Assumptions
◊ Colorimetry works (patches with the same color values appear identical).
◊ Our application of colorimetry is correct.
◊ Visual illusions are insignificant.

4 Assumptions
◊ Our test targets provide a good sampling of the colors used in images.
◊ Our color spaces are homogeneous - no discontinuities.

5 Assumptions
◊ Test target data correlates with the color of images.
◊ Test target data correlates with the judgment of human observers (using methods yet to be determined).

6 Expectations
◊ Two prints will match perfectly if the measured colors of all corresponding patches in the test targets are identical.
◊ The quality level of a match can be gauged by some statistical measure of test target errors.

7 Question
◊ Is it possible for two prints to match when the measured colors of corresponding patches are different?

8 Answer
That depends on how you define "match":
◊ Colorimetric matching requires that all colors are literally identical.
◊ Appearance matching relies on differently colored prints appearing the same to the eye.

9 Examples
◊ Reproducing a color transparency on a printed sheet (smaller gamut).
◊ Printing on uncoated paper to match coated paper (smaller gamut).
◊ Printing on a bluish paper to match a neutral paper (white point).
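The white-point example can be made concrete. Below is a minimal sketch (ours, not from the presentation) of a Bradford chromatic adaptation transform in Python: it predicts the tristimulus values a color must take under one white point to appear the same as it did under another. The matrix entries are the published Bradford constants; the sample XYZ values are invented for illustration.

```python
import numpy as np

# Published Bradford matrix: maps XYZ into a sharpened, cone-like space.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt XYZ values from a source white to a destination white
    (von Kries-style scaling in the Bradford space)."""
    cone_src = M_BRADFORD @ np.asarray(white_src, dtype=float)
    cone_dst = M_BRADFORD @ np.asarray(white_dst, dtype=float)
    scale = np.diag(cone_dst / cone_src)
    M = np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD
    return M @ np.asarray(xyz, dtype=float)

# Example: a mid-gray measured under a bluish (roughly D65) paper white,
# adapted to a D50 press-sheet white.
D65 = [95.047, 100.0, 108.883]
D50 = [96.422, 100.0, 82.521]
print(bradford_adapt([48.0, 50.0, 52.0], D65, D50))
```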

10 Reinventing the Wheel?
◊ Much work has already been done on appearance matching, for instance CIECAM02.
◊ Can we adapt this work to our needs?
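As one hedged illustration of adapting that work (our sketch, assuming the open-source colour-science Python package, which the presentation does not mention): two patches can be compared by their CIECAM02 appearance correlates rather than by raw colorimetry. The viewing-condition numbers below are assumed values for a D50 booth, not measured ones.

```python
import colour  # pip install colour-science

XYZ_w = [96.42, 100.0, 82.52]  # D50 white point (XYZ scaled to 100)
L_A = 60.0                     # assumed adapting luminance, cd/m^2
Y_b = 20.0                     # assumed 20% background luminance factor

def appearance(XYZ):
    """CIECAM02 correlates (lightness J, chroma C, hue angle h)."""
    spec = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b)
    return spec.J, spec.C, spec.h

# Two hypothetical patches that measure slightly differently:
print(appearance([41.0, 35.0, 20.0]))
print(appearance([41.5, 35.2, 20.3]))
```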

11 2006 IPA Proofing Roundup
◊ Reference press sheets printed with the help of GRACoL experts.
◊ Test targets cut from selected press sheets and given to the participants.
◊ Proofs made to “match the numbers” of these test targets.

12 2006 IPA Proofing Roundup
◊ Human judges evaluated the quality of the match to the press sheets, based on the appearance of images and other test elements.

13 2006 IPA Proofing Roundup
◊ Spectral measurements made of all test targets - press sheets and proofs.
◊ Can we correlate these measurements to the scores given by the judges?

14 Average deltaE?
◊ How about our old favorite, average deltaE?
◊ This has already been tested, but let’s review the data.

15 [Figure slide; no transcript text recovered]

16 Average deltaE?
◊ Again, no useful correlation from this measurement.
◊ Note that the average deltaE is only about 0.7, which is a barely detectable difference in adjacent color patches.
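For reference, the statistic under discussion is simple to compute. This is a minimal sketch (ours), assuming the patch measurements are already in CIELAB and using the plain CIE76 deltaE*ab formula; the sample values are invented.

```python
import numpy as np

def avg_delta_e(lab_ref, lab_test):
    """Average CIE76 deltaE*ab over corresponding patches.

    lab_ref, lab_test: (N, 3) arrays of L*, a*, b* for the N patches
    of the reference (press sheet) and test (proof) targets.
    """
    diff = np.asarray(lab_ref, dtype=float) - np.asarray(lab_test, dtype=float)
    return np.sqrt((diff ** 2).sum(axis=1)).mean()

# Invented three-patch example:
ref  = [[50.0,  0.0,  0.0], [60.0, 20.0, -10.0], [30.0, -5.0, 15.0]]
test = [[50.4,  0.3, -0.2], [60.2, 20.5,  -9.6], [30.1, -4.8, 15.5]]
print(f"average deltaE*ab = {avg_delta_e(ref, test):.2f}")
```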

17 Does this Make Sense?
◊ Significant differences were reported by the judges, yet the measured data is virtually identical.
◊ This is the same result that has baffled us in previous TF1 studies.

18 Our Experiment:
◊ Use the measured data to make simulated test prints, and compare those prints using the same judging criteria.

19 Our Experiment:
◊ We decided to compare the best and the worst scoring proofs:
◊ Vendor 19 (Avg dE = 0.60) (best)
◊ Vendor 35 (Avg dE = 0.54) (worst)
◊ We made ICC profiles from the four datasets using ProfileMaker (PM) 5.0.7.

20 Our Experiment:
◊ Then, we made prints of the IPA test file using an Epson 4800 printer, one for each of the four data sets. The prints were made over a period of about 30 minutes (one after another). We did a nozzle test before and after to ensure consistency.
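The slides do not say how the profiles were applied when printing; as one hedged sketch of the conversion step, Pillow's ImageCms module (LittleCMS bindings) can push an image through a pair of profiles. The filenames are hypothetical, and the modes must match what the profiles actually describe (the IPA test file is assumed to be CMYK here).

```python
from PIL import Image, ImageCms

# Hypothetical filenames; the actual profiles were built in ProfileMaker
# from the measured target data.
src = ImageCms.getOpenProfile("vendor19_dataset.icc")  # CMYK dataset profile
dst = ImageCms.getOpenProfile("epson4800_paper.icc")   # printer/paper profile

im = Image.open("ipa_test_file.tif")  # assumed CMYK test form

# Absolute colorimetric intent reproduces measured color, paper white
# included, which is what "match the numbers" proofing asks for.
xform = ImageCms.buildTransformFromOpenProfiles(
    src, dst, "CMYK", "RGB",
    renderingIntent=ImageCms.Intent.ABSOLUTE_COLORIMETRIC,
)
out = ImageCms.applyTransform(im, xform)
out.save("simulated_proof_vendor19.tif")
```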

21 Our Experiment: [image slide; no transcript text recovered]

22 ◊ The prints were judged by a group of 29 graphic arts students at BGSU. We gave them the same judging sheet that was used by the IPA. They compared the prints in a D50 standard viewing booth, after an explanation of the judging criteria.

23 The Results: BGSU’s average scores for the two proofs are virtually identical, with the IPA’s worst-scoring proof rated just slightly better than the IPA’s best.

24 Conclusion These data sets do not contain information indicating that one pair matches better than the other.

25 Possible Explanations
◊ Our simulation proofs did not represent the data sets accurately enough.
◊ Color sampling of the data sets is too coarse to pick up subtle differences in the proofs.

26 Possible Explanations
◊ Viewing light was not D50, causing metamerism.
◊ Color gradients in the press sheets created differences between images and data sets.

27 Possible Explanations
◊ Non-color attributes such as gloss and bronzing account for differences in the judging.
◊ UV/optical brightener effects caused color differences (some measurements used a UV-cut filter while others didn’t).

28 Future Work
◊ More tests to establish the actual cause(s) of color matching differences among the IPA test proofs.
◊ Eliminate as many variables as possible when doing color research.

29 Recommendations
◊ Nearly perfect colorimetric matching is now routine among proofing systems.
◊ There are other causes of matching failure that need to be considered.
◊ Match quality is not a one-to-one function of average deltaE.

30 Match Quality vs. Average deltaE [Chart: match quality plotted against average deltaE]

31 Recommendation
◊ Match quality measurement should be built upon a quantitative understanding of appearance matching.

32 Actions
◊ Investigate the nature of appearance matching as it applies to print/proof comparisons.
◊ Test potential match quality measures for correlation with visual assessments (see the sketch below).
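As a sketch of the second action (ours, with invented numbers, since judging sheets yield ordinal scores): a rank correlation between a candidate metric and the judges' average scores is a natural first test, and scipy provides it directly.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented values: one candidate match-quality metric per proof,
# and the average judge score for the same proofs.
metric      = np.array([0.60, 0.54, 0.71, 0.48, 0.66, 0.59])
judge_score = np.array([8.1,  6.2,  5.9,  7.4,  6.8,  7.0])

rho, p = spearmanr(metric, judge_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A usable metric should show |rho| well above what the p-value
# attributes to chance; average deltaE did not in this study.
```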

33 Actions
◊ Testing should be done with methods that avoid “unexplainable results.”
◊ Tests should include comparisons of prints that match poorly.

34 Actions
◊ When functional measures are found, test them outside of TF1.
◊ If outside testing is successful, publish our results.

