Assessment of Radiologists’ Performance with CADe for Digital Mammography Elodia B. Cole, MS Medical University of South Carolina Department of Radiology and Radiological Sciences ACRIN Fall Meeting 2010
Co-Authors Etta Pisano, MD Medical University of South Carolina Department of Radiology and Radiological Science Zheng Zhang, PhD Brown University Center for Statistical Science Helga Marques, MS Brown University Center for Statistical Science Robert Nishikawa, PhD University of Chicago Department of Radiology Martin Yaffe, PhD Sunnybrook Health Sciences Center Depts of Medical Physics and Medical Imaging R. Edward Hendrick, PhD University of Colorado - Denver Department of Radiology Emily Conant, MD University of Pennsylvania Department of Radiology Constantine Gatsonis, PhD Brown University Center for Statistical Science Laurie Fajardo, MD University of Iowa Department of Radiology Janet Baum, MD Harvard Medical School Department of Radiology
Background The Digital Mammographic Imaging Screening Trial (DMIST) was conducted between 2001 and 2003. The study compared two technologies: screen-film and digital mammography. Sensitivity for both screen-film and digital mammography in DMIST was 0.41. Although CAD for mammography was available for screen-film mammography at the time, it was not available for digital mammography, so CAD use was not allowed in DMIST.
Study Purpose Would CAD have made a difference in radiologists' performance with digital mammography during DMIST, had it been available?
Study Aim To assess the performance of radiologists in interpreting digital mammograms from DMIST, first without and then with CAD.
Methods & Materials CAD Systems Tested –R2 ImageChecker Cenova v1.0 (Hologic) –iCAD SecondLook v1.4 (iCAD) Cases –Digital mammograms (“for processing” and “for presentation” states) acquired from the digital mammography systems used in DMIST. 300 cases for each machine type: 150 cancers, 150 non-cancers. R2 (Hologic Selenia, GE 2000D, Fischer SenoScan, Fuji CR); iCAD (Hologic Selenia, GE 2000D, Fuji CR)
Methods & Materials Readers –15 radiologist readers with clinical R2 CAD experience in the R2 study. –14 radiologist readers with clinical iCAD CAD experience in the iCAD study. Image Preparation –DICOM headers for Fuji and Fischer cases were brought up to current standards with custom software to allow CAD processing.
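The header-normalization step above can be sketched as follows. This is a minimal illustration only: a plain dict stands in for a DICOM dataset, and the tag names and default values shown are assumptions for the sketch, not the study's actual custom software.

```python
# Illustrative sketch of bringing a legacy DICOM header up to a level
# a CAD system will accept. A dict stands in for a real DICOM dataset;
# the tags and defaults below are assumed for the example.

REQUIRED_DEFAULTS = {
    "SOPClassUID": "1.2.840.10008.5.1.4.1.1.1.2",   # Digital Mammography X-Ray Image
    "PresentationIntentType": "FOR PROCESSING",
    "BodyPartExamined": "BREAST",
}

def normalize_header(header: dict) -> dict:
    """Return a copy of `header` with any missing required tags
    filled in with defaults, leaving existing values untouched."""
    fixed = dict(header)
    for tag, default in REQUIRED_DEFAULTS.items():
        fixed.setdefault(tag, default)
    return fixed

# A legacy header missing the presentation-intent tag:
legacy = {"Manufacturer": "FUJI", "BodyPartExamined": "BREAST"}
fixed = normalize_header(legacy)
print(fixed["PresentationIntentType"])  # FOR PROCESSING
```

In practice a library such as pydicom would read and rewrite the actual files; the dict keeps the sketch self-contained.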
Methods & Materials The experiment –All reading sessions took place at UNC. Two dedicated CAD reading rooms were set up: one for the R2 readings, one for the iCAD readings. –Only one reader per CAD system at a time. Each reader's session took about two days to complete. –Each reader first reviewed the case without CAD marks and provided a BI-RADS score for each breast. –The reader then applied CAD to the images via a toggle button on the mammography review workstation, which displayed the CAD structured report. –The reader then reviewed the case with CAD marks and again provided a BI-RADS score for each breast.
Conclusions It is likely that CAD would not have improved radiologist performance (sensitivity, specificity, AUC) had it been available during the DMIST study. Radiologists are seldom, if ever, influenced by CAD marks in making their diagnostic decisions.
Acknowledgments CAD equipment provided by Hologic and iCAD Softcopy review workstations provided by Hologic and Sectra