
National Guidance on Standards for PACS Image Display Devices


1 National Guidance on Standards for PACS Image Display Devices
Dr Rhidian Bramley, PACS & Teleradiology SIG, Hillingdon, London, 22 Nov 2006

2 FAQ Is the LSP PACS web client suitable for diagnostic use?
Are there different recommendations for diagnostic and review workstations? Are there different recommendations for different modality workstations? How do you decide what specification is appropriate for A&E, clinics, wards, theatres etc.? Should we deploy 2, 3 or 5 MP display devices on reporting workstations?

3 Display Device QA Guidance
National: AAPM TG18 (USA); DIN V 6868-57 (Germany); IPEM 91 (UK); RCR (UK); Connecting for Health (England). International: SMPTE; VESA FPDM; ISO 9241 and 13406; DICOM GSDF and GSPS; IEC. The 154-page 2005 American Association of Physicists in Medicine (AAPM) report, 'Assessment of Display Performance for Medical Imaging Systems', is probably the most thorough document on the assessment of medical displays. It describes methods and tools for the assessment of displays and suggests regimes for acceptance testing and for daily, monthly/quarterly and annual quality control tests. The 2001 German standards institution, Deutsches Institut für Normung e.V. (DIN), standard 6868 part 57, Image Quality Assurance in X-Ray Diagnostics, Acceptance Testing for Image Display Devices (DIN 2001), was developed as an acceptance testing standard addressing the requirements for display systems. The standard recognises the use of the DICOM GSDF. In the UK, the Institute of Physics and Engineering in Medicine (IPEM) has recently published Report 91, 'Recommended Standards for the Routine Performance Testing of Diagnostic X-Ray Systems'. Report 91 replaces IPEM Report 77 and provides essential guidance for anyone responsible for diagnostic X-ray equipment, including a section on quality assurance of diagnostic displays. The UK Royal College of Radiologists (RCR) PACS & Teleradiology special interest group has produced guidelines on the specification and quality assurance of primary diagnostic display devices. This guidance includes a minimum and recommended screen display size and resolution for diagnostic image interpretation, in conjunction with best practice guidance on using diagnostic workstations. The International Organization for Standardization (ISO) standard ISO 9241-3:1992, Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), Part 3: Visual Display Requirements (ISO 1992), gave test methods and performance requirements primarily for displays used for text-based terminals.
ISO 13406-2:2001, Ergonomic Requirements for Work with Visual Displays Based on Flat Panels, Part 2: Ergonomic Requirements for Flat Panel Displays (ISO 2001). IEC Ed. 1, Evaluation and Routine Testing in Medical Imaging Departments, Part 3-6: Acceptance Tests – Image Display Devices. This draft standard is based on European discussions in which the UK took an active part. Your comments on this draft are welcome and will assist in the preparation of the consequent British Standard. If no comments are received to the contrary, then the UK will approve this draft and implement it as a British Standard. Comment is particularly welcome on national legislative or similar deviations that may be necessary. Even if this draft standard is not approved by the UK, if it receives the necessary support in Europe, the UK will be obliged to publish the official English language text unchanged as a British Standard and to withdraw any conflicting standard. SMPTE 1992 and VESA 1998/2001 give methods of measuring the performance of displays but no standard (Video Electronics Standards Association, VESA). The need for user evaluation was addressed by the Society of Motion Picture and Television Engineers (SMPTE) in the early 1980s and resulted in the approval and publication of a recommended practice, SMPTE RP 133, Specifications for Medical Diagnostic Imaging Test Pattern for Television Monitors and Hard-Copy Recording Cameras (SMPTE 1991). SMPTE RP 133 described the format, dimensions, and contrast required of a pattern to make measurements of the resolution of display systems for both analog and digital signal sources.

4 UK Guidance Drivers & Objectives
Assist UK PACS deployments: guidance on the purchase and QA of display devices for PACS projects and business cases. Promote clinical safety: set minimum standards. Achieve the benefits of PACS: support clinical workflow and provide realistic, achievable targets. Clinical workflow is required to achieve the benefits of PACS – one way is by providing access to clinical workstations – and this can only be achieved by providing realistic, achievable targets.

5 UK Guidance Scope All PACS display devices used for 'clinical image interpretation', because by definition there is an associated clinical risk. Specify QA tests and minimum standards for a display device to reproduce a DICOM test image. Assess the whole imaging chain from PACS server to workstation display (including the effect of room lighting). Provide guidance on how to view images to optimise spatial and contrast resolution. "This document deals with the QA of primary diagnostic display devices used for clinical image interpretation. Where images are reviewed without a requirement for clinical interpretation, the image quality is considered to be of secondary importance. The quality of display in these circumstances should be considered locally, depending on the purpose of the review." The scope is broad, but it is not a level playing field: we do not expect the same display devices in all areas, so display devices are sub-classified; methods for doing so follow.

6 TG18-QC Pattern The TG18-QC test pattern is shown in Figure 20. The pattern consists of multiple inserts embedded in a mid-pixel-value background. The inserts include the following:
1. Grid lines (one pixel) with thicker lines (three pixels) along the periphery and around the central region, for the evaluation of geometric distortions.
2. Sixteen 102 × 102 (1k version) luminance patches with pixel values varying from 8 to 248 (in the 8-bit version) [128 to 3968 in the 12-bit version] for luminance response evaluation. Each patch contains four small 10 × 10 corner patches (1k version) at ±4 [±64] of pixel value difference from the background: +4 [+64] in the upper left and lower right, −4 [−64] in the lower left and upper right. The small patches are used for visual assessment of luminance response. Additionally, two patches with minimum and maximum pixel value are embedded containing 13 [205] and 242 [3890] pixel value internal patches, similar to the 5% and 95% areas in the SMPTE test pattern.
3. Line-pair patterns at the center and four corners at Nyquist and half-Nyquist frequencies for resolution evaluation, having pixel values at 0–255 [0–4095] and 128–130 [2048–2088].
4. "Cx" patterns at the center and four corners with pixel values of 100, 75, 50, and 25% of the maximum pixel value against a zero-pixel-value background, for resolution evaluation in reference to a set of 12 embedded scoring references with various amounts of Gaussian blurring applied, as tabulated in Table III.9 in appendix III.
5. Contrast-detail "QUALITY CONTROL" letters with various contrasts at minimum, midpoint, and maximum pixel values for user-friendly low-contrast detectability at three luminance levels.
6. Two vertical bars with continuous pixel value variation for evaluating bit depth and contouring artifacts.
7. White and black bars for evaluating video signal artifacts, similar to those in the SMPTE pattern.
8. A horizontal area at the top center of the pattern for visual characterization of cross talk in flat-panel displays.
9. A border around the outside of the pattern, similar to SMPTE's.
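As a sketch of item 2 above, the 8-bit luminance patches and their ±4 corner inserts can be generated programmatically (this is an illustration of the layout described, not the official TG18 pattern generator):

```python
import numpy as np

# Build the 16 luminance patches of the 8-bit TG18-QC pattern: each is
# 102x102 pixels with pixel values 8, 24, ..., 248, and carries four 10x10
# corner patches at +/-4 of the patch value (+4 upper left / lower right,
# -4 lower left / upper right), used for visual luminance-response checks.
def tg18_luminance_patches():
    patches = []
    for value in range(8, 249, 16):          # 16 steps: 8, 24, ..., 248
        patch = np.full((102, 102), value, dtype=np.uint8)
        hi = min(value + 4, 255)
        lo = max(value - 4, 0)
        patch[:10, :10] = hi                 # upper left: +4
        patch[-10:, -10:] = hi               # lower right: +4
        patch[-10:, :10] = lo                # lower left: -4
        patch[:10, -10:] = lo                # upper right: -4
        patches.append(patch)
    return patches

patches = tg18_luminance_patches()
print(len(patches))                              # 16
print(patches[0][50, 50], patches[-1][50, 50])   # 8 248
```

Embedding these patches at the positions described, together with the line-pair and Cx inserts, reproduces the luminance portion of the pattern.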

7 4.5.3 Visual Evaluation of Display Resolution
Assessment Method Display resolution can be evaluated by visually assessing the appearance of the Cx patterns in the TG18-QC or TG18-CX test patterns. In displaying these patterns, it is important to verify that the patterns are displayed at one display pixel per image pixel, as any digital magnification will hide the actual response. Most image viewers have a function to accomplish this display mode. In order not to be limited by the MTF of the eye, use of a magnifying glass is recommended. Using the TG18-QC pattern and a magnifier, the examiner should inspect the displayed Cx patterns at the center and four corners of the display area and score the appearance using the provided scoring scale. The line-pair patterns at Nyquist and half-Nyquist frequencies in the horizontal and vertical directions should also be evaluated in terms of visibility of the lines. The average brightness of the patterns should also be evaluated using the grayscale step pattern as a reference. The difference in visibility between horizontal and vertical patterns should be noted. The relative width of the black and white lines in these patches should also be examined using a magnifier. The resolution uniformity may be ascertained across the display area using the TG18-CX test pattern and a magnifier in the same way that the Cx elements in the TG18-QC pattern are evaluated. Alternatively, the resolution response can be visually assessed using the TG18-PX test pattern. The pattern should be displayed so that each image pixel is mapped to one display pixel. Using a magnifier with a reticule, the physical shape and size of a few pixels in different areas of the pattern at the center and the corners are evaluated. The size of the maximum-brightness pixels should be measured at approximately 50% and 5% of the luminance profile (Figure 4c). The resolution-addressability ratio (RAR) is assessed as the ratio of the 50% size (FWHM) to the nominal display pixel size.
If notable astigmatism is present at the corners of the active display area, the astigmatism ratio, i.e. the ratio of the long versus short axis of the spot ellipse, should be measured. It should be noted that this method of resolution measurement requires experience to achieve consistent results. Expected Response In the visual inspection of the TG18-QC and TG18-CX patterns on primary class display systems, the Cx elements should be scored between 0 and 4 at all locations. This limit coincides with RAR ≤ 1.15 (Table III.9). For secondary class displays, the Cx scores should be between 0 and 6 (RAR ≤ 1.47). For both classes, the horizontal and vertical line-pair patterns at Nyquist frequency should be discernible at all locations and in all directions. In CRTs, it is normal for the performance at the center to be better than that at any corner due to natural deflection distortions. Also, the horizontal line-pair patterns at Nyquist frequency usually appear overall slightly brighter than the vertical patterns because the vertical patterns contain a higher percentage of rise/fall time per pixel, delivering less beam energy to the phosphor screen. At the Nyquist frequency, the difference in the average luminance should be less than 30%. A difference of more than 50% indicates a slow video amplifier not well suited for the matrix size. The vertical and horizontal line-pair patterns at half-Nyquist frequency should show less of a luminance difference, since the vertical patterns contain two pixels per line, providing more dwell time for the electron beam. A significant difference between the thicknesses of the black and white lines is also indicative of a poorly shaped pixel with excessive spread, which diminishes the black content. In evaluating the display resolution using the TG18-PX test pattern on CRTs, the pixel shapes should be nearly round, indicating a close match of the optics and video bandwidth.
The pixel should show a near-Gaussian distribution of luminance, indicating symmetrical rise and fall times. Improper damping of the video amplifier or overshoot phenomena cause distortions that can be described as crescent-shaped echoes and/or comet tails following the intended pixel. The size of the pixel profile at 50% of the maximum should compare closely to the manufacturer's specification. The 5% size should be about twice the 50% size (Figure 4). Larger 5% sizes cause notable display resolution loss due to the increase in pixel overlap. The RAR should be between 0.9 and 1.1 for primary class displays (Muka et al. 1997). This range provides a balance between a structured appearance (e.g., raster lines visible) and an excessive resolution loss. The maximum astigmatism ratio should be less than 1.5 over the display area for primary class displays. For LCDs, the pixel intensity should not extend beyond the nominal pixel area.
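The two ratios used in these checks reduce to simple arithmetic. A minimal sketch (function names and example measurements are mine, not from TG18):

```python
# Resolution-addressability ratio (RAR): measured 50% (FWHM) spot size
# divided by the nominal display pixel pitch. TG18 expects 0.9-1.1 for
# primary class displays.
def rar(fwhm_mm: float, pixel_pitch_mm: float) -> float:
    return fwhm_mm / pixel_pitch_mm

# Astigmatism ratio: long axis of the spot ellipse over the short axis.
# TG18 expects < 1.5 over the display area for primary class displays.
def astigmatism_ratio(long_axis_mm: float, short_axis_mm: float) -> float:
    return long_axis_mm / short_axis_mm

# Example: a 0.22 mm FWHM spot on a 0.2 mm pitch display.
print(round(rar(0.22, 0.2), 2))      # 1.1 -> at the edge of the 0.9-1.1 range
print(astigmatism_ratio(0.3, 0.25))  # 1.2 -> below the 1.5 limit
```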

8 Classification of Display Devices
Are there different recommendations for diagnostic and review workstations? If so, what constitutes a review workstation?

9 AAPM TG18 Classification
Primary display systems: those used for the interpretation of medical images. They are typically used in radiology and in certain medical specialties such as orthopedics. Secondary display systems: those used for viewing medical images by medical staff or specialists other than radiologists after an interpretive report is rendered. The AAPM guidelines are probably the most comprehensive and stringent, but they are not directly applicable to the UK and had no UK input. This subclassification has limitations in UK practice: we do not have 24/7 reporting, so clinicians are expected to interpret clinical images and therefore also expect the right to do so. We cannot tell them that, because a report exists, they are not able to make their own clinical interpretation. Examples: oncology, dental, chest etc.

10 IEC Classification
Primary usage: use of an image display system or its components for the interpretation of medical images toward rendering a clinical diagnosis. Secondary usage: use of an image display system or its components for viewing medical images for medical purposes other than primary usage. The current version is in draft form, with final submissions due by the end of this month, and it will supersede the UK guidance. It is also unsatisfactory: rendering a clinical diagnosis is only one part of clinical image interpretation. For example, deciding treatment and cancer staging come after the diagnosis; they are not making a diagnosis but still require image interpretation. The conclusion is that all display devices in scope are primary. What are examples of secondary usage?

11 RCR/CfH Guidance This guidance deals with the QA of primary diagnostic display devices used for clinical image interpretation. Where images are reviewed without a requirement for clinical interpretation, the image quality is considered to be of secondary importance. The quality of display in these circumstances should be considered locally, depending on the purpose of the review.

12 Classification of Display Devices
Are there different recommendations for different modality workstations? Mammography Plain Radiography CT, MR

13 Classification by Image Modality
DIN V 6868-57: Class 1 (projection radiography), Class 2 (cross-sectional imaging). IEC: Mammography; Radiography, Fluoroscopy; CT, MRI; US, NM. We can see some logic in this: mammography is a special case with dedicated workstations, and radiography has higher matrix sizes, though there is no logic behind grouping it with fluoroscopy; US and NM have smaller image matrices and use colour. In terms of PACS, however, it is fundamentally flawed: we have very few dedicated modality PACS workstations; almost all are multimodality workstations. So how do we decide which display devices should go where?

14 Classification by Area?
How do you decide what specification is appropriate for Radiology, A&E, clinics, wards, theatres etc.? Should everyone have the same display devices? How do you justify one clinical area having a better workstation than another?

15 Clinical Risk Assessment
What can go wrong? The patient may be harmed as a result of inadequate quality of the PACS display device. How often? How bad? Need for action?

16 Clinical Risk Assessment
Location         | How often | How bad | Total
Radiology        |           |         |
Fracture clinic  |           |         |
A&E              |           |         |
ITU              |           |         |
Chest clinic     |           |         |
Theatres         |           |         |
Wards            |           |         |
This is based on real clinical locations; broad groupings are used, but you can take this down to individual workstation level. For 'how often', the assumption is that specification and QA exist so that the workstation does not fail to display something clinically significant, or display an artefact that becomes clinically significant. Leaving pathologies and case mix aside for the moment, the simplest way to look at 'how often' is how often the workstation is used. If you have two workstations, one used for 10 hours a day and the other for one hour a day, and extra money to spend on upgrading, which would you choose? We don't know how long each is used for – it is site specific – but of these locations, which do you think would be used the most, with the most examinations interpreted? This gives a relative risk score.

17 Clinical Risk Assessment
Location         | How often | How bad | Total
Radiology        | 5         |         |
Fracture clinic  | 3         |         |
A&E              | 4         |         |
ITU              | 2         |         |
Chest clinic     |           |         |
Theatres         | 1         |         |
Wards            |           |         |
'How often' is a relative risk score based on how often the workstation is used. 'How bad' is the severity of the patient coming to harm. Instinctively, the first thing we think of is pathology, but that is a difficult assessment: is missing a fracture worse than missing a lung cancer? A disabling fracture in a young person affects their whole life; a missed lung cancer may offer no prospect of cure in the elderly. It is difficult to capture this, and we must recognise that workstations are used for multiple pathologies. Look at it a different way, in terms of dual reporting: if studies are dual reported, the risk to the patient of one person missing something is less. So if something is missed on a ward, it may not have a significant impact. Compare this with, for example, a fracture clinic: local procedures may be that these images are not dual reported in radiology, so the impact of missing something is greater. Clearly the time until the second interpretation may also have an impact, as the patient may come to harm in between: less with a hot reporting service, more with a long delay and a backlog of reporting. In general, a patient is more likely to come to immediate harm if they are acutely unwell, so on top of this, 'how bad' factors in something extra for ITU and A&E, and possibly theatres.

18 Clinical Risk Assessment
Location         | How often | How bad | Total
Radiology        | 5         |         |
Fracture clinic  | 3         |         |
A&E              | 4         |         |
ITU              | 2         |         |
Chest clinic     |           |         |
Theatres         | 1         |         |
Wards            |           |         |

19 Clinical Risk Assessment
Location         | How often | How bad | Total
Radiology        | 5         | 5       | 25
Fracture clinic  | 3         | 5       | 15
A&E              | 4         | 3       | 12
ITU              | 2         | 4       | 8
Chest clinic     |           |         | 6
Theatres         | 1         |         |
Wards            |           |         |
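The scoring behind the totals is just frequency times severity. A minimal sketch (the severity values are back-calculated from the totals shown, e.g. 25 = 5 × 5, and are otherwise assumptions):

```python
# Relative clinical risk score per location: "how often" the workstation
# is used x "how bad" a missed or distorted finding would be, both on
# simple ordinal scales.
def risk_score(how_often: int, how_bad: int) -> int:
    return how_often * how_bad

# (how_often, how_bad) pairs consistent with the slide's totals:
areas = {
    "Radiology":       (5, 5),
    "Fracture clinic": (3, 5),
    "A&E":             (4, 3),
    "ITU":             (2, 4),
}

for area, (often, bad) in areas.items():
    print(area, risk_score(often, bad))   # Radiology 25, Fracture clinic 15, ...
```

The ranking, not the absolute numbers, is what drives where the better display devices go.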

20 Clinical Risk Assessment
What can go wrong? The patient may be harmed as a result of inadequate specification and QA of the PACS workstation. How often? How bad? Need for action? That was assessing relative risk. If there is a clinical risk and the display device mitigates it, the risk is highest in radiology; therefore radiology needs better display devices than the wards. But that assumes there is a clinical risk. Step back and review the literature – is there really a clinical risk?

21 Evidence base examples 1
Effect of Monitor Luminance and Ambient Light on Observer Performance in Soft-Copy Reading of Digital Chest Radiographs Radiology 2004;232: When adequate window width and level are applied to soft-copy images, the primary diagnosis with chest radiographs on the monitor is unlikely to be affected under low ambient light and a monitor luminance of 25 foot-lamberts or more. Published online before print July 23, 2004. © RSNA, 2004. Thoracic Imaging. Effect of Monitor Luminance and Ambient Light on Observer Performance in Soft-Copy Reading of Digital Chest Radiographs. Jin Mo Goo, MD, Ja-Young Choi, MD, Jung-Gi Im, MD, Hyun Ju Lee, MD, Myung Jin Chung, MD, Daehee Han, MD, Seong Ho Park, MD, Jong Hyo Kim, PhD and Sang-Hee Nam, PhD. PURPOSE: To examine the combined effects of monitor luminance and ambient light on observer performance for detecting abnormalities in a soft-copy interpretation of digital chest radiographs. MATERIALS AND METHODS: A total of 254 digital chest radiographs were displayed on a high-resolution cathode ray tube monitor at three luminance levels (25, 50, and 100 foot-lamberts) under three ambient light levels (0, 50, and 460 lux). Six chest radiologists reviewed each image in nine modes of combined luminance and ambient light. The observers were allowed to adjust the window width and level of the soft-copy images. The abnormalities included nodule, pneumothorax, and interstitial disease. Observer performance was analyzed in terms of the receiver operating characteristics. The observers reported their subjective level of visual fatigue with each viewing mode. A statistical test was conducted for each of the abnormalities and for fatigue score by using repeated-measures two-way analysis of variance with an interaction. RESULTS: The detection of nodules was the only reading that was affected by the ambient light with a statistically significant difference (P < .05). 
Otherwise, observer performance for detecting a nodule, pneumothorax, and interstitial disease was not significantly different in the nine-mode comparison. There was no evidence that the luminance of the monitors was related to the ambient light for any of the abnormalities. The fatigue score showed a statistically significant difference due to both the luminance and ambient light. CONCLUSION: When adequate window width and level are applied to soft-copy images, the primary diagnosis with chest radiographs on the monitor is unlikely to be affected under low ambient light and a monitor luminance of 25 foot-lamberts or more.

22 Evidence base examples 2
Personal Computer versus Workstation Display: Observer Performance in Detection of Wrist Fractures on Digital Radiographs Radiology 2005;237: The results of this study showed that there was no difference in accuracy of observer performance for detection of wrist fractures with a PC compared with that with a PACS workstation. Personal Computer versus Workstation Display: Observer Performance in Detection of Wrist Fractures on Digital Radiographs. Anthony J. Doyle, MB, ChB, James Le Fevre, BHB and Graeme D. Anderson, MB, ChB. From the Radiology Department, Middlemore Hospital, Hospital Rd, Otahuhu, Auckland 6, New Zealand. Received August 18, 2004; revision requested October 21; revision received December 17; accepted January 20. PURPOSE: To retrospectively compare the accuracy of observer performance with personal computer (PC) compared with that with dedicated picture archiving and communication system (PACS) workstation display in the detection of wrist fractures on computed radiographs. MATERIALS AND METHODS: This study was conducted according to the principles of the Declaration of Helsinki (2002 version) of the World Medical Association. The institutional clinical board approved the study; informed consent was not required. Seven observers independently assessed randomized anonymous digital radiographs of the wrist from 259 subjects; 146 had fractures, and 113 were healthy control subjects (151 male and 108 female subjects; average age, 33 years). Follow-up radiographs and/or computed tomographic scans were used as the reference standard for patients with fractures, and follow-up radiographs and/or clinical history data were used as the reference standard for controls. The PC was a standard hospital machine with a 17-inch (43-cm) color monitor with which Web browser display software was used. The PACS workstation had two portrait 21-inch (53-cm) monochrome monitors that displayed 2300 lines. 
The observers assigned scores to the radiographs on a scale of 1 (no fracture) to 5 (definite fracture). Receiver operating characteristic (ROC) curves, sensitivity, specificity, and accuracy were compared. RESULTS: The areas under the ROC curves were almost identical for the PC and workstation (0.910 vs 0.918, respectively; difference, 0.008; 95% confidence interval: –0.029, 0.013). The average sensitivity with the PC was almost identical to that with the workstation (85% vs 84%, respectively), as was the average specificity (82% vs 81%, respectively). The average accuracy (83%) was the same for both. CONCLUSION: The results of this study showed that there was no difference in accuracy of observer performance for detection of wrist fractures with a PC compared with that with a PACS workstation.

23 Evidence base examples 3
ROC Analysis for Diagnostic Accuracy of Fracture by Using Different Monitors Journal of Digital Imaging 2006;19:276 A significant difference was observed in the results obtained by using two kinds of monitors. Color monitors cannot serve as substitutes for monochromatic monitors in the process of interpreting computed radiography (CR) images with fractures.

24 Options to Mitigate Risk
Install a higher spec display device. Optimise QA – ensure the workstation is configured correctly (+ ambient lighting). Training – ensure the workstation is used correctly. Implement a 'hot reporting' service. Disallow clinical image interpretation on the workstation. Assuming there is a risk, albeit small, the next step in the risk assessment is to assess the options for mitigating it, only one of which is installing a higher spec display device. The first step is to optimise QA on the existing device; there is little point specifying a higher quality display if you are not using it to its potential. A QA programme is recommended, and QA also covers the environment: ambient lighting, noise etc. Again, it is unwise to upgrade a workstation from the minimum spec without considering this, but we have to be realistic: we cannot expect a perfectly quiet setting with the lights off in all clinical settings. Training – the story about Nain at CoC, with the image occupying a quarter of the screen; you could compensate for that by buying a screen four times the size, but I suggest that is not a good use of resources. A hot reporting (dual reporting) service mitigates risk because two people look at the study. Disallowing interpretation requires considering workflow so that clinicians can still readily access PACS. Like all of the above, if a mitigating action is impractical, it will not be followed.

25 Importance of Clinical Workflow
Workflow should dictate where 'diagnostic' quality display devices are positioned – providing it does not render them non-diagnostic! Workflow may be used to justify a higher spec display device in some areas: where viewing conditions cannot be optimised fully (e.g. operating theatres, angiography rooms), and where large numbers of plain radiographs are reported, to reduce requirements for systematic magnification (spatial resolution) and windowing (contrast resolution).

26 RCR guidance on how to view images
To optimise spatial resolution View image fully in maximum available screen area to optimise pattern recognition of non-spatially limited abnormalities Systematically magnify image to acquisition resolution or greater (100%, 200% etc) to reveal spatial detail zoom and pan image around screen use magnifying glass tool “Studies suggest that there is little reduction in the diagnostic power of using these techniques when compared to displaying the whole image at 1:1 on higher resolution screens, but there is an increase in the time taken to make a report.” “High fidelity dual screen displays (>= 3 MP) are recommended in radiology and other areas where large numbers of radiographic images are reported, to reduce reporting times and thereby optimise department workflow.”
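The systematic-magnification arithmetic above can be made concrete. A sketch with assumed numbers (a hypothetical 3000-pixel-wide CR image on a display that fits it into 1500 screen pixels):

```python
# Zoom factor needed to reach acquisition resolution (one image pixel per
# screen pixel), relative to a fit-to-screen view of the image.
def zoom_for_1to1(image_px: int, fitted_px: int) -> float:
    return image_px / fitted_px

zoom = zoom_for_1to1(3000, 1500)
visible_fraction = 1 / zoom ** 2      # area fraction visible at 100% zoom
print(zoom, visible_fraction)         # 2.0 0.25
```

At 100% only a quarter of this image is on screen at once, which is why zoom-and-pan (or a magnifying glass tool) is needed on lower-resolution displays, and why higher-resolution portrait displays reduce reporting time.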

27 RCR guidance on how to view images
To optimise contrast resolution, view the image at different window level and window width presets to optimise demonstration of different structures, e.g. soft tissue, lung, bone windows. "By changing the centre (level) and range (width) of the grey-scale values presented, it should be possible to demonstrate all the grey-scale data represented in the image. The minimum specification of a display device in terms of contrast resolution parameters is therefore somewhat arbitrary, and depends on how the windowing tools are used during normal workflow." "High fidelity display devices are recommended in radiology and other areas where large numbers of images are reported to reduce requirements for windowing images, and thus assist in reporting workflow."
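The level/width mapping quoted above can be sketched as a simplified linear windowing transform (the exact DICOM VOI LUT formula adds half-pixel offsets; this simplified form and the function name are mine):

```python
# Map a stored pixel value to an 8-bit display grey level given a window
# level (centre) and width: values below the window go to black, values
# above to white, and the window is stretched across the output range.
def apply_window(value: float, level: float, width: float) -> int:
    lo = level - width / 2
    hi = level + width / 2
    if value <= lo:
        return 0
    if value >= hi:
        return 255
    return round((value - lo) / width * 255)

# CT soft-tissue window (level 40 HU, width 400 HU):
print(apply_window(-160, 40, 400))  # 0   (at/below the window floor)
print(apply_window(40, 40, 400))    # 128 (mid-grey at the window level)
print(apply_window(240, 40, 400))   # 255 (at/above the window ceiling)
```

Narrowing the width stretches a smaller range of stored values across the same grey scale, which is how windowing reveals contrast the display alone cannot show.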

28 RCR Guidance Notes [1] The minimum and recommended specifications for diagnostic display devices are only appropriate if clinical image viewing is performed according to image viewing guidelines. All diagnostic image interpretation should be performed on DICOM images making use of the application software zoom, pan, magnification, and windowing tools to optimise spatial and contrast resolution. [2] LCD devices should be run at their native resolution to ensure there is a 1:1 match between screen pixels and screen resolution, and therefore no loss of image quality due to screen interpolation. CRT displays can be run at a variety of resolutions with no loss of display quality; however, care should be taken that the correct aspect ratio is maintained to avoid distortion of the image. [3] Where the majority of reporting performed on a diagnostic workstation is of cross-sectional imaging, lower resolution landscape style displays (>= 1.3 megapixels) are considered adequate, providing larger images are interpreted with the aid of systematic magnification. [4] High fidelity (>= 3 megapixels) portrait style displays are recommended in radiology and other areas where large numbers of plain radiographic images are reported to reduce requirements for systematic magnification, and thus reduce image interpretation and reporting times. [5] Display devices may be set initially to operate at a fraction of the maximum luminance in the manufacturer's specification. This can be adjusted to compensate for the decline in performance of the back-light over time whilst maintaining grey-scale calibration. [6] AAPM TG18 recommendation. [7] High luminance displays can increase the number of perceivable grey-scale levels (JND index steps) but may have a detrimental effect on user performance through fatigue and the human visual adaptation response. The optimum operating luminance level may vary between users. 
[8] IPEM 91 recommendation [9] 24-bit and 32-bit colour are equivalent to 8-bit monochrome grey-scale. Colour display devices are recommended for displaying colour images, but they generally perform less well than monochrome display devices in terms of maximum luminance and contrast ratio. [10] The number of permissible pixel defects per million is defined by the ISO standard. Class 1 panels should have no defects. Class 2 panels should be replaced if they have > 2 whole pixel defects per million. Appropriate use of application software zoom, pan and magnification tools can negate the effect of pixel defects in clinical practice.
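The pixel-defect criterion in note [10] is a defects-per-million calculation. A minimal sketch (function name and the example panel are mine):

```python
# Whole-pixel defects expressed per million pixels; note [10]'s Class 2
# rule says replace the panel if this exceeds 2 per million.
def defects_per_million(defects: int, width_px: int, height_px: int) -> float:
    return defects / (width_px * height_px) * 1e6

# A 3 MP (2048 x 1536) panel with 4 whole-pixel defects:
rate = defects_per_million(4, 2048, 1536)
print(round(rate, 2), rate > 2)   # 1.27 False -> within the Class 2 limit
```

The same four defects on a 1 MP panel would exceed the limit, which is one practical reason the rule is stated per million pixels rather than as an absolute count.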

29 IPEM 91 GUIDANCE
Physical parameter                  | Frequency       | Remedial level
Image display monitor condition     | Daily to weekly | Monitors should be clean; the perceived contrast of the test pattern should be consistent between monitors connected to the same workstation; verify that the 5% and 95% patches are visible.
Greyscale contrast luminance ratio  | 3 monthly       | Ratio white to black < 250
Distance and angle calibration      |                 | ± 5 mm, ± 3°
Resolution                          |                 | Grade AAPM TG18-QC resolution patterns according to the reference score (Cx > 4)
Greyscale drift                     | 6 to 12 monthly | Black baseline ± 25%; white baseline ± 20%
DICOM greyscale calibration         |                 | GSDF ± 10%
Uniformity                          |                 | U% > 30%
Variation between monitors          |                 | Black baseline > 30%; white baseline > 30%
Room illumination                   |                 | > 15 lux
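Two of these remedial levels reduce to simple pass/fail arithmetic. A sketch using the thresholds from the table (function names and the example photometer readings are mine):

```python
# Greyscale contrast: remedial action if the white-to-black luminance
# ratio falls below 250.
def contrast_ratio_ok(white_cdm2: float, black_cdm2: float) -> bool:
    return white_cdm2 / black_cdm2 >= 250

# Greyscale drift: measured luminance must stay within a fractional
# tolerance of its baseline (black +/-25%, white +/-20% in the table).
def drift_ok(measured: float, baseline: float, tolerance: float) -> bool:
    return abs(measured - baseline) / baseline <= tolerance

# Example measurements: 400 cd/m2 white, 1.2 cd/m2 black, white point
# drifted from a 420 cd/m2 baseline (20% tolerance).
print(contrast_ratio_ok(400, 1.2))   # True  (ratio ~333)
print(drift_ok(400, 420, 0.20))      # True  (~4.8% drift)
```

Automating these checks against periodic photometer readings is one way to run the 3-monthly and 6-12-monthly parts of the QA programme consistently.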

30 FAQ Is the LSP PACS web client suitable for diagnostic use?
Are there different recommendations for diagnostic and review workstations? Are there different recommendations for different modality workstations? How do you decide what specification is appropriate for A&E, clinics, wards, theatres etc.? Should we deploy 2, 3 or 5 MP display devices on reporting workstations?

31 Summary RCR Guidance Set an 'achievable' minimum standard for all workstations used for clinical image interpretation. The number and locations of clinical 'diagnostic' workstations are determined by workflow analysis and a QA programme. A higher standard is recommended for some workstations to optimise clinical safety and workflow: where large numbers of plain radiography images are reported, and where viewing conditions cannot be optimised fully. NB the guidance does not mention dual reporting – if an image is being clinically interpreted, dual reporting matters where a definitive report is produced (controversial…) and where images are not interpreted in radiology.

32 Which is the PACS workstation?
Guidance is that all displays used for interpreting plain radiographs are orientated portrait style. This optimises on-screen resolution, as most radiographs are longer in the vertical axis, as if looking at the patient standing up. A B

