
CS689 Computational Medical Imaging Analysis
Chapter 3: Image Representations, Displays, Communications, and Databases
Jun Zhang
Laboratory for Computational Medical Imaging & Data Analysis
Department of Computer Science, University of Kentucky, Lexington, KY 40506

Introduction
A biomedical image is a discrete representation, in ordered graphical form, of a biological object. Images can be functions of two or three dimensions, generated by a variety of means, sampled within different coordinate systems and dimensions, with a variable quantized magnitude associated with each sample. The individual numerical elements of a digital image are pixels (picture elements) for 2D images and voxels (volume picture elements) for 3D images.

Biomedical Images
Biomedical images are generated by a variety of energy transmissions through tissues or cells (e.g., light, X-rays, magnetic fields, waves) and recorded by one or more sensors. The mapping relationships between the transmitter, the object, and the sensor(s) are defined physically and mathematically. The correspondence between the object and the resulting image can be described analytically, so subsequent analysis of the image can be carried out by direct inference, as if the analysis were performed on the object itself.

Volume Image Representation
3D biomedical images are generally modeled as a "stack of slices" and effectively represented in the computer as a 3D array. Important factors in 3D images are the organization of the tomographic data, the reference coordinate system used to uniquely address its values in three-space, the transformations used to obtain the desired isotropy and orientation, and useful display of the data in either "raw" or processed form.
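The "stack of slices" model maps naturally onto a 3D array indexed by (slice, row, column). A minimal Python sketch (the function names and the spacing values are illustrative assumptions, not taken from the slides):

```python
# A volume image as a "stack of slices": a 3D array indexed by
# (slice, row, column), plus the voxel spacing needed to map array
# indices back to physical coordinates.

def make_volume(n_slices, n_rows, n_cols, fill=0):
    """Allocate a volume as nested lists (slice-major order)."""
    return [[[fill for _ in range(n_cols)]
             for _ in range(n_rows)]
            for _ in range(n_slices)]

def voxel_to_physical(k, i, j, spacing):
    """Map a (slice, row, col) index to physical (z, y, x) in mm,
    given per-axis voxel spacing (dz, dy, dx)."""
    dz, dy, dx = spacing
    return (k * dz, i * dy, j * dx)

vol = make_volume(4, 8, 8)
vol[2][3][5] = 117  # set one voxel's gray value
# Anisotropic spacing: 5 mm slice thickness, 1 mm in-plane pixels.
print(voxel_to_physical(2, 3, 5, (5.0, 1.0, 1.0)))  # (10.0, 3.0, 5.0)
```

The spacing tuple is what records the anisotropy discussed in the following slides.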

Volume Image Data Organization
Multidimensional biomedical images reconstructed from tomographic imaging systems are characteristically represented as sets of 2D images. A pixel contains a value that represents, and is related to, some local property of the object, and it has a defined spatial position in two dimensions relative to a coordinate system imposed on the imaged region of interest.

Pixels and an Example (figure)

3D Voxels
3D images represent the object as adjacent planes through the structure. 3D images often contain voxels that are not cubic (the volume is anisotropic), because the thickness of the 2D images is greater than the size of the pixels. Much 3D biomedical image processing prior to visualization and analysis involves creating isotropic volume images from anisotropic data, using various spatial interpolation methods. The third dimension can also be time, when 2D images are acquired repetitively over a period.

CT/MRI Image Voxels (figure)

A Volume Data Set from 3D Voxels (figure)

3D and 4D Images
Some 3D representations may consist of spatially correlated 2D images from different scanning modalities, or from different scan types within a modality (a multispectral image set). 4D biomedical images are organized as collections of 3D images over time. A 4D organization can also consist of 3D multispectral volume image sets, each representing a different modality or acquisition type. 5D data sets consist of time-varying 4D data from different modalities.

Coordinate Systems and Object Orientation
The orientation of the structure being imaged is specified in order to standardize the location and orientation of the object on a regular grid, permitting spatial correlation and comparison between volume images. One approach is the origin-order system: mathematically rigorous, but not always intuitively obvious. The other is view-order consistency: a more natural and intuitive way of conceptualizing 3D volume images, but not mathematically consistent.

Origin-Order System
In the origin-order system, the coordinate system maintains an unambiguous, single 3D origin in the volume image, together with the order of all voxels along each line of a plane. With consistency applied to a left-handed coordinate system, the view projection always proceeds from the origin outward, and volume images can be correctly reformatted into orthogonal sections. Flipping the image about one or three of the axes corrupts the integrity of the data and yields a volume image that is mirror-reversed.

View-Order Consistency
This system is not concerned with maintaining a single unambiguous 3D origin; it simply posits a relationship that should hold between the orientation of the sectional images and their order in the data file. It accepts sectional images in any of six orientations and assigns axes to the row, column, and image directions of the image data regardless of the physical section orientation. Its main advantage is that it is highly intuitive and completely insensitive to orientation.

Transverse (Axial) Slices in the XY Plane (figure)

Sagittal Slices in the ZY Plane (figure)

Coronal Slices in the ZX Plane (figure)

What is the View of this Slice? (quiz figures)

Value Representation
At each pixel or voxel, a value is measured and/or computed by the imaging system. The representation must encompass the entire range of possible measured and/or computed values and must be of sufficient accuracy to preserve the precision of the measurements. This range of representation is the dynamic range of the data, determined by both the imaging system and the numeric scale. A discrete representation of values in the computer cannot capture the entire dynamic range of the acquired signal from the imaging system.

Interpolation and Transformation
Most 3D biomedical volume images are sampled anisotropically, with the slice thickness often significantly greater than the in-plane pixel size. Visualization and measurement usually require isotropic volume data, so the data must be post-processed before it can be used properly. Problems with anisotropic data include an incorrect aspect ratio along each dimension for visualization, and aliasing artifacts due to differences in sampling frequency.

Aliasing and Anti-Aliasing (figure)

Interpolation
First-order linear interpolation of gray-level values is commonly used for 3D volume images. Values of "new" pixels between existing pixels are simply interpolated (weighted averages) of the existing pixels; in 3D this is tri-linear interpolation. The interpolation in each dimension is completely separable. Tri-linear interpolation works well for most biomedical images.
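The separable linear scheme can be sketched for the simplest case, interpolating a new slice between two existing slices (a sketch only; a full pipeline would apply the same weighting along all three axes):

```python
def interpolate_slice(slice_a, slice_b, t):
    """Linearly interpolate a new 2D slice between two existing
    slices: value = (1 - t) * a + t * b, with 0 <= t <= 1.
    Because linear interpolation is separable, repeating this
    along rows and columns gives full tri-linear interpolation."""
    return [[(1.0 - t) * a + t * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(slice_a, slice_b)]

s0 = [[0, 0], [0, 0]]
s1 = [[10, 10], [10, 10]]
mid = interpolate_slice(s0, s1, 0.5)
print(mid)  # [[5.0, 5.0], [5.0, 5.0]]
```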

Linear Interpolation (figure: discrete data vs. linear interpolation)

Polynomial & Spline Interpolation (figure: polynomial interpolation vs. spline interpolation)

Higher-Order Interpolation
When the ratio between any of the voxel dimensions in 3D exceeds approximately 5:1, tri-linear interpolation no longer works well and gives poor approximations. Higher-order interpolation, such as cubic interpolation, may be used instead, at increased computational cost. Cubic interpolation uses more than the immediately adjacent voxels and fits a cubic polynomial to estimate the intermediate values. Shape-based interpolation methods may be used when a particular (known) structure of interest is present.
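One common cubic scheme is Catmull-Rom interpolation, which fits a cubic through four neighboring samples along an axis. The slides do not name a specific cubic kernel, so this choice is an assumption; it is shown here in 1D:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic interpolation between p1 and p2, also using the two
    outer neighbors p0 and p3 (Catmull-Rom form), with t in [0, 1]."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

# On linearly varying data the cubic reproduces the linear result:
print(catmull_rom(0.0, 1.0, 2.0, 3.0, 0.5))  # 1.5
# At t = 0 it passes exactly through the sample p1:
print(catmull_rom(5.0, 7.0, 9.0, 11.0, 0.0))  # 7.0
```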

Nearest-Neighbor Interpolation
Fast; most useful if you need to see directly the voxels of the data being displayed. (Figure: thickness of the coronal slices.)

Linear Interpolation
Linear between-slice interpolation: the data is interpolated with nearest neighbor within the slice plane.

Tri-Linear Interpolation
3D linear interpolation based on the voxels neighboring each pixel of the displayed screen slice. Can blur the data.

Cubic Interpolation
Best quality images, but cubic interpolation may take a long time to update on slower machines.

Computers and Computer Storage
In computers, each switch value is referred to as a binary digit, or "bit"; 8 bits form a "byte". A computer word may be 8, 16, 32, or 64 bits. For evaluation of images by eye, comparative visual sensitivity is limited to approximately 9 bits of gray scale, but the eye can adapt to lighting conditions over about a 20-bit range. Radiologists modify the lighting conditions of film to take advantage of the entire dynamic range expressed in the film.

Numeric Values
Most computers have "natural" word sizes of 32 or 64 bits, with special capabilities for handling 8- or 16-bit values. The resolving power of the physical sensors used to capture biomedical images typically does not exceed 12 or 14 bits of information. Storing 12 bits of data in 32 bits of storage is very inefficient.
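One way to avoid that waste is to pack two 12-bit samples into three bytes. A sketch of such a packing (the bit layout below is an illustrative assumption, not a scheme the slides specify):

```python
def pack12(a, b):
    """Pack two 12-bit values (0..4095) into 3 bytes."""
    assert 0 <= a < 4096 and 0 <= b < 4096
    return bytes([a >> 4,                        # high 8 bits of a
                  ((a & 0x0F) << 4) | (b >> 8),  # low 4 of a, high 4 of b
                  b & 0xFF])                     # low 8 bits of b

def unpack12(triple):
    """Recover the two 12-bit values from 3 bytes."""
    b0, b1, b2 = triple
    return ((b0 << 4) | (b1 >> 4), ((b1 & 0x0F) << 8) | b2)

packed = pack12(4095, 7)
print(len(packed))        # 3 bytes instead of 8 (two 32-bit words)
print(unpack12(packed))   # (4095, 7)
```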

Computer Storage
Portable storage media: floppy disk (2 MB), digital versatile disk (~5,000 MB), flash disk (1 GB). Magnetic tapes are ideal for archiving and backing up computer data. Hard disks offer high speed and low cost with good reliability, and are the primary devices for operating systems, computer software, and interim data. Memory (e.g., 2 GB) and cache (e.g., 2 MB) are faster, temporary storage; they are critical for practical interactive image processing and visualization.

Storage of Biomedical Images
Biomedical images can usually be kept in slow storage media, as they are not viewed frequently. Design decisions for storage are based on the needs of the practitioners who use the images, balanced against the cost of data storage and of the software systems needed to make the images accessible, reliable, and durable. New standards such as DICOM (Digital Imaging and Communications in Medicine) are evolving that will combine the universal access of film with the flexibility of digital imagery.

Display Types and Devices
The need for 3D display is emphasized by the fundamental dilemma of studying irregularly shaped 3D objects, like the organs of the body, through tomographic images. 3D displays avoid the necessity of mentally reconstructing a representation of the structure of interest. Multidimensional display techniques for 3D tomographic image analysis can be classified into two basic types, based on what is shown to the viewer: direct display and surface display.

Direct Display
Direct display extends the concept of the 2D frame-buffer display of tomographic images into three dimensions by presenting the volume image as a 3D distribution of brightness, enabling direct visualization of unprocessed image data. Examples of 3D displays are holograms and space-filling vibrating mirrors. Most direct 3D displays use augmenting equipment, including stereo holograms, stereo viewers, head-mounted displays, etc.

Hologram Display (figure: hologram displays with overhead and underneath illumination)

Stereo Holograms (figure: left-eye image, red channel only; right-eye image, red channel removed; anaglyph, left and right images overlaid)

Surface Display
Surface display visualizes the surfaces that exist within the image volume (cognition & information). The surface is first identified, either manually or by an automated computer algorithm, and then represented either as a set of contours or as a surface locus within a volume that reflects light from an imaginary light source. Surface display shows light reflectors, while direct display shows light emitters. Direct display facilitates editing and searching.

Surface Display (figure)

Shaded Surface Display (figure)

Color Sensitivity
The human visual system is highly sensitive to color: in favorable light, humans can detect more than 1 million colors, and non-color-blind individuals can differentiate about 20,000 colors. Too many colors in an image usually cause confusion when encoding abstract information. X-ray images are usually displayed in black and white; colors may be added to make the image more vivid (as in CT images). Colors may also be used to represent maps of functions (temperature, pressure, electrical activity) and to distinguish an anatomical object from its surroundings.

Colored Brain Images (figure)

Two-Dimensional Displays
Modern computer workstations generally include a color "bitmapped" display for visual interaction with the system, typically using cathode ray tubes (CRTs) or large liquid crystal displays (LCDs). Bitmapping is a technique of dividing the surface of a 2D display screen into individually addressable pixels. The value of each pixel is stored at a memory address on the graphics card and is scanned and painted onto the screen by the display system many times per second.

Presentation Formats
For display of biomedical images, the most important characteristics of the display system are the number of bits per pixel, the resolution, and the color properties. 8-bit gray-scale or color-mapped displays may not be sufficient for images containing color gradients over large areas. For maximum color performance, a graphics display subsystem is implemented with three separate raster channels, one for each of the red, green, and blue guns of the cathode ray tube, with each color represented by an 8-bit pixel value.

Cathode Ray Tube (figure)

Color Displays on CRT (figure: shadow-mask vs. aperture-grille CRT close-ups)

Ergonomics (Work Environment)
Our ability to distinguish different intensities of light is limited, statistically, to less than two decades (factors of ten). To maximize the use of light-emitting displays, the iris should be fully open, and the ambient light in the room should be near the dark end of the display system's gray scale. A darkened room is optimum for highly detailed analysis and diagnosis.

Darkened Room (figure)

Printing Biomedical Images
Printing of digital biomedical images has characteristics similar to image display (pixels per inch and color resolution). Color printing uses color primaries complementary to those of display systems, since printed materials absorb light whereas displays emit light: the printing primaries are cyan, magenta, and yellow, versus red, green, and blue for display. Printer types include ink-jet, laser, and dye-sublimation printers.

Three-Dimensional Displays
Most high-performance graphics computers have 3D graphics accelerators: special hardware that assists in the display of 3D surface images on computer screens. These accelerators have clock (processing) speeds faster than most CPUs; they construct texture maps and inject them into the rendered scene to paint the surfaces of the graphical objects. It is also possible to use graphics accelerators for some numerical computing work, to speed up computation on image data.

Stereoscopic Visualization
The simplest and most common method of generating 3D image displays is to present a pair of appropriately offset 2D images, one for each eye. Cross-fusing means crossing one's eyes while viewing a pair of images (it takes some practice to adjust the eyes rapidly); the picture then fuses into a single perceived image in the center, between the two separate images. Cross-fused images are popular and effective, but they are not natural vision and can be strenuous.

3D Cross-Fusing Stereo View (figure)

Other 3D Stereo Devices
Rapidly alternating the presentation of the two stereoscopic images to the eyes exploits the persistence effect in the brain to blend the separate images into a single 3D image. This can be produced on computer monitors equipped with double-buffered graphics systems. Special glasses are needed to view the alternating images, with LCD eyepieces that switch on and off alternately; the switching must be synchronized with the screen.

3D Stereo Images (figure)

DICOM
DICOM (Digital Imaging and Communications in Medicine) provides a protocol for the transmission of images based on their modality, and incorporates metadata for each image within the message. Each DICOM image message provides the basic information required to attach it to a patient or an imaging procedure; the encoded information may be redundant. DICOM requires the sending and receiving computers to agree on a common basic method of communication and on a set of well-defined services (such as image storage) specified before the message is sent.

DICOM Demonstration (figure)

Metadata: The Image Header
The first part of many image files is a description of the image: the file header. The information in the header is called metadata (information about the image). Some file formats use fixed, formatted header fields to describe the image, e.g., the image type and the height and width of the image. More flexible formats, such as TIFF (Tagged Image File Format), define a set of tags, or field definitions, that may be present or absent in an image file.
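A fixed, formatted header can be sketched with Python's struct module. The magic string, field order, and field sizes below are invented purely for illustration; they do not correspond to any real image format:

```python
import struct

# Hypothetical fixed header: 2-byte magic, image type code,
# width, height, bits per pixel -- all little-endian.
HEADER_FMT = "<2sBHHB"

def write_header(img_type, width, height, bits):
    """Serialize the (made-up) fixed header fields into bytes."""
    return struct.pack(HEADER_FMT, b"IM", img_type, width, height, bits)

def read_header(blob):
    """Parse the fixed header back into a metadata dictionary."""
    magic, img_type, width, height, bits = struct.unpack_from(HEADER_FMT, blob)
    assert magic == b"IM", "not our (made-up) format"
    return {"type": img_type, "width": width, "height": height, "bits": bits}

hdr = write_header(1, 512, 512, 12)
print(read_header(hdr))  # {'type': 1, 'width': 512, 'height': 512, 'bits': 12}
```

A tagged format like TIFF instead stores a variable list of (tag, size, value) entries, which is what the DICOM file slide below describes.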

DICOM File
A saved DICOM transmission message file is a type of tagged file format: everything has a tag, a size, and a value. Image pixels are stored as the value of a pixel tag. A minimal set of standard tags is required, followed by a minimal set of tags for each modality. Many optional tagged values may be included, and the specification includes tags for proprietary data, allowing vendors or developers to encode data specific to their machines or processes.

First Part of a DICOM File (figure)

A DICOM File, Composed of a Data Header and the Image (figure)

Captured DICOM Images (figure)

Pixel/Voxel Packing
Most pixel values are stored as binary numeric values with a fixed number of bits. Monochrome data needs a single bit per pixel; gray-scale data is stored as 8, 12, or 16 bits. For images with multiple channels per pixel, such as color encoded as RGB, image file formats may use packed or planar schemes. In the packed scheme, channel values are grouped by pixel: RGB, RGB, RGB, and so on. In the planar scheme, all of the values for one channel are placed together as a 2D image: RRR..., followed by GGG..., followed by BBB...
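The two layouts above can be sketched as a conversion between a packed pixel list and three color planes (a minimal sketch using Python lists; real formats work on raw byte buffers):

```python
def packed_to_planar(pixels):
    """Convert [(r, g, b), (r, g, b), ...] (packed scheme) into
    three separate planes (RRR..., GGG..., BBB...)."""
    reds   = [p[0] for p in pixels]
    greens = [p[1] for p in pixels]
    blues  = [p[2] for p in pixels]
    return reds, greens, blues

def planar_to_packed(reds, greens, blues):
    """Re-interleave three planes back into packed RGB pixels."""
    return list(zip(reds, greens, blues))

packed = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
planes = packed_to_planar(packed)
print(planes)                               # ([255, 0, 0], [0, 255, 0], [0, 0, 255])
print(planar_to_packed(*planes) == packed)  # True
```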

Pixel Packing (figure)

Image Compression
Image compression means expressing the information contained in an image in compact form. The resulting image should retain the salient (or exact) features necessary for the purpose for which it was captured. For legal purposes, it is often necessary to ensure that decompression restores the image to the same values on which any diagnosis was based. There are two main types of image compression algorithms: lossless and lossy.

Lossless Image Compression
Lossless compression perfectly recovers the original image after decompression. It works by exploiting redundancy in the image, e.g., Huffman encoding, run-length encoding, etc. Most image data has pixel values, or groups of pixels, that repeat in a predictable pattern. Lossless compression typically achieves ratios of 2:1 or 3:1 on medical images.

Lossless Compression (figure)

Run-Length Encoding
The image pixels are scanned by rows, and a count of successive identical pixel values, along with one copy of the value, is sent to the output. Very effective for large areas of constant color; not good for images containing many smooth gradations of gray or color values. An example: an original of 16 characters compresses to 12 characters.
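The scheme above can be sketched in a few lines (a minimal illustration; practical codecs pack the counts and values into bytes):

```python
def rle_encode(pixels):
    """Scan a row of pixel values, emitting (count, value) pairs
    for each run of identical values."""
    runs = []
    for v in pixels:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1
        else:
            runs.append([1, v])
    return [(c, v) for c, v in runs]

def rle_decode(runs):
    """Expand (count, value) pairs back into the original row."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 0, 0, 7, 7, 9]   # a long constant run compresses well
runs = rle_encode(row)
print(runs)                     # [(6, 0), (2, 7), (1, 9)]
print(rle_decode(runs) == row)  # True: lossless
```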

Run-Length Encoding (figure examples)

Huffman and Lempel-Ziv Coding
These schemes search for patterns in the data that can be represented by a smaller number of bits; the patterns are mapped to representative codes and stored according to their probabilities. An example: an original of 17 characters, with repeated patterns mapped to the symbols A and B, compresses to 13 characters.
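The DEFLATE algorithm in Python's standard zlib module combines exactly these two ideas (LZ77 pattern matching followed by Huffman coding), so it serves as a quick demonstration on repetitive data:

```python
import zlib

# Highly repetitive "image row" data, of the kind typical in
# medical images with large constant-valued regions.
data = bytes([0] * 200 + [120, 121, 122] * 20 + [0] * 200)

compressed = zlib.compress(data)    # LZ77 + Huffman coding (DEFLATE)
restored = zlib.decompress(compressed)

print(len(data), "->", len(compressed))  # substantial size reduction
print(restored == data)                  # True: lossless
```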

Lossy Image Compression
Lossy compression changes the image values, but attempts to do so in a way that is difficult to detect or has negligible effect on the purpose for which the image will be used. It can achieve 10:1 to 80:1 compression ratios before the change in the image becomes detectable or deleterious. Typical techniques are JPEG and wavelets. Lossy compression is most often used to reduce the bandwidth required to transmit image data over the Internet.

Lossy Compression (figure)

JPEG
The JPEG standard was developed for still images and performs best on photographs of real-world scenes. The first pass transforms the color space of the image to a luminance-dominated color space, then downsamples the chrominance and partitions the image into 8×8 blocks of pixels. A discrete cosine transform (DCT) is applied to each block; the resulting DCT values are divided by quantization coefficients and rounded to integers. The reduced data is then entropy-encoded, e.g., with Huffman coding.
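The DCT and quantization steps can be sketched in pure Python on a small block. A 4×4 block and a uniform quantization step are used here purely for illustration; real JPEG uses 8×8 blocks and standard quantization tables:

```python
import math

def dct_1d(x):
    """Unnormalized 1D DCT-II."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * u)
                for i in range(n))
            for u in range(n)]

def dct_2d(block):
    """Separable 2D DCT: transform the rows, then the columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d([rows[i][j] for i in range(len(rows))])
            for j in range(len(rows[0]))]
    return [[cols[j][i] for j in range(len(cols))]   # transpose back
            for i in range(len(cols[0]))]

def quantize(coeffs, q):
    """Divide by a quantization step and round -- the lossy step."""
    return [[round(c / q) for c in row] for row in coeffs]

# A flat 4x4 block: all energy lands in the DC coefficient.
block = [[5.0] * 4 for _ in range(4)]
coeffs = dct_2d(block)
print(round(coeffs[0][0], 6))      # 80.0  (= 4 * 4 * 5)
print(quantize(coeffs, 10)[0][0])  # 8; all other coefficients quantize to 0
```

Rounding the many near-zero coefficients to exactly zero is what creates the long runs that the final entropy-coding stage compresses so well.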

JPEG Lossy Compression (figure: original, 43 K; medium compression, 13 K; over-compressed, 3.5 K; shown with 256 colors in Netscape)

Wavelets
Wavelet transform coefficients are partially localized in both space and frequency, and form a multiscale representation of the image with a constant scale factor, leading to localized frequency subbands of equal width on a logarithmic scale. Wavelet compression exploits the fact that real-world images tend to have internal morphological consistency: locally smooth luminance values, oriented edge continuations, and higher-order correlations such as textures. For CT images the compression ratio can be 80:1; for chest X-rays it may be around 10:1.
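The simplest wavelet, the Haar wavelet, shows the space-and-frequency localization idea in a few lines. This is a one-level 1D transform for illustration; image codecs apply multi-level 2D versions:

```python
def haar_forward(signal):
    """One level of the 1D Haar transform: pairwise averages
    (low-frequency approximation) and pairwise differences
    (spatially localized detail). Length must be even."""
    approx = [(signal[i] + signal[i + 1]) / 2
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2
              for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the signal from the two subbands."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

x = [9, 7, 3, 5]
approx, detail = haar_forward(x)
print(approx, detail)  # [8.0, 4.0] [1.0, -1.0]
print(haar_inverse(approx, detail) == [9.0, 7.0, 3.0, 5.0])  # True
```

Compression comes from quantizing or discarding the small detail coefficients, which for smooth image regions are close to zero.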

Example of Wavelet Compression (figure: original, JPEG, and wavelet-compressed images with byte counts)

Wavelet Compression (figure)

Image Databases
The basic requirements for an image database are a means to store the images efficiently and to retrieve them using an indexing method. Hierarchical file systems combine hard disk drives with optical or magnetic tape media and, through a file-system database, present what appears to be a monolithic set of files to the users. Frequent requests for a large number of files may swamp such a system.

Where to Store the Images
Images can be stored "inside" or "outside" the database. Much smaller metadata about each image can be stored in the database. A small-scale reference image, called a "thumbnail", extracted from the image can be kept with the metadata; it is several orders of magnitude smaller than the original image. An index is needed to link the metadata with the original image.
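A thumbnail can be produced by averaging blocks of pixels, sketched here for a 2D gray-scale image (block averaging is one simple choice; production systems use better resampling filters):

```python
def thumbnail(image, factor):
    """Shrink a 2D gray-scale image by averaging factor x factor
    blocks -- a simple way to build the small reference image
    stored alongside the metadata. Dimensions must be divisible
    by factor."""
    h, w = len(image), len(image[0])
    return [[sum(image[i + di][j + dj]
                 for di in range(factor)
                 for dj in range(factor)) / factor ** 2
             for j in range(0, w, factor)]
            for i in range(0, h, factor)]

img = [[10, 10, 90, 90],
       [10, 10, 90, 90],
       [40, 40, 40, 40],
       [40, 40, 40, 40]]
print(thumbnail(img, 2))  # [[10.0, 90.0], [40.0, 40.0]]
```

Shrinking each dimension by a factor f reduces storage by f squared, which is how a thumbnail ends up orders of magnitude smaller than the original.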

Thumbnail Example (figure)

DICOM Database
A DICOM database provides a hierarchical tree structure with the patient at the top of the tree. For each patient there can be several studies, each being an image examination by a given modality. Within each study there can be several series of images, where each series can represent a different viewpoint of the patient within the study. Each series may be a single image or a set of images. DICOM inherently organizes images in the way most suitable for use in a treatment setting.

DICOM Data Hierarchy (figure)

DICOM Database (figures I, II, III)