Patient information extraction in digitized X-ray imagery Hsien-Huang P. Wu Department of Electrical Engineering, National Yunlin University of Science and Technology.


Patient information extraction in digitized X-ray imagery Hsien-Huang P. Wu Department of Electrical Engineering, National Yunlin University of Science and Technology, 123 University Road, Section 3, Yunlin, Touliu, Taiwan, ROC Received 26 October 2001; received in revised form 21 August 2003; accepted 15 September 2003

Abstract This paper presents a new method to extract the patient information number (PIN) field automatically from a film-scanned image using image analysis techniques. The extracted PIN is linked to the Radiology Information System (RIS) or Hospital Information System (HIS), so that images scanned from film can be filed into the database automatically. We believe the success of this technique will benefit the development of the Picture Archiving and Communication System (PACS) and teleradiology.

Introduction Two disadvantages – First, before the film is printed and scanned, the patient's information (name, ID, etc.) has already been recorded in the Hospital Information System or Radiology Information System database when the radiograph was taken. – Second, the user must enter the patient's information immediately before or after each scan in order to file the digitized image; batch scanning of the films is therefore impossible.

Image acquisition

Patient information block search and extraction The patient information label is almost always attached to a corner of the film. The algorithm works because the human body produces complicated texture, while only the label area contains many straight field lines.

Label extraction Our next step is to remove all these extra details and leave only the area of the label. The main technique used here is horizontal and vertical projection: we accumulate (project) all the white pixels on each row of the image to form a vector P_H, and all the white pixels on each column to form a vector P_V. Given an M x N (M pixels by N lines) binary image f with f(x, y) = 1 for white pixels and f(x, y) = 0 for black pixels, P_H[y] is the sum of f(x, y) over all x in a row, and P_V[x] is the sum of f(x, y) over all y in a column.
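The row/column projection described above can be sketched in a few lines of numpy (function name and toy image are illustrative, not from the paper):

```python
import numpy as np

def projections(binary):
    """Return (P_H, P_V): white-pixel counts per row and per column
    of a binary image (1 = white, 0 = black)."""
    P_H = binary.sum(axis=1)  # one entry per row
    P_V = binary.sum(axis=0)  # one entry per column
    return P_H, P_V

img = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])
P_H, P_V = projections(img)
print(P_H.tolist())  # [2, 1, 0]
print(P_V.tolist())  # [0, 1, 2]
```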

Select two thresholds, T_V and T_H: any position x for which P_V[x] < T_V is set to 0, and any position y for which P_H[y] < T_H is set to 0. After this thresholding process, all the significant line segments are stored as nonzero elements in the P_V and P_H vectors, and these nonzero elements are used to identify the field-separating lines: the left/right boundary corresponds to the first/last nonzero element of P_V, and the top/bottom boundary corresponds to the first/last nonzero element of P_H.
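A minimal sketch of the thresholding and boundary-finding step (names and the toy projection vectors are illustrative):

```python
import numpy as np

def label_bounds(P_H, P_V, T_H, T_V):
    """Zero out sub-threshold projection entries, then take the first and
    last nonzero elements as the label's bounding box."""
    P_H = np.where(P_H >= T_H, P_H, 0)
    P_V = np.where(P_V >= T_V, P_V, 0)
    rows = np.nonzero(P_H)[0]
    cols = np.nonzero(P_V)[0]
    top, bottom = rows[0], rows[-1]    # first/last nonzero of P_H
    left, right = cols[0], cols[-1]    # first/last nonzero of P_V
    return left, right, top, bottom
```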

In the region extracted above, there is an extra white zone surrounding the label. To remove it: – (1) Threshold the extracted image to obtain a binary image. – (2) From the center of the binary image, search to the left, right, top, and bottom for continuous white pixels. – (3) If there are more than three consecutive white pixels, mark the one nearest the center as the new position of the label boundary.
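The center-outward search in steps (2)–(3) can be sketched as follows; this is a hedged reading of the slide (scanning only the center row and column), not necessarily the paper's exact implementation:

```python
import numpy as np

def trim_white_border(binary):
    """From the image centre, scan outward along the centre row/column and
    place each new boundary at the run element nearest the centre once a
    run of more than three consecutive white pixels (1 = white) is found."""
    h, w = binary.shape
    cy, cx = h // 2, w // 2

    def first_run(values):
        # values: pixels ordered from the centre outward
        run_start, run_len = None, 0
        for i, v in enumerate(values):
            if v == 1:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len > 3:          # more than three consecutive whites
                    return run_start     # element nearest the centre
            else:
                run_len = 0
        return len(values) - 1           # no run found: keep old boundary

    row, col = binary[cy, :], binary[:, cx]
    left = cx - first_run(row[cx::-1])
    right = cx + first_run(row[cx:])
    top = cy - first_run(col[cy::-1])
    bottom = cy + first_run(col[cy:])
    return left, right, top, bottom
```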

Label orientation correction and PIN field extraction Because the correct orientation is necessary to identify the PIN field, further processing is needed. A blank label of the commonly used format is digitized, converted into a binary image, and saved as a template. One way to recognize the orientation is to correlate the scanned image with the template image in eight different known orientations; of these eight matches, the one with the highest matching score indicates the real orientation of the input image.
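The eight known orientations are the four 90°-rotations with and without a mirror flip. A sketch of the exhaustive match (the pixel-agreement score here is illustrative; the paper uses correlation):

```python
import numpy as np

def best_orientation(label, template):
    """Try the label against the template under 4 rotations x optional
    mirror and return the transform with the highest matching score."""
    def score(a, b):
        return float((a == b).mean())    # fraction of agreeing binary pixels
    best = None
    for mirror in (False, True):
        img = np.fliplr(label) if mirror else label
        for k in range(4):               # 0, 90, 180, 270 degrees
            rot = np.rot90(img, k)
            if rot.shape != template.shape:
                continue                 # only compare compatible shapes
            s = score(rot, template)
            if best is None or s > best[0]:
                best = (s, mirror, 90 * k)
    return best                          # (score, mirrored?, rotation)
```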

To give the input image and the template image the same resolution, the shorter side of the extracted label is normalized to the same length as that of the template image. The size of the label template is X_t x Y_t and the label extracted from the input image is X_in x Y_in, where Y represents the line number in the image and X is the pixel number per line.
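The shorter-side normalization can be sketched as a uniform rescale (nearest-neighbour resampling here is an assumption; the paper does not specify the interpolation):

```python
import numpy as np

def normalize_scale(label, template_shape):
    """Scale the label so its shorter side equals the template's shorter
    side, preserving aspect ratio (nearest-neighbour resampling)."""
    h, w = label.shape
    scale = min(template_shape) / min(h, w)
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    yi = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    xi = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    return label[np.ix_(yi, xi)]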

Rotation The 90°-rotated condition is checked first. To avoid interference from the written content, only the first field of the label is used for matching. Projection vectors are then used to check whether the scanned image needs a left/right flip correction, and to examine whether the scanned label has gone through a 180°-rotation.

The correlation coefficients for the vertical and horizontal directions lie in the range from -1 to 1. In the extracted label image, if a 180°-rotation does exist, then the first field will have become the last field. The first-field and last-field subimages are therefore projected horizontally to obtain two vectors, and correlation coefficients are computed between each of them and the template's first-field projection. If the last field matches the template's first field better, the label image was scanned with a 180°-rotation and correction is applied; otherwise, the image is left unchanged.
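A sketch of the 180°-rotation check using horizontal projections and Pearson correlation (the field sub-images and function names are illustrative):

```python
import numpy as np

def needs_180_flip(first_field, last_field, template_field):
    """Compare the first- and last-field horizontal projections against
    the template's first-field projection; if the (rotated) last field
    matches better, the label was scanned upside down."""
    def hproj(img):
        return img.sum(axis=1).astype(float)   # white pixels per row
    def corr(a, b):
        return float(np.corrcoef(a, b)[0, 1])  # in [-1, 1]
    r_first = corr(hproj(first_field), hproj(template_field))
    # if the label is upside down, the 180-degree-rotated last field
    # should line up with the template's first field
    r_last = corr(hproj(np.rot90(last_field, 2)), hproj(template_field))
    return r_last > r_first
```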

Left–right mirror check A left–right-mirrored version of the input image is created and projected to obtain a new vector. Correlation coefficients are computed between this vector and the template's projection, and the correlation for the unmirrored image is also computed for comparison. If the mirrored version matches better, then the label image was scanned in a left–right-mirrored condition and correction by a left–right flip is needed; otherwise, the image is left unchanged.
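The mirror test follows the same correlation pattern; the sketch below assumes vertical projections (a left–right flip reverses the column order, so the vertical projection is the natural discriminator):

```python
import numpy as np

def needs_lr_flip(field, template_field):
    """Compare the field's vertical projection, and that of its mirror,
    against the template's vertical projection."""
    def vproj(img):
        return img.sum(axis=0).astype(float)   # white pixels per column
    def corr(a, b):
        return float(np.corrcoef(a, b)[0, 1])
    r_normal = corr(vproj(field), vproj(template_field))
    r_mirror = corr(vproj(np.fliplr(field)), vproj(template_field))
    return r_mirror > r_normal
```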

Unconstrained handwritten numerals A multilayer cluster neural network (MCNN) is used. The MCNN structure achieves 97.1% recognition. The MCNN operates in two phases: the training phase and the recognition phase.

Preprocessing (1) Conversion of the input numeral to a binary image. (2) Removal of spurious features by morphological filtering. (3) Vertical and horizontal spatial histograms are used to close in on the region of the numeral; the region is cut out and resized to 16-by-16.
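The three preprocessing steps can be sketched as below. The isolated-pixel removal stands in for the paper's morphological filtering, and nearest-neighbour resampling is assumed; both are simplifications:

```python
import numpy as np

def preprocess_numeral(gray, thresh=128):
    """Binarize, drop isolated (spurious) pixels, crop to the numeral via
    spatial histograms, and resize to 16x16."""
    binary = (gray < thresh).astype(np.uint8)       # dark ink -> 1

    # crude morphological cleanup: remove pixels with no 8-neighbours
    padded = np.pad(binary, 1)
    neigh = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - padded
    binary = binary * (neigh[1:-1, 1:-1] > 0)

    # spatial histograms locate the numeral's bounding box
    rows = np.nonzero(binary.sum(axis=1))[0]
    cols = np.nonzero(binary.sum(axis=0))[0]
    crop = binary[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

    # nearest-neighbour resize to 16x16
    h, w = crop.shape
    yi = (np.arange(16) * h) // 16
    xi = (np.arange(16) * w) // 16
    return crop[np.ix_(yi, xi)]
```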

Feature extractor A Kirsch edge detector is used in the feature extractor to detect directional line segments and generate feature maps. Four directional feature maps, for horizontal (H), vertical (V), right-diagonal (R), and left-diagonal (L) directions, are created using eight Kirsch masks.
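The eight Kirsch masks are rotations of one base mask; a sketch of generating them and forming four directional maps follows. Pairing each mask with its 180°-opposite and which diagonal is labelled R vs. L are assumptions about the paper's grouping:

```python
import numpy as np

def kirsch_masks():
    """Generate the eight Kirsch masks by rotating the outer ring of the
    base mask in 45-degree steps."""
    ring = [5, 5, 5, -3, -3, -3, -3, -3]                 # clockwise values
    pos = [(0, 0), (0, 1), (0, 2), (1, 2),
           (2, 2), (2, 1), (2, 0), (1, 0)]               # clockwise positions
    masks = []
    for k in range(8):
        m = np.zeros((3, 3), dtype=int)
        for i, (r, c) in enumerate(pos):
            m[r, c] = ring[(i - k) % 8]
        masks.append(m)
    return masks

def directional_maps(img):
    """Form four directional feature maps, keeping for each direction the
    larger response of the two opposing Kirsch masks."""
    masks = kirsch_masks()
    H, W = img.shape
    resp = np.zeros((8, H - 2, W - 2))
    for k, m in enumerate(masks):
        for y in range(H - 2):
            for x in range(W - 2):
                resp[k, y, x] = (img[y:y + 3, x:x + 3] * m).sum()
    # masks k and k+4 respond to the same direction with opposite polarity
    return {d: np.maximum(resp[k], resp[k + 4])
            for d, k in zip("HRVL", range(4))}
```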

Classifier The classifier contains a three-layer cluster neural network with five independent sub-networks.

Results In order to evaluate the PIN field extraction algorithm, two formats of labels acquired from two different hospitals were tested. Conditions that cause difficulty: (1) image blurring; (2) too much tilting; (3) labels not positioned at the corner.