Introduction to Deinterlacing
by Mark Korhonen
Copyright © Magnum Semiconductor, Unpublished

Overview: Introduction to Deinterlacing

Background
– Progressive video
– Interlaced video
– Deinterlacing
Basic Deinterlacing Algorithms
– Weave
– Vertical Interpolation (aka Bob)
Advanced Deinterlacing Algorithms
– Diagonal Interpolation
– Cadence Detection
– Motion Adaptive Deinterlacing (MADI)
– Motion Compensated Deinterlacing (MCDI)
Deinterlacing Applications

Background: Progressive Video

A complete video frame is displayed at regular intervals
Viewed on computers and digital televisions
Example resolutions and frame rates:
– SD: 720x480p30, 720x576p25
– HD: 1280x720p60, 1920x1080p30
Sources: film (movies), animation, progressive cameras
Example: 30p = 30 frames/s

Background: Interlaced Video

Frames of video are sampled at two time intervals
– Even rows sampled at: 2t
– Odd rows sampled at: 2t + 1
Terminology:
– Top field: even rows in the frame
– Bottom field: odd rows in the frame
– Field polarity: indicates whether a field is a top field or a bottom field
Example: 60i = 60 fields/s
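
The field structure described above can be illustrated with a short sketch. This is not from the original slides; it is a minimal numpy example (greyscale frames assumed, function names are illustrative) that splits a frame into its two fields and weaves them back together:

    import numpy as np

    def split_fields(frame: np.ndarray):
        """Split a frame into its top (even rows) and bottom (odd rows) fields."""
        top = frame[0::2, :]     # even rows -> top field
        bottom = frame[1::2, :]  # odd rows -> bottom field
        return top, bottom

    def weave_fields(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
        """Interleave a top field and a bottom field back into a full frame."""
        height = top.shape[0] + bottom.shape[0]
        frame = np.empty((height, top.shape[1]), dtype=top.dtype)
        frame[0::2, :] = top
        frame[1::2, :] = bottom
        return frame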

Background: Interlaced Video

Why interlaced video exists
Double the frame rate with the same bandwidth = smoother motion
Original television formats are all interlaced
– NTSC = 720x480i60
– PAL = 720x576i50
1920x1080i60 exists because:
– Processing requirements for 1920x1080p60 are very high
– Motion is smoother than 1920x1080p30
– More detail than 1280x720p60

Background: Deinterlacing

Deinterlacing: convert interlaced video to progressive video
– Input: X fields/s (e.g. 60i)
– Output: X frames/s (e.g. 60p)
This process generates X "missing" fields/s

Background: Deinterlacing

Fields typically used to generate the missing field:
The current field is the opposite polarity of the field that needs to be generated, but is displayed at the same time
– E.g. output = bottom field, curr = top field
The previous and next fields are the same polarity as the field that needs to be generated
– E.g. output = bottom field, prev & next = bottom fields

Basic Deinterlacing: Weave
(Picture source: doom9.org)

Weave: use "prev" or "next" as the missing field
For still images and progressive content, there is no missing data
Very severe artifacts for moving interlaced content

Basic Deinterlacing: Weave Extension

If stationary, average the previous and next fields together as the missing field
– Noise reduction (averaging two identical images reduces noise)
– Handles fades better if the luminance is changing every field
Output[x,y] = m · prev[x,y] + (1 - m) · next[x,y]
– m is typically 0.0, 0.5 or 1.0
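
A minimal sketch of this weave extension (not from the original slides), assuming 8-bit greyscale numpy fields of the same polarity; the blend weight m follows the formula above:

    import numpy as np

    def weave_extension(prev_field: np.ndarray, next_field: np.ndarray, m: float = 0.5) -> np.ndarray:
        """Generate the missing field as m * prev + (1 - m) * next.

        m = 0.5 averages the two fields, which reduces noise on stationary content;
        m = 0.0 or 1.0 falls back to a plain weave from one field.
        """
        p = prev_field.astype(np.float32)
        n = next_field.astype(np.float32)
        out = m * p + (1.0 - m) * n
        return np.clip(out, 0, 255).astype(prev_field.dtype)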

Basic Deinterlacing: Bob

Bob (aka vertical interpolation)
Generate the opposite polarity field by interpolating along vertical columns in the current field
Implementations:
Line doubling – just use the current field
– e.g. Output[x,y] = curr[x,y]
2-tap FIR filter – average the line above and below
– e.g. Output[x,y] = 0.5 · curr[x,y] + 0.5 · curr[x,y+1]
8-tap FIR filter – better frequency response
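
The two simplest variants above can be sketched as follows (not from the original slides; assumes an 8-bit greyscale field stored as a numpy array, with y indexing lines within the field). The 8-tap filter is omitted:

    import numpy as np

    def bob_line_double(field: np.ndarray, top_field: bool) -> np.ndarray:
        """Bob by line doubling: each missing line is a copy of the nearest field line."""
        h, w = field.shape
        frame = np.empty((2 * h, w), dtype=field.dtype)
        if top_field:            # the field supplies the even output rows
            frame[0::2] = field
            frame[1::2] = field  # missing odd rows copied from the line above
        else:                    # the field supplies the odd output rows
            frame[1::2] = field
            frame[0::2] = field  # missing even rows copied from the line below
        return frame

    def bob_2tap(field: np.ndarray, top_field: bool) -> np.ndarray:
        """Bob with a 2-tap FIR filter: each missing line is the average of its two neighbours."""
        f = field.astype(np.float32)
        avg = 0.5 * f + 0.5 * np.roll(f, -1, axis=0)  # 0.5*curr[y] + 0.5*curr[y+1]
        avg[-1] = f[-1]                               # replicate at the bottom edge
        h, w = field.shape
        frame = np.empty((2 * h, w), dtype=np.float32)
        if top_field:
            frame[0::2] = f
            frame[1::2] = avg
        else:
            frame[1::2] = f
            frame[0::2] = np.vstack([f[:1], avg[:-1]])  # top edge replicated
        return np.clip(frame, 0, 255).astype(field.dtype)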

Basic Deinterlacing: Bob

Bob Video Quality
Looks good for most video content
– Still content without a lot of detail
– Moving content (hard to see detail on moving objects)
Interpolation fails if there is a lot of detail – causes flickering

Advanced Deinterlacing: Diagonal Interpolation
(Picture source: Computer Desktop Encyclopedia)

Problem: Bob isn't ideal for diagonal edges
Solution: Apply the FIR filter along a diagonal

Advanced Deinterlacing: Diagonal Interpolation

Diagonal Edge Detection Algorithm
Try a bunch of different angles and see which is best
– Pattern recognition problem: classify edges as a supported angle
Determine which angle is most correlated at each pel
– We can assume that the angle of the edge is wide
  » Allows using neighbouring pels to improve accuracy
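
One common way to implement this is edge-based line averaging (ELA): for each missing pel, compare the line above and the line below along a few candidate directions and interpolate along the most correlated one. This sketch is not from the original slides and may differ from the author's method; the offsets and names are illustrative:

    import numpy as np

    def ela_interpolate(above: np.ndarray, below: np.ndarray, offsets=(-2, -1, 0, 1, 2)) -> np.ndarray:
        """Interpolate one missing line from the field lines above and below it.

        For each pel, the candidate direction with the smallest absolute difference
        between the two lines is treated as the edge direction, and the output is
        the average of the two samples along that direction.
        """
        w = above.shape[0]
        a = above.astype(np.float32)
        b = below.astype(np.float32)
        out = np.empty(w, dtype=np.float32)
        for x in range(w):
            best_diff, best_val = None, None
            for d in offsets:
                xa, xb = x + d, x - d        # sample along the candidate diagonal
                if 0 <= xa < w and 0 <= xb < w:
                    diff = abs(a[xa] - b[xb])
                    if best_diff is None or diff < best_diff:
                        best_diff, best_val = diff, 0.5 * (a[xa] + b[xb])
            out[x] = best_val
        return out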

Advanced Deinterlacing: Cadence Detection

Observation: If interlaced video was generated from a progressive source, we can safely weave it with no artifacts
Sample cadences (e.g. 3:2, 2:2)

Advanced Deinterlacing: Cadence Detection

Detection Area
The size of the cadence area can be variable
– e.g. entire field, 16x16 block, every pel
Detecting entire fields is sufficient for ~99% of video
Smaller areas are only needed if the video is a mixed source
– moving interlaced text over progressive video (e.g. weather warnings on a TV movie)
– different frame rates of progressive video edited together (e.g. the source of the border was 30p, but the source of the contents was 24p)

Advanced Deinterlacing: Cadence Detection

Cadence Algorithm 1: detect regularly repeated fields
For 3:2, every 5th field is a repeat
– SAD will be very low every 5th field
– This approach is called Inverse Telecine
Very robust for a lot of content
Doesn't work for 2:2 (no repeats), or for changing cadences (e.g. slow-motion replays, edited video)
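
A sketch of this repeat-field check (not from the original slides); the SAD threshold, the comparison against the same-polarity field two positions earlier, and all names are assumptions for illustration:

    import numpy as np

    def field_sad(f0: np.ndarray, f1: np.ndarray) -> float:
        """Sum of absolute differences between two fields of the same polarity."""
        return float(np.abs(f0.astype(np.int32) - f1.astype(np.int32)).sum())

    def repeat_field_positions(fields, threshold: float):
        """Indices where a field repeats the same-polarity field two positions earlier."""
        return [i for i in range(2, len(fields))
                if field_sad(fields[i], fields[i - 2]) < threshold]

    def looks_like_32_cadence(repeats) -> bool:
        """For a 3:2 cadence the repeats land on every 5th field; a detector can lock
        onto that period and then weave with confidence (inverse telecine)."""
        return len(repeats) >= 2 and all((b - a) % 5 == 0
                                         for a, b in zip(repeats, repeats[1:]))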

Advanced Deinterlacing: Cadence Detection

Cadence Algorithm 2: detect weaving artifacts
1) Weave curr with prev, 2) Weave curr with next
Determine which weave has fewer weaving artifacts
– Pattern recognition problem: classify into three states
  » Progressive – weave with previous
  » Progressive – weave with next
  » Interlaced – perform other deinterlacing
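
One way to score weaving artifacts is a per-pel "comb" measure: count pels whose row differs from both vertical neighbours in the same direction, which is the pattern produced by weaving mismatched fields. This sketch is not from the original slides; the metric, threshold and artifact limit are illustrative assumptions:

    import numpy as np

    def comb_score(frame: np.ndarray, threshold: int = 20) -> int:
        """Count pels showing a comb pattern (row differs from both neighbours the same way)."""
        f = frame.astype(np.int32)
        up = f[1:-1] - f[:-2]
        down = f[1:-1] - f[2:]
        return int((up * down > threshold * threshold).sum())

    def classify_cadence(curr, prev, next_, weave, artifact_limit: int = 1000) -> str:
        """Weave curr with prev and with next, then classify into the three states.

        `weave(a, b)` interleaves two opposite-polarity fields into a frame
        (e.g. the weave_fields sketch shown earlier).
        """
        score_prev = comb_score(weave(curr, prev))
        score_next = comb_score(weave(curr, next_))
        if min(score_prev, score_next) > artifact_limit:
            return "interlaced"
        return "progressive-prev" if score_prev <= score_next else "progressive-next"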

Advanced Deinterlacing: Cadence Detection

Cadence Algorithm 2: detect weaving artifacts
Tricky for stationary content and at scene changes
– Fairly easy to compensate for
Tricky for vertical motion of detailed video
– e.g. a pan across venetian blinds
Can be tricky for video that looks like it already contains weaving artifacts
– Artifacts introduced by video compression can look like weaving artifacts
– Certain textures

Advanced Deinterlacing: Motion Adaptive Deinterlacing (MADI)

Idea: weave stationary areas, interpolate moving areas
Stationary detection = pattern recognition problem
The size of the stationary area can vary (e.g. entire field, 16x16 block, every pel)
– Smaller stationary areas = less flickering but more computation
Possible implementations:
– SAD of prev and next
  » Watch out for periodic motion
– min SAD of the last X fields
  » Watch out for how long it takes stationary regions to be detected

Advanced Deinterlacing: Motion Adaptive Deinterlacing (MADI)

Basic algorithm:
Inputs:
– Weave – generated from the previous and/or next field
– Inter – generated from the current field (diagonal interpolation)
Output[x,y] = K · Weave[x,y] + (1 - K) · Inter[x,y]
– K ~= 1 for stationary/progressive areas (weave)
– K ~= 0 for moving areas (interpolation)
Notes:
– Fades may need special handling, e.g. bias more towards interpolation
– x and y advance through regions in the video: could be every pel, 16x16 blocks, or the entire field
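
A per-pel sketch of this blend (not from the original slides), deriving K from the SAD of the previous and next fields as suggested on the previous slide; the threshold and the linear mapping to K are illustrative assumptions:

    import numpy as np

    def madi_missing_field(prev_field: np.ndarray, next_field: np.ndarray,
                           inter: np.ndarray, sad_threshold: float = 16.0) -> np.ndarray:
        """Output = K * Weave + (1 - K) * Inter, with K derived from a per-pel motion measure.

        Weave is the average of the previous and next same-polarity fields;
        Inter is a spatially interpolated field (e.g. Bob or diagonal interpolation).
        """
        p = prev_field.astype(np.float32)
        n = next_field.astype(np.float32)
        weave = 0.5 * (p + n)
        sad = np.abs(p - n)                                  # per-pel motion measure
        k = np.clip(1.0 - sad / sad_threshold, 0.0, 1.0)     # ~1 stationary, ~0 moving
        out = k * weave + (1.0 - k) * inter.astype(np.float32)
        return np.clip(out, 0, 255).astype(prev_field.dtype)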

Advanced Deinterlacing: Motion Compensated Deinterlacing (MCDI)

Idea: If we can't weave a particular region, could we weave a motion compensated version of this region from curr, prev or next?
Comments:
– Potential for preserving all of the detail
– Very difficult to do correctly
  » Especially rotations, zooms, morphing, lighting changes, etc.
  » Objects can get covered/uncovered
  » Even small errors can look very bad: weaving artifacts, ghosting around edges, weird motion, etc.
– Motion compensation should be done for every pel
  » Very computationally demanding

Advanced Deinterlacing: Motion Compensated Deinterlacing (MCDI)

Basic Algorithm:
Input fields:
– MC_curr[x,y] = curr[x + dx_c, y + dy_c]
– MC_prev[x,y] = prev[x + dx_p, y + dy_p]
– MC_next[x,y] = next[x + dx_n, y + dy_n]
– MADI[x,y] = pel generated using motion adaptive deinterlacing
Output: a weighted blend of the motion compensated fields (weights p, q, r) and the MADI result
– p + q + r ~= 1 if high confidence in motion compensation
– p + q + r ~= 0 if low confidence in motion compensation
Confidence in the motion compensation can be based on:
– The smoothness of the motion vector field
– Estimated SAD (the true SAD is unknown because the field is missing)
– Consistency of motion from field to field
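
A plausible reconstruction of the output blend, consistent with the weights described above (p, q, r for the motion compensated fields, with the remainder falling back to MADI), is sketched below. This is an assumption, not the slide's exact equation:

    import numpy as np

    def mcdi_blend(mc_curr: np.ndarray, mc_prev: np.ndarray, mc_next: np.ndarray,
                   madi: np.ndarray, p, q, r) -> np.ndarray:
        """Output = p*MC_curr + q*MC_prev + r*MC_next + (1 - p - q - r)*MADI.

        p, q, r may be scalars or per-pel arrays; p + q + r ~= 1 when motion
        compensation is trusted, ~= 0 when it is not (MADI then dominates).
        """
        w_madi = 1.0 - (p + q + r)
        out = (p * mc_curr.astype(np.float32)
               + q * mc_prev.astype(np.float32)
               + r * mc_next.astype(np.float32)
               + w_madi * madi.astype(np.float32))
        return np.clip(out, 0, 255).astype(madi.dtype)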

Deinterlacing Applications

Frame rate conversion from interlaced to interlaced
No deinterlacing = motion judder
– Need to drop a pair of fields – increases motion judder
Deinterlacing = smoother motion
– Requires excellent quality deinterlacing
– Deinterlaced fields will be deinterlaced twice

Deinterlacing Applications

Upscaling interlaced video requires deinterlacing
– Preserves more detail
– Minimizes aliasing

Deinterlacing Applications

If the output frame rate is low, flicker can be severe
If the input frame rate is high enough:
– In the output, pick one polarity that only uses fields from the input
– The other polarity will always be generated (e.g. via interpolation)
This works because flicker is caused by rows alternating between a) original and b) interpolated – we have removed the alternation

Appendices

Video Resolutions

Common video resolutions:
– SD: 720x480 (NTSC), 720x576 (PAL)
– HD: 1280x720, 1920x1080
– Sub-SD: VGA, QVGA, CIF, QCIF, etc.
Definitions:
– pel = a 1x1 area of video data
– pixel = a 1x1 area on a video display
– E.g. 720x480 video is 720 pels wide, but may be displayed on a television that is 1920 pixels wide

Color Formats

RGB vs. YUV
– Y = greyscale; making deinterlacing decisions primarily based on Y is simpler (but not as accurate)
– Data in U and V isn't as critical, so it is typically downsampled
  » 4:2:0 vs 4:2:2 vs 4:4:4
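
The chroma downsampling mentioned above can be sketched as follows; this example is not from the slides, and it shows 4:4:4 to 4:2:0 conversion by averaging 2x2 blocks of the U and V planes (even plane dimensions assumed):

    import numpy as np

    def downsample_to_420(u: np.ndarray, v: np.ndarray):
        """Reduce full-resolution U and V planes (4:4:4) to 4:2:0 (one sample per 2x2 block)."""
        def avg_2x2(c: np.ndarray) -> np.ndarray:
            c = c.astype(np.float32)
            return 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2])
        return avg_2x2(u).astype(u.dtype), avg_2x2(v).astype(v.dtype)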