Supervisor: INA RIVKIN Students: Video manipulation algorithm on ZYNQ Part B.


1 Supervisor: INA RIVKIN Students: Video manipulation algorithm on ZYNQ, Part B

2 Motivation Our goal is to build an embedded system that can receive video, process it in hardware and software, and finally send it out to a monitor. The system is based on the ZYNQ device from Xilinx.

3 Project goal Add the ability to receive and send video signals (HDMI IN / HDMI OUT) to and from the embedded system (ZedBoard + FMC module). Add the ability to process the video signals in hardware, in software, or in both.

4 Background The ZYNQ component The board we are working on is called the ZedBoard. The main device on our board is the ZYNQ. The ZYNQ consists of two main parts: the FPGA (programmable logic) and the dual-core ARM processor. We consider the above to be an embedded system.

5 The HDMI Input/Output FMC Module The FMC module enables us to receive and send HDMI video data. It connects to the FMC carrier on the ZedBoard and provides the following interfaces: 1) HDMI input 2) HDMI output 3) the interface to the ZedBoard.

6 Work environment Hardware design: PlanAhead 14.4 for Xilinx FPGA design; Xilinx Platform Studio (XPS). Software design: Xilinx Software Development Kit (SDK); debugging with the SDK.

7 At the previous project, part A:
- We built the embedded system, based on the ZYNQ device and the FMC module.
- The system was able to receive and send video signals.
- Input video signals passed through the FPGA components and through the ARM processor.
- Video processing was done by hardware and software.
- Video manipulation was simple and done by the software.

8 The full system, part A
[Block diagram: HDMI in -> FMC interface -> video detector / video resolution -> AXI4S_in -> VTC_0 -> VDMA -> frame buffer (DDR) -> AXI4S_out -> VTC_1 -> video generator -> HDMI out; software block and hardware block connected over AXI]

9 Project part B
- Use the embedded system we built in the previous project.
- Perform complex processing using the software.
- Perform complex processing using the hardware.
- Combine the two kinds of processing into a single project.

10 Video color space
- Our system works with the "luma" representation (YCbCr) instead of RGB in order to be more efficient.
- RGB pixel -> 32 bits
- YCbCr pixel -> 16 bits

11 RGB video format
- RGB format uses 32 bits per pixel; each pixel is composed of 4 channels of 8 bits each. The first three are the color channels: RED, GREEN and BLUE.
- The fourth channel is the transparency (alpha) channel (0 = transparent pixel, 1 = opaque).
- RGB format is divided into two main representations: straight alpha, represented as discussed above, and premultiplied alpha, in which the RGB channels are already multiplied by the alpha channel to achieve higher efficiency.

12 YCbCr video format
- This format is an encoding of the RGB format; the final pixel we see on screen depends on how this format is interpreted (by the hardware video-out component).
- The 8 LSBs are the Y component, the intensity (luminance) component.
- The 8 MSBs are the Cb and Cr components, 4 bits each; these are the color (chroma) components.
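The 16-bit packing described above can be sketched in C. The exact bit positions (Y in the low byte, Cb in bits 8-11, Cr in bits 12-15) and the helper names are assumptions for illustration, not taken from the project sources:

```c
#include <stdint.h>

/* Helpers for the 16-bit YCbCr layout described above.
 * Assumed bit positions: bits 0-7 = Y, bits 8-11 = Cb, bits 12-15 = Cr. */
static uint16_t pack_pixel(uint8_t y, uint8_t cb, uint8_t cr) {
    return (uint16_t)(((cr & 0xFu) << 12) | ((cb & 0xFu) << 8) | y);
}
static uint8_t get_y(uint16_t p)  { return (uint8_t)(p & 0xFFu); }
static uint8_t get_cb(uint16_t p) { return (uint8_t)((p >> 8) & 0xFu); }
static uint8_t get_cr(uint16_t p) { return (uint8_t)((p >> 12) & 0xFu); }
```

For example, pack_pixel(0x80, 0x5, 0xA) gives 0xA580 under this layout.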

13 How does it work
- We build an (x, y) grid on the standard color map; the x and y axes are controlled by the Cb and Cr components, so given Cb and Cr we know the color.
- Now we add a z axis to set the brightness of the pixel, controlled by the Y (luminance) component.

14 How it looks
- Cb and Cr are the X and Y axes (respectively), as we can see in the figure below.
- The Z axis (the luminance component) is directed into the page.

15 In conclusion
- The way our eye sees colors allows us to modify and manipulate pixels in the much smaller luma format (16 bits) instead of the larger 32-bit RGB format.
- This lets us achieve higher efficiency when manipulating pixels (or just streaming video frames), which is why many displays use the luma format.
- For more accurate manipulation it is possible to use the RGB format, but then we would have to change the hardware components.
*Note: Xilinx offers suitable IP cores; as an example we will show the hdmi_out_rgb core.

16 RGB hardware components
[Block diagram: the timing-control core; a core that converts data from the AXI4-Stream video protocol interface to the video-domain interface; a core used to convert between different video formats (this is the RGB -> luma interpretation); and the interpretation from signals to pixels.]

17 SOFTWARE

18 Software manipulation Pressing the switches freezes a video frame, and pressing them back returns to streaming. While frames are frozen we can manipulate them as much as we want. We chose to manipulate the colors.

19 Microcontroller manipulation

20 Software block diagram
[Input video signals -> video detector -> video resolution -> frame buffer -> manipulation block (software) -> video generator -> output video signals]

21 Software processing results Each switch is responsible for one manipulation.

22 How it looks
[Frames enter and leave the frame buffer: frames in -> frame buffer -> frames out]

23 Freeze frame and process it
[Frames are processed in the frame buffer; incoming frames are thrown away; outgoing frames are frozen]

24 Processed frames sent to display
[Processed frames leave the frame buffer for the display; incoming frames are thrown away; no new frames enter the frame buffer]

25 Software architecture
- Inside the frame buffer we have up to 7 frames; each frame consists of 1920x1080 pixels.
- We iterate over the frames, then we iterate over each pixel to manipulate its data.
- We built 4 manipulations (one for each switch): zeroing each chroma stream (Cb, Cr), swapping the two chroma streams, and swapping the intensity with the color (Y, CbCr).
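The frame-and-pixel iteration described above can be sketched as follows. The function and type names are illustrative, and each frame is assumed to be a plain array of 16-bit YCbCr pixels:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the frame-buffer iteration described above.
 * Assumptions (illustrative, not from the project sources):
 * - up to 7 frames of 1920x1080 16-bit YCbCr pixels
 * - manip() applies the manipulation selected by the active switch */
#define FRAME_W 1920
#define FRAME_H 1080
#define MAX_FRAMES 7

typedef uint16_t (*manip_fn)(uint16_t pixel);

static void process_buffer(uint16_t *frames[], size_t n_frames, manip_fn manip) {
    for (size_t f = 0; f < n_frames; ++f) {          /* iterate over frames */
        uint16_t *frame = frames[f];
        for (size_t i = 0; i < (size_t)FRAME_W * FRAME_H; ++i)
            frame[i] = manip(frame[i]);              /* per-pixel manipulation */
    }
}
```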

26 How it looks We have our frames inside the frame buffer. We iterate over the frames in series; when a frame is selected, it is sent to the manipulation process.

27 Manipulation process We iterate over each pixel in the manipulation process. *Note: picture not to scale.

28 Pixel manipulation Our 16-bit pixel representation: an 8-bit Y intensity channel, a 4-bit Cb color channel and a 4-bit Cr color channel. At this point we have full access to the pixel information, and we can manipulate the pixel by changing the information represented in these bits.

29 1st manipulation The Cb channel is set to all zeroes; the Y and Cr channels are untouched.

30 2nd manipulation The Cr channel is set to all zeroes; the Y and Cb channels are untouched.

31 3rd manipulation The Cb and Cr channels are swapped; the Y channel is untouched.

32 4th manipulation The Cb and Cr channels are swapped with the Y channel.
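Assuming the 16-bit layout from slide 28 (Y in bits 0-7, Cb in bits 8-11, Cr in bits 12-15), the four switch manipulations reduce to simple masks and shifts; the names and bit positions are illustrative:

```c
#include <stdint.h>

/* The four switch manipulations on the assumed 16-bit pixel layout:
 * bits 0-7 = Y, bits 8-11 = Cb, bits 12-15 = Cr. */
static uint16_t zero_cb(uint16_t p) { return (uint16_t)(p & 0xF0FF); } /* 1st: clear Cb */
static uint16_t zero_cr(uint16_t p) { return (uint16_t)(p & 0x0FFF); } /* 2nd: clear Cr */
static uint16_t swap_cbcr(uint16_t p) {                                /* 3rd: swap Cb <-> Cr */
    uint16_t y  = p & 0x00FF;
    uint16_t cb = (p >> 8) & 0xF;
    uint16_t cr = (p >> 12) & 0xF;
    return (uint16_t)((cb << 12) | (cr << 8) | y);
}
static uint16_t swap_y_chroma(uint16_t p) {                            /* 4th: swap Y with CbCr */
    return (uint16_t)((p << 8) | (p >> 8));
}
```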

33 Processing speed
- Our system clock is 148.5 MHz.
- The AXI4 bandwidth is 32 bits I/O.
- A pixel represented in luma (YCbCr) is 16 bits.
- So in total, every clock cycle 2 pixels enter and leave the CPU.
- Our frame size is 1920x1080.
- So the frame frequency is 2 x 148.5e6 / (1920 x 1080), roughly 143 frames per second.

34 Impossible to manipulate video on the fly
- Trying to do so is equivalent to implementing a single-core GPU.
- More elaborate explanation: each frame has 1920x1080 pixels and the buffer holds up to 15 frames (minimum 3), so the total number of pixels to manipulate is about 6e6.
- Software processing iterates over every pixel, so even without taking into account architecture penalties such as branch mispredictions at the end of every loop (about 6e3 outer loops), and even assuming a superscalar pipeline, we still have millions of iterations to do in a very short time.

35 Impossible to manipulate video on the fly
- The time between every two frames is (1/150) s.
- Our CPU works at ~1 GHz, so each cycle takes ~1 ns.
- Each iteration consists of at least 100 cycles (again assuming the best-case scenario, e.g. all data and instructions in the L1 cache), so manipulating the whole buffer will take almost 1 s while we have only 1/150 s.
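A quick back-of-envelope sketch of this argument, using the slide's assumptions (~1 GHz CPU, so ~1 ns per cycle, and ~100 cycles per pixel iteration); the function name is illustrative:

```c
/* Time needed to manipulate n_frames of 1920x1080 pixels in software,
 * under the slide's assumptions: 1 GHz CPU, cycles_per_pixel per iteration. */
static double seconds_needed(double n_frames, double cycles_per_pixel) {
    const double pixels_per_frame = 1920.0 * 1080.0;
    const double cpu_hz = 1e9;
    return n_frames * pixels_per_frame * cycles_per_pixel / cpu_hz;
}
```

With the minimum of 3 buffered frames this gives roughly 0.62 s of work against a budget of about 1/150 s per frame interval, which is why on-the-fly software manipulation is ruled out.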

36 Problems & solutions
- Software can only manipulate a single frame at a time.
- It needs to stop streaming.
- This may still be useful for some applications.
- For processing HDMI video on the fly we have only one solution: use hardware, the FPGA.

37 HARDWARE

38 Hardware manipulation The hardware is able to achieve real-time manipulation of streaming video. The manipulation we chose to implement is color manipulation, based on our software manipulation.

39 Hardware manipulation
- As we saw, it is impossible to process video in real time with only software running on our microcontroller, so we have to use hardware in order to achieve our goal.
- We would like to perform in hardware a manipulation similar to the one we attempted in software.
- The place in the embedded system we chose for our hardware manipulation is the HDMI_OUT block.

40 HDMI_OUT block manipulation

41 HDMI_OUT block manipulation
- As in the software, in hardware we also have access to the whole pixel (all of its bits).
- Our manipulation is done by changing the wiring of each pixel, using 3 video processing algorithms.
- The algorithms are Luma2RGB, RGB2RGB grey scale and RGB2Luma.
- Because the manipulation is done purely by wiring, we can achieve real-time color processing.

42 How it looks
[Waveform view: arriving pixels enter the buffer block (HDMI_OUT); the video_data signal [0-15] arriving from the AXI block carries the luma and chroma data (signals shown in hexadecimal); the video_data_d signal [0-15] leaves the hdmi_out block.]

43 Adding manipulation
[Block diagram: the video_data signal [0-15] arriving from the AXI block now passes through a new manipulation block before the buffer block (HDMI_OUT); the video_data_d signal [0-15] leaves the hdmi_out block.]

44 Manipulation block
[Pipeline: colored pixel -> luma2rgb -> rgb2rgb grey scale -> rgb2luma -> greyscale pixel]

45 Luma 2 RGB We use the ordinary equations to perform the transformation:
- R' = 1.164(Y - 16) + 1.596(Cr - 128)
- G' = 1.164(Y - 16) - 0.813(Cr - 128) - 0.392(Cb - 128)
- B' = 1.164(Y - 16) + 2.017(Cb - 128)

46 RGB 2 RGB grey scale We use the ordinary equations to perform the transformation (all three output channels receive the same luminance value, which is what produces a greyscale pixel):
- R = 0.299*R' + 0.587*G' + 0.114*B'
- G = 0.299*R' + 0.587*G' + 0.114*B'
- B = 0.299*R' + 0.587*G' + 0.114*B'

47 RGB 2 Luma We use the ordinary equations to perform the transformation:
- Y = 0.299*R + 0.587*G + 0.114*B
- Cb = (B - Y)*0.564 + 0.5
- Cr = (R - Y)*0.713 + 0.5
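A minimal C sketch of the three conversion stages, each following the slide's equation as written. Note that the slides mix value ranges: the Luma 2 RGB stage uses 8-bit studio-range values, while the RGB 2 Luma stage is written for normalized [0, 1] values. Types and names are illustrative; the real design is implemented as wiring in the FPGA fabric, not as C code:

```c
typedef struct { double r, g, b; } rgb_t;
typedef struct { double y, cb, cr; } ycbcr_t;

/* Luma 2 RGB (BT.601, 8-bit studio range: Y in [16,235], Cb/Cr around 128) */
static rgb_t luma2rgb(ycbcr_t p) {
    rgb_t o;
    o.r = 1.164 * (p.y - 16) + 1.596 * (p.cr - 128);
    o.g = 1.164 * (p.y - 16) - 0.813 * (p.cr - 128) - 0.392 * (p.cb - 128);
    o.b = 1.164 * (p.y - 16) + 2.017 * (p.cb - 128);
    return o;
}

/* RGB 2 RGB grey scale: every output channel gets the same luminance */
static rgb_t rgb2grey(rgb_t p) {
    double l = 0.299 * p.r + 0.587 * p.g + 0.114 * p.b;
    rgb_t o = { l, l, l };
    return o;
}

/* RGB 2 Luma (R, G, B normalized to [0,1], as in the slide's equations) */
static ycbcr_t rgb2luma(rgb_t p) {
    ycbcr_t o;
    o.y  = 0.299 * p.r + 0.587 * p.g + 0.114 * p.b;
    o.cb = (p.b - o.y) * 0.564 + 0.5;
    o.cr = (p.r - o.y) * 0.713 + 0.5;
    return o;
}
```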

48 Hardware processing results

49 Software vs. Hardware
Software:
- Can only manipulate a single frame at a time.
- Needs to stop streaming.
- Not real-time processing.
Hardware:
- Can manipulate the arriving frames.
- Doesn't need to stop streaming.
- Real-time processing.

50

51 In conclusion In our project we learned the correct way to work with embedded systems, and the stronger and weaker sides of each component. For example, we learned that to achieve real-time processing of the data itself we must use the hardware components of our system, while for stream handling or data parking we can use our microcontroller and software processing.

