OpenCV and Embedded System


1 OpenCV and Embedded System
Jordan Buford, Hui Xu, Yifan Zhu

2 Contents
Intro
Raspberry Pi camera with OpenCV
Concrete OpenCV examples
Raspberry Pi to FPGA connection

3 OpenCV Intro
Open-source computer vision and machine learning library
Cross-platform: Windows, Linux, macOS, Android, iOS
Language support: C/C++, Python
Install at
Documentation at
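As one common install route on the Pi, OpenCV's Python bindings can be installed from PyPI; a sketch (the opencv-python package name refers to the unofficial prebuilt wheels, which are only one of several install options):

$ pip install opencv-python
$ python -c "import cv2 as cv; print(cv.__version__)"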

4 OpenCV offers a large number of functions; the following slides give some concrete examples, starting with higher-level context.

5 Example: Controlling a Drone
Drone → Controller → Pi running OpenCV → Camera
Here is a concrete design for a self-tracking drone. First the camera takes a picture, and the first control board (the Pi) uses OpenCV to find the object. It transmits the corresponding data to another control board, which sends out the signals that control the drone. In this arrangement the second control board is the master, and the first board and the drone are both slaves. Next we will look more closely at the configuration of the camera and the control board.

6 Raspberry Pi with Camera
Types of camera: in this example we use a camera module with the Raspberry Pi board. There are various camera modules for the Raspberry Pi; the most commonly used are the normal camera module and the NoIR camera module. NoIR stands for "no infrared-cut filter installed", so the NoIR camera can capture infrared light. There are other cameras as well, such as fisheye cameras.

7 This is an example of the difference between the NoIR camera and the normal camera.
The NoIR camera can be used as a night-vision camera: given an infrared light source, it can see in the dark where the normal camera cannot.

8 Raspberry Pi configuration UI
Here is specifically how to set up the Raspberry Pi camera. Open a terminal and execute the following command:
$ sudo raspi-config
This brings up the Raspberry Pi configuration screen shown here.
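A sketch of the usual flow from there (the exact menu labels vary between Raspberry Pi OS releases, so treat them as an assumption):

$ sudo raspi-config        # Interfacing Options -> Camera -> Enable
$ sudo reboot              # the camera interface takes effect after a reboot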

9 An easy example of how to take a photo using picamera
Here is starter code for taking a photo in Python with the picamera package:

#!/usr/bin/python
import picamera
import time

camera = picamera.PiCamera()
time.sleep(2)  # camera warm-up time
camera.capture('test.jpg')

10 Once we can read from the camera...
Now we can use OpenCV to do more analysis on our images or video, for example:
Tracking
Object detection
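For example, the picamera package can hand a frame to OpenCV directly as a NumPy array; a minimal sketch, assuming picamera and the OpenCV Python bindings are installed as above:

import cv2 as cv
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    with picamera.array.PiRGBArray(camera) as output:
        camera.capture(output, format='bgr')   # 'bgr' matches OpenCV's channel order
        frame = output.array                   # NumPy array, ready for cv functions
        gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
        print(frame.shape, gray.shape)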

11 Example: Tracking

import numpy as np
import cv2 as cv

cap = cv.VideoCapture('slow.flv')

# take first frame of the video
ret, frame = cap.read()

# setup initial location of window
r, h, c, w = 250, 90, 400, 125  # simply hardcoded the values
track_window = (c, r, w, h)

# set up the ROI for tracking
roi = frame[r:r+h, c:c+w]
hsv_roi = cv.cvtColor(roi, cv.COLOR_BGR2HSV)
mask = cv.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv.normalize(roi_hist, roi_hist, 0, 255, cv.NORM_MINMAX)

# Setup the termination criteria: either 10 iterations or move by at least 1 pt
term_crit = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 1)

while(1):
    ret, frame = cap.read()
    if ret == True:
        hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
        dst = cv.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

        # apply meanshift to get the new location
        ret, track_window = cv.meanShift(dst, track_window, term_crit)

        # draw it on the image
        x, y, w, h = track_window
        img2 = cv.rectangle(frame, (x, y), (x+w, y+h), 255, 2)
        cv.imshow('img2', img2)

        k = cv.waitKey(60) & 0xff
        if k == 27:      # Esc quits
            break
        else:
            cv.imwrite(chr(k) + ".jpg", img2)
    else:
        break

cv.destroyAllWindows()
cap.release()
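To run the same tracker against a live camera instead of the sample video, only the capture line needs to change; a sketch (device index 0 assumes the Pi camera is exposed as /dev/video0 via the V4L2 driver, or that a USB webcam is attached):

# cap = cv.VideoCapture('slow.flv')   # file-based capture from the example above
cap = cv.VideoCapture(0)              # first video device instead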

12 Example: Face Detection

import numpy as np
import cv2 as cv

# Load classifiers
face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')

# Read image
img = cv.imread('sachin.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Detect faces, then eyes within each face region
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

cv.imshow('img', img)
cv.waitKey(0)
cv.destroyAllWindows()
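The two Haar cascade XML files above ship with OpenCV. In recent opencv-python packages their folder is exposed as cv.data.haarcascades, which avoids copying the files next to the script; a sketch, assuming such a package version:

import cv2 as cv

# Build absolute paths to the bundled cascade files
face_cascade = cv.CascadeClassifier(
    cv.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier(
    cv.data.haarcascades + 'haarcascade_eye.xml')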

13 More Detection
Through machine learning, we are able to train our own classifier.

14 Train Your Own Classifier
Place positive images into a folder and create a list
Place negative images into a folder and create a list
Generate samples
Run the training script

find ./negative_images -iname "*.jpg" > negatives.txt
find ./positive_images -iname "*.jpg" > positives.txt
perl bin/createsamples.pl positives.txt negatives.txt samples 5000
python ./tools/mergevec.py -v samples/ -o samples.vec
opencv_traincascade -data lbp -vec samples.vec -bg negatives.txt -numStages 20 -minHitRate … -maxFalseAlarmRate 0.5 -numPos … -numNeg … -w 40 -h 40 -mode ALL -precalcValBufSize … -precalcIdxBufSize … -featureType LBP
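When training finishes, opencv_traincascade writes a cascade.xml into the output directory given by -data (here, lbp/). It loads exactly like the pretrained classifiers from the face-detection example; a minimal sketch, with scene.jpg as a placeholder test image:

import cv2 as cv

# Load the classifier produced by opencv_traincascade
custom_cascade = cv.CascadeClassifier('lbp/cascade.xml')

img = cv.imread('scene.jpg')                       # placeholder test image
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Same detection call as with the pretrained Haar cascades
objects = custom_cascade.detectMultiScale(gray, 1.1, 5)
for (x, y, w, h) in objects:
    cv.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv.imshow('detections', img)
cv.waitKey(0)
cv.destroyAllWindows()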

15 What We Can Do... With our ability to detect and track objects, we can build a simple drone-following program.

16 What's Next
Generate control signals according to the detection results (a sketch follows below)
Communicate with the FPGA
Drone → Controller → Pi running OpenCV → Camera
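As an illustration of the first point, a hedged sketch of turning a tracked window into a one-byte steering value (the frame width, the scaling, and the steering_byte function itself are illustrative assumptions, not part of the original design):

FRAME_WIDTH = 640   # assumed capture width

def steering_byte(track_window):
    """Map the tracked window's horizontal offset from the frame centre
    to one byte: 0 = hard left, 128 = centred, 255 = hard right."""
    x, y, w, h = track_window
    centre = x + w / 2.0
    offset = (centre - FRAME_WIDTH / 2.0) / (FRAME_WIDTH / 2.0)   # -1.0 .. 1.0
    return max(0, min(255, int(round(128 + offset * 127))))

# A window centred right of the frame centre steers right (value above 128)
print(steering_byte((440, 200, 80, 60)))

This byte could then be shipped to the controller over one of the serial buses described next.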

17 Raspberry Pi to FPGA Communication
The Raspberry Pi supports three serial communication protocols via its GPIO header by default:
UART
SPI
I2C
Also available: USB and Ethernet

18 Raspberry Pi to FPGA Communication
By default, the SPI and I2C peripherals are not activated (they can be enabled as sketched below).
The UART peripherals are connected to the Bluetooth module (if present) and to the console output.
Pin 14: UART TX
Pin 15: UART RX
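A sketch of enabling the SPI and I2C peripherals: either through the raspi-config interfacing menu, or by adding device-tree parameters to /boot/config.txt and rebooting (the file path and parameter names below are the usual Raspberry Pi OS ones; treat them as assumptions for other distributions):

# /boot/config.txt
dtparam=spi=on
dtparam=i2c_arm=on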

19 Bus Access via Software Libraries
Various open-source libraries are available to interface with these busses through the GPIO header, and they include functions to configure and read them.
Example libraries: wiringPi, spidev, smbus, pigpio
A wiringPi SPI example in C++ (the clock speed shown is an example value):

// excerpt: requires #include <wiringPiSPI.h> and #include <iostream>
#define CHANNEL 0                          // SPI channel (chip select) 0

int fd, result;
unsigned char buffer[100];

fd = wiringPiSPISetup(CHANNEL, 500000);    // SPI clock in Hz (example value)
cout << "Init result: " << fd << endl;

// clear display
buffer[0] = 0x76;
wiringPiSPIDataRW(CHANNEL, buffer, 1);
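The same transfer can also be written in Python with the spidev package listed above; a minimal sketch (bus 0, chip-select 0, and the 500 kHz clock are assumptions chosen to mirror the C++ example):

import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                 # bus 0, chip-select 0
spi.max_speed_hz = 500000      # assumed SPI clock

# Send the display-clear command from the example above (0x76)
reply = spi.xfer2([0x76])      # full duplex: one byte out, one byte back
print(reply)

spi.close()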

20 What can we do with data sent to/from the Raspberry Pi?
Give instructions to the Raspberry Pi based on the overall system state, e.g. enable and disable the camera, change the entities tracked, switch cameras
Continuously track environment parameters for OpenCV, e.g. ambient light sensor data can prescale the brightness of the image (see the sketch below)
Use OpenCV data to generate interrupts and, for example, control motors
Use camera data to encode a video file
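For the ambient-light point, a hedged sketch of prescaling frame brightness from an external sensor reading (the 0-1023 range and the gain curve are illustrative assumptions; the real reading would arrive over SPI or I2C):

import numpy as np
import cv2 as cv

def prescale_brightness(frame, ambient_reading, full_scale=1023):
    """Raise pixel intensities as the ambient light reading drops.
    ambient_reading: raw light-sensor value, 0 (dark) .. full_scale (bright)."""
    gain = 1.0 + (1.0 - ambient_reading / float(full_scale))   # 1.0 .. 2.0
    return cv.convertScaleAbs(frame, alpha=gain, beta=0)

# Example with a synthetic dark frame and a low light reading
frame = np.full((120, 160, 3), 60, dtype=np.uint8)
brighter = prescale_brightness(frame, ambient_reading=200)
print(frame[0, 0], brighter[0, 0])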

21 Why SPI for Communication?
Full-duplex: bidirectional, simultaneous communication; no collisions
Push-pull transistor design: faster rise times, increased data throughput
No clock synchronization issues: the master drives the clock, so implementation is easier

22 Future Design Possibilities

23 Use of Raspberry Pi GPU: Broadcom VideoCore IV

24 FPGA Video Encoding
Highly parallelizable
Various standard codecs

25 Q&A

