Advanced Cubesat Imaging Payload Robert Urberger, Joseph Mayer, and Nathan Bossart ECE 490 – Senior Design I – Department of Electrical and Computer Engineering.


1 Advanced Cubesat Imaging Payload
Robert Urberger, Joseph Mayer, and Nathan Bossart
ECE 490 – Senior Design I – Department of Electrical and Computer Engineering
http://acip.us

Background
RASCAL: a two-spacecraft mission to demonstrate key technologies for proximity operations.
 The two spacecraft passively drift apart
 Each spacecraft uses its propulsion system in conjunction with the imaging payload to facilitate orbiting and docking
Imaging Payload: Computer Vision in Space
 To achieve the goals of the RASCAL mission, each spacecraft must identify the other and estimate parameters such as relative distance

Description
The goal of this project is to design and implement an imaging payload that obtains raw image data and converts it into actionable high-level data. The Imaging Payload consists of three main components:
Face Identification:
 LED patterns on each side of the Cubesat identify the currently visible face
Capturing Image Data:
 A single CMOS camera inside the Cubesat obtains raw images of the surroundings
 The camera will be custom built (imager and lens) and will be space-ready
 A custom camera interface will be created for this project
Processing Image Data:
 Low-level processing transforms raw image data into actionable high-level data
 Three main image processing goals:
1. Distance detection: identifying the relative distance between the two Cubesats
2. Object detection: identifying objects of interest
3. Object classification: identifying the currently visible face of the other Cubesat
 Solutions to these goals provide functionality for identifying the relative distance, the approximate attitude, and the other Cubesat (as well as its currently visible face)

Materials & Methods
 Processing will be performed in hardware on a Xilinx Zynq-7000 chip and in software on a bare-metal C or Linux core

Current Status
 LEDs have been obtained and patterns are being created
 Software verification is in progress and is complete for object detection
 A Bill of Materials has been created for the camera

Progress
Our main focus this semester was software verification and interfacing with our camera. Software verification and LED pattern creation are nearly complete, while the camera work will move to next semester, alongside hardware verification of the image processing pipeline. Preliminary progress has been made on assembling a bus-interaction pipeline with the ARM processor on the Zynq.

References & Thanks
Special thanks to Dr. Kyle Mitchell, Dr. Jason Fritts, and Dr. Will Ebel.
[1] Jan Erik Solem, Programming Computer Vision with Python. Creative Commons.
[2] Milan Sonka, Vaclav Hlavac, Roger Boyle, Image Processing, Analysis, and Machine Vision. Cengage Learning, 3rd edition.
[3] http://www.cs.columbia.edu/~jebara/htmlpapers/UTHESIS/node14.html (accessed October 29, 2013)
[4] http://cubesat.slu.edu/AstroLab/SLU-03__Rascal.html (accessed October 31, 2013)
[5] http://docs.opencv.org (accessed November 11, 2013)
[6] Gary Bradski, Adrian Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, 1st edition.
[7] http://i.space.com/images/i/000/022/868/original/cubesat-satellites-launch-1.jpg?1350419887 (accessed November 25, 2013)
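The poster reports that LED identification through image segmentation has been demonstrated but does not specify the algorithm. A minimal sketch of that stage, assuming simple intensity thresholding followed by connected-component labeling (the function name and threshold are illustrative; in practice OpenCV's segmentation routines, per reference [5], would serve the same role):

```python
import numpy as np

def find_led_blobs(gray, thresh=200):
    """Threshold a grayscale frame and label connected bright regions.

    Returns a list of (row, col) centroids, one per detected blob,
    in raster-scan order. Uses a pure-numpy 4-connected flood fill
    as a stand-in for the poster's unspecified segmentation stage.
    """
    mask = gray >= thresh
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    centroids = []
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to a labeled blob
        current += 1
        labels[seed] = current
        stack = [seed]
        pixels = []
        while stack:
            r, c = stack.pop()
            pixels.append((r, c))
            # visit 4-connected neighbors still inside the frame
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
        centroids.append(tuple(np.array(pixels).mean(axis=0)))
    return centroids
```

The resulting centroid list is what a pattern-matching step would compare against the known LED layouts to classify the visible face.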
Results
The camera components have been selected for the custom imager design, along with a standard lens housing that accepts multiple lens sizes and apertures. LED identification through image segmentation has been successfully demonstrated. The other stages of the image processing pipeline are under active development.

Ongoing work:
 Remainder of software verification in progress
 Software verification will be ported to High-Level Synthesis toolkit code and converted directly into hardware
 Camera board is being designed and constructed, and the camera-to-FPGA interface is being finalized
 LED patterns are being run through the processing pipeline in software to verify the visibility of different patterns

Figure 1: RASCAL mission diagram
Figure 2: Preliminary LED pattern ideas
Figure 3: Previous in-space Cubesat imaging
Figure 4: Preliminary software verification of LED pattern ID
Figure 5: Gantt chart (as of 25 November 2013)
Figure 6: Team photo – Joe Mayer, Bob Urberger, and Nathan Bossart
Figure 7: Functional decomposition – Image Capture → Preprocess Image → Face Classification → Depth Identification → Store / Output Data
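The poster's depth-identification stage must recover relative distance from image measurements, but the method is not given. One common approach, sketched here as an assumption rather than the team's design, uses the pinhole-camera relation Z = f·B / b, where B is the known physical spacing between two LEDs on the target Cubesat, b is their measured separation in pixels, and f is the focal length expressed in pixels:

```python
def estimate_range(led_spacing_m, pixel_separation, focal_length_px):
    """Pinhole-camera range estimate: Z = f * B / b.

    led_spacing_m    -- known physical distance between two LEDs (m)
    pixel_separation -- measured distance between their centroids (px)
    focal_length_px  -- camera focal length expressed in pixels

    All parameter names are illustrative; actual values would come
    from the LED pattern geometry and the custom imager calibration.
    """
    if pixel_separation <= 0:
        raise ValueError("LED centroids must be distinct")
    return focal_length_px * led_spacing_m / pixel_separation
```

For example, two LEDs 8 cm apart imaged 40 px apart by a 1000 px focal-length camera would place the other spacecraft about 2 m away.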

