Multi-Channel Radar Depth Sounder (MCRDS) Signal Processing: A Distributed Computing Approach

Je’aime Powell [1], Dr. Linda Hayden [1], Dr. Eric Akers [1], Richard Knepper [2]
[1] Elizabeth City State University, Elizabeth City, North Carolina
[2] Indiana University, Bloomington, Indiana

ABSTRACT
In response to problems surrounding the measurement of ice sheet thickness in high-attenuation areas of Greenland and the Antarctic, the Center for Remote Sensing of Ice Sheets (CReSIS) created the Multi-Channel Radar Depth Sounder (MCRDS). The MCRDS system was used to measure ice thicknesses of up to five kilometers. The system produced large datasets, which required greater processing capability in the field. The purpose of this project was to test processing performance on a 32-core cluster built from distributed computing resources. Testing involved a six-node cluster with an attached storage array, running the CReSIS Synthetic Aperture Radar Processor (CSARP) through the MATLAB Distributed Server Job Manager. Performance was measured from average run times collected as CSARP jobs completed; the run times were then compared using an ANOVA test at a five percent significance level.

[Poster panels: SAR and MCRDS Relation; CSARP Function; Job Distribution]

Research Questions
Does the addition of computing cores increase the performance of the CReSIS Synthetic Aperture Radar Processor (CSARP) at a 5% level of significance?
What MATLAB toolkits and/or expansion kits are necessary to run CSARP?
What hardware requirements are necessary to store and process CReSIS-collected data?
What facility environmental requirements are there to house a cluster of at least 32 cores to process a data set?
What is the process to prepare a cluster from a middleware standpoint?
Can an open-source job scheduler replace the MATLAB proprietary Distributed Computing Server currently required by CSARP?

[Poster panels: CSARP Requirements; Power and Cooling Consumption Comparison; Madogo Cluster; Middleware Setup; License Manager Installation; MATLAB Distributed Server – Head Node; MATLAB Distributed Server – Client Nodes; Start MATLAB Distributed Computing Engine; Start the MATLAB Job Manager; Start the MATLAB Workers; Scheduler Options; Cluster Setup (Madogo); Data Collection]

Results
There is significant evidence of a difference in the performance times of CSARP with the inclusion of additional workers, at a 5% level of significance.

Conclusions
Does this study prove, at a 5% level of significance, that the addition of computing cores increases the performance of the CSARP algorithm? There was significant evidence of a difference in the performance times of CSARP due to the inclusion of additional workers at a 5% level of significance. Moreover, the drop from roughly 30 minutes of run time with one worker to roughly 10 minutes with 32 workers constituted an approximately 67% reduction in run time.

What hardware requirements are necessary to store and process CReSIS-collected data? For the 2008 field season, a minimum of approximately 10 TB of space was required to store the data. For computing hardware, the minimum requirements of the MATLAB Distributed Server had to be met: an Intel Pentium 4 processor, 1 GB of memory, and the network equipment needed to connect the server and clients.

What facility environmental requirements are there to house a cluster with 32 cores to process a data set? Including networking, storage, and computing, a total of 9.7 kW of power was required, with a minimum of 2.7 tons of cooling for the cluster.

What is the process to prepare a cluster from a middleware standpoint? With the MATLAB Distributed Server (MDS) as the middleware solution, FLEXlm license-manager credentials first had to be appended to the site license. MDS was then installed on the head node; this step included adding the MATLAB Distributed Computing Engine (MDCE) to the startup processes of that unit. MDS was then installed on all client machines. Finally, the Job Manager included in MDS was started and the client machines were added as workers, completing the preparations; the sketches below illustrate these steps.
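As a concrete illustration of those middleware steps, the commands below sketch how the MDCE service, job manager, and workers were started from the command-line tools that ship with MDS of that era. The job manager name (MadogoJM), host names (madogo-head, madogo-node01), and install path are hypothetical stand-ins, not the Madogo cluster's actual configuration.

```
# On the head node (path assumes a default Linux MATLAB install; adjust as needed).
cd /usr/local/MATLAB/R2009b/toolbox/distcomp/bin
./mdce start                                        # start the MATLAB Distributed Computing Engine service
./startjobmanager -name MadogoJM                    # start the Job Manager on this host

# On each client node (repeat for madogo-node01 through madogo-node05).
./mdce start                                        # MDCE must run on every client, as noted below
./startworker -jobmanagerhost madogo-head -jobmanager MadogoJM
```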
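Once the engine, job manager, and workers are running, a client MATLAB session can locate the job manager and submit work to it. The following is a minimal sketch using the R2009b-era Parallel Computing Toolbox API, not the actual CSARP submission code; the job manager name and lookup host are the same hypothetical values used above, and the task function is a placeholder for a CSARP work unit.

```matlab
% Locate the (hypothetical) MadogoJM job manager from a client session.
jm = findResource('scheduler', 'type', 'jobmanager', ...
                  'Name', 'MadogoJM', 'LookupURL', 'madogo-head');

% Create a job with one placeholder task per worker.
job = createJob(jm);
for k = 1:4
    createTask(job, @() sum(1:1e6), 1, {});   % stand-in for a CSARP chunk
end

submit(job);                            % hand the job to the Job Manager
waitForState(job, 'finished');          % block until all tasks complete
results = getAllOutputArguments(job);   % one cell entry per task
destroy(job);                           % remove the job from the manager
```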
What MATLAB toolkits and/or expansion kits are necessary to run CSARP? The required components were MATLAB R2009b, the Signal Processing Toolbox, the Parallel Computing Toolbox, the MATLAB Compiler, and the CReSIS Toolbox. The MATLAB Distributed Server was also recommended in order to use more than eight workers.

Can an open-source job scheduler replace the MATLAB proprietary Distributed Computing Server currently required by CSARP? It was found that MDS natively supports third-party schedulers. This line of research also showed that the MDCE portion of MDS would still be necessary on all client machines, which solidified the need for MDS to run CSARP in MATLAB-script form even if a different job scheduler were selected. The only way to remove that dependency would be to create a CSARP executable binary, which is not possible in CSARP's current form because processing a data set requires hand-edits to the scripts.

Acknowledgements
Dr. Linda Hayden, Mr. Richard Knepper, Dr. Benjamin Branch, Mr. William Blake, Mr. Mathew Link, Mr. Jefferson Davis, Mr. Shelton Spence, Mrs. Sharonda Walton, Dr. Vinod Manglik, Dr. Eric Akers, Dr. Paul Viltz, Mr. Jeff Wood, Mr. Corey Shields, Mr. Timothy Barclift, Mr. Kuchumbi Hayden, Mr. Randy Saunders, Mrs. Doris Johnson, Dr. David Touretzky, Mrs. Sonya Powell