© 2010 Pittsburgh Supercomputing Center — Lustre WAN. ExTENCI Kickoff Meeting, August 19, 2010. Josephine Palencia, J. Ray Scott

Presentation transcript:

Project Summary

– Evaluation of a global wide-area file system for:
  – Performance
  – Robustness
– Leverage work from TeraGrid
– Software support:
  – PSC
  – Josephine Palencia, Brian Johanson
– Hardware support:
  – UF
– Testing:
  – UF, FSI, FIU, PSC, others

Project Approach

– Secure Infrastructure
– Installation Support
– Authentication Mapping
– Network Performance Measurement
– Application Integration
– Assessment and Project Support

Secure Infrastructure

– Kerberos security infrastructure
– Lustre 2.0 installation packages:
  – Ease the software installation
  – Hide Kerberos from site administrators
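As a sketch of what the packages hide from site administrators, Kerberizing a Lustre file system involves creating service principals for the servers and selecting a Kerberized RPC flavor. All realm, host, and file-system names below are invented for illustration, and exact commands depend on the Lustre 2.0 release:

```shell
# Hypothetical names: realm PSC.EDU, file system "extenci",
# MDS host mds1.psc.edu, OSS host oss1.psc.edu.

# Create Kerberos service principals for the Lustre servers
# and export their keys to the host keytab.
kadmin -q "addprinc -randkey lustre_mds/mds1.psc.edu@PSC.EDU"
kadmin -q "addprinc -randkey lustre_oss/oss1.psc.edu@PSC.EDU"
kadmin -q "ktadd -k /etc/krb5.keytab lustre_mds/mds1.psc.edu@PSC.EDU"

# On the MGS, require a Kerberized RPC flavor for all traffic;
# krb5p provides both authentication and encryption (privacy).
lctl conf_param extenci.srpc.flavor.default=krb5p
```

Packaging these steps into RPMs is what lets a site enable Kerberized Lustre without hand-editing KDC and server configuration.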

Example Site Configuration

[Diagram] Two Kerberos realms, .UF.EDU and .PSC.EDU, each with its own OST pool (UF and PSC). The figure shows a client at UF, the MGS, two metadata servers (MDS1, MDS2), and three object storage servers (OSS1–OSS3) serving OST1–OST3.
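From the client side, joining such a configuration reduces to obtaining a ticket in the local realm and mounting the remote file system over the WAN. Host and file-system names here are placeholders, not from the slides:

```shell
# A client in the UF realm mounts a PSC-hosted Lustre file system.
# With Kerberos enabled, the user needs a valid ticket first.
kinit jdoe@UF.EDU

# Standard Lustre client mount: <MGS node>@<net type>:/<fsname>.
mkdir -p /mnt/extenci
mount -t lustre mgs.psc.edu@tcp0:/extenci /mnt/extenci
```

Cross-realm trust between .UF.EDU and .PSC.EDU is what lets the UF ticket authenticate against PSC-side servers.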

Kerberized scp/kftp/gridftp: konFUSEd

Installation Support – RPM Packaging

– Lustre 2.0 Beta 1
– VM client/server RPMs
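A client install from such packages might look like the following; the package file names are illustrative, since the real names depend on the kernel build the RPMs target:

```shell
# Install the client kernel modules and userspace tools,
# then load the Lustre module stack.
rpm -ivh lustre-client-modules-2.0.0-*.rpm
rpm -ivh lustre-client-2.0.0-*.rpm
modprobe lustre
```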

Authentication Mapping

– UID mapping using IU-developed code
– Only necessary across administrative domains
  – Without UID synchronization
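The mapping problem can be pictured as a lookup table from a remote Kerberos principal to a local UID. The IU code does this inside the file system; the toy shell sketch below (principals and UIDs invented) only illustrates the idea:

```shell
# Illustrative mapping table: remote principal -> local UID.
cat > /tmp/uidmap.txt <<'EOF'
jdoe@UF.EDU 52001
rscott@PSC.EDU 52002
EOF

# Resolve a remote principal to the UID used on this site.
lookup_uid() {
  awk -v p="$1" '$1 == p { print $2 }' /tmp/uidmap.txt
}

lookup_uid jdoe@UF.EDU   # prints 52001
```

When the two sites keep their UID spaces synchronized, no such table is needed, which is why the mapping is only required across administrative domains.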

Network Performance Testing

– Pre-production baseline testing
– Ongoing production testing
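A baseline measurement typically starts with raw TCP bandwidth between the two sites before any file-system traffic is layered on top. One common way to do this (host name is a placeholder) is with iperf:

```shell
# On the receiving site, run an iperf server:
iperf -s

# On the sending site, run a 30-second TCP test with a
# large window, since WAN paths need a big bandwidth-delay product:
iperf -c data.psc.edu -t 30 -w 4M
```

Repeating the same test during production gives the ongoing measurements a consistent point of comparison.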

Performance Measurement – Internal Testing

Performance Measurement – Internal Testing

Performance Measurement – TeraGrid

Ongoing Network Performance Testing

Application Integration

– Largely invisible to applications
– Performance:
  – Large metadata operations
  – Data locality
– Independent assessment for LQCD and CMS services to include:
  – Data integrity
  – Accessibility
  – Usability

Application Integration, cont.

– Maintainability
– Ability to troubleshoot/isolate problems
– Namespace
– I/O performance
– Metrics and assessment:
  – Evaluate acceptability as production storage for LHC physics
  – Compare with the Hadoop implementation
  – Test with SCEC and protein-structure applications
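Before a full assessment with benchmarks such as IOR, a quick sanity check of single-stream I/O on the mounted file system gives a first data point. The mount path below is a placeholder:

```shell
# Rough single-stream write and read timing on the WAN mount.
# conv=fsync forces the data to storage before dd reports a rate.
dd if=/dev/zero of=/mnt/extenci/testfile bs=1M count=1024 conv=fsync
dd if=/mnt/extenci/testfile of=/dev/null bs=1M
rm /mnt/extenci/testfile
```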

Assessment and Project Support

Project Wiki

TeraGrid Wiki =JWAN:_lustre-wan_advanced_features_testing

Thank You

Josephine Palencia –
J. Ray Scott –