Update on the LIGO Data Analysis System. LIGO Scientific Collaboration Meeting, LIGO Hanford Observatory, August 19th, 2002. Kent Blackburn, Albert Lazzarini.

Presentation transcript:

Update on the LIGO Data Analysis System. LIGO Scientific Collaboration Meeting, LIGO Hanford Observatory, August 19th, 2002. Kent Blackburn, Albert Lazzarini & Stuart Anderson. LIGO-G E

LDAS Software Update LIGO-G E

Changes to Release Schedule
– Accelerated a release to support the rescheduled S1.
– Accelerating a release to support the new Frame 6 specification.
– Most likely will have two further releases before then.
[Timeline figure: LDAS releases 0.2.0 through 0.7.0 (released and planned) plotted against Jan 2002 – Nov 2003, with the E7 and E8 engineering runs and the S1, S2, and S3 science runs marked.]

LDAS Usernames/Passwords: Secure Web Server
All shared accounts have been removed!

LDAS Usernames/Passwords: Secure Application Form

Lessons Learned: E7
"High priority items needed to get LDAS on track for Science Runs"
– Rework configuration & build rules - DONE!
– Create new diskCacheAPI; remove from frameAPI - DONE!
– Improve reliability of dataConditionAPI (thread issues) - DONE!
– Create shared resampling library for use in both the frameAPI & dataConditionAPI - DONE! (sketched below)
– Extend system monitoring - DONE!
– Add interpolation, Kalman filters, regression and rework the intermediate() function in dataConditionAPI - MOSTLY DONE!
– Reduce memory usage in dataConditionAPI below 5x data - 3x!
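The shared resampling library exists so that the frameAPI and dataConditionAPI downsample channels in exactly the same way. The following is a minimal sketch of that idea, assuming SciPy is available; the function, channel, and rates are illustrative stand-ins, not the actual LDAS interface.

```python
# Minimal sketch of the shared-resampling idea: decimate a channel to a
# lower sample rate through a single anti-aliasing routine that both the
# frameAPI and dataConditionAPI could call.  Names and rates are
# illustrative, not the real LDAS library interface.
import numpy as np
from scipy.signal import decimate

def resample_channel(samples, native_rate, target_rate):
    """Reduce the sample rate by an integer factor with anti-aliasing."""
    if native_rate % target_rate:
        raise ValueError("target rate must evenly divide the native rate")
    factor = native_rate // target_rate
    # Zero-phase FIR decimation keeps the reduced data aligned in time.
    return decimate(samples, factor, ftype="fir", zero_phase=True)

# Example: one second of a hypothetical 16384 Hz channel reduced to 1024 Hz.
raw = np.random.normal(size=16384)
reduced = resample_channel(raw, native_rate=16384, target_rate=1024)
print(len(reduced))  # 1024
```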

Lessons Learned: E7 (Continued)
"High priority items needed to get LDAS on track for Science Runs"
– Move frameAPI / diskCacheAPI to new dataserver - DONE!
– Improve docs, interfaces and table design with LSUG - DONE!
– Implement new TCL channel management interface to better control data sockets - DONE!
– Add detector geometry metadata to dataPipeline for stochastic analyses - DONE!
– Add system load monitoring tools for GriPhyN - DONE!
– Determine archival technology (SAM-QFS vs HPSS) - MUCH DONE, MORE TO DO!
– Build up CIT LDAS System - FULLY FUNCTIONAL!

Performance Enhancements
– Increased concurrency of jobs (20 assistant managers is now the default; sketched below).
– Removed 30 seconds of startup overhead from "mpirun wrapperAPI" on the LDAS Beowulfs (down to ~3 seconds).
– Reduced the latency to transmit data from one API to another by several seconds.
– Increased the computational efficiency of the managerAPI by an order of magnitude.
– Improved overall compute performance by a factor of two using C++ compiler optimization options.
– More extensive test scripts using more search DSOs.
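The concurrency increase amounts to a worker-pool pattern over independent user jobs. A hedged sketch of that pattern follows; run_job() and the job backlog are stand-ins, not LDAS code.

```python
# Hedged sketch of the worker-pool pattern behind running 20 assistant
# managers concurrently: independent user commands are drawn from a queue
# and handled in parallel.  run_job() is a stand-in, not LDAS code.
from concurrent.futures import ThreadPoolExecutor

NUM_ASSISTANT_MANAGERS = 20  # the new default level of concurrency

def run_job(job_id):
    """Stand-in for shepherding one user command through the API chain."""
    # ... parse the command, stage data, launch the search, collect products ...
    return f"job {job_id} finished"

pending_jobs = range(100)  # a hypothetical backlog of user commands
with ThreadPoolExecutor(max_workers=NUM_ASSISTANT_MANAGERS) as pool:
    for result in pool.map(run_job, pending_jobs):
        print(result)
```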

System Monitoring

Job Monitoring

Other New Functionality
New user commands:
– createRDS: Creates the Reduced Data Set (RDS) frames. Includes support for resampling and runs in a unique thread in the frameAPI (sketched below).
– dataStandAlone: A dataPipeline job that stops short of issuing the mpirun command used to start the wrapperAPI on an LDAS Beowulf cluster. Used to integrate GRID technology with LDAS.
– putStandAlone: The second half of a dataPipeline; takes data products from a standalone wrapperAPI running on a GRID resource and re-integrates them into LDAS and the LIGO database.
New Web pages for monitoring job progress in the system:
– Monitor jobs in the queue and jobs that are actively being processed.
– Show the progress of active jobs through the system.
New encrypted LDAS password exchange when using the ldasjobs package found in LIGO Tools.
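The reduction performed by a createRDS-style job amounts to keeping a subset of channels and resampling them before a smaller frame is written. The sketch below illustrates just that step; the channel names, rates, and write_frame() call are hypothetical, and the real command is submitted to LDAS rather than run locally.

```python
# Illustrative sketch of the data reduction behind createRDS: keep only
# selected channels from a full frame and resample them to lower rates.
# Channel names, rates, and write_frame() are hypothetical; this is not
# the LDAS user-command syntax.
from scipy.signal import decimate

def make_reduced_set(full_frame, keep_channels):
    """full_frame: {name: (samples, rate)}; keep_channels: {name: target_rate}."""
    reduced = {}
    for name, target_rate in keep_channels.items():
        samples, rate = full_frame[name]
        factor = rate // target_rate
        reduced[name] = (decimate(samples, factor, ftype="fir"), target_rate)
    return reduced

# Hypothetical usage: keep two channels, both downsampled to 1024 Hz.
# reduced = make_reduced_set(frame, {"H1:LSC-AS_Q": 1024, "H2:LSC-AS_Q": 1024})
# write_frame("H-RDS-714150000-16.gwf", reduced)  # hypothetical writer
```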

Software To Do: Short List
– All-new frameCPP being developed based on Frame Specification Version 6 (previously labeled 5).
– The frameAPI is being completely rewritten to use multiple CPUs in threads, to support the increased demands placed by more complex searches and reduced frames (sketched below).
– Further improve queue performance in the managerAPI.
– Add an option to the wrapperAPI resource file to remove an additional half minute of overhead and improve the efficiency of node usage.
– Implement new actions, improve metadata, and resolve remaining memory leaks in the dataConditionAPI.
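Rewriting the frameAPI to use multiple CPUs in threads boils down to decoding many frame files in parallel. A hedged sketch of the pattern, assuming per-file decoding dominates the cost; read_frame() is a stand-in for a frameCPP-based reader, not a real API.

```python
# Hedged sketch of the multi-threaded pattern behind the frameAPI rewrite:
# decode many frame files concurrently.  read_frame() is a stand-in for a
# frameCPP-based reader, not a real API call.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def read_frame(path):
    """Stand-in for decoding one frame file into channel data."""
    return {"file": path.name, "bytes": path.stat().st_size}

def read_frames_parallel(paths, workers=4):
    """Fan the per-file work out across a small pool of threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_frame, paths))

# Hypothetical usage over a local directory of .gwf frame files.
# frames = read_frames_parallel(sorted(Path("/data/frames").glob("*.gwf")))
```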

LDAS Hardware Update LIGO-G E

LDAS Hardware Update
– Initial archive analysis system is operating at Caltech.
– Beowulf upgrades at LHO, LLO, and MIT.
– Large IDE disk cache for raw S1 data at Caltech.
– All servers upgraded to Gigabit Ethernet.
– Shared SAN file system operational as the data interface between LDAS and CDS.

Science Run Configurations
"Increased computational capacity over E7 and investigated advanced storage configurations, but delay full compute farm deployment until S2+"
[Table of per-site capacities: SAN (TB), IDE (TB), CPU (GHz), and Tape (TB) for LHO, LLO, CIT, MIT, DEV, and TEST.]

Archive Storage Solutions: SAM-QFS versus HPSS
SAM-QFS advantages:
– Simplicity/reliability
– Media import/export
– License cost allows for use at the observatories
– Disaster recovery (GNU TAR)
– Metadata performance (x1000)
– Single-vendor solution (server, software, and OEM storage)
HPSS advantages:
– A few man-years of experience
– Free at Caltech
– 40 TB successfully stored

Hardware To Do: Short List
– Virtual Private Network (VPN) between LDAS systems to allow for database federation and replication.
– Finalize HSM decision:
  – Fibre Channel tape drives.
  – ~500 tape-slot library at the observatories?
– Grow the SAN at the observatories to allow high-speed access to raw frames from the DMT and General Computing machines.
– Install full-scale Beowulf clusters in time for S2+.