SEGL HPC Workflow System

Presentation transcript:

SEGL HPC Workflow System (SEGL©, www.HLRS.DE)
Speaker: Yuriy Yudin (segl-team@hlrs.de)
Team: Natalia Currle-Linde (linde@hlrs.de), Yevgen Dorozhko (dorozhko@hlrs.de), Tatyana Krasikova (krasikova@hlrs.de), Yuriy Yudin (yudin@hlrs.de)

How to start a job? On an HPC resource, the user first transfers the input data and a shell script to the machine (scp, sftp, rsync, ...), then submits the shell script to the PBS queue with qsub, and finally waits, polling with qstat again and again, until the job runs. (20.09.2010)
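The manual procedure on this slide can be sketched as a small script. This is a minimal sketch: the host name, file names, resource request and queue interaction are hypothetical examples, and the actual scp/qsub/qstat invocations are shown only as comments (here the job script is merely written locally, not submitted).

```shell
#!/bin/sh
# Sketch of the manual workflow: stage data, write a PBS job script, submit, poll.
# Host name, paths and resource limits below are made-up examples.

# 1. Stage the input data onto the HPC resource (one of scp/sftp/rsync):
#    scp input.dat user@hpc.example.org:~/run01/

# 2. Write the shell script that PBS will execute:
cat > run01.pbs <<'EOF'
#!/bin/sh
#PBS -N run01
#PBS -l nodes=1:ppn=8,walltime=01:00:00
cd "$PBS_O_WORKDIR"
./simulate < input.dat > output.dat
EOF

# 3. On the login node: submit, then poll until the job leaves the queue.
#    JOB=$(qsub run01.pbs)
#    while qstat "$JOB" >/dev/null 2>&1; do sleep 60; done   # Waiting... YES!
echo "job script written: run01.pbs"
```

Every one of these steps is manual, which is exactly the pain point the following slides address.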

How to start a complex experiment? Doing this by hand requires specific HPC knowledge, multiplied by the number of datasets involved.

Description of the experiment. Is this a workflow? It is an example of using GriCoL, which describes an experiment on two levels: a control flow level and a data flow level.

Data flow level. A dataspace stores the input, output and intermediate data of an experiment. Dataspace "X" is organized into Dataset 1, Dataset 2, ..., Dataset n, and each dataset groups a number of files (for example: file 1 and file 2; a single file 1; or file 1, ..., file 100).
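One plausible on-disk layout for such a dataspace is a directory per dataset, each holding that dataset's files. This is a hypothetical sketch for illustration only; the slide does not describe SEGL's actual storage format.

```shell
#!/bin/sh
# Hypothetical layout of dataspace "X": one subdirectory per dataset,
# each containing that dataset's input, intermediate or output files.
mkdir -p dataspace_X/dataset_1 dataspace_X/dataset_2 dataspace_X/dataset_n
touch dataspace_X/dataset_1/file_1 dataspace_X/dataset_1/file_2   # dataset with two files
touch dataspace_X/dataset_2/file_1                                # dataset with one file
ls dataspace_X
```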

Data flow level: component wrapping. A computational module is wrapped with one implementation per target resource (an implementation for Resource 1, for Resource 2, ..., for Resource n). The module library and all experiment models are stored in the SEGL database.

How does it work? Within each HPC organization, a SEGL server with its database communicates with agents running on the HPC resources; the components are connected via https, ssh and jxta.

SEGL Middleware

Details. The PBS queue imposes a limitation of at most M concurrent jobs, while an experiment generates N jobs with N > M. SEGL therefore holds the jobs (job 1, job 2, ...) in an internal system queue and feeds them into PBS as slots free up, packing several into a common job where appropriate, so the limit is never exceeded.
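The throttling idea behind the internal queue can be sketched in bash: submit at most M jobs at once and release the next one each time a running job finishes. This is a local simulation under stated assumptions (sleep stands in for a real job; a real implementation would issue qsub and track qstat state), not SEGL's actual code.

```shell
#!/bin/bash
# Simulate feeding N jobs through a queue that allows at most M concurrent jobs.
N=6   # jobs generated by the experiment
M=2   # PBS limit on concurrent jobs

running=0
for i in $(seq 1 "$N"); do
    if [ "$running" -ge "$M" ]; then
        wait -n                      # block until one job frees a slot (bash >= 4.3)
        running=$((running - 1))
    fi
    sleep 0.1 &                      # stand-in for: qsub job_$i.pbs
    echo "submitted job $i"
    running=$((running + 1))
done
wait                                 # drain the last running jobs
echo "all $N jobs done"
```

With N=6 and M=2 the loop waits before submissions 3 through 6, so no more than two stand-in jobs ever run at once.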

It is very simple: no changes to the system are required. The SEGL agent runs under a simple user account; no sudo command, no root privileges and no special groups are needed.

Thank you for your attention.