Grid Developers’ use of FermiCloud (to be integrated with master slides)

Grid Developers' Use of Clouds
Storage Investigation
OSG Storage Test Bed
MCAS Production System
Development VM
– OSG User Support
– FermiCloud Development
– MCAS integration system

Storage Investigation: Lustre Test Bed
[Diagram: Lustre server VM (3 OST & 1 MDT) on the FermiCloud (FCL) host, Dom0 with 8 CPU and 24 GB RAM, serving 2 TB over 6 disks; FermiGrid ITB clients (7 nodes, 21 Lustre client VMs) and a BA mount, connected over Ethernet.]
Test configurations:
– ITB clients vs. Lustre virtual server
– FCL clients vs. Lustre virtual server
– FCL + ITB clients vs. Lustre virtual server
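
As a rough illustration of the client side of this test bed (not taken from the slides), the sketch below shows how a client VM could mount the Lustre filesystem exported by the server VM; the MGS hostname, filesystem name, and mount point are hypothetical placeholders.

```python
import os
import subprocess

# Hypothetical values: the actual MGS host, filesystem name, and mount point
# are not given in the slides.
MGS_HOST = "fcl-lustre-mds.fnal.gov"   # MGS/MDT node (assumption)
FS_NAME = "fclfs"                      # Lustre filesystem name (assumption)
MOUNT_POINT = "/mnt/lustre"

def mount_lustre():
    """Mount the Lustre filesystem on a client VM (requires the Lustre client modules and root)."""
    os.makedirs(MOUNT_POINT, exist_ok=True)
    # Standard Lustre client mount syntax: mount -t lustre <mgs>@tcp:/<fsname> <mountpoint>
    subprocess.run(
        ["mount", "-t", "lustre", f"{MGS_HOST}@tcp:/{FS_NAME}", MOUNT_POINT],
        check=True,
    )

if __name__ == "__main__":
    mount_lustre()
```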

ITB Clients vs. FCL Virtual Lustre Server
Changing the disk and network drivers on the Lustre server VM: 350 MB/s read, 70 MB/s write (vs. 250 MB/s write on bare metal).
[Charts: read and write I/O rates for bare metal, VirtIO for disk and net, VirtIO for disk with the default net driver, and default drivers for disk and net.]
Conclusion: use VirtIO drivers for the network.
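
To make the driver comparison concrete, here is a minimal sketch of one way to force VirtIO for both the disk bus and the network model in a libvirt/KVM guest definition; the stripped-down domain XML below is hypothetical, and this is only an illustration of the slide's recommendation, not the tooling actually used.

```python
import xml.etree.ElementTree as ET

def force_virtio(domain_xml: str) -> str:
    """Rewrite a libvirt domain XML so disks use the virtio bus and NICs use the virtio model."""
    root = ET.fromstring(domain_xml)
    for target in root.findall("./devices/disk/target"):
        target.set("bus", "virtio")
        # virtio disks are exposed as /dev/vdX rather than /dev/hdX or /dev/sdX
        target.set("dev", "vd" + target.get("dev", "a")[-1])
    for iface in root.findall("./devices/interface"):
        model = iface.find("model")
        if model is None:
            model = ET.SubElement(iface, "model")
        model.set("type", "virtio")
    return ET.tostring(root, encoding="unicode")

# Hypothetical, stripped-down domain definition used only for this example:
example = """
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'><target dev='hda' bus='ide'/></disk>
    <interface type='bridge'><model type='e1000'/></interface>
  </devices>
</domain>
"""
print(force_virtio(example))
```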

21 Nova Clients vs. Bare Metal & Virtual Server
– Read, ITB clients vs. virtual server: BW = … ± 0.08 MB/s (1 ITB client: 15.3 ± 0.1 MB/s)
– Read, FCL clients vs. virtual server: BW = … ± 0.05 MB/s (1 FCL client: 14.4 ± 0.1 MB/s)
– Read, ITB clients vs. bare metal: BW = … ± 0.06 MB/s (1 client vs. bare metal: 15.6 ± 0.2 MB/s)
Virtual clients on board (on the same machine as the virtual server) are as fast as bare metal for reads.
The virtual server is almost as fast as bare metal for reads.
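
A minimal sketch of how aggregate "BW = x ± y" figures like these can be computed from per-client measurements; the sample rates below are made up for illustration and are not the measured data from the slide.

```python
import statistics

def aggregate_bandwidth(per_client_mb_s):
    """Return total aggregate bandwidth, mean per-client rate, and standard error of the mean."""
    total = sum(per_client_mb_s)
    mean = statistics.mean(per_client_mb_s)
    sem = statistics.stdev(per_client_mb_s) / len(per_client_mb_s) ** 0.5
    return total, mean, sem

# Hypothetical per-client read rates (MB/s) for 21 client VMs -- illustrative only.
rates = [15.3, 15.1, 15.4, 15.2, 15.3, 15.0, 15.5, 15.2, 15.3, 15.1,
         15.4, 15.2, 15.3, 15.1, 15.4, 15.0, 15.5, 15.2, 15.3, 15.1, 15.4]

total, mean, sem = aggregate_bandwidth(rates)
print(f"aggregate = {total:.1f} MB/s, per-client mean = {mean:.2f} ± {sem:.2f} MB/s")
```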

OSG Storage Test Bed: Official Test Bed Resources
5 nodes purchased ~2 years ago, with 4 VMs on each node (2 SL5 VMs, 2 SL4 VMs)
Test systems:
BeStMan-gateway/xrootd
– BeStMan-gateway, GridFTP-xrootd, xrootdfs
– Xrootd redirector
– 5 data server nodes
BeStMan-gateway/HDFS
– BeStMan-gateway/GridFTP-hdfs, HDFS name nodes
– 8 data server nodes
Client nodes (4 VMs):
– Client installation tests
– Certification tests (see the sketch below)
– Apache/Tomcat to monitor/display test results, etc.
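
As a toy example of the kind of availability check a certification/monitoring client VM might run against these services, the sketch below probes each endpoint for TCP reachability; the hostnames are placeholders, and the ports are the conventional defaults (xrootd 1094, BeStMan SRM over HTTPS 8443, GridFTP 2811) rather than values given in the slides.

```python
import socket

# Hypothetical endpoints for the test-bed services; hostnames are placeholders.
SERVICES = {
    "xrootd redirector": ("xrootd-redirector.example.fnal.gov", 1094),
    "BeStMan-gateway":   ("bestman-gw.example.fnal.gov", 8443),
    "GridFTP":           ("gridftp.example.fnal.gov", 2811),
}

def is_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in SERVICES.items():
    status = "up" if is_reachable(host, port) else "DOWN"
    print(f"{name:20s} {host}:{port} {status}")
```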

OSG Storage Test Bed: Additional Test Bed Resources
6 VMs on nodes outside of the official test bed
Test systems:
– BeStMan-gateway with disk
– BeStMan full mode
– Xrootd (ATLAS Tier-3, WLCG demonstrator project)
– Various test installations
In addition, 6 "old" physical nodes are used as a dCache test bed; these will be migrated to FermiCloud.

MCAS Production System
FermiCloud hosts the production server (mcas.fnal.gov).
VM config: 2 CPUs, 4 GB RAM, 2 GB swap
Disk config:
– 10 GB root partition for the OS and system files
– 250 GB disk image as the data partition for MCAS software and data
The independent disk image makes it easier to upgrade the VM.
On VM boot-up, the data partition is staged and auto-mounted in the VM; on VM shutdown, the data partition is saved (a sketch of this staging flow follows below). Work in progress: restart the VM without having to save and stage the data partition to/from central image storage.
MCAS services hosted on the server:
– Mule ESB
– JBoss
– XML Berkeley DB
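
A minimal sketch, with hypothetical paths, of the stage-in/stage-out flow for the 250 GB data image described above; the actual FermiCloud tooling for this step is not shown in the slides.

```python
import shutil
from pathlib import Path

# Hypothetical paths -- the real image repository and local layout are not given in the slides.
CENTRAL_IMAGE = Path("/central/image-storage/mcas-data.img")   # 250 GB data-partition image
LOCAL_IMAGE   = Path("/var/lib/libvirt/images/mcas-data.img")  # local copy attached to the VM

def stage_in():
    """Before VM boot: copy the data-partition image from central storage to the local host."""
    LOCAL_IMAGE.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(CENTRAL_IMAGE, LOCAL_IMAGE)

def stage_out():
    """After VM shutdown: save the (possibly modified) data-partition image back to central storage."""
    shutil.copy2(LOCAL_IMAGE, CENTRAL_IMAGE)
```

Keeping the data image separate from the 10 GB root image is what allows the OS to be upgraded without touching the MCAS data; the planned improvement is to avoid these copies entirely by restarting the VM with the data image left in place.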

MCAS: Metric Analysis and Correlation Service.