Grid Developers’ use of FermiCloud (to be integrated with master slides)


1 Grid Developers’ use of FermiCloud (to be integrated with master slides)

2 Grid Developers' Use of Clouds
- Storage Investigation
- OSG Storage Test Bed
- MCAS Production System
- Development VMs:
  – OSG User Support
  – FermiCloud Development
  – MCAS integration system

3 Storage Investigation: Lustre Test Bed
[Diagram: FCL Lustre virtual server with 3 OST & 1 MDT (2 TB over 6 disks) on a Dom0 with 8 CPU / 24 GB RAM; FG ITB clients (7 nodes, 21 VMs) mount the filesystem over Ethernet; diagram labels: Lustre client VM, 7 x Lustre server VM.]
Test configurations:
– ITB clients vs. Lustre virtual server
– FCL clients vs. Lustre virtual server
– FCL + ITB clients vs. Lustre virtual server
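As a rough illustration of how the clients attach to the Lustre virtual server, here is a minimal sketch that drives the standard Lustre mount command from Python. The MGS host name, filesystem label, and mount point are hypothetical stand-ins; the slides do not give the actual FermiCloud node names.

    import subprocess

    # Hypothetical MGS host and filesystem name -- not taken from the slides.
    MGS_NODE = "fcl-lustre-mds"      # node running the MGS/MDT
    FS_NAME = "lustre1"              # Lustre filesystem label
    MOUNT_POINT = "/mnt/lustre"

    def mount_lustre() -> None:
        """Mount the Lustre filesystem on a client (ITB node or FCL VM)."""
        subprocess.run(
            ["mount", "-t", "lustre",
             f"{MGS_NODE}@tcp:/{FS_NAME}", MOUNT_POINT],
            check=True,
        )

    if __name__ == "__main__":
        mount_lustre()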

4 ITB Clients vs. FCL Virtual Server (Lustre)
Changing the disk and network drivers on the Lustre server VM:
- 350 MB/s read, 70 MB/s write (vs. 250 MB/s write on bare metal)
- Configurations compared: bare metal; Virt I/O for disk and net; Virt I/O for disk and default for net; default drivers for disk and net
[Plots: read I/O rates and write I/O rates for each configuration.]
Conclusion: use Virt I/O drivers for the network.
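For flavor, a minimal sketch of the kind of streaming read/write measurement behind numbers like these, assuming a 1 GiB sequential test against the test bed's mount point. The slides do not say which benchmark tool was actually used, and the file path and test size here are assumptions.

    import os
    import time

    TEST_FILE = "/mnt/lustre/iotest.dat"   # assumed path on the Lustre mount
    BLOCK = 1024 * 1024                    # 1 MiB per write/read
    N_BLOCKS = 1024                        # 1 GiB total (assumed test size)

    def write_rate() -> float:
        """Sequential write rate in MiB/s (fsync to defeat write-back caching)."""
        buf = os.urandom(BLOCK)
        start = time.time()
        with open(TEST_FILE, "wb") as f:
            for _ in range(N_BLOCKS):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
        return N_BLOCKS / (time.time() - start)

    def read_rate() -> float:
        """Sequential read rate in MiB/s; drop the page cache first for a cold read."""
        start = time.time()
        with open(TEST_FILE, "rb") as f:
            while f.read(BLOCK):
                pass
        return N_BLOCKS / (time.time() - start)

    if __name__ == "__main__":
        print(f"write: {write_rate():.1f} MiB/s, read: {read_rate():.1f} MiB/s")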

5 21 Nova Clients vs. Bare Metal & Virtual Server
- Read, ITB clients vs. virtual server: BW = 12.27 ± 0.08 MB/s (1 ITB client: 15.3 ± 0.1 MB/s)
- Read, FCL clients vs. virtual server: BW = 13.02 ± 0.05 MB/s (1 FCL client: 14.4 ± 0.1 MB/s)
- Read, ITB clients vs. bare metal: BW = 12.55 ± 0.06 MB/s (1 client vs. bare metal: 15.6 ± 0.2 MB/s)
Virtual clients on-board (on the same machine as the virtual server) are as fast as bare metal for read.
The virtual server is almost as fast as bare metal for read.
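The quoted bandwidths read as mean ± error over the client VMs. A minimal sketch of that aggregation follows; the per-client numbers here are made-up placeholders, since the 21 real per-VM measurements are not listed on the slide.

    import statistics

    # Placeholder per-client read bandwidths in MB/s -- not real data.
    bw = [12.1, 12.4, 12.3, 12.2, 12.35]

    mean = statistics.mean(bw)
    # Standard error of the mean: sample stdev / sqrt(N)
    stderr = statistics.stdev(bw) / len(bw) ** 0.5
    print(f"BW = {mean:.2f} +/- {stderr:.2f} MB/s")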

6 OSG Storage Test Bed
Official test bed resources: 5 nodes purchased ~2 years ago, 4 VMs on each node (2 VMs SL5, 2 VMs SL4).
Test systems:
- BeStMan-gateway/xrootd:
  – BeStMan-gateway, GridFTP-xrootd, xrootdfs
  – xrootd redirector
  – 5 data server nodes
- BeStMan-gateway/HDFS:
  – BeStMan-gateway/GridFTP-hdfs, HDFS name nodes
  – 8 data server nodes
- Client nodes (4 VMs):
  – Client installation tests
  – Certification tests (see the sketch after this list)
  – Apache/Tomcat to monitor/display test results, etc.
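As a flavor of what a client certification test against the xrootd redirector might look like, here is a sketch that round-trips a file through the door with the standard xrdcp client. The redirector host and remote path are hypothetical; the slides name the components but not the actual endpoints.

    import subprocess

    # Hypothetical endpoint and path -- not taken from the slides.
    REDIRECTOR = "root://xrootd-redirector.example.fnal.gov:1094"
    REMOTE_PATH = "/store/test/certtest.dat"

    def round_trip(local_file: str) -> None:
        """Copy a file in through the xrootd door and back out again."""
        subprocess.run(["xrdcp", "-f", local_file,
                        f"{REDIRECTOR}/{REMOTE_PATH}"], check=True)
        subprocess.run(["xrdcp", "-f", f"{REDIRECTOR}/{REMOTE_PATH}",
                        local_file + ".back"], check=True)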

7 OSG Storage Test Bed: Additional Resources
6 VMs on nodes outside of the official test bed.
Test systems:
- BeStMan-gateway with disk
- BeStMan-fullmode
- xrootd (ATLAS Tier-3, WLCG demonstrator project)
- Various test installations
In addition, 6 "old" physical nodes are used as a dCache test bed; these will be migrated to FermiCloud.

8 MCAS Production System
FermiCloud hosts the production server (mcas.fnal.gov).
- VM config: 2 CPUs, 4 GB RAM, 2 GB swap
- Disk config:
  – 10 GB root partition for OS and system files
  – 250 GB disk image as data partition for MCAS software and data
  – An independent disk image makes it easier to upgrade the VM
- On VM boot-up: the data partition is staged and auto-mounted in the VM (see the sketch below)
- On VM shutdown: the data partition is saved
- Work in progress: restart the VM without having to save and stage the data partition to/from central image storage
- MCAS services hosted on the server:
  – Mule ESB
  – JBoss
  – XML Berkeley DB
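A sketch of the boot-time stage-and-mount and shutdown-time save steps described above, assuming the 250 GB image is copied from central storage and loop-mounted. All paths are hypothetical; the slides describe the mechanism but not the actual image locations.

    import shutil
    import subprocess

    # Hypothetical paths -- not taken from the slides.
    CENTRAL_IMAGE = "/central/images/mcas-data.img"   # 250 GB data image
    LOCAL_IMAGE = "/var/lib/images/mcas-data.img"
    MOUNT_POINT = "/data"                             # MCAS software and data

    def stage_and_mount() -> None:
        """On VM boot: stage the data image from central storage and loop-mount it."""
        shutil.copy(CENTRAL_IMAGE, LOCAL_IMAGE)
        subprocess.run(["mount", "-o", "loop", LOCAL_IMAGE, MOUNT_POINT],
                       check=True)

    def save_on_shutdown() -> None:
        """On VM shutdown: unmount and copy the image back to central storage."""
        subprocess.run(["umount", MOUNT_POINT], check=True)
        shutil.copy(LOCAL_IMAGE, CENTRAL_IMAGE)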

9 Metric Analysis and Correlation Service (MCAS)

