INDIACMS-TIFR Tier 2 Grid Status Report, IndiaCMS Meeting, April 05-06, 2007.



Report on TIFR Tier-2: Old Status
Hardware
 4 grid management servers (CE+BDII, SE, UI, Mon)
 36 Intel Pentium-IV worker nodes
 34 Mbps internet connection
 SE with 1 TB storage
Software and maintenance
 Scientific Linux OS
 Middleware: gLite 3.0
 Upkeep/upgrades applied according to broadcast notifications
 Regular (mandatory) report submission
 > 99% uptime

Site information:
Site name: INDIACMS-TIFR
Site address:
User Interface: ui.indiacms.res.in
Job submission queues:
 ce.indiacms.res.in:2119/jobmanager-lcgpbs-dteam
 ce.indiacms.res.in:2119/jobmanager-lcgpbs-cms
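For illustration, a minimal job description for the CMS queue above might look as follows. This is a hypothetical sketch, not taken from the slides; the file name `hello.jdl` and the job contents are assumptions, and the submission command shown is the standard gLite 3.0-era client.

```
# hello.jdl -- hypothetical minimal JDL for the INDIACMS-TIFR CMS queue
Executable    = "/bin/hostname";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};

# Submit directly to the site's CMS queue (gLite 3.0-era client):
#   edg-job-submit -r ce.indiacms.res.in:2119/jobmanager-lcgpbs-cms hello.jdl
```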

Current Status (CPUs)
 80 kSI2000-equivalent, 1U rack-mountable servers have arrived
 Assembled and mounted in the rack
 Dual Xeon / 2 GB RAM / 80 GB disk (24 servers in total)
 Connected to Gigabit and KVM switches
 Burn-in test done
 BIOS upgraded
 OS and middleware installation done
 Latest RPMs installed
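As a quick sanity check of the quoted capacity, the per-server and per-CPU ratings can be derived from the slide's figures. The even split across servers is an assumption for illustration only:

```python
# All figures taken from the slide; the even split across the 24 servers
# is an assumption made here for illustration.
servers = 24            # dual-Xeon 1U servers
cpus_per_server = 2
total_ksi2k = 80        # quoted site capacity, kSI2000

per_server = total_ksi2k / servers                    # ~3.33 kSI2000/server
per_cpu = total_ksi2k / (servers * cpus_per_server)   # ~1.67 kSI2000/CPU
print(round(per_server, 2), round(per_cpu, 2))        # 3.33 1.67
```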

Current Status (Disk)
 50 TB of disk space has arrived: an HP EVA 8000 array
 33 x 300 GB fast storage
 80 x 500 GB slow storage
 Fully populated single rack mount
 Gateway server configured
 No proprietary software
 DPM (SRM support) to be installed
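The drive counts on this slide add up to the quoted 50 TB (decimal units):

```python
# Raw capacity from the drive counts on the slide (decimal GB/TB).
fast_gb = 33 * 300     # fast (Fibre Channel) drives
slow_gb = 80 * 500     # slow (FATA) drives
total_tb = (fast_gb + slow_gb) / 1000
print(fast_gb, slow_gb, total_tb)   # 9900 40000 49.9
```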

EVA 8000 installation of modules
o 2 x HSV210 controllers for the EVA.... Mounted
o 10 x drive enclosures.... Mounted
o 2 x loop switches.... Mounted
o 2 x 4/8 SAN switches.... Mounted
o 1 x ProLiant DL140 G3 server.... Mounted
o 1 x ProLiant DL360 server.... Mounted

Disk formation
o 80 x 500 GB FATA HDD.... Populated
o 33 x 300 GB Fibre Channel HDD.... Populated
o 6 x 146 GB SAS HDD (inside DL360 G5 server).... Installed
o 1 x 80 GB SATA HDD (inside DL140 G3 server).... Installed
o 2 x 4 Gb PCI Fibre Channel cards (one in each server).... Installed
o 1 x 2 Gb PCI Fibre Channel card.... Installed

Memory Modules
2 x 512 MB PC memory modules.... Installed
2 x 1 GB PC memory modules.... Installed
1 x DVD-ROM (DL140 G3 server).... Installed

Hard disk placement
Each enclosure has 14 slots:
 Slots 1, 13, and 14 are free in 7 enclosures
 Slots 1 and 14 are free in the other 3 enclosures
Layout per enclosure (F = 300 GB Fibre Channel drive, S = 500 GB FATA drive, 0 = free slot):
 0 F F S S S S S S S S F 0 0  (enclosures 1 to 7, top to bottom)
 0 F F S S S S S S S S F F 0  (enclosures 8 to 10, top to bottom)
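The slot layout can be cross-checked against the drive counts quoted on the disk slide. The mapping F = Fibre Channel drive, S = FATA drive is an inference from the matching totals, so treat it as an assumption:

```python
# Consistency check of the enclosure slot layout against the quoted drive
# counts (F = 300 GB FC drive, S = 500 GB FATA drive, 0 = free slot;
# the F/S mapping is inferred, not stated explicitly on the slides).
row_a = "0FFSSSSSSSSF00"   # enclosures 1-7: slots 1, 13, 14 free
row_b = "0FFSSSSSSSSFF0"   # enclosures 8-10: slots 1, 14 free

fc   = 7 * row_a.count("F") + 3 * row_b.count("F")
fata = 7 * row_a.count("S") + 3 * row_b.count("S")
free = 7 * row_a.count("0") + 3 * row_b.count("0")
print(fc, fata, free)   # 33 80 27 -> matches the 33 FC and 80 FATA drives
```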

Network Upgrade
34 Mbps will become 1 Gbps eventually:
 620 Mbps by end of 2007
 > 1 Gbps in 2008
Point-to-point connection (1:1)
 Cisco 1482 router
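To put the upgrade in perspective, an idealised estimate of the time to move 1 TB at each of the quoted link speeds (decimal units, ignoring protocol overhead; the 1 TB figure is chosen here only for illustration):

```python
# Idealised 1 TB transfer times at the link speeds quoted on the slide
# (decimal units, no protocol overhead -- a best-case estimate).
def hours_per_tb(mbps):
    bits = 1e12 * 8              # 1 TB in bits
    return bits / (mbps * 1e6) / 3600

for mbps in (34, 620, 1000):
    print(f"{mbps:>5} Mbps: {hours_per_tb(mbps):6.1f} h")
# ~65 h at 34 Mbps shrinks to ~2.2 h at 1 Gbps
```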

Software Updates
 New host certificates installed
 Middleware updated to gLite 3.2
 Latest packages installed
 Latest LCG status report submitted
 Weekly RC report submission
 Broadcast notifications applied
 Upgrade to SLC4 in a few weeks' time

Infrastructure Revisited
New room fabricated
Electrical connections rewired
20 kVA UPS arrived (3 hours of battery backup)
Batteries rack-mounted
Provision for existing electrical load and A/C
Floor plan for full capacity ready