Cluster computing facility for CMS simulation work at NPD-BARC (Raman Sehgal)

Outline of the talk
- Example of a typical cluster
- Cluster types
- Setup of the cluster
- Components required to make up a cluster
- Selection of different components for the cluster
- Overview of selected components

A Typical Cluster (diagram): 16 worker nodes (NODE-01 through NODE-16) and a file server, all connected through a Fast/Gigabit Ethernet switch over CAT-5/6 cable.

Cluster Types
- High Performance clusters: parallel computers and jobs; optimized for better job performance; emphasis on better interconnects.
- High Throughput clusters: large numbers of sequential jobs; emphasis on better storage, I/O, and load balancing.
- High Availability clusters: provide a reliable service (web servers, database servers, etc.); emphasis on better redundancy.

Cluster Setup (diagram): users (user1 through user4) reach the head node (UI, PBS, NIS) over the public network; the head node also links to the CMS Grid. On the private network, a Gigabit switch connects the head node over Cat 5/6 cable to 15 worker nodes (Node 1 through Node 15) and a 10 TB storage box; Infiniband cabling provides a second, high-speed interconnect.

Components that make up a cluster
- Nodes
  - Compute (worker) nodes: for running jobs
  - Service node (head node): management, monitoring, user interface
  - Storage nodes: central storage area for users and all input and output files
- Network
  - Switches
  - Cables
- Support equipment
  - Racks: house nodes and network equipment
  - KVM switches, terminal servers: for console sharing
  - Power distribution units: power sequencing and control
- Software
  - Operating system
  - Networking protocols
  - Applications
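To show how the head node (running PBS) and the worker nodes fit together in practice, here is a minimal sketch of a PBS batch script a user might submit from the head node. The job name, queue name, and resource counts are hypothetical examples, not values taken from this cluster:

```shell
#!/bin/bash
# Hypothetical PBS job script; job name, queue, and counts are examples only.
#PBS -N cms_sim            # job name
#PBS -q batch              # submission queue served by the head node
#PBS -l nodes=4:ppn=8      # request 4 worker nodes with 8 cores each

# Total cores this request corresponds to (4 nodes x 8 cores per node)
nodes=4
ppn=8
total_cores=$((nodes * ppn))
echo "Job requests ${total_cores} cores"
```

A user would submit such a script with `qsub` on the head node, and PBS would schedule it onto free worker nodes on the private network.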

Selection of nodes
- Computing power of a cluster depends on the power of its compute nodes
- Choice of processor (Xeon, Opteron, Itanium, ...), cache, frequency
- Memory
- Single/dual/quad processor
- Network ports (single/dual)
- Expansion slots (PCI, PCIe)
- Management (IPMI, ...)
- Form factor (tower, rack-mountable, chassis, power supply)
- Heat dissipation, cooling
- OS support
Options: desktop node, rack-mount server
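To see why processor choice dominates, a rough peak-performance estimate for a dual-socket quad-core 3.0 GHz node can be sketched as below. The 4 double-precision FLOPs per cycle per core is an assumption for SSE-capable Xeons of that era, not a quoted spec:

```shell
# Rough peak GFLOPS per node: sockets x cores x GHz x FLOPs/cycle.
# 4 FLOPs/cycle/core is an assumed figure for SSE-capable quad-core Xeons.
sockets=2
cores_per_socket=4
ghz=3                 # 3.0 GHz, kept integer for shell arithmetic
flops_per_cycle=4
peak_gflops=$((sockets * cores_per_socket * ghz * flops_per_cycle))
echo "Estimated peak: ${peak_gflops} GFLOPS per node"
```

This is only an upper bound; real CMS simulation throughput will be well below peak, but the estimate makes clear that doubling cores or clock speed scales the whole cluster's capacity.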

Selection of network switch
- Selection of a good interconnect is an important part of the design of any cluster.
- Gigabit Ethernet over copper is the standard for cluster interconnects.
- Infiniband provides high throughput and low latency; it is useful when there is high I/O and communication overhead.
- Choice of 24- and 48-port stackable switches.
- Chassis (core) switches are available for large configurations (up to 672 ports).
Options: stackable fixed-port switch, chassis-based switch

Storage box
- Central storage server(s) needed to store all users' directories and all input and output files
- Should have high capacity
- Ensure reliability using RAID
- Automated backup

Console sharing
- Access to each node's console (keyboard, mouse, display) is required during installation
- KVM switches share a single keyboard, mouse, and video display among multiple nodes

Operating System
- The majority of clusters today run some distribution of Linux:
  - Robust, open-source solution
  - Cost-effective
  - Support for clusters (auto-installers, cluster management tools)
- Widely used distributions: Red Hat, SuSE, and Debian
- Other Linux distributions: Mandriva and Gentoo
- Red Hat based distributions: Scientific Linux CERN, CentOS
- Selection of a distribution depends mainly on compatibility with the code to be run, here CMSSW, so the selected distribution is Scientific Linux CERN 4.0 or higher.

Overview of selected components
1) Worker node and head node
- Processor: dual Intel quad-core Xeon 3.0 GHz or higher (Harpertown series) with 12 MB cache or higher and 1600 MHz FSB
- Memory: 16 GB 800 MHz DDR2
- Hard disk: GB or higher, enterprise SATA II / SAS hard disks
- Network:
  1. Two or more PXE-boot-compliant on-board 1000BaseT Gigabit Ethernet ports
  2. One Infiniband 4x DDR (20 Gbps) port on PCI Express x8 or higher
- Form factor: 1U rack-mountable

2) Storage server: 2 units
- Processor: dual Intel quad-core 2.5 GHz or higher with 12 MB cache or higher, 1333 MHz FSB, 80 W per processor
- Memory: 16 GB 667 MHz DDR2 DIMM
- Capacity: 24 x 450 GB = 10.8 TB with 15K rpm SAS disks
- Network:
  - Two PXE-boot-compliant on-board 1000BaseT Gigabit Ethernet ports
  - Dual 10G Ethernet multimode fiber port on PCI Express
  - One Infiniband 4x DDR (20 Gbps) port on PCI Express
- Form factor: 3U / 4U rack-mountable
- RAID: support for RAID levels 0, 1, 10, 5, 6
- Server management: Intelligent Platform Management Interface (IPMI)
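The quoted raw capacity, and the usable space once RAID is configured, follow from simple arithmetic; RAID 6 (one of the supported levels) gives up two disks' worth of capacity to parity:

```shell
# Raw and RAID-6 usable capacity for the 24-disk storage server.
disks=24
gb_per_disk=450
raw_gb=$((disks * gb_per_disk))                 # 10800 GB = 10.8 TB raw
raid6_usable_gb=$(((disks - 2) * gb_per_disk))  # two disks' capacity goes to parity
echo "Raw: ${raw_gb} GB; usable under RAID 6: ${raid6_usable_gb} GB"
```

So the 10.8 TB figure is raw capacity; the space actually available to users depends on the RAID level chosen.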

Networking switches
3) Infiniband switch with the following specifications:
- Ports: 24 InfiniBand 4x DDR (20 Gbps) ports
- Bandwidth: 960 Gbps
- Chassis: 1U rack-mountable
- Management protocols: SNMP, Telnet, SSH, HTTP, FTP
- Infiniband cables: 4x DDR Infiniband CX4, 30 AWG passive cables
4) Gigabit Ethernet switches
(a) 24-port autosensing, unmanaged Gigabit switch
(b) 24-port autosensing, managed Gigabit switch
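The 960 Gbps aggregate figure for the Infiniband switch follows from counting both directions of every full-duplex 20 Gbps port:

```shell
# Aggregate switching bandwidth for a 24-port 4x DDR Infiniband switch.
ports=24
gbps_per_port=20   # 4x DDR signalling rate per port
directions=2       # full duplex: send and receive counted separately
aggregate=$((ports * gbps_per_port * directions))
echo "${aggregate} Gbps aggregate switching bandwidth"
```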

5) Rack-mountable KVM switch with the following specifications:
- Ports: 16 ports with PS/2 and USB keyboard and mouse support
- Features: integrated 15-inch LCD monitor, keyboard, and touch pad
- Form factor: 1U with the LCD panel folded
6) Uninterruptible Power Supply (UPS)
- Capacity: 6 kVA
- Form factor: 3U rack-mountable with rail kit and internal batteries
- Nominal output voltage: Hz
- Backup time: 30 minutes at half load with internal battery, or higher
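As a sanity check on UPS sizing: the real power a 6 kVA unit can deliver depends on its power factor, which the slide does not quote. Assuming a typical 0.8 power factor (an assumption, not a spec from this deck):

```shell
# Real power from a 6 kVA UPS, assuming a 0.8 power factor (assumed value).
kva=6
pf_times_10=8                               # power factor 0.8, scaled for integer math
watts=$((kva * 1000 * pf_times_10 / 10))    # real power at full load
half_load_watts=$((watts / 2))              # the "half load" backup-time point
echo "Full load ~${watts} W; half load ~${half_load_watts} W"
```

Under this assumption, the quoted 30-minute backup time applies at roughly 2.4 kW of cluster load.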

THANKS