The 2001 Tier-1 prototype for LHCb-Italy
Vincenzo Vagnoni
Genève, November 2000

Outline
2001 LHCb Italian Tier-1 prototype architecture
Linux diskless nodes howto
Network attached storage
Tests made in Bologna
Conclusions

2001 LHCb Italian Tier-1 prototype
15-CPU (700 SI95) Tier-1 computer farm for 2001 (available at the beginning of the year, with further increases if we can show that more CPU power is needed by the Italian groups)
15 single-processor motherboards, 256 MB RAM each or more, rack-mounted, diskless, with redundant power supply and cooling
1 TB IDE disk array in RAID-5 configuration, hosted in a NAS (Network Attached Storage) unit (also to be increased if needed)
100 Mbps Ethernet switch

2001 Tier-1 Computer Farm Prototype

Components (Motherboards, Racks, NAS, Switches)

Advantages of diskless nodes
Easy operating system installation and maintenance
  Adding a machine to the farm only requires running a simple script on the disk server (see the sketch after this slide)
  The operating systems are centralized and accessible through the file system of the disk server
Enhanced disk fault tolerance
  The disk server can host large hard disk arrays with redundant information by using RAID 5
No need of a UPS battery for the client nodes
  Only the disk server needs a UPS; no damage if a client is powered off
System compactness
  The nodes are basically bare motherboards with integrated Ethernet cards
  They can be arranged in racks, ~25 nodes in only 1 m²
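
A minimal sketch of what such an add-client script could look like, written in shell; the paths (/diskless, a template tree), the /etc/HOSTNAME location and the init-script name are assumptions for illustration, not the actual script used in Bologna:

    #!/bin/sh
    # add_client.sh -- hypothetical sketch: register a new diskless client
    # on the disk server. All paths and the file layout are assumptions.
    # Usage: ./add_client.sh <hostname> <IP address> <MAC address>
    HOST=$1; IP=$2; MAC=$3
    TEMPLATE=/diskless/template     # master copy of the private directories
    CLIENTROOT=/diskless/$HOST      # per-client root, exported via NFS

    # 1. Clone the private part of the filesystem (/etc, /dev, /var, /tmp)
    mkdir -p $CLIENTROOT
    cp -a $TEMPLATE/. $CLIENTROOT
    echo $HOST > $CLIENTROOT/etc/HOSTNAME  # location is distribution-dependent

    # 2. Make the new MAC <-> IP mapping known to the bootp/dhcp server
    echo "host $HOST {"               >> /etc/dhcpd.conf
    echo "  hardware ethernet $MAC;"  >> /etc/dhcpd.conf
    echo "  fixed-address $IP;"       >> /etc/dhcpd.conf
    echo "}"                          >> /etc/dhcpd.conf

    # 3. Export the client root over NFS and restart the services
    echo "$CLIENTROOT $IP(rw,no_root_squash)" >> /etc/exports
    exportfs -a
    /etc/rc.d/init.d/dhcpd restart    # RedHat-style init script

Since every client boots from the same server tree, running this one script is the entire "installation" of a new node.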

Working concept (using for example a PXE BootROM)
At boot time the diskless client sends its Ethernet card MAC address by making a bootp (or dhcp) broadcast request over the LAN
  The diskless client's Ethernet controller must be equipped with a BootROM (for example a PXE, Pre-eXecution Environment, compliant one)
The bootp (or dhcp) server replies, assigning the IP address to the client and passing the file name of the boot code on the server
  With a PXE BootROM the boot code is a small loader (pxelinux.bin), provided with the syslinux package, which is also able to read a configuration file on the server with some additional information (e.g. the arguments to pass to the Linux kernel)
The client makes a TFTP connection to the server, downloads the PXE loader and executes it
The loader in turn downloads the Linux kernel via TFTP and executes it (see the configuration sketch after this slide)
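
To make the boot chain concrete, a minimal sketch of the two server-side configuration files involved is given below; addresses, host names and file locations are hypothetical (and note that recent syslinux releases ship the loader as pxelinux.0 rather than pxelinux.bin):

    # /etc/dhcpd.conf -- hypothetical sketch (ISC dhcpd, which answers bootp too)
    subnet 192.168.1.0 netmask 255.255.255.0 {
        host node01 {
            hardware ethernet 00:50:8B:AA:BB:CC;  # the client's MAC address
            fixed-address 192.168.1.101;          # IP assigned to the client
            next-server 192.168.1.1;              # TFTP server to contact
            filename "pxelinux.bin";              # boot code to download
        }
    }

    # /tftpboot/pxelinux.cfg/default -- the configuration file read by the
    # PXE loader: which kernel to download and the arguments to pass to it
    default linux
    label linux
        kernel vmlinuz      # kernel image, placed under /tftpboot
        # root-over-NFS arguments; the kernel substitutes %s with the
        # client's own IP address (see Documentation/nfsroot.txt)
        append root=/dev/nfs nfsroot=192.168.1.1:/diskless/%s ip=bootp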

Further details...
The Linux kernel needs to be compiled with root-over-NFS support to be able to mount the root filesystem through NFS
The Ethernet card driver must be compiled into the kernel (in principle it could also be provided by an initial ramdisk)
The clients can share some directories of the Linux filesystem (/usr, /opt, ...), but need some others to be private (at least /etc, /dev, /var, /tmp); see the sketch after this slide
Optionally some non-shareable directories (for example /dev, /var, /tmp) can be mounted on ramdisks
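
In kernel-configuration terms, root-over-NFS means enabling CONFIG_IP_PNP (kernel-level IP autoconfiguration) and CONFIG_ROOT_NFS. A sketch of how the shared/private split could look, with hypothetical paths and addresses as before:

    # /etc/exports on the disk server -- hypothetical sketch
    # A private, writable root tree for each client...
    /diskless/node01  192.168.1.101(rw,no_root_squash)
    /diskless/node02  192.168.1.102(rw,no_root_squash)
    # ...and shared, read-only trees common to all clients
    /usr              192.168.1.0/255.255.255.0(ro)
    /opt              192.168.1.0/255.255.255.0(ro)

    # /etc/fstab inside a client's private tree on the server
    192.168.1.1:/diskless/node01  /     nfs  defaults  0 0
    192.168.1.1:/usr              /usr  nfs  ro        0 0
    192.168.1.1:/opt              /opt  nfs  ro        0 0
    # /dev, /var and /tmp can instead be mounted on ramdisks created at boot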

Network Attached Storage
OpenNAS RS15 RaidZone file server
  Supports NFS (also Windows SMB, but fortunately we don't need that)
  Built-in web server and DNS server
  R3 (Redundancy with Rapid Recovery) RAID 5
  Java-based remote interface for all OpenNAS functions, including disk array configuration
15 hot-swappable drive bays, with 15 Ultra ATA/100 drives (80 GB each) + 1 hot-swappable spare
Dual PIII 800 MHz CPUs and 256 MB ECC RAM
Dual 100BaseT network connections (Gigabit optional)
Dual redundant 300 W hot-swappable power supplies
Possibility of two RaidZone disk array expansions (up to 3 TB)
Cost: $20,850 (RS15) + optional $19,595 (1 TB expansion)

Tests made in Bologna
Hardware setup
  4 HP Vectra PCs, single-processor PIII 500 MHz, 256 MB RAM; one acting as root filesystem disk server and three as diskless clients
  1 HP Kayak PC, dual-processor PIII 733 MHz, 256 MB RAM, as an additional client with a different architecture
Software setup
  Linux RedHat 6.2 with an upgraded kernel, or optionally (to be chosen at boot time) Linux SuSE 6.4
Tests made to evaluate:
  Complexity of installation and administration
  Performance of the system (especially interesting due to the absence of a system swap area, even if in principle a network-based one could be arranged)

Installation test results
Installation trivial
  Once the server system is configured, adding a new client takes 1 minute
  All the installations are identical by default
No additional network activity observed (monitored directly on the switch)
  The OSes rarely need to access non-cached information on the server (server disk almost completely inactive)
Robust system
  Simulated network failures of a few seconds do not interfere with the OS operation
  Powering off the client machines (without a shutdown!) doesn't create any problem at all (no need to check disk data integrity at reboot)

Performance test results
Performance tested by running two HERA-B Monte Carlo simulation (ARTE) jobs on a dual-processor 733 MHz, 256 MB RAM HP Kayak with a diskless installation (running Linux SuSE 6.4 in this case)
Performance compared with an identical machine with a non-diskless installation and a local disk swap area
  Identical performance!
The MC jobs were tuned to allocate ~100 MB RAM each
  Even with 256 MB RAM, two heavy jobs can run together without using a local disk swap area at all
Conclusion: no problem for a single-processor motherboard equipped with 256 MB RAM running one single job

Conclusions
The 2001 LHCb Tier-1 prototype architecture has been presented
  Based on a diskless-node configuration with a Network Attached Storage disk server and 100 Mbps switched Ethernet
Proof of principle of the system tested
This kind of system can be an interesting contribution to the LHC computing community
  It saves time, room and money!