Computing Infrastructure
Presentation transcript:

CPP Staff - 30
FCIPT Staff - 35
IPR Staff - 500
ITER-India Staff - 150
Research Areas:
1. Studies on high temperature magnetically confined plasmas.
2. Basic experiments in plasma physics.
3. Industrial plasma processing and applications.

"Aditya" Tokamak:
- "Aditya" is India's first indigenous medium-size tokamak.
- Major radius: 0.75 m
- Minor radius: 0.25 m
- Plasma discharge duration: ~100 ms
- Plasma current: ~ kA
Steady State Superconducting Tokamak (SST-1):
- Upgraded version of the Aditya Tokamak
- Major radius: 1.1 m
- Minor radius: 0.20 m
- Plasma discharge duration: 1000 s
- Plasma current: ~ kA

The Computer Division is responsible for providing computational facilities to IPR users over a LAN of roughly 600 machines, comprising servers and desktops. A fast Ethernet network of L3 and L2 switches, with a mix of copper and fiber connectivity, spans the Institute. Total IT staff: 6.
Service Areas:
- Mail server
- Internet server
- DNS server
- Intranet servers
- Network, etc.

IPR - Network

Computer Division - IPR
The Computer Division has two types of HPC cluster:
- 34-node desktop-based HPC cluster
- 2-node rack-mount HPC cluster

Desktop based Cluster
Head node (1 no.):
- Intel Xeon 3.2 GHz 4-core processor with 2MB L2 cache
- Memory: 5GB
- Storage capacity: 293GB
Compute node (34 nos.):
- Intel Pentium 4 3GHz 2-core processor with 1MB L2 cache
- Memory: 3GB
- Storage capacity: 80GB
The compute nodes and head node are interconnected with a 48-port gigabit Ethernet switch.

Desktop based Cluster
This cluster carries a range of free and open-source scientific software as well as procured compilers, computational libraries, and several flavours of MPI. Maui and Torque are used as the scheduler and resource manager. Software used on the cluster:
- Intel C and Fortran compilers
- Intel MKL libraries
- Intel MPI 2.0 and MPICH2
- Lahey/Fujitsu Fortran 95
- GNU Scientific Library
- Scilab
- Automatically Tuned Linear Algebra Software (ATLAS)
- FFTW 3.1.2 library for computing discrete Fourier transforms
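To illustrate how this toolchain is typically used (a minimal sketch only, not taken from the cluster's documentation), the following C program relies on nothing beyond the standard MPI C API; it could be built with the MPICH2 or Intel MPI compiler wrappers listed above and launched on the cluster through the Torque/Maui queue.

/* hello_mpi.c - minimal MPI sketch; assumes only the MPI C API (mpi.h).
 * Example build/run commands (assumed, not from the slides):
 *   mpicc hello_mpi.c -o hello_mpi     (MPICH2 wrapper)
 *   mpirun -np 8 ./hello_mpi           (from within a Torque job)      */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes    */
    MPI_Get_processor_name(name, &name_len); /* which node we are running on */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down the MPI runtime    */
    return 0;
}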

Desktop based Cluster
- Theoretical peak performance – GFlops
- Sustained through Linpack: 50% of the theoretical peak
- Approximately 30 scientists and research scholars use the cluster for computational work.
The Computer Centre helps cluster users port applications to this cluster and is also actively involved in parallelizing scientific codes using MPI.
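For context on the peak-performance and Linpack figures above: the theoretical peak is normally the product of node count, cores per node, clock rate, and floating-point operations per cycle, and the Linpack percentage is the fraction of that peak actually sustained. The short program below works this arithmetic through using the node counts and clock speed quoted on the slides; the 2 double-precision flops per cycle assumed for the Pentium 4 (SSE2) is our assumption, so the output is only a back-of-envelope estimate, not the cluster's official peak figure.

/* peak_estimate.c - illustrative back-of-envelope estimate only.
 * Node count, cores and clock come from the slides; the
 * flops-per-cycle value is an assumption (SSE2 double precision). */
#include <stdio.h>

int main(void)
{
    const double nodes           = 34;   /* compute nodes (from the slides)   */
    const double cores_per_node  = 2;    /* per Pentium 4 node (from slides)  */
    const double clock_ghz       = 3.0;  /* Pentium 4 clock (from the slides) */
    const double flops_per_cycle = 2.0;  /* assumed: SSE2 double precision    */
    const double linpack_eff     = 0.50; /* quoted Linpack efficiency         */

    double peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
    printf("Estimated peak: %.0f GFlops\n", peak_gflops);
    printf("Estimated Linpack sustained: %.0f GFlops\n",
           peak_gflops * linpack_eff);
    return 0;
}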

Rack Mount based Cluster
Head node (1 no.):
- Intel Xeon E5405 2GHz processor
- No. of cores: 4
- Memory: 4GB
- Storage capacity: 300 GB
- Network: 1Gbps
Compute node (2 nos.):
- Intel Xeon E GHz processor
- No. of cores: 24
- Memory: 24 GB
- Storage capacity: 1.2TB
- Network: 1Gbps

Rack Mount Cluster
- Theoretical peak performance – GFlops
- Sustained through Linpack: 70% of the theoretical peak
- Number of users on this cluster: 25
This cluster also carries a range of free and open-source scientific software as well as procured compilers and computational libraries. Maui and Torque are used as the scheduler and resource manager.

Cluster Related Responsibilities
- Installing, configuring, and maintaining HPC cluster software
- Coordinating with vendors to resolve hardware and software problems
- Tuning the OS and applications to increase performance and reliability of services
- Automating system administration tasks using both open-source configuration management tools and in-house scripts
- Documenting system administration procedures for routine and complex tasks
- Providing reliable and efficient backups/restores for all managed systems

New HPC Cluster
Hardware configuration of the 5.2 TF HPC cluster:
Master/Head Node (Qty: 1 no.) and Compute Node (Qty: 9 nos.):
- CPU-based IBM System x3750 M4
- 4 processors – Intel Xeon E C (2.4 GHz, 20MB L3, 95 W 4s)
- 256GB RAM (8GB per core, ECC, DDR3, 1.3GHz LP RDIMM)
- Mellanox single-port QSFP QDR/FDR10/10GbE HCA
- Storage capacity: 2TB for the head node and 1 TB per compute node
Cluster Management Node (Qty: 1 no.):
- CPU-based IBM System x3250 M4
- Intel Xeon Processor E C (3.1GHz, 8MB cache, 80W)
- 16GB RAM (4GB per core, ECC, DDR3, 1.3GHz LP UDIMM)

New HPC Cluster
Storage Node (Qty: 1 no.):
- IBM System x3650 M4
- 2 processors – Intel Xeon E C (2.4 GHz, 10MB cache, 80W)
- 32GB RAM (4GB per core, ECC, DDR3, 1.6GHz LP RDIMM)
- Storage capacity: 1TB
- Mellanox dual-port QSFP QDR/FDR10/10GbE HCA
- 1 no. of 6Gb SAS HBA
- 2 nos. of Emulex 8Gb FC dual-port HBA
Storage System:
- IBM Storwize V3700 SFF Dual Control Enclosure
- 20 nos. of 900 GB SAS HDD
- 2 nos. of 8Gb FC 4-port host interface card

New HPC Cluster
Tape Backup Drive:
- TS2900 tape library with LTO5 SAS drive and rack-mount kit
Miscellaneous items:
- IBM 42U rack – 1 no.
- 18-port Mellanox IB switch – 1 no.
- 26-port Gigabit managed switch – 1 no.
- KVM switch (with console, keyboard, mouse) – 1 no.
Software supplied with the cluster:
- SuSE Linux 11.0
- Intel Cluster Studio XE
- System x V3.x monitoring software
- LSF Express V3.x for job scheduling
- Tivoli storage backup software

Block Diagram of 5.2TF HPC Cluster