Tamnun Hardware

Tamnun Cluster inventory – system

Login node (2 x Intel Xeon E5645, 6 cores @ 2.4 GHz, 96 GB RAM):
- user login
- PBS (see the job-submission sketch after this list)
- compilations
- YP master

Admin node (2 x Intel Xeon E5-2640, 6 cores @ 2.5 GHz, 64 GB RAM):
- SMC

2 NAS heads (NFS, CIFS):
- 1st enclosure – 60 slots, 60 x 1 TB drives
- 2nd enclosure – 60 slots, 10 x 3 TB drives

Network solution:
- 15 QDR InfiniBand switches with 2:1 blocking topology
- 6 GigE switches for the management network
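Since the login node is the entry point for PBS on Tamnun, a minimal job-submission sketch may help connect the hardware list above to day-to-day use. This is a sketch only, not taken from the slides: it assumes qsub is available on the login node, and the queue name public_q, the walltime, and the program name my_program are hypothetical placeholders.

#!/usr/bin/env python3
"""Minimal PBS submission sketch for the Tamnun login node.

Assumptions (not from the slides): qsub is on the PATH, and a queue
named "public_q" exists; queue name, walltime and executable are
hypothetical placeholders.
"""
import subprocess

# One full public node: two six-core Xeons, i.e. 12 cores.
job_script = """#!/bin/bash
#PBS -N demo_job
#PBS -q public_q
#PBS -l nodes=1:ppn=12
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
./my_program
"""

# qsub reads the job script from stdin and prints the new job ID.
result = subprocess.run(["qsub"], input=job_script,
                        capture_output=True, text=True, check=True)
print("Submitted:", result.stdout.strip())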

Tamnun Cluster inventory – compute nodes (1)

Tamnun consists of a public cluster, available to general Technion users, and private sub-clusters purchased by Technion researchers.

Public cluster specifications:
- 80 compute nodes, each with two 2.40 GHz six-core Intel Xeon processors: 960 cores in total, with 8 GB DDR3 memory per core (the totals are checked in the short calculation below)
- 4 Graphical Processing Unit (GPU) servers with NVIDIA Tesla M2090 GPU Computing Modules, 512 CUDA cores each
- Storage: 36 nodes with 500 GB and 52 nodes with 1 TB SATA drives, 4 nodes with fast 1200 GB SAS drives; raw NAS storage capacity is 50 TB
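The core count quoted above follows directly from the node layout; the short calculation below makes the arithmetic explicit. The aggregate memory figure it prints (7.5 TB) is derived here and does not appear on the slide.

# Quick check of the public-cluster totals quoted above.
nodes = 80
sockets_per_node = 2          # two Xeon packages per node
cores_per_socket = 6          # six cores per processor
gb_per_core = 8               # 8 GB DDR3 per core, as stated

total_cores = nodes * sockets_per_node * cores_per_socket
total_ram_tb = total_cores * gb_per_core / 1024

print(total_cores)            # 960 cores, matching the slide
print(total_ram_tb)           # 7.5 TB aggregate DDR3 (derived, not on the slide)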

Tamnun Cluster inventory – compute nodes (2)

• Nodes n001 – n028 - RBNI (public)
• Nodes n029 – n080 - Minerva (public)
• Nodes n097 – n100 - “Gaussian” nodes with a large, fast drive (public)
• Nodes gn001 – gn004 - GPU (public)
• Nodes gn005 – gn007 - GPU (private nodes of Hagai Perets)
• Nodes n081 – n096, sn001 - private cluster (Dan Mordehai)
• Nodes n101 – n108 - private cluster (Oded Amir)
• Nodes n109 – n172, n217 – n232, gn008 – gn011 - private cluster (Steven Frankel)
• Nodes n173 – n180 - private cluster (Omri Barak)
• Nodes n181 – n184 - private cluster (Rimon Arieli)
• Nodes n185 – n192 - private cluster (Shmuel Osovski)
• Nodes n193 – n216 - private cluster (Maytal Caspary)
• Nodes n233 – n240 - private cluster (Joan Adler)
• Nodes n241 – n244 - private cluster (Ronen Talmon)
• Node sn002 - private node (Fabian Glaser)
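Because public and private node groups coexist in the same machine, it can be handy to check programmatically which group a given node name falls into. The sketch below simply re-encodes the node list above as a Python table; it is an illustration only, not an official mapping maintained by the cluster staff.

#!/usr/bin/env python3
"""Look up which Tamnun node group a node name belongs to.

The table re-encodes the node list on the slide above; it is an
illustration, not an official source of truth.
"""

# (prefix, first, last, group) for each node range on the slide
NODE_RANGES = [
    ("n",  1,   28,  "RBNI (public)"),
    ("n",  29,  80,  "Minerva (public)"),
    ("n",  81,  96,  "private cluster (Dan Mordehai)"),
    ("n",  97,  100, "Gaussian nodes (public)"),
    ("n",  101, 108, "private cluster (Oded Amir)"),
    ("n",  109, 172, "private cluster (Steven Frankel)"),
    ("n",  173, 180, "private cluster (Omri Barak)"),
    ("n",  181, 184, "private cluster (Rimon Arieli)"),
    ("n",  185, 192, "private cluster (Shmuel Osovski)"),
    ("n",  193, 216, "private cluster (Maytal Caspary)"),
    ("n",  217, 232, "private cluster (Steven Frankel)"),
    ("n",  233, 240, "private cluster (Joan Adler)"),
    ("n",  241, 244, "private cluster (Ronen Talmon)"),
    ("gn", 1,   4,   "GPU (public)"),
    ("gn", 5,   7,   "GPU (private, Hagai Perets)"),
    ("gn", 8,   11,  "private cluster (Steven Frankel)"),
    ("sn", 1,   1,   "private cluster (Dan Mordehai)"),
    ("sn", 2,   2,   "private node (Fabian Glaser)"),
]

def node_group(name: str) -> str:
    """Return the group for a node name such as 'n042' or 'gn003'."""
    prefix = name.rstrip("0123456789")
    number = int(name[len(prefix):])
    for pfx, first, last, group in NODE_RANGES:
        if pfx == prefix and first <= number <= last:
            return group
    return "unknown"

print(node_group("n042"))   # Minerva (public)
print(node_group("gn002"))  # GPU (public)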