Sun FIRE Jani Raitavuo Niko Ronkainen

Sun Fire 15K
- Sun's most powerful and scalable server
- Up to 106 processors, 576 GB memory and 250 TB online disk storage
- Fireplane interconnect
- SPARC architecture
- Solaris Operating Environment

Sun and the TOP500 list
- 88 systems manufactured by Sun Microsystems
- Fastest Sun system at place 145 (a cluster)
- 81 Fire 15K/144 systems (places )
- Users include Cambridge University (4), Commerzbank (3), DaimlerChrysler (5), Deutsche Bank (4)

Universitaet Aachen / RWTH
- 16 nodes of Fire 6800 (24 GB memory / 24 CPUs per node)
- 4 nodes of Fire 15K (144 GB memory / 72 CPUs per node)
- The 15K nodes (288 CPUs) are listed in the TOP500 at place 173, Rmax and Rpeak

The UltraSPARC III processor
- Sun's third-generation 64-bit processor
- Clock frequencies of 750 and 900 MHz (900 and 1050 MHz for the UltraSPARC III Cu)
- 4-way superscalar, 6 execution pipelines
- Capable of addressing 16 GB of main memory at 2.4 GB/s

The UltraSPARC III processor

Nodes
- A typical node contains up to 72 processors
- Nodes are connected to each other with Fast Ethernet, Gigabit Ethernet, Myrinet 2000 or the recently announced Fire Link technology

Interconnect
- The up to 72 processors in a node form up to 18 snooping coherency domains
- Each snooping coherency domain contains a CPU/memory board with 4 CPUs, an I/O assembly board with 2 I/O controllers, and an expander board
- Sun uses the Sun Fireplane two-level cache-coherency protocol

Fireplane
- Sun's first interconnect protocol to use point-to-point (directory) coherency

Software
Can be divided into 4 categories:
- Solaris Operating Environment
- Sun One Studio
- Sun HPC ClusterTools
- Sun Cluster

Solaris Operating Environment
- 64-bit operating system
- Multithreaded
- Latest version is Solaris 9

Sun One Studio
- Collection of software development tools
- The Compiler Collection package contains Sun Fortran, C and C++ compilers, and debuggers
- OpenMP support

Sun HPC ClusterTools
Full environment for parallel computing:
- Parallel program development
- Resource management
- System administration
- Cluster management

Features
- S3L library
- Sun MPI
- Prism debugger
- Sun Cluster Runtime Environment (CRE)
- Remote Shared Memory (RSM)
- Loadable Protocol Module
- Cluster Console Management (CCM)

Sun Cluster
- Limited parallel support (RSM supported)
- High availability
- All Sun servers can be used, up to eight nodes