Presentation on theme: "Table of Contents BULL – Company introduction" — Presentation transcript:


2 Table of Contents BULL – Company introduction
bullx – European HPC offer
BULL/Academic community co-operation + references
ISC’10 – News from Top European HPC Event
Discussion / Forum

3 Bull Strategy Fundamentals
Jaroslav Vojtěch, BULL HPC sales representative

4 Bull Group A growing and profitable company A solid customer base
Revenue breakdown. A growing and profitable company. A solid customer base: public sector, Europe – a distinctive advantage in times of crisis! Bull, Architect of an Open World™: our motto, our heritage, our culture. Group commitment to become a leading player in Extreme Computing in Europe, backed by the largest HPC R&D effort in Europe: 500 Extreme Computing experts, the largest pool in Europe.

5 Business segments and key offerings
Business segments and key offerings. Global offers and third-party products (€77m). Mainframes, Unix AIX systems, x86 systems (Windows/Linux); outsourcing, e-govt., telco, others; Enterprise, Extreme Computing & Storage. Hardware & System Solutions (€358m): supercomputers, solution integration, consultancy, optimization. Services & Solutions (€483m): systems integration, Open Source, Business Intelligence, security solutions, infrastructure integration. Maintenance & Product-Related Services (€192m): Green IT, data center relocation, disaster recovery, third-party product maintenance. Outsourcing operations.

6 Extreme Computing offer
Boris Mittelmann, HPC Consultant

7 Bull positioning in Extreme Computing
Presence in education, government and industrial markets. From mid-size solutions to high-end. On the front line for innovation: a large hybrid system for GENCI. Prepared to deliver petaflops-scale systems starting in 2010. The European Extreme Computing provider.

8 Addressing the HPC market
HPC market evolution (EMEA): from a $3B targeted HPC market in 2007 to $5B in 2012, across the supercomputer, divisional, departmental and workstation segments. Strategy: expand into manufacturing, oil and gas; an open framework (OS, NFS, DB and services); HPC Grand Challenges; petaflops-class HPC (the Tera-100 CEA project); hybrid architectures; leverage the Intel Xeon roadmap and time to market; manage and deliver complex projects. Our ambition: to be the European leader in Extreme Computing.

9 Target markets for Bull in Extreme Computing
Production HPC. Government: defense, economic intelligence, national research centers, weather prediction, climate research, modeling and change, ocean circulation. Oil & Gas: seismic (imaging, 3D interpretation, prestack data analysis), reservoir modeling & simulation, geophysics sites, data center, refinery data center. Automotive & Aerospace: CAE (fluid dynamics, crash simulation), EDA (mechatronics, simulation & verification). Finance: derivatives pricing, risk analysis, portfolio optimization.

10 The most complete HPC value chain in Europe
The HPC value chain, from Bull Systems R&D (500+ specialists in Europe) through system design, design and architecture deployment, hosting/outsourcing services, security (encryption, access control) and operations and management (SLA) – delivered by Bull organizations to customers across Europe, including Germany.

11 Innovation through partnerships
Bull’s experts are preparing the intensive computing technologies of tomorrow by having an active role in many European cooperative research projects A strong commitment to many projects, such as: Infrastructure projects: FAME2, CARRIOCAS, POPS, SCOS Application projects: NUMASIS (seismic), TELEDOS (health), ParMA (manufacturing, embedded), POPS (pharmaceutical, automotive, financial applications, multi-physical simulations…), HiPIP (image processing), OPSIM (Numerical optimization and robust design techniques), CSDL (complex systems design), EXPAMTION (CAD Mechatronics), CILOE (CAD Electronics), APAS-IPK (Life Sciences). Tools: application parallelizing, debugging and optimizing (PARA, POPS, ParMA) Hybrid systems: OpenGPU Hundreds of Bull experts are dedicated to cooperative projects related to HPC innovation

12 Major Extreme Computing trends and issues
Networking: prevalence of open architectures (Ethernet, InfiniBand). Multi-core: multi-core is here to stay and multiply; programming for multi-core is THE HPC challenge (see the sketch below). Accelerators: incredible performance per watt; turbo-charge performance… by a factor of 1 to 100… Storage: a vital component, integrated through a parallel file system.
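To make the multi-core point concrete, here is a minimal OpenMP sketch in C showing the kind of loop-level parallelization the slide alludes to. It is a generic illustration, not Bull code; the array size and the SAXPY-style kernel are arbitrary choices.

```c
/* Minimal OpenMP sketch: spreading a loop over the cores of one node.
 * Compile (GCC): gcc -fopenmp -O2 saxpy_omp.c -o saxpy_omp
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24)   /* illustrative problem size */

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y) return EXIT_FAILURE;

    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The iterations are independent, so OpenMP can split them
       statically across all available cores. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        y[i] = 2.5f * x[i] + y[i];

    printf("threads available: %d, y[0] = %f\n", omp_get_max_threads(), y[0]);
    free(x); free(y);
    return EXIT_SUCCESS;
}
```

On a dual-socket multi-core node the same binary simply picks up all cores via OMP_NUM_THREADS; the point of the slide is that this parallelism has to be expressed explicitly by the programmer.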

13 Bull’s vision Extreme Computing Innovation Performance/Watt
bullx blade system, bullx accelerator blade, bullx supernode SMP servers, bullx cluster suite, peta-scalability, mobull containers, research with European institutes. Focus areas: Extreme Computing performance/Watt, integration, green data center design, accelerators, mid-size to high-end, cost efficiency, optimization, off-the-shelf components.

14 The bullx range Designed with Extreme Computing in mind

15 hardware for peta-scalability
bullx cluster suite, water cooling, bullx supernodes, bullx blade system, bullx rack-mounted servers, storage architecture, accelerators.

16 The bullx blade system Dense and open
Best HPC server product or technology Top 5 new products or technologies to watch

17 bullx blade system No compromise on: Performance Density Efficiency
Dense and open. No compromise on: Performance – latest Xeon processors from Intel (Westmere EP), memory-dense, fast I/Os, fast interconnect (fully non-blocking InfiniBand QDR), accelerated blades. Density – 12% more compute power per rack than the densest equivalent competitor solution, up to 1296 cores in a standard 42U rack, up to 15.2 Tflops of compute power per rack (with CPUs). Efficiency – all the energy efficiency of Westmere EP plus an exclusive ultra-capacitor, advanced reliability (redundant power and fans, diskless option), water-cooled cabinet available. Openness – based on industry standards and open source technologies, runs all standard software.

18 bullx supernode An expandable SMP node for memory-hungry applications
SMP of up to 16 sockets based on Bull-designed BCS: Intel Xeon Nehalem EX processors Shared memory of up to 1TB (2TB with 16GB DIMMS) RAS features: Self-healing of the QPI and XQPI Hot swap disk, fans, power supplies Available in 2 formats: High-density 1.5U compute node High I/O connectivity node Green features: Ultra Capacitor Processor power management features

19 bullx rack-mounted systems
A large choice of options R422 E2 R423 E2 R425 E2 COMPUTE NODE SERVICE NODE VISUALIZATION 2 nodes in 1U for unprecedented density NEW: more memory Xeon 5600 2x 2-Socket 2x 12 DIMMs QPI up to 6.4 GT/s 2x 1 PCI-Express x16 Gen2 InfiniBand DDR/QDR embedded (optional) 2x 2 SATA2 hot-swap HDD 80 PLUS Gold PSU Enhanced connectivity and storage 2U Xeon 5600 2-Socket 18 DIMMs 2 PCI-Express x16 Gen2 Up to 8 SATA2 or SAS HDD Redundant 80 PLUS Gold power supply Hot-swap fans Supports latest graphics & accelerator cards 4U or tower 2-Socket Xeon 5600 18 DIMMs 2 PCI-Express x16 Gen2 Up to 8 SATA2 or SAS HDD Powerful power supply Hot-swap Fans

20 GPU accelerators for bullx
NVIDIA® Tesla™ computing systems: teraflops many-core processors that provide outstanding energy efficient parallel computing power NVIDIA Tesla C1060 NVIDIA Tesla S1070 To turn an R425 E2 server into a supercomputer Dual slot wide card Tesla T10P chip 240 cores Performance: close to 1 Tflops (32 bit FP) Connects to PCIe x16 Gen2 The ideal booster for R422 E2 or S6030 -based clusters 1U drawer 4 x Tesla T10P chips 960 cores Performance: 4 Tflops (32 bit FP) Connects to 2 PCIe x16 Gen2

21 Bull Storage Systems for HPC
DataDirect Networks S2A 9900 (consult us): SAS/SATA, up to 1,200 HDDs, 8 host ports, 2/3/4U drawers. StoreWay Optima 1500: SAS/SATA, 3 to 144 HDDs, up to 12 host ports, 2U drawers. StoreWay EMC CX4: FC/SATA, up to 480 HDDs, up to 16 host ports, 3U drawers. bullx cluster suite.

22 Bull Cool Cabinet Door Bull’s contribution to reducing energy consumption. Enables the world’s densest Extreme Computing solution: 28 kW/m² (40 kW on 1.44 m²), 29 ‘U’/m² (42U + 6 PDUs on 1.44 m²). 77% energy saving compared to air conditioning! Water thermal density is much more efficient than air: 600 W instead of 2.6 kW to extract 40 kW. Priced well below all competitors: €12K for a fully redundant rack – the same price as Schroff for twice the performance (theirs: 20 kW), and half the price of HP for better performance (theirs: 35 kW) and better density.

23 Extreme Computing solutions
Built from standard components, optimized by Bull’s innovation: hardware platforms, software environments and services (design, architecture, project management, optimisation). Open hardware architecture (interconnect, StoreWay storage systems) + open software architecture (bullx cluster suite).

24 Choice of Operating Systems
cluster suite Standard Linux distribution + bullx cluster suite A fully integrated and optimized HPC cluster software environment A robust and efficient solution delivering global cluster management… … and a comprehensive development environment Microsoft’s Windows HPC Server 2008 An easy to deploy, cost-effective solution with enhanced productivity and scalable performance Global cluster management = - installation and software deployment - monitoring and fault handling - cluster optimization and expansion

25 cluster suite Cluster DB benefits Master complexity +100,000 nodes
Mainly based on Open Source components, engineered by Bull to deliver RAS features (Reliability, Availability, Serviceability). Cluster DB benefits: master complexity (100,000+ nodes); make management easier, monitoring accurate and maintenance quicker; improve overall utilization rate and application performance. bullx cluster suite advantages: unlimited scalability – boot management and boot time; automated configuration – hybrid systems management; fast installation and updates – health monitoring & preventive maintenance.

26 Software Partners Batch management
Platform Computing LSF, Altair PBS Pro. Development, debugging, optimisation: TotalView, Allinea DDT, Intel Software. Parallel file system: Lustre.

27 Industrialized project management
Expertise & project management in 8 steps: 1. Solution design – servers, interconnect, storage, software. 2. Computer room design – racks, cooling, hot spots, air flow (CFD). 3. Factory integration & staging – rack integration, cabling, solution staging. 4. On-site installation & acceptance – installation, connection, acceptance. 5. Trainings and workshops – administrators, users. 6. Start production – on-site engineers, support from the Bull HPC expertise centre. 7. Support and assistance – call centre 7 days/week, on-site intervention, software support, HPC expert support. 8. Partnership – joint research projects.

28 Factory integration and staging
Two integration and staging sites in Europe, to guarantee delivery of “turnkey” systems: Angers + Serviware. 5,300 m² of technical platforms available for assembly, integration, tests and staging; m² of logistics platforms; 60 test stations; 150 technicians; 150 logistics employees. Certified ISO 9001, ISO and OHSAS 18000.

29 mobull, the container solution from Bull
The plug & boot data center Up to Teraflops per container Up and running within 8 weeks Innovative cooling system Can be installed indoor or outdoor Available in Regular or Large sizes

30 mobull, the container solution from Bull
MOBILE, FLEXIBLE, POWERFUL, PLUG & BOOT, MADE TO MEASURE, DENSE. Transportable container; modular; buy or lease; fast deployment. 550 kW, 227.8 Tflops, 18 PB. Hosts any 19” equipment – servers, storage, network; can host bullx servers. A complete turnkey solution.

31 A worldwide presence

32 Worldwide references in a variety of sectors
Educ/Research Industry Aerospace/Defence Other … and many others

33 bullx design honoured by OEM of CRAY

34 bullx design honoured by OEM of CRAY

35 Perspectives

36 HPC Roadmap
HPC roadmap, 2009 Q2 through 2011 Q4 (tentative dates, subject to change without notice):
Processors: Nehalem EP → Westmere EP → Sandy Bridge EP; Nehalem EX → Westmere EX.
Servers (std): R423-E2 (with 2x Nehalem-EP, then Westmere-EP) → R423-E3. Servers (gfx): R425-E2 (Nehalem-EP, then Westmere-EP) → R425-E3. Servers (twin): R422-E2 (2x2 Nehalem-EP std, DDR/QDR, then Westmere-EP) → R422-E3.
Blades: INCA chassis, 18x blades with 2x Nehalem-EP, then Westmere-EP, then Sandy Bridge EP; IB/QDR interconnect, 2x Ethernet switches, ultracapacitor, ESM, 10Gb, GPU blade, next platform.
SMP: R480-E1 (Dunnington, 24x cores) → MESCA with Nehalem EX (4S3U/4SNL, 8SNL/16S3U) → MESCA with Westmere EX.
GPUs: nVIDIA C1060 / S1070 on R422-E1/E2 → nVidia T20 / new generation of accelerators & GPUs on blade/SMP systems.
IB interconnect: QDR 36-port and 324-port switches → EDR.
Racks: air-cooled (20 kW) → air (20 kW) or water-cooled (40 kW) → direct liquid cooling.
Storage: Optima/EMC CX, DDN 9900/66xx/LSI-Panasas (TBC) → future storage offer.
Cluster suite: XR 5v3.1U1 ADDON2 → XR 5v3.1U1 ADDON3 → new-generation cluster manager (Extreme/Enterprise).

37 A constantly innovating Extreme Computing offer
Integrated clusters, hybrid clusters (Xeon + GPUs), and high-end servers with petaflops performance, from 2008 through 2010.

38 Our R&D initiatives address key HPC design points
The HPC market makes extensive use of two architecture models Bull R&D investments Scale-out Massive deployment of “thin nodes”, with low electrical consumption and limited heat dissipation bullx blade system Key issues: Network topology System Management Application software Scale-up “Fat nodes” with many processors and large memories to reduce the complexity of very large centers bullx super-nodes Key issues: Internal architecture Memory coherency Ambitious long-term developments in cooperation with CEA and competitiveness cluster, competence centre, and many organizations from the worlds of industry and education

39 Tera 100: designing the 1st European petascale system
Collaboration contract between Bull and CEA, the French Atomic Authority Joint R&D High-performance servers Cluster software for large-scale systems System architecture Application development Infrastructures for very large Data Centers Operational in 2010 Tera 100 in 5 figures 100,000 cores (X86 processors) TB memory GB/s data throughput m² footprint 5 MW estimated power consumption

40 Product Descriptions – ON REQUEST

41 BULL IN EDUCATION/RESEARCH - References

42 27 Tflops peak performance at the end of phase 2
University of Münster Germany's 3rd largest university and one of the foremost centers of German intellectual life Need More computing power and a high degree of flexibility, to meet the varied requirements of the different codes to be run Solution A new-generation bullx system, installed in 2 phases Phase 1 2 bullx blade chassis containing 36 bullx B500 compute blades 8 bullx R423 E2 service nodes DataDirect Networks S2A9900 storage system Ultra fast InfiniBand QDR interconnect Lustre shared parallel file system hpc.manage cluster suite (from Bull and s+c) and CentOS Linux Phase 2 (to be installed in 2010) 10 additional bullx blade chassis containing 180 bullx B500 compute blades equipped with Intel® Xeon® ‘Westmere’ 4 future SMP bullx servers, with 32 cores each 27 Tflops peak performance at the end of phase 2

43 University of Cologne One of Germany’s largest universities, it has been involved in HPC for over 50 years Need More computing power to run new simulations and refine existing simulations, in such diverse areas as genetics, high-tech materials, meteorology, astrophysics, economy Solution A new-generation bullx system, installed in 2 phases Phase 1 (2009) 12 bullx blade chassis containing 216 bullx B500 compute blades 12 bullx R423 E2 service nodes 2 DataDirect Networks S2A9900 storage systems Ultra fast InfiniBand QDR interconnect Lustre shared parallel file system bullx cluster suite and Red Hat Enterprise Linux Bull water-cooled racks for compute racks Phase 2 (2010) 34 additional bullx blade chassis containing 612 bullx B500 compute blades equipped with Intel® Xeon® ‘Westmere’ 4 future SMP bullx servers, with 128 cores each Performance at the end of phase 2: peak 100 Tflops 26 TB RAM and 500 TB disk storage

44 Jülich Research Center
JuRoPa supercomputer – “Jülich Research on Petaflops Architectures”: accelerating the development of high-performance cluster computing in Europe; a 200-teraflops general-purpose supercomputer. Bull is prime contractor in this project, which also includes Intel, Partec, and Sun. HPC-FF supercomputer – 100 teraflops to host applications for the European Union fusion community. Bull cluster: 1,080 Bull R422 E2 computing nodes with new-generation Intel® Xeon 5500 series processors, interconnected via an InfiniBand® QDR network; water-cooled cabinets for maximum density and optimal energy efficiency. The leading and largest HPC centre in Germany, and a major contributor to European-wide HPC projects. JuRoPa: to investigate emerging cluster technologies and achieve a new class of cost-efficient supercomputers for peta-scale computing. Together, the 2 supercomputers rank #10 in the TOP500, with Tflops (Linpack). Efficiency: 91.6%.

45 GENCI - CEA A hybrid architecture designed to meet production
and research needs, with a large cluster combining general-purpose servers and specialized servers: 1,068 Bull nodes, i.e. Intel® Xeon® 5500 cores, providing a peak performance of 103 Tflops; 48 NVIDIA® GPU nodes, i.e. cores, providing an additional theoretical performance of 192 Tflops; 25 TB memory; InfiniBand interconnect network; integrated Bull software environment based on Open Source components; common Lustre® file system; outstanding density with water cooling. 295 Tflops peak: the first large European hybrid system. “In just two weeks, a common team from Bull and CEA/DAM successfully installed GENCI's new supercomputer for the CCRT. Three days after the installation, we are already witnessing the exceptional effectiveness of the new 8,000 X5570-core cluster, which has achieved an 88% efficiency on the Linpack benchmark, demonstrating the sheer scalability of the Bull architecture and the remarkable performance of Intel's Xeon 5500 processor.” (Jean Gonnord, Program Director for Numerical Simulation at CEA/DAM, at the occasion of the launch of the Intel Xeon 5500 processor in Paris)

46 One of Britain’s leading teaching and research universities
Cardiff University One of Britain’s leading teaching and research universities Need Provide central HPC service to users in the various academic schools, who previously had to use small departmental facilities Foster the adoption of advanced research computing across a broad range of disciplines Find a supplier that will take a partnership approach – including knowledge transfer Solution 25 Teraflops peak performance Over 2,000 Intel® Xeon® Harpertown cores with Infiniband Interconnect Over 100 TBs of storage The partnership between Cardiff and Bull involves the development of a centre of excellence for high end computing in the UK, with Cardiff particularly impressed by Bull’s collaborative spirit. Key factors in Cardiff selecting Bull were the quality of Bull’s technology, HPC expertise and commitment to joint working. “The University is delighted to be working in partnership with Bull on this project that will open up a range of new research frontiers” said Pr Martyn Guest, Director of Advanced Research Computing

47 Commissariat à l’Energie Atomique
France's Atomic Energy Authority (CEA) is a key player in European research. It operates in three main areas: energy, information technology and healthcare, defence and security. Need A world-class supercomputer to run the CEA/DAM’s Nuclear Simulation applications Solution A cluster of 625 Bull NovaScale servers, including 567 compute servers, 56 dedicated I/O servers and 2 administration servers 10,000 Intel® Itanium® 2 cores 30 terabytes of core memory Quadrics interconnect network Bull integrated software environment based on Open Source components A processing capacity in excess of 52 teraflops # 1 European supercomputer (# 5 in the world) in the June 2006 TOP500 Supercomputer ranking “Bull offered the best solution both in terms of global performance and cost of ownership, in other words, acquisition and operation over a five-year period.” Daniel Verwaerde, CEA, Director of nuclear armaments “It is essential to understand that what we are asking for is extremely complex. It is not simply a question of processing, networking or software. It involves ensuring that thousands of elements work effectively together and integrating them to create a system that faultlessly supports the different tasks it is asked to perform, whilst also being confident that we are supported by a team of experts.” Jean Gonnord, Program Director for Numerical Simulation & Computer Sciences at CEA/DAM

48 Atomic Weapons Establishment
AWE provides the warheads for the United Kingdom’s nuclear deterrent. It is a centre of scientific and technological excellence. Need A substantial increase in production computing resources for scientific and engineering numerical modeling The solution must fit within strict environmental constraints on footprint, power consumption and cooling Solution Two identical bullx clusters + a test cluster with a total of: 53 bullx blade chassis containing 944 bullx B500 compute blades, i.e cores 6 bullx R423 E2 management nodes, 8 login nodes 16 bullx R423 E2 I/O and storage nodes DataDirect Networks S2A9900 storage system Ultra fast InfiniBand QDR interconnect Lustre shared parallel file system bullx cluster suite to ensure total cluster management Combined peak performance in excess of 75 Tflops

49 Petrobras Leader in the Brazilian petrochemical sector, and one of the largest integrated energy companies in the world Need A super computing system: to be installed at Petrobras’ new Data Center, at the University Campus of Rio de Janeiro equipped with GPU accelerator technology dedicated to the development of new subsurface imaging techniques to support oil exploration and production Solution A hybrid architecture coupling 66 general-purpose servers to 66 GPU systems: 66 bullx R422 E2 servers, i.e. 132 compute nodes or 1056 Intel® Xeon® 5500 cores providing a peak performance of 12.4 Tflops 66 NVIDIA® Tesla S1070 GPU systems, i.e cores, providing an additional theoretical performance of 246 Tflops 1 bullx R423 E2 service node Ultra fast InfiniBand QDR interconnect bullx cluster suite and Red Hat Enterprise Linux Over 250 Tflops peak One of the largest supercomputers in Latin America

50 ILION Animation Studios
Need: Ilion Animation Studios (Spain) needed to double their render farm to produce Planet 51. Solution: Bull provided 64 Bull R422 E1 servers (i.e. 128 compute nodes), 1 Bull R423 E1 head node and a Gigabit Ethernet interconnect, running Microsoft Windows Compute Cluster Server 2003. Rendered on Bull servers; released at the end of 2009.

51 ISC’10 – Top News Intel Unveils Plans for HPC Coprocessor
25th anniversary – record-breaking attendance. Intel unveils plans for an HPC coprocessor: tens of GPGPU-like cores with x86 instructions; a 32-core development version of the MIC coprocessor, codenamed "Knights Ferry," is now shipping to selected customers, and a team at CERN has already migrated one of its parallel C++ codes. TOP500 released – China gained 2nd place! TERA100 in production by BULL / CEA: the first European peta-scale architecture, the world’s largest Intel-based cluster, and the world’s fastest filesystem (500 GB/s).

52 Questions & Answers

53 bullx blade system – Block Diagram
18x compute blades 2x Westmere-EP sockets 12x memory DDR3 DIMMs 1x SATA HDD/SSD slot (optional – diskless an option) 1x IB ConnectX/QDR chip 1x InfiniBand Switch Module (ISM) for cluster interconnect 36 ports QDR IB switch 18x internal connections 18x external connections 1x Chassis Management Module (CMM) OPMA board 24 ports GbE switch 18x internal ports to Blades 3x external ports 1x optional Ethernet Switch Module (ESM) 24ports GbE switch 1x optional Ultra Capacitor Module (UCM) 3x GbE/1G 3x GbE/1G CMM ESM UCM 18x blades ISM 18x IB/QDR

54 bullx blade system – blade block diagrams
Block diagrams of the bullx B500 compute blade and bullx B505 accelerator blade: two Nehalem-EP/Westmere-EP sockets linked by QPI (12.8 GB/s each direction), 31.2 GB/s of memory bandwidth per socket, Tylersburg I/O controller(s), SATA SSD or diskless, GbE, and PCIe x8 (4 GB/s) and x16 (8 GB/s) links to the InfiniBand interfaces and, on the B505, to the accelerators.

55 bullx B500 compute blade Connector to backplane
Exploded view of the bullx B500 compute blade (143.5 x 425 mm): Westmere-EP processors with 1U heatsinks, fans, Tylersburg chipset with a short heatsink, 1.8" HDD/SSD, 12x DDR3 DIMMs, ConnectX QDR, iBMC, ICH10.

56 Ultracapacitor Module (UCM)
NESSCAP capacitors (2x6). Embedded protection against short power outages: protects one chassis with all its equipment under load, for up to 250 ms. Avoids an on-site UPS – save on infrastructure costs and up to 15% on electrical costs. Improves overall availability: run longer jobs. Board LEDs.

57 Bull StoreWay Optima1500 750MB/s bandwidth
12 x 4 Gbps front-end connections; 4 x 3 Gbps point-to-point back-end disk connections. Supports up to 144 SAS and/or SATA HDDs: SAS 146 GB (15K rpm), 300 GB (15K rpm), SATA 1000 GB (7.2K rpm). RAID 1, 3, 5, 6, 10, 50, 3 Dual Parity, Triple Mirror. 2 GB to 4 GB cache memory. Windows, Linux, VMware interoperability (SFR for UNIX). 3 models: single controller with 2 front-end ports, dual controllers with 4 front-end ports, dual controllers with 12 front-end ports. FE: front-end (to the hosts); BE: back-end (to the hard disk drives).

58 CLARiiON CX4-120 UltraScale Architecture Connectivity Scalability
Two 1.2 GHz dual-core LV-Woodcrest CPU modules; 6 GB system memory. Connectivity: 128 high-availability hosts; up to 6 I/O modules (FC or iSCSI); 8 front-end 1 Gb/s iSCSI host ports max; 12 front-end 4 Gb/s / 8 Gb/s Fibre Channel host ports max; 2 back-end 4 Gb/s Fibre Channel disk ports. Scalability: up to 1,024 LUNs; up to 120 drives.

59 CLARiiON CX4-480 UltraScale Architecture Connectivity Scalability
Two 2.2 GHz dual-core LV-Woodcrest CPU modules; 16 GB system memory. Connectivity: 256 high-availability hosts; up to 10 I/O modules (FC, or iSCSI at GA); 16 front-end 1 Gb/s iSCSI host ports max; 16 front-end 4 Gb/s / 8 Gb/s Fibre Channel host ports max; 8 back-end 4 Gb/s Fibre Channel disk ports. Scalability: up to 4,096 LUNs; up to 480 drives.

60 DataDirect Networks S2A 9900
Performance: a single S2A9900 system delivers 6 GB/s reads & writes; multiple-system configurations are proven to scale beyond 250 GB/s; real-time, zero-latency data access and parallel processing; native FC-4, FC-8 and/or InfiniBand 4X DDR. Capacity: up to 1.2 petabytes in a single system, scaling beyond hundreds of petabytes with multiple systems; intermix SAS & SATA in the same enclosure; manage up to 1,200 drives; 1.2 PB in just two racks. Innovation: high-performance DirectRAID™ 6 with zero degraded mode; SATAssure™ Plus data integrity verification & drive repair; power-saving drive spin-down with S2A SleepMode; power-cycle individual drives.

61 bullx R422 E2 characteristics
1U rackmount – 2 nodes in a 1U form factor. Intel S5520 chipset (Tylersburg), QPI up to 6.4 GT/s. Processor: 2x Intel® Xeon® 5600 per node. Memory: 12 DIMM sockets, Reg ECC DDR3 1 GB / 2 GB / 4 GB / 8 GB, up to 96 GB per node at 1333 MHz (with 8 GB DIMMs). Disks: 2 HDDs per node, hot-swap SATA2 rpm, 250/500/750/1000/1500/2000 GB; independent power control circuitry built in for power management. Power: 1 shared power supply unit, 1200 W max, fixed / no redundancy, 80 PLUS Gold. InfiniBand: 1 optional on-board DDR or QDR controller per node. Expansion slots: 1 PCI-E x16 Gen2 (per node). Rear I/O: 1 external IB, 1 COM port, VGA, 2 Gigabit NICs, 2 USB ports (per node). Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R (per node).

62 bullx R423 E2 The perfect server for service nodes 2U rack mount
Processor: 2x Intel® Xeon® 5600. Chipset: 2x Intel® S (Tylersburg). QPI: up to 6.4 GT/s. Memory: 18 DIMM sockets, DDR3 up to 144 GB at 1333 MHz. Disks – without add-on adapter: 6 SATA2 HDDs (7200 rpm – 250/500/750/1000/1500/2000 GB); with PCI-E RAID SAS/SATA add-on adapter (support of RAID 0, 1, 5, 10): 8 SATA2 (7200 rpm – 250/500/750/1000/1500/2000 GB) or SAS HDDs (15000 rpm – 146/300/450 GB); all disks 3.5 inches. Expansion slots (low profile): 2 PCI-E Gen2 x16, 4 PCI-E Gen2 x8, 1 PCI-E Gen2 x4. Redundant power supply unit. Matrox Graphics MGA G200eW embedded video controller. Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R on a dedicated RJ45 port. WxHxD: 437 mm x 89 mm x 648 mm.

63 Bull System Manager Suite
Consistent administration environment thanks to the cluster database Ease of use through centralized monitoring fast and reliable deployment configurable notifications Built from the best Open Source and commercial software packages integrated tested supported Easy system operation using Bull System Manager HPC Edition Conman - centralized console management NSCommands - nodes power-on control through IPMI Ksis - preparation and deployment of software images and patches Nagios - hardware and software components monitoring, alerts triggering Ganglia - system activity monitoring pdsh - parallel commands

64 Detailed knowledge of cluster structure
Workflow: customized cluster architecture drawings and standard cluster definitions, backed by the Expertise Centre and R&D, feed an equipment-and-description generator that produces the logical netlist, physical netlist, cable labels and factory preload files (model “A” and model “B”) used by the installer.

65 Product descriptions bullx blade system Bullx supernodes
bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

66 bullx blade system – overall concept

67 bullx blade system – overall concept
Uncompromised performance: support for high-frequency Westmere processors; memory bandwidth (12 memory slots); fully non-blocking IB QDR interconnect; up to 2.53 TFLOPS per chassis and up to 15.2 TFLOPS per rack (with CPUs) – see the worked estimate below. General-purpose and versatile: Xeon Westmere processor; 12 memory slots per blade; local HDD/SSD or diskless; IB / GbE; RHEL, SUSE, Windows HPC Server 2008, CentOS, …; compilers: GNU, Intel, … Leading-edge technologies: Intel Nehalem, InfiniBand QDR, diskless operation, GPU blades. High density: 7U chassis with 18 blades (2 processors, 12 DIMMs, HDD/SSD slot, IB connection each), 1 IB switch (36 ports), 1 GbE switch (24 ports), ultracapacitor. Optimized power consumption: typically 5.5 kW per chassis; high-efficiency (90%) PSUs; smart fan control in each chassis and in the water-cooled rack; ultracapacitor, so no UPS is required.
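The peak figures quoted above can be cross-checked with simple arithmetic. The sketch below assumes 6-core Westmere-EP processors at 2.93 GHz issuing 4 double-precision flops per core per cycle, two sockets per blade, 18 blades per 7U chassis and six chassis per 42U rack; the clock frequency and flops-per-cycle values are assumptions about the SKU, not figures stated on the slide.

```c
/* Back-of-the-envelope check of the chassis/rack peak figures above.
 * Assumes 6-core Westmere-EP at 2.93 GHz, 4 DP flops/cycle/core (SSE),
 * 2 sockets per blade, 18 blades per 7U chassis, 6 chassis per 42U rack. */
#include <stdio.h>

int main(void)
{
    double ghz = 2.93, flops_per_cycle = 4.0;
    int cores = 6, sockets_per_blade = 2, blades = 18, chassis_per_rack = 6;

    double per_socket  = ghz * flops_per_cycle * cores;            /* ~70.3 GFLOPS */
    double per_chassis = per_socket * sockets_per_blade * blades;  /* ~2.53 TFLOPS */
    double per_rack    = per_chassis * chassis_per_rack;           /* ~15.2 TFLOPS */

    printf("per socket : %6.1f GFLOPS\n", per_socket);
    printf("per chassis: %6.2f TFLOPS\n", per_chassis / 1000.0);
    printf("per rack   : %6.1f TFLOPS\n", per_rack / 1000.0);
    return 0;
}
```

With those assumptions the numbers land on roughly 2.53 TFLOPS per chassis and 15.2 TFLOPS per rack, matching the figures on the slide.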

68 bullx chassis packaging
Chassis packaging (7U): LCD unit, CMM, fans, 4x PSUs, IB Switch Module, ultracapacitor, 18 blades, ESM.

69 bullx B505 accelerator blade
Embedded accelerators for high performance with high energy efficiency: 2.1 TFLOPS at 0.863 kW per blade, with 2x Intel Xeon 5600, 2x NVIDIA T10 (*) and 2x IB QDR; 18.9 TFLOPS in a 7U chassis. (*) The T20 is on the roadmap.

70 bullx B505 accelerator blade
Double-width blade: 2 NVIDIA Tesla M1060 GPUs, 2 Intel® Xeon® 5600 quad-core CPUs, 1 dedicated PCIe x16 connection for each GPU, double InfiniBand QDR connections between blades (2x CPUs, 2x GPUs). Front view and exploded view.

71 Product descriptions bullx blade system bullx supernodes
Bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

72 bullx supernode An expandable SMP node for memory-hungry applications
SMP of up to 16 sockets based on Bull-designed BCS: Intel Xeon Nehalem EX processors Shared memory of up to 1TB (2TB with 16GB DIMMS) RAS features: Self-healing of the QPI and XQPI Hot swap disk, fans, power supplies Available in 2 formats: High-density 1.5U compute node High I/O connectivity node Green features: Ultra Capacitor Processor power management features

73 bullx supernode: CC-NUMA server
CC-NUMA SMP built from QPI-connected modules (Nehalem-EX processors, IOH and BCS), linked through an XQPI fabric of BCS chips: 128 cores and up to 1 TB RAM (2 TB with 16 GB DIMMs). Maximum configuration: 4 modules of 4 sockets each, i.e. 16 sockets, 128 cores, 128 memory slots.

74 Bull's Coherence Switch (BCS)
The heart of the CC-NUMA architecture: ensures global memory and cache coherence; optimizes traffic and latencies; MPI collective operations in hardware (reductions, synchronization barriers) – see the sketch below. Key characteristics: 18x18 mm in 90 nm technology; 6 QPI and 6 XQPI high-speed serial interfaces up to 8 GT/s; power-conscious design with selective power-down capabilities; aggregate data rate: 230 GB/s.
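The collective operations listed above (reductions, synchronization barriers) are the ones an application expresses through standard MPI calls; any hardware offload by the BCS is transparent to the program. The sketch below is a generic MPI example of such collectives, not Bull-specific code.

```c
/* Standard MPI collectives of the kind the text says the BCS accelerates:
 * a barrier synchronization followed by a global sum reduction.
 * Build with any MPI compiler wrapper, e.g. mpicc collectives.c -o collectives,
 * and run with e.g. mpirun -np 128 ./collectives
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Synchronization barrier: no rank proceeds until all have arrived. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* Reduction: every rank contributes its own value, all get the sum. */
    double local = (double)rank, global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %.0f\n", size - 1, global);

    MPI_Finalize();
    return 0;
}
```

The barrier and the Allreduce here are exactly the operation classes the slide attributes to the BCS; the application code does not change whether or not they are accelerated.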

75 bullx S6030 – 3U Service Module / Node
bullx S6030, 3U service module/node: BCS, ultra-capacitor, up to 4 Nehalem-EX processors, 32 DDR3 DIMM slots, hot-swap fans, up to 8 hot-swap disks (SATA RAID 1 or SAS RAID 5), 2 hot-swap power supplies, 6 PCIe slots (1x x16, 5x x8).

76 bullx S6010 - Compute Module / Node
bullx S6010 compute module/node: BCS, up to 4 Nehalem-EX processors, 32 DDR3 DIMM slots, 1 PCIe x16 slot, 1 power supply, ultra-capacitor, SATA disk, fans; 2 modules together provide 64 cores / 512 GB RAM.

77 Product descriptions bullx blade system bullx supernodes
bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

78 bullx rack-mounted systems
A large choice of options R422 E2 R423 E2 R425 E2 COMPUTE NODE SERVICE NODE VISUALIZATION 2 nodes in 1U for unprecedented density NEW: more memory Xeon 5600 2x 2-Socket 2x 12 DIMMs QPI up to 6.4 GT/s 2x 1 PCI-Express x16 Gen2 InfiniBand DDR/QDR embedded (optional) 2x 2 SATA2 hot-swap HDD 80 PLUS Gold PSU Enhanced connectivity and storage 2U Xeon 5600 2-Socket 18 DIMMs 2 PCI-Express x16 Gen2 Up to 8 SAS or SATA2 HDD Redundant 80 PLUS Gold power supply Hot-swap fans Supports latest graphics & accelerator cards 4U or tower 2-Socket Xeon 5600 18 DIMMs 2 PCI-Express x16 Gen2 Up to 8 SATA2 or SAS HDD Powerful power supply Hot-swap Fans

79 bullx R425 E2 For high performance visualization 4U / tower rack mount
Processor: 2x Intel® Xeon® 5600. Chipset: 2x Intel® S (Tylersburg). QPI: up to 6.4 GT/s. Memory: 18 DIMM sockets, DDR3 up to 144 GB at 1333 MHz. Disks – without add-on adapter: 6 SATA2 HDDs (7200 rpm – 250/500/750/1000/1500/2000 GB); with PCI-E RAID SAS/SATA add-on adapter (support of RAID 0, 1, 5, 10): 8 SATA2 (7200 rpm – 250/500/750/1000/1500/2000 GB) or SAS HDDs (15000 rpm – 146/300/450 GB); all disks 3.5 inches. Expansion slots (high profile): 2 PCI-E Gen2 x16, 4 PCI-E Gen2 x8, 1 PCI-E Gen2 x4. Powerful power supply unit. Matrox Graphics MGA G200eW embedded video controller. Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R on a dedicated RJ45 port. WxHxD: 437 mm x 178 mm x 648 mm.

80 Product descriptions bullx blade system bullx rack-mounted systems
bullx supernodes NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

81 GPU accelerators for bullx
NVIDIA® Tesla™ computing systems: teraflops many-core processors that provide outstanding energy efficient parallel computing power NVIDIA Tesla C1060 NVIDIA Tesla S1070 To turn an R425 E2 server into a supercomputer Dual slot wide card Tesla T10P chip 240 cores Performance: close to 1 Tflops (32 bit FP) Connects to PCIe x16 Gen2 The ideal booster for R422 E2 or S6030 -based clusters 1U drawer 4 x Tesla T10P chips 960 cores Performance: 4 Tflops (32 bit FP) Connects to 2 PCIe x16 Gen2

82 Ready for future Tesla processors (Fermi)
Roadmap chart (performance vs. time, Q4 2009 – Q3 2010): the Tesla C1060 (933 Gigaflops SP, 78 Gigaflops DP, 4 GB memory) sits in the mid-range; the Tesla C2050 (Gigaflops DP, 3 GB memory, ECC) and Tesla C2070 (Gigaflops DP, 6 GB memory, ECC) deliver around 8x peak DP performance and address large datasets – the 6 GB of memory is valuable for seismic imaging, medical imaging and CFD. Disclaimer: performance specifications may change.

83 Ready for future Tesla 1U Systems (Fermi)
Roadmap chart (performance vs. time, Q4 2009 – Q3 2010): the Tesla S1070 (4.14 Teraflops SP, 345 Gigaflops DP, 4 GB memory per GPU) sits in the mid-range; the Tesla S2050 (Teraflops DP, 3 GB memory per GPU, ECC) and Tesla S2070 (Teraflops DP, 6 GB memory per GPU, ECC) deliver around 8x peak DP performance and address large datasets. Disclaimer: performance specifications may change.

84 NVIDIA Tesla 1U system & bullx R422 E2
The NVIDIA Tesla 1U system connects to the 1U bullx R422 E2 host server through PCIe Gen2 host interface cards and PCIe Gen2 cables.

85 Product descriptions bullx blade system bullx rack-mounted systems
bullx supernodes NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

86 Bull Storage for HPC clusters
A complete line of storage systems Performance Modularity High Availability* A rich management suite Monitoring Grid & standalone system deployment Performance analysis *: with Lustre

87 Bull Storage Systems for HPC - details
Systems: Optima1500 | CX4-120 | CX4-480 | S2A 9900 couplet
Number of disks: 144 | 120 | 480 | 1200
Disk types: SAS 146/300/450 GB, SATA 1 TB (Optima1500); FC 146/300/400/450 GB, FC 400 GB 10K rpm, 146/300/450 GB 15K rpm (CX4-120/CX4-480); SAS 300/450/600 GB 15K rpm, SATA 500/750/1000/2000 GB (S2A 9900)
RAID levels: R1, 3, 3DP, 5, 6, 10, 50 and TM (Optima1500); R0, R1, R10, R3, R5, R6 (CX4); 8+2 (RAID 6) (S2A 9900)
Host ports: 2/12 FC4 | 4/12 FC4 | 8/16 FC4 | 8 FC4
Back-end ports: 2 SAS 4X | 2 | 8 | 20 SAS 4X
Cache size (max): 4 GB | 6 GB | 16 GB | 5 GB RAID-protected
Controller size: 2U base with disks (Optima1500); 3U (CX4); 4U couplet (S2A 9900)
Disk drawers: 2U, 12 slots (Optima1500); 3U, 15 slots (CX4); 3/2/4U, 16/24/60 slots (S2A 9900)
Performance (RAID 5): read up to 900 MB/s, write up to 440 MB/s | read up to 720 MB/s, write up to 410 MB/s | read up to 1.25 GB/s, write up to 800 MB/s | read & write up to 6 GB/s

88 Bull storage systems - Administration & monitoring
HPC-specific Administration Framework Specific administration commands developed on CLI: ddn_admin, nec_admin, dgc_admin, xyr_admin Model file for configuration deployment: LUNs information, Access Control information, etc. Easily Replicable for many Storage Subsystems HPC specific Monitoring Framework Specific SNMP trap management Periodic monitoring of all Storage Subsystems in cluster Storage Views in Bull System Manager HPC edition: Detailed status for each item (power supply, fan, disk, FC port, Ethernet port, etc.) LUN/zoning information

89 Product descriptions bullx blade system bullx supernodes
bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

90 Bull Cool Cabinet Door Innovative Bull design
‘Intelligent’ door: self-regulates fan speed depending on temperature; handles fan or water incidents gracefully (fans increase speed and extract hot air); optimized serviceability; A/C redundancy. Side benefit for the customer: no more hot spots in the computer room – good for overall MTBF! Ready for upcoming Bull Extreme Computing systems: 40 kW is a perfect match for a rack configured with bullx blades or future SMP servers at the highest density.

91 Jülich Research Center: water-cooled system

92 Cool cabinet door: Characteristics
Width: 600 mm (19”). Height: mm (42U). Depth: 200 mm (8”). Weight: kg. Cooling capacity: up to 40 kW. Power supply: redundant. Power consumption: 700 W. Input water temperature: 7-12 °C. Output water temperature: °C. Water flow: 2 liters/second (7 m³/hour). Ventilation: 14 managed multi-speed fans. Recommended cabinet air inlet: 20 °C ± 2 °C. Cabinet air outlet: 20 °C ± 2 °C. Management: integrated management board for local regulation and alert reporting to Bull System Manager.
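The cooling capacity and water flow above are consistent with a simple heat balance, Q = ṁ·c·ΔT. The sketch below works out the implied water temperature rise, assuming the textbook specific heat of water (≈4.18 kJ/kg·K) and 1 kg per liter; those constants are assumptions, not values from the slide.

```c
/* Heat-balance check for the cool cabinet door: how much the water warms up
 * when extracting 40 kW at 2 liters/second.  Q = mdot * c * dT  =>  dT = Q / (mdot * c) */
#include <stdio.h>

int main(void)
{
    double q_kw = 40.0;   /* heat extracted (kW), from the slide            */
    double flow = 2.0;    /* water flow (kg/s): 2 l/s at ~1 kg per liter    */
    double c_p  = 4.18;   /* specific heat of water (kJ/kg.K), assumed      */

    double dt = q_kw / (flow * c_p);   /* temperature rise of the water (K) */
    printf("water temperature rise: %.1f K\n", dt);   /* ~4.8 K             */
    return 0;
}
```

A rise of roughly 4.8 K is consistent with the 7-12 °C inlet temperature listed above.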

93 Cool Cabinet Door: how it works

94 Flexible operating conditions
Operating parameters adaptable to various customer conditions Energy savings further optimized depending on servers activity Next step: Mastering water distribution Predict Temperature, flow velocity, pressure drop within customer water distribution system Promote optimized solution

95 Product descriptions bullx blade system bullx supernodes
bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

96 Product descriptions bullx blade system bullx supernodes
bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

97 bullx cluster suite benefits
With the features developed and optimized by Bull, your HPC cluster is all that a production system should be: efficient, reliable, predictable, secure, easy to manage. Cluster Management Fast deployment Reliable management Intuitive monitoring Powerful control tools Save administration time Help prevent system downtime MPI Scalability of parallel application performance Standard interface compatible with many interconnects Improve system productivity Increase system flexibility File system: Lustre Improved I/O performance Scalability of storage Easy management of storage system Improve system performance Provide unparalleled flexibility Interconnect Cutting edge InfiniBand stack support Improve system performance Development tools Fast development and tuning of applications Easy analysis of code behaviour Help get the best performance and thus the best return on investment Kernel debugging and optimization tools Better performance on memory-intensive applications Easy optimization of applications Save development and optimization time SMP & NUMA architectures Optimized performance Reliable and predictable performance Improve system performance Red Hat distribution Standard Application support / certification Large variety of supported applications

98 bullx cluster suite components
Architecture diagram: the bullx cluster suite stacks a system environment (installation/configuration and monitoring/control/diagnostics: Ksis, Nscontrol, parallel commands, Nagios, Ganglia, the Bull System Manager cluster database), an application environment (execution environment and development: job scheduling, resource management, libraries & tools, MPIBull2), and file systems (Lustre and its configuration, NFSv3/NFSv4) on top of the interconnect access layer (OFED, …) and the Linux OS; underneath sit the administration network, the HPC interconnect, the Linux kernel and the hardware (XPF SMP platforms, InfiniBand/GigE interconnects, GigE network switches, Bull StoreWay disk arrays).
Bull Cluster Suite incorporates a comprehensive set of tools enabling users to efficiently operate the cluster at all stages. Easy system operation using Bull System Manager HPC Edition: Conman – centralized console management; NSCommands – node power-on control through IPMI; Ksis – preparation and deployment of software images and patches; Nagios – hardware and software component monitoring, alert triggering; Ganglia – system activity monitoring; pdsh – parallel commands. Optimized execution environment: integrated high-speed network drivers, debugging and monitoring tools; scientific computing and communication libraries: Bull MPI 2, OpenMP, FFTW, BlockSolve, Intel MKL (commercial); the Lustre parallel file system – optimized and integrated within Bull Advanced Server to facilitate configuration; job scheduling software – tightly integrated with the Bull MPI 2 libraries: SLURM, Platform LSF (commercial) or Altair PBS Pro (commercial). Development and tuning environment: compilers: GNU Compilers, Intel C/C++/Fortran 10 (commercial); debuggers: IDB, GDB, Etnus TotalView (commercial), Allinea DDT (commercial); profilers: oprofile, GNU profiler, Intel VTune (commercial), Intel Trace Analyzer and Collector (commercial). An example of the kind of user job this stack runs is sketched below.
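As referenced above, here is a hedged example of the kind of user job this software stack exists to run: a trivial MPI program that reports where each rank was placed. The build and launch commands in the comment (mpicc, srun) are generic illustrations; the exact wrapper, module and partition names on a bullx installation are assumptions.

```c
/* A minimal "where did my ranks land?" MPI program, representative of what
 * users run through the MPI library and batch scheduler listed above.
 * Hypothetical build/run lines (names are assumptions):
 *   mpicc hello_mpi.c -o hello_mpi
 *   srun -N 4 -n 32 ./hello_mpi        (SLURM launch)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(node, &name_len);   /* name of the compute node */

    printf("rank %3d of %3d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}
```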

99 Product descriptions bullx blade system bullx supernodes
bullx rack-mounted systems NVIDIA Tesla Systems Bull Storage Cool cabinet door mobull bullx cluster suite Windows HPC Server 2008

100 Bull and Windows HPC Server 2008
Clusters of bullx R422 E2 servers with Intel® Xeon® 5500 processors. Compact rack design: 2 compute nodes in 1U or 18 compute nodes in 7U, depending on the model. Fast & reliable InfiniBand interconnect. Supporting Microsoft® Windows HPC Server 2008: simplified cluster deployment and management, broad application support, enterprise-class performance and scalability. Common collaboration with leading ISVs to provide complete solutions. The right technologies to handle industrial applications efficiently.

101 Windows HPC Server 2008 Combining the power of the Windows Server platform with rich, out-of-the-box functionality to help improve the productivity and reduce the complexity of your HPC environment. Microsoft® Windows Server® 2008 HPC Edition + Microsoft® HPC Pack 2008 = Microsoft® Windows® HPC Server 2008. Support for high-performance hardware (x64 architecture); Winsock Direct support for RDMA over high-performance interconnects (Gigabit Ethernet, InfiniBand, Myrinet, and others); support for industry standards (MPI2); integrated job scheduler; cluster resource management tools. An integrated “out of the box” solution that leverages past investments in Windows skills and tools and makes cluster operation just as simple and secure as operating a single system.

102 A complete turn-key solution
Bull delivers a complete ready-to-run solution: sizing; factory pre-installed and pre-configured; installation and integration in the existing infrastructure; 1st and 2nd level support; monitoring, audit; training. Bull has a Microsoft Competence Center.

103 bullx cluster 400-W Enter the world of High Performance Computing with the bullx cluster 400-W running Windows HPC Server 2008. bullx cluster 400-W4: 4 compute nodes to relieve the strain on your workstations. bullx cluster 400-W8: 8 compute nodes to give independent compute resources to a small team of users, enabling them to submit large jobs or several jobs simultaneously. bullx cluster 400-W16: 16 compute nodes to equip a workgroup with independent high-performance computing resources that can handle their global compute workload. A solution that combines: the performance of bullx rack servers equipped with Intel® Xeon® processors; the advantages of Windows HPC Server 2008 (simplified cluster deployment and management, easy integration with the IT infrastructure, broad application support, familiar development environment); and expert support from Bull’s Microsoft Competence Center.

