1 NW-GRID, HEP and sustainability. Cliff Addison, Computing Services, July 2008. http://www.nw-grid.ac.uk

2 NW-GRID Vision (slide diagram)
● Top end: HPCx and CSAR
● Hooks to other Grid consortia: NGS, WRG
● Mid range: NW-GRID and local clusters
● User interfaces: desktop pools (Condor etc.), portals, client toolkits, active overlays
● Applications and industry: pharma, meds, bio, social, env, CCPs
● Sensor networks and experimental facilities
● Advanced network technology
● Technology “tuned to the needs of practicing scientists”

3 Project Aims and Partners
● Aims:
  – Establish, for the region, a world-class activity in the deployment and exploitation of Grid middleware
  – Realise the capabilities of the Grid in leading-edge academic, industrial and business computing applications
  – Leverage 100 posts plus £15M of additional investment
● Project Partners:
  – Daresbury Laboratory: CSED and e-Science Centre
  – Lancaster University: Management School, Physics, e-Science and Computer Science
  – University of Liverpool: Physics and Computer Services
  – University of Manchester: Computing, Computer Science, Chemistry, bio-informatics + systems biology
  – Proudman Oceanographic Laboratory, Liverpool

4 Project Funding
● North West Development Agency
● £5M over 4 years commencing April 2004
  – So funding has just ended - we’re looking for more!
● £2M capital for systems at four participating sites, with initial systems in year 1 (Jan 2006) and upgrades in year 3 (Jan 2008)
● £3M for staff – about 15 staff for 3 years
● POL plus institutional contributions from Daresbury and Lancaster
● Complemented by a “Teragrid competitive” private Gbit/s link among the sites

5 Hardware 2008 upgrade
● Lancaster and Liverpool procured upgrades that are now coming on stream.
● Procured systems:
  – now contain dual-core Opterons
  – are slowly being upgraded to quad-core Barcelona
● Lancaster:
  – 67 Sun x2200 nodes (38 with 32 GB, 29 with 16 GB)
  – 24 TB storage (Sun x4500)
● Liverpool:
  – 110 Sun x2200 nodes (73 with 32 GB, 27 with 16 GB)
  – 24 TB storage (Sun x4500) + 48 TB back-up
  – Complete rework of the existing cluster into two TFlop/s systems (connected via 10 Gbit/s fibre)

6 Liverpool Opteron Clusters (slide diagram)
● Two clusters linked at 10 Gbit/s: a High Capability Cluster (Infinipath interconnect) and a Gig-Ether Cluster (Gigabit Ethernet interconnect), with front-end nodes ulgbc3 and ulgbc5
● Dual-core node groups: 58 dual-processor dual-core nodes (140 cores, 2.4 GHz, 8 GB RAM, 200 GB disk) and 44 dual-processor dual-core nodes (176 cores, 2.2 GHz, 8 GB RAM, 72 GB disk)
● Quad-core node groups (dual-processor, quad-core; the 1Q2008 upgrade): 50 nodes (400 cores, 2.3 GHz, 32 GB RAM, 500 GB disk), 23 nodes (184 cores, 2.3 GHz, 32 GB RAM, 500 GB disk) and 37 nodes (296 cores, 2.3 GHz, 16 GB RAM, 500 GB disk)
● Storage: Panasas disk subsystem (8 TB), SATA RAID disk subsystem (5.2 TB) and 24 TB “Thumper” disk subsystem
● Totals: 212 nodes, 1196 cores, 37 TB disk

7 Bipedal Gait Analysis
Researchers used 170,000 core hours to estimate the maximum running speeds of Tyrannosaurus rex, Allosaurus, Dilophosaurus, Velociraptor, Compsognathus, an ostrich, an emu and a human. T. rex might have been too fast for humans to outrun!

8 NW-GRID for HEP
● Why?
  – Liv-HEP systems are largely limited to 1 GB memory per processor
  – NW-GRID nodes have a minimum of 2 GB per core, with new Barcelona nodes (8 cores per node) on stream by the end of July
  – The future is multi-core (more on that later)
● How?
  – A CE in Physics connects to multiple nodes on the GigE cluster
    ● Effectively these nodes become part of Liv-HEP
    ● Details of how this connection is made are being worked out
    ● Some concerns about network traffic, as the SE is still in Physics
  – Ideally, nodes can be imaged for Liv-HEP and re-imaged for NW-GRID quickly, so there is potential for moving nodes rapidly between NW-GRID and Liv-HEP
● Also want to put some serial NW-GRID traffic onto Liv-HEP

9 Why multi-core??
● Heat – a multi-core chip has the same thermal profile as its single-core cousin
● Space – Liverpool NW-GRID has ~1200 cores in 8 racks
● Management – tends to be at the node level: the more cores per node, the easier the management

10 BUT!!

11 Why not multi-core?
● Exploitation requires a separate thread of execution for each core:
  – Multiple tasks – e.g. N serial processes on N cores
  – Parallel tasks – N MPI processes on N cores
  – Parallel threads – a single program with “loop-level” parallelism (shared-memory parallel; see the sketch below)
● Multi-core shifts the serial bottleneck:
  – Memory access
  – Communicating off-node (e.g. NICs)
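As a concrete illustration of the “parallel threads” case above, here is a minimal shared-memory sketch in C with OpenMP. It is not taken from the talk; the loop, array sizes and variable names are invented for illustration. The directive splits the loop iterations across the available cores, one thread per core.

```c
/* Hypothetical example: loop-level (shared-memory) parallelism with OpenMP.
 * Compile with, e.g., gcc -fopenmp saxpy_omp.c -o saxpy_omp
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const int n = 1000000;          /* illustrative problem size */
    double *x = malloc(n * sizeof *x);
    double *y = malloc(n * sizeof *y);
    const double a = 2.0;

    for (int i = 0; i < n; i++) {   /* serial initialisation */
        x[i] = 1.0;
        y[i] = 2.0;
    }

    /* The directive splits the iterations across the available cores;
     * each core runs its own thread over a disjoint chunk of i values. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f, threads available = %d\n", y[0], omp_get_max_threads());
    free(x);
    free(y);
    return 0;
}
```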

12 Multi-core - good news
● Experience with multi-core nodes at Liverpool suggests
  – N cores usually deliver better than 0.8*N times the performance of 1 core
● HEP codes often have thousands of cases to run - multi-core just needs a good job scheduler.
● Lots of experience and software exist for developing SMP-parallel versions of codes
  – OpenMP - a directives-based parallel specification
  – Good performance on 4-8 threads is often fairly easy.
  – Higher levels of performance often require a careful look at load balance over the threads.
● Race conditions are the difficult debugging problem (a small sketch follows below)
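To make the race-condition point concrete, here is a small hedged sketch (again in C with OpenMP, not from the talk): the first loop has a data race on `sum` because all threads update it unsynchronised, while the `reduction` clause in the second loop gives each thread a private partial sum and combines them correctly.

```c
/* Hypothetical illustration of an OpenMP race condition and its fix. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double x[N];
    for (int i = 0; i < N; i++)
        x[i] = 1.0;

    /* BUGGY: every thread reads and writes 'sum' concurrently, so updates
     * can be lost and the result varies from run to run. */
    double sum = 0.0;
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        sum += x[i];
    printf("racy sum      = %f (often != %d)\n", sum, N);

    /* FIXED: each thread accumulates a private copy of 'sum', and OpenMP
     * combines the partial sums at the end of the loop. */
    sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i];
    printf("reduction sum = %f (always %d)\n", sum, N);

    return 0;
}
```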

13 Sustainability issues
● Multi-core to reduce heat and ease management is just the tip of the iceberg!
● How do we fund the refresh of the necessary compute infrastructure?
● How do we cope with the need to reduce watts per flop?
● How do we exploit the new developments taking place?

14 New Developments
● Heterogeneous architectures
  – Largely driven by watt-per-flop considerations
  – IBM: Opteron and Cell - new petaflops system
    ● PowerXCell 8i processors ~ 100 Gflop/s
  – AMD and Nvidia - multi-core “general purpose” GPUs
    ● AMD: 500 cores, 200 Gflop/s double precision at 150 W for $1000
    ● Nvidia: 1U node with 4 GPUs (960 cores), ~800 Gflop/s at 700 W for $8,000
    ● Think vector processing (see the sketch below)
● Homogeneous, but lots of cores
  – Sun Niagara 3 - 16 cores, 16 threads per core
    ● (but only one FP unit per core)
    ● Can put multiple chips on a node
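The “think vector processing” advice can be illustrated with a small C sketch of my own (not the speaker’s): the element-wise loop below has no dependences between iterations, so a compiler can map it onto SIMD units, and the same data-parallel shape is what GPU programming models expect, with one lightweight thread per element.

```c
/* Hypothetical data-parallel kernel: each element is independent, which is
 * exactly the shape of computation that vector units and GPUs accelerate.
 * Compile with vectorisation enabled, e.g. gcc -O3 -ftree-vectorize vec.c
 */
#include <stdio.h>

#define N 1048576

static float a[N], b[N], c[N];

/* 'restrict' tells the compiler the arrays do not alias, which helps it
 * generate SIMD (vector) instructions for the loop. */
static void scale_add(float *restrict out, const float *restrict x,
                      const float *restrict y, float alpha, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = alpha * x[i] + y[i];   /* no cross-iteration dependence */
}

int main(void)
{
    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 1.0f;
    }
    scale_add(c, a, b, 2.0f, N);
    printf("c[10] = %f\n", c[10]);      /* expect 21.0 */
    return 0;
}
```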

15 Higher-level developments
● Cloud computing
  – Package code into a virtual appliance (OS + application), then run it anywhere on the cloud (sound familiar?)
  – Charge per CPU-hour (a back-of-the-envelope cost sketch follows below)
● Storage services (e.g. Amazon S3)
  – Charge per GByte of storage, plus access
  – Current commercial systems are not slanted towards scientific computing
● Idea - build the next generation of GridPP around these concepts - and sell the excess…
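A back-of-the-envelope sketch of the charging model above, in C for consistency with the other examples. The prices and data volume are invented placeholders, not real 2008 cloud rates; the 170,000 core hours are borrowed from the gait-analysis example on slide 7.

```c
/* Hypothetical cloud cost model matching the "charge per CPU-hour /
 * per GByte" scheme above. All rates are assumed, not quoted prices. */
#include <stdio.h>

int main(void)
{
    const double cpu_hours     = 170000.0;  /* e.g. the gait-analysis study on slide 7 */
    const double price_per_hr  = 0.10;      /* assumed $/CPU-hour */
    const double storage_gb    = 1000.0;    /* assumed data volume, GByte */
    const double price_per_gb  = 0.15;      /* assumed $/GByte-month */
    const double months_stored = 12.0;

    double compute_cost = cpu_hours * price_per_hr;
    double storage_cost = storage_gb * price_per_gb * months_stored;

    printf("compute: $%.0f, storage: $%.0f, total: $%.0f\n",
           compute_cost, storage_cost, compute_cost + storage_cost);
    return 0;
}
```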

16 NW-GRID and sustainability
● Want to cover costs and refresh over 3 years
● Need additional money:
  – Universities view NW-GRID as a strategic resource
  – Push computational modelling by University groups for external contracts
  – Lease hardware to groups with “small” hardware grants
  – Work with system vendors to act as a “buyer’s friend”
  – Direct external contracts

17 External contracts
● Direct industry use is impeded by licensing models - this is slowly changing.
● Attract the interest of ISVs to new hardware on site (latest processors + memory + interconnect)
● Target priority areas as defined by funding bodies (e.g. NWDA, TSB)
● Involve regional Knowledge Transfer activities (e.g. Centre for Materials Discovery, NWVEC)
● Become a “cloud computing” site - possible for Daresbury.

18 Conclusions
● After 20 years of saying it, the age of parallel computing is finally upon us!
● Heat and space constraints will greatly limit the choice of future systems.
  – The associated hardware changes will require some major software redesign
● Must virtualise the software stack so that serial code can easily run almost anywhere (build with just enough OS).
  – Lots of work is needed to “virtualise” MPI codes
● How do you back up a petabyte of data?

