Founded in 2010: UCL, Southampton, Oxford and Bristol
Key objectives of the Consortium:
– Prove the concept of shared, regional e-infrastructure services
– Focus on providing a sustainable, effective research platform service for the Consortium members
– Drive Consortium collaboration
– Drive industry engagement
– Explore all areas of e-infrastructure (research data, scientific software, …)
Governance:
– Strategic/policy: Executive Board, User Group
– Centre/project/operations: Project Board, Operations Group
The UK Government decided there was a need for regional research infrastructure to link into the national facilities:
– National HPC
– Northern 8 (N8)
– Strathclyde/Glasgow (West)
– Midlands: Leicester, Loughborough
EPSRC Regional HPC call, December 2011: Oxford, UCL, Southampton, Bristol (+ STFC RAL)
– £2.82 million capital; £701K recurrent (1 year only)
Centre for Innovation, two facilities:
1. General-purpose Intel-based HPC cluster (IRIDIS): £1.7 million, 12,000 cores in year two. Based at and operated by Southampton.
2. GPGPU cluster (EMERALD): £1.1 million, based on 372 NVIDIA M2090 GPUs. Largest in the UK, 2nd largest in Europe. Based at the Harwell campus and operated by RAL/STFC on behalf of the Consortium.
Aims:
– To support multi-disciplinary research, with a centre of gravity in engineering and physical sciences, reaching out to other disciplines
– To encourage and enable industrial usage and collaboration
Two HPC systems to create a unique regional facility
System 1 – hosted at the University of Southampton
– System provided by OCF and IBM, based on the IBM iDataPlex platform
– Upgrade to the existing system to create a 12,000-core Intel x86 Westmere system
– 113 TFLOP peak performance
– High-speed InfiniBand interconnect
– High-speed GPFS parallel file system
– Managed by Moab/Torque
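On a Moab/Torque-managed cluster of this kind, work is submitted as a batch script of `#PBS` directives. The sketch below is illustrative only: the job name, walltime, and executable are hypothetical, not actual IRIDIS settings; the `ppn=12` value assumes 12-core Westmere nodes.

```shell
#!/bin/bash
#PBS -N mpi_job            # job name (hypothetical)
#PBS -l nodes=4:ppn=12     # 4 nodes x 12 processes per node
#PBS -l walltime=02:00:00  # wall-clock limit
#PBS -j oe                 # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"        # Torque starts jobs in $HOME; return to the submit directory
mpirun -np 48 ./my_solver  # hypothetical MPI executable
```

The script would be submitted with `qsub job.pbs`; Torque acts as the resource manager while Moab handles scheduling policy and priorities on top.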
System 2 – hosted by STFC, Rutherford Appleton Laboratory
Largest in the UK, second largest GP-GPU system in Europe
– Built by HP, integrating Panasas storage and Gnodal networking
– Based on NVIDIA Tesla M2090 GPU cards
– 114 TFLOP performance measured for the Top500
– 372 NVIDIA M2090 GP-GPUs
– 3 login nodes
– 84 HP compute nodes (a mix of 3-GPU and 8-GPU nodes)
– 135 TB Panasas ActiveStor 11 storage
– Dual interconnects across the entire cluster: 10 GbE (Gnodal) and QDR InfiniBand
– Managed via the LSF scheduler
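Under LSF, jobs are described with `#BSUB` directives instead. A minimal sketch of a GPU job submission, assuming hypothetical names throughout (the job name, limits, and executable are illustrative, and GPU resource strings are site-configured, so actual EMERALD settings may differ):

```shell
#!/bin/bash
#BSUB -J gpu_md            # job name (hypothetical)
#BSUB -n 8                 # 8 CPU slots
#BSUB -W 04:00             # 4-hour wall-clock limit
#BSUB -o %J.out            # stdout file (%J expands to the job ID)
#BSUB -e %J.err            # stderr file

# GPU requests in LSF are expressed via site-defined resource strings,
# e.g. something like: #BSUB -R "rusage[ngpus_excl_p=2]"
./my_cuda_app              # hypothetical CUDA executable
```

Unlike `qsub`, LSF reads the directives from standard input, so the script would be submitted as `bsub < job.lsf`.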
[Chart: usage by processor hours, 4,000-core partition] Disciplines represented: earth sciences and earth systems modelling, climate modelling, biomaterials, catalysis, renewable energy, earth materials, astronomy and astrophysics, atmospheric physics, chemistry, biochemistry, zoology, oncology, human genetics, neuroscience and neuroimaging, structural biology.
[Chart: usage by CPU hours] Disciplines represented: molecular dynamics, chemistry, CFD (aeronautics and mathematics), software engineering (optimisation), biological signalling, computational statistics, biochemistry, zoology, neuroscience and neuroimaging, maths, finance, statistics. Industrial users: pharmaceuticals, aerospace.