HPC-related R&D in 863 Program Depei Qian Sino-German Joint Software Institute (JSI) Beihang University Aug. 27, 2010

Outline
- The 863 key project on HPC and Grid
- Status and the next 5 years

863 efforts on HPC and Grid
"High performance computer and core software"
- 4-year effort, May 2002 to Dec. 2005
- 100 million Yuan funding from the MOST
- More than 2x associated funds from local governments, application organizations, and industry
- Outcome: China National Grid (CNGrid)
"High productivity Computer and Grid Service Environment"
- Period: 2006-2010
- 940 million Yuan from the MOST and more matching investment from other sources

Major R&D activities
- HPC development
- Grid software development
- CNGrid development
- Grid and HPC applications

1. HPC development
Two-phase development
First phase: two 100TFlops machines
- Dawning 5000A for SSC
- Lenovo DeepComp 7000 for SCCAS
Second phase: three 1000TFlops machines for
- Tianjin Supercomputing Center (Binhai New Zone)
- South China Supercomputing Center (Shenzhen)
- Shandong Supercomputing Center (Jinan)

Dawning 5000A
- Constellation based on AMD multicore processors
- Low-power CPUs and high-density blade design
- High performance InfiniBand switch
- 233.5TFlops peak performance, 180.6TFlops Linpack performance
- The 10th in TOP500 in Nov. 2008, the fastest machine outside the USA

Lenovo DeepComp 7000
- Hybrid cluster architecture using Intel multicore processors
- Two sets of interconnect: InfiniBand and Gb Ethernet
- SAN connection between I/O nodes and disk array
- 145.9TFlops peak performance, 102.8TFlops Linpack performance
- The 19th in TOP500 in Nov. 2008

Tianhe I
- Hybrid system
  - General purpose unit: Intel 4-core processors
  - Acceleration unit: AMD GPUs
  - Service unit: Intel 4-core processors
- 1206TFlops peak (general purpose: 200+TF; GPU: 900+TF)
- 560TFlops Linpack performance; reasonable efficiency for a hybrid system
- 1.6MW
- Announced on Oct. 29, 2009
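To put the "reasonable efficiency" remark in context, here is a quick Linpack-efficiency (Rmax/Rpeak) calculation for the three machines above. The Dawning and DeepComp peak values are the TOP500 Nov. 2008 figures, filled in where the transcript had dropped them, so treat the exact numbers as approximate:

```python
# Linpack efficiency (Rmax / Rpeak) for the three systems above.
# (Rmax TFlops, Rpeak TFlops); figures as quoted on these slides.
systems = {
    "Dawning 5000A":        (180.6, 233.5),
    "Lenovo DeepComp 7000": (102.8, 145.9),
    "Tianhe I (hybrid)":    (560.0, 1206.0),
}

for name, (rmax, rpeak) in systems.items():
    print(f"{name:22} {rmax:7.1f} / {rpeak:7.1f} TFlops = {rmax / rpeak:.1%}")
```

This prints roughly 77% and 70% for the two homogeneous CPU clusters but about 46% for the hybrid Tianhe I, which is what "reasonable for a hybrid system" means here: mid-40s efficiency was typical for early CPU+GPU machines.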

Tianhe I (cont.)
- High speed network chips developed: 2x QDR InfiniBand
- Coordination of heterogeneous components of the system
- Programming support for the heterogeneous system
The second phase of Tianhe (Tianhe II?) will be completed by the end of this year
- Adding more general purpose computing power (1PF)
- Adding a small amount of domestic processors
- Replacing the AMD GPUs with nVIDIA's new Fermi GPUs (1.3PF)

Dawning 6000
Hybrid system
- Service unit (Nebulae)
  - 9600 Intel 6-core Westmere processors
  - 4800 nVidia Fermi GPGPUs
  - 3PF peak performance
  - 1.27PF Linpack performance
  - About 2.6MW
- Computing unit
  - Implemented with Loongson 3B processors
  - To be completed in 2011

Issues with hybrid architectures
- How to use the heterogeneous parts of the system effectively?
- Applications that can effectively use the accelerators are still limited
- Programming the hybrid system is difficult; programming support for hybrid systems is needed
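The first two points can be quantified with a simple Amdahl-style model: if only a fraction of an application's work can be offloaded to the accelerator, sustained performance falls far short of combined peak. A minimal sketch, with illustrative numbers loosely based on the Tianhe I CPU/GPU split above (not measurements of any real system):

```python
# Amdahl-style estimate of sustained performance on a hybrid system.
# cpu_tf / gpu_tf: peak TFlops of each part; f: fraction of the work
# that can run on the accelerator. Worst case: the two parts serialize.
def hybrid_sustained_tflops(cpu_tf, gpu_tf, f):
    time_per_unit = f / gpu_tf + (1.0 - f) / cpu_tf
    return 1.0 / time_per_unit

cpu_tf, gpu_tf = 200.0, 900.0  # roughly the Tianhe I split above
for f in (0.0, 0.5, 0.9, 0.99):
    s = hybrid_sustained_tflops(cpu_tf, gpu_tf, f)
    print(f"offloadable fraction {f:4.2f}: ~{s:6.1f} TFlops "
          f"({s / (cpu_tf + gpu_tf):5.1%} of combined peak)")
```

Even when 90% of the work is offloadable, the model reaches only about 61% of combined peak, which is why programming support for exposing more of the application to the accelerator matters so much.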

2. Grid software development
Goals:
- Developing system-level software for supporting grid environment operation and grid applications
- Pursuing technological innovation
- Emphasizing maturity and robustness of the software

CNGrid GOS Architecture (software stack, top to bottom):
- Grid Portal, Gsh+CLI, GSML Workshop and Grid Apps
- Core, System and App Level Services
- Axis Handlers for Message Level Security
- J2SE (1.4.2_07, 1.5.0_07), Tomcat + Axis (1.2rc2)
- OS (Linux/Unix/Windows), PC Server (Grid Server)

CNGrid GOS deployment
- CNGrid GOS deployed on 10 sites and some application Grids
- Supports heterogeneous HPCs: Galaxy, Dawning, DeepComp
- Supports multiple platforms: Unix, Linux, Windows
- Uses public network connections, with only the HTTP port enabled
- Flexible clients: web browser, special client, GSML client
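Because only the HTTP port is open, every client on this slide, browser or otherwise, ultimately reduces to plain HTTP requests against the portal. A minimal client sketch in that spirit; the portal host and the /jobs path are hypothetical placeholders, not the actual CNGrid GOS interface:

```python
# Minimal HTTP-only grid-client sketch. The portal host and the /jobs
# path are hypothetical placeholders, NOT the real CNGrid GOS endpoints.
import urllib.error
import urllib.request

GOS_PORTAL = "http://gos.example.org"  # hypothetical portal address

def list_jobs(user: str) -> str:
    # All traffic goes over the single open HTTP port, matching the
    # "enable only HTTP port" deployment constraint on this slide.
    url = f"{GOS_PORTAL}/jobs?user={user}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8")
    except urllib.error.URLError as exc:
        return f"request failed: {exc}"

if __name__ == "__main__":
    print(list_jobs("demo_user"))
```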

3. CNGrid development
Ten sites:
- CNIC, CAS (Beijing, major site)
- Shanghai Supercomputer Center (Shanghai, major site)
- Tsinghua University (Beijing)
- Institute of Applied Physics and Computational Mathematics (Beijing)
- University of Science and Technology of China (Hefei, Anhui)
- Xi'an Jiaotong University (Xi'an, Shaanxi)
- Shenzhen Institute of Advanced Technology (Shenzhen, Guangdong)
- University of Hong Kong (Hong Kong)
- Shandong University (Jinan, Shandong)
- Huazhong University of Science and Technology (Wuhan, Hubei)
The CNGrid Operation Center (based on CNIC, CAS)
Three PF sites will be integrated into CNGrid

Site capacities:
- CNIC: 150TFlops, 1.4PB storage, 30 applications, 269 users all over the country, IPv4/v6 access
- SSC: 200TFlops, 600TB storage, 15 applications, 286 users, IPv4/v6 access
- Tsinghua University: 1.33TFlops, 158TB storage, 29 applications, 100+ users, IPv4/v6 access
- IAPCM: 1TFlops, 4.9TB storage, 10 applications, 138 users, IPv4/v6 access
- USTC: 1TFlops, 15TB storage, 18 applications, 60+ users, IPv4/v6 access
- XJTU: 4TFlops, 25TB storage, 14 applications, 120+ users, IPv4/v6 access
- SIAT: 1.44TFlops, 17.6TB storage, IPv4/v6 access
- HKU: 2.6TFlops, 80+ users, IPv4/v6 access
- Shandong University: 1.3TFlops, 18TB storage, 7 applications, 60+ users, IPv4/v6 access
- HUST: 1.7TFlops, 15TB storage, IPv4/v6 access

CNGrid resources: 10 sites, 380TFlops, 2200TB storage

CNGrid services and users: 230 services, >1400 users, including China Commercial Aircraft Corp., Baosteel, automobile industry, institutes of CAS, universities, and others

CNGrid applications: supporting >700 projects, including 973, 863, NSFC, CAS Innovative, and Engineering projects

CAS Supercomputing Center (SCCAS)
- Established at CNIC, CAS in 1996
- Current installation: Lenovo DeepComp 7000
- Applications: 200+ registered users, supporting more than 120 important projects
- A number of important achievements obtained

Shanghai Supercomputing Center (SSC)
- Established by the Shanghai city government in 2000
- Current installation: Dawning 5000A
- 300+ users

Utilization (SSC)

Application scale (SSC)

4. Grid and HPC applications
- Developing productive HPC and Grid applications
- Verification of the technologies
- Applications from selected areas: resource and environment, research, services, manufacturing

Grid applications
- Drug discovery
- Weather forecasting
- Scientific Data Grid and its application in research
- Water resource information system
- Grid-enabled railway freight information system
- Grid for Chinese medicine database applications
- HPC and Grid for the aerospace industry (AviGrid)
- National forestry project planning, monitoring and evaluation

HPC applications
- Computational chemistry
- Computational astronomy
- Parallel program for large fluid machinery design
- Fusion ignition simulation
- Parallel algorithms for bio- and pharmacy applications
- Parallel algorithms for weather forecasting based on GRAPES
- Core scale simulation for aircraft design
- Seismic imaging for oil exploration
- Parallel algorithm libraries for PetaFlops systems

Domain application Grids
- Simulation and optimization: automobile industry, aircraft design, steel industry
- Scientific computing: bioinformatics applications, computational chemistry
Introducing the Cloud Computing concept:
- CNGrid as IaaS and partially PaaS
- Domain application Grids as SaaS and partially PaaS

CNGrid (2006-2010)
- HPC systems: two 100TFlops systems and three PFlops systems
- Grid software: CNGrid GOS
- CNGrid environment: 12 sites, one operation center, some domain application Grids
- Applications: research, resource & environment, manufacturing, services

China's status in related fields
- Significant progress in certain areas
- Still far behind in many aspects: kernel technologies, applications, multi-disciplinary research, talented people
- Sustainable development is crucial

Next 5 years
ChinaCloud
- A priority key project to be launched this year
- Strategic study to identify the goal and the scope of the project
- Network OS, cloud-based search engine, and cloud-based language translation
High-end computing infrastructure (still pending)
- Goals? Demands? HPC in the cloud?

HPC Consortium
- An HPC consortium based on CNGrid is under consideration
- Part of the country's cyber-infrastructure
  - High-end computing facility for research
  - Simulation and optimization capability needed by industry
- A consortium for taking on national R&D projects
- A new model of operation needs to be defined
- The HKU site is extremely valuable to CNGrid and the future consortium: effective operation, supporting applications, promoting international cooperation

Thank you!