
1 Science Grid Program NAREGI And Cyber Science Infrastructure November 1, 2007 Kenichi Miura, Ph.D. Information Systems Architecture Research Division Center for Grid Research and Development National Institute of Informatics Tokyo, Japan

2 Outline
1. National Research Grid Initiative (NAREGI)
2. Cyber Science Infrastructure (CSI)

3 National Research Grid Initiative (NAREGI) Project: Overview
- Originally started as an R&D project funded by MEXT (FY2003-FY2007)
- 2 B Yen (~17 M$) budget in FY
- Collaboration of National Labs., Universities, and Industry in the R&D activities (IT and Nano-science Apps.)
- Project redirected as a part of the Next Generation Supercomputer Development Project (FY2006-…)
MEXT: Ministry of Education, Culture, Sports, Science and Technology

National Research Grid Initiative (NAREGI) Project: Goals
(1) To develop a Grid Software System (R&D in Grid Middleware and Upper Layer) as the prototype of the future Grid Infrastructure for scientific research in Japan
(2) To provide a Testbed to prove that the High-end Grid Computing Environment (100+ Tflop/s expected by 2007) can be practically utilized by the nano-science research community over Super SINET (now SINET3)
(3) To participate in International Collaboration/Interoperability (U.S., Europe, Asia Pacific) → GIN
(4) To contribute to Standardization Activities, e.g., OGF

5 Organization of NAREGI (funded by MEXT: Ministry of Education, Culture, Sports, Science and Technology)
- Center for Grid Research and Development (National Institute of Informatics); Project Leader: Dr. K. Miura
  - Grid Middleware and Upper Layer R&D
  - Grid Middleware Integration and Operation Group
- Computational Nano-science Center (Institute for Molecular Science); Dir.: Dr. F. Hirata
  - R&D on Grand Challenge Problems for Grid Applications (ISSP, Tohoku-U, AIST, Inst. Chem. Research, KEK, etc.)
- Coordination and Operation Committee
- Collaboration / Joint R&D with: Grid Technology Research Center (AIST), JAEA (ITBL), Computing and Communication Centers (7 National Universities), TiTech, Kyushu-U, Osaka-U, Kyushu-Tech., Fujitsu, Hitachi, NEC, and the Industrial Association for Promotion of Supercomputing Technology
- Deployment and operation: Cyber Science Infrastructure (CSI) over SINET3

NAREGI Software Stack (Globus, Condor, UNICORE → OGSA)
- Grid-Enabled Nano-Applications
- Grid PSE / Grid Workflow / Grid Visualization
- Grid Programming: GridRPC, GridMPI
- Super Scheduler / Distributed Information Service / Data Grid / Grid VM / Packaging
- High-Performance & Secure Grid Networking (SINET3)
- Computing Resources (NII, IMS, other research organizations, etc.)

7 VO and Resources in Beta 2: Decoupling VOs and Resource Providers (Centers)
- VOs & Users side: per-VO Information Service (IS) and Super Scheduler (SS), clients, and Grid VOMS (e.g., VO-RO1, VO-RO2, VO-APL1, VO-APL2)
- Resource Provider side: Research Organizations (RO1, RO2, RO3) host GridVM and IS instances with per-VO policies

8 WP-2: Grid Programming – GridRPC/Ninf-G2 (AIST/GTRC)
GridRPC
- Programming model using RPC on the Grid
- High-level, tailored for scientific computing (c.f. SOAP-RPC)
- GridRPC API standardization by the GGF GridRPC WG
Call flow: the server-side IDL file is compiled into a remote executable and interface information (published as an LDIF file in MDS); the client (1) requests and (2) receives the interface, (3) invokes the remote executable through GRAM, which forks and (4) connects back to the client to run the numerical library.
Ninf-G Version 2
- A reference implementation of the GridRPC API
- Implemented on top of Globus Toolkit 2.0 (3.0 experimental)
- Provides C and Java APIs
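As a concrete illustration of the GridRPC programming model, here is a minimal client sketch written against the GGF GridRPC C API that Ninf-G implements. The remote routine name "sample/matmul", its argument list, and the client configuration file name are hypothetical placeholders (in a real deployment they come from the server-side IDL), so this shows the call pattern rather than a NAREGI-supplied example.

```c
/* Minimal GridRPC client sketch (GGF GridRPC C API, as implemented by
 * Ninf-G).  "client.conf" and the remote routine "sample/matmul" with
 * its (n, a, b, c) argument list are hypothetical; the real names and
 * signature are defined by the server-side IDL. */
#include <stdio.h>
#include <stdlib.h>
#include <grpc.h>

#define N 64

int main(int argc, char *argv[])
{
    grpc_function_handle_t handle;
    static double a[N * N], b[N * N], c[N * N];

    /* Read server host, port, and protocol settings. */
    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return EXIT_FAILURE;
    }

    /* Bind a handle to a routine registered on the default server. */
    if (grpc_function_handle_default(&handle, "sample/matmul") != GRPC_NO_ERROR) {
        fprintf(stderr, "could not create a function handle\n");
        grpc_finalize();
        return EXIT_FAILURE;
    }

    /* Synchronous remote call: the remote executable is launched on the
       server, runs the numerical library, and returns the result in c. */
    if (grpc_call(&handle, N, a, b, c) != GRPC_NO_ERROR)
        fprintf(stderr, "remote call failed\n");

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return EXIT_SUCCESS;
}
```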

9 WP-2: Grid Programming – GridMPI (AIST and U-Tokyo)
■ GridMPI is a library which enables MPI communication between parallel systems in the grid environment. This realizes:
・ Huge-data-size jobs which cannot be executed on a single cluster system
・ Multi-physics jobs in a heterogeneous CPU-architecture environment
① Interoperability:
- IMPI (Interoperable MPI)-compliant communication protocol
- Strict adherence to the MPI standard in the implementation
② High performance:
- Simple implementation
- Built-in wrapper to vendor-provided MPI libraries
(Clusters A and B each run YAMPII locally and are coupled through an IMPI server.)
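Because GridMPI adheres strictly to the MPI standard, an existing MPI application needs no source changes to span two clusters; the IMPI coupling is configured at launch time, outside the program. A minimal sketch of such a program (standard MPI calls only, nothing GridMPI-specific is assumed):

```c
/* Plain MPI program: under GridMPI it can run unchanged across two
 * clusters whose local MPI libraries (e.g., YAMPII) are bridged by an
 * IMPI server.  No GridMPI-specific calls are needed -- that is the
 * point of strict MPI-standard adherence. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double local, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank computes a partial value; ranks may be placed on
       different clusters, with inter-cluster messages relayed over
       the IMPI-compliant protocol. */
    local = (double)(rank + 1);
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %.1f\n", size, total);

    MPI_Finalize();
    return 0;
}
```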

10 WP-3: User-Level Grid Tools & PSE
Grid PSE
- Deployment of applications on the Grid
- Support for execution of deployed applications
Grid Workflow
- Workflow language independent of specific Grid middleware
- GUI in task-flow representation
Grid Visualization
- Remote visualization of massive data distributed over the Grid
- General Grid services for visualization

12 Workflow-based Grid FMO Simulations of Proteins (by courtesy of Prof. Aoyagi, Kyushu Univ.)
Workflow diagram: fragment data and input data feed the monomer calculations, followed by the dimer calculations and the total-energy calculation, with density exchange between stages and visualization of the result; data components map the constituent jobs onto NII and IMS resources.

13 Scenario for Multi-site MPI Job Execution
- Site A hosts the user-facing services: PSE, CA, Workflow Tool (WFT), Super Scheduler, and Distributed Information Service.
- Site B (SMP machine, 64 CPUs) runs the RISM MPI job and the IMPI server; Site C (PC cluster, 128 CPUs) runs the FMO MPI job. Each compute site has its own local scheduler and GridVM, and the two jobs are coupled via GridMPI over IMPI.
- Steps: (a) registration, (b) deployment, and (c) editing of the RISM/FMO sources and the workflow through the PSE; (1) submission, (3) negotiation and agreement following a resource query, (4) reservation, (5) sub-job dispatch, (6) co-allocation and submission, (10) monitoring and accounting; output files are examined with Grid Visualization.

14 Adaptation of Nano-science Applications to the Grid Environment
- RISM (Reference Interaction Site Model): solvent distribution analysis
- FMO (Fragment Molecular Orbital method): electronic structure analysis
- The two applications, deployed on IMS and NII resources and coupled over SINET3 by the grid middleware (Globus, MPICH-G2, GridMPI), exchange data with transformation between their different meshes to obtain the electronic structure in solutions.

15 NAREGI Application: Nanoscience – Simulation Scheme (by courtesy of Prof. Aoyagi, Kyushu Univ.)
- 3D-RISM works on an evenly-spaced mesh, producing pair correlation functions and the solvent distribution.
- FMO works on adaptive meshes, performing monomer and dimer calculations and providing effective charges on solute sites.
- A Mediator handles data exchange between the two meshes, finding correlations between mesh points.

16 Collaboration in the Data Grid Area
- High Energy Physics (GIN): KEK, EGEE
- Astronomy: National Astronomical Observatory (Virtual Observatory)
- Bio-informatics: BioGrid Project

17 NAREGI Data Grid Environment
- Data Grid components: Grid-wide File System, Metadata Construction, Data Access Management, Data Resource Management, and Grid-wide DB Querying.
- These allow Grid Workflow jobs to import data into the workflow, place and register data on the Grid, assign metadata to data, and store data into distributed file nodes.

18 Roadmap of NAREGI Grid Middleware (FY2003 – FY2007)
- Early phase: UNICORE-based R&D framework; prototyping of NAREGI middleware components; development and integration of the α version (internal release); evaluation of the α version on the NII-IMS testbed; midpoint evaluation.
- Middle phase: transition to an OGSA/WSRF-based R&D framework; development and integration of the β version (β1 release, then β2 limited distribution); evaluation of the β version by IMS and other collaborating institutes on the NAREGI wide-area testbed; application of the component technologies to nano applications; deployment of the β version.
- Final phase (FY2007): development of the OGSA-based middleware, verification and evaluation, and the Version 1.0 release.

19 Highlights of the NAREGI β Release ('05-'06)
1. Resource and Execution Management
 - GT4/WSRF-based OGSA-EMS incarnation (the first such incarnation in the world)
 - Job management, brokering, reservation-based co-allocation, monitoring, accounting
 - Network traffic measurement and control
2. Security
 - Production-quality CA (NAREGI operates a production-level CA in APGrid PMA)
 - VOMS/MyProxy-based identity/security/monitoring/accounting
3. Data Grid
 - WSRF-based grid-wide data sharing with Gfarm: grid-wide seamless data access and support for data-format exchange
4. Grid-Ready Programming Libraries
 - Standards-compliant GridMPI (MPI-2) and GridRPC with high-performance communication
 - Bridge tools for different types of applications in a concurrent job
5. User Tools
 - Web-based Portal
 - Workflow tool with NAREGI-WFML
 - WS-based application contents and deployment service (a reference implementation of OGSA-ACS)
 - Large-scale interactive Grid visualization

20 NAREGI Version 1.0 (to be developed in FY2007)
More flexible scheduling methods
- Reservation-based scheduling
- Coexistence with locally scheduled jobs
- Support of non-reservation-based scheduling
- Support of “bulk submission” for parameter-sweep-type jobs
Improvement in maintainability
- More systematic logging using the Information Service (IS)
Easier installation procedure
- apt-rpm
- VM
(Operability, Robustness, Maintainability)

21 Science Grid NAREGI – Middleware Version 1.0 Architecture

22 Network Topology of SINET3
- SINET3 has 63 edge nodes (edge L1 switches) and 12 core nodes (core L1 switch + IP router), i.e., 75 layer-1 switches and 12 IP routers.
- Edge links run at 1 Gbps to 20 Gbps; backbone links at 10 Gbps to 40 Gbps, including Japan’s first 40 Gbps (STM-256) lines between Tokyo, Nagoya, and Osaka.
- The backbone links form three loops to enable quick service recovery against network failures and efficient use of the network bandwidth.
- International links to Los Angeles, New York, Hong Kong, and Singapore (10 Gbps, 2.4 Gbps, and 622 Mbps circuits).

23 NAREGI Phase 1 Testbed (~3000 CPUs, ~17 Tflops; SINET3, 10 Gbps MPLS)
- Center for Grid R&D (NII): ~5 Tflops
- Computational Nano-science Center (IMS): ~10 Tflops
- Osaka Univ. BioGrid, TiTech Campus Grid, AIST SuperCluster
- Small test application clusters at ISSP, Kyoto Univ., Tohoku Univ., KEK, Kyushu Univ., and AIST

24 Computer System for Grid Software Infrastructure R&D – Center for Grid Research and Development (5 Tflop/s, 700 GB)
- File server: PRIMEPOWER 900 + ETERNUS3000 + ETERNUS LT160; 1 node / 8 CPUs (SPARC64 V 1.3 GHz), 16 GB memory, 10 TB storage, back-up up to 36.4 TB
- SMP compute servers (1 node each): PRIMEPOWER HPC2500 (UNIX, SPARC64 V 1.3 GHz / 64 CPUs; 128 GB memory, 441 GB storage), SGI Altix 3700 (Itanium2 1.3 GHz / 32 CPUs; 32 GB memory, 180 GB storage), IBM pSeries 690 (POWER4 1.3 GHz / 32 CPUs; 64 GB memory, 480 GB storage)
- Distributed-memory compute servers: high-performance PRIMERGY RX200 units on InfiniBand 4X (8 Gbps), 128 CPUs (Xeon 3.06 GHz) + control node each, plus Express 5800 and HPC LinuxNetworx units on GbE (1 Gbps), 128 CPUs (Xeon 2.8 GHz) + control node each; per-unit memory 65-130 GB, per-unit storage 4.7-9.4 TB
- External and internal networks (Ext. NW, Intra NW-A, Intra NW-B) via L3 switches at 1 Gbps (upgradable to 10 Gbps), connected to SINET3

25 Computer System for Nano Application R&D – Computational Nano-science Center (10 Tflop/s, 5 TB)
- SMP compute server: 16 ways × 50 nodes (POWER4+ 1.7 GHz) on a multi-stage crossbar network; 3072 GB memory, 2.2 TB storage; 5.4 TFLOPS
- Distributed-memory compute server (4 units): 818 CPUs (Xeon 3.06 GHz) + control nodes, Myrinet2000 (2 Gbps); 1.6 TB memory, 1.1 TB storage per unit; 5.0 TFLOPS
- File server: 16 CPUs (SPARC64 GP 675 MHz); 8 GB memory, 30 TB storage, 25 TB back-up
- Front-end servers, VPN, firewall, and CA/RA server; L3 switch at 1 Gbps (upgradable to 10 Gbps), connected via SINET3 to the Center for Grid R&D

26 Future Direction of NAREGI Grid Middleware
- Center for Grid Research and Development (National Institute of Informatics): Grid middleware R&D (resource management in the Grid environment, Grid programming environment, Grid application environment, Data Grid environment, high-performance & secure Grid networking); productization of general-purpose Grid middleware for scientific computing; personnel training (IT and application engineers); contribution to the international scientific community and to standardization
- Computational Nano-science Center (Institute for Molecular Science): Grid-enabled nano applications; computational methods for nanoscience using the latest Grid technology; research areas: large-scale computation, high-throughput computation, new methodology for computational science; evaluation of the Grid system with nano applications; progress in the latest research and development (nano, biotechnology)
- Industrial Committee for Super Computing Promotion: requirements from industry with regard to the Science Grid for industrial applications; solicited research proposals from industry to evaluate applications; use in industry (new intellectual product development); vitalization of industry
- Goal: a Science Grid Environment, i.e., Grid middleware for large computer centers within the Cyber Science Infrastructure (CSI), toward a petascale computing environment for scientific research

27 Outline
1. National Research Grid Initiative (NAREGI)
2. Cyber Science Infrastructure (CSI)

28 Cyber Science Infrastructure: Background
A new information infrastructure is needed in order to boost today’s advanced scientific research.
- Integrated information resources and systems: supercomputers and high-performance computing, software, databases and digital contents such as e-journals, and “human” resources and the research processes themselves
- U.S.A.: Cyberinfrastructure (CI)
- Europe: EU e-Infrastructure (EGEE, DEISA, …)
A break-through in research methodology is required in various fields such as nano-science/technology and bioinformatics/life sciences.
- The key to industry/academia cooperation: from ‘Science’ to ‘Intellectual Production’
A new comprehensive framework of information infrastructure in Japan: the Cyber Science Infrastructure.
Advanced information infrastructure for research will be the key to international cooperation and competitiveness in future science and engineering areas.

29 Cyber-Science Infrastructure (CSI) for R&D
- UPKI: national research PKI infrastructure
- SuperSINET and beyond: lambda-based academic networking backbone connecting Hokkaido-U, Tohoku-U, Tokyo-U, NII, Nagoya-U, Kyoto-U, Osaka-U, and Kyushu-U (plus TiTech, Waseda-U, KEK, etc.)
- NAREGI outputs: deployment of NAREGI middleware, virtual labs, live collaborations
- Academic contents: GeNii (Global Environment for Networked Intellectual Information), NII-REO (Repository of Electronic Journals and Online Publications)
- Surrounding activities: industry/societal feedback, international infrastructural collaboration, restructuring of university IT research resources, extensive on-line publication of results

30 Structure of CSI and the Role of the Grid Operation Center (GOC)
- GOC at the National Institute of Informatics: deployment and operation of the middleware, technical support, operation of the CA, VO user administration, user training, and feedback to the R&D group
- Center for Grid Research and Development: NAREGI middleware R&D and operational collaboration with the GOC
- VOs served: university/national supercomputing center VOs, domain-specific research organization VOs (IMS, AIST, KEK, NAO, etc.), research project VOs, industrial project VOs, the research community VO, the e-Science community, and the peta-scale system VO
- Supporting infrastructure: networking infrastructure (SINET3), the UPKI system, and the academic contents service
- Working groups: WG for inter-university PKI, WG for Grid middleware (R&D / support to operations), WG for networking (planning/operations)
- International collaboration: EGEE, TeraGrid, DEISA, OGF, etc.

31 Cyber Science Infrastructure

32 Expansion Plan of the NAREGI Grid
- Petascale Computing Environment
- National Supercomputer Grid (Tokyo, Kyoto, Nagoya, …)
- Domain-specific Research Organizations (IMS, KEK, NAOJ, …) and Domain-specific Research Communities
- Departmental Computing Resources
- Laboratory-level PC Clusters
All layers federated by the NAREGI Grid Middleware, with interoperability (GIN, EGEE, TeraGrid, etc.)

33 Cyberinfrastructure (NSF)
- Leadership class: Track 1 petascale system (NCSA), > 1 Pflops
- National (> 500 Tflops): Track 2 systems (TACC, UTK/ORNL, FY2009); NSF Supercomputer Centers (SDSC, NCSA, PSC)
- Local: Tflops-class resources
- Network infrastructure: TeraGrid
- Four important areas: high-performance computing; data, data analysis & visualization; virtual organizations for distributed communities; learning & workforce development
- Slogan: Deep – Wide – Open

34 EU’s e-Infrastructure (HET)
- European HPC center(s): > 1 Pflops; PACE petascale project (2009?)
- National/regional centers with Grid collaboration: Tflops class (DEISA, EGEE → EGI)
- Tier 1 / Tier 2 / Tier 3 local centers
- Network infrastructure: GEANT2
HET: HPC in Europe Task Force; PACE: Partnership for Advanced Computing in Europe; DEISA: Distributed European Infrastructure for Supercomputer Applications; EGEE: Enabling Grids for E-sciencE; EGI: European Grid Initiative

35 Summary
- NAREGI Grid middleware will enable seamless federation of heterogeneous computational resources.
- Computations in nano-science/technology applications over the Grid are to be promoted, including participation from industry.
- NAREGI Grid middleware is to be adopted as one of the important components of the new Japanese Cyber Science Infrastructure framework.
- NAREGI is planned to provide the access and computational infrastructure for the Next Generation Supercomputer System.

36 Thank you!