Knowledge Paradigm
- High-performance computing (petaflop by 2010 and beyond): development, acquisition, and creative use are scaling up momentum
- Petabytes of storage and beyond: remote-sensing satellites, MST radars, and global scientific mission forums demand tremendous storage
- High-speed networks (terabits per second by 2010 and beyond): key enabler for engineering data and bandwidth-intensive applications
- Grid computing: supports distributed computing, problem-solving environments, and collaboration tools; already identified as a most important area, with PoC GARUDA under implementation by C-DAC
Knowledge Paradigm (Contd…)
- Data centre and web services: the emergence of world-class telecom infrastructure and the success of the IT sector augur well for a host of application sectors, from bioinformatics to e-governance
- Knowledge management tools: with over 300 of the Fortune 500 and all top global ICT MNCs setting up development centres and increasingly positioning research labs in India, along with BPO and KPO centres and VC funding, grid marketing and innovation are happening
- Secure cyber infrastructure: demand for trusted and reliable infrastructure services is increasing
- Multilingual computing: 22 official languages touching over 90% of non-English-speaking people
- Broadband and mobile wireless: fastest growing in India, permeating to villages
Presentation Outline
- Background
- Indian Initiatives
- C-DAC & PoC Garuda
- Grid Partners
- Discussions

06-Sept-2005, Presentation to Internet2 & World Bank
Indian Initiatives
Networking, grid computing and application/sectoral domains:
- ERNET
- IQNET
- PoC GARUDA
ERNET: Education & Research Network
- Started as a collaborative initiative of 8 premier institutions (5 IITs, IISc, NCST, DOE) by the Department of Electronics (now the Department of Information Technology), Government of India, in 1986 with UNDP funding
- Research and development in computer networking
- Built campus LANs, established WANs (terrestrial and satellite), and made the first connection from India to the Internet (UUNET) in February 1989
ERNET: Education & Research Network (Contd…)
Today:
- 13 Points of Presence (PoPs) at premier E&R institutions in the country
- STM-1 (155 Mbps)-ready fibre-optic backbone
- Satellite hub in C-band (Bangalore), beaming 3 transponders of 36 MHz
- IP multicasting
- Webcasting channel for distance learning / video broadcasting
- Secure infrastructure: intrusion detection server, firewall, intruder alert manager, gateway antivirus server, anti-spam control, sniffer
- Lab dedicated to network education and training
Internet connectivity provided by ERNET
User base:
- 172 universities (250-300)
- 245 R&D institutions (500)
- 52 engineering colleges (800)
- 251 Navodaya and government schools
- 274 ICAR institutions
- 322 other educational users/organizations
Installed base:
- 559 TDM/TDMA and 77 SCPC and DAMA VSATs
- 14 radio links and 173 leased lines
Internet connectivity under various schemes
- AICTE Net: connectivity to AICTE-recognized colleges and regional centres; a total of 40 institutions connected; very large potential
- UGC Infonet: MoU signed on 4 April 2003; 152 universities connected over the ERNET backbone; a scalable network with multimedia capabilities for video conferencing and distance learning
- ICAR Net: network to be implemented in two phases; a total of 274 institutions have been connected; the network will support applications such as VoIP, IP fax and video conferencing
- NVS Net: connectivity provided by VSATs to the NVS headquarters at New Delhi; 100 schools have been connected
Universities / R&D institutions proposed to be connected in the 1st phase (ERNET PoPs): Univ. of Jammu; Panjab Univ., Chandigarh; Univ. of Rajasthan, Jaipur; TIFR and BARC, Mumbai; IUCAA, Pune; CAT, Indore; IISc, Bangalore; IIT Chennai; Univ. of Hyderabad; IOP, Bhubaneshwar; VECC, Kolkata; IIT Guwahati; AMU; IIT Kanpur; DU, Delhi.
Proposed 34 Mbps IPLC link from the ERNET backbone to the multi-gigabit pan-European research network connecting 32 European countries and 28 NRENs, with backbone capacity in the range 34 Mb/s to 10 Gb/s.
[Map legend of European country codes and additional proposed links omitted]
IQNET: National QoS Test Bed (2005-2007)
- Collaborative effort between C-DAC, ERNET and the IITs (Madras, Bombay, Delhi & Kharagpur)
- QoS test bed for experimenting with research ideas
- Research activities: measurement initiative, VoIP initiative, policy-based QoS initiative
- Expected outcomes include providing QoS in the Internet, interplay with non-QoS networked applications, and control and management of QoS in IP networks
IQNET: Envisaged Connectivity
- Creation of local test beds connected to the QoS WAN test bed
- The QoS WAN test bed will overlay the existing ERNET backbone
- QoS test-bed traffic and regular ERNET traffic are logically separated by running them over two separate VPNs
[Network diagram: an MPLS cloud with provider (P) routers at Mumbai, Pune, Kolkata, Delhi, Bangalore and IIT Chennai, and IPv4/IPv6-enabled provider-edge (6PE) routers at Kanpur, Univ. of Hyderabad, IISc and C-DAC Bangalore, connecting LANs at IIT Delhi, IIT Mumbai, IIT Kharagpur, ERNET Delhi, IISc Bangalore and IIT Chennai over QoS-Net links]
IQNET: Research Areas in the QoS Test Bed
- Development and deployment of technologies and solutions for distance education
- Experiments to provide application QoS through prioritization, RSVP and the IntServ architecture
- Non-cooperative and cooperative measurement and characterization
- IP telephony applications
- Protocol support for mobile wireless endpoints
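The prioritization side of such experiments is commonly realized by marking packets in the IP TOS/DSCP field so that suitably configured routers queue them preferentially. A minimal sketch of that general technique (an illustration, not IQNET's actual code; assumes a Linux host), using standard Python sockets:

```python
import socket

# DSCP 46 ("Expedited Forwarding", typically used for latency-sensitive
# traffic such as VoIP) shifted into the upper 6 bits of the 8-bit TOS
# byte: 46 << 2 = 184 (0xb8).
EF_TOS = 46 << 2

def make_priority_socket():
    """Create a UDP socket whose outgoing packets carry the EF DSCP mark,
    so DiffServ-aware routers can place them in a low-latency queue."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    return sock

sock = make_priority_socket()
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on Linux
```

Marking only takes effect end-to-end if the routers along the path are configured to honour the DSCP value; on an MPLS backbone such as IQNET's, the mark would typically be mapped to an MPLS traffic class at the provider edge.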
IQNET: Applications
- Robotic control applications
- Distributed simulation and CAD conferencing
- Application end-point API support for QoS on IPv6
- Telemedicine and real-time guided clinical investigations
- Deployment of IPTV, H.323- and SIP-based telephony, and content delivery networks
IQNET: Status
Measurement initiative: PingER (Ping End-to-End Reporting) collaboration
- Associated with SLAC (Stanford Linear Accelerator Center) since 6 May 2004
- Internet End-to-end Performance Measurement (IEPM) project to monitor the end-to-end performance of Internet links
- Metrics measured: response time (RTT in ms), variability of the response time both short-term (time scale of seconds) and longer, packet-loss percentages, and lack of reachability
Content distribution initiative: PlanetLab collaboration
Experimentation explored:
- MPLS for Linux, an open-source effort to create a set of MPLS signalling protocols and an MPLS forwarding plane for the Linux operating system
- Simulation of MPLS using NS-2
IQNET: Status (Contd…)
- Started as a remote node and enhanced to a monitoring node (ours is one of the 37 monitoring nodes across the globe)
- Monitoring 59 IPs across the globe
- Sends 100- and 1000-byte ICMP packets periodically
- Statistics are stored at our location and reports are generated
- SLAC maintains the central database
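The PingER-style metrics above (RTT, short-term variability, loss, reachability) can be summarized from a set of probe results as in the following toy sketch (a hypothetical illustration, not PingER's actual code):

```python
from statistics import mean

def summarize_pings(rtts_ms):
    """Summarize one probe set: each entry is an RTT in milliseconds,
    or None for a lost packet (no echo reply received)."""
    replies = [r for r in rtts_ms if r is not None]
    sent = len(rtts_ms)
    if not replies:
        return {"reachable": False, "loss_pct": 100.0}
    avg = mean(replies)
    return {
        "reachable": True,
        "loss_pct": 100.0 * (sent - len(replies)) / sent,
        "min_rtt_ms": min(replies),
        "avg_rtt_ms": avg,
        # short-term variability: mean absolute deviation from the average RTT
        "jitter_ms": mean(abs(r - avg) for r in replies),
    }

# 10 probes of one remote host; None marks a lost packet
stats = summarize_pings([42.1, 40.8, 41.5, None, 43.0,
                         41.2, 40.9, 44.7, None, 41.8])
print(stats["loss_pct"])  # 20.0
```

In the real system, each monitoring node runs such probes periodically against its list of remote hosts and ships the aggregated statistics to the central database at SLAC.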
Presentation Outline
- Background
- Indian Initiatives
- C-DAC & PoC Garuda
- Grid Partners
- Discussions
About C-DAC - I
- 10 locations
- 14 labs
- 2000 members
About C-DAC - II: R&D areas
- High-performance computing & grid computing
- Scientific & engineering applications
- Multilingual computing, AAI, speech processing & software technologies, OSS, multimedia
- ICT for the masses
- Digital broadband, wireless systems & network technologies
- e-Security technologies and services
- Power electronics, real-time systems & embedded systems, VLSI/ASIC design
- Geomatics, health informatics, e-governance & agri-electronics
- Education & training & e-learning technologies & services
High Performance Computing
- Hardware: architecture, high-performance system design, VLSI design, System Area Network (SAN) switches, engineering
- Application software: scientific & engineering; business & commercial
- System software & utilities: compilers, libraries, tools, benchmarking
- Advanced Computing, Marketing and Solutions Group: HPC systems & technology consultation; design and delivery of HPC facilities and services; end-user education and training; partnering and collaborations
Garuda – Grid Computing
- Social computing with a participatory approach
- High-performance computing
- Applications development
- High-speed networks
- Reconfigurable computing
- Testing and certification
- HPC solutions and training
PARAM timeline (1991-2007):
- PARAM 8000 (1991): built in the face of technology denial
- PARAM 10000 (1998): platform for the user community to interact/collaborate
- PARAM Padma (2002): viable HPC business computing environment
Indian Grid Computing Initiative
Proof of Concept (PoC) GARUDA phase
- Precursor to the national grid computing initiative (GRID GARUDA)
- Project duration of 12 months (April 2005 to March 2006), starting with the networking fabric in collaboration with ERNET, India
Major deliverables:
- Technology development & research in grid computing
- Nation-wide high-speed communication fabric
- Grid resources
- Deployment of select applications of national importance
- Grid strategic user group
Implemented by C-DAC
Indian Grid Computing Initiative: Proof of Concept (PoC) GARUDA phase (Contd…)
- 17 locations to date, with 100 Mbps connections over an MPLS backbone configured for a peak load of 2.48 Gbps
- Planned for multidisciplinary academic, research and engineering applications, with some visible, demonstrable applications (disaster management and bioinformatics) to trigger progression to the main phase
- Teraflops of computing power (including the existing 1 teraflop with C-DAC and a planned 5 teraflops early next year) and hundreds of terabytes of data from various institutions made available to the grid partner community
- Intended to migrate smoothly to the main grid project from 2006, targeting a variety of sectors from basic sciences to major applications
GRID GARUDA PoC Components
- Technology development and research
- Communication fabric
- Computational resources
- Applications
Technology Development & Research
- Technology deliverables: architecture & deployment, grid access mechanisms, application frameworks, problem-solving & program-development environments, grid middleware and security, grid management and monitoring
- Achievements so far: research initiatives on integrated development environments, resource brokers & meta-schedulers, a mobile agent framework, and semantic grid services (with MIT Chennai)
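To illustrate the resource-broker/meta-scheduler idea: a broker matches a job's requirements against the advertised state of grid resources and dispatches to a suitable, lightly loaded site. A toy sketch (all site names and record fields are hypothetical, not GARUDA's actual middleware):

```python
def pick_resource(resources, cpus_needed):
    """Trivial meta-scheduler: among resources with enough free CPUs,
    choose the least-loaded one (lowest fraction of CPUs in use)."""
    eligible = [r for r in resources if r["free_cpus"] >= cpus_needed]
    if not eligible:
        return None  # no site can run the job now; it must queue
    return min(eligible, key=lambda r: 1 - r["free_cpus"] / r["total_cpus"])

# Hypothetical snapshot of three grid sites
grid = [
    {"site": "pune",      "total_cpus": 248, "free_cpus": 31},
    {"site": "bangalore", "total_cpus": 128, "free_cpus": 96},
    {"site": "hyderabad", "total_cpus": 64,  "free_cpus": 10},
]
best = pick_resource(grid, cpus_needed=16)
print(best["site"])  # bangalore (75% of its CPUs are free)
```

A production broker would additionally weigh data locality, queue lengths, software availability and security policy, but the matchmaking loop has this shape.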
Communication Fabric
- An ultra-high-speed, multi-service communication fabric connecting 17 cities in the country, to be deployed jointly by C-DAC & ERNET
- Ethernet-based, with high bandwidth capacity, scalable over the entire geographic area, and with high levels of reliability, fault tolerance and redundancy
- Current progress: L2 VPN connectivity at 100 Mbps between C-DAC, Pune and C-DAC, Bangalore
Modules and Cities
- Module I: Pune, Bangalore, Delhi, Kolkata, Chennai, Ahmedabad, Mumbai, Hyderabad
- Module II: Roorkee, Guwahati, Kharagpur, Thiruvananthapuram, Kanpur
- Module III: Allahabad, Chandigarh, Lucknow, Varanasi
Grid Resources
Objectives:
- Provide heterogeneous resources in the grid, including compute, data, software and scientific instruments
- Deploy test facilities for grid-related research and development activities
Deliverables:
- Grid enablement of C-DAC resources at Bangalore and Pune, and aggregation of partner resources
- Setting up of the PoC test bed and grid labs at Bangalore, Pune, Hyderabad and Chennai
Status:
- Aggregation of resources: prospective partners for grid resources have been identified; finalization of specific details is in progress
- Setting up of the PoC test bed & grid labs: grid-lab equipment has been received and testing is in progress; C-DAC is also to set up a grid lab at SAC, Ahmedabad
Applications of Importance for PoC Garuda
Objective: enable applications of national importance.
TeraScale applications:
- Weather and climate modelling
- Seismic data processing
- Computational fluid dynamics
- Structural mechanics
- Basic sciences
Grid-enabled applications:
- Bioinformatics
- Disaster management
- Data integration & sharing
- Earthquake research
- Cryptanalysis
[Diagram: disaster management application on Garuda — ASAR flight data transmitted from a nearby airport to SAC, Ahmedabad, then carried over the high-speed GRID communication fabric to PARAM Padma at Bangalore and to Pune, with results delivered to user agencies]
Presentation Outline
- Background
- Indian Initiatives
- C-DAC & PoC Garuda
- Grid Partners
- Discussions
Collaborators & Partners: PoC Garuda
Research labs:
- National Chemical Laboratory, Pune
- Bhabha Atomic Research Centre, Mumbai
- Space Applications Centre, Ahmedabad
- Institute for Plasma Research, Ahmedabad
- Physical Research Laboratory, Ahmedabad
- Saha Institute of Nuclear Physics / Variable Energy Cyclotron Centre, Kolkata
- Regional Cancer Centre, Thiruvananthapuram
- Vikram Sarabhai Space Centre, Thiruvananthapuram
Institutions:
- Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore
- Indian Institute of Astrophysics, Bangalore
- National Centre for Radio Astrophysics, Pune
- Centre for DNA Fingerprinting and Diagnostics
- Institute of Mathematical Sciences, Chennai
- Institute of Microbial Technology, Chandigarh
- Harish-Chandra Research Institute, Allahabad
- Central Drug Research Institute, Lucknow
- Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow
Government collaborators:
- ERNET India
C-DAC centres (10 locations):
- Pune, Bangalore, Delhi, Hyderabad, Mumbai, Chennai, Kolkata, Mohali, Noida, Thiruvananthapuram
Academia:
- Indian Institute of Science, Bangalore
- Madras Institute of Technology, Chennai
- University of Pune, Pune
- Central University, Hyderabad
- Indian Institutes of Technology at Kharagpur, Kanpur, Delhi, Mumbai, Chennai & Guwahati
- Gauhati University, Guwahati
- Motilal Nehru National Institute of Technology, Allahabad
- Jawaharlal Nehru University, Delhi
- Institute of Technology, Banaras Hindu University, Varanasi
PoC GARUDA Collaborations in Place
- SAC, Ahmedabad: collaboration on disaster management and grid middleware
- Indian Institute of Science, Bangalore: collaboration with the Centre for Atmospheric and Oceanic Sciences (CAOS) department on simulations on GRID Garuda with a coupled atmosphere-ocean-land model
- MIT, Chennai: collaboration on grid middleware development and the development of front-end tools for grid services
- IIT, Mumbai: collaboration & MoU for porting of a CFD solution
- University of Pune: in the application areas of quantum chemistry, materials modelling and bioinformatics
- NCL, Pune: collaboration in the fields of multi-scale modelling & simulation, large-scale data analysis & mining, and HPC & grid tools
Where We Stand
- Grid computing and high-speed networking: the main phase of C-DAC's grid computing project, connecting 200+ major universities and 300+ major R&D/S&T labs with a 10+ Gbps backbone, international connectivity of 10 Gbps, 50+ teraflops of computing power, petabytes of storage, and major mission and sectoral applications
- Planned collaboration in the areas of applications, middleware and mission-critical use, giving institutions and industrial R&D units/labs dependable, consistent, pervasive, secure and inexpensive access to computational resources
Coupled GCM-RCM Simulations on Mausam GRID
Advantages:
i) General Circulation Models (GCMs) and Regional Climate Models (RCMs) can run on machines that are physically distributed
ii) Both models need not be ported to the same platforms
iii) The models can be owned by different organizations
Mausam GRID Application Drivers:
i) Monsoon forecasting using a GCM
ii) Monsoon rainfall downscaling using a coupled atmosphere-ocean system
iii) Extended-range monsoon prediction: multi-model simulations data grid
iv) Coupled regional atmosphere-air-quality models
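The distributed GCM-RCM coupling described above can be sketched as a loop in which the coarse global model periodically hands boundary conditions to the regional model. A toy illustration with stand-in physics (every function, constant and value here is hypothetical, not the Mausam GRID models):

```python
def gcm_step(field):
    """Stand-in global model step: relax each coarse cell toward 300 K."""
    return [t + 0.1 * (300.0 - t) for t in field]

def rcm_step(field, boundary):
    """Stand-in regional model step: nudge regional cells toward the
    boundary condition supplied by the global model."""
    return [t + 0.5 * (boundary - t) for t in field]

# In a real grid run the two models execute on different, possibly
# remotely owned machines and exchange boundary data over the network;
# here they simply alternate in one process.
global_field = [290.0, 295.0, 305.0]   # coarse global temperatures (K)
regional_field = [288.0, 289.0]        # fine regional temperatures (K)
for step in range(10):
    global_field = gcm_step(global_field)
    boundary = sum(global_field) / len(global_field)  # coarse value at the RCM edge
    regional_field = rcm_step(regional_field, boundary)

print(round(boundary, 1))
```

This separation is exactly why the two models need not share a platform or an owner: only the boundary exchange crosses the organizational boundary.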
Grid Computing for Bioinformatics
- More than 276 genomes have been sequenced, and the genome sequencing of 1,220 organisms is at various levels of completion
- Information retrieved from genome data can prove invaluable to pharmaceutical industries, for in silico drug-target identification and new drug discovery
- The enormity of the data and the complexity of the algorithms make these tasks computationally demanding, necessitating the effective use of computational resources beyond those available to researchers at any single location
- Grid technologies enable the sharing of bioinformatics data from different sites by creating a virtual organization of the data
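A common grid pattern for such workloads is to split a large set of query sequences into chunks and farm each chunk out to a different site's compute resource, where it is searched against a local copy of the genome database. A minimal sketch of the partitioning step (site names and the round-robin policy are hypothetical illustrations):

```python
def partition(queries, sites):
    """Deal query sequences round-robin across grid sites so each site
    receives roughly the same number of sequences to search."""
    work = {site: [] for site in sites}
    for i, query in enumerate(queries):
        work[sites[i % len(sites)]].append(query)
    return work

# 10 hypothetical query-sequence identifiers split across three sites
queries = [f"seq{i:03d}" for i in range(10)]
plan = partition(queries, ["pune", "bangalore", "hyderabad"])
print({site: len(chunk) for site, chunk in plan.items()})
# {'pune': 4, 'bangalore': 3, 'hyderabad': 3}
```

Each site then runs its chunk independently (e.g. a sequence-similarity search) and the results are merged, so the wall-clock time scales down with the number of participating resources.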
Earthquake Research Grid
Main features:
- Will connect all the major earthquake engineering (EE) centres and some identified high-performance computing centres of India with a high-speed network
- Will maintain a database of digital earthquake data from different earthquake observatories and of experimental results from EE laboratories
- Will host all the standard software necessary to analyse, process and visualize earthquake data
- Earthquake researchers in remote places can access this facility through web browsers
- Algorithms developed by EE researchers will also be plugged into this facility and made available to other researchers
[Diagram: users access the grid over the Internet; the grid comprises a data server, a software server and compute nodes, fed by earthquake observatories and EE labs]
www.cdac.in
Advanced Computing for Human Advancement
Thank You!