
1 Grid Computing at DAE, India. B.S. Jagadeesh, Computer Division, BARC. 18 February 2011

2 Our approach to Grids has been evolutionary: the ANUPAM supercomputers
- To achieve supercomputing speeds – at least 10 times faster than the sequential machines available in BARC
- To build a general-purpose parallel computer – catering to a wide variety of problems, with general-purpose compute nodes and interconnection network
- To keep the development cycle short – using readily available, off-the-shelf components

3 ANUPAM Performance over the years

4 ‘ANUPAM-ADHYA’ 47 Tera Flops

5 ANUPAM applications outside DAE
- Medium Range Weather Forecasting, Delhi: a 2+8-node ANUPAM-Alpha, fully operational since December 1999
- Aeronautical Development Agency (ADA): Computational Fluid Dynamics calculations related to the Light Combat Aircraft (LCA); the LCA design was computed on a 38-node ANUPAM-860
- VSSC, Trivandrum: an 8-node Alpha and a 16-node Pentium system for aerospace-related CFD applications

6 A complete solution to scientific problems by exploiting parallelism for:
- Processing (parallelization of computation)
- I/O (parallel file system)
- Visualization (parallelized graphics pipeline / Tile Display Unit)
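
As a concrete (and deliberately tiny) illustration of combining parallel processing with a tiled display, here is an mpi4py sketch, assuming the image height divides evenly among the MPI ranks; it is not the ANUPAM or Tiled Image Viewer code. Each rank computes one horizontal strip of a frame and rank 0 assembles the strips, much as a display-wall node would.

```python
# Minimal sketch of the "exploit parallelism everywhere" idea using mpi4py:
# each MPI rank computes one horizontal strip of an image (parallel processing)
# and rank 0 gathers the strips for display on a tiled viewer (visualization).
# Illustrative only; not the ANUPAM/TIV software.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

HEIGHT, WIDTH = 512, 512          # assumes HEIGHT is divisible by the rank count
rows = HEIGHT // size
y0 = rank * rows

# Each rank fills its own strip (a simple gradient standing in for real
# rendering or simulation output).
strip = np.fromfunction(lambda y, x: (y + y0) * x, (rows, WIDTH), dtype=np.float64)

# Gather all strips on rank 0, which would hand the full frame to the display.
full = comm.gather(strip, root=0)
if rank == 0:
    image = np.vstack(full)
    print("assembled frame:", image.shape)
```

Run with, for example, mpirun -n 4 python strip_demo.py (the script name is arbitrary).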

7 Visualization: the rendering pipeline
Geometry database → geometry transformation → rasterization → image
- Per vertex: transformation, clipping, lighting, etc.
- Per pixel: scan conversion, shading and visibility
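
To make the two stages concrete, here is a compact, illustrative sketch (plain Python/NumPy, not tied to any of the systems in this talk) that transforms a triangle's vertices (per-vertex work) and then scan-converts it by testing pixel centres (per-pixel work).

```python
# Per-vertex and per-pixel halves of the pipeline, for a single 2-D triangle.
import numpy as np

# Per-vertex stage: transform model-space vertices with a 3x3 homogeneous matrix
# (rotation + translation in 2-D for brevity; a real pipeline uses 4x4 matrices
# plus clipping and lighting here as well).
verts = np.array([[1.0, 1.0, 1.0],
                  [6.0, 2.0, 1.0],
                  [3.0, 6.0, 1.0]])                # x, y, w
theta = np.deg2rad(15)
M = np.array([[np.cos(theta), -np.sin(theta), 2.0],
              [np.sin(theta),  np.cos(theta), 1.0],
              [0.0,            0.0,           1.0]])
screen = (verts @ M.T)[:, :2]                      # transformed x, y

# Per-pixel stage: scan-convert by testing pixel centres against the triangle's
# barycentric coordinates; covered pixels would then be shaded.
def barycentric(p, a, b, c):
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1 - l1 - l2, l1, l2])

W = H = 10
covered = sum(
    1 for y in range(H) for x in range(W)
    if (barycentric(np.array([x + 0.5, y + 0.5]), *screen) >= 0).all()
)
print(f"{covered} of {W * H} pixels covered")
```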

8 Parallel Visualization Taxonomy

9 Snapshots of Tiled Image Viewer

10 We now have:
- A large tiled display
- Rendering power with distributed rendering
- Scalability to very many pixels and polygons
- An attractive alternative to high-end graphics systems
- A deep and rich scientific visualization system

11 Post-tsunami: Nagappattinam, India (Lat: 10.7906° N, Lon: 79.8428° E). This one-metre-resolution image was taken by Space Imaging's IKONOS satellite on 29 December 2004, just three days after the devastating tsunami hit. (Credit: Space Imaging)

12 So we need many resources: high-performance computers, visualization tools, data-collection tools, sophisticated laboratory equipment, etc.
"Science has become mega-science."
"The laboratory has to be a collaboratory."
The key concept is "sharing by respecting administrative policies".

13 Grid concept?
- Many jobs per system -- early days of computation
- One job per system -- RISC / workstation era
- Two systems per job -- client-server model
- Many systems per job -- parallel / distributed computing
- View all of the above as a single unified resource -- Grid computing

14 The Grid concept: a user access point submits a job to a resource broker, which dispatches it to suitable Grid resources and returns the result to the user.
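
A toy sketch of that flow is below; the resource names, their free-core counts and the matchmaking rule are invented purely for illustration and are not how the actual middleware decides.

```python
# Toy resource broker: pick a grid resource that can satisfy a job request.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_cores: int
    site: str

@dataclass
class Job:
    name: str
    cores: int

RESOURCES = [
    Resource("utkarsh.barc", free_cores=128, site="BARC"),
    Resource("igcgrid2.igcar", free_cores=16, site="IGCAR"),
    Resource("daksha.rrcat", free_cores=4, site="RRCAT"),
]

def broker(job: Job) -> Resource:
    """Pick the least-loaded resource that can satisfy the job's request."""
    candidates = [r for r in RESOURCES if r.free_cores >= job.cores]
    if not candidates:
        raise RuntimeError("no matching resource")
    return max(candidates, key=lambda r: r.free_cores)

chosen = broker(Job("cfd-run", cores=8))
print(f"job dispatched to {chosen.name} at {chosen.site}")   # result goes back to the user
```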

15 LHC computing
- The LHC (Large Hadron Collider) has become operational and is churning out data.
- Data rates per experiment of >100 MB/s.
- >1 PB/year of storage for raw data per experiment.
- The problem is computationally so large that it cannot be solved by a single computer centre.
- Analysis is carried out by world-wide collaborations – it is desirable to share computing and analysis throughout the world.
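
A quick back-of-the-envelope check (assuming roughly 1e7 seconds of effective data-taking per year, a figure not given on the slide) shows how the quoted per-experiment rate and the quoted yearly volume fit together.

```python
# Consistency check for slide 15's numbers.
# Assumption (not on the slide): ~1e7 seconds of effective data-taking per year.
rate_mb_per_s = 100                    # >100 MB/s per experiment
seconds_per_year = 1e7                 # assumed effective running time
raw_volume_pb = rate_mb_per_s * seconds_per_year / 1e9   # 1 PB = 1e9 MB
print(f"~{raw_volume_pb:.0f} PB/year of raw data per experiment")   # ~1 PB/year
```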

16 A collision at the LHC
- Bunches, each containing 100 billion protons, cross 40 million times a second in the centre of each experiment.
- ~1 billion proton-proton interactions per second in ATLAS & CMS!
- Large numbers of collisions per event: ~1000 tracks stream into the detector every 25 ns.
- A large number of channels (~100 M) → ~1 MB per 25 ns, i.e. 40 TB/s!
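
The last bullet is simple arithmetic: if the detector produces about 1 MB for every 25 ns bunch crossing, the raw rate is 40 TB/s, which is why the trigger system must cut it down to the ~100 MB/s per experiment quoted on the previous slide. A minimal check:

```python
# Arithmetic behind "~1 MB/25 ns, i.e. 40 TB/s" on slide 16.
event_size_mb = 1.0                    # ~1 MB produced per bunch crossing
crossing_interval_s = 25e-9            # a bunch crossing every 25 ns
rate_mb_per_s = event_size_mb / crossing_interval_s   # 4e7 MB/s
print(f"{rate_mb_per_s / 1e6:.0f} TB/s before triggering")   # 40 TB/s (1 TB = 1e6 MB)
```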

17 Data Grids for HEP (image courtesy Harvey Newman, Caltech)
Diagram of the tiered computing model: the online system and offline processor farm (~20 TIPS) feed the CERN computer centre (Tier 0); regional centres such as FermiLab (~4 TIPS), France, Italy and Germany form Tier 1; Tier 2 centres of ~1 TIPS each (e.g. Caltech) sit below them; institutes (~0.25 TIPS) are Tier 3; physicist workstations are Tier 4. Indicated data rates range from ~PB/s at the physics data cache through ~100 MB/s and ~622 Mbit/s links ("or air freight", deprecated) down to ~1 MB/s at the workstations.
- There is a "bunch crossing" every 25 nsecs; there are 100 "triggers" per second; each triggered event is ~1 MByte in size.
- Physicists work on analysis "channels". Each institute will have ~10 physicists working on one or more channels; data for these channels should be cached by the institute server.
- 1 TIPS is approximately 25,000 SpecInt95 equivalents.

18 … based on advanced technology: 23 km of superconducting magnets cooled in superfluid helium at 1.9 K. A big instrument!

19 The LHC is a very large scientific instrument: a ring of 27 km circumference near Lake Geneva, housing the CMS, ATLAS, LHCb and ALICE experiments.

20 Tier 0 at CERN: acquisition, first-pass processing, storage & distribution. (Credit: Ian Bird, CERN)

21 LEMON architecture

22 Quattor
- Quattor is a tool suite providing automated installation, configuration and management of clusters and farms.
- It is highly suitable for installing, configuring and managing Grid computing clusters correctly and automatically.
- At CERN it is currently used to auto-manage more than 2000 nodes with heterogeneous hardware and software applications.
- It provides centrally configurable and reproducible installations, plus run-time management of functional and security updates, to maximize availability.
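
The key idea is declarative, centrally managed node configuration: the desired state of every node is described once, and the node is repeatedly driven back to that state. The sketch below illustrates only that general idea in Python; it is not Quattor's actual template language or API, and the host names and settings are invented.

```python
# Generic illustration of the declarative idea behind tools like Quattor:
# desired state is described centrally, an agent compares it with actual state
# and produces corrective actions. NOT Quattor's real mechanism.

desired = {                       # central, version-controlled description
    "wn042.example.grid": {
        "packages": {"glite-WN", "ntp"},
        "ntp_server": "ntp.example.grid",
    },
}

actual = {                        # what the node currently reports
    "wn042.example.grid": {
        "packages": {"ntp"},
        "ntp_server": "pool.ntp.org",
    },
}

def plan(node: str) -> list[str]:
    """Return the corrective actions needed to converge the node."""
    want, have = desired[node], actual[node]
    actions = [f"install {p}" for p in want["packages"] - have["packages"]]
    if want["ntp_server"] != have["ntp_server"]:
        actions.append(f"set ntp_server={want['ntp_server']}")
    return actions

print(plan("wn042.example.grid"))
# ['install glite-WN', 'set ntp_server=ntp.example.grid']
```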

23

24 NKN topology: all PoPs are covered by at least two NLDs.

25 ANUNET leased links – existing and proposed links planned all over India. Sites on the map: Mumbai, Gauribidanur, Mysore, Hyderabad, Chennai, Kalpakkam, Vizag, Manuguru, Bhubaneswar, Kolkata, Indore, Delhi, Gandhinagar, Mount Abu, Jaduguda, Tarapur, Allahabad, Shillong, Kota.

26 ANUNET wide-area network, via INSAT-3C (quarter transponder, 9 MHz; 8 carriers of 768 kbps each)
Connected sites: CAT Indore; IOP Bhubaneswar; IPR Ahmedabad; HRI Allahabad; BARC units at Mysore, Gauribidanur, Tarapur, Mt. Abu and Trombay; AERB, NPCIL, HWB, BRIT and CTCRS at Anushaktinagar, Mumbai; DAE Mumbai; TIFR, IRE and TMC; ECIL, NFC and CCCM, Hyderabad; Saha Institute and VECC, Kolkata; IGCAR Kalpakkam; IMS Chennai; MRPU; FACL; HWB Manuguru; HWB Kota; AMD Secunderabad; AMD Shillong; UCIL I, II and III, Jaduguda; TMH, Navi Mumbai.
(Note from the original diagram: some of these sites are connected over dedicated landlines.)

27 Multiple zones behind the NKN router at a site: the site's NKN router connects to NKN and serves separate segments for WLCG, the Internet, ANUNET (via an ANUNET router), GARUDA, NKN-GEN and additional CUGs.

28 Logical communication domains through NKN (reached via the NKN router):
- National Grid Computing (C-DAC, Pune)
- WLCG collaboration
- Common Users Group (CUG) – ANUNET (DAE units), BARC – IGCAR
- NKN-Internet (Grenoble, France)
- NKN-General (national collaborations)
- Intranet segment of BARC
- Internet segment of BARC

29 Layout of the virtual classroom: front and back elevations showing 55" LED panels, a projection screen, an HD camera and the teacher's position.

30 An Example of High bandwidth Application

31 Collaboratory?

32 A one-degree oscillation photograph of crystals of the HIV-1 PR M36I mutant, recorded by remotely operating the FIP beamline at the ESRF, and the OMIT density for the mutated residue. (Credits: Dr. Jean-Luc Ferrer, Dr. Michel Pirochi & Dr. Jacques Joley, IBS/ESRF, France; Dr. M.V. Hosur & colleagues, Solid State Physics Division & Computer Division, BARC)
E-Governance?

33

34 Collaborative design of reactor components. (Credits: IGCAR Kalpakkam, NIC Delhi, Computer Division BARC)

35 DAEGrid: 4 sites, 6 clusters, 800 cores; connectivity is through NKN at 1 Gbps
- Utkarsh – dual-processor quad-core, 80 nodes (BARC)
- Aksha – Itanium, dual-processor, 10 nodes (BARC)
- Ramanujam – dual-core dual-processor, 14 nodes (RRCAT)
- Daksha – dual-processor, 8 nodes (RRCAT)
- Igcgrid – Xeon, dual-processor, 8 nodes (IGCAR)
- Igcgrid2 – Xeon, quad-core, 16 nodes (IGCAR)

36 DAEGrid (continued)
BARC has three clusters connected:
- Aksha – dual-processor, 10-node Itanium 2
- Surya – dual-processor, 32-node Xeon (EM64T)
- Utkarsh – 640 cores (dual quad-core E5472 @ 3.0 GHz)
Services: Certification Authority server (CA), Resource Broker (RB), GridICE server / Monitoring and Accounting Server (MAS), Virtual Organization Membership Server (VOMS), repository server, Grid portal (UI), Storage Element (DPM SE – 13 GB), file catalogue (LFC), DNS & NTP, three Computing Elements (one for each cluster), and 10 + 32 + 8 Worker Nodes (WNs).
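
On a gLite installation like this, work typically enters from the Grid portal / UI as a JDL (Job Description Language) file that the Resource Broker matches against the Computing Elements. The sketch below only prepares such a file from Python; the JDL attributes are standard gLite ones, but the VO name and file name are invented, and the exact submission command (shown in a comment) differs between the RB-based and WMS-based command-line tools.

```python
# Minimal sketch of preparing a gLite job description on the UI node.
# Illustrative only: "daegrid" and "hello.jdl" are made-up names, and the
# submission command mentioned below varies with the middleware release.
from pathlib import Path

jdl = """\
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
VirtualOrganisation = "daegrid";
"""

Path("hello.jdl").write_text(jdl)
# Then, on the UI:  glite-wms-job-submit -a hello.jdl
# (or the older RB-based equivalent, depending on the gLite version in use)
print(Path("hello.jdl").read_text())
```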

37 DAE Grid over 4 Mbps links: resource sharing and coordinated problem solving across dynamic, multiple R&D units
- BARC: computing with shared controls
- IGCAR: wide-area data dissemination
- VECC: real-time data collection
- CAT: archival storage

38 Interesting issues being investigated
- Effect of process migration in distributed environments
- A novel distributed-memory file system
- Implementation of network swap for performance enhancement
- Redundancy issues to address the failure of resource brokers
- A service-centre concept using gLite middleware
All of the above lead towards better quality of service and greater simplicity in Grid usage.

39 THANK YOU

