Summary GRID and Computing Takashi Sasaki KEK Computing Research Center.


2 Summary GRID and Computing Takashi Sasaki KEK Computing Research Center

3 Covered talks
Extension
–LHC_2 ATLAS Computing
–Comp_3 GRID Interoperability
–SDA_1 Event generators and Higgs Physics at LHC
New proposal
–Bio_1 Geant4 new developments

4 LHC_2 ATLAS Computing

5 Activities in 2006
Mainly tests for data transfer
–In the overall ATLAS framework: "SC4" (Service Challenge 4)
–Also special tests
Communications mainly by e-mail (visits in February and March 2006)

6 SC4 (Lyon → Tokyo)
RTT (Round Trip Time) ~ 280 ms
Available bandwidth limited to 1 Gbps
Linux kernel 2.4, no tuning, standard LCG middleware (GridFTP): ~ 20 MB/s (15 files in parallel, 10 streams each)
Not satisfactory (packet loss)
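The throughput ceiling on this path is set by the bandwidth-delay product: at 280 ms RTT, filling 1 Gbps requires a very large TCP window. A back-of-the-envelope check, using only the figures quoted on this slide:

```python
# Bandwidth-delay product for the Lyon-Tokyo path (figures from the slide).
bandwidth_bps = 1e9     # 1 Gbps available bandwidth
rtt_s = 0.280           # ~280 ms round-trip time

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"Required TCP window: {bdp_bytes / 1e6:.0f} MB")   # 35 MB

# The observed ~20 MB/s aggregate was spread over 150 TCP streams
# (15 files x 10 streams), i.e. each stream averaged only:
per_stream_mb_s = 20 / 150
print(f"~{per_stream_mb_s * 1000:.0f} kB/s per stream")
```

A 35 MB window far exceeds the default buffer sizes of an untuned 2.4 kernel, which is consistent with the unsatisfactory GridFTP result.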

7 Test with iperf (memory to memory)
Linux kernel 2.6.17.7
Congestion control: TCP Reno vs. BIC TCP
Also tried PSPacer 2.0.1 (from AIST, Tsukuba)
Best result: BIC TCP + PSPacer, Tokyo → Lyon: >800 Mbps (with 2 streams): shown below
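On a 2.6 kernel the congestion-control algorithm and socket buffer limits are runtime tunables; a sysctl fragment along these lines would select BIC and allow windows large enough for this path (the keys are standard Linux tunables, but the buffer values here are illustrative, not the ones used in the tests):

```
# /etc/sysctl.conf fragment (illustrative values)
net.ipv4.tcp_congestion_control = bic
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
```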

8 Summary of iperf results
SL(C)4 (kernel 2.6 with BIC TCP): much better congestion control than SL3 (kernel 2.4)
Adding the software pacer (PSPacer by AIST) gives stable, good performance

                1 stream     10 streams
Lyon → Tokyo    0-5 MB/s     2-20 MB/s
Tokyo → Lyon    10-15 MB/s   44-60 MB/s

                1 stream     2 to 8 streams
Lyon → Tokyo    45 MB/s
Tokyo → Lyon    70 MB/s      100 MB/s

9 Year 2007
Not only purely technical R&D, but also studies of data movement from the physicists' point of view
The available network bandwidth will increase (probably this year)
A new computer system has been installed at ICEPP, and more manpower will soon be available for technical studies
More intensive R&D this year toward the LHC start-up

10 LHC_2: ATLAS Computing
The Tier-1 center at CC-IN2P3 serves the Tier-2 center at ICEPP
–Toward LHC commissioning, both of them are working hard to prepare
Monitoring and improvements are necessary for long-distance communication
–Packet loss happens easily if one of the routers between the two points becomes busy; the "traceroute" command shows this
–Standard TCP/IP recovers its window size very slowly after a loss, and this slows down the transfer rate
–PSPacer helped to obtain higher performance
Support for physicists will be considered
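The "very slow window size recovery" point can be quantified with the standard Reno model: after a single loss the congestion window is halved and then regains roughly one segment per RTT, so on a long-RTT path recovery takes a very long time. A rough estimate, using the RTT and bandwidth from the earlier slides and a typical Ethernet MSS (an assumption, not a measured value):

```python
# How long standard TCP (Reno-style AIMD) needs to recover full speed
# after one packet loss on the Lyon-Tokyo path. Rough model only.
rtt_s = 0.280                     # ~280 ms round-trip time (from slide 6)
mss_bytes = 1460                  # typical Ethernet MSS (assumption)
bdp_bytes = 1e9 * rtt_s / 8       # window needed to fill 1 Gbps

w_full_segments = bdp_bytes / mss_bytes
# After a loss the window is halved, then grows ~1 segment per RTT:
recovery_s = (w_full_segments / 2) * rtt_s
print(f"~{recovery_s / 60:.0f} minutes to regain the full window")
```

The result is on the order of an hour per loss event, which is why a loss-tolerant congestion control (BIC TCP) plus pacing made such a difference.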

11 Comp_3 GRID interoperability

12 Project presentation
The LIA "Comp_3" project proposes to work toward Grid interoperability
–An accepted project for the 2006 call
–We propose to extend the project for 2007
Work will concentrate mainly on
–EGEE / NAREGI interoperability
  Crucial for ILC
  Will become important for LHC
  NAREGI is a huge effort in Japan (2003-2007, 10 billion yen, extended up to 2009) and will certainly become a piece of the W-LCG organization
–SRB / iRODS data grid (see later)
  Implementation
  Development
  Interoperability with EGEE / NAREGI

13 Interoperability between EGEE and NAREGI
Two possible approaches:
–Implement the GIN (Grid Interoperability Now) layer in NAREGI
  Defined by the GIN group of the OGF
  Short-term solution in order to get the interoperability now
  Pragmatic approach
–Work with longer-term standards defined within the OGF
  Develop a meta-scheduler compatible with many Grid implementations
  Based on SAGA (Simple API for Grid Applications): "Instead of interfacing directly to Grid services, the applications can so access basic Grid capabilities with a simple, consistent and stable API"
  and JSDL (Job Submission Description Language)
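JSDL describes a job in middleware-neutral XML, which is what makes it useful for cross-Grid submission. A minimal sketch of a JSDL document (the executable and argument are placeholders; the namespaces are those of the OGF JSDL 1.0 specification):

```xml
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:Application>
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/bin/echo</jsdl-posix:Executable>
        <jsdl-posix:Argument>hello</jsdl-posix:Argument>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

A meta-scheduler built on SAGA would translate such a description into the native job language of each target middleware (gLite, NAREGI, ...).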

14 Next steps on NAREGI / EGEE interoperability
Continue work in both directions: GIN and SAGA / JSDL
Try cross job submission on both Grid middlewares
Explore data exchange between NAREGI and EGEE

15 The Storage Resource Broker (SRB)
SRB is a relatively lightweight data grid system developed at SDSC
Considerable experience has been gained at KEK, SLAC, RAL and CC-IN2P3
Heavily used for BaBar data transfer for years (up to 5 TB/day)
A very interesting solution for storing and sharing biomedical data (images)
Advantages:
–Easy and fast development of applications
–Extensibility
–Reliability
–Ease of administration

16 From SRB to iRODS
iRODS (integrated Rule-Oriented Data System) is the SRB successor
–KEK and CC-IN2P3 are both involved in iRODS development and tests
–Should bring many new functionalities

17 Next step for SRB / LCG interoperability
To make LCG and SRB fully interoperable we need to develop an SRB / SRM interface
This will be a common area of work for KEK and CC-IN2P3 in the near future
–Iida-san's 8-month stay at CC-IN2P3
Then we will explore the possibility of making SRB an alternative for LCG storage
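At its core, an SRB / SRM interface is a protocol translation: SRM requests name files by storage URL (SURL), while SRB addresses them by collection path. As a purely hypothetical illustration of that mapping step (none of these names, zones or paths come from the actual project, which is a server-side service, not a path rewrite):

```python
from urllib.parse import urlparse

def surl_to_srb_path(surl: str, zone: str = "KEK") -> str:
    """Map an SRM-style storage URL onto an SRB collection path.

    Hypothetical sketch only: zone layout and path convention are
    invented for illustration, not taken from the KEK/CC-IN2P3 work.
    """
    u = urlparse(surl)
    if u.scheme != "srm":
        raise ValueError(f"not an SRM URL: {surl}")
    # SRB organises files in collections under a zone; here we simply
    # graft the SURL's path component onto an assumed zone prefix.
    return f"/{zone}/home{u.path}"

print(surl_to_srb_path("srm://dcache.example.org:8443/atlas/data/file1"))
# -> /KEK/home/atlas/data/file1
```

The real interface also has to translate space management, pinning and transfer-protocol negotiation, which is what makes it a substantial development task.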

18 Comp_3: GRID interoperability
GRID is the fundamental infrastructure for distributed data analysis
GRID interoperability is the key issue
–Interoperability among different (Globus-based) GRID middlewares, e.g. gLite, NAREGI, etc.
–Interoperability with storage resource management systems, e.g. SRB
iRODS, the successor of SRB, is under development and is a common interest of both sides

19 SDA_1 Event generators and Higgs Physics at LHC

20 Activity of SDA1 in 2006
Subject: Developing an NLO event generator for LHC physics
Exchange in 2006: J→F: 8 people, 45 days in total; F→J: 3 people, 25 days in total
Related publications:
1. "NLO-QCD calculation in GRACE", Y. Kurihara et al., Nucl. Phys. Proc. Suppl. 157 (2006) 157
2. "GR@PPA 2.7 event generator for pp / p anti-p collisions", S. Tsuno et al., Comput. Phys. Commun. 175 (2006) 665-677
3. "Algebraic evaluation of rational polynomials in one-loop amplitudes", T. Binoth et al., JHEP 0702 (2006) 13
4. "New one-loop techniques and first applications to LHC phenomenology", T. Binoth et al., Nucl. Phys. Proc. Suppl. 160 (2006) 61-65

21 NLO inclusive generator / fully exclusive generator (figure)

22

23 H→γγ & background studies
Topics: high-pT photon detector performance, jet rejection, photon conversion, primary-vertex reconstruction, SM background studies (reducible background; irreducible background, event generator)
Japanese experiment group: inner detector (SCT), e/γ identification, event generator
French experiment group: liquid-argon calorimeter, e/γ identification, H→γγ analysis tools
Japanese theory group: NLO event generator, resummation, parton shower
French theory group: NLO event generator, resummation, subtraction method

24 SDA_1: Event generators and Higgs physics at LHC
Collaboration among experimentalists and theorists on both sides
The background to H→γγ will be estimated precisely using DIPHOX
This will help discover the Higgs, and reduce the systematic error on the estimate of its mass

25 BIO_1 Geant4 new developments

26 Courseware in a book style

27 Example of a virtual lab: a lead/aerogel sandwich calorimeter with all EM interactions activated

28 Geant4 at the cellular scale (Geant4 DNA: new physics processes)
3D phantoms, PIXE analysis, microbeam, proton ionisation, p & H charge change

29 Visualization samples (p, 200 MeV)

30 Monte Carlo simulations & jobs, metadata management, results visualization, image anonymization, image visualization, workstation, Internet connection, Genius
Installation starting at Centre Jean Perrin

31 Bio_1: Geant4 new developments
Geant4 is developed by an international collaboration
–Significant contributions from France and Japan
Both sides will work on Geant4 kernel improvements and also on the development of applications in the following areas:
–Educational applications
–Medicine, biology and space extensions
–Computing grids

32 Summary of summary
GRID is a lifeline of research
–Further research and investment in resources are still necessary
LHC will probably be commissioned within one year
–Infrastructure and good tools for analysis are necessary
–Collaboration between experimentalists and theorists is mandatory for data analysis
Geant4 is a toolkit developed in HEP and transferred to other fields, e.g. space, medicine, biology and so on
–Many issues are still left for improvement in the Geant4 kernel
–Application development in the educational and bio-medical fields is a common interest
We saw strong collaboration between France and Japan even before the beginning of the AIL
The AIL will boost research in these collaborations

