12 September 2013, NEC2013/Varna, René Brun (CERN)

2 Plan
In this talk I present the views of somebody involved in many aspects of scientific computing, as seen from a major HEP laboratory. Having taken part in the design and implementation of many systems, my views are necessarily biased by my path through several experiments and the development of some general tools. I plan to describe the creation and evolution of the main systems that have shaped current HEP software, with some views on the near future.
12/09/13, R. Brun: Evolution of HEP software

3 Machines
From mainframes, to clusters, to walls of cores, to GRIDs & Clouds

4 Machine Units (bits)
Word sizes ranged over 16, 32, 36, 48, 56, 60 and 64 bits (PDP-11, Nord-50, BESM-6, CDC, Univac and many others), with even more combinations of exponent/mantissa size and byte ordering. This was a strong push to develop portable, machine-independent I/O systems.
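The portability problem above is still the reason serialization formats pick one wire byte order. A minimal modern sketch of the idea (not the historical Fortran code): write integers byte by byte in a fixed big-endian order, so the bytes on disk are identical whatever the host's native word layout is.

```cpp
#include <cstdint>
#include <vector>

// Serialize a 32-bit integer in a fixed (big-endian) byte order,
// independent of the host machine's native layout.
void put_u32(std::vector<unsigned char>& buf, std::uint32_t v) {
    buf.push_back(static_cast<unsigned char>(v >> 24));
    buf.push_back(static_cast<unsigned char>(v >> 16));
    buf.push_back(static_cast<unsigned char>(v >> 8));
    buf.push_back(static_cast<unsigned char>(v));
}

// Reassemble the value from the fixed wire order; works on any host.
std::uint32_t get_u32(const unsigned char* p) {
    return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16) |
           (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
}
```

A round trip through `put_u32`/`get_u32` gives back the original value on any machine, which is the property a machine-independent I/O system needs.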

5 User machine interface

6 General Software in 1973
Software for bubble chambers: Thresh, Grind, HYDRA
Histogram tool: SUMX from Berkeley
Simulation with EGS3 (SLAC), MCNP (Oak Ridge)
Small Fortran IV programs (1,000 LOC, 50 kbytes)
Punched cards, line printers, pen plotters (GD3)
Small archive libraries (cernlib, lib.a)

7 Software in 1974
First “large electronic experiments”
Data Handling Division == Track Chambers
Well-organized software in TC with HYDRA, Thresh, Grind; anarchy elsewhere
HBOOK: from 3 routines to 100, from 3 users to many
First software group in DD

8 GEANT1 in 1975
A very basic framework to drive a simulation program: reading data cards with FFREAD, step actions with GUSTEP and GUNEXT, magnetic field applied via GUFLD
Output (histograms/digits) was user defined
Histograms with HBOOK
About 2,000 LOC

9 ZBOOK in 1975
Extraction of the HBOOK memory manager into an independent package
Creation of banks and data structures anywhere in common blocks
Machine-independent I/O, sequential and random
About 5,000 LOC

10 GEANT2 in 1976
Extension of GEANT1 with more physics (electromagnetic showers based on a subset of EGS, multiple scattering, decays, energy loss)
Kinematics and hits/digits data structures in ZBOOK
Used by several SPS experiments (NA3, NA4, NA10, Omega)
About 10,000 LOC

11 Problems with GEANT2
A very successful small framework. However, the detector description was user-written, defined via “if” statements at tracking time. This was becoming a hard task for large and constantly evolving detectors (the case with NA4 and C. Rubbia). There were many attempts to describe a detector geometry via data cards (a bit like XML), but the main problem remained the poor and inefficient detector description in memory.

12 GEANT3 in 1980
A data structure (a ZBOOK tree) describing complex geometries was introduced, followed gradually by the geometry routines computing distances, etc. This was a huge step forward, implemented first in OPAL, then in L3 and ALEPH. Full electromagnetic showers (first based on EGS, then our own developments).

13 Systems in 1980
Software layers (approximate sizes):
OS & Fortran: 1000 KLOC
Libraries (HBOOK, Naglib, cernlib): 500 KLOC
Experiment software: 100 KLOC
End-user analysis software: 10 KLOC
Hardware: CDC, IBM, VAX 780; tapes; RAM 1 MB

14 GEANT3 with ZEBRA
ZEBRA was implemented very rapidly in 1983, and we introduced it into GEANT3 in 1984. From 1984 to 1993 we added plenty of new features to GEANT3: extensions of the geometry, hadronic models with Tatina, Gheisha and Fluka, and graphics tools. In 1998 GEANT3 was interfaced with ROOT via the VMC (Virtual Monte Carlo). GEANT3 has been used, and is still in use, by many experiments.

15 PAW
First minimal version in 1984. An attempt to merge with GEP (DESY) in 1985 did not succeed, but we took from it the idea of ntuples for storage and analysis (GEP was written in PL/1). The package kept growing until 1994, with more and more functions; column-wise ntuples arrived in 1990. Users liked it, mainly once the system was frozen in 1994.

16 Vectorization attempts
During the years 1985-1990 a big effort was invested in vectorizing GEANT3 (in collaboration with Florida State University) on the CRAY Y-MP, CYBER 205 and ETA10. The minor gains obtained did not justify the big manpower investment: GEANT3 transport was still essentially sequential, and we had a big overhead from vector creation and gather/scatter. However, this experience and failure was very important for us, and many of its lessons were useful for the design of GEANT5 many years later.

17 Parallelism in the 80s & early 90s
Many attempts (all failing) with parallel architectures:
Transputers and OCCAM
MPPs (CM2, CM5, ELXSI, ...) with OpenMP-like software
Too many GLOBAL variables/structures in Fortran common blocks
RISC architectures or emulators were perceived as a cheaper solution in the early 90s. Then MPPs died with the advent of the Pentium Pro (1994) and farms of PCs or workstations.

18 1992: CHEP Annecy
Web, web, web, web...
Attempts to replace/upgrade ZEBRA to support F90 modules and structures, but parsing and analysing modules was thought to be too difficult. With ZEBRA the bank description was contained in the bank itself (just a few bits): a bank was typically a few integers followed by a dynamic array of floats/doubles. We did not realize at the time that parsing user data structures was going to be a big challenge!
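The bank shape described above ("a few integers followed by a dynamic array of floats/doubles") can be sketched in modern C++. This is only an illustration of the concept; the field names are hypothetical and do not reproduce the real ZEBRA memory layout.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch of a ZEBRA-style "bank": a short integer header
// that describes the payload, followed by a dynamic array of floats.
struct Bank {
    int id;                   // bank identifier
    int ndata;                // number of payload words
    int type;                 // payload type tag (here: 0 = float)
    std::vector<float> data;  // the dynamic payload
};

// Create a bank with n zero-initialized float words.
Bank make_bank(int id, std::size_t n) {
    return Bank{id, static_cast<int>(n), 0, std::vector<float>(n, 0.0f)};
}
```

Because the header fully describes the payload, an I/O layer can stream such a bank without any external schema, which is exactly what made parsing arbitrary user-defined C++ structures later such a contrast.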

19 Consequences
By 1993/1994 performance was no longer the main problem. Our field was invaded by computer scientists. Program design, object-oriented programming and the move to more fashionable languages became a priority; the “goal” was thought less important than the “how”. This situation deteriorated even more with the death of the SSC.

20 1993: Warning, Danger
Three “clans” in my group:
1/3 pro F90
1/3 pro C++
1/3 pro commercial products (any language) for graphics, user interfaces, I/O and databases
My proposal to continue with PAW, develop ZOO (ZEBRA Object-Oriented) and port the GEANT3 geometry to C++ was not accepted. Evolution vs. revolution.

21 1995: Roads for ROOT
The official line was GEANT4 and Objectivity; there is not much room left for success with an alternative product when you are alone. The best tactic had to be a mixture of sociology, technicalities and very hard work.
Strong support from PAW and GEANT3 users
Strong support from HP (workstations + manpower)
In November we were ready for a first ROOT show
Java was announced (a problem?)

22 1998: Work & smile
Run II projects at FNAL: data analysis and visualization; data formats and storage. ROOT competed with HistoScope, JAS and LHC++. At CHEP98 (September, Chicago) ROOT was selected by FNAL, followed by RHIC: a vital decision for ROOT. But official support at CERN came only in 2002.

23 ROOT evolution
No time to discuss the creation/evolution of the 110 ROOT shared libraries/packages. ROOT has gradually evolved from a data storage, analysis and visualization system into a more general software environment, totally replacing what was previously known as CERNLIB. This has been possible thanks to MANY contributors from experiments, labs and people working in other fields. ROOT 6, coming soon, includes a new interpreter, Cling, and supports all the C++11 features.

24 Input/Output: Major Steps
From most recent to earliest:
parallel merge
TreeCache
member-wise streaming for STL collections
member-wise streaming for TClonesArray
automatic streamers from the dictionary, with StreamerInfos in self-describing files
streamers generated by rootcint
user-written streamers filling TBuffer

25 GEANT4 Evolution
GEANT4 is an important software tool for current experiments, with more and more physics improvements and validation procedures. However, the GEANT4 transport system is no longer suitable for parallel architectures: too many changes would be required. GEANT5: keep the GEANT4 physics, with a radically new transport system.

26 Tools & Libs
Tools and libraries over the years: HYDRA, BOS, ZBOOK, HBOOK, ZEBRA, MINUIT, PAW, GEANT1, GEANT2, GEANT3, GEANT4, ROOT 1-6, GEANT4+5.

27 Systems today
Software layers (approximate sizes):
OS & compilers: 20 MLOC
Frameworks like ROOT, GEANT4: 5 MLOC
Experiment software: 4 MLOC
End-user analysis software: 0.1 MLOC
Hardware: clusters of multi-core machines (10000x8), GRIDs, clouds; networks 10 Gbit/s; disks 10 PB; RAM 16 GB

28 Systems in 2025?
Software layers (approximate sizes):
OS & compilers: 40 MLOC
Frameworks like ROOT, GEANT5: 10 MLOC
Experiment software
End-user analysis software: 0.2 MLOC
Hardware: multi-level parallel machines (10000x1000x1000), GRIDs, clouds on demand; networks 100 Gbit/s to 10 Tbit/s; disks 1000 PB; RAM 10 TB

29 BUT!!!
It looks like the amount of money devoted to computing is not going to increase with the same slope as in the past few years. Moore's law no longer applies to a single processor. However, Moore's law still looks OK for the amount of computing delivered per dollar or euro, when REALLY using parallel architectures. Using these architectures is going to be a big challenge, but we have no choice!

30 Software and Hardware
GRIDs/clouds are inherently parallel. However, because the hardware has been relatively cheap, GRIDs have pushed towards job-level parallelism at the expense of parallelism within one job. It is not clear today what the winning hardware systems will be: supercomputers? walls of cores with accelerators? zillions of ARM-like systems? Our software must be upgraded with all these possible solutions in mind. A big challenge!

31 Expected Directions
Parallelism: today we do not exploit the existing hardware well (0.6 instructions/cycle on average) because our code was designed to be sequential. Important gains are foreseen (10x?), e.g. in detector simulation.
Automatic data caches: many improvements are required to speed up and simplify skimming procedures and data analysis.

32 Data caches
More effort is required to simplify the analysis of large data sets (typically ROOT Trees). When zillions of files are distributed across Tier-1/2 sites, automatic, transparent, high-performance and safe caches become mandatory on Tier-2/3 sites or even laptops. This must be taken into account in the dilemma of sending jobs to data or vice versa, and will require changes in ROOT itself and in the various data-handling or parallel file systems.

33 Parallelism: key points
Minimize the sequential/synchronization parts (Amdahl's law): very difficult
Run the same code (processes) on all cores to optimize memory use (sharing of code and read-only data)
Job-level is better than event-level parallelism for offline systems
Use the good old principle of data locality to minimize cache misses
Exploit the vector capabilities, but be careful with the new/delete/gather/scatter problem
Reorganize your code to reduce tails

34 Data Structures & parallelism
A typical event/vertices/tracks structure linked by C++ pointers is specific to one process:
Copying the structure implies relocating all the pointers
I/O is a nightmare
Updating the structure from a different thread implies a lock/mutex
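One common way around the pointer problems listed above is to link objects by index instead of by pointer. This is a minimal sketch with hypothetical type names, not the model of any particular framework: tracks refer to their vertex by position in the event's vertex array, so the whole event can be copied, streamed or handed to another thread without any pointer fix-up.

```cpp
#include <vector>

// Pointer-free event model: a Track refers to its Vertex by index into
// the owning Event's vertex array. Copying an Event is then a plain
// value copy; the indices stay valid with no relocation step.
struct Vertex { double x, y, z; };
struct Track  { int vertexIndex; double px, py, pz; };

struct Event {
    std::vector<Vertex> vertices;
    std::vector<Track>  tracks;
};

// Look up the vertex a track points at, via its index.
Vertex track_vertex(const Event& e, const Track& t) {
    return e.vertices[t.vertexIndex];
}
```

The same indices remain meaningful after serialization, which is exactly what raw pointers cannot offer.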

35 Data Structures & Locality
Sparse data structures defeat the memory caches. Group object elements/collections so that the storage layout matches the traversal processes. For example: group the cross-sections for all processes per material, instead of all materials per process.
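The cross-section example above can be made concrete. In this sketch (sizes and names are illustrative), all process cross-sections for one material sit in one contiguous record, so the transport loop for a particle in that material touches a single cache-friendly block instead of striding across per-process tables.

```cpp
#include <vector>

// Locality-friendly layout: for each material, store the cross-sections
// of all physics processes contiguously. The inner loop over processes
// for one material then reads one small contiguous block of memory.
constexpr int kNProcesses = 4;

struct MaterialXS {
    double xs[kNProcesses];  // xs[p] = cross-section of process p
};

using XSTable = std::vector<MaterialXS>;  // one record per material

// Sum all process cross-sections for a given material: a contiguous read.
double total_xs(const XSTable& table, int material) {
    double sum = 0.0;
    for (double x : table[material].xs)
        sum += x;
    return sum;
}
```

The inverted layout (one array of materials per process) would make this same loop gather from as many distant arrays as there are processes.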

36 Create Vectors & exploit Locality
By making vectors, you optimize the instruction cache (gain > 2) and the data cache (gain > 2)
By making vectors, you can use the built-in instruction pipelines of existing processors (gain > 2)
But there is no point in making vectors if your algorithm is still sequential or badly designed for parallelism, e.g.:
too many thread synchronization points (Amdahl)
vector gather/scatter

37 Conventional Transport
[Figure: tracks T1-T4 followed individually through the detector volumes.] Each particle is tracked step by step through hundreds of volumes; once all hits for all tracks are in memory, summable digits are computed.
11/07/2011, LPCC workshop, René Brun

38 Analogy with car traffic

39 New Transport Scheme
[Figure: tracks T1-T4 grouped by volume.] All particles in the same volume type are transported in parallel. Particles entering new volumes, or newly generated ones, are accumulated in the volume basket. Events for which all hits are available are digitized in parallel.
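The grouping step of the basket scheme can be sketched in a few lines. This is a conceptual illustration with hypothetical names, not the actual GEANT prototype code: particles are binned by the volume they occupy, and each resulting basket is then a natural unit of work for a vector lane or a worker thread.

```cpp
#include <map>
#include <vector>

// Basket sketch: instead of tracking each particle alone through the
// full geometry, accumulate particles per volume and transport them a
// basket at a time.
struct Particle { int volumeId; double energy; };

// One basket of particles per volume id.
using Baskets = std::map<int, std::vector<Particle>>;

// Group a flat list of particles into per-volume baskets.
Baskets fill_baskets(const std::vector<Particle>& particles) {
    Baskets baskets;
    for (const Particle& p : particles)
        baskets[p.volumeId].push_back(p);
    return baskets;
}
```

A transport loop would then process one basket with one volume's geometry and cross-section data resident in cache, refilling baskets as particles cross volume boundaries.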

40 Towards Parallel Software
A long way to go! There is no point in just making your code thread-safe: using parallel architectures requires a deep rethinking of the algorithms and dataflow. One such project, GEANT4+5, was launched 2 years ago, and we are starting to see very nice results. But there is still a long way to go to adapt our software (or write radically new software) for the emerging parallel systems.

41 A global effort
Software development is nowadays a world-wide effort, with people scattered across many labs developing simulation, production and analysis code. It remains a very interesting area for new people not scared by big challenges. I had the fantastic opportunity to work for many decades on the development of many general tools, in close cooperation with many people to whom I am very grateful.
