Jochen Thäder – Kirchhoff Institute of Physics – University of Heidelberg
HLT Data Challenge – PC²
– Setup / Results –
– Clusterfinder Benchmarks –

1 HLT Data Challenge – PC² – Setup / Results – Clusterfinder Benchmarks –

2 PC² Paderborn
PC² – Paderborn Center for Parallel Computing
Architecture of the ARMINIUS cluster:
– 200 nodes with Dual Intel Xeon 64-bit, 3.2 GHz
– 800 GByte main memory (4 GByte per node)
– InfiniBand network
– Gigabit Ethernet network
– RedHat Linux 4

3 General Test Configuration
Hardware Configuration
– 200 nodes with Dual 3.2 GHz Intel Xeon CPUs
– Gigabit Ethernet
Framework Configuration
– HLT Data Framework with TCP Dump Subscriber processes (TDS)
– HLT Online Display connecting to TDS
Software Configuration
– RHEL 4 update 1
– RHEL kernel version 2.6
– bigphys area patch
– PSI2 driver for kernel 2.6

4 Full TPC (36 slices) on 188 nodes (I)
Hardware Configuration
– 188 nodes with Dual 3.2 GHz Intel Xeon CPUs
Framework Configuration
– Compiled in debug mode, no optimizations
– Setup per slice (6 incoming DDLs):
  3 nodes for cluster finding, each node with 2 filepublisher processes and 2 cluster-finding processes
  2 nodes for tracking, each node with 1 tracking process
– 8 Global Merger processes merging the tracks of the 72 tracking nodes
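The per-slice setup above accounts for the 188 nodes. A minimal sketch of the node budget, assuming each Global Merger process occupies its own node (the slide does not state this explicitly):

```python
# Sanity check of the node budget described on the slide.
SLICES = 36
CF_NODES_PER_SLICE = 3   # cluster-finding nodes per slice
TR_NODES_PER_SLICE = 2   # tracking nodes per slice
GM_NODES = 8             # Global Merger processes, one node each (assumption)

cf_nodes = SLICES * CF_NODES_PER_SLICE   # 108 cluster-finding nodes
tr_nodes = SLICES * TR_NODES_PER_SLICE   # 72 tracking nodes
total = cf_nodes + tr_nodes + GM_NODES

print(cf_nodes, tr_nodes, total)  # 108 72 188
```

This matches the 72 tracking nodes and the 188-node total quoted on the slide.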

5 Full TPC (36 slices) on 188 nodes (II)
[Diagram: HLT Data Framework setup for one slice. Simulated TPC data enters via the DDLs into CF (cluster finder) patch processes on the CF nodes, flows to the TR (tracking) nodes, and the GM (Global Merger) nodes merge the tracks and feed the Online Display.]

6 Full TPC (36 slices) on 188 nodes (III)
Empty Events
– Real data format, empty events, no hits/tracks
– Rate approx. 2.9 kHz after tracking
– Limited by the filepublisher processes

7 Full TPC (36 slices) on 188 nodes (IV)
Simulated Events
– Simulated pp data (14 TeV, 0.5 T)
– Rate approx. 220 Hz after tracking
– Limited by the tracking processes; solution: use more nodes

8 Conclusion of the Full TPC Test
– The main bottleneck is the processing of the data itself
– The system is not limited by the HLT data transport framework
– The test was limited by the number of available nodes

9 „Test Setup“

10 Clusterfinder Benchmarks (CFB)
pp events: 14 TeV, 0.5 T
Number of events: 1200
Iterations: 100
TestBench: SimpleComponentWrapper
Test nodes:
– HD ClusterNodes e304, e307 (PIII, 733 MHz)
– HD ClusterNodes e106, e107 (PIII, 800 MHz)
– HD GatewayNode alfa (PIII, 1.0 GHz)
– HD ClusterNode eh001 (Opteron, 1.6 GHz)
– CERN ClusterNode eh000 (Opteron, 1.8 GHz)

11 CFB – Signal Distribution per patch

12 CFB – Cluster Distribution per patch

13 CFB – PadRow / Pad Distribution

14 CFB – Timing Results (I)

15 CFB – Timing Results (II)
All times in ms per event.

CPU              Patch 0  Patch 1  Patch 2  Patch 3  Patch 4  Patch 5  Average
Opteron 1.6 GHz  2.93     3.92     2.73     2.96     2.93     2.90     3.06
Opteron 1.8 GHz  3.96     5.32     3.66     3.98     3.94     3.99     4.13
PIII 1.0 GHz     4.95     6.65     4.51     4.90     4.87     4.81     5.11
PIII 800 MHz     6.04     8.10     5.64     6.12     6.06     6.01     6.33
PIII 733 MHz     6.57     8.82     6.14     6.67     6.61     6.54     6.90
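The Average column can be recomputed from the per-patch columns, and the same data shows which patch dominates. A small sketch (values transcribed from the slide; the recomputed averages agree with the table to within last-digit rounding):

```python
# Per-patch cluster-finder times in ms, transcribed from the slide.
times = {
    "Opteron 1.6 GHz": [2.93, 3.92, 2.73, 2.96, 2.93, 2.90],
    "Opteron 1.8 GHz": [3.96, 5.32, 3.66, 3.98, 3.94, 3.99],
    "PIII 1.0 GHz":    [4.95, 6.65, 4.51, 4.90, 4.87, 4.81],
    "PIII 800 MHz":    [6.04, 8.10, 5.64, 6.12, 6.06, 6.01],
    "PIII 733 MHz":    [6.57, 8.82, 6.14, 6.67, 6.61, 6.54],
}

for cpu, patches in times.items():
    avg = sum(patches) / len(patches)
    slowest = patches.index(max(patches))
    print(f"{cpu}: average {avg:.2f} ms, slowest patch {slowest}")
```

On every CPU, patch 1 is the slowest, which is the per-patch imbalance the conclusion slide refers to.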

16 CFB – Conclusion / Outlook
– Learned about the different needs of each patch
– The number of processing components has to be adjusted to the particular patch
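One way to read the second bullet: give slower patches more parallel components. A purely hypothetical sketch, allocating components in proportion to the measured per-patch times (Opteron 1.8 GHz row of the timing table); the 12-component budget is an assumption for illustration, not from the slides:

```python
# Hypothetical per-patch allocation: components proportional to measured time,
# so the slow patch 1 gets the most parallel cluster-finder components.
patch_times_ms = [3.96, 5.32, 3.66, 3.98, 3.94, 3.99]  # Opteron 1.8 GHz row
total_components = 12  # assumed component budget per slice (illustration only)

shares = [t / sum(patch_times_ms) for t in patch_times_ms]
alloc = [max(1, round(s * total_components)) for s in shares]
print(alloc)  # patch 1 receives the largest share
```

A real deployment would round against a hard node budget and rebalance from observed rates rather than one benchmark row; this only illustrates the direction of the adjustment.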
