“L1/L2 farm: some thoughts” G. Lamanna, R. Fantechi & D. Di Filippo (CERN) Computing WG – 28.9.2011


1 “L1/L2 farm: some thoughts” G. Lamanna, R. Fantechi & D. Di Filippo (CERN) Computing WG – 28.9.2011

2 The “old” online farm
L1 farm = M PCs, L2 farm = N PCs.
Blue switch → (M+40+1) unidirectional ports.
Green switch → (N+M+40+1) bidirectional ports.
[Diagram: 40×10 Gb/s detector links into the L1 farm (<50 Gb/s out, 180 Gb/s in); L2 farm; CREAM; services (DCS, boot, …); CDR at 2.5 Gb/s over 1×10 Gb/s.]

3 The “new” online farm
Blue switch → (N+M+40+40+1+1+1) bidirectional ports.
Number of ports available for computing = N_tot(128) − 83 = 45.
TCP or UDP?
[Diagram: 40×10 Gb/s detector links (<50 Gb/s, 180 Gb/s); CREAM; services (DCS, boot, …); merged L1/L2 farm; storage farm; CDR at 2.5 Gb/s over 1×10 Gb/s.]
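The port budget above is simple arithmetic; a minimal sketch (variable names are illustrative, the counts are those quoted on the slide):

```python
# Port-budget check for the merged L1/L2 farm on the blue switch.
# All counts come from the slide; names are illustrative only.
TOTAL_PORTS = 128   # switch capacity, N_tot
TEL62_LINKS = 40    # 40 x 10 Gb/s detector links
LKR_LINKS = 40      # LKr data links
CREAM = 1
SERVICES = 1        # DCS, boot, ...
CDR = 1             # 1 x 10 Gb/s link to CDR

reserved = TEL62_LINKS + LKR_LINKS + CREAM + SERVICES + CDR
computing_ports = TOTAL_PORTS - reserved
print(reserved, computing_ports)  # 83 reserved, 45 left for farm PCs
```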

4 Advantages: more efficient reuse of PCs, smaller number of PCs.
Disadvantages: bottleneck, transmission protocol, switch cost and support, flexibility and limited upgradability (up to the switch backplane capability).

5 Blue switch as before.
Each PC processes one complete event (both L1 and L2) instead of a fraction of every event.
No routing needed among the PCs (no protocol issues, no latency due to the network, …).
[Diagram: 40×10 Gb/s detector links (<50 Gb/s, 180 Gb/s); services (DCS, boot, …); CREAM; L1/L2 farm; storage farm; only final results go to CDR (2.5 Gb/s over 1×10 Gb/s).]
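The per-PC processing chain this implies can be sketched roughly as follows (function and key names are hypothetical; the real L1/L2 algorithms are detector-specific):

```python
# Sketch: each PC handles one complete event end to end - L1 on the
# detector fragments, merge with LKr data, then L2. No PC-to-PC traffic;
# only the final result leaves the PC.
def process_event(fragments, lkr_data):
    # L1: a stand-in decision per fragment (real algorithms differ).
    l1_pass = {det: len(frag) > 0 for det, frag in fragments.items()}
    if not all(l1_pass.values()):
        return None  # event rejected at L1
    full_event = {**fragments, "LKr": lkr_data}  # merge into a full event
    # L2: decision on the full event; only this result is shipped out.
    return {"event": full_event, "l2_pass": True}

out = process_event({"RICH": b"\x01", "CHOD": b"\x02", "MUV": b"\x03"}, b"\x04")
```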

6 From the PCs to the switch: only L1 trigger requests for the LKr (peanuts) and final events for CDR (2.5 Gb/s).
Max rate per PC: 180 Gb/s / (number of PCs).
Multiprocessor PCs: the parallelism is exploited at the PC level instead of at the network level.
Routing strategies at TEL62 level similar to the Jonas farm (round robin with a “per burst routing table upgrade” is a little safer).
[Diagram: the L1 fragments of event #123 (RICH, CHOD, MUV) pass through the switch and are merged with the LKr data into a full event for L2 on a single PC; events #124, #125, #126 each follow the same L1 → merging → L2 chain on other PCs.]
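The round-robin routing with a per-burst table update could look roughly like this (a sketch only: the farm list and refresh hook are illustrative assumptions, not the actual TEL62 logic):

```python
# Sketch: round-robin event routing from a TEL62 board to the farm PCs,
# with the destination table allowed to change only between bursts
# (the "per burst routing table upgrade" from the slide).
class RoundRobinRouter:
    def __init__(self, farm_pcs):
        self.table = list(farm_pcs)  # frozen for the duration of a burst

    def route(self, event_id):
        """Deterministic event_id -> PC mapping: every board computes the
        same destination, so all fragments of one event reach one PC."""
        return self.table[event_id % len(self.table)]

    def new_burst(self, farm_pcs):
        """The table may only be upgraded between bursts, never mid-burst."""
        self.table = list(farm_pcs)

router = RoundRobinRouter(["pc01", "pc02", "pc03"])
# RICH, CHOD and MUV boards all send event #123 to the same PC.
assert router.route(123) == router.route(123)
```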

7 Since the routing is univocal/injective (the PCs don’t talk to each other), a tree structure can be foreseen: upgradability and smaller switches (lower cost).
The possibility to use 1 Gb instead of 10 Gb is still open.
[Diagram: tree of switches.]
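Because each TEL62-to-PC path is fixed, a tree of small switches can be sized with simple arithmetic; a sketch with purely illustrative port counts:

```python
import math

# Illustrative sizing of a two-level switch tree replacing one big switch.
# All numbers below are assumptions for the sketch, not slide values.
FARM_PCS = 45     # PCs to attach
LEAF_PORTS = 24   # ports per small leaf switch
UPLINKS = 2       # uplinks from each leaf to the root switch

pcs_per_leaf = LEAF_PORTS - UPLINKS
leaves = math.ceil(FARM_PCS / pcs_per_leaf)
print(leaves)  # 3 leaf switches
```

Adding capacity then means adding leaf switches, which is the upgradability argument on the slide.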

