PC Farms & Central Data Recording

1 PC Farms & Central Data Recording
CERN - European Laboratory for Particle Physics. ATLAS Trigger/DAQ Workshop, Chamonix, October 20, 1998. Frédéric Hemmer, CERN/IT

2 Overview
NA48 Data Recording
NA45 Data Recording in Objectivity
NA57 Data Recording in HPSS
Summary

3 NA48 Central Data Recording
[Diagram of the CDR chain; labels: sub-detector VME crates, Cisco 5505, event builder, online PC farm, FDDI, Fast Ethernet, SUN E450 (500 GB disk space), XLNT Gbit, 3Com 3900, 7 km Gigabit Ethernet, 3Com 9300, GigaRouter, HiPPI, offline PC farm, CS/2 (2.5 TB disk space)]

4 NA48 Data Recording in 1998
May to September 1998
Raw data on tape: 68 TB (1450 tapes, mainly 50 GB tapes)
Selected reconstructed data: 12.5 TB
Total with the 1997 data: 96 TB
Average data rate: 18 MB/s (peaks at 23 MB/s)
The CDR system can sustain higher rates; the limitation is the CPU time available
Data recorded as files (4 million)

5 NA48 Online Farm
11 sub-detector PCs (dual PII-266, 128 MB)
8 event-building PCs (dual PII-266, 128 MB, 18 GB SCSI)
4 CDR routing PCs (dual PII-266, 64 MB, FDDI)
All running Linux
Software event building in the inter-burst gap
Optional software filter (tags data)
Data sent to the computer centre (local disk buffers: 144 GB, about 2 hours)
On the CS/2: L3 filtering and tape writing
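The 144 GB / 2 hour buffer figures on this slide are consistent with the 18 MB/s average rate quoted earlier; a quick back-of-the-envelope check:

```python
# Sanity check of the slide's buffer sizing: 144 GB of local disk
# absorbed at the quoted 18 MB/s average rate.
buffer_mb = 144 * 1024   # 144 GB expressed in MB
rate_mb_s = 18           # average CDR rate from the previous slide
hours = buffer_mb / rate_mb_s / 3600
print(round(hours, 1))   # about 2.3 hours of buffering
```

So the farm can ride out roughly a two-hour outage of the link to the computer centre before the local buffers fill, matching the slide's claim.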

6 NA48 Plans for 1999
[Diagram; labels: sub-detector VME crates, Fast Ethernet, Cisco 5505, event builder, 7 km Gigabit Ethernet, 4 × SUN E450 (4.5 TB disk space), 3Com 3900, 3Com 9300, HiPPI, combined on/offline PC farm]

7 NA45 Data Recording
[Diagram; labels: sub-detector VME crates, NA48, SCI event builder, online PC farm, Fast Ethernet (3Com 3900), PCSF, 7 km Gigabit Ethernet, 2 × SUN E450 (500 GB disk space), 3Com 9300, HiPPI]

8 NA45 Raw Data Recording in Objectivity
October and November 1998
Estimated bandwidth: 15 MB/s
Processes translate the raw data format into Objectivity
Database files (1.5 GB) are closed, then written to tape
Steering done by a set of Perl scripts on the disk servers
Online filtering/reconstruction/calibration possible
The farm runs Windows NT
Reconstruction can use PCSF

9 PCSF Configuration (1)
Server running NT 4.0 Server SP3: 1 dual-capable 200 MHz machine, 96 MB, with a 9 GB data disk (mirrored); runs the LSF central queues
Server running NT Terminal Server Beta 2: 1 dual 200 MHz, 128 MB, with a 4 GB data disk; runs IIS 3.0 and is accessible from outside CERN; also hosts the ASPs for Web access
Servers running NT 4.0 Workstation SP3: 9 dual 200 MHz, 64 MB, 2 × 4 GB; 25 dual 300 MHz, 128 MB, 2 × 4 GB; all equipped with boot PROMs
We requested 2 GB disks, but the vendor was unable to deliver them in the time between the quote and the order; they supplied 4 GB disks at a $35 price increase per disk

10 PCSF Configuration (2)
Machines interconnected with 4 3Com BaseT switches
Display/keyboard/mouse connected to a Raritan multiplexer
PC Duo used for remote admin access; there were problems with other products
All running LSF 3.0 (LSF 3.2 does not work, and support is weak)
Completely integrated with NICE
4 × 3000 switches replaced by 2 × 3900 with Gigabit Ethernet
Some Ethernet cards did not work in some PCs: the same card marketing name may come with different chipsets (e.g. 82558, 82559)
PC Anywhere gave problems on multiprocessor machines
Remotely Possible did not install well in unattended setup

11 Racking Evolution
[Photos of the racking in 1997 and 1998]

12 HPSS Test System
A schematic view of the CERN HPSS test installation, also showing the NA57 CDR computer. As can be seen, the NA57 CDR connects to HPSS over FDDI. The CERN HPSS test system consists of:
one IBM G40 as the HPSS main server and disk and 3590 tape mover (a very busy machine)
one IBM F50 (1 CPU) as disk mover with SSA disks
one DEC Alpha 500 as disk mover and Redwood tape mover with an STK PVR; this node was also used for the DEC port

13 HPSS Test Results (NA57)
Disk-to-disk transfers without tape migration: sustained 5-6 MB/s (1 GB files), 4-5 MB/s (200 MB files); average ~5.5 MB/s; peak 6.5 MB/s
With disk-to-tape migration: sustained 2-3 MB/s (200 MB files); average 3-5 MB/s; peak: MB/s
The DAQ was limited to 7 MB/s read!
Pure disk-to-disk transfers with 1 GB files gave a good rate, 5-6 MB/s sustained, whereas with 200 MB files we only achieved 4-5 MB/s. This difference is explained by delays in our own CDR software: as configured, it took on average about 5 seconds to detect a new file, which for a 200 MB file is about 20% of the total transfer time. With migration, the bandwidth into the HPSS main server, the G40, put additional constraints on the transfer rates. The conclusion from these tests was that our HPSS test system is not suitable for performance tests. However, we learned where our bottlenecks are, e.g. local migration and dead time in the CDR software. HPSS itself does not seem to have imposed any constraint; it used the hardware as well as it could.
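The per-file detection delay explains why small files fare worse: a fixed delay is amortised over less transfer time. A small model of this effect (the 6 MB/s raw rate is our assumption, taken near the measured peak; the 5 s delay is from the slide):

```python
def effective_rate(file_mb, raw_rate_mb_s, detect_delay_s=5.0):
    """Effective per-file transfer rate once a fixed detection delay
    is added to the raw transfer time."""
    return file_mb / (detect_delay_s + file_mb / raw_rate_mb_s)

# Assumed 6 MB/s raw disk-to-disk rate, 5 s delay per file (from the slide).
big = effective_rate(1024, 6.0)   # 1 GB files: delay barely matters
small = effective_rate(200, 6.0)  # 200 MB files: delay costs noticeably more
```

The model reproduces the qualitative pattern in the measurements: large files stay close to the raw rate while 200 MB files lose an appreciable fraction of it to the fixed per-file dead time.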

14 Production System
[Diagram; labels: MPPC2604, Barracuda-9 18 GB disks, 3900 switch, Gigabit Ethernet, STK Powderhorn silos, Redwood, 100BaseT, computer centre, 3590, NA57, FDDI, HiPPI, ~120 GB mirrored (×2), IBM 3494 robot, RS6000/F50 (2 CPU, 512 MB), 2 × DEC Alpha 4100 (4 CPU, 512 MB), RS6000/F50 (2 CPU, 256 MB)]
During the summer of 1998 an HPSS production system was set up at CERN. It consists of:
two IBM F50s with 2 CPUs and 512 and 256 MB of memory
one (or two) DEC (Compaq) Alpha 4100 with 4 CPUs and 512 MB of memory
two disk storage classes with 120 GB of mirrored disk each
one 3590 tape storage class with two drives
one Redwood tape (50 GB) storage class with 2 or 3 drives
Two pre-production services are about to start:
NA57 CDR to the DEC disk storage class, in hierarchy with the Redwoods
"user tapes" on the IBM disk storage class, in hierarchy with the 3590s
First preliminary tests of transfers between NA57 and the DEC disk storage class gave between 5.5 and 7 MB/s over FDDI. If ready in time, a new Gigabit Ethernet link will be used, which should allow about 10 MB/s of bandwidth.

15 Current & Future Data rates

16 Summary
Online PC farms are being used to record data at sensible rates (Linux)
Offline PC farms are being used for reconstruction/filtering/analysis (Linux/NT)
New paradigms for recording data are being explored (Objectivity/HPSS)
Still a lot to do on scalable farm management, global steering, CDR monitoring, etc.
