1 Tier1 Status Report Andrew Sansum GRIDPP12 1 February 2004

2 Tier-1 Status Report Overview – Hardware configuration/utilisation – That old SRM story: dCache deployment – Network developments – Security stuff

3 1 February 2004 Tier-1 Status Report Production Objectives [Timeline chart: SC2 and SC3 service challenge and LHCB milestones across Q1–Q4]

4 1 February 2004 Tier-1 Status Report Hardware CPU – 500 dual-processor Intel PIII and Xeon servers, mainly rack-mounted (about 884 KSI2K), plus about 100 older systems. Disk Service – mainly standard configuration – Commodity disk service based on a core of 57 Linux servers – External IDE & SATA/SCSI RAID arrays (Accusys and Infortrend) – ATA and SATA drives (1500 spinning drives – another 500 old SCSI drives) – About 220 TB of disk (last tranche deployed in October) – Cheap and (fairly) cheerful. Tape Service – STK Powderhorn 9310 silo with 8 9940B drives. Max capacity 1 PB at present but capable of 3 PB by 2007.

5 1 February 2004 Tier-1 Status Report LCG in September

6 1 February 2004 Tier-1 Status Report LCG Load

7 1 February 2004 Tier-1 Status Report Babar Tier-A

8 1 February 2004 Tier-1 Status Report Last 7 Days – ACTIVE: Babar, D0, LHCB, SNO, H1, ATLAS, ZEUS

9 1 February 2004 Tier-1 Status Report dCache Motivation: – Needed SRM access to disk (and tape) – We have 60+ disk servers (140+ filesystems) – needed disk pool management. dCache was the only plausible candidate.

10 1 February 2004 Tier-1 Status Report History (Norse saga) of dCache at RAL. Mid 2003 – We deployed a non-grid version for CMS; it was never used in production. End of 2003/start of 2004 – RAL offered to package a production-quality dCache – Stalled due to bugs and holidays – went back to the developers and LCG developers. September 2004 – Redeployed dCache into the LCG system for the CMS and DTeam VOs; dCache deployed within the JRA1 testing infrastructure for gLite I/O daemon testing. Jan 2005: still working with CMS to resolve interoperation issues, partly due to hybrid grid/non-grid use. Jan 2005: prototype back end to tape written.

11 1 February 2004 Tier-1 Status Report dCache Deployment dCache deployed as a production service (also test instance, JRA1, developer1 and developer2?). Now available in production for ATLAS, CMS, LHCB and DTEAM (17 TB now configured – 4 TB used). Reliability good – but load is light. Will use dCache (preferably the production instance) as the interface to Service Challenge 2. Work underway to provide a tape backend – a prototype is already operational; this will be the production SRM to tape at least until after the July service challenge.
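As a rough illustration of what SRM access to a production dCache instance can look like from a client's point of view (not the Tier-1's actual procedure), the sketch below drives the srmcp client shipped with dCache from Python. The endpoint host, port and /pnfs path are hypothetical placeholders, not the real RAL values.

```python
# Minimal sketch: copy a local file into a dCache SRM endpoint with srmcp.
# The hostname, port and /pnfs path below are hypothetical placeholders;
# srmcp must be installed and a valid grid proxy available.
import subprocess

local_url = "file:////tmp/testfile.dat"  # srmcp-style local file URL (hypothetical file)
srm_url = ("srm://dcache.example.ac.uk:8443"
           "/pnfs/example.ac.uk/data/dteam/testfile.dat")

# srmcp asks the SRM door for a transfer URL (e.g. gridftp) and the pool
# manager selects a disk pool behind it; the client then moves the data.
subprocess.run(["srmcp", local_url, srm_url], check=True)
```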

12 1 February 2004 Tier-1 Status Report Current Deployment at RAL

13 1 February 2004 Tier-1 Status Report Original Plan for Service Challenges [Diagram: experiments, the SJ4 and UKLIGHT networks, the production dCache and a test dCache(?) as the candidate service challenge technology]

14 1 February 2004 Tier-1 Status Report dCache for Service Challenges [Diagram: a head node running the gridftp and dCache doors in front of the disk servers, with links to SJ4 for the experiments and to UKLIGHT for the service challenge]

15 1 February 2004 Tier-1 Status Report Network – Recent upgrade to the Tier1 network – Begin to put in place a new generation of network infrastructure – Low-cost solution based on commodity hardware – 10 Gigabit ready – Able to meet the needs of: forthcoming service challenges, increasing production data flows

16 1 February 2004 Tier-1 Status Report September [Diagram: disk and CPU attached to two Summit 7i switches, with a 1 Gbit uplink to the site router]

17 1 February 2004 Tier-1 Status Report Now (Production) [Diagram: disk+CPU on a Nortel 5510 stack (80 Gbit), connected by N*1 Gbit links (1 Gbit/link) to the Summit 7i, which uplinks to the site router]

18 1 February 2004 Tier-1 Status Report Soon (Lightpath) [Diagram: disk+CPU on the Nortel 5510 stack (80 Gbit) behind the Summit 7i (N*1 Gbit, 1 Gbit/link); a 1 Gbit link to the site router and a dual-attached 2*1 Gbit connection towards UKLIGHT, with 10 Gb capability]

19 1 February 2004 Tier-1 Status Report Next (Production) [Diagram: disk+CPU on the Nortel 5510 stack (80 Gbit, 1 Gbit/link), N*10 Gbit to a 10 Gigabit switch, and a 10 Gb uplink to the new site router on the RAL site]

20 1 February 2004 Tier-1 Status Report Machine Room Upgrade Large mainframe cooling (cooking) infrastructure: ~540 KW – Substantial overhaul now completed with good performance gains, but was close to maximum by mid summer; temporary chillers over August (funded by CCLRC) – Substantial additional capacity ramp-up planned for the Tier-1 (and other e-Science) service. November (annual) air-conditioning RPM shutdown – Major upgrade – new (independent) cooling system (400 KW+) – funded by CCLRC. Also: profiling power distribution, new access control system. Worrying about power stability (brownout and blackout in the last quarter).

21 1 February 2004 Tier-1 Status Report MSS Stress Testing Preparation for SC3 (and beyond) underway since August (Tim Folkes). Motivation – service load has historically been rather low. Look for gotchas. Review known limitations. Stress test – part of the way through the process – just a taster here: – Measure performance – Fix trivial limitations – Repeat – Buy more hardware – Repeat
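As a flavour of what one pass of the measure/fix/repeat loop can look like, here is a minimal write-throughput sketch. The staging directory, file count and file size are hypothetical, and the real tests exercise the full flfsys/tape chain rather than just local disk writes.

```python
# Minimal sketch: write a batch of test files and report aggregate throughput.
# STAGE_DIR, FILE_MB and N_FILES are illustrative assumptions, not the values
# used in the actual RAL stress tests.
import os
import time

STAGE_DIR = "/stage/adstest"   # hypothetical staging area migrated to tape later
FILE_MB = 512                  # size of each test file in MB
N_FILES = 20

payload = os.urandom(1024 * 1024)  # 1 MB of incompressible data
start = time.time()
for i in range(N_FILES):
    with open(os.path.join(STAGE_DIR, f"stress_{i:03d}.dat"), "wb") as f:
        for _ in range(FILE_MB):
            f.write(payload)
elapsed = time.time() - start
total_mb = N_FILES * FILE_MB
print(f"wrote {total_mb} MB in {elapsed:.1f} s ({total_mb / elapsed:.1f} MB/s)")
```

Repeating the same measurement after each fix (and again after any hardware purchase) gives a like-for-like view of where the next bottleneck sits.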

22 1 February 2004 Tier-1 Status Report [Diagram of the production and test datastore systems, dated Thursday, 04 November 2004: an STK 9310 silo with 8 x 9940 tape drives attached through two Brocade FC switches (ADS_switch_1 and ADS_Switch_2, 4 drives to each switch) to the AIX dataservers ermintrude, florence, zebedee and dougal; brian (AIX) runs flfsys with the catalogue and cache, mchenry1 (AIX) runs a test flfsys, basil is an AIX test dataserver, dylan (AIX) handles import/export, buxton (SunOS) runs ACSLS, and the Redhat nodes ADS0CNTR (counter), ADS0PT01 (pathtape) and ADS0SB01 (SRB interface) front user access (SRB Inq, S commands, MySRB; user pathtape, ADS tape and ADS sysreq admin commands; create/query); the legend distinguishes physical FC/SCSI connections, sysreq udp commands, user SRB and pathtape commands, VTP and SRB data transfers, STK ACSLS commands and logging – the sysreq, vtp and ACSLS connections drawn to dougal also apply to the other dataserver machines but are left out for clarity]

23 1 February 2004 Tier-1 Status Report Atlas Datastore Architecture (28 Feb 03, B Strong) [Diagram: flfsys (+libflf) on the catalogue server brian, with its catalogue, backup catalogue and stats, accepts user, farm, admin, tape and import/export commands over sysreq; flfstk and tapeserv drive the STK and IBM tape drives (via flfaio), with the robot server buxton issuing mount/dismount control to the tape robot through the ACSLS API (SSI/CSI/LMU); flfdoexp, flfdoback and the datastore script manage copies A, B and C on cache disk; the pathtape server rusty and servesys map long and short names (frontend/backend); the I/E server dylan handles import/export; SE recycling (+libflf) reads back data; user programs on user nodes and the farm server transfer data over libvtp; other components shown include flfscan, cellmgr and flfqryoff (a copy of the flfsys code)]

24 1 February 2004 Tier-1 Status Report Catalogue Manipulation

25 1 February 2004 Tier-1 Status Report Write Performance – Single Server Test

26 1 February 2004 Tier-1 Status Report Conclusions Have found a number of easily fixable bugs. Have found some less easily fixable architecture issues. Have a much better understanding of the limitations of the architecture. Estimate suggests 60-80 MB/s to tape now. Buy more/faster disk and try again. Current drives are good for 240 MB/s peak – actual performance is likely to be limited by the ratio of drive (read+write) time to (load+unload+seek) time.
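To make that ratio concrete, here is a back-of-the-envelope sketch of effective per-drive throughput; the streaming rate, file size and per-file overhead are assumed figures for illustration, not measurements from these tests.

```python
# Effective tape throughput when each file pays a fixed mount/seek penalty.
# All numbers below are illustrative assumptions, not measured RAL values.

def effective_rate(file_mb: float, drive_mb_s: float, overhead_s: float) -> float:
    """MB/s for one drive when each file of file_mb costs overhead_s of
    load/unload/seek on top of the streaming transfer time."""
    transfer_s = file_mb / drive_mb_s
    return file_mb / (transfer_s + overhead_s)

# Example: a drive streaming at ~30 MB/s (8 drives ~ 240 MB/s peak) writing
# 2 GB files with an assumed 60 s of load/unload/seek per file.
per_drive = effective_rate(file_mb=2000, drive_mb_s=30, overhead_s=60)
print(f"per drive: {per_drive:.1f} MB/s, 8 drives: {8 * per_drive:.1f} MB/s")
```

Larger files (or fewer mounts per file) push the effective rate back towards the streaming peak, which is why the disk buffer in front of the drives matters.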

27 1 February 2004 Tier-1 Status Report Security Incident 26 August: an X11 scan acquires a userid/password at an upstream site. Hacker logs on to the upstream site and snoops known_hosts. SSH to a Tier-1 front-end host using an unencrypted private key from the upstream site. Upstream site responds – but does not notify RAL. Hacker loads an IRC bot on the Tier1 and registers at a remote IRC server (for command/control/monitoring). Tries a root exploit (fails), attempts logins to downstream sites (fails – we think). 7th October – Tier-1 is notified of the incident by the IRC service and begins its response. 2000-3000 sites involved globally.

28 1 February 2004 Tier-1 Status Report Objectives Comply with site security policy – disconnect … etc etc. – Will disconnect hosts promptly once active intrusion is detected. Comply with external security policies (eg LCG) – Notification. Protect downstream sites by notification and pro-active disconnection – Identify involved sites – Establish direct contacts with upstream/downstream sites. Minimise service outage – Eradicate infestation.

29 1 February 2004 Tier-1 Status Report Roles Incident controller – manage the information flood, deal with external contacts. Log trawler – hunt for contacts. Hunter/killer(s) – search for compromised hosts. Detailed forensics – understand what happened on compromised hosts. End user contacts – get in touch with users, confirm usage patterns, terminate IDs.

30 1 February 2004 Tier-1 Status Report Chronology At 09:30 on 7th October the RAL network group forwarded a complaint from Undernet suggesting unauthorized connections from Tier1 hosts to Undernet. At 10:00 initial investigation suggests unauthorised activity on csfmove02; csfmove02 physically disconnected from the network. By now 5 Tier1 staff + the E-Science Security Officer were 100% engaged on the incident, with additional support from the CCLRC network group and the site security officer; effort remained at this level for several days. Babar support staff at RAL were also active tracking down unexplained activity. At 10:07 a request was made to the site firewall admin for firewall logs of all contacts with suspected hostile IRC servers. At 10:37 the firewall admin provided an initial report, confirming unexplained current outbound activity from csfmove02, but no other nodes involved. At 11:29 Babar reported that the bfactory account password was common to the following additional IDs (bbdatsrv and babartst). At 11:31 Steve completed the rootkit check – no hosts found – although possible false positives on Redhat 7.2 which we are uneasy about. By 11:40 preliminary investigations at RAL had concluded that an unauthorized access had taken place onto host csfmove02 (a data mover node) which in turn was connected outbound to an IRC service. At this point we notified the security mailing lists (hepix-security, lcg-security, hepsysman).

31 1 February 2004 Tier-1 Status Report Security Events

32 1 February 2004 Tier-1 Status Report Security Summary The intrusion took 1-2 staff months of CCLRC effort to investigate (6 staff full-time for 3 days (5 Tier-1 plus the E-Science Security Officer), working long hours). Also involved: – Networking group – CCLRC site security – Babar support – Other sites. Prompt notification of the incident by the upstream site would have substantially reduced the size and complexity of the investigation. The good standard of patching on the Tier1 minimised the spread of the incident internally (but we were lucky). Can no longer trust who logged-on users are – many userids (globally) are probably compromised.

33 1 February 2004 Tier-1 Status Report Conclusions A period of consolidation. User demand continues to fluctuate, but an increasing number of experiments are able to use LCG. Good progress on SRM to disk (dCache). Making progress with SRM to tape. Having an SRM isn't enough – it has to meet the needs of the experiments. Expect focus to shift (somewhat) towards the service challenges.

