
HAMBURG ZEUTHEN DESY Site Report HEPiX/HEPNT Fermilab 2002-10-23 Knut Woller.



Presentation transcript:

1 HAMBURG ZEUTHEN DESY Site Report HEPiX/HEPNT Fermilab 2002-10-23 Knut Woller

2 Overview (DESY IT Systems)
I will focus on ongoing activities and projects:
- Storage and data management: dCache, ExaStore
- User Registry Project
- Windows Migration Project
- Mail Consolidation

3 Storage and Data Management
New requirements and challenges:
- Need to decrease storage costs
- Increasing number of clients burdens the HSM
- Distributed clients create awkward data paths, and distributed NFS does not scale
- The "traveling scientist" requires mobility
- Users are increasingly unable or unwilling to judge the features or cost of a specific store; they just want to use it

4 About dCache
- A distributed cache between clients and the HSM
- Collaborative development at DESY and FNAL
- In production use at DESY and FNAL; more labs are looking into it
- DESY currently runs about 30 TB of read pool on IDE RAID servers
- All major DESY groups use it by now
- For us, it is the method to access HSM data
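The core idea of a read pool sitting between clients and the tape-backed HSM can be sketched in a few lines. This is a toy model, not the real dCache implementation: the class name, the `hsm_fetch` callback, and the LRU eviction policy are illustrative assumptions.

```python
class ReadPool:
    """Toy model of a dCache-style read pool: a disk cache between
    clients and the HSM. A file is staged from tape only on a cache
    miss; repeat reads are served from disk. (Sketch only -- names
    and the eviction policy are assumptions, not the dCache API.)"""

    def __init__(self, hsm_fetch, capacity_bytes):
        self.hsm_fetch = hsm_fetch        # callable: path -> bytes (tape stage)
        self.capacity = capacity_bytes
        self.cache = {}                   # path -> file contents
        self.lru = []                     # access order, oldest first

    def read(self, path):
        if path in self.cache:
            self.lru.remove(path)         # cache hit: no tape mount
        else:
            data = self.hsm_fetch(path)   # cache miss: stage from HSM once
            # Evict least recently used files until the new one fits.
            while self.lru and sum(map(len, self.cache.values())) + len(data) > self.capacity:
                self.cache.pop(self.lru.pop(0))
            self.cache[path] = data
        self.lru.append(path)
        return self.cache[path]
```

A second read of the same file then touches only the disk pool, which is what makes thousands of clients tolerable for the HSM behind it.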

5 dCache Features
- Allows the use of cheap tape media by greatly reducing the number of mounts
- Coordinates site-wide data staging and reduces data-management manpower
- Supports several HSMs (OSM, Enstore, Eurogate)
- Can be used transparently by applications through a C API (ROOT supports dCache)
- Scales well to thousands of clients and hundreds of pool servers
- Can be used in Grids (bbFTP, GridFTP)
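The mount reduction mentioned above comes from coordinating staging: instead of mounting a tape per request, pending requests are grouped by tape volume so each tape is mounted once per batch. A minimal sketch of that grouping, with a made-up `volume_of` lookup standing in for the HSM's file-to-tape catalogue:

```python
from collections import defaultdict

def schedule_stages(requests, volume_of):
    """Group pending stage requests by tape volume so each tape is
    mounted once per batch. Simplified illustration of coordinated
    staging; volume_of (file -> tape label) is an assumed helper."""
    by_volume = defaultdict(list)
    for path in requests:
        by_volume[volume_of(path)].append(path)
    # One mount per volume; all files on that tape are staged in sequence.
    return list(by_volume.items())
```

Three requests touching two tapes thus cost two mounts rather than three, and the saving grows with the request backlog.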

6 dCache Development
- The DESY/FNAL project is well advanced
- Presentations have been made at recent HEPiX and CHEP conferences
- Project information is at http://www-dcache.desy.de
- We plan to set up a central read disk pool of 100+ TB when we migrate to large, cheap tapes (STK 9940B) in a few months

7 ExaStore
- Since 1999, major user groups have demanded a "Large Central File Store" at DESY
- Features: multi-terabyte capacity, high performance, single-filesystem view, random access
- AFS will not scale to this size
- dCache does not fit the requirements
- Commercial NAS solutions do not scale well
- EXANET came along in 2000 with a product proposal that suits our needs

8 About ExaStore
- Seen from the outside, ExaStore is a highly scalable, high-performance NAS (or a huge virtual disk)
- Internally, it is built from disk and CPU servers and independent RAID arrays
- What sets ExaStore apart:
  - The use of commodity components
  - Their cluster file system
  - Their redundant server mesh
- ExaStore scales in (at least) two dimensions:
  - In capacity, by adding disks
  - In performance, by adding nodes and/or uplinks
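One way a cluster file system can scale performance by adding nodes is deterministic placement: hash each file path to pick the serving node, so load spreads over the mesh and a new node simply widens the modulus. This is a generic sketch of that idea, not ExaStore's actual (proprietary) placement scheme:

```python
import hashlib

def node_for(path, nodes):
    """Deterministically map a file path to one of the cluster's
    server nodes by hashing the path. Generic illustration of
    hash-based placement; not ExaStore's real algorithm."""
    digest = hashlib.sha1(path.encode("utf-8")).digest()
    # Use the first 4 bytes of the digest as an unsigned integer index.
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]
```

Every client computes the same answer with no central lookup, which is what lets throughput grow with the node count.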

9 Why ExaStore at DESY?
- Because the current jungle of cross-mounted NFS disks is an administrative nightmare
- Because NFS data management at DESY is currently handled decentrally in the user groups; IT wants to fill this gap to make better use of resources
- Because scaling the current system of distributed NFS servers reduces stability and manageability
- Because current NAS solutions are limited to 12-18 TB per box and a fixed number of uplinks and server nodes
- Because we do not think it would be wise to invent our own SAN/NAS solution

10 ExaStore Experiences
- First test system at DESY since April, in beta test since June (4 nodes, 1.5 TB)
- No crashes in four months
- Performance is not yet where we want it to be, but well on the way
- We want to acquire a production system with 8 nodes and 12 TB (management approval pending)

11 User Registry Project
- The DESY user registry is old, limited, and inflexible
- The number of user groups is increasing
- Each new complex software system today comes with a proprietary registry (e.g. mail server, calendar server, Oracle, SAP, ...)
- Interfaces to the HR database, phone book, etc. are required
- We need a site-wide metadirectory toolbox
- Groups have a large demand for delegation of rights

12 Project Approach
- The design phase started in January
- We now have a clear functional description
- We looked into commercial (Tivoli, CA, ...) and open-source (Ganymede) tools, none of which seem to fit our needs
- We are gathering troops to start coding
- Platform accounts (Unix, Windows, Kerberos) should be manageable in Q2/2003
- Platform adaptors will take some time
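The metadirectory-plus-adaptors approach above can be sketched as one authoritative registry that fans every account change out to per-platform adaptors. Everything here is hypothetical (class name, fields, adaptor interface); it only illustrates the architecture, not the tool being built:

```python
class Registry:
    """Sketch of a metadirectory: one authoritative account record,
    propagated on every change to registered per-platform adaptors
    (Unix, Windows, Kerberos, ...). All names and fields are
    hypothetical illustrations of the design, not the real system."""

    def __init__(self):
        self.accounts = {}    # login -> authoritative record
        self.adaptors = []    # callables invoked on every change

    def register_adaptor(self, adaptor):
        """An adaptor translates a record into one platform's registry."""
        self.adaptors.append(adaptor)

    def upsert(self, login, **attrs):
        """Create or update an account, then fan the change out."""
        record = self.accounts.setdefault(login, {"login": login})
        record.update(attrs)
        for adaptor in self.adaptors:
            adaptor(record)
```

The point of the split is that proprietary registries (mail server, Oracle, SAP, ...) each get a thin adaptor, while account data lives in exactly one place.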

13 Windows Migration Project
- The DESY Windows domain is still NT4
- We have started rolling out W2K and WXP clients in the DESYNT domain (mostly notebooks)
- Basic software support (netinstall) for WXP desktops in DESYNT will be available this year
- Domain servers are NT4, newer ones W2K; .NET servers look promising but are not in production use yet
- Where possible, we are skipping W2K clients

14 W2K Migration Status
- A new project team has been formed within IT
- We are finalizing the site-wide AD design
- New hardware has been / is being acquired
- Home-directory storage is under reconsideration
- We plan to have a working domain in Q1/2003
- Migration start is foreseen for Q2/2003
- DESYNT will stay alive for control systems

15 Mail Consolidation
- We are still in the sad state of supporting sendmail, Exchange, and PMDF
- We experience load and capacity problems on all three systems
- User "requirements" (real or not) have limited us in past years
- The next step will be mail-routing consolidation to get rid of PMDF
- We want to end up with one mail-router and one mail-server solution, both yet to be chosen
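Routing consolidation means one router decides, per recipient domain, which backend (sendmail, Exchange, or whatever survives) hosts the mailbox. A minimal sketch of that lookup, with invented domain and host names:

```python
def route(recipient, transport_map, default_server):
    """Toy mail-router lookup: map a recipient's domain to the
    backend server hosting the mailbox, so a single router can
    front several mail systems during consolidation. The domains
    and hostnames used with it are made up for illustration."""
    # Take everything after the last '@', case-insensitively.
    domain = recipient.rsplit("@", 1)[-1].lower()
    return transport_map.get(domain, default_server)
```

Retiring a backend like PMDF then becomes a transport-table edit instead of a client-visible change.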

16 In General ...
- ... we have been able to grow our IT staff with bright young colleagues (IT is back to its 1999 staffing level)
- ... we are starting to see synergy effects from treating Windows and Unix systems in one group (e.g. Samba, hardware standards)
- ... we have been able to start a few major efforts and projects
- ... we are striving for more coherence between Hamburg and Zeuthen
- ... much of our effort is still needed to clean up our legacy from the past (technologically and socially)
- ... I think we have a few very well-working and scalable solutions, e.g. in mass storage (dCache), Linux support, and printing

17 That's It
Thank you for your attention

