
1 Facilities and How They Are Used (ORNL/Probe)
Oak Ridge National Laboratory, U.S. Department of Energy
Randy Burris; Dan Million, facility administrator; Mike Gleicher, development contractor; Florence Fowler, network research and Linux support

2 Overview
- Background
- Facilities for data analysis
- Facilities for file transfers and Grid
- Facilities for file system integration and improved access to tertiary storage

3 Background: Goals
- Provide an environment for SDM ISIC activities
- Provide an environment for SciDAC applications:
  - Climate
  - Terascale Supernova Initiative
  - High Energy Nuclear Physics
- Provide an environment appropriate for production deployment
- Support and perform R&D on improved tertiary storage access

4 Background: Supplying the environments
- Predict equipment needs, assuming:
  - Large memory
  - Large storage capacity (disk and tape)
  - Good bandwidth (network and storage)
  - Moderate CPU power
  - AIX or Red Hat Linux 7.2 operating systems
- Provide basic software (compilers)
- Respond to subsequent requests

5 Facilities for Data Analysis
- AIX RS/6000 H70: 4 x 340 MHz RS64-II processors, 4 GB memory
- AIX RS/6000 p630: 4 x 1 GHz Power4 processors, 4 GB memory
- AIX S80: 6 x 467 MHz RS64-II processors, 2 GB memory
- Linux node: 4 x 1.4 GHz Xeon processors, 8 GB memory, 360 GB internal disk, 180 GB FibreChannel
- Alice cluster (OSCAR; PVFS): dual P-III nodes, 240 GB each
- Argonne cluster (PVFS): Athlon nodes, 480 GB each
- Disk storage: two 1 TB FibreChannel arrays, 360 GB T3 FibreChannel, 200 GB SCSI RAID, 180 GB FibreChannel (switched)
- Probe HPSS and production HPSS
- External networks: ESnet OC12 and OC192

6 Data Analysis Software Environment (AIX and Linux)
- R (data analysis)
- GGobi (data visualization)
- Java
- Compilers: GNU Fortran and C; AIX Fortran and C

7 Facilities for file transfers and Grid
- IBM 44P: 1 processor, 512 MB, AIX
- IBM B80: 2 processors, 1 GB, AIX
- Sun E250: 512 MB, Solaris (Grid node)
- Sun E450: 512 MB, Solaris (Grid node)
- Linux nodes: 2 processors, 2 GB (unassigned or floating)
- Disk storage: 1 TB FibreChannel, 300 GB SCSI, 180 GB FibreChannel
- External networks: ESnet OC12 and OC192
- Software: Globus 2.0 on AIX; Globus 2.0 on Solaris; HRM research on Solaris

8 HRM Research Environment
- Sleepy (Sun E250): pftp, hsi, GridFTP; Globus 2.0
- Two 180 GB Sun FibreChannel RAIDs (two-stripe)
- STK library: disk cache and STK tape drives, for both Probe and production
- Connected storage systems: NERSC production HPSS, ORNL Probe HPSS, ORNL production HPSS

9 HRM Research Environment, soon
- Sleepy (Sun E250): pftp, hsi, GridFTP; Globus 2.0
- 1 TB FibreChannel RAID holding selected STAR data
- STK library: disk cache, 3 TB tape storage (Probe and production)
- Connected storage systems: NERSC production HPSS, Brookhaven production HPSS, ORNL Probe and production HPSS

10 Improving tertiary access
- Improve end-user access (hsi)
- Investigate linking PVFS and HPSS
- Study HPSS I/O involving Unix files
- Investigate use of HPSS Linux movers
- Monitor (and evaluate when available) HPSS I/O redirection
- Coordinate with a proposed project:
  - Shared very large disk storage
  - Shared file system
  - HPSS integrated with the file system and sharing the disk capacity

11 HSI improvements
- Added a Grid authentication service
- Implemented partial file transfers
- Added a scheduling mechanism:
  - Find all files to be sent
  - Sort by tape, and by location on tape
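The scheduling mechanism above can be sketched as follows. This is a minimal illustration, not hsi's actual implementation: the `Request` fields, cartridge IDs, and file paths are hypothetical stand-ins for the tape metadata HPSS would supply.

```python
# Sketch of tape-ordered retrieval scheduling: gather all requested
# files, then sort by cartridge and by position on the cartridge, so
# each tape is mounted once and read in a single forward pass instead
# of seeking back and forth.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    path: str      # file to retrieve (illustrative path)
    tape: str      # cartridge identifier (hypothetical)
    offset: int    # position of the file on that cartridge

def schedule(requests):
    """Return requests ordered to minimize tape mounts and seeks."""
    return sorted(requests, key=lambda r: (r.tape, r.offset))

reqs = [
    Request("/star/run7.dat", tape="B02", offset=500),
    Request("/star/run1.dat", tape="A01", offset=900),
    Request("/star/run3.dat", tape="A01", offset=100),
]
for r in schedule(reqs):
    print(r.tape, r.offset, r.path)
```

Both files on cartridge A01 come out together, in offset order, before the single file on B02, which is the property the slide's "sort by tape and location on tape" step is after.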

12 File system integration
- Alice cluster (OSCAR; PVFS): dual P-III nodes, 240 GB each
- "Argonne" cluster (PVFS): dual Athlon nodes, 480 GB each
- Linux node: 2 processors, 2 GB
- 1 TB FibreChannel
- Probe HPSS

13 HPSS, Unix files and Linux
- Alice cluster (OSCAR; PVFS): dual P-III nodes, 240 GB each
- Argonne cluster (PVFS): dual Athlon nodes, 480 GB each
- Linux node: 2 processors, 2 GB
- 1 TB FibreChannel
- Probe HPSS
- RAIDzone: 2 processors, 1 GB, 2 TB RAID; XFS on Linux
- RAIDzone is used to study:
  - HPSS I/O involving Unix files
  - The HPSS Linux mover
  - NFS with an HPSS archive

14 Since 10 PM yesterday...
- To support:
  - SciDAC SDM and SciDAC applications
  - Research (I/O to/from the Cray X1)
  - Production
- New equipment on order; we hope to receive it this month:
  - Two more Dell 2650 Linux servers: 2.4 GHz, 2 GB, five 73 GB disks
  - Four more 1 TB FibreChannel RAID arrays
  - Two more IBM servers
  - Tape cartridges sufficient for 30 TB using existing tape drives

15 Questions and answers
- What goals/accomplishments by February 2003?
  - All equipment installed and operational
  - HPSS/RAIDzone Linux mover testing complete
  - Link between PVFS and HPSS (Linux mover and/or hsi) tested
  - STAR data residing in ORNL HPSS (Probe or production)
  - Some data passed over the OC192 link
- Why are the goals important (and to whom)?
  - Data analysis staff need a big-resource node
  - Visualization activities in support of TSI need disk cache and flexible computational resources
  - HPSS I/O of Unix files is central to many HPSS plans
  - Users of PVFS should have an archive capability
  - HRM needs three data sites for thorough testing of selection algorithms
  - OC192 bandwidth can satisfy a lot of bulk WAN transfer needs

16 Questions and answers, continued
- Which scientific domains? HENP and TSI, mostly
- Why is the work significant? Who will use it?
  - Visualization of massive data is necessary for scientific research; this facility targets storage and movement of massive data
  - Application scientists and ISIC researchers
- Are others doing the same work? No.
- Status at the end of three years? Achievable?
  - HPSS and file systems will be better integrated
  - Access to HPSS will be faster and easier
  - WAN throughput will be greatly improved
- Will unsolved problems remain? Proposals?
  - There is never enough bandwidth or effective throughput
  - File systems and equipment are evolving rapidly; keeping up is tough
  - Distributed operations (data analysis, visualization) will remain challenges

17 Discussion?

