

1 A year & a summer of XRootd@UW, June 21 2012 – August 31 2013

2 Intro
The plan: use UW data to estimate operational parameters for the XRootd caching proxy.
– UW was chosen because it uses XRootd for internal access, too … so we have all the monitoring data.
For cross-checks, we also analyzed:
– remote access to UW;
– jobs running at UW and reading from elsewhere.
We also separate by /store/XXX/ top-level directory:
– user & group vs. all the rest (PhEDEx);
– but I have histograms for individual subdirectories, too.

3 Data
Filtered out monitoring access (Brian & Dan).
Things seemed a bit weird:
– the majority of accesses read a whole file;
– a pronounced peak in read rate at 10 MB/s;
– a pronounced peak in average read request size at 128 MB.
So I also cut out accesses with:
– duration < 100 s;
– bytes read < 1 MB.
These cuts are applied in the histograms that follow … and they didn't make the weirdness go away.
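The duration and bytes-read cuts above can be sketched as a simple record filter. This is a minimal illustration only; the field names (`duration_s`, `bytes_read`) are assumptions, not the actual schema of the CMS XRootd monitoring records.

```python
# Sketch of the record-level cuts described on this slide.
# Field names are hypothetical; real monitoring records differ.
def passes_cuts(rec, min_duration_s=100, min_bytes_read=1_000_000):
    """Keep an access record only if it is long and large enough."""
    return (rec["duration_s"] >= min_duration_s
            and rec["bytes_read"] >= min_bytes_read)

records = [
    {"duration_s": 50,  "bytes_read": 5_000_000},      # too short: dropped
    {"duration_s": 900, "bytes_read": 500_000},        # too few bytes: dropped
    {"duration_s": 900, "bytes_read": 2_000_000_000},  # kept
]
kept = [r for r in records if passes_cuts(r)]
print(len(kept))  # 1
```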

4 Number of file accesses
Total number of records: 107,586,853 (all CMS XRootd monitoring).

Direction  | weed out monitoring | tighter cuts: 100 s, 1 MB
UW -> UW   | 56,528,302          | 17,108,892
UW -> XXX  | 1,009,351           | 694,696
XXX -> UW  | 855,761             | 702,402

Wow, the tighter cuts brought UW -> UW down to a third! Somebody at UW is doing funny things.

5 Hourly traffic & its histogram
Just to give you an impression of scale: UW as a whole serves about 1 GB/s.
This is "cumulative" information, obtained by summing up individual transfers.
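The hourly summation described here can be sketched by bucketing per-transfer byte counts into hour-sized bins. The input format (epoch-second timestamps, `bytes_read` field) is an assumption for illustration.

```python
from collections import defaultdict

# Sketch: aggregate individual transfers into hourly traffic totals.
# Timestamps are assumed to be seconds since the epoch (hypothetical schema).
def hourly_traffic(transfers):
    buckets = defaultdict(int)
    for t in transfers:
        buckets[t["start_s"] // 3600] += t["bytes_read"]  # hour index -> bytes
    return dict(buckets)

transfers = [
    {"start_s": 100,  "bytes_read": 10},  # hour 0
    {"start_s": 200,  "bytes_read": 20},  # hour 0
    {"start_s": 4000, "bytes_read": 5},   # hour 1
]
print(hourly_traffic(transfers))  # {0: 30, 1: 5}
```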

6 Fraction of file read
Note the log scale: the bin around 1 is 100 times higher than its neighbours, and there are 200 bins from 0 to 2.
So about 50% of accesses read a file in full! I thought this was way lower …
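The binning on this slide (200 bins from 0 to 2; fractions above 1 mean some bytes were read more than once) can be reproduced with a plain NumPy histogram. The sample values here are made up for illustration.

```python
import numpy as np

# Sketch of the "fraction of file read" histogram: 200 bins over [0, 2],
# matching the slide. Fractions > 1 arise when bytes are re-read.
fractions = np.array([0.1, 0.5, 1.0, 1.0, 1.0, 1.3])  # made-up sample
counts, edges = np.histogram(fractions, bins=200, range=(0.0, 2.0))
print(counts.sum())  # 6 (all sample values fall inside [0, 2])
# Bin width is 2 / 200 = 0.01, so the "full read" spike sits in a narrow bin.
```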

7 Average read rate
This isn't so dramatic … but the highest peak is at 10 MB/s! What are those jobs? Skimming?
Funny thing: this peak is not there for the UW -> XXX sample (but is there for XXX -> UW), so it almost seems like a UW peculiarity.

8 Average request size
Again a well-pronounced peak (an order of magnitude) at 100–128 MB request size. I assume this is the XRootd maximum.
Do we really manage to make requests this big? This is a bit of a pain for a caching proxy …

9 Proto-conclusions & confusions
All three of these findings are bad news for both the caching proxy and less disk-full T2 operation.
100 Gbps networks are coming to the rescue, but this will not be a free lunch, judging by all the issues Alja and I see with proxy operation on a 1 Gbps node.
I'm a bit confused by the high per-job data rates compared to the average output of the whole of UW at 1 GB/s.

