1 TechFair ‘05 University of Texas @ Arlington November 16, 2005

2 What’s the Point? High Energy Particle Physics is a study of the smallest pieces of matter. It investigates (among other things) the nature of the universe immediately after the Big Bang. It also explores physics at temperatures not common for the past 15 billion years (or so). It’s a lot of fun.

3 Particle Colliders Provide HEP Data Colliders form two counter-circulating beams of subatomic particles. Particles are accelerated to nearly the speed of light. Beams are forced to intersect at detector facilities. Detectors inspect and record the outcome of particle collisions. The Tevatron is currently the most powerful collider: 2 TeV (2 trillion degrees) of collision energy between proton and anti-proton beams. The Large Hadron Collider (LHC) will become the most powerful in 2007: 14 TeV (14 trillion degrees) of collision energy between two proton beams. Figure: the LHC at CERN in Geneva, Switzerland, with the ATLAS site, Mont Blanc, and the two proton (P) beams indicated.

4 Detector experiments are large collaborations. DØ Collaboration: 650 scientists, 78 institutions, 18 countries. ATLAS Collaboration: 1700 scientists, 150 institutions, 34 countries.

5 Detector Construction A recorded collision is a snapshot from the sensors within the detector. Detectors are arranged in layers around the collision point: –Each layer is sensitive to a different physical process –Sensors are arranged spatially within each layer –Sensor outputs are electrical signals

6 DØ Detector (Tevatron): roughly 30' × 50' in size, weighs 5,000 tons. Can inspect 3,000,000 collisions/second. Will record 50 collisions/second. Records approximately 10,000,000 bytes/second. Will record 4×10^15 bytes (4 petabytes) of data in the current run. ATLAS Detector (LHC): weighs 10,000 tons. Can inspect 1,000,000,000 collisions/second. Will record 100 collisions/second. Records approximately 300,000,000 bytes/second. Will record 1.5×10^15 bytes (1,500,000,000,000,000 bytes, i.e. 1.5 petabytes) each year.
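As a quick sanity check on the rates quoted above, the sketch below (plain Python; all inputs taken from this slide) works out how much live data-taking time per year the 1.5 PB ATLAS figure implies, and the average recorded event size it corresponds to.

```python
# Back-of-the-envelope check of the ATLAS data-volume figures quoted above.
# All inputs come from the slide; the live time is derived, not assumed.

record_rate_hz = 100                # collisions recorded per second
bytes_per_second = 300_000_000      # ~300 MB/s written to storage
bytes_per_year = 1.5e15             # 1.5 PB recorded per year (quoted)

# Seconds of active data-taking needed per year to reach 1.5 PB:
live_seconds = bytes_per_year / bytes_per_second
live_days = live_seconds / 86_400

# Implied average size of one recorded collision on disk:
event_size_bytes = bytes_per_second / record_rate_hz

print(f"Live data-taking per year: {live_seconds:.1e} s (about {live_days:.0f} days)")
print(f"Implied average event size: {event_size_bytes / 1e6:.1f} MB")
```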

7 How are computers used in HEP? Computers inspect collisions and store interesting raw data to tape; triggers control the filtering. Raw data is converted to physics data through the reconstruction process, which converts electronic signals into physics objects. Analysis is performed on the reconstructed data: searching for new phenomena, and measuring and verifying existing phenomena. Monte-Carlo simulations are performed to generate pseudo-raw data (MC data), which serves as a guide for analysis.
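The slide above describes a processing chain: trigger filtering, reconstruction, and analysis, with Monte-Carlo events fed through the same path as real data. The Python sketch below illustrates that flow; the function names, event fields, and thresholds are all hypothetical and are not the experiments' actual software.

```python
import random

# Hypothetical, simplified model of the HEP data flow described above:
# trigger -> reconstruction -> analysis, with Monte-Carlo (MC) events
# passing through the same chain as real detector data.

def generate_mc_event():
    """Monte-Carlo 'pseudo-raw' event: simulated electronic signals."""
    return {"signals": [random.gauss(0, 1) for _ in range(8)]}

def trigger(raw_event, threshold=2.0):
    """Trigger: keep only events with at least one large signal."""
    return any(abs(s) > threshold for s in raw_event["signals"])

def reconstruct(raw_event):
    """Reconstruction: turn electronic signals into 'physics objects'."""
    return {"objects": [{"energy": abs(s) * 10.0}        # arbitrary scale
                        for s in raw_event["signals"] if abs(s) > 0.5]}

def analyze(physics_event):
    """Analysis: measure a quantity on the reconstructed objects."""
    return sum(obj["energy"] for obj in physics_event["objects"])

# Push a small batch of MC events through the full chain.
kept = [reconstruct(e)
        for e in (generate_mc_event() for _ in range(10_000))
        if trigger(e)]
print(f"{len(kept)} of 10000 MC events passed the trigger")
print(f"Mean analyzed 'energy': {sum(map(analyze, kept)) / len(kept):.1f}")
```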

8 What is a Computing Grid? Grid: geographically distributed computing resources configured for coordinated use. Physical resources and networks provide the raw capability; "middleware" software ties it together.

9 How HEP Uses the Grid HEP experiments are getting larger –Much more data –Larger collaborations in terms of personnel and countries –Problematic to manage all computing needs at a central facility –Solution: distributed access to data and processors Grids provide access to large amounts of computing resources –Experiment-funded resources (Tier1, Tier2 facilities) –Underutilized resources at other experiments' facilities –University and possibly National Laboratory facilities Grid middleware is key to using these resources –Provides uniform methods for data movement, job execution, and monitoring (see the sketch below) –Provides single sign-on for access to resources
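As an illustration of what "uniform methods for data movement, job execution, and monitoring" means in practice, here is a toy middleware layer with one submit call, one transfer call, and one status call that behave the same way whichever site ends up running the job. The class and method names are invented for this sketch and are not taken from SAMGrid, Grid3, or OSG tooling.

```python
from dataclasses import dataclass, field
from itertools import count

# Toy model of grid middleware: a single uniform interface over many sites.
# All names (GridSite, Middleware, submit_job, ...) are hypothetical.

@dataclass
class GridSite:
    name: str
    free_cpus: int

@dataclass
class Middleware:
    sites: list
    _ids: count = field(default_factory=count)
    _jobs: dict = field(default_factory=dict)

    def submit_job(self, executable: str, input_files: list) -> int:
        """User submits once (single sign-on); middleware picks a site."""
        site = max(self.sites, key=lambda s: s.free_cpus)  # naive scheduler
        site.free_cpus -= 1
        job_id = next(self._ids)
        self._jobs[job_id] = {"site": site.name, "exe": executable,
                              "inputs": input_files, "state": "queued"}
        return job_id

    def transfer(self, file_name: str, dest_site: str) -> None:
        """Uniform data-movement call, independent of the storage backend."""
        print(f"moving {file_name} -> {dest_site}")

    def status(self, job_id: int) -> str:
        """Uniform monitoring call."""
        return self._jobs[job_id]["state"]

mw = Middleware(sites=[GridSite("UTA-DPCC", 80), GridSite("OU", 40)])
job = mw.submit_job("reco.sh", ["run1234.raw"])
mw.transfer("run1234.raw", "UTA-DPCC")
print(f"job {job} at UTA-DPCC: {mw.status(job)}")
```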

10

11 UTA HEP’s Grid Activities DØ: –SAM Grid –Remote MC Production –Student Internships at Fermilab –Data Reprocessing –Offsite Analysis ATLAS: –Prototype Tier2 Facility using DPCC –MC Production –Service Challenges –Southwestern Tier2 Facility with OU and Langston Software Development: –ATLAS MC Production Systems (Grats/Windmill/PANDA) –ATLAS Distributed Analysis System (DIAL) –DØ MC Production System (McFARM) –Monitoring Systems (SAMGrid, McPerm, McGraph) Founding member of the Distributed Organization for Scientific and Academic Research (DOSAR)

12 UTA HEP’s Grid Resources Distributed and Parallel Computing Cluster (DPCC): an 80-node Linux compute cluster –Dual Pentium Xeon nodes at 2.4 or 2.6 GHz –2 GB RAM per node –45 TB of network storage 3 IBM P5-570 machines –8-way 1.5 GHz P5 processors –32 GB RAM –5 TB of SAN storage Funded through an NSF-MRI grant between CSE / HEP / UT-SW; commissioned 9/03. Resource provider for the DØ and ATLAS experiments –SAMGrid –Grid3 project –Open Science Grid (OSG)
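Adding up the specifications above gives a feel for the DPCC's aggregate capacity, which is useful later when the proposed SWT2 is described as 50-100 times this scale. The totals below are simple arithmetic on the quoted numbers, assuming two CPUs per dual-Xeon node and eight per P5-570.

```python
# Aggregate DPCC capacity from the numbers quoted above.
# Assumes 2 CPUs per dual-Xeon node and 8 per IBM P5-570 machine.

xeon_cpus = 80 * 2            # 80 dual Pentium Xeon nodes
p5_cpus = 3 * 8               # 3 eight-way IBM P5-570 machines
total_cpus = xeon_cpus + p5_cpus

total_storage_tb = 45 + 5     # network storage + SAN storage
total_ram_gb = 80 * 2 + 3 * 32

print(f"DPCC totals: {total_cpus} CPUs, {total_ram_gb} GB RAM, "
      f"{total_storage_tb} TB storage")
```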

13 UTA DØ Grid Processing MC Production, 8/03-9/04 (see figure). Additional DØ activities: P14 FixTMB processing (50 million events, 1.5 TB processed); P17 Reprocessing (30 million events, 6 TB reprocessed). Figure: Cumulative number of Monte-Carlo events produced since August 2003 for the DØ collaboration, by remote site.

14 UTA ATLAS Grid Processing Figure 1: Percentage contribution toward US-ATLAS DC2 production, by computing site. Figure 2: Percentage contribution toward US-ATLAS Rome production, by computing site.

15 UTA has the first and only US DØ RAC (Regional Analysis Center). DOSAR formed around UTA and is now a VO in the Open Science Grid. Figure: map of DOSAR member sites: Mexico/Brazil, OU/LU, UAZ, Rice, LTU, UTA, KU, KSU, Ole Miss.

16 PANDA Panda is the next-generation data production system for US-ATLAS –Schedules computational jobs to participating grid sites –Manages data placement and delivery of results –Performs bookkeeping to track the status of requests (a toy sketch of this workflow follows below) UTA responsibilities: –Project Lead –Database schema design –Testing and Integration –Packaging
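To make the three Panda bullets concrete, here is a toy production system that assigns jobs to sites, records where output data lands, and keeps per-request bookkeeping. It is only a sketch of the general idea; none of the class names, job states, or site slot counts below are taken from the real Panda code.

```python
from collections import defaultdict

# Toy production system in the spirit of the three bullets above:
# scheduling, data placement, and bookkeeping. All names are invented.

class ToyProductionSystem:
    def __init__(self, sites):
        self.sites = dict(sites)            # site name -> free job slots
        self.requests = defaultdict(list)   # request id -> list of jobs
        self.replicas = defaultdict(list)   # dataset -> sites holding a copy

    def submit_request(self, request_id, n_jobs, output_dataset):
        """Split a production request into jobs and schedule them to sites."""
        for i in range(n_jobs):
            site = max(self.sites, key=self.sites.get)   # most free slots
            self.sites[site] -= 1
            self.requests[request_id].append(
                {"job": i, "site": site, "state": "running",
                 "output": output_dataset})

    def job_finished(self, request_id, job_index):
        """Bookkeeping and data placement when a job completes."""
        job = self.requests[request_id][job_index]
        job["state"] = "done"
        self.sites[job["site"]] += 1
        self.replicas[job["output"]].append(job["site"])

    def request_status(self, request_id):
        """Track the status of a request across all of its jobs."""
        jobs = self.requests[request_id]
        done = sum(j["state"] == "done" for j in jobs)
        return f"{done}/{len(jobs)} jobs done"

prod = ToyProductionSystem({"UTA": 4, "OU": 2, "BNL": 8})
prod.submit_request("mc-task-001", n_jobs=5, output_dataset="mc.sample.A")
prod.job_finished("mc-task-001", 0)
print(prod.request_status("mc-task-001"), "- replicas:",
      prod.replicas["mc.sample.A"])
```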

17 Other Grid Efforts UTA is the founding member of the Distributed Organization for Scientific and Academic Research (DOSAR) –DOSAR is researching workflows and methodologies for performing analysis of HEP data in a distributed manner –DOSAR is a recognized Virtual Organization within the Open Science Grid initiative –http://www-hep.uta.edu/dosar UTA is collaborating on the DIAL (Distributed Interactive Analysis for Large datasets) project at BNL. UTA has joined THEGrid (Texas High Energy Grid) to share resources among Texas institutions and further HEP work. –http://thegrid.dpcc.uta.edu/thegrid/ UTA HEP has sponsored CSE student internships at Fermilab with exposure to grid software development.

18 UTA Monitoring Applications Developed, implemented and improved by UTA students; commissioned and being deployed. Figures: "Anticipated CPU Occupation" (% of total available CPUs vs. time from present, in hours) and "Jobs in Distribute Queue" (number of jobs).

19 ATLAS Southwestern Tier2 Facility US-ATLAS uses Brookhaven National Laboratory (BNL) as its national Tier1 center. Three Tier2 centers were funded in 2005: –Southwestern Tier2 (UTA, University of Oklahoma, and Langston University) –Northeastern Tier2 (Boston University and Harvard) –Midwestern Tier2 (University of Chicago and Indiana University) Tier2 funding is expected for each center for the duration of the ATLAS experiment (20+ years). UTA's Kaushik De is Principal Investigator for the Tier2 center. Initial hardware is expected to be in place and operational by December 2005 at UTA and OU.

20 ATLAS Southwestern Tier2 Facility UTA's portion of SWT2 is expected to be on the order of 50-100 times the scale of DPCC –5,000-10,000 processors –Thousands of TB (a few petabytes) of storage Challenges for the SWT2 center –Network bandwidth needs: recent predictions show a need for 1 Gbps bandwidth between the Tier1 and Tier2 by 2008 –Coordination of distributed resources: providing a unified view of SWT2 resources to outside collaborators; management of distributed resources between OU and UTA –Management of resources on campus: SWT2 resources are likely to be split between UTACC and the new Physics Building
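To put the 1 Gbps figure in perspective, the short calculation below estimates how long moving a single terabyte would take and how much data a 1 Gbps link could carry in a year; the assumption of full, continuous utilization is an idealization for illustration.

```python
# Rough feel for the 1 Gbps Tier1 <-> Tier2 bandwidth requirement above.
# Assumes the link is fully and continuously utilized (an idealization).

link_bps = 1e9                        # 1 gigabit per second
seconds_per_year = 365 * 24 * 3600

hours_per_terabyte = (1e12 * 8) / link_bps / 3600
petabytes_per_year = link_bps * seconds_per_year / 8 / 1e15

print(f"Time to move 1 TB at 1 Gbps: {hours_per_terabyte:.1f} hours")
print(f"Maximum volume per year at 1 Gbps: {petabytes_per_year:.1f} PB")
```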

