
1 Vladimir Litvin, Harvey Newman, Sergey Schevchenko (Caltech CMS)
Scott Koranda, Bruce Loftis, John Towns (NCSA)
Miron Livny, Peter Couvares, Todd Tannenbaum, Jamie Frey (Wisconsin Condor)
Use of a Grid Prototype Infrastructure for a QCD Background Study of the H → γγ Process on Alliance Resources

2 CMS Physics
The CMS detector at the LHC will probe fundamental forces in our Universe and search for the as-yet-undetected Higgs boson. The detector is expected to come online in 2007.

3 CMS Physics

4 Leveraging Alliance Grid Resources
The Caltech CMS group is using Alliance Grid resources today for detector simulation and data processing prototyping. Even during this simulation and prototyping phase, the computational and data challenges are substantial.

5 Goal: simulate the QCD background
The QCD jet-jet background cross section is huge (~10^10 pb). Previous studies of the QCD jet-jet background estimated the rate R_jet at which a jet might be misidentified as a photon and, due to limited CPU power, simply squared it (R_jet^2) to obtain the diphoton fake rate. Hence correlations within an event were not taken into account in previous studies. Previous simulations were also done with simplified geometry, and non-Gaussian tails in the resolution were not adequately simulated. Our goal is to perform a full simulation of a relatively large QCD sample, measure the rate of diphoton misidentification, and compare it with other types of background.
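A minimal numeric sketch of the point above: squaring the single-jet fake rate assumes the two fakes are independent, which in-event correlations can violate. All numbers here are hypothetical, chosen only to illustrate the arithmetic:

# Hypothetical numbers: illustrate how the R_jet**2 shortcut behaves
# when fakes within one event are correlated.
R_jet = 1.0e-3                 # hypothetical single-jet -> photon fake rate
uncorrelated = R_jet ** 2      # the approximation used in earlier studies
correlation_factor = 2.0       # hypothetical in-event correlation enhancement
full_simulation = correlation_factor * uncorrelated
print(f"squared-rate estimate: {uncorrelated:.1e}")
print(f"with correlations:     {full_simulation:.1e}")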

6 Generation of the QCD background
The QCD jet cross section depends strongly on the p_T of the parton in the hard interaction, and it is huge, so we need a reasonable preselection at the generator level before passing events through the full detector simulation. An optimal choice of the p_T cut is needed. Our choice is p_T = 35 GeV. The p_T = 35 GeV cut is safe: at the preselection level we do not lose a significant fraction of the events that could fake the Higgs signal.
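In PYTHIA 6 such a floor on the hard-interaction p_T is conventionally imposed through the CKIN(3) parameter (minimum p-hat_T of the hard 2 → 2 process); the Python sketch below shows the equivalent event filter, with a hypothetical event record:

# Sketch of the generator-level preselection: keep only events whose
# hard-interaction parton p_T exceeds 35 GeV. The event attribute
# name is hypothetical; in PYTHIA 6 itself the cut would normally be
# set at generation time, e.g. CKIN(3) = 35.0.
PT_MIN = 35.0  # GeV

def passes_pt_preselection(event):
    # event.hard_pt: p_T of the parton in the hard interaction (GeV)
    return event.hard_pt > PT_MIN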

7 Generator-level cuts
QCD background. Standard CMS cuts: E_t1 > 40 GeV, E_t2 > 25 GeV, |η_1,2| < 2.5, plus at least one pair of any two neutral particles (π⁰, η, e, γ, η′, ω, K⁰_S) with
–E_t1 > 37.5 GeV
–E_t2 > 22.5 GeV
–|η_1,2| < 2.5
–m_inv in 80-160 GeV
Rejection factor at generator level: ~3000.
Photon bremsstrahlung background. Standard CMS cuts: E_t1 > 40 GeV, E_t2 > 25 GeV, |η_1,2| < 2.5, plus at least one neutral particle (π⁰, η, e, γ, η′, ω, K⁰_S) with
–E_t > 37.5 GeV
–|η| < 2.5
Rejection factor at generator level: ~6.
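A minimal Python sketch of the pair cut above, assuming each neutral-particle candidate carries E_t, η, and four-momentum components (attribute names are hypothetical):

from itertools import combinations
import math

def invariant_mass(p1, p2):
    # Invariant mass from (E, px, py, pz) four-vectors, in GeV.
    e = p1.E + p2.E
    px, py, pz = p1.px + p2.px, p1.py + p2.py, p1.pz + p2.pz
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

def passes_pair_cuts(candidates):
    # candidates: neutral particles (pi0, eta, e, gamma, ...) with
    # .Et, .eta and four-vector components (hypothetical field names).
    # Returns True if any pair satisfies the preselection.
    for p1, p2 in combinations(candidates, 2):
        lead, sub = (p1, p2) if p1.Et >= p2.Et else (p2, p1)
        if (lead.Et > 37.5 and sub.Et > 22.5
                and abs(lead.eta) < 2.5 and abs(sub.eta) < 2.5
                and 80.0 < invariant_mass(lead, sub) < 160.0):
            return True
    return False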

8 Challenges of a CMS Run
A CMS run naturally divides into two phases.
Monte Carlo detector response simulation:
–100s of jobs per run
–each generating ~1 GB
–all data passed to the next phase and archived
Physics reconstruction from the simulated data:
–100s of jobs per run
–jobs coupled via Objectivity database access
–~100 GB of data archived
Specific challenges:
–each run generates ~100 GB of data to be moved and archived
–many, many runs necessary
–simulation and reconstruction jobs run at different sites
–large human effort in starting and monitoring jobs and moving data

9 Tools
Generation: PYTHIA 6.152 (CTEQ4L structure functions), http://www.thep.lu.se/~torbjorn/Pythia.html
Full detector simulation: CMSIM 121 (includes the full silicon version of the tracker), http://cmsdoc.cern.ch/cmsim/cmsim.html
Reconstruction: ORCA 5.2.0 with pileup at L = 2×10^33 cm^-2 s^-1 (~30 pileup events per signal event), http://cmsdoc.cern.ch/orca

10 Analysis Chain
[Figure: the full analysis chain]

11 Meeting the Challenge with Globus and Condor
Globus:
–middleware deployed across the entire Alliance Grid
–remote access to computational resources
–dependable, robust, automated data transfer
Condor:
–strong fault tolerance, including checkpointing and migration
–job scheduling across multiple resources
–layered over Globus as a "personal batch system" for the Grid

12 CMS Run on the Alliance Grid
Caltech CMS staff prepare the input files on a local workstation and push "one button" to launch the master Condor job. The input files are transferred by the master Condor job to the Wisconsin Condor pool (~700 CPUs) using Globus GASS file transfer.
[Diagram: Caltech workstation → master Condor job running at Caltech → input files via Globus GASS → WI Condor pool]
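A hedged sketch of what the "one button" could be in practice: a launcher script that hands the master submit description to condor_submit (a real Condor command; the file name here is hypothetical):

#!/usr/bin/env python
# Hypothetical "one button" launcher: submit the master Condor job
# that stages the input files and drives the whole run.
import subprocess, sys

result = subprocess.run(["condor_submit", "cms_master_run.sub"],
                        capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit(result.stderr)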

13 CMS Run on the Alliance Grid
The master Condor job at Caltech launches a secondary Condor job on the Wisconsin pool. The secondary Condor job launches 100 Monte Carlo jobs on the Wisconsin pool:
–each runs 12-24 hours
–each generates ~1 GB of data
–Condor handles checkpointing and migration
–no staff intervention
[Diagram: master Condor job running at Caltech → secondary Condor job on WI pool → 100 Monte Carlo jobs on the Wisconsin Condor pool]
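One conventional Condor way to fan the work out into 100 jobs is a single submit description queued 100 times, with $(Process) numbering the instances; a sketch that writes such a file (executable and file names are hypothetical, not taken from the presentation):

# Write a hypothetical submit description that queues 100 instances;
# $(Process) runs 0..99 and keeps each job's files separate. The
# standard universe is what gives Condor checkpointing and migration.
submit = """\
universe   = standard
executable = CMS/run_cmsim.sh
arguments  = $(Process)
output     = CMS/mc_$(Process).out
error      = CMS/mc_$(Process).err
log        = CMS/condor.log
queue 100
"""
with open("mc_jobs.sub", "w") as f:
    f.write(submit)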

14 CMS Run on the Alliance Grid
When each Monte Carlo job completes, its data are automatically transferred to UniTree at NCSA:
–each file is ~1 GB
–transferred using the Globus-enabled FTP client "gsiftp"
–NCSA UniTree runs a Globus-enabled FTP server
–authentication to the FTP server on the user's behalf using a digital certificate
[Diagram: 100 Monte Carlo jobs on the Wisconsin Condor pool → 100 data files transferred via gsiftp, ~1 GB each → NCSA UniTree with Globus-enabled FTP server]
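A sketch of one such transfer using globus-url-copy, the standard Globus GridFTP client; the host name and paths are hypothetical, and authentication is assumed to come from the user's proxy certificate:

# Hypothetical post-job transfer step: push one ~1 GB output file to
# the UniTree GridFTP server at NCSA. Host and paths are illustrative.
import subprocess

src = "file:///scratch/cms/hg_90_sim_632.fz"                  # hypothetical local file
dst = "gsiftp://unitree.ncsa.uiuc.edu/cms/hg_90_sim_632.fz"   # hypothetical remote URL
subprocess.run(["globus-url-copy", src, dst], check=True)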

15 CMS Run on the Alliance Grid
When all Monte Carlo jobs complete, the secondary Condor job reports back to the master Condor job at Caltech. The master then launches a job to stage the data from NCSA UniTree to the NCSA Linux cluster:
–the job is launched via the Globus jobmanager on the cluster
–data transferred using Globus-enabled FTP
–authentication on the user's behalf using a digital certificate
[Diagram: the secondary Condor job on the WI pool reports completion to the master at Caltech; the master starts a staging job via the Globus jobmanager on the NCSA Linux cluster, and gsiftp fetches the data from UniTree]
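A hedged sketch of the staging step: submit a fetch script to the cluster's Globus jobmanager with globusrun (a Globus Toolkit command); the contact string, script path, and RSL are hypothetical:

# Hypothetical staging job: run a fetch script on the NCSA Linux
# cluster via its Globus jobmanager. The script would loop over the
# run's files, calling globus-url-copy against the UniTree server.
import subprocess

resource = "cluster.ncsa.uiuc.edu/jobmanager"   # hypothetical contact string
rsl = "&(executable=/home/cms/stage_from_unitree.sh)(arguments=hg_90)"
subprocess.run(["globusrun", "-o", "-r", resource, rsl], check=True)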

16 CMS Run on the Alliance Grid
The master Condor job at Caltech launches the physics reconstruction jobs on the NCSA Linux cluster:
–jobs launched via the Globus jobmanager on the cluster
–the master Condor job continually monitors the jobs and logs progress locally at Caltech
–no user intervention required
–authentication on the user's behalf using a digital certificate
[Diagram: master Condor job running at Caltech starts reconstruction jobs via the Globus jobmanager on the NCSA Linux cluster]
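A hedged sketch of what the local monitoring could look like: periodically poll condor_q (a real Condor command) and append its output to a progress log at Caltech; the completion test and file names are illustrative only:

# Hypothetical monitoring loop: poll the Condor queue every five
# minutes and log progress locally. The "0 jobs" completion check is
# naive and purely illustrative.
import subprocess, time

while True:
    out = subprocess.run(["condor_q"], capture_output=True, text=True).stdout
    with open("cms_run_progress.log", "a") as log:
        log.write(out)
    if "0 jobs" in out:
        break
    time.sleep(300)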

17 CMS Run on the Alliance Grid
When the reconstruction jobs complete, the data are automatically archived to NCSA UniTree:
–data transferred using Globus-enabled FTP
After the data are transferred the run is complete, and the master Condor job at Caltech emails a notification to staff.
[Diagram: NCSA Linux cluster; data files transferred via gsiftp to UniTree for archiving]

18 Condor Details for Experts
Use Condor-G:
–Condor + Globus
–allows Condor to submit jobs to a remote host via a Globus jobmanager
–any Globus-enabled host is reachable (with authorization)
–Condor jobs run in the "globus" universe
–use familiar Condor classads for submitting jobs

universe = globus
globusscheduler = beak.cs.wisc.edu/jobmanager-condor-INTEL-LINUX
environment = CONDOR_UNIVERSE=scheduler
executable = CMS/condor_dagman_run
arguments = -f -t -l . -Lockfile cms.lock -Condorlog cms.log -Dag cms.dag -Rescue cms.rescue
input = CMS/hg_90.tar.gz
remote_initialdir = Prod2001
output = CMS/hg_90.out
error = CMS/hg_90.err
log = CMS/condor.log
notification = always
queue

19 Condor Details for Experts
Exploit Condor DAGMan:
–DAG = directed acyclic graph
–submission of Condor jobs based on dependencies
–job B runs only after job A completes, job D runs only after job C completes, job E only after A, B, C, and D complete, and so on
–includes both pre- and post-job script execution for data staging, cleanup, or the like

Job jobA_632 Prod2000/hg_90_gen_632.cdr
Job jobB_632 Prod2000/hg_90_sim_632.cdr
Script pre jobA_632 Prod2000/pre_632.csh
Script post jobB_632 Prod2000/post_632.csh
PARENT jobA_632 CHILD jobB_632
Job jobA_633 Prod2000/hg_90_gen_633.cdr
Job jobB_633 Prod2000/hg_90_sim_633.cdr
Script pre jobA_633 Prod2000/pre_633.csh
Script post jobB_633 Prod2000/post_633.csh
PARENT jobA_633 CHILD jobB_633
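Because every run number repeats the same five-line pattern, the DAG file lends itself to generation by script; a sketch under that assumption (the run range is hypothetical):

# Emit DAG nodes for a range of run numbers, following the
# gen -> sim pattern shown above.
runs = range(632, 732)   # hypothetical: 100 runs
with open("cms.dag", "w") as dag:
    for r in runs:
        dag.write(f"Job jobA_{r} Prod2000/hg_90_gen_{r}.cdr\n")
        dag.write(f"Job jobB_{r} Prod2000/hg_90_sim_{r}.cdr\n")
        dag.write(f"Script pre jobA_{r} Prod2000/pre_{r}.csh\n")
        dag.write(f"Script post jobB_{r} Prod2000/post_{r}.csh\n")
        dag.write(f"PARENT jobA_{r} CHILD jobB_{r}\n")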

20 Monte Carlo Samples Simulated and Reconstructed

21 CPU timing

22 All cuts except isolation are applied. Distributions are normalized to L_int = 40 pb^-1.

23 Isolation
Tracker isolation. Isolation cut: the number of tracks with p_T > 1.5 GeV in a ΔR = 0.30 cone around the photon candidate is zero. We are still optimizing the p_T threshold and cone sizes.
ECAL isolation. Sum the E_t energy in a cone around the photon candidate, using the E_t energies of ECAL clusters. Isolation cut: the sum of E_t energy in a ΔR = 0.30 cone around the photon candidate is less than 0.8 GeV.
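A minimal Python sketch of both isolation cuts, assuming track and cluster objects with eta, phi, and transverse-momentum attributes (names hypothetical); in practice the photon's own ECAL cluster would be excluded from the sum, which is omitted here for brevity:

import math

def delta_r(eta1, phi1, eta2, phi2):
    # Angular separation, with phi wrapped into [-pi, pi].
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(photon, tracks, ecal_clusters,
                cone=0.30, track_pt_min=1.5, ecal_et_max=0.8):
    # Tracker isolation: no tracks above threshold in the cone.
    n_tracks = sum(1 for t in tracks
                   if t.pt > track_pt_min
                   and delta_r(photon.eta, photon.phi, t.eta, t.phi) < cone)
    # ECAL isolation: summed cluster Et in the cone below the maximum.
    et_sum = sum(c.Et for c in ecal_clusters
                 if delta_r(photon.eta, photon.phi, c.eta, c.phi) < cone)
    return n_tracks == 0 and et_sum < ecal_et_max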

24 Background Cross Section

25 Conclusions
The goal of this study is to increase the efficiency of computer resources and to minimize human intervention during simulation and reconstruction:
–"proof of concept": it is possible to build a distributed system based on Globus and Condor (MOP is operational now)
–a lot of work lies ahead to make this system as automatic as possible
Important results have been obtained for the Higgs boson search in the two-photon decay mode:
–the main background is that with one prompt photon plus a bremsstrahlung photon or an isolated π⁰, which is ~50% of the total background; the QCD background is reduced to 15% of the total background
–more precise studies need much more CPU time

