Update: PDF re-weighting technique summary 25/1/08
Problem with the normal PDF uncertainty estimation:
- Need to generate 40 sets of data (one per error member of the CTEQ6.1M NLO PDF set) in addition to the central-value (i.e. best-fit) data
- Extremely resource-intensive (CPU runtime and disk space)
- Even with *many* events, the statistical uncertainty dominates over the PDF uncertainty…
…as can clearly be seen in these significance plots!
Solution: re-weighting
- Run lots of events using *only* the central-value CTEQ6.1M PDF
- Print out the GEN_AOD parton-level Pythia information
- Extract the flavours and p_z of the initial-state partons involved in the hard sub-process
- Store these data for each event
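The extraction-and-storage step above might look like the sketch below. It is a minimal illustration, not the actual analysis code: the per-beam energy, the event-tuple layout, and the leading-order relation x ≈ |p_z|/E_beam are all assumptions for the sake of a runnable example.

```python
import csv

E_BEAM = 7000.0  # assumed per-beam energy in GeV (hypothetical choice)

def momentum_fraction(pz, e_beam=E_BEAM):
    # At leading order, an initial-state parton carries x ~ |p_z| / E_beam
    return abs(pz) / e_beam

def store_events(events, path):
    # events: iterable of (flav1, pz1, flav2, pz2, q2) tuples, as extracted
    # from the generator's parton-level hard-process record (layout assumed)
    with open(path, "w", newline="") as f:
        wr = csv.writer(f)
        wr.writerow(["flav1", "x1", "flav2", "x2", "q2"])
        for f1, pz1, f2, pz2, q2 in events:
            wr.writerow([f1, momentum_fraction(pz1),
                         f2, momentum_fraction(pz2), q2])
```

Storing only (flavour, x, Q^2) per parton is enough for the re-weighting step, since that is all the PDF grid lookup needs.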
…continued:
- Use this information to look up the relevant parton densities in the CTEQ6.1M grid
- Then, for 1 ≤ i ≤ 40, calculate the weight w_i = f_i(x, Q^2)/f_0(x, Q^2) for the i-th error set
- Re-weight the central-value data by these weights to generate 40 sets of histograms
- Perform the rest of the dijet analysis as before
- Then have the world's most well-deserved cup of tea and a sit-down…
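The weight calculation above can be sketched as follows. Since there are two initial-state partons per event, the per-event weight is naturally the product of the f_i/f_0 ratio over both of them. The `pdf_value` function here is a toy stand-in for the real CTEQ6.1M grid lookup (which would go through a library such as LHAPDF); its functional form is invented purely so the sketch runs stand-alone.

```python
from dataclasses import dataclass

@dataclass
class PartonRecord:
    flavour: int  # PDG id of the initial-state parton
    x: float      # momentum fraction
    q2: float     # hard-process scale Q^2

def pdf_value(member, flavour, x, q2):
    # Toy stand-in for f_member(x, Q^2) from the CTEQ6.1M grid;
    # the real lookup would query the PDF library instead.
    return (1.0 + 0.01 * member) * x ** -0.5 * (1.0 - x) ** 3

def event_weight(member, partons):
    # w_i = product over initial-state partons of f_i(x, Q^2) / f_0(x, Q^2)
    w = 1.0
    for p in partons:
        w *= (pdf_value(member, p.flavour, p.x, p.q2)
              / pdf_value(0, p.flavour, p.x, p.q2))
    return w
```

Filling the same central-value histograms 40 times, once with each set of weights, then gives the 40 re-weighted histogram sets without re-generating any events.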