
1 S-Matrix and the Grid
Geoffrey Fox, Professor of Computer Science, Informatics, and Physics
Pervasive Technology Laboratories, Indiana University, Bloomington IN 47401
December 12, 2003
gcf@indiana.edu
http://www.infomall.org

2 S-Matrix and PWA
We need an amplitude analysis to find the most "interesting" resonances.
If this makes sense, we are effectively parameterizing the photon-Reggeon amplitude with a resonance at the "top" vertex in the full (123 in the diagram) or a partial (12, 23, 31) channel.
This is complicated because it is off-diagonal, involves one "fake" particle, and often more than two final-state particles.
It requires a lot of approximations whose effects can be estimated with S-Matrix theory: analyticity, unitarity, crossing, Regge theory, spin formalism, duality, Finite Energy Sum Rules.
[Diagram: Reggeon exchange for production, with final-state particles 1, 2, 3 at the top vertex, an exchange down to the target, and Regge behaviour in the top vertex.]
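For orientation only (this is a generic textbook form, not something stated on the slide), the amplitude analysis referred to here is a partial-wave fit in which the measured angular distribution is written as a coherent sum of partial waves with production amplitudes to be determined:

```latex
% Schematic partial-wave intensity, shown only to fix notation.
% V_a are the production amplitudes to be fitted, psi_a(Omega) the known
% decay angular functions, and k runs over incoherently summed classes
% (for example spin-flip vs non-flip, or photon polarization states).
\begin{equation}
  I(\Omega) \;=\; \sum_{k} \Big| \sum_{a} V^{(k)}_{a}\, \psi_{a}(\Omega) \Big|^{2}
\end{equation}
```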

3 Some Lessons from the Past I
All the confusing effects exist, and there is no fundamental (correct) way to remove them. So one should:
- Minimize the effect of the hard (insoluble) problems, such as "particles from the wrong vertex" and "unestimatable exchange effects" sensitive to the slopes of unclear Regge trajectories, absorption, etc.
- Carefully identify where effects are "additive" and where they overlap confusingly.
- Note that many of the effects are intrinsically MORE important in the multiparticle case than in the relatively well studied π N → π N.
- Try to estimate the impact of the uncertainties from each effect on the results.
It would be very helpful to get systematic, very high statistics studies of relatively clean cases where the spectroscopy may be less interesting but one can examine the uncertainties. Possibilities are the A1, A2, A3, B1 produced peripherally, and even π N → π π N; K or π beams are good.

4 S-Matrix Approach
S-Matrix ideas that work reasonably well include:
- Regge theory for the production process
- Two-component duality, adding resonances (dual to Regge exchange) to background (dual to the Pomeron); this can help identify whether a resonance is a classic quark-antiquark state or exotic
- Use of Regge exchange at the top vertex to estimate the high partial waves in the amplitude analysis
- Finite Energy Sum Rules for the top vertex, as constraints on the low-mass amplitudes and the most quantitative way of linking high and low masses (a schematic form is shown below)
- Ignore Regge cuts in production
- Unitarity effects are not included directly, to avoid double counting with duality
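As a reminder of the kind of constraint involved (a standard textbook form, added here only for orientation and not taken from the slides), a Finite Energy Sum Rule relates an integral over the low-energy imaginary part of an amplitude to the Regge parameters that describe it at high energy:

```latex
% Schematic Finite Energy Sum Rule (standard form, for orientation only).
% The n-th moment of Im A over 0 < nu < N is matched to the Regge poles
% alpha_i(t), with residues beta_i(t), that dominate the amplitude above N.
\begin{equation}
  \int_{0}^{N} d\nu \,\nu^{n}\, \mathrm{Im}\,A(\nu,t)
  \;=\; \sum_{i} \beta_{i}(t)\, \frac{N^{\alpha_{i}(t)+n+1}}{\alpha_{i}(t)+n+1}
\end{equation}
```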

5 Investigate Uncertainties
There are several possible sources of error:
- Errors in the quasi two-body and limited-number-of-amplitudes approximation
- Unitarity (final state interactions)
- Errors in the two-component duality picture
- Exotic particles are produced and are just different
- Photon beams, π exchange, or some other "classic effect" not present in the original πN analyses behaves unexpectedly
- Failure of the quasi two-body approximation
- Regge cuts cannot be ignored
- Background from other channels
Develop tests for these both in "easy" cases (such as "old" meson beam data) and in photon beam data at Jefferson Laboratory. Investigate the impact of all these effects on any interesting result from the PWA.

6 Grid Computing: Making the Global Infrastructure a Reality
Note the book with Fran Berman and Anthony J. G. Hey, ISBN 0-470-85319-0, hardcover, 1080 pages, published March 2003. http://www.grid2002.org
I had more fun in days gone by; no more do I write "Skeletons in the Regge Cupboard" or "The Importance of Being an Amplitude".

7 Some Further Links
- A talk on Grid and e-Science was webcast in an Oracle technology series: http://webevents.broadcast.com/techtarget/Oracle/100303/index.asp?loc=10
- See also the "Gap Analysis" survey of Grid technology: http://grids.ucs.indiana.edu/ptliupages/publications/GapAnalysis30June03v2.pdf
- This presentation is at http://grids.ucs.indiana.edu/ptliupages/presentations
- Next semester: a course on "e-Science and the Grid", given by Access Grid
- The write-up for the May conference describes the proposed physics strategy: http://grids.ucs.indiana.edu/ptliupages/publications/gluonic_gcf.pdf and http://grids.ucs.indiana.edu/ptliupages/presentations/pwamay03.ppt

8 e-Business, e-Science and the Grid
e-Business captures an emerging view of corporations as dynamic virtual organizations linking employees, customers and stakeholders across the world; the growing use of outsourcing is one example.
e-Science is the similar vision for scientific research, with international participation in large accelerators, satellites or distributed gene analyses.
The Grid integrates the best of the Web, traditional enterprise software, high performance computing and peer-to-peer systems to provide the information technology infrastructure for e-moreorlessanything.
A deluge of data of unprecedented and inevitable size must be managed and understood; people, computers, data and instruments must be linked; on-demand assignment of experts, computers, networks and storage resources must be supported.

9 What is a High Performance Computer?
We might wish to consider three classes of multi-node computers:
1) Classic MPP, with microsecond latency and scalable inter-node bandwidth (t_comm/t_calc ~ 10 or so)
2) Classic cluster, which can vary from configurations like 1) to 3) but typically has millisecond latency and modest bandwidth
3) Classic Grid, or distributed systems of computers around the network, with inter-node latencies of hundreds of milliseconds but possibly good bandwidth
All have the same peak CPU performance, but synchronization costs increase as one goes from 1) to 3), while the cost of the system (dollars per gigaflop) decreases by a factor of about 2 at each step from 1) to 2) to 3). A rough latency model is sketched below.
One should NOT use a classic MPP if class 2) or 3) suffices, unless some security or data issue dominates over cost-performance. One should not use a Grid as a true parallel computer; it can link parallel computers together for convenient access, etc.
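To make the synchronization-cost argument concrete, here is a minimal sketch (not from the original slides; the latencies and compute time are assumed, round numbers) estimating how per-step latency alone erodes parallel efficiency for the three classes:

```python
# Illustrative latency-only model for the three classes of multi-node computer.
# The numbers are assumptions chosen for illustration, not measurements;
# the point is only how efficiency scales as latency grows from 1) to 3).

CLASSES = {
    "1) Classic MPP":     1e-6,   # ~ microsecond inter-node latency
    "2) Classic cluster": 1e-3,   # ~ millisecond latency
    "3) Classic Grid":    0.1,    # ~ 100s of milliseconds across the network
}

def efficiency(t_calc_per_step, t_latency, syncs_per_step=1):
    """Fraction of time spent computing if each step needs syncs_per_step
    synchronizations (bandwidth ignored in this latency-only model)."""
    t_comm = syncs_per_step * t_latency
    return t_calc_per_step / (t_calc_per_step + t_comm)

if __name__ == "__main__":
    t_calc = 0.01  # assume 10 ms of useful computation between synchronizations
    for name, latency in CLASSES.items():
        print(f"{name}: efficiency ~ {efficiency(t_calc, latency):.1%}")
```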

10 Sources of Grid Technology
Grids support distributed collaboratories or virtual organizations, integrating concepts from:
- The Web
- Agents
- Distributed objects (CORBA, Java/Jini, COM)
- Globus, Legion, Condor, NetSolve, Ninf and other high performance computing activities
- Peer-to-peer networks
The Web and P2P networks are perhaps the most important for "Information Grids", and Globus for "Compute Grids". A service architecture based on Web Services is the most critical feature.

11 Typical Grid Architecture
[Architecture diagram: layers include Portal Services, User Services, Application and System Services, the "Core" Grid middleware, databases, and raw (HPC) resources.]

12 A Typical Web Service
In principle, services can be written in any language (Fortran, Java, Perl, Python), and the interfaces can be method calls, Java RMI messages, CGI Web invocations, or even totally compiled away (inlining). The simplest implementations involve XML messages (SOAP) and programs written in net-friendly languages like Java and Python; a minimal sketch is shown below.
[Diagram: an e-commerce portal composed of Web Services such as Payment, Credit Card, Warehouse, Shipping control, Security and Catalog, each exposed through WSDL interfaces.]
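As an illustration only (the endpoint URL, namespace and "getCatalogItem" operation are invented for this sketch, not part of the talk), here is roughly what "the simplest implementation" looks like in Python: a SOAP-style XML envelope built by hand and posted over HTTP.

```python
# Minimal sketch of a SOAP-style call in Python.
# The endpoint URL, namespace and getCatalogItem operation are hypothetical;
# a real service would publish them in its WSDL interface.
import urllib.request

ENDPOINT = "http://example.org/catalog"          # hypothetical service endpoint

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getCatalogItem xmlns="http://example.org/catalog/types">
      <itemId>12345</itemId>
    </getCatalogItem>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
with urllib.request.urlopen(request) as response:   # returns the XML reply
    print(response.read().decode("utf-8"))
```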

13 What is Happening?
Grid ideas are being developed in (at least) two communities:
- Web Services: W3C, OASIS
- Grid Forum (high performance computing, e-Science)
Service standards are being debated; Grid operational infrastructure is being deployed; Grid architecture and core software are being developed. Particular system services are being developed "centrally", with OGSA as the framework for this. Lots of fields are setting domain-specific standards and building domain-specific services. There is a lot of hype.
Grids are viewed differently in different areas: largely "computing-on-demand" in industry (IBM, Oracle, HP, Sun), and largely distributed collaboratories in academia.

14 Technical Activities of Note
Look at different styles of Grids, such as autonomic (robust, reliable, resilient); new Grid architectures are hard due to the investment required.
Critical services include:
- Security: build message-based, not connection-based
- Notification: event services
- Metadata: use the Semantic Web, provenance
- Databases and repositories: instruments, sensors
- Computing: job submission, scheduling, distributed file systems
- Visualization, computational steering
- Fabric and service management
- Network performance
- Programming the Grid: workflow
- Accessing the Grid: portals, Grid Computing Environments

15 Issues and Types of Grid Services
1) Types of Grid: R3 (robust, reliable, resilient), lightweight, P2P, federation and interoperability
2) Core infrastructure and hosting environment: service management, component model, service wrapper/invocation, messaging
3) Security services: certificate authority, authentication, authorization, policy
4) Workflow services and programming model: enactment engines (runtime), languages and programming, compiler, composition/development
5) Notification services
6) Metadata and information services: basic including registry, semantically rich services and metadata, information aggregation (events), provenance
7) Information Grid services: OGSA-DAI/DAIT, integration with compute resources, P2P and database models
8) Compute/File Grid services: job submission, job planning, scheduling, management; access to remote files, storage and computers; replica (cache) management; virtual data; parallel computing
9) Other services, including Grid shell, accounting, fabric management, visualization, data-mining and computational steering, collaboration
10) Portals and problem solving environments
11) Network services: performance, reservation, operations

16 OGSA, OGSI & Hosting Environments
Start with Web Services in a hosting environment. Add OGSI to get a Grid service and a component model. Add OGSA to get an interoperable Grid, "correcting" differences in the base platform and adding key functionalities.
[Diagram: a layered stack of the network and hosting environment for Web Services, OGSI on Web Services, broadly applicable services (registry, authorization, monitoring, data access, etc.), more specialized services (data replication, workflow, etc.), and domain-specific services on top; annotations mark which layers are OGSA, possibly OGSA, not OGSA, or simply "given to us from on high".]
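The "add OGSI to get a Grid service" step is essentially wrapping a plain Web Service with standard Grid behaviour such as inspectable service data and managed lifetime. The sketch below illustrates that idea in Python; the class and method names are invented for the sketch and are not the actual OGSI portType definitions.

```python
# Illustrative only: a plain service wrapped with Grid-service-style behaviour
# (queryable service data and a managed termination time), in the spirit of
# OGSI.  All names here are invented for the sketch.
import time


class PlainWebService:
    """Stand-in for an ordinary Web Service operation."""
    def get_quote(self, symbol: str) -> float:
        return 42.0                      # dummy business logic


class GridServiceWrapper:
    """Adds service data and lifetime management around a plain service."""
    def __init__(self, service, lifetime_seconds: float):
        self.service = service
        self.service_data = {"created": time.time(),
                             "interface": type(service).__name__}
        self.termination_time = time.time() + lifetime_seconds

    def find_service_data(self, key: str):
        return self.service_data.get(key)

    def request_termination_after(self, seconds: float):
        self.termination_time = max(self.termination_time,
                                    time.time() + seconds)

    def invoke(self, operation: str, *args):
        if time.time() > self.termination_time:
            raise RuntimeError("Grid service lifetime has expired")
        return getattr(self.service, operation)(*args)


grid_service = GridServiceWrapper(PlainWebService(), lifetime_seconds=3600)
print(grid_service.invoke("get_quote", "IU"))
print(grid_service.find_service_data("interface"))
```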

17 Integration of Data and Filters
One has the OGSA-DAI data repository interface combined with the WSDL of the (Perl, Fortran, Python, ...) filter. The user only sees the WSDL, not the data syntax. There are some non-trivial issues as to where the filtering compute power sits; Microsoft says put the filter next to the data.
[Diagram: a database behind an OGSA-DAI interface, fronted by a filter that is exposed through the WSDL of the filter.]
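A minimal sketch of the pattern follows; the table layout, filter logic and function names are invented for illustration, and a real deployment would go through OGSA-DAI rather than a local SQLite connection.

```python
# Sketch of a "filter next to the data" service: the caller sees only the
# filter's interface (here a Python function standing in for its WSDL
# operation), never the underlying SQL or table layout.
import sqlite3


def _open_repository() -> sqlite3.Connection:
    """In-memory SQLite database as a stand-in for an OGSA-DAI repository."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (run INTEGER, mass REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?)",
                     [(1, 1.23), (1, 1.41), (2, 1.32)])
    return conn


def select_mass_window(low: float, high: float) -> list[float]:
    """The 'filter' operation the user actually calls: return event masses
    inside [low, high] without exposing the data syntax underneath."""
    conn = _open_repository()
    rows = conn.execute(
        "SELECT mass FROM events WHERE mass BETWEEN ? AND ?", (low, high))
    return [mass for (mass,) in rows]


if __name__ == "__main__":
    print(select_mass_window(1.2, 1.35))   # e.g. [1.23, 1.32]
```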

18 Data Technology Components of (Services in) a Computing Grid
[Diagram: a Job Management Service provides the Grid Service interface to the user or program client and coordinates the numbered component services: plan execution, schedule and control execution, access to remote computers, job submittal to a remote Grid service, data transfer, file and storage access, caching of data replicas, virtual data, Grid MPI, and job status.]
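To show how those pieces might fit together in client code (every class and call below is an invented placeholder, not a real Grid toolkit API), a job management service could orchestrate them roughly like this:

```python
# Illustrative orchestration of the services named in the diagram.
# Every class here is a stub; a real system would invoke Grid services
# (job submission, data transfer, replica management, ...) over the network.
from dataclasses import dataclass, field


@dataclass
class Job:
    executable: str
    inputs: list[str]
    status: str = "created"
    events: list[str] = field(default_factory=list)

    def log(self, message: str):
        self.events.append(message)
        self.status = message


class JobManagementService:
    """Stand-in for the Grid Service interface seen by the user or client."""

    def run(self, job: Job) -> Job:
        job.log("planned")             # plan execution
        job.log("scheduled")           # schedule and control execution
        job.log("resources acquired")  # access to remote computers
        job.log("submitted")           # job submittal to remote Grid service
        for f in job.inputs:           # data transfer, storage access, replicas
            job.log(f"staged {f} (cached replica)")
        job.log("completed")           # job status reported back
        return job


if __name__ == "__main__":
    result = JobManagementService().run(
        Job(executable="pwa_fit", inputs=["run1.evt", "run2.evt"]))
    print(result.status, result.events)
```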

19 Grid Strategy
LHC computing will be very well established, handling 10-100 times as much data as GlueX, by the time we need to go into production. GriPhyN, iVDGL, EDG, EGEE, PPDG and GridPP will customize core Grid technology for accelerator-based experiments: transporting data, caching data, and managing the initial data analysis and Monte Carlo. It is not clear whether this will be GT2, GT3 or OGSI, but it will certainly be Web Service based. We need to keep in close touch with these activities and build the GlueX physics analysis consistent with this infrastructure.

20 Implementing Grids
We need to design a service architecture for GlueX, building on services from HEP and other fields. We need some specific gluexML metadata specifying the services and properties particular to GlueX, with data structures and method interfaces specified in XML. Use portlets for user interfaces, as in http://www.ogce.org. Break things up into services wherever possible, but only if they are "coarse-grain"; a rough sketch of the metadata idea follows.
[Diagram: in the closely coupled model, Module A calls Module B by method calls in Java/Python with 0.001 to 1 millisecond latency; in the coarse-grain service model, Service A and Service B exchange messages with 0.1 to 1000 millisecond latency.]
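The slide calls for gluexML metadata still to be defined, so the element and attribute names below are purely hypothetical; the sketch only illustrates "specify data structures and method interfaces in XML" by generating a small service description with Python's standard library.

```python
# Hypothetical gluexML-style service description, built with the standard
# library.  Element and attribute names are invented for illustration only;
# an agreed GlueX schema would define the real vocabulary.
import xml.etree.ElementTree as ET

service = ET.Element("gluexService", name="PartialWaveFit", version="0.1")

meta = ET.SubElement(service, "metadata")
ET.SubElement(meta, "experiment").text = "GlueX"
ET.SubElement(meta, "provenance").text = "generated by gluexml_sketch.py"

interface = ET.SubElement(service, "interface", style="coarse-grain")
op = ET.SubElement(interface, "operation", name="fitMassBin")
ET.SubElement(op, "input", name="events", type="gluex:EventSet")
ET.SubElement(op, "input", name="waveSet", type="gluex:WaveList")
ET.SubElement(op, "output", name="amplitudes", type="gluex:FitResult")

print(ET.tostring(service, encoding="unicode"))
```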

21 Collage of Portals
[Screenshots: Earthquakes (NASA), Fusion (DoE), OGCE Components (NSF), Publications (CGL).]

22 Approach
- Convert every code into a Web Service, and every utility (such as visualization) into a Web Service; a sketch of such a wrapper follows
- Have good support for authoring and manipulating metadata
- Use existing code/database technology (SQL/Fortran/C++) linked to "Application Web/OGSA services"
- XML specification of models, computational steering and scale is supported at the "Web Service" level, since we don't need "high performance" there
- This allows use of Semantic Grid technology
[Diagram: typical codes exposed as Application Web Services, linking to the user and to other Web Services such as data sources.]
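Here is a minimal sketch of "convert every code into a Web Service"; the executable name and the plain-HTTP interface are placeholders, and a production service would publish a WSDL/OGSA interface instead.

```python
# Sketch: wrap an existing (Fortran/C++) executable as a trivial HTTP service.
# "pwa_fit" is a hypothetical legacy program; the wrapper just runs it with
# the posted input and returns its standard output to the caller.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

LEGACY_EXECUTABLE = "./pwa_fit"          # hypothetical compiled analysis code


class CodeWrapper(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        input_text = self.rfile.read(length).decode("utf-8")

        # Run the legacy code with the request body on its standard input.
        result = subprocess.run([LEGACY_EXECUTABLE], input=input_text,
                                capture_output=True, text=True)

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CodeWrapper).serve_forever()
```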

23 [Architecture diagram: layers include Portal Services, User Services, Grid Computing Environments, Visualization, Modeling, Fitting and Data Access Services, System Services, the "Core" Grid (Globus) middleware, databases, and raw data and compute resources.]

24 CERN LHC Data Analysis Grid

