
1 CORAL Server & CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications
A. Valassi, A. Kalkhof (CERN IT-ES)
M. Wache (University of Mainz / ATLAS)
A. Salnikov, R. Bartoldus (SLAC / ATLAS)
CHEP2010 (Taiwan), 19 October 2010

2 Outline
Introduction
– ATLAS High Level Trigger specific requirements
– Broader motivation for a general-purpose CORAL middle tier
Development and deployment status
– Successful experience with the ATLAS HLT
– Development outlook
Conclusions

3 Introduction
CORAL is used by most applications accessing LHC physics data stored in relational databases
– Important example: the conditions data of ATLAS, LHCb and CMS
– Oracle is the main deployment technology at T0 and T1
Limitations of the classic client/server architecture
– Security, performance, software distribution
– Several issues may be addressed by adding a middle tier
Collaboration of two teams in ATLAS and IT, with two sets of use cases from the start of the project
– R/O access with caching and multiplexing for the ATLAS HLT
– Secure access with R/W capabilities for generic offline users
– Converged on an open design that may cover both

4 ATLAS DAQ/HLT architecture
All HLT nodes (500 L2 + 1600 EF) need to read configuration data from the database (trigger configuration, detector geometry, conditions data) to be able to process events.

5 HLT requirements for DB access
Every HLT process (up to 8 on each of ~2000 nodes) must read 10-100 MB of data from Oracle
– Too many simultaneous clients for the DB servers
– Too much data to fetch in a short time (hundreds of GB)
→ Must reduce both the data volume read from the DB and the number of clients
Positive point: every L2 or EF client needs to retrieve the same data from the database
– The database only needs to send the L2 and EF data once
→ Add an intermediate 'proxy' layer (see the sketch below)
– Cache the data retrieved from the DB
– Multiplex client connections
– Chain proxies for more scalability
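The proxy idea on this slide boils down to a read-through cache keyed by the request itself: identical requests from many HLT clients are answered from the cache, so the database only sees each request once. A minimal sketch of that idea in C++ — all names (CachingProxy, forwardToServer) are hypothetical illustrations, not the actual CoralServerProxy code:

```cpp
#include <map>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical read-through cache illustrating the caching/multiplexing
// idea on this slide (NOT the real CoralServerProxy implementation).
class CachingProxy {
public:
    // Answer a request from the cache if possible; otherwise forward it
    // upstream once (to the next proxy in the chain, or to the server)
    // and cache the reply for all subsequent identical requests.
    std::vector<char> handleRequest(const std::string& requestBytes) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            auto it = m_cache.find(requestBytes);
            if (it != m_cache.end()) return it->second;   // cache hit
        }
        std::vector<char> reply = forwardToServer(requestBytes);  // single upstream hit
        std::lock_guard<std::mutex> lock(m_mutex);
        m_cache.emplace(requestBytes, reply);
        return reply;
    }

private:
    // Stub: a real proxy would send the request upstream over TCP.
    std::vector<char> forwardToServer(const std::string& request) {
        return std::vector<char>(request.begin(), request.end());
    }

    std::map<std::string, std::vector<char>> m_cache;
    std::mutex m_mutex;
};
```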

6 DbProxy and M2O for ATLAS HLT
DbProxy: proxy implementation based on the MySQL protocol
– MySQL was used during HLT commissioning
– Included some useful tools for database client monitoring
– Essential part of TDAQ from 2007 until CoralServer was deployed
M2O: short-term MySQL-to-Oracle bridge
– Oracle is used by the HLT in production during data taking
– The Oracle protocol is closed and proprietary, so M2O was a short-term workaround until CoralServer was fully functional
– Essential part of TDAQ from 2008 until CoralServer was deployed

7 CoralServer broader motivation
Efficient and scalable use of DB server resources
– Multiplex clients using fewer physical connections to the DB
– Optional caching tier for R/O access (CORAL server proxy), also useful for further multiplexing of logical client sessions
Secure access (R/O and R/W) to DB servers
– Authentication via Grid certificates (Oracle has no support for X.509 proxy certificates)
– Hide database ports within the firewall (reduce vulnerabilities)
– Authorization via VOMS groups in Grid certificates
Client software deployment
– CoralAccess client plugin using a custom network protocol
– No need for an Oracle/MySQL/SQLite client installation

8 High-level architecture (e.g. COOL)
[Architecture diagram] Without the middle tier: user code calls the COOL and CORAL APIs, and the OracleAccess plugin (via the ConnectionSvc) speaks the Oracle OCI protocol directly to the Oracle DB server, requiring open ports through the firewall. With the middle tier: user code calls the same APIs, but the CoralAccess plugin speaks the CORAL protocol to a CoralServer Proxy and on to the CoralServer, and only the CoralServer speaks Oracle OCI to the DB server behind the firewall (no open ports).

9 CoralServer development in 2009
The development of the current code base started in January 2009
– New team and new design, which benefitted from the requirement gathering and the earlier developments and prototypes of 2008
– Joint architecture design for server and client, with modular components decoupled using abstract interfaces
The priority was to meet the HLT requirements first
– But include both offline and online needs in the design
– Weekly meetings to keep track of progress
– Network protocol agreed with the HLT proxy developers
– A few features are specific to the HLT (e.g. transactions)

10 CoralServer deployment for ATLAS
CoralServer was deployed at Point 1 in October 2009
– Deployment was very smooth; it worked almost immediately, largely thanks to systematic testing during the development process (unit tests, standalone HLT test, TDAQ farm tests…)
– It provides full read-only functionality
– It simplified authentication handling (single credential store)
– Performance is adequate for the current purposes
It has been successfully used for data taking ever since
– Now an essential part of TDAQ, replacing M2O/DbProxy
– Very stable: only one issue, cured by a restart, on a DB failure; this is a general problem with CORAL reconnections after DB/network glitches (not specific to CoralServer), which is being worked on
– Monitoring features are still limited and are being extended
– Adopted by other online systems; interest in R/W features

11 Deployment model for ATLAS HLT
A single CoralServer for the ATLAS HLT system
– Two chains of CoralServerProxies for the L2 and EF subsystems

12 Work in progress (at low priority)
Monitoring enhancements for the ATLAS HLT
– SQLite prototype, with features similar to the M2O monitoring
Complete secure authentication/authorization
– SSL sockets and VOMS proxy certificates (dependencies on SSL/VOMS versions were only recently sorted out)
– Tool to load Oracle passwords into the server
Further performance tests and optimizations
– Compare to direct Oracle access and Frontier; add a proxy disk cache
Deploy a general-purpose server/proxy at CERN
– A test CoralServer is already deployed for the nightlies
Full read-write functionality
– DML (e.g. insert table rows) and DDL (e.g. create tables)

13 Conclusions
CoralServer has been successfully used by the ATLAS HLT during data taking since October 2009
– Smoothly deployed and stable during production operation
– Full R/O functionality with data caching and multiplexing
The production software used by the ATLAS HLT was essentially developed in only nine months
– Modular design and a strong emphasis on testing
– Excellent cooperation between the teams involved
Work in other areas is (slowly) progressing
– Enhanced monitoring, secure access, R/W functionality…
– The current priority during data taking is experiment support and service operation for CORAL, COOL and POOL

14 Reserve slides

15 CORAL DB access plugins
[Architecture diagram] C++ code of the LHC experiments (independent of the DB choice) uses the COOL and POOL C++ APIs, or CORAL directly, on top of the technology-independent CORAL C++ API. The CORAL plugins are: OracleAccess (OCI C API, to an Oracle DB), SQLiteAccess (SQLite C API, to an SQLite DB file), MySQLAccess (MySQL C API, to a MySQL DB; no longer used), FrontierAccess (Frontier API, over http via a Squid web cache to a Frontier web server, which reaches Oracle via JDBC), and CoralAccess (coral protocol, via a Coral Proxy cache to a Coral Server, which reaches Oracle via OCI).

16 S/w architecture components
[Architecture diagram] A user application (or a CORAL application using the plugins for Oracle, MySQL…) calls the RelationalAccess interfaces. On the client side (CoralAccess), the client bridge classes invoke the remote call through the ICoralFacade interface; the ClientStub (CoralStubs) marshals the arguments and the ClientSocketMgr (CoralSockets) sends the request over the IRequestHandler interface. On the server side, the ServerSocketMgr receives the request, the ServerStub unmarshals the arguments, and the ServerFacade (CoralServer) invokes the local call through the RelationalAccess interfaces. The local results flow back the same way: the ServerStub marshals the results, the ServerSocketMgr sends the reply, the ClientSocketMgr receives it, the ClientStub unmarshals the results, and the bridge classes return the remote results to the caller. The abstract interfaces (ICoralFacade, IRequestHandler) live in the thin CoralServerBase package; the components are grouped into package 1 (a/b), package 2 and package 3 (see the next slide).
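The call chain on this slide is the classic RPC stub pattern: the same abstract facade interface is implemented once by a client stub that marshals calls into requests, and invoked once by a server stub that unmarshals requests back into local calls. A minimal sketch, assuming a toy one-method facade and a toy string wire format (the real ICoralFacade and the CORAL network protocol are much richer):

```cpp
#include <string>

// Toy stand-ins for the CoralServerBase abstract interfaces.
struct ICoralFacade {                       // what the application calls
    virtual std::string fetchRows(const std::string& query) = 0;
    virtual ~ICoralFacade() = default;
};
struct IRequestHandler {                    // what crosses the socket layer
    virtual std::string reply(const std::string& request) = 0;
    virtual ~IRequestHandler() = default;
};

// Client side: marshal the call into a request and hand it to the next
// IRequestHandler (the socket manager or, in tests, the server stub directly).
class ClientStub : public ICoralFacade {
public:
    explicit ClientStub(IRequestHandler& next) : m_next(next) {}
    std::string fetchRows(const std::string& query) override {
        std::string request = "FETCH:" + query;   // marshal arguments
        return m_next.reply(request);             // unmarshal results (trivial here)
    }
private:
    IRequestHandler& m_next;
};

// Server side: unmarshal the request and invoke the local facade
// (in the real server, the ServerFacade wrapping RelationalAccess).
class ServerStub : public IRequestHandler {
public:
    explicit ServerStub(ICoralFacade& facade) : m_facade(facade) {}
    std::string reply(const std::string& request) override {
        std::string query = request.substr(6);    // strip the "FETCH:" tag
        return m_facade.fetchRows(query);         // invoke local call, marshal results
    }
private:
    ICoralFacade& m_facade;
};
```

Because both stubs meet the same two abstract interfaces, the socket layer can be dropped in between or left out entirely, which is exactly the incremental-testing scheme on the next slide.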

17 Incremental tests of applications
[Architecture diagram] The same user application is tested against four increasingly complete stacks. Traditional CORAL 'direct': user application → RelationalAccess interfaces → OracleAccess (OCI implementation). Add package 1 (a/b), 'façade only': CoralAccess (bridge from RelationalAccess) → ICoralFacade → ServerFacade (facade to RelationalAccess) → OracleAccess. Add package 2, 'stub + façade': the ClientStub (marshal/unmarshal) → IRequestHandler → ServerStub (unmarshal/marshal) pair is inserted between the bridge and the facade. Add package 3, 'server' (full chain): the ClientSocket (send/receive) → TCP/IP → ServerSocket (receive/send) pair is inserted between the two stubs.
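With the toy stubs from the slide-16 sketch, the 'stub + façade' configuration can be wired together in a single process, which is the point of these incremental tests: the marshalling layer is exercised without any sockets. A hypothetical test along those lines, reusing the ICoralFacade/ClientStub/ServerStub definitions above (LocalFacade stands in for the ServerFacade/OracleAccess stack):

```cpp
#include <cassert>
#include <string>

// Assumes the toy ICoralFacade / IRequestHandler / ClientStub / ServerStub
// definitions from the slide-16 sketch are visible here.

// Stand-in for the ServerFacade + OracleAccess stack: answers locally.
struct LocalFacade : ICoralFacade {
    std::string fetchRows(const std::string& query) override {
        return "rows-for(" + query + ")";  // fake result set
    }
};

int main() {
    LocalFacade facade;            // 'façade only' layer
    ServerStub server(facade);     // unmarshal + invoke local call
    ClientStub client(server);     // marshal + invoke 'remote' call, no sockets
    // The application sees only ICoralFacade, whichever stack is behind it.
    assert(client.fetchRows("SELECT 1") == "rows-for(SELECT 1)");
    return 0;
}
```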

18 Secure access scenario
[Architecture diagram] User code calls the COOL and CORAL APIs; CoralAccess sends the DB connection string and an X.509 proxy certificate through the firewall to the CoralServer. The SSL implementation of the CoralSockets library decodes the proxy certificate using VOMS; the CoralAuthentication service maps the DB connection string and the VO attributes to a DB username and password, which OracleAccess then uses to connect to the Oracle DB server via OCI. Experiment admins also need a tool to load into the CORAL server the DB username and password associated with given VO attributes.
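The mapping step in this scenario is essentially a lookup table from (connection string, VO attribute) pairs to database credentials, filled by the planned admin tool and consulted at connection time. A minimal sketch of that idea — CredentialStore and DbCredentials are hypothetical names, not the actual CoralAuthentication service:

```cpp
#include <map>
#include <optional>
#include <string>
#include <utility>

struct DbCredentials {
    std::string username;
    std::string password;
};

// Hypothetical credential store: maps (DB connection string, VO attribute)
// to the DB username/password that the server should use on behalf of
// clients presenting that VO attribute in their proxy certificate.
class CredentialStore {
public:
    // Called by the (planned) admin tool that loads passwords into the server.
    void add(const std::string& connection, const std::string& voAttribute,
             DbCredentials credentials) {
        m_store[{connection, voAttribute}] = std::move(credentials);
    }

    // Called at connection time, after VOMS has decoded the proxy certificate.
    std::optional<DbCredentials> lookup(const std::string& connection,
                                        const std::string& voAttribute) const {
        auto it = m_store.find({connection, voAttribute});
        if (it == m_store.end()) return std::nullopt;  // not authorized
        return it->second;
    }

private:
    std::map<std::pair<std::string, std::string>, DbCredentials> m_store;
};
```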

19 Guidelines of the present design
Joint design of server and client components
– Split the system into packages 'horizontally' (each package includes both the server-side and the client-side components); the proxy is currently standalone but could be reengineered with this design
– RPC architecture based on the Dec. 2007 Python/C++ prototype
Different people work in parallel on different packages
– Minimize software dependencies and couplings
– Upgrades in one package should not impact the others
Strong emphasis on functional tests
– Include standalone package tests in the design, aiming to intercept issues before they show up in system tests
– System tests may be split back into incremental subtests
Decouple components using abstract interfaces
– Modular architecture based on object-oriented design
– Thin base package with common abstract interfaces
– Implementation encapsulated in concrete derived classes

20 A few implementation details
The server is multi-threaded (see the sketch below)
– Threads are managed by the SocketServer component
– One listener thread (to accept new client connections)
– One socket thread per client connection
– A pool of handler threads (many per client connection if needed)
The network protocol was agreed with the proxy developers
– Weekly meetings (ongoing, for regular progress review)
– Most application-level content is opaque to the proxy; the proxy understands transport-level metadata and a few special application-level messages (connect, start transaction…)
– Most requests are flagged as cacheable, so each hits the DB only once
– The server may identify (via a packet flag) client connections coming from a proxy and establish a special 'stateless' mode
  Session multiplexing: cache connect requests, drop disconnect requests
  One R/O transaction per session (drop transaction requests)
  'Push all rows' model for queries (no open cursors in the DB)
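The threading model on this slide (one listener, one socket thread per connection, a shared pool of handler threads) follows a common server pattern: socket threads only read requests and enqueue them, while pool threads do the actual work. A minimal sketch of the handler-pool part, assuming a simple string work item — HandlerPool is a hypothetical name, not the actual SocketServer code:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Hypothetical handler-thread pool: per-connection socket threads enqueue
// requests; a fixed pool of handler threads dequeues and processes them.
class HandlerPool {
public:
    HandlerPool(std::size_t nThreads, std::function<void(std::string)> handler)
        : m_handler(std::move(handler)) {
        for (std::size_t i = 0; i < nThreads; ++i)
            m_threads.emplace_back([this] { workerLoop(); });
    }

    ~HandlerPool() {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_stopping = true;
        }
        m_condition.notify_all();
        for (auto& t : m_threads) t.join();
    }

    // Called by the per-connection socket threads for each incoming request.
    void enqueue(std::string request) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(request));
        }
        m_condition.notify_one();
    }

private:
    void workerLoop() {
        for (;;) {
            std::string request;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_condition.wait(lock,
                    [this] { return m_stopping || !m_queue.empty(); });
                if (m_stopping && m_queue.empty()) return;
                request = std::move(m_queue.front());
                m_queue.pop();
            }
            m_handler(std::move(request));  // process outside the lock
        }
    }

    std::function<void(std::string)> m_handler;
    std::vector<std::thread> m_threads;
    std::queue<std::string> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_condition;
    bool m_stopping = false;
};
```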

21 ATLAS HLT proxy specificities
Development has largely focused on the HLT so far
– Very complex or very simple, depending on the point of view
– Some choices specific to the HLT (mainly in the proxy layer) may need to be changed for a general-purpose offline service
Oracle connection sharing was initially disabled
– Not needed for the HLT, thanks to session multiplexing in the proxy, but needed for connection multiplexing in a generic server
– A hang had been observed with connection sharing; the problem was identified as an Oracle bug affecting multi-threaded clients, solved in the 11g client
The HLT needs non-serializable R/O transactions
– Requirement: see data added after the start of the R/O connection (connections are potentially long)
– Presently handled by a hidden environment variable in CORAL
– May need a cleaner way out (API extension: three transaction modes) — a possible shape is sketched below
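The 'three transaction modes' mentioned here would presumably distinguish update transactions from two read-only flavours: a serializable snapshot, and a non-serializable mode that sees data committed after the transaction starts. A purely hypothetical sketch of what such an API extension could look like — this is NOT the actual CORAL transaction interface:

```cpp
// Hypothetical API extension: three transaction modes instead of a
// boolean readOnly flag (NOT the real CORAL ITransaction interface).
enum class TransactionMode {
    Update,                 // read-write transaction
    ReadOnlySerializable,   // R/O snapshot: never sees later commits
    ReadOnly                // R/O, non-serializable: sees data committed
                            // after the transaction starts (HLT requirement)
};

struct ITransaction {
    virtual void start(TransactionMode mode) = 0;
    virtual void commit() = 0;
    virtual void rollback() = 0;
    virtual ~ITransaction() = default;
};
```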

22 CORAL proxy application (possible future design)
[Architecture diagram] The client stack (client bridge classes, ClientStub, ClientSocketMgr) and the server stack (ServerSocketMgr, ServerStub, ServerFacade) are unchanged from slide 16; the proxy sits in between with its own ServerSocketMgr (facing the clients), a Cache, and its own ClientSocketMgr (facing the server), connected through the IRequestHandler interface. The Cache either replies to a request directly from the cache, or forwards the request upstream and caches the forwarded reply.

