1 LHCb Conditions Database
TEG Workshop, 7 November 2011, Marco Clemencic

2 Overview
LHCb Conditions Database
Deployment Model: planned, actual, future
Considerations
Conclusions

3 CondDB Deployment Model (plan)
Oracle at T0, the PIT and the T1s
Oracle Streams: cross replication CERN IT ⇄ PIT, replication CERN IT → T1s
Based on the original computing model (reconstruction at T0/T1s, simulation at T2s). Using COOL as the CondDB library (CORAL to connect to the databases); a minimal access sketch follows below.
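The slides contain no code, but as an illustration of the COOL/CORAL access mentioned above, here is a minimal PyCool sketch, assuming the COOL Python bindings are installed. The connection string, folder path, channel and timestamp are placeholders for illustration, not the actual LHCb configuration.

```python
# Minimal sketch of reading a condition through COOL (PyCool bindings).
# Connection string, folder path and timestamp are illustrative placeholders.
from PyCool import cool

# CORAL-style connection string: technology, server/file, schema, COOL dbname.
# An Oracle replica would instead look like
#   "oracle://<server>;schema=<SCHEMA>;dbname=LHCBCOND"
connect_string = "sqlite://;schema=LHCBCOND.db;dbname=LHCBCOND"

dbsvc = cool.DatabaseSvcFactory.databaseService()
db = dbsvc.openDatabase(connect_string)        # read-only by default

folder = db.getFolder("/Conditions/Example")   # hypothetical folder path
when = 1320600000000000000                     # event time in ns (placeholder)
obj = folder.findObject(when, 0)               # condition valid at 'when', channel 0
print(obj.payload())                           # the stored condition payload

db.closeDatabase()
```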

4 CondDB Deployment Model (real)
Use Oracle for the first reconstruction pass (required for the Online conditions)
Use SQLite for analysis and reprocessing
Tier-2s joining the first reconstruction pass
Oracle is currently used only during data taking, for first-pass reconstruction. SQLite is used whenever possible, including on the Filter Farm at the PIT. We are investigating the use of Tier-2 CPU resources for reconstruction, which requires direct access to the Oracle servers at the T1s. A backend-selection sketch follows below.
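To make the split between the two backends concrete, here is a hypothetical Python helper showing the kind of selection logic implied by this model; the activity names and connection strings are invented for illustration and are not the actual LHCb configuration.

```python
# Hypothetical sketch of how a job could pick its CondDB backend
# depending on the activity; names and strings are illustrative only.

ORACLE_ONLINE = "oracle://t1-server;schema=LHCB_CONDDB;dbname=ONLINE"  # placeholder
SQLITE_SNAPSHOT = "sqlite://;schema=LHCBCOND.db;dbname=LHCBCOND"       # placeholder

def conddb_connection(activity: str) -> str:
    """Return a CondDB connection string for the given activity."""
    if activity in ("first-pass-reconstruction", "online"):
        # Needs the freshest Online conditions: go to Oracle directly.
        return ORACLE_ONLINE
    # Analysis, reprocessing, simulation: a distributed SQLite snapshot is enough.
    return SQLITE_SNAPSHOT

print(conddb_connection("reprocessing"))   # -> SQLite snapshot
```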

5 Why the change? Problems of Oracle
Authentication: no X509 proxy certificates
Replication: lack of feedback, no control
Support: licensing, heavy infrastructure, maintenance
Authentication: the missing support for X509 proxies forced us either to drop authentication or to distribute credentials in alternative ways, such as through the LFC. Unfortunately the LFC could not cope, so we now use the DIRAC Configuration System (a lookup sketch follows below). Replication: several times conditions had not yet been replicated when jobs needed them; we need to know when the conditions are available everywhere, possibly integrating this information with job scheduling. Support: T2s providing CPUs cannot afford Oracle licenses, administrators and servers, so their jobs need direct access to the Oracle servers at the T1s.
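As an illustration of distributing the database connection details through the DIRAC Configuration System instead of relying on X509 proxies at the database level, here is a rough sketch. The option path and default value used below are hypothetical placeholders, not the real LHCb configuration.

```python
# Rough sketch: fetch the CondDB connection string from the DIRAC
# Configuration System. The option path below is a hypothetical placeholder.
from DIRAC import gConfig

def conddb_connect_string(default="sqlite://;schema=LHCBCOND.db;dbname=LHCBCOND"):
    # gConfig.getValue(path, default) reads a single option from the CS.
    return gConfig.getValue("/Resources/CondDB/DefaultConnection", default)

print(conddb_connect_string())
```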

6 Alternatives to Oracle
SQLite: originally meant for disconnected use; deployment model not dynamic; relatively small amount of work to improve; cannot scale (but the limit is still far)
Frontier: better scalability (just add a proxy); requires work for tuning
SQLite has always been used by LHCb, both before the Oracle setup and for disconnected analysis. Its deployment model was tuned for the originally foreseen use cases and needs a review to make it more dynamic; the distribution of very large database files is also problematic. Frontier performs better than plain Oracle access (possibly an issue with the CORAL plugin?) and is much easier to scale up; see the caching sketch below.
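Frontier scales by putting HTTP caching proxies (Squid) in front of the database, so identical queries can be served from the cache instead of reaching the server. The toy Python sketch below illustrates only that idea; the cache dictionary and the fake query function are invented for illustration and are not the Frontier client.

```python
# Toy illustration of the Frontier idea: identical query strings are served
# from a cache instead of reaching the Oracle server. Everything here
# (the cache dict, the fake execute function) is invented for illustration.

cache = {}   # in a real Frontier deployment this role is played by Squid proxies

def fake_oracle_query(sql: str) -> str:
    return f"result of: {sql}"          # stand-in for a real server round trip

def cached_query(sql: str) -> str:
    # Cache hit only if the query text is byte-for-byte identical:
    # this is why "cache-friendly" (stable, repeatable) queries matter.
    if sql not in cache:
        cache[sql] = fake_oracle_query(sql)
    return cache[sql]

q = "SELECT payload FROM conditions WHERE since <= :t AND :t < until"
cached_query(q)
cached_query(q)
print(len(cache))   # 1: the second call was served from the cache
```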

7 Plans (near future)
Change the SQLite deployment model: pull instead of push; local caches; automatic update of the Online conditions
Prepare the adoption of Frontier: use cache-friendly queries; analyze the access pattern; tune the cache parameters
The updated SQLite deployment model is already available and will be in production in the next few weeks (a pull-style update sketch follows below). The adoption of Frontier was planned for the summer but has been delayed by lack of manpower. SQLite will cover the short-term issues, Frontier the long-term ones.
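A minimal sketch of the "pull instead of push" idea for the SQLite snapshots, assuming a hypothetical HTTP location that exposes a version file next to each database file; none of the URLs or file names are the real LHCb ones.

```python
# Minimal "pull" sketch: a site checks a remote version file and downloads
# a new SQLite snapshot only when its local copy is outdated.
# The URL, directory and file names are hypothetical placeholders.
import os
import urllib.request

BASE_URL = "http://conddb.example.org/snapshots"   # placeholder
LOCAL_DIR = "/opt/conddb"                          # placeholder

def pull_snapshot(name="LHCBCOND.db"):
    with urllib.request.urlopen(f"{BASE_URL}/{name}.version") as resp:
        remote_version = resp.read().decode().strip()
    version_file = os.path.join(LOCAL_DIR, f"{name}.version")
    local_version = open(version_file).read().strip() if os.path.exists(version_file) else ""
    if remote_version != local_version:
        # Outdated (or missing) local copy: pull the new snapshot.
        urllib.request.urlretrieve(f"{BASE_URL}/{name}", os.path.join(LOCAL_DIR, name))
        with open(version_file, "w") as f:
            f.write(remote_version)

pull_snapshot()
```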

8 CondDB Deployment Model (next)
Oracle for the Online conditions
SQLite for everything else, with the new distribution model
We will still use Oracle to write conditions at the PIT and replicate them to the T0 database.

9 Considerations
Is Oracle really needed? MySQL/PostgreSQL solutions could be adopted more easily (at T1s, T2s…), and open source tools use open source databases.
We need better control over the replication process: pull instead of push (as in Frontier). Is "eventual consistency" better than "hoped-for consistency"?

10 Conclusions
Oracle: more a problem than a solution
High availability: local replicas
Synchronization: "pull" rather than "push"
Migration? No, thanks (not yet, at least)

