
DAQ & ConfDB. Configuration DB workshop, CERN, September 21st, 2005. Artur Barczyk & Niko Neufeld.

Presentation transcript:

1 DAQ & ConfDB. Configuration DB workshop, CERN, September 21st, 2005. Artur Barczyk & Niko Neufeld

2 Niko Neufeld, CERN PH. The LHCb DAQ (as seen by the DAQ). [Architecture diagram; ECS]

3 Things that need to be configured
–TELL1s (only partially covered in this talk)
–Switches: event-builder switch, farm switch
–CPU nodes (only partially covered in this talk: system, event builder, buffer manager)
–Storage system

4 Configuration mechanisms
Configuration data are not always requested by, or pushed into, the entity that is being configured.
Example: IP address configuration of farm nodes:
–Farm nodes obtain their IP address via DHCP, served by a DHCP server on the Controls-PC
–The DHCP server configuration file is created on the Controls-PC using information from the ConfDB
–We assume that the Controls-PCs themselves have already been configured…
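The step of turning ConfDB records into a DHCP server configuration file could look roughly like the sketch below. All names (function names, the record layout, the example MAC/IP values) are illustrative assumptions; the slide does not show the actual ConfDB schema or tooling, only that such a file is generated on the Controls-PC.

```python
# Hypothetical sketch: render dhcpd.conf host blocks for farm nodes from
# ConfDB-style records (node name, MAC address, fixed IP address).
# The real ConfDB schema and generation tool are not shown on the slide.

def render_dhcp_host(name, mac, ip):
    """Return one dhcpd.conf host block for a farm node."""
    return (
        f"host {name} {{\n"
        f"    hardware ethernet {mac};\n"
        f"    fixed-address {ip};\n"
        f"}}\n"
    )

def render_dhcp_config(records):
    """Concatenate host blocks for all nodes served by this Controls-PC."""
    return "".join(render_dhcp_host(name, mac, ip) for name, mac, ip in records)
```

A usage sketch: `render_dhcp_config([("farm0101", "00:30:48:aa:bb:01", "10.130.1.1")])` yields a text fragment that can be written into the DHCP server's configuration and reloaded.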

5 TELL1
Configuration sequence:
–CCPCs boot from the Controls-PC via DHCP; the CCPC IP address and service configuration (daemons) are set up by the Controls-PC from the ConfDB
–TELL1s are configured by the Controls-PC from the ConfDB (using the CCPC)
Central configuration items:
–Ethernet and IP source address(es) for the data interfaces: 4 × 10 bytes @ RUN_START
–Bank type and source ID for raw data banks: 4 bytes
–Configuration of the GigE card: ~128 bytes
–Plus a host of other, more SD-specific settings: TTCrx, thresholds, throttle limits, tables, Conf
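The "4 × 10 bytes" figure for the data-interface addresses is consistent with a 6-byte Ethernet (MAC) address plus a 4-byte IPv4 address per interface, four interfaces per TELL1. The sketch below packs such a record; the byte layout itself is an assumption, since the slide only states the sizes.

```python
import struct

# Illustrative packing of the per-interface address record implied by the
# slide: 6-byte MAC + 4-byte IPv4 = 10 bytes per data interface,
# 4 interfaces per TELL1 = 40 bytes at RUN_START. Layout is assumed.

def pack_interface(mac_bytes, ip_bytes):
    """Pack one data-interface record: 6-byte MAC + 4-byte IP = 10 bytes."""
    assert len(mac_bytes) == 6 and len(ip_bytes) == 4
    return struct.pack("6s4s", mac_bytes, ip_bytes)

def pack_tell1_addresses(interfaces):
    """Pack the address block for all 4 data interfaces (40 bytes total)."""
    assert len(interfaces) == 4
    return b"".join(pack_interface(mac, ip) for mac, ip in interfaces)
```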

6 Switches
Static information: link aggregation, queue depths, queuing policies, global parameters (like spanning tree), etc.
–configured only for a COLD_START of the system
–normally this configuration is stored in flash RAM and loaded automatically during boot/reset
Dynamic information: forwarding tables (for Layer 2 and Layer 3)
–updated whenever a CPU node enters or exits the system, to avoid learning (probably not really necessary; see later)
–initially at RUN_START, then asynchronously during running as required

7 Switches (2)
How: switches are usually configured via a CLI / web interface or proprietary management tools; uniform scripting via SNMP exists only for monitoring. I assume we will run special scripts on a Controls-PC (using e.g. something like expect) which retrieve the information from the ConfDB.
What:
–Ethernet / VLAN / IP address of the management interface (given either by an out-of-band connection or via DHCP from a Controls-PC)
–Forwarding tables, link aggregation, queue depths, queuing policies, global parameters (spanning tree, etc.), VLAN configuration (tags and ports)
–The configuration can be: dumped as a whole (in which case we would store an ASCII/XML blob), composed of individual values (depths, addresses), or expressed as configuration scripts (to be executed on the CLI)
–Maximum of a few MB per switch
–Asynchronous updates to forwarding tables are small (10 bytes) and should be rare, if needed at all (dynamic load balancing; fixed, pre-programmed hardware and IP addresses on subfarm nodes)
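A script-driven approach like the one proposed here would first build the list of CLI commands from ConfDB values, then replay them over an expect-style session. The sketch below covers only the command-generation half; the command syntax is hypothetical vendor-style, which is precisely why the slide suggests per-switch scripts rather than a uniform protocol.

```python
# Hedged sketch: generate switch CLI command lists from ConfDB-style values.
# Command syntax is an invented vendor-like example; real switches differ.
# An expect/pexpect wrapper (not shown) would send these lines to the CLI.

def vlan_commands(vlan_id, name, tagged_ports):
    """Commands to create a VLAN and tag the given ports on it."""
    cmds = [f"vlan {vlan_id}", f" name {name}"]
    cmds += [f" tagged port {p}" for p in tagged_ports]
    cmds.append("exit")
    return cmds

def aggregation_commands(trunk_id, ports):
    """Commands to configure a link-aggregation (LACP) trunk."""
    return [f"trunk {trunk_id} ports {','.join(ports)} type lacp"]
```

Storing the generated command list in the ConfDB as an ASCII blob would match the "configuration scripts (to be executed on the CLI)" option on the slide.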

8 Farm nodes
n DAQ interfaces:
–2 VLANs (DAQ, untagged, and Storage, tagged), set on the node at node start time
–IP and HW address for each interface
Number of event buffers (for the buffer manager / dynamic load balancing)
ECS interface (via DHCP from the Controls-PC) at boot time
Expected sources for the event builder (at RUN_START)
Static description of the node characteristics:
–# cores
–# memory
–# jobs in parallel
–node type (calibration / filter)
Process images to run, depending on run type
Configuration of processes via job options or Gaucho (details not covered in this presentation)
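The static node description listed above could be modelled as a small record in the ConfDB. A minimal sketch, assuming field names that the slide does not actually specify:

```python
from dataclasses import dataclass, field

# Illustrative model of the static farm-node description from the slide
# (cores, memory, parallel jobs, node type, DAQ interfaces). Field names
# and types are assumptions; the slide only names the quantities.

@dataclass
class FarmNode:
    name: str
    cores: int
    memory_gb: int
    jobs_in_parallel: int
    node_type: str                 # "calibration" or "filter"
    daq_interfaces: list = field(default_factory=list)  # (HW addr, IP) pairs
```

This is the kind of record whose total size stays well under the ~500 bytes per entity estimated in the summary slide.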

9 Storage system
Data streams:
–producers (for alarms)
–consumers (for monitoring, quality checking, etc.)
As for the farm nodes:
–ECS interface(s) via DHCP
–DAQ interface(s) on the storage VLAN
–WAN interface (to CASTOR)
Alarm thresholds (memory cache, disk space), etc.
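The alarm thresholds mentioned here amount to a simple comparison of current usage against configured limits. A minimal sketch, with threshold names and values that are purely illustrative:

```python
# Minimal sketch of the alarm-threshold check implied by the slide:
# compare current cache/disk usage against limits from the ConfDB and
# report which thresholds are exceeded. All names/values are illustrative.

def check_thresholds(usage, thresholds):
    """Return the names of quantities whose usage exceeds the threshold."""
    return [k for k, limit in thresholds.items() if usage.get(k, 0) > limit]
```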

10 Summary: DAQ & ConfDB
Context of the DAQ: TELL1s, network, farm & storage.
ConfDB updates are not very frequent.
Some 3000 entities to configure:
–configuration data are small for most of them: < 500 bytes
–exception: switches (routing tables), which can be several 100 kB (but almost static!)
This list is not complete: I have certainly forgotten a few quantities along the way, but the total size & frequency will not change significantly.
For some items it is not yet clear who is “responsible” for configuring them; e.g. I did not “budget” the (detailed) configuration of the algorithms on the farm nodes.
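A back-of-envelope check of these estimates: ~3000 entities at under 500 bytes each is about 1.5 MB, and the switch configurations dominate the total. The entity count and per-entity size come from the slide; the switch count below is an illustrative assumption, not a number from the talk.

```python
# Back-of-envelope total for the ConfDB footprint described in the summary:
# ~3000 entities at < 500 bytes each, plus switches at a few 100 kB each.
# The default switch count (60) is an assumed, illustrative figure.

def total_config_size(n_entities=3000, bytes_per_entity=500,
                      n_switches=60, bytes_per_switch=300_000):
    """Rough upper bound on total configuration data in bytes."""
    return n_entities * bytes_per_entity + n_switches * bytes_per_switch
```

Even with generous assumptions the total stays in the tens of MB, consistent with the conclusion that updates are infrequent and the switch tables, while large, are almost static.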


