1 Jeff Hill

2
- LANSCE Requirements – a Review
- EPICS Paradigm Shift
- Magic Pipes
- Data Access – is it Easy CA?
- Database CA Service
- Server Upgrades
- On How We Can Move Forward, IMHO
- Conclusion

3
- LANSCE, a versatile machine
  - Originally producing H+, H-, and polarized H-, each with different intensities, duty factors, and even energies, depending on experimental and medical isotope production needs
- LANSCE timing and flavoring of data
  - Flavor selection based on a logical combination of beam gates
  - Timing selection based on time-window sampling
- Many permutations
  - Too many to install records for all of them a priori
  - Subscription update filtering is needed

4
EPICS:
- Distributed Control
- Data Acquisition
- Physics Control
- Open Source
- Vendor Neutral
- OS Neutral
- Small Footprint

5
- What is a Data Acquisition System?
- Replacing …

6
- What is a Data Acquisition System?
- It must efficiently filter and archive copious amounts of data
  - selecting interesting occurrences
  - saving them for later detailed processing/evaluation
- It must be easily reconfigurable

7
- What is a Distributed Data Acquisition System?
- It must be runtime-reconfigurable by clients
  - Don't expect to know, when designing/compiling the runtime system, what experiments/filters might be devised later on
    - Experiments/filters are configured when the client subscribes
  - Don't expect to know, when designing/compiling the runtime system, what data aggregations will be sent on different data branches to different clients

8
- Current weaknesses deploying EPICS in data acquisition situations
- Record processing provides good flexibility to create event filters, but …
  - Frequently it isn't possible to know all of the experiments when the IOC's database is designed
  - A distributed data acquisition system needs runtime reconfiguration initiated by client-side tools
- Limited data model
  - No runtime aggregation or user-defined types

9
- Current weaknesses deploying EPICS in data acquisition situations
- No support for site-specific tagging of the data
  - If a site needs to filter for LANSCE H- beam:
    - Filtering based on process-control attributes such as the time-stamp is awkward
    - Filtering based on site-specific parasitic PV attribute data (the LANSCE flavor) leads to better-structured control room applications

10
- Before: EPICS, a process control system by design, sometimes used for data acquisition
- Now: EPICS, a process control and data acquisition system by design
- Not an upgrade, but a leap forward in the general utility of the system

11
[Diagram: data flow from Device Support through Record Support, DB Common, and the CA Server; labels include Alarm State, PV Value, Signal Data, Time Stamp, Record-Specific Values, and Device-Specific Values]

12
Issues transporting data through software layers:
- Independence
- Data Lifecycle
- Concurrency
- Efficiency

13
- Internal code changes in one of Device Support, Record Support, or the CA Server shouldn't require matching changes in the others
- Runtime data introspection is needed

14
- Data Access provides runtime introspection
- Catalog, an abstract interface to structured data
  - traverse reveals all of the fields and their purposes
  - find locates a field of a particular purpose
- Clerk provides a simple get interface
  - clerk.get ( id_units, unitsString );
  - Range errors are detected during conversion (an upgrade)

15
- Data is created during record processing, but must not be destroyed until the last per-client thread in the server is done filtering/copying it
- Reference-counting smart pointers manage the data lifecycle
- When the last pointer reference is destroyed, the data are destroyed

16
- Data is modified during record processing, but must not be modified at the same time that a server per-client thread is filtering/copying it
- Auto-locking smart pointers manage concurrency

17
- Smart pointers work through a flexible abstract handle interface
- The application chooses the locking strategy
  - Locks can be shared between objects
  - If the data are immutable then no locking is required
- The application chooses the reference-counting strategy
  - The reference counter might be shared between objects
  - The reference counter might be embedded with the data
  - Immortal data? No reference counter required

18
- Smart pointer reference counting
  - Uses the new atomic operations library
  - Much faster than a mutex
- Data Access
  - Arrays transferred in moderate, fixed-size chunks

19
In the EPICS community we have:
- Application developers (well adjusted, etc.)
  - EPICS database, screens, MATLAB, Python, Tcl, etc.
- System programmers (geeks)
  - Device drivers, EPICS internals, etc.
Implementing the Data Access interface for a particular data structure/class is a system programmer's job.

20
Once interfaced with Data Access:
- A high-level (i.e., easy) CA interface is available
- DA is not the data-manipulation interface used by application-level users
- Users use the public interface of the data structure/class that has been interfaced
- Communities develop around the data structures/classes standardized by particular applications, industries, and instruments

21
[Diagram: the Database CA Service connects the Database and Device Support to the CA Server]

22
- General strategy
  - The database service is part of the database implementation
    - Therefore it can be (should be) intimately aware of database internals
- Improvements
  - Eliminate the subscription-list protection mutex allocated in every record
  - Allow communication via non-contiguous arrays
    - Eliminates the EPICS_CA_MAX_ARRAY_SIZE parameter

23
Eliminate EPICS_CA_MAX_ARRAY_SIZE
- The dbGetField, dbPutField, dbPutCallback array API:
  - void pointer, number of elements, type code
  - A contiguous buffer greater than or equal to the largest array must exist in the CA server
- In contrast, Data Access:
  - Arrays are passed as a sequence of compile-time-typed contiguous blocks with multi-dimensional bounds
  - The server now uses moderate, fixed-size communication buffers
- Somehow, Data Access interfaced data must be the source/sink for a database field

24
- Eliminate the subscription-list protection mutex allocated in every record
  - The database service can protect its subscription list using the DB scan lock
  - A smart-pointer handle for the DB service uses the DB scan lock for synchronization

25
- General strategy
  - Minimal changes to existing database access code, which can be easily verified
    - No architectural changes
  - Refactor the dbGetField, dbPutField, dbPutCallback code onto a lowest-common-denominator interface
    - Presuming the DB scan lock is already owned, having been taken at a higher level
    - Field-modification callback function-pointer parameter
    - The lowest-common-denominator interface is private

26
- The lock in every record for the subscription list is eliminated
- Opportunities to eliminate locking
  - Consolidate CA service locking and DB scan locking
- The dbGetField, dbPutField, dbPutCallback API is retained exactly backwards compatible
- The code in dbGetField, dbPutField, dbPutCallback is functionally and almost line-by-line equivalent, but refactored

27
- Designed to transport polymorphic data
- Event queue carrying polymorphic/parasitic data
- New API
  - Identical server-to-service and client-side-application-to-client-library
- Designed for SMP
- Eliminated from the server: EPICS_CA_MAX_ARRAY_SIZE
- Multicast listener
- Binding to specific network interfaces
- Inherited from PCAS

28
Event filtering:

>camonitor "fred$F $(PV:)>30 && $(PV)<40"
fred$F $(PV:)>30 && $(PV)<40  2010-06-03 07:58:47.224969  36.6466
fred$F $(PV:)>30 && $(PV)<40  2010-06-03 07:58:47.227964  37.1654
fred$F $(PV:)>30 && $(PV)<40  2010-06-03 07:58:47.267460  33.9427
fred$F $(PV:)>30 && $(PV)<40  2010-06-03 07:58:47.276013  33.9976
fred$F $(PV:)>30 && $(PV)<40  2010-06-03 07:58:47.299041  37.8033
fred$F $(PV:)>30 && $(PV)<40  2010-06-03 07:58:47.319065  33.549

>camonitor "fred$F $(PV:flavor)==30"
fred$F $(PV:flavor)==30  2010-06-03 07:58:18.906049  44.1145
fred$F $(PV:flavor)==30  2010-06-03 07:58:21.899019  39.2743
fred$F $(PV:flavor)==30  2010-06-03 07:58:24.885000  54.3352
fred$F $(PV:flavor)==30  2010-06-03 07:58:27.855063  93.9634
fred$F $(PV:flavor)==30  2010-06-03 07:58:30.811997  97.7081

29
pv {
  timeStamp
  alarm {
    acknowledge { pending }
    condition { status, severity }
  }
  limits {
    display { upper, lower }
    control { upper, lower }
    alarm {
      major { upper, lower }
      minor { upper, lower }
    }
  }
  labels
  units
  precision
  class { name }
}

Green indicates that a value is stored. In a DA tree a node does not need to be a leaf node in order to carry a value. This allows for less hierarchy traversal when doing a basic fetch. For example:

Catalog & someData;
Clerk clerk ( someData );
double value;
clerk.get ( pi_pv, value );

30
pv {
  signal {
    deviceName
    timeStamp
    waveform {
      value
      sampleRate
    }
  }
  LANSCE { flavor }
}

31
- Addressing
- Configuration
- Routing issues

32
- One-to-many communication, but unlike IP broadcasting it is designed to make IP routers transparent
- IP multicasting is mature and widely deployed
  - Commercial stock exchanges
  - Multimedia content delivery industries

33
- Administratively scoped multicast groups
- Multicasting has good potential to simplify configuring Channel Access in large systems with multiple subnets

  Scope       IPv4 Range
  Link-local  224.0.0.0 to 224.0.0.255
  Site-local  239.255.0.0 to 239.255.255.255
  Org-local   239.192.0.0 to 239.195.255.255
  Global      224.0.1.0 to 238.255.255.255

34
- Search requests
  - Clients send to a special IPv4 address
    - No client-side code changes
  - The server listens to a multicast group, or a multicast group on a specified interface
- Beacon messages
  - Ditto, but vice versa
- Can eliminate the need for the CA repeater

35
- Client side
  - EPICS_CA_ADDR_LIST
    - If an address is a multicast group, search messages are sent to that mc group
  - EPICS_CA_BEACON_ADDR_LIST
    - If not defined: monitor the mc groups in EPICS_CA_ADDR_LIST for beacons
    - If defined and not empty: specifies the set of mc groups to be monitored for beacon messages
    - If defined and empty: will not receive mc beacons

36
- Server side
  - If EPICS_CAS_INTF_ADDR_LIST isn't defined
    - The server listens on all network interfaces for messages sent to EPICS_CAS_SERVER_PORT
      - Unicast and broadcast messages
      - Multicast messages sent to any multicast group address found in EPICS_CA_ADDR_LIST

37
- Server side
  - If EPICS_CAS_INTF_ADDR_LIST is defined
    - If an address is a multicast group, multicasts sent to that group are received on all configured interfaces
    - For addresses of the form {multicast group address, interface address}:
      - Multicasts sent to the specified mc group will be received only on the specified interface
      - Unicast and broadcast traffic will not be received on the specified interface (unless enabled elsewhere)

38
- EPICS_CAS_BEACON_ADDR_LIST
  - Specifies beacon destinations (mc or otherwise)
  - When EPICS_CAS_INTF_ADDR_LIST isn't defined, defaults to EPICS_CA_BEACON_ADDR_LIST, or if that isn't defined, EPICS_CA_ADDR_LIST

39
- Logical names for multicast groups
  - Recommended, but just use DNS, local host files, etc.

40
- EPICS_CA_BEACON_PORT
  - Defaults to EPICS_CA_REPEATER_PORT
    - EPICS_CA_REPEATER_PORT becomes deprecated
  - Just an appropriate name change

41
- The network admin must decide where the boundaries will lie for the different levels of administratively scoped IPv4 multicasting
- A multicast will not be auto-forwarded outside of its scope

42
- A conservative approach is the natural state of the control-system community
  - Don't fix it if it isn't broken
  - It's very simple, robust, and efficient now
    - Will new features only detract from the fundamentals responsible for success?

43
- Hazards of the conservative approach
  - New features are only added as evolution instead of architectural upgrades
  - There can be a patchwork of new features instead of a grand design
  - Learning the system becomes more difficult because of the need to
    - carefully absorb a chapter on each patch
    - carefully decide which patch will be best

44
- Perhaps we can afford some new capabilities
  - We started out on a 20 MHz 68020 with 2 MB of memory; quad-core SMP will soon be the default
  - With a larger user base we can beat the bugs out of the carpet faster, and amortize the expense over more projects
  - With the low cost of embedded processors, only a few signals per processor is cost-effective

45
- Basic principles of software quality
  - New features only in new-feature releases
  - Patches only in patch releases

46
- Perhaps the path is easily navigated
  - Provide backwards compatibility
  - New features only in new-feature releases
  - Patches only in patch releases

47
- Full disclosure
  - Users of a new-features release pay a price
    - Document new bugs so that they can be fixed
    - New programming interfaces need user feedback; they should not be locked down early on
  - Until a new release reaches sufficient maturity, it is not a good choice for essential production systems

48
- The next-generation CA server library is substantially complete and provides many advantages
- The Database CA Service is nearing completion
- Multicasting: simplified configuration of large systems!

