
1 INTEGRATING TIMING AND FREE-RUNNING ACTORS WITH DATAFLOW Tim Hayles Principal Engineer National Instruments September 9, 2008 hayles@ni.com

2 Agenda  What is LabVIEW Dataflow?  Why is timing important to NI?  Further motivation for this research  Key elements  Asynchronous Data Wires  Asynchronous Timing Wires  A small set of “Free Running” Actors  Implementation Details  Examples

3 What is LabVIEW Dataflow?  aka Structured Dataflow  Loops, cases, sequences, …  aka Homogeneous Dataflow  Always a single token on a wire  aka simply ‘G’  Dynamically scheduled  Data driven  Simple to use and understand  Widely successful

4 LabVIEW Dataflow

5 Why is Timing Important to NI?  We make I/O products  I/O timing requires  Configuration  Routing  Synchronization: multiple subsystems on one board, multiple boards in one chassis, multiple chassis  I/O timing must play well with application timing

6 DAQmx STC Clock Model

7 DAQmx API

8 Dialog Configuration

9 Maybe this would work …

10 Data Acquisition: Dataflow API  Generate a single, re-triggerable pulse, delayed from the trigger  DAQmx dataflow configures a circuit and activates it  Clearing the task has the side effect of disabling the circuit

11 Data Acquisition to the Pin  This mixed model is much closer to the actual hardware  Opening up the pulse generator can reveal more detail

12 Now we also want …  To further exploit parallelism, even between nodes sharing a data transformation  To incorporate common data exchange techniques into the language  First-class support for  Pipelines  Multi-rate  Streaming  Timing  Offer a multi-target programming canvas  Deployment, startup and shutdown order

13 Asynchronous Wire  No inherent semantics  All behavior conferred by implementation outside LabVIEW – though written in LabVIEW  One wire for data  One wire for timing  Producer and consumer nodes become “free-running” actors firing based on  Data or space availability  Timing
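
A minimal textual sketch of this idea (mine, not from the slides): in Go, a buffered channel can stand in for the asynchronous data wire, with the producer and consumer as free-running goroutines that fire on space and data availability rather than on a dataflow schedule.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	wire := make(chan int, 4) // asynchronous data wire, depth 4

	// Producer actor: fires whenever the wire has space.
	go func() {
		for i := 0; ; i++ {
			wire <- i // blocks only when the buffer is full
			time.Sleep(100 * time.Microsecond)
		}
	}()

	// Consumer actor: fires whenever the wire has data.
	go func() {
		for v := range wire {
			fmt.Println("consumed", v)
		}
	}()

	time.Sleep(time.Millisecond) // let the free-running actors spin briefly
}
```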

14 Timing Wires  Carry not time stamps, but ‘ticks’  Time stamps are data  Ticks are unit-less, ephemeral state changes: voltage transitions, Boolean transitions  Route signals  Clock domains for FPGA logic  Clocks for I/O timing and triggering  Triggers for timing computations  Abstract over the physical medium  Copper traces (actual wires)  Memory locations (‘soft’ clocks and triggers)
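
Since a tick is an ephemeral, unit-less event rather than data, a `chan struct{}` is a natural Go stand-in for a ‘soft’ timing wire. A sketch under that assumption (the 100 ms clock rate is arbitrary):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	tick := make(chan struct{}) // timing wire: carries events, not data

	// Soft clock source: emits a unit-less tick every 100 ms.
	go func() {
		for range time.Tick(100 * time.Millisecond) {
			tick <- struct{}{} // the state change itself is the signal
		}
	}()

	// An actor triggered by the timing wire rather than by data.
	for i := 0; i < 3; i++ {
		<-tick
		fmt.Println("fired on tick", i)
	}
}
```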

15 Timing Wires

16  Triggers can be polymorphic to accept time stamps (but that’s a data wire)

17 Asynchronous Data Wire  Buffered  Writer usually, though not always, ‘owns’ the buffer  Besides depth, buffer has type  Register  FIFO  Circular buffer  Type determines behavior  aka data exchange policy
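
One way to read “type determines behavior” is as a single wire interface with the policy hidden behind it. A Go sketch (interface and method names are mine, not the product’s API); the three policies are fleshed out under the Register, FIFO, and Circular Buffer slides below:

```go
package wire

// DataWire: the buffer's type is the data exchange policy.
type DataWire interface {
	Write(v int) // policy decides whether to block, clobber, or drop
	Read() int   // policy decides whether to block or re-read the last value
}
```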

18 IF-RIO Experiment  RF Transceiver product  Async wires and actors used on same canvas as LabVIEW dataflow

19 IF-RIO Experiment Now  A new palette of seven async actors and two dataflow actors  FPGA only

20 IF-RIO Experiment – FPGA code

21 Harnessed User G Code  The False case writes false to Start DAC and is otherwise empty  No notion of buffering policies used in the async layer

22 IF-RIO Experiment – FPGA code

23 Harnessed User G Code  LabVIEW dataflow makes the copy of the data  When the first copy is made, Start goes true and the DAC begins consuming data

24 Harness Integration of Dataflow  A technique for complete separation of the MOCs  The VI is pure LabVIEW dataflow  The free-running Harness generates the code to read and write the asynchronous wires and call the VI  Code generation is optimized for streaming

25 Harness Generated Code  If the harnessed VI does not support any of the streaming protocol terminals, the IP Block only executes when  There is no pending write  A read was successful
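
A sketch of that gating logic (illustrative Go, not the generated code itself): the harness fires the harnessed VI only after the previous output has been retired and a fresh input has been read.

```go
package main

import "fmt"

// harness fires ip only when there is no pending write and a read succeeded.
func harness(in <-chan int, out chan<- int, ip func(int) int) {
	pending := false
	var result int
	for {
		if pending {
			select {
			case out <- result: // retire the pending write
				pending = false
			default:
				continue // output wire still full: do not fire the IP block
			}
		}
		select {
		case v := <-in: // read succeeded: execute the harnessed VI once
			result, pending = ip(v), true
		default: // no input this pass: the IP block does not execute
		}
	}
}

func main() {
	in, out := make(chan int, 1), make(chan int, 1)
	go harness(in, out, func(x int) int { return x * 2 })
	in <- 21
	fmt.Println(<-out) // 42
}
```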

26 System Diagram  MOC integration  Complete separation  Limited mixing  I/O integration  Discover  Configure  Operate  Target integration  Multiple targets  One canvas

27 Four Target System Diagram

28 G behind System Diagram  Just the accessors and terminals here

29 G behind System Diagram  G code must provide the loop

30 Three Target System Diagram

31 Physically Constrained View

32 More System Diagram Research  Relation to timestamp-based MOCs  MOC hosting  Debugging  Simulation  Scheduling  A state machine for deployment, etc.  Communication between the state machine and the hosted code  Mapping IP among processing resources

33 Contact Information  hayles@ni.com  www.ni.com\labview  Local contact: hugo.andrade@ni.com

34 Dataflow Integration Strategies  Freely mixed with dataflow  Completely separated from dataflow  Limited mixing with dataflow

35 LabVIEW Dataflow  Naturally concurrent, but nodes with dataflow dependencies are sequential  Simple firing rule, but  greedy, must have one and only one token per wire on all wires  timing of I/O often independent of dataflow
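
The firing rule can be stated in a few lines. A toy model (mine, not LabVIEW’s scheduler): a node may fire only when every input wire holds exactly its single token.

```go
package main

import "fmt"

// wire holds at most one token, per the homogeneous dataflow rule.
type wire struct{ token *int }

// canFire: a node fires only when every input wire has its one token.
func canFire(inputs []wire) bool {
	for _, w := range inputs {
		if w.token == nil {
			return false // an input is still missing its token
		}
	}
	return true
}

func main() {
	a, b := 1, 2
	fmt.Println(canFire([]wire{{&a}, {&b}})) // true: one token on each wire
	fmt.Println(canFire([]wire{{&a}, {}}))   // false: the node must wait
}
```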

36 Freely Mixed Integration  Asynchronous wires and actors mixed with G  “Amorphous Heterogeneous Dataflow”  Both structured and non-structured  Any number of tokens may be on a wire  IF-RIO experiment  FPGA target  RF applications

37 Complete Separation  Asynchronous wires and actors in a separate diagram  The System Diagram  A multiple processing target development canvas  A host for other models of computation  Integrates well with our I/O  Interoperation with LabVIEW dataflow is managed by the System Diagram  Scheduling  Data transfer  G remains G

38 Limited Mixing  Asynchronous actors in System Diagram only  Asynchronous wires in both System Diagram and G  Data is moved between dataflow and asynchronous wires only by dataflow actors in G  Scheduling of G code is shared  Initial invocation controlled by System Diagram  Subsequent scheduling controlled by G

39 IF-RIO to the Pin  Drill into hidden actors  Reveal both educational and configurable elements  CDC 7005

40 CDC 7005 plus inputs

41 Initial Drop  Default configuration  Thick border indicates further detail is available, maybe via double-click

42 Next Level: Timebase Choices  In-place expansion of configuration options  Thick border on VCXO hints at more detail there  To collapse, perform the same action that expanded

43 Next Level: PLL Configuration  Choice of DAC value now offered  Performing expand/collapse on the original node collapses the whole thing  Any configuration changes made at deeper levels are preserved when collapsed

44 Register Data Wire  Single element deep  No waiting policy  Both data and space are always available  Unread data clobbered  Same data read multiple times
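
A Register wire in a few lines of Go (illustrative, not the product’s implementation): writes never wait and clobber the old value; reads never wait and can return the same value repeatedly.

```go
package main

import (
	"fmt"
	"sync"
)

// Register: a one-element wire where data and space are always available.
type Register struct {
	mu sync.Mutex
	v  int
}

func (r *Register) Write(v int) { r.mu.Lock(); r.v = v; r.mu.Unlock() }
func (r *Register) Read() int   { r.mu.Lock(); defer r.mu.Unlock(); return r.v }

func main() {
	var r Register
	r.Write(1)
	r.Write(2)                      // clobbers the unread 1
	fmt.Println(r.Read(), r.Read()) // same data read twice: 2 2
}
```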

45 FIFO Data Wire  Same as a queue  N element size, N >= 1  Writer waits on space availability  Reader waits on data availability  Exceptions  Waiting can be disabled  I/O actors never wait
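
A buffered Go channel already has these queue semantics, and `select` with a `default` case models the exceptions where waiting is disabled (e.g. an I/O actor that must never block). A sketch:

```go
package main

import "fmt"

func main() {
	fifo := make(chan int, 2) // N-element FIFO wire, N >= 1

	fifo <- 1 // writer would wait here if the wire were full
	fifo <- 2

	// Non-waiting write, as an I/O actor would do: drop when full.
	select {
	case fifo <- 3:
	default:
		fmt.Println("wire full, sample dropped")
	}

	fmt.Println(<-fifo, <-fifo) // 1 2: first in, first out
}
```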

46 Circular Buffer Data Wire  N element size, N >= 2  When N = 1 it’s a Register  Like Register, no waiting  Unread data clobbered  Same data read multiple times  Unlike Register, a writer can wait for the I/O read pointer (regeneration case)  Like FIFO, data is first in, first out  Unlike FIFO, an I/O writer can push the read pointer (pretrigger case)
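
A minimal circular-buffer wire in Go (illustrative; the regeneration and pretrigger refinements above are omitted): FIFO ordering, but like a Register it never waits, so a write into a full buffer clobbers the oldest unread element.

```go
package main

import "fmt"

// Circular: N >= 2 elements, FIFO order, never waits, clobbers oldest.
type Circular struct {
	buf   []int
	head  int // next element to read
	count int // unread elements
}

func NewCircular(n int) *Circular { return &Circular{buf: make([]int, n)} }

func (c *Circular) Write(v int) {
	tail := (c.head + c.count) % len(c.buf)
	c.buf[tail] = v
	if c.count == len(c.buf) {
		c.head = (c.head + 1) % len(c.buf) // full: clobber the oldest
	} else {
		c.count++
	}
}

func (c *Circular) Read() int {
	v := c.buf[c.head]
	if c.count > 0 {
		c.head = (c.head + 1) % len(c.buf)
		c.count--
	}
	return v
}

func main() {
	c := NewCircular(2)
	c.Write(1)
	c.Write(2)
	c.Write(3)                      // clobbers 1
	fmt.Println(c.Read(), c.Read()) // 2 3
}
```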

47 DVB-T from NIWeek 2007

48 Streaming Handshaking Protocol  Enables FPGA IP to execute every cycle  May not be able to accept data  May not have data to emit  But still has useful work to do  The IP can support all or some of these  output valid  input valid  ready for input  ready for output
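
The four optional terminals reduce to two transfer conditions per clock cycle. A Go sketch of one cycle (field and function names are mine): the handshake gates the data transfers, never the execution itself.

```go
package main

import "fmt"

// Handshake: the four optional streaming protocol terminals.
type Handshake struct {
	InputValid     bool // upstream has data this cycle
	ReadyForInput  bool // the IP can accept data this cycle
	OutputValid    bool // the IP emits data this cycle
	ReadyForOutput bool // downstream can accept data this cycle
}

// cycle models one clock tick: the IP block always runs, but data
// moves only when both sides of a transfer agree.
func cycle(hs Handshake) {
	if hs.InputValid && hs.ReadyForInput {
		fmt.Println("consume one input sample")
	}
	if hs.OutputValid && hs.ReadyForOutput {
		fmt.Println("emit one output sample")
	}
	// ...internal pipeline state still advances here...
}

func main() {
	cycle(Handshake{InputValid: true, ReadyForInput: true})
}
```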

49 Stream Protocol in Use

50 Harness Generated Code  The harnessed VI supports all of the streaming protocol terminals  The IP Block executes every cycle
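
Contrast this with the gated harness sketched earlier: when the VI exposes all four terminals, the harness can simply pass the handshake through and fire the block unconditionally on every cycle. A sketch under the same illustrative names:

```go
package main

// signals: the streaming terminals, forwarded rather than gated.
type signals struct{ inValid, rdyIn, outValid, rdyOut bool }

// ipBlock runs every cycle; the terminals only gate data transfers.
func ipBlock(s signals) signals {
	return signals{outValid: s.inValid, rdyIn: s.rdyOut}
}

func main() {
	s := signals{}
	for cycle := 0; cycle < 3; cycle++ {
		s = ipBlock(s) // executes every cycle, no harness gating
	}
	_ = s
}
```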

