An Overview of Data Communication in LabVIEW


1 An Overview of Data Communication in LabVIEW
Elijah Kerry – LabVIEW Product Manager, Certified LabVIEW Architect (CLA)
The goal of this presentation is to provide an overview of the common approaches for sending and receiving data between different processes in LabVIEW. We'll look at a few examples and help you understand and articulate the communication goals of your application so that you can choose the most appropriate implementation.

2 Data Communication Options in LabVIEW
TCP and UDP, Local Variables, Network Streams, Programmatic Front Panel Interface, Shared Variables, Target-scoped FIFOs, DMAs, Notifiers, Web Services, Simple TCP/IP Messaging (STM), Peer-to-Peer Streaming, Queues, AMC, Dynamic Events, HTTP, Functional Global Variables, FTP, RT FIFOs, Global Variables, DataSocket … just to name a few.
The point to convey here is that there are a lot of different data communication technologies in LabVIEW, many of which were developed with a specific use case in mind. That being said, the biggest challenge we have as programmers is selecting the right one. When you hear people complaining about LabVIEW or about a certain technology, such as shared variables, they have likely selected the wrong tool for the job. By the end of this presentation, you should have a better idea of the criteria and considerations that can help you select the most appropriate feature.

3 Communication is Important
Windows, Real-Time, FPGA. It's not just communication between two loops: this problem spans targets and requires different solutions for different environments.

4 Agenda
Introduction to Data Communication
Define Communication Types
Identify Scope of Communication
- Inter-process
- Inter-target
Next Steps
ni.com/largeapps

5 Demonstration: The Pitfalls of Local Variables
Demonstrate what happens when shared variables go awry or are misused: send data from an acquisition loop to a graphing loop at different rates. Create and display a sine wave in one loop, and send the data over a shared variable to another loop running at a different rate. This shows the result of a lossy data transfer using shared variables when streaming should be used.
Questions to ask the class:
- When/why would you want to stream data from one loop to another? To offload data processing or logging to another, less time-critical loop.
- Why is this a bad method? Data can be missed (lossy) or read multiple times (stale data).
- Can you think of another way to accomplish this in LabVIEW? Queues, dynamic events, RT FIFOs.
- What should you use shared variables for? Reading the latest value; monitoring a variable such as engine temperature or flow rate; passing tag data such as VI configuration or control settings.
We will cover the correct way to accomplish this with queues and RT FIFOs in about 12 slides.
KEY TAKE-AWAY: Do not use shared variables for buffered data transfer or for streaming applications. They are meant to hold and update the latest value, and they are inherently lossy. You can enable buffering on network shared variables, but there is a better way: network streams, covered later.

6 Common Pitfalls of Data Communication
Race conditions: two requests made to the same shared resource at the same time
Deadlock: two or more dependent processes are waiting for each other to release the same resource
Data loss: gaps or discontinuities when transferring data
Performance degradation: poor processing speed due to dependencies on shared resources
Buffer overflows: writing to a buffer faster than it is read from the buffer
Stale data: reading the same data point more than once
Examples of each:
Race condition: two writes to the same shared resource at the same time; you cannot be sure which value will be written.
Deadlock 1: two people want to draw a straight line, so each needs a ruler and a pencil. One grabs the pencil, the other grabs the ruler, and each then waits for the other to finish with the item they hold.
Deadlock 2: two trains pull into a station at the same time, and neither can leave until the other has completely exited the station.
Data loss: watching television from a satellite or antenna feed when the signal gets interrupted.
Performance degradation: running multiple processes in the same loop creates a bottleneck, where a fast process can only run as fast as the slowest process.
Buffer overflow: writing to a buffer faster than data is read from it, like pouring water into a bucket with a hole in the bottom faster than it can drain out of the hole.
Stale data: reading the same value more than once in a streaming application; the data appears stair-stepped or digitized on a waveform graph.
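The race condition pitfall above can be reproduced in a few lines of any multithreaded language. Below is a minimal Python sketch (illustrative only, not LabVIEW code): several threads increment a shared counter, once with the read-modify-write protected by a lock and once without.

```python
import threading

def run_counters(locked, n=50_000, workers=4):
    """Increment a shared counter from several threads; with locked=False
    the read-modify-write can interleave (a race condition)."""
    state = {"count": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(n):
            if locked:
                with lock:                 # serialize the whole read-modify-write
                    state["count"] += 1
            else:
                tmp = state["count"]       # read ...
                state["count"] = tmp + 1   # ... write: another thread may have
                                           # updated the counter in between

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]

assert run_counters(locked=True) == 200_000   # the lock protects the shared resource
print(run_counters(locked=False))             # often less than 200000: updates lost silently
```

The unlocked version loses updates without raising any error, which is exactly why race conditions are so hard to debug: the program appears to work, just with wrong data.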

7 The Dining Philosophers
The dining philosophers problem is summarized as five silent philosophers sitting at a circular table doing one of two things: eating or thinking. While eating, they are not thinking, and while thinking, they are not eating. A large bowl of spaghetti is placed in the center, which requires two forks to serve and to eat (the problem is therefore sometimes explained using rice and chopsticks rather than spaghetti and forks). A fork is placed in between each pair of adjacent philosophers, and each philosopher may only use the fork to his left and the fork to his right. However, the philosophers do not speak to each other. Taken from:
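The deadlock in this story comes from a circular wait: each philosopher holds one fork and waits for another. A classic fix is to impose a global ordering on the shared resources so the cycle cannot form. A minimal Python sketch of that fix (illustrative only, not from the slides):

```python
import threading

NUM = 5
forks = [threading.Lock() for _ in range(NUM)]  # one fork between each pair

def philosopher(i, meals, done):
    # Acquire forks in a global order (lower index first) so a cycle of
    # waiting philosophers -- the deadlock in the original problem -- cannot form.
    first, second = sorted((i, (i + 1) % NUM))
    for _ in range(meals):
        with forks[first]:
            with forks[second]:
                pass  # eating
    done[i] = True

done = [False] * NUM
threads = [threading.Thread(target=philosopher, args=(i, 100, done))
           for i in range(NUM)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert all(done)  # every philosopher finished: no deadlock occurred
```

If each philosopher instead always grabbed the left fork first, all five could hold one fork and wait forever; the ordering rule breaks that cycle.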

8 Communication Types
Message/Command: “Get me a soda!”
Update/Monitor: “The current time is…”
Stream/Buffer: “…the day the music died…”
Variable/Tag: “Set Point = 72 F”
This introduces each of the communication types; we will step into each in more detail in the following slides.

9 Message/Command
Commander (Host) and Worker (Target) Systems
Must be lossless* (can be buffered)
Minimal latency
Typically processed one at a time
Reads are destructive
Example: stop button, alarm, error
*some commands may need to pre-empt other commands based on priority

10 Update/Monitor
Periodic transfer of the latest value
Often used for HMIs or GUIs
N Targets : 1 Host
Can be lossy
Non-buffered
Example: monitoring current engine temperature

11 Stream/Buffer
Continuous transfer, but not deterministic
High throughput
No data loss, buffered
1 Target : 1 Host; unidirectional
Example: high-speed acquisition on a target, sent to a host PC for data logging

12 Variable/Tag
Set points and PID constants
Initial configuration data
Can be updated during run time
Only the latest value is of interest
1 Host : N Targets
Example: reading/writing the set point of a thermostat, .ini configuration files

13 Choosing Transfer Types
Message Update Stream Variable (Tag) Examples Exec Action Error Heartbeat Movie Waveform Image Setpoint Fundamental Features Buffering Blocking (Timeout) Single-Read Nonhistorical Optional Features Ack Broadcast Multi-layer Buffering Dynamic Lookup Group Mgmt Latching Performance Low-Latency High-Throughput High-Count Configuration N Targets: 1 Host N Targets:1 Host 1 Target:1 Host Unidirectional
This is not an all-inclusive list; it merely serves as an overview of some of the features involved in each transfer type. These are not hard-and-fast rules: there are definitely exceptions and blurred lines in certain situations. This is also a good time to give an example of choosing a transfer type. For example: I want to transfer an entire waveform from one process to another. How would I do this? --> stream/buffer

14 Scope of Communication
Inter-process: the exchange of data takes place within a single application context Inter-target: communication between multiple physical targets, often over a network layer

15 Defining Inter-process Communication
Communication on the same PC or target
Communicate between parallel processes or loops
Offload data logging or processing to another CPU/core/thread within the same VI/executable
Loops can vary in processing priority
Used to communicate synchronously and asynchronously
(Diagram: an ACQ loop and a LOG loop running at high, medium, and low priorities)
Inter-process: the exchange of data takes place within a single application context or VI, with no network involvement.

16 Inter-process Communication Options
Shared Variables: update a GUI loop with the latest value
Queues: stream continuous data between loops on a non-deterministic target
Dynamic Events: register dynamic events to execute sections of code
Functional Global Variables (FGV): use a non-reentrant subVI to protect critical data
RT FIFOs: stream continuous data between time-critical loops on a single RT target
I've listed some of the commonly used options. In examining the more prominent ones, it is important to consider when it is appropriate to use each feature and the inherent advantages and considerations each provides. To start, TCP and UDP VIs are available in LabVIEW for users who are interested in developing networked applications at a lower level. This is beneficial for quickly establishing extremely simple data communication, but it requires expertise to develop any advanced functionality. Shared variables were introduced in LabVIEW 8.0 to make it easy for LabVIEW developers to network distributed LabVIEW applications and communicate with hardware such as NI CompactRIO and NI Compact FieldPoint. These variables can be dragged and dropped from the project with minimal configuration or development. Remote front panels (RFP) provide the ability to view a LabVIEW interface from a web browser quickly and easily; however, the client requires installation of the run-time engine. Remote front panels are probably the most similar in functionality to web services, so we'll spend a little extra time examining these.

17 Basic Actions: INITIALIZE
Set the value of the shift register.
Now that you've demonstrated how uninitialized shift registers work, explain the basic working idea behind an FGV. This diagram shows how we can store a new value to the shift register.

18 Basic Actions: GET
Get the value currently stored in the shift register.
Similarly, we can retrieve the stored value as shown.

19 Action Engine: ACTION
Perform an operation upon the stored value and save the result. You can also output the new value.
Perhaps one of the most powerful aspects of an FGV is the ability to customize what actions are performed on the value stored in memory. The cloud in this diagram can represent any number of actions or computations to be performed upon any number of data types.

20 How It Works
A Functional Global Variable is a non-reentrant subVI
Actions can be performed upon the data
An enum selects the action
The result is stored in an uninitialized shift register
The loop only executes once
A functional global VI is called as a subVI from your application. The only required parameter is the action to be performed; the initial value, too, can be passed in or left as a constant. Note that the loop executes only once because its stop terminal is always set to TRUE.
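Since LabVIEW diagrams cannot be shown in a text transcript, here is a rough Python analog of the FGV idea described above: state that persists between calls (the uninitialized shift register), a lock that makes each call atomic (the non-reentrant subVI), and an action selector (the enum). All names here are illustrative, not LabVIEW API.

```python
import threading

def make_functional_global(initial=0):
    """Analog of a LabVIEW FGV: state lives in a closure (the 'shift
    register') and a lock makes each whole call atomic (non-reentrant)."""
    state = {"value": initial}
    lock = threading.Lock()

    def fgv(action, data=None):
        with lock:                        # protects the critical data
            if action == "initialize":    # INITIALIZE: set the stored value
                state["value"] = data
            elif action == "increment":   # ACTION: operate on the stored value
                state["value"] += 1
            # "get" falls through: just return the stored value
            return state["value"]

    return fgv

counter = make_functional_global()
counter("initialize", 10)
counter("increment")
assert counter("get") == 11
```

Because every caller goes through the same lock, two simultaneous "increment" calls cannot race, which mirrors why a non-reentrant FGV prevents race conditions where plain global variables do not.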

21 Demonstration Introduction to Functional Global Variables
Use the example VI to demonstrate initializing, incrementing and retrieving the value of a Functional Global Variable

22 Comparison: Functional Global Variables vs. Global and Local Variables
Functional Global Variables:
- Prevent race conditions
- No copies of data
- Can behave like action engines
- Can handle error wires
- Take time to make
Global and Local Variables:
- Can cause race conditions
- Create copies of data in memory
- Cannot perform actions on data
- Cannot handle error wires
- Drag and drop
FGVs are often a good substitute for variables, as they address many of the problems people often have, such as race conditions. They do, however, require more work.

23 Understanding Data Dependency
Code in a VI is organized into ‘diagrams’ Diagrams are executed in order based on data dependency Objects within the diagrams are executed in order based on data dependency Data dependency is dictated by the flow of wires

24 Understanding Dataflow in LabVIEW
Clump 0, Clump 1, Clump 2
The diagram in the previous slide is “clumped” by the LabVIEW compiler: a graph containing numerous nodes is reduced to a graph containing a few clumps. Each clump produces a section of code that LabVIEW can schedule. Within a clump, LabVIEW offers no parallelism; between clumps, LabVIEW can multitask by using its execution system.

25 Doing Everything in One Loop Can Cause Problems
While Loop: Acquire (10 ms), Analyze (50 ms), Log (250 ms), Present (20 ms)
One cycle takes at least 330 ms
If the acquisition is reading from a buffer, the buffer may fill up
The user interface can only be updated every 330 ms
Things to discuss with the audience:
- Why could this be problematic? Buffer overflow (discussed in the next point), the GUI only updates about 3 times a second, etc.
- What if you were sampling data very fast, say at 1 MHz, and were pulling data off the buffer in chunks of 50 kS per read? You would need a while loop iteration time of 50 ms to keep the buffer from filling up.
- Could anything be done in parallel? Yes; use this to transition to the next slide. We could log and present in parallel as a start.

26 Doing Everything in One Loop Can Cause Problems
While Loop: Acquire (10 ms), Analyze (50 ms), then Log (250 ms) and Present (20 ms) in parallel
One cycle still takes at least 310 ms
If the acquisition is reading from a buffer, the buffer may fill up
The user interface can only be updated every 310 ms
This is a little better: the logging and presenting run in parallel, which cuts the loop time by 20 ms. But it could still be better. If we could communicate between loops, there is no reason all the loops cannot run in parallel. With a multi-core processor, we can split the loops up and put them onto separate cores; LabVIEW does this for us automatically, but it can also be controlled and assigned explicitly. The loops can likewise be split into different threads on the same processor.

27 Inter-Process Communication Ensures Tasks Run Asynchronously and Efficiently
How?
- The loops run independently: Acquire (10 ms), Analyze (50 ms), Log (250 ms), and Present (20 ms), each in its own while loop
- The user interface can be updated every 20 ms
- Acquisition runs every 10 ms, helping to avoid overflowing the buffer
- All while loops run entirely in parallel with each other
THERE ARE HIDDEN SLIDES AT THE END OF THE PRESENTATION ON MULTI-THREADING IF THE AUDIENCE ASKS FOR THIS

28 Producer/Consumer
Best Practices:
- One consumer per queue
- Keep at least one reference to a named queue available at all times
- Consumers can be their own producers
- Do not use variables
Considerations:
- How do you stop all loops?
- What data should the queue send?
This gives you more control over how your application is timed, and gives the user more control over your application. The standard producer/consumer design pattern for this application would put the acquisition processes into two separate (slave) loops, both driven by a master loop that polls the user interface (UI) controls to see if the parameters have changed; the producer loop communicates with the slave loops through queues. This ensures that each acquisition process does not affect the other, and that any delays caused by the user interface (for example, bringing up a dialog) do not delay any iteration of the acquisition processes.
Counterexample: data acquisition and data logging in the same loop. It would be difficult to increase the rate of acquisition without increasing the rate of logging; if they are separate from each other, this is easier to do.
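A minimal Python sketch of the producer/consumer pattern described above, using a FIFO queue and a sentinel value as one answer to the "how do you stop all loops?" consideration. Names are illustrative, not LabVIEW API.

```python
import queue
import threading

data_q = queue.Queue()   # one queue, one consumer (per the best practice above)
SENTINEL = None          # special value telling the consumer to stop

def producer(n):
    for i in range(n):
        data_q.put(i)    # acquisition loop: enqueue and move on immediately
    data_q.put(SENTINEL) # signal the consumer loop to shut down

results = []

def consumer():
    while True:
        item = data_q.get()        # blocks until data is available (lossless)
        if item is SENTINEL:
            break
        results.append(item * 2)   # stands in for logging/processing work

t_prod = threading.Thread(target=producer, args=(5,))
t_cons = threading.Thread(target=consumer)
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()
assert results == [0, 2, 4, 6, 8]  # every element handled once, in order
```

Because the producer never waits on the consumer's processing, a slow consumer (e.g. logging to disk) cannot delay acquisition, which is the whole point of the pattern.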

29 LabVIEW FIFOs
Queues, RT FIFOs, Network Streams, DMAs, User Events
The picture illustrates a common example of a FIFO: the milk in the fridge is typically stocked from the back, which means the first milk carton in is the first one someone will grab from the front. First in, first out. In general, FIFOs are good when you need lossless communication that preserves historical information.

30 Queues
Adding Elements to the Queue: select the data type the queue will hold; reference an existing queue in memory
Dequeueing Elements
Main elements of a queue:
- Obtain Queue: create and name the queue, and define its data type
- Enqueue Element: add data of that type to the queue
- Dequeue Element: remove data from the queue as it becomes available
- Release Queue: destroy the queue when called, eliminating any remaining data
Dequeue will wait for data or time out (the timeout defaults to -1, i.e. wait indefinitely).
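Python's standard `queue` module has the same basic shape, which makes it handy for illustrating these semantics: a destructive read, a blocking dequeue, and a timeout. A small sketch (LabVIEW's default timeout of -1 corresponds to a `get()` that blocks indefinitely):

```python
import queue

q = queue.Queue()                 # analog of Obtain Queue
q.put({"t": 0, "y": 1.23})        # Enqueue Element: add data to the queue
item = q.get(timeout=1.0)         # Dequeue Element: a destructive read
assert item == {"t": 0, "y": 1.23}

# With no timeout argument, get() waits forever for data.
# With a finite timeout and no data, the dequeue times out instead:
try:
    q.get(timeout=0.05)
    timed_out = False
except queue.Empty:
    timed_out = True
assert timed_out                  # the queue is empty, so the read timed out
```

The destructive read is what distinguishes a queue from a shared variable: once an element is dequeued it is gone, so it can be read once and only once.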

31 Demonstration: Introduction to LabVIEW Queues
Recall the opening example that used shared variables for streaming between two parallel loops. Queues are needed in that case because we must ensure that each data point is read once, and only once, to prevent data loss and stale reads. Queues are ideal for Windows/PC communication; they are not deterministic, for reasons discussed in the next few slides.
Why would we do this? Logging or processing data is a perfect example. Queues also work great for sending messages: in most situations every message should be handled, and because queues process one element from the buffer at a time and are not lossy, every message is read.
Scenario: passing data between parallel loops on a Windows machine.
- Transfer model used: Stream/Buffer
- Possible choices for streaming: RT FIFOs, network streams, TCP, STM, shared variables with RT FIFOs or buffering enabled
- Recommended method: Queue / Producer-Consumer
- Why it is the best choice: Windows-based and within the same process, so no network architecture is needed. The loops run in separate threads, independently and in parallel, allowing the acquisition (producer) loop to run at the highest priority and pass data to a less time-critical loop for logging or processing.
Questions to ask the class:
- Do you ever acquire data from a DAQ device and need to run FFTs or intense data processing?
- If yes, do you ever get buffer overflows because you are not reading from the buffer fast enough?
- Do you ever need to pass waveform/image data from one loop to another? Then you need a buffered data transfer without data loss.
KEY TAKE-AWAY: On a non-RT machine such as a Windows PC, use queues whenever you need to stream/buffer data from one loop to another. Further, if you have a message architecture where one loop processes all messages, add the messages to a queue to ensure every message is handled.
Do not use shared variables for buffered data transfer, streaming applications, or (typically) messaging applications.

32 The Anatomy of Dynamic Events
VI gets run on the event
The dynamic events terminal defines the data type
Multiple loops can register for the same event
Data is sent to every registered listener
We want to use user events to broadcast or multicast messages to all the listeners (in this case, plugins). We do this using the tools shown here and on the next slide.

33 Using User Events
LabVIEW API for managing user events
Register user events with listeners
Keep in mind that multiple plugins can register for the same event, and that we can create different events for different plugins.
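The broadcast behavior described here, where multiple plugins register for the same event and each receives the data, can be sketched in Python as a tiny publish/subscribe class. This only models the pattern; the class and method names are invented for illustration, not the LabVIEW user event API.

```python
class UserEvent:
    """Minimal publish/subscribe analog of a LabVIEW user event."""
    def __init__(self):
        self._listeners = []            # the registered 'event structures'

    def register(self, callback):
        self._listeners.append(callback)

    def generate(self, data):
        for cb in self._listeners:      # every registered listener gets the data
            cb(data)

status_event = UserEvent()
received_a, received_b = [], []
status_event.register(received_a.append)   # plugin A registers
status_event.register(received_b.append)   # plugin B registers for the same event
status_event.generate("new setpoint")      # broadcast to all listeners
assert received_a == ["new setpoint"]
assert received_b == ["new setpoint"]
```

Unlike a queue, where one consumer's read removes the element, an event delivers an independent copy of the data to every listener, which is why events suit broadcast messaging to plugins.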

34 Choosing Transfer Types for Inter-process
Message Update Stream Variable (Tag) Windows Queue Shared Variable (Blocking, Buffered) SE Queue Notifier Shared Variable (Blocking) Local/Global Variable FGV Shared Variable DVR RT Same as Windows RT FIFO SE RT FIFO FPGA FIFO (2009) SE FIFO (2009) FIFO
Note that the transfer type is only one of the considerations when picking a messaging system; there are certainly valid use cases for implementing an update with a variable mechanism, or vice versa, based on the other considerations mentioned, such as ease of implementation, scalability, or performance, as well as existing communication protocols. This table can also be used as a decision matrix. For example: I would like to read the latest value of a process variable on my Windows machine in LabVIEW. How would I accomplish this? ---> Shared variable, etc.

35 RT FIFOs vs. Queues
- Queues can handle strings, variants, and other variable-size data types; RT FIFOs cannot
- RT FIFOs are fixed in size; queues can grow as elements are added
- Queues use blocking calls when reading/writing to a shared resource; RT FIFOs do not
- RT FIFOs do not handle error wires, but can produce and propagate errors
Key takeaway: RT FIFOs are more deterministic for the above reasons.

36 What is Determinism? Determinism: An application (or critical piece of an application) that runs on a hard real-time operating system is referred to as deterministic if its timing can be guaranteed within a certain margin of error.

37 LabVIEW Real-Time Hardware Targets
CompactRIO, PXI, Desktop or Industrial PC, Vision Systems, Single-Board RIO
National Instruments provides a variety of real-time platforms to deploy to, depending on your application's needs. These targets range from highly durable form factors for the harshest environments, such as CompactRIO, to high-performance multi-core industrial computing platforms like PXI. All LabVIEW Real-Time targets have access to a wide variety of I/O with ready-to-run driver software, so you can get up and running quickly.

38 RT FIFOs
Write Data to the RT FIFO: select the data type the RT FIFO will hold; reference an existing RT FIFO in memory
Read Data from the RT FIFO
Main elements of an RT FIFO:
- RT FIFO Create: create or reference an RT FIFO by name
- RT FIFO Delete: remove any references to the RT FIFO in memory
- RT FIFO Read / RT FIFO Write: remove/add data from/to the FIFO as it becomes available
RT FIFOs have a timeout and can overwrite or ignore data in certain situations. The Overwrite on Timeout? terminal specifies whether to overwrite the oldest value in the RT FIFO if the FIFO has no available open slot and the timeout in ms input expires. Use the timeout in ms input to specify how long to wait for an open slot before overwriting the oldest value. If the timeout expires and overwrite on timeout is TRUE, the RT FIFO overwrites the oldest value and returns TRUE in the timed out? output.
Unsupported data types:
- RT FIFOs do not support data types of variable size, such as clusters, strings, and variants
- RT FIFOs also do not support multi-dimensional arrays
Read/Write wait for data or time out (the timeout defaults to 0). Write can overwrite data on a timeout condition.
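The overwrite-on-full behavior described above can be modeled with a fixed-capacity buffer. Here is a small Python sketch: the capacity is fixed at creation (like an RT FIFO's pre-allocated size), and a write to a full buffer drops the oldest element and reports that it did so, loosely mirroring the timed out? output. Python offers no determinism; this models only the API shape, and all names are illustrative.

```python
from collections import deque

class FixedFifo:
    """Sketch of RT-FIFO-like semantics: capacity decided up front, and a
    write to a full FIFO overwrites the oldest element."""
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)   # size is fixed at creation time

    def write(self, value):
        overwrote = len(self._buf) == self._buf.maxlen
        self._buf.append(value)              # deque drops the oldest when full
        return overwrote                     # analogous to the 'timed out?' output

    def read(self):
        return self._buf.popleft() if self._buf else None

f = FixedFifo(3)
assert f.write(1) is False       # room available: nothing overwritten
f.write(2)
f.write(3)
assert f.write(4) is True        # FIFO full: the oldest value (1) was overwritten
assert f.read() == 2             # reads come out oldest-first
```

Pre-allocating the buffer is what makes the real RT FIFO deterministic: no memory allocation can occur inside the time-critical loop, unlike a queue that grows at run time.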

39 Demonstration: Inter-process Communication Using RT FIFOs
Recall the previous queue example. The same producer/consumer architecture applies on an RT machine, but instead of a queue we use an RT FIFO for its determinism. The key benefits are outlined in the following slides, along with the differences between a queue and an RT FIFO and when to use one over the other.
Why would we do this? The same reasons as queues: offloading data to be logged or processed. If you need to send data back to the host PC from a less time-critical loop, it is best to use RT FIFOs to pass the data from the time-critical loop to the less time-critical loop.
Scenario: passing data between parallel loops on an RT target, deterministically.
- Transfer model used: Stream/Buffer
- Possible choices for streaming: network streams, TCP, STM, queues, shared variables with RT FIFOs or buffering enabled
- Recommended method: RT FIFOs
- Why it is the best choice: more deterministic. The buffer is pre-allocated before run time, and variable-size data types are not supported, unlike queues, which support them and can grow during run time.
Questions to ask the class:
- Are you running a time-critical deterministic control loop that acquires data and needs to process or log it?
- Do you ever acquire data from a DAQ device and need to run FFTs or intense data processing that cannot slow down your deterministic loop?
- If yes, do you ever get buffer overflows because you are not reading from the buffer fast enough?
- Do you ever need to pass waveform/image data from one loop to another? Then you need a buffered data transfer without data loss.
KEY TAKE-AWAY: On an RT machine such as CompactRIO or PXI RT, use RT FIFOs whenever you need to stream/buffer data from one loop to another. Further, if you have a message architecture where one loop processes all messages, add the messages to a FIFO to ensure every message is handled.
Do not use shared variables for buffered data transfer, streaming applications, or (typically) messaging applications.

40 Defining Inter-target Communication
PC, RT, FPGA, mobile device
Offload data logging and data processing to another target
Multi-target/device applications
Network based
Inter-target: communication between multiple physical targets, often over a network layer.
Most of these transfer mechanisms also apply to inter-application communication: communication between two running executables or application contexts within the same operating system. There are also other methods, such as VI Server, functional globals, and shared global variables, but they will not be discussed in this presentation.

41 Common Network Transfer Policies
“Latest Value” or “Network Publishing”: making the current value of a data item available on the network to one or many clients
Examples: I/O variables publishing to an HMI for monitoring; logging temperature values on a remote PC
Values persist until overwritten by a new value
Lossy – the client only cares about the latest value
1-1, 1-N

42 Latest Value Communication
API | Type | Supported Configurations | 3rd-Party APIs?
- Shared Variable*: LabVIEW feature; 1:1, 1:N, N:1; Measurement Studio, CVI
- CCC (CVT): reference architecture, publishes the CVT; 1:1; yes (TCP/IP)
- UDP: LabVIEW primitives; yes
*Network buffering should be disabled

43 Using Shared Variables Effectively
Programming best practices:
- Initialize shared variables
- Serialize shared variable execution
- Avoid reading stale shared variable data
Initializing shared variables helps ensure the proper value is present, and that we are not reading the value from the last time the VI was called. Serializing the reads and writes of the variables helps avoid race conditions, at least within the same execution structure or process, by forcing order of execution. Stale data is difficult to avoid and guarantee against, but serializing helps if it is guaranteed that a new value will be written for each iteration of the loop.

44 Common Network Transfer Policies
“Streaming”: sending a lossless stream of information
Examples: offloading waveform data from CompactRIO to a remote PC for intensive processing; sending waveform data over the network for remote storage
Values don't persist (reads are destructive)
Lossless – the client must receive all of the data
High throughput required (latency not important)
1-1

45 Streaming Lossless Data
API | Type | Supported Configurations | 3rd-Party APIs?
- Network Streams (NEW!): LabVIEW feature; 1:1; not this year
- STM: reference architecture; yes (TCP/IP)
What about the shared variable with buffering enabled? NO!

46 Pitfalls of Streaming with Variables
- Lack of flow control can result in data loss
- Data may be lost if the TCP/IP connection is dropped
- Data loss does not result in an error, only a warning
(Diagram: a server on Machine 1 publishing values 1–7 to client readers and writers on Machine 2, with some values dropped and others repeated in transit)
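The difference between a lossy latest-value write and a flow-controlled stream can be shown in a few lines of Python. This sketch is purely illustrative: a single variable models the shared-variable slot, and a bounded queue models a stream with flow control.

```python
import queue

# A 'shared variable' modeled as a single latest-value slot: each write
# overwrites the previous one, so a reader that falls behind silently
# misses intermediate values (lossy, and no error is raised).
latest = None
for i in range(5):
    latest = i               # the writer runs ahead of any reader
assert latest == 4           # values 0 through 3 are simply gone

# A stream modeled as a bounded queue with blocking writes: the writer
# cannot outrun the reader past the buffer size, so nothing is dropped.
stream = queue.Queue(maxsize=2)
received = []
for i in range(5):
    stream.put(i)                   # would block if the reader fell behind
    received.append(stream.get())   # destructive read keeps the pair in step
assert received == [0, 1, 2, 3, 4]  # flow control preserved every value
```

The blocking `put` is the flow control: it pushes back on the producer instead of discarding data, which is exactly what variables lack.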

47 Network Streams NEW! Machine A Machine B
Make it clear on this slide that these are two different machines

48 Network Streams in Action
(Diagram: a writer endpoint on Machine 1 streaming elements 1–9 to a reader endpoint on Machine 2, with flow-control and acknowledgement messages ensuring no element is lost)
Use streams!

49 Demonstration: Inter-target Communication Using Network Streams
Streaming data from an RT target back to the host for data logging.
- Discuss what transfer model is used: streaming
- Present all the possible choices
- Go over the recommended method: network streams
- Reasons why it is the best choice
- Discuss properties of the method
- Demonstrate how it works
- Discuss STM: open protocol vs. closed protocol
Questions to ask the class:
- Do you ever acquire data from a DAQ device and need to run FFTs or intense data processing?
- If yes, do you ever get buffer overflows because you are not reading from the buffer fast enough?
- Do you ever need to pass waveform/image data from one loop to another? Then you need a buffered data transfer without data loss.
Do not use shared variables for buffered data transfer, streaming applications, or (typically) messaging applications.

50 Common Network Transfer Policies
“Command” or “Message”: requesting an action from a worker
Examples: requesting an autonomous vehicle to move to a given position; telling a process controller to begin its recipe
Values don't persist (reads are destructive)
Lossless – the client must receive every command
Low latency – deliver the command as fast as possible
1-1, N-1, 1-N

51 Network Command Mechanisms
API | Type | Supported Configurations | 3rd-Party APIs?
- Network Streams: LabVIEW feature; 1:1; no
- Shared Variable: 1:1, 1:N, N:1; Measurement Studio, CVI
- AMC: reference architecture; 1:N; yes (UDP)
- Web Services: web standard (new VIs in 2010); yes
This makes network streams the recommended mechanism; note that shared variables and AMC can also be used.

52 Network Streams: Writing Elements to the Stream
Slide callouts: select the data type the stream will hold; reference to the reader URL. Reading Elements from the Stream.

Main elements of a Network Stream:
- Create Reader Endpoint – creates the reader endpoint of a network stream. This function will not run unless you also create a writer endpoint.
- Create Writer Endpoint – creates the writer endpoint of a network stream. This function will not run unless you also create a reader endpoint.
  NOTE: You must wire either the writer url input of Create Network Stream Reader Endpoint or the reader url input of Create Network Stream Writer Endpoint. If you wire neither of these inputs, your endpoints will not create a network stream.
- Write (1 or N elements) – writes a single element or an array of elements to a network stream.
- Read (1 or N elements) – reads a single element or an array of elements from a network stream.
- Destroy Stream Endpoint – destroys the specified endpoint. To completely destroy a stream and free the memory allocated to it, you must destroy both the reader and writer endpoints. To ensure that you do not lose any data when you destroy a stream, call Flush Stream on the writer endpoint before destroying it.
- Flush Stream – transfers all data to the reader endpoint before data flow resumes. You can call this function from the writer endpoint only. Use it before Destroy Endpoint to ensure that the writer endpoint's buffer is empty before you destroy it.
  NOTE: Flush has two wait conditions:
  - All Elements Read from Stream (default) – data flow resumes when the Network Streams Engine has transferred all data to the reader endpoint and the reader endpoint has finished reading that data.
  - All Elements Available for Reading – data flow resumes when the Network Streams Engine has transferred all data to the reader endpoint, but before the reader endpoint finishes reading that data.

Further, Flush is used in message communication to force a message to be delivered even when the data transfer packet is not full, as shown in demonstration V. Read waits for data or times out (the timeout defaults to -1, i.e., wait indefinitely).
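The endpoint lifecycle above (create, write/read, flush, destroy) can be illustrated with an in-process analogy. This is not the LabVIEW API: it is a minimal Python sketch using a bounded queue to mimic the blocking write/read and flush semantics just described, and all names in it are invented for illustration.

```python
import queue
import time

class StreamEndpointPair:
    """Toy analogy of a network stream: one writer, one reader,
    a bounded buffer, and blocking reads/writes with timeouts."""

    def __init__(self, buffer_size=10):
        # Bounded buffer, like a stream endpoint's buffer
        self._buf = queue.Queue(maxsize=buffer_size)

    def write(self, element, timeout=None):
        # Blocks when the buffer is full, analogous to Write Single Element
        self._buf.put(element, timeout=timeout)

    def read(self, timeout=None):
        # Blocks until data arrives; timeout=None mirrors the -1 default
        return self._buf.get(timeout=timeout)

    def flush(self):
        # The real Flush waits until the reader has consumed everything;
        # here we simply wait for the buffer to drain.
        while not self._buf.empty():
            time.sleep(0.001)

stream = StreamEndpointPair(buffer_size=4)
for i in range(3):
    stream.write(i)
elements = [stream.read() for _ in range(3)]
stream.flush()  # buffer already drained, so this returns immediately
print(elements)  # [0, 1, 2]
```

The single class stands in for both endpoints; in LabVIEW the reader and writer are separate endpoints connected over the network.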

53 Network Streams
- Lossless transfer, even through connection loss*
- Can be tuned for high throughput (streaming) or low latency (messaging)
- Unidirectional, point-to-point, LabVIEW only
- Not deterministic
Diagram: Acquire/Control → Log Data/Process

Network streams are an easy-to-configure, tightly integrated, and dynamic communication method for transferring data from one application to another with throughput and latency characteristics comparable with TCP. However, unlike TCP, network streams directly support transmission of arbitrary data types without the need to first flatten and unflatten the data into an intermediate type. Network streams were designed and optimized for lossless, high-throughput data communication. They use a one-way, point-to-point buffered communication model to transmit data between applications. You can accomplish bidirectional communication by using two streams, where each computer contains a reader and a writer paired to a writer and reader on the opposite computer.

What happens in the event of a connection loss? In any given network stream, one endpoint is designated as the active endpoint and the other as the passive endpoint. The active endpoint originates the initial network connection and is responsible for actively trying to reestablish the connection whenever it detects that the network has become disconnected. Conversely, the passive endpoint listens and waits to receive a connection request. In client/server terminology, the active endpoint is the client and the passive endpoint is the server. The create endpoint function designates which endpoint is active: the endpoint that specifies the URL of the remote endpoint it wants to connect to is the active endpoint, and the endpoint that does not specify a remote URL is the passive endpoint.
In the event of a disconnection, the active endpoint will continuously try to reestablish communication with the passive endpoint in the background.  This background process will continue until the connection is repaired or until the endpoint is destroyed.  While in the disconnected state, writes to the writer endpoint will continue to succeed until the writer is full, and reads from the reader endpoint will continue to succeed until the reader is empty.  Once the writer is full or the reader is empty, read and write calls will block and return timeout indicators as appropriate.  However, no errors will be returned from the read or write call itself. 

54 Sending Commands with Streams

55 Using Shared Variables Effectively
- Limit shared variable usage
  - Good for small applications with low channel counts
  - Single-process (local) and networked
- Combine identically typed channels into an array
- Utilize RT FIFO functions
  - Scalable architecture for large applications (local only)
  - Polling and blocking modes (high performance or low CPU, respectively)

The good news is that shared variables are very easy to use and very easy to get started with. However, depending on the situation, there are more suitable data transfer methods, as discussed in this presentation, that should be used where possible.

56 Using Shared Variables Effectively
- Avoid unnecessary buffering:
  - Buffering does not guarantee lossless data transfer
  - It consumes more CPU and memory
  - It is on by default with network-published shared variables
- Select an appropriate host for shared variables:
  - RT hosting simplifies multiple-host access and increases stability
  - RT hosting also increases CPU and memory overhead
- Use shared variables to retrieve or monitor the latest value
- Use them in cases where latency is not a priority

57 When to Use Network Streams
- No data can be lost during transfer between the RT target and the host computer
- Transfer data from the RT target to the host computer for logging to file
- Transfer data from the RT target to the host computer for processing and analysis that requires more memory than the RT target has available

So why not use shared variables? Look back at the reasons not to use shared variables for buffering.

Determining when to use network streams instead of shared variables: use shared variables to publish the latest value in a data set to many computers; conversely, use network streams to log every point of data on one computer. For example, assume you are using an accelerometer to detect the vibrations of a pump that is re-pressurizing natural gas in a pipeline. You are processing the vibration data on a CompactRIO target to monitor for bearing faults and ensure that the pump does not fail. However, the CompactRIO target does not have enough memory to analyze the data, so you must send it to a desktop computer that has enough memory to store, analyze, and display it. Because shared variables are optimized for publishing only the latest value, they could miss a critical data point. Network streams, by contrast, stream every point of data to the desktop computer so you can monitor the condition of the pump.

Note: Network streams can induce jitter in real-time (RT), time-critical loops. Therefore, if you want to stream data from a time-critical loop, National Instruments recommends that you first share the data with a lower-priority loop, and then use network streams to stream the data to another application.

When not to use network streams:
- In general, network streams are not suitable for use within a control algorithm. Ethernet is typically not a reliable communication bus for control applications, and reads and writes to a stream are not deterministic. Network streams may still be used within a control application to communicate with a remote HMI, but they should not be used within any time-critical loops.
- Network streams were designed for lossless, point-to-point communication. While this works well for data streaming and command-based applications, it makes establishing arbitrary N:1 or many-to-many communication paths very difficult.
- Finally, if the application needs the highest network throughput possible, it may be desirable to use the TCP API instead of network streams.
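The distinction drawn above — shared variables publish the latest value, while network streams preserve every point — can be made concrete with two toy Python buffers. The class names are invented for illustration and are not NI APIs; this only models the two delivery semantics.

```python
import collections

class LatestValue:
    """Shared-variable-like: each write overwrites the previous value."""
    def __init__(self):
        self._value = None
    def write(self, v):
        self._value = v
    def read(self):
        return self._value

class LosslessStream:
    """Network-stream-like: every written element is retained until read."""
    def __init__(self):
        self._q = collections.deque()
    def write(self, v):
        self._q.append(v)
    def read_all(self):
        items = list(self._q)
        self._q.clear()
        return items

# A fast producer writes five vibration samples before the reader catches up:
sv, ns = LatestValue(), LosslessStream()
for sample in [10, 20, 30, 40, 50]:
    sv.write(sample)
    ns.write(sample)

latest = sv.read()        # only the newest point survives
everything = ns.read_all()  # every point survives
print(latest)       # 50
print(everything)   # [10, 20, 30, 40, 50]
```

If the "critical data point" in the pump example were sample 30, the latest-value reader would never see it, while the stream reader would.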

58 DMA (Direct Memory Access)
- Use for host-to-target communication (e.g., RT to FPGA)
- Available on newer FPGA targets
- Useful for transferring chunks of data
- High latency

59 Demonstration: Introduction to Direct Memory Access

Demo outline: streaming data from an RT target back to the host for data logging.
- Discuss what transfer model is used (streaming)
- Present all the possible choices
- Go over the recommended method (network streams) and the reasons why it is the best choice
- Discuss the properties of the method and demonstrate how it works
- Discuss STM: open protocol vs. closed protocol

Recall the opening example with shared variables for streaming: transferring data between two parallel loops. Queues are needed in this instance because we must ensure that each data point is read once, and only once, to prevent data loss and stale reads. Queues are ideal for Windows/PC communication; they are not deterministic, for a few reasons discussed in the next few slides. Why would we do this? Logging or processing data is a perfect example. Queues also work well for sending messages: in most situations, the user would like every message to be handled. Queues process one element from the buffer at a time and are not lossy, which ensures that every message is read.

Passing data between parallel loops on a Windows machine:
- Transfer model used: stream/buffer
- Possible choices for streaming: RT FIFOs, network streams, TCP, STM, shared variables with RT FIFOs or buffers
- Recommended method: queue (producer/consumer)
- Why it is the best choice: Windows-based; same process, so no need for a network architecture; loops run in separate threads, independently, in parallel; the acquisition (producer) loop can run with highest priority and pass data to a less time-critical loop for logging or processing

Questions to ask the class:
- Do you ever acquire data from a DAQ device and need to run FFTs or intense data processing? If so, do you ever get buffer overflows because you are not reading from the buffer fast enough?
- Do you ever need to pass waveform or image data from one loop to another? Then you need buffered data transfer without data loss.

KEY TAKEAWAY: On a non-RT machine such as a Windows PC, use queues whenever you need to stream or buffer data from one loop to another. Further, if you have a message architecture where one loop processes all messages, add the messages to a queue to ensure every message is handled. Do not (typically) use shared variables for buffered data transfer, streaming applications, or messaging applications.
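The takeaway above — use a queue to move data losslessly from a producer loop to a consumer loop — is the classic producer/consumer pattern. A minimal Python sketch of the same idea (standing in for LabVIEW's queue VIs, not reproducing them):

```python
import queue
import threading

def producer(q, n):
    # Acquisition loop: enqueue every "sample" so nothing is lost
    for i in range(n):
        q.put(i)
    q.put(None)  # sentinel: tell the consumer to stop

def consumer(q, handled):
    # Logging/processing loop: read each element exactly once
    while True:
        item = q.get()
        if item is None:
            break
        handled.append(item)

q = queue.Queue()
handled = []
t1 = threading.Thread(target=producer, args=(q, 100))
t2 = threading.Thread(target=consumer, args=(q, handled))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(handled))  # 100 — every element handled once, in order
```

Because the queue is unbounded here, the producer never blocks; in practice you would size the queue (or monitor its depth) so a slow consumer is detected before memory grows without limit.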

60 Target to Host Transfer – Continuous
Total samples to read = ??? · Read size = 4 · RT buffer size ≈ 5× read size

This slide illustrates how a continuous, buffered data transfer should take place. The size of the data chunks to read, the sample rate, and the buffer size are all key factors. A good starting point is a read size equal to 1/10th of the sample rate, and a buffer five times as large as the read chunk size. For example: Fs = 100 Hz, read size = 10 samples, buffer size = 50 samples. The slide shows an FPGA-to-RT buffer, but this concept applies to any buffered transfer. The following slide demonstrates a buffer overflowing.

Diagram labels: Data element · DMA Engine · FPGA FIFO · RT Buffer
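The sizing rule of thumb above (read size ≈ 1/10 of the sample rate, buffer ≈ 5× the read size) is simple arithmetic; a small helper makes the worked example concrete. The function name is mine, and these ratios are only the starting point the notes suggest, to be tuned per application.

```python
def suggest_buffer_sizes(sample_rate_hz):
    """Starting-point sizing for a continuous buffered transfer."""
    read_size = max(1, sample_rate_hz // 10)  # read ~10 times per second
    buffer_size = 5 * read_size               # headroom for a slow reader
    return read_size, buffer_size

print(suggest_buffer_sizes(100))   # (10, 50), matching the notes' example
print(suggest_buffer_sizes(1000))  # (100, 500)
```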

61 Continuous Transfer – Buffer Overflow
Total samples to read = ??? · Read size = 4 · RT buffer size ≈ 5× read size

This slide shows what happens when the settings for a buffered data transfer are not ideal: the read does not occur in time, and the buffer fills up. The slide shows an FPGA-to-RT buffer, but the concept applies to any buffered transfer.

Key takeaway: buffering is ideal for continuous transfer of waveforms, images, movies, etc. However, it is important to consider the sample rate, read size, and buffer size to ensure the buffer does not fill up, as data will then be lost. This applies to inter-target applications as well, as we discuss next.

Diagram labels: Data element · DMA Engine · FPGA FIFO · RT Data Buffer
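The failure mode this slide describes — a reader that falls behind until the bounded buffer fills and further samples are dropped — can be reproduced with a fixed-size queue. This is a generic Python analogy, not the DMA engine itself; the sizes are illustrative.

```python
import queue

buf = queue.Queue(maxsize=4)  # RT buffer sized for 4 elements
lost = 0

# The producer keeps acquiring while the reader never runs:
for sample in range(10):
    try:
        buf.put_nowait(sample)
    except queue.Full:
        lost += 1  # overflow: this sample is dropped

print(buf.qsize(), lost)  # 4 elements buffered, 6 samples lost
```

A larger buffer only delays the overflow; the real fix is reading fast enough (or in large enough chunks) to keep up with the acquisition rate.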

62 LabVIEW Web Services
Application architecture: a client sends a request to the LabVIEW web service; the LabVIEW application executes and returns a response.

Sending requests via URL:
- Physical location of the server
- Name of the web service
- Mapping to a VI
- Terminal inputs (optional)
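A request to a LabVIEW web service is an HTTP GET whose URL encodes the four pieces listed above. A sketch of assembling such a URL in Python — the host, service name, and VI mapping here are invented placeholders, not a real deployment:

```python
from urllib.parse import urlencode

def build_request_url(host, service, vi_mapping, inputs=None):
    """Assemble server location + web service name + VI mapping +
    optional terminal inputs into a request URL."""
    url = f"http://{host}/{service}/{vi_mapping}"
    if inputs:
        url += "?" + urlencode(inputs)  # terminal inputs as query parameters
    return url

# Hypothetical service "MyService" exposing a VI mapped to "Add":
url = build_request_url("localhost:8080", "MyService", "Add", {"x": 2, "y": 3})
print(url)  # http://localhost:8080/MyService/Add?x=2&y=3
```

The exact URL layout of a deployed service depends on its configuration; the point is that any HTTP-capable client can form and send this request.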

63 Web Services in LabVIEW
- Web server on Windows and Real-Time
- Custom web clients
- No run-time engine needed
- Standard HTTP protocol
- Firewall friendly

Since LabVIEW 8.6 you can deploy any VI as a web service. This means you can invoke it over the web from any web-capable device, even one that doesn't support LabVIEW or the LabVIEW run-time engine (RTE).

Now we also have Web UI Builder. To provide engineers and scientists with the ability to develop thin-client applications, NI has introduced LabVIEW Web UI Builder: a standalone, web-based editor that enables LabVIEW users to apply their existing knowledge of graphical programming to create lightweight, web-based applications accessible via a web browser. Although LabVIEW Web UI Builder is a separate product from LabVIEW, it is intended to complement LabVIEW and extend the capabilities of users familiar with graphical programming (similar to a LabVIEW add-on module).

This feature was made in response to a common customer request: the ability to communicate with LabVIEW using standard web services from any device, even one that doesn't support the LabVIEW run-time engine. Customers also need to communicate using protocols that are accepted by IT departments and don't raise flags because of custom data packets on non-standard ports.

Point to drive home: with web services, you can interact with LabVIEW applications without the RTE, from any device that can communicate over the web. The diagram shows an iPhone communicating with a CompactRIO, which would not be possible if the RTE or development environment were required. Web UI Builder lets the user create the thin client in an environment similar to LabVIEW that plugs easily into an NI web service.

Additional point: with this feature we invoke a VI by "sending a request." What does this mean? We send a request any time we enter a URL or click a link. Our request is sent to a remote server, which then executes some process and replies with the appropriate response. A good analogy is configuring a Linksys router: much as we configure and monitor the status of the router through its web pages, we can use web services to have a similar experience with our hardware products.

Additional details (what happens on a request):
- If the URL contains a domain name, the browser first connects to a domain name server and retrieves the corresponding IP address of the web server.
- The browser connects to the web server and sends an HTTP request (via the protocol stack) for the desired page.
- The web server receives the request and checks for the desired page. If the page exists, the server sends it; if not, it sends an HTTP 404 error ("Page Not Found," as anyone who has surfed the web probably knows).
- The browser receives the page and the connection is closed. The browser then parses the page for other elements it needs (images, applets, etc.) and makes additional connections and HTTP requests for each.
- When the browser has finished loading all images, applets, etc., the page is completely loaded in the browser window.

Diagram labels: Any Client · Thin Client
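The request/response cycle described here (client sends an HTTP request; server executes and replies, or returns 404) can be exercised end to end with Python's standard library. This uses a generic HTTP server as a stand-in for the LabVIEW web server; the `/status` endpoint is an invented example.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = b"OK"
            self.send_response(200)  # page found: send it
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)     # page not found

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/status") as resp:
    status, body = resp.status, resp.read()
print(status, body)  # 200 b'OK'

server.shutdown()
```

A browser, a thin client, or any HTTP library can play the client role here, which is exactly why no run-time engine is needed on the client side.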

64 Demonstration Basic Web Services

65 ni.com/uibuilder Visit ni.com/uibuilder to get started using the product today.

66 Demonstration Thin-Client Web Interfaces

67 Early Access Release Details
- Anyone can evaluate for free
- Fully functional except for "Build and Deploy"
- License for "Build and Deploy" is $1,499 per user
- License is sold as a one-year software lease
- Not part of Developer Suite or a partner lease

Web UI Builder is in Early Access Release. Anyone can access the editor and create applications that run inside the editor for free. It costs $1,499 USD per developer to package source code into a standalone web application that no longer requires the editor; once a developer pays the $1,499, there are no additional costs and the user can create an unlimited number of applications. The feature is sold as a software lease, which must be renewed after one year at 20% of the original price, or about $300.

68 Inter-Target Communication Options
- TCP/IP and UDP – define low-level communication protocols to optimize throughput and latency
- Shared Variables – access the latest value of a network-published variable
- Network Streams – point-to-point streaming in LabVIEW with high throughput and minimal coding
- Web UI Builder – create a thin client to communicate with a LabVIEW web service
- DMAs – direct memory access between two different components of a system

These are some of the commonly used options. When considering them, it is important to know when it is appropriate to use one over another, and the inherent advantages and trade-offs each feature provides. The TCP and UDP VIs are available in LabVIEW for users interested in developing networked applications at a lower level; they are useful for quickly establishing extremely simple data communication, but require expertise to develop any advanced functionality. Shared variables were introduced in LabVIEW 8.0 to make it easy for LabVIEW developers to network distributed LabVIEW applications and communicate with hardware such as NI CompactRIO and NI Compact FieldPoint; these variables can be dragged and dropped from the project with minimal configuration or development.
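The list above contrasts raw TCP (where you must flatten data to bytes yourself) with network streams (which accept arbitrary LabVIEW types directly). The flattening burden looks like this in Python using struct — a generic big-endian, length-prefixed encoding of my own choosing, not LabVIEW's flatten-to-string format:

```python
import struct

def flatten(samples):
    # Length-prefixed array of float64 values, big-endian byte order
    return struct.pack(f">I{len(samples)}d", len(samples), *samples)

def unflatten(payload):
    (n,) = struct.unpack_from(">I", payload)
    return list(struct.unpack_from(f">{n}d", payload, offset=4))

# Round-trip the bytes you would send over a raw TCP connection:
wire = flatten([1.5, -2.0, 3.25])
print(len(wire))         # 28 bytes: 4-byte count + 3 × 8-byte doubles
print(unflatten(wire))   # [1.5, -2.0, 3.25]
```

Every change to the data type means revisiting code like this on both ends of the connection, which is exactly the bookkeeping network streams remove.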

69 NI Certifications Align with Training

Career path: Developer → Senior Developer → Software Architect / Project Manager
Certification path: Certified LabVIEW Associate Developer exam → Certified LabVIEW Developer exam → Certified LabVIEW Architect exam
Training path: LabVIEW Core 1, LabVIEW Core 2 → LabVIEW Core 3 → LabVIEW OOP, Advanced Architecture, Managing Software Engineering in LabVIEW (NEW)

There is a close link between training and certification. Certification is a quantifiable way of ensuring individuals have developed the skills needed to create applications. NI also offers certifications for LabWindows/CVI and TestStand.

"Certification is an absolute must for anyone serious about calling himself a LabVIEW expert... At our organization, we require that every LabVIEW developer be on a professional path to become a Certified LabVIEW Architect." – President, JKI Software, Inc.

70 Download Examples and Slides
ni.com/largeapps
- Software Engineering Tools
- Development Practices
- LargeApp Community

This is the questions slide; keep it up to encourage everyone to write down the URL at the top of the screen.

