PLC Shift MQTT Implementation Details
Paraj Kayande


Updated: Jul 16, 2023

In this blog post we take a closer look at how PLC Shift uses MQTT and Sparkplug B. We'll start with some MQTT and Sparkplug B basics and then look at how the node and device lifecycle work. After that, we'll discuss publishing and subscribing to tag data. We'll finish up by looking at how PLC Shift uses the DRECORD functionality in the Cirrus Link Recorder module to export record based data, like flow computer history.


MQTT

MQTT is an open messaging protocol that is lightweight and flexible. By "lightweight" we mean that it's easily implemented and works on low cost and low performance devices, and by "flexible" we mean that because the MQTT protocol itself has no strict requirements for the format of the payload, it's suitable for moving many different types of data. MQTT messages always have a topic, which is a string, and a payload, which can be anything from a string, to JSON, to binary data.


MQTT is fundamentally a publish/subscribe protocol. This means that clients always publish messages to a broker, and have no idea about subscribers that may be listening. In the same way, clients only ever subscribe to messages from the broker, and do not communicate with publishing clients directly. A broker is required, and using a broker allows for publishers and subscribers to be decoupled and have no knowledge of one another.
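The decoupling described above can be sketched with a toy in-memory broker. This is purely illustrative (a real MQTT broker like Mosquitto speaks the full protocol over TCP), but it shows how publishers and subscribers only ever talk to the broker, never to each other:

```python
# Toy illustration of MQTT's publish/subscribe decoupling.
# Subscribers register callbacks with the broker; publishers only
# ever hand messages to the broker. Neither side knows the other.

class ToyBroker:
    def __init__(self):
        self._subs = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # The publisher has no idea who (if anyone) receives this.
        for cb in self._subs.get(topic, []):
            cb(topic, payload)

broker = ToyBroker()
received = []
broker.subscribe("plant/temp", lambda t, p: received.append((t, p)))
broker.publish("plant/temp", b"21.5")   # payload can be any bytes
print(received)
```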


MQTT Basic System Architecture

Because the payload can be anything, MQTT is extremely flexible, and it's easy to get started with your own messaging scheme. However, this flexibility means that with just MQTT, there is really no notion of interoperability between products that weren't designed to use the same messaging scheme. This is where Sparkplug B comes in. Sparkplug B is a binary serialization mechanism that formats messages in a known way. This allows products that use MQTT and the Sparkplug B serialization format to communicate with one another.


Sparkplug B Payloads

Sparkplug B is built on top of Google's extremely popular Protocol Buffers (protobufs) technology. Because of its popularity, serialization and deserialization libraries for protobufs are available in pretty much any programming language imaginable. PLC Shift apps are written in C#, so we use Google's Grpc.Tools library, which compiles a .proto file into C# code. The .proto file describes the structure of the serialized data. The .proto file for Sparkplug B is available on GitHub.


The proto compiler takes a message, like the Metrics message type below, and turns it into a C# class definition. We populate an instance of the Metrics class with the data that we want to send, and then the protobufs library turns that instance into a sequence of bytes that we can transmit over TCP/IP, MQTT, or some other transport mechanism. On the receiving side, the protobufs library turns the transmitted bytes back into data. The receiver can be implemented in any other programming language, as long as that language has protobufs support.


To be clear, it's not possible to send a Metrics message directly. This is merely an example to illustrate how protobufs work. All Sparkplug B messages are a serialized "Payload" message; this message type can contain multiple Metrics as well as other nested data types.
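To make the serialize/deserialize round trip a little more concrete, here is a tiny sketch of the base-128 "varint" encoding that protobufs use for integer fields. This is hand-rolled purely for illustration; real applications always use the classes generated from the .proto file:

```python
# Sketch of protobuf's base-128 varint encoding, the building block
# the generated (de)serialization code uses for integer fields.
# Illustrative only -- real apps use the generated classes, never
# hand-rolled encoding like this.

def encode_varint(value: int) -> bytes:
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
    return result

# 300 encodes to two bytes, as in the protobuf documentation
assert encode_varint(300) == b"\xac\x02"
assert decode_varint(encode_varint(300)) == 300
```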


Proto Definition and Generated Code

Sparkplug B Node and Device Lifecycle

Sparkplug B is not just a serialization and deserialization mechanism, however. It also brings some statefulness to MQTT communications. Sparkplug B has the concepts of nodes and devices. A node is analogous to an edge computer, and a device is analogous to a physical sensor, RTU, or some other standalone object (I'm trying hard not to use the word "device" here!). What's important to understand is that devices belong to nodes, and that nodes are the top-level object in the hierarchy. One node owns multiple devices.


In PLC Shift, a device, which owns apps, is equivalent to a Sparkplug B node, and the apps under that device are equivalent to Sparkplug B devices. The naming here is unfortunate, but there are only two levels of hierarchy, so the complexity is manageable.



PLC Shift and Sparkplug B Naming

Complete information on the lifecycle of nodes and devices can be found in the Official Sparkplug Specification. What follows is a summary of the behavior, and some details are omitted.


When a node first comes online and connects to a broker, it sends an NBIRTH message. This message contains a mix of static and variable information about all the node-level metrics. Static information includes the metric's name, its 64-bit alias, units, and range. Variable information includes the metric's value, its quality, and anything else that will change while the node is connected. Changing the static information requires a new NBIRTH to be issued.


The Sparkplug B aware host keeps a map of metric names to aliases, and further updates of metrics only require the alias. This means that we don't need to send the metric name with every transmission, which is good, but it also makes it a bit harder to get those metric aliases as a subscriber unless you get the BIRTH message or use some other mechanism. Aliases for all metrics must be unique across all metrics that the node owns, so this constraint also applies to device level metrics. No two device level metrics that are owned by the same node can have the same alias.
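The host-side bookkeeping described above can be sketched as follows. The data structures here are hypothetical, not PLC Shift's or any host's actual code; the point is that the BIRTH message carries both names and aliases, while later DATA messages carry only the alias:

```python
# Sketch of the host-side name/alias bookkeeping: the BIRTH message
# carries both names and aliases, later DATA messages carry only the
# alias. Hypothetical structures, not any real host's code.

class AliasMap:
    def __init__(self):
        self._by_alias = {}

    def learn_from_birth(self, metrics):
        # each metric here is a (name, alias, value) tuple
        for name, alias, _value in metrics:
            self._by_alias[alias] = name

    def resolve(self, alias):
        return self._by_alias[alias]

host = AliasMap()
host.learn_from_birth([("Flow.Rate", 0x0000000100000007, 12.3)])
# A later DDATA carries only the alias; the host maps it back to a name.
print(host.resolve(0x0000000100000007))
```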


For PLC Shift specifically, where each device (aka Sparkplug B node) can own multiples of the same app, and thus the same tag IDs at the app level, we use a mechanism whereby the top 32 bits of the metric alias come from a name that is unique per app, and the bottom 32 bits come from the specific parameter's ID. Each parameter in a PLC Shift app has an ID that is unique, but fixed for the app. Each app parameter in PLC Shift that has its "Publish to MQTT" option selected is mapped to a Sparkplug B metric.
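The alias scheme above can be sketched with simple bit arithmetic. Note that the top 32 bits are derived from the per-app unique name; how that name is reduced to a 32-bit value (e.g. a hash) is not shown here, so the `app_id` input is an assumption:

```python
# Sketch of the alias scheme: top 32 bits identify the app, bottom
# 32 bits identify the parameter within the app. The function names
# are hypothetical, and how app_id is derived from the app's unique
# name is not shown.

def make_alias(app_id: int, param_id: int) -> int:
    assert 0 <= app_id < 2**32 and 0 <= param_id < 2**32
    return (app_id << 32) | param_id

def split_alias(alias: int) -> tuple[int, int]:
    return alias >> 32, alias & 0xFFFFFFFF

alias = make_alias(app_id=7, param_id=42)
assert split_alias(alias) == (7, 42)
# Two apps with the same parameter ID still get distinct aliases:
assert make_alias(7, 42) != make_alias(8, 42)
```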


After a node issues an NBIRTH message, the node will issue DBIRTH messages on behalf of all the devices that it owns. A DBIRTH message is very similar to an NBIRTH message and contains a list of metrics, with each metric having a mix of static and variable information. Just as with a node, changes to the static information will require a new DBIRTH message to be issued. For PLC Shift, the device will issue DBIRTHs for all the apps that have tags that are published via MQTT.


To update node metric values, a node sends NDATA messages. To update device metric values, the node sends DDATA messages on behalf of the connected devices.


When a device is ready to go offline, it issues a DDEATH message. This allows the Sparkplug B host to release resources owned by that device and otherwise clean up. When a node is ready to go offline, it first issues DDEATH messages for all the connected devices, and then issues its own NDEATH message. Devices and nodes may not always go offline cleanly, however, such as when there's a communications outage or power failure. In this case, the MQTT Last Will and Testament feature can be used to indicate that a node and all its devices went offline in an ungraceful fashion.


For PLC Shift, when a new app is added to the device, a DBIRTH is issued for just that app, assuming that the app has some data that is published via MQTT. When an app is deleted, the device issues a DDEATH for the app. When the app is updated, and some static information changes, like the units for a parameter, or the number of parameters that are published, a DDEATH is issued followed by a DBIRTH with the updated information.
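The per-app lifecycle rules above boil down to a small decision table, sketched here (illustrative only, not the PLC Shift runtime):

```python
# Sketch of the messages emitted for app lifecycle events, per the
# rules described above. Purely illustrative.

def messages_for(event):
    if event == "app_added":
        return ["DBIRTH"]            # only if the app publishes via MQTT
    if event == "app_deleted":
        return ["DDEATH"]
    if event == "app_static_change": # e.g. units or published parameters changed
        return ["DDEATH", "DBIRTH"]  # re-birth with the updated information
    return []

assert messages_for("app_static_change") == ["DDEATH", "DBIRTH"]
```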


Sparkplug B Topics

Publishing

In the previous section, we discussed the Sparkplug B node and device lifecycle. When we say that a node sends an NBIRTH message, or a DDATA message, what we're really saying is that the PLC Shift runtime sends a Sparkplug B encoded payload to a specific topic. Topics are strings that the broker uses to determine what type of message it's receiving and thus what to do with it.


From the Sparkplug B specification, NBIRTH messages should be published on a topic that looks like:

namespace/group_id/NBIRTH/edge_node_id
  • The namespace for Sparkplug B is always "spBv1.0".

  • For PLC Shift, the group_id is configurable at the PLC Shift device level using the Export Settings->MQTT Group Name configuration setting and has a default value of "plc-shift".

  • NBIRTH indicates the type of Sparkplug B message. The spec has a list of all the legal values.

  • For PLC Shift, the edge_node_id is either the Device Name or the Device Export Name. The Device Name is used if the Device Export Name is empty. Note that both of these will be cleaned for characters that are illegal in MQTT topic names.
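The rules above can be sketched as a small topic-building function. The sanitization shown (dropping the MQTT wildcard and separator characters `+`, `#`, and `/`) is an assumption about what "cleaned" means, not PLC Shift's actual code:

```python
# Sketch of building the NBIRTH topic per the rules above. The
# clean() behaviour is an assumption, not PLC Shift's actual code.

def clean(name: str) -> str:
    # Drop characters that are illegal in MQTT topic names
    # (wildcards and the level separator).
    return "".join(ch for ch in name if ch not in "+#/")

def nbirth_topic(group_id: str, device_name: str, export_name: str = "") -> str:
    # The Device Export Name wins when set; otherwise the Device Name.
    edge_node_id = clean(export_name or device_name)
    return f"spBv1.0/{group_id}/NBIRTH/{edge_node_id}"

print(nbirth_topic("plc-shift", "Well Pad #1"))
```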

From the Sparkplug B specification, DDATA messages should be published on a topic that looks like:

namespace/group_id/DDATA/edge_node_id/device_id

The beginning of this topic is very similar to an NBIRTH topic, but it has one extra field: the device_id. For PLC Shift apps, this is either the app's name (User Configured Name) or the App Export Name. Both of these are configurable at the app level. The User Configured Name is used when the App Export Name is empty.
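The device_id fallback can be sketched the same way (illustrative function names, not the PLC Shift runtime):

```python
# Sketch of the DDATA topic: the NBIRTH prefix plus a device_id,
# which falls back from the App Export Name to the app's configured
# name. Illustrative only.

def ddata_topic(group_id, edge_node_id, app_name, app_export_name=""):
    device_id = app_export_name or app_name
    return f"spBv1.0/{group_id}/DDATA/{edge_node_id}/{device_id}"

print(ddata_topic("plc-shift", "rtu-01", "Gas Flow"))
print(ddata_topic("plc-shift", "rtu-01", "Gas Flow", "meter-1"))
```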



Configure a PLC Shift App's Name

Subscribing

Up to now, we've shown how to publish data from nodes or devices to a broker using MQTT and Sparkplug B. To subscribe to changes to metrics from the host to the node, or from the host to a device, we subscribe to *CMD topics. Specifically:

namespace/group_id/NCMD/edge_node_id
namespace/group_id/DCMD/edge_node_id/device_id

The NCMD topic is used to subscribe to node level metrics and the DCMD topic is used to subscribe to device level metrics. The values in the topics are the same as described above. In the PLC Shift runtime, when a message is received on those topics, the Sparkplug B payload is deserialized, and the new value is processed. Only PLC Shift parameters that have their "Subscribe MQTT" option set will be updated. The Sparkplug B payload is matched to the parameter using the alias if the alias is non-zero, or by the parameter name if the alias is 0. See our PLC Shift Apps - Cloud Deployment blog post for more details on how to update parameters in apps using Node Red.
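The matching rule described above (alias first, name as fallback, subscribed parameters only) can be sketched like this, using hypothetical data structures rather than the PLC Shift runtime's actual code:

```python
# Sketch of the DCMD matching rule: match by alias when the incoming
# alias is non-zero, otherwise by name; only parameters with the
# subscribe option set may be updated. Hypothetical structures.

def find_parameter(params, metric_name, metric_alias):
    for p in params:
        if not p["subscribe_mqtt"]:
            continue  # only subscribed parameters may be updated
        if metric_alias != 0:
            if p["alias"] == metric_alias:
                return p
        elif p["name"] == metric_name:
            return p
    return None

params = [
    {"name": "SetPoint", "alias": 0x100000002A, "subscribe_mqtt": True},
    {"name": "ReadOnly", "alias": 0x100000002B, "subscribe_mqtt": False},
]
assert find_parameter(params, "", 0x100000002A)["name"] == "SetPoint"
assert find_parameter(params, "SetPoint", 0)["name"] == "SetPoint"
assert find_parameter(params, "ReadOnly", 0) is None  # not subscribed
```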


Data Records

PLC Shift apps can send tabular, record based data using MQTT and Sparkplug B. At the time this blog was written, this is not yet a part of the official spec, and is only supported by the Cirrus Link Recorder module. However, this is a very powerful feature that expands the capabilities of Sparkplug B from working with just streaming data to also being able to handle record based data. Record based data typically has a single timestamp and then multiple columns for each value that occurred at that time.


Cirrus Link explains how this works in their Recorder application note. The idea is fairly straightforward, and is very similar to publishing streaming data. A DRECORD payload consists of a list of metrics. A JSON example is below.
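As a rough, assumed illustration only (consult the Cirrus Link application note for the authoritative payload shape), a record pairing one timestamp with a value per column might look like this:

```python
# Rough, ASSUMED illustration of a record as a list of metrics: one
# timestamp plus one metric per column. The Cirrus Link application
# note is the authoritative reference for the real shape.
import json

record = {
    "metrics": [
        {"name": "timestamp", "type": "DateTime", "value": 1689465600000},
        {"name": "flow_rate", "type": "Double",   "value": 123.4},
        {"name": "pressure",  "type": "Double",   "value": 512.0},
    ]
}
print(json.dumps(record, indent=2))
```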


The serialized payload is published to a topic that looks like:

spBv1.0/group_id/DRECORD/edge_node_id/device_id

Specifically for PLC Shift, an app may be configured to publish record data, but not be configured to publish streaming data, so the mechanism to publish records is as follows:

  • When record data is ready, the node publishes an NBIRTH using the device name with _RECORDS appended. This creates a unique node topic. No metrics are required in this NBIRTH message, because no streaming values will be sent.

  • For each app that has record data, the node publishes a DBIRTH using the app name with _RECORDS appended. This creates a unique device topic. No metrics are required in this DBIRTH message, because no streaming values will be sent.

  • The node publishes serialized record payloads to the DRECORD topic as required, until there are no more records left to upload.

  • The node issues a DDEATH for the records device.

  • The node issues an NDEATH for the records node.

This mechanism allows record based data to be uploaded in the background and not interfere with streaming data upload. Streaming data can be sent immediately on change and doesn't get held up because records are being uploaded.
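The upload sequence above can be sketched by listing the topic used at each step (the names here are illustrative):

```python
# Sketch of the record-upload sequence: BIRTH the _RECORDS node and
# device, publish DRECORDs until done, then DEATH both. Illustrative.

def record_upload_topics(group, device, app, n_records):
    node = f"{device}_RECORDS"
    dev = f"{app}_RECORDS"
    topics = [f"spBv1.0/{group}/NBIRTH/{node}",
              f"spBv1.0/{group}/DBIRTH/{node}/{dev}"]
    topics += [f"spBv1.0/{group}/DRECORD/{node}/{dev}"] * n_records
    topics += [f"spBv1.0/{group}/DDEATH/{node}/{dev}",
               f"spBv1.0/{group}/NDEATH/{node}"]
    return topics

for t in record_upload_topics("plc-shift", "rtu-01", "GasFlow", 2):
    print(t)
```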


Code snippets below show how a record and a single column value are encoded by PLC Shift in C#.
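As a language-neutral stand-in for those C# snippets, the general idea (with hypothetical names, not PLC Shift's actual code) is that each column value becomes one metric in the record's metric list:

```python
# Python stand-in for the C# record-encoding snippets: each column
# value becomes one metric in the record's metric list. All names
# and types here are illustrative, not PLC Shift's actual code.

def encode_column(name, value, type_name):
    return {"name": name, "type": type_name, "value": value}

def encode_record(timestamp_ms, columns):
    # One timestamp metric, then one metric per (name, value, type) column.
    metrics = [encode_column("timestamp", timestamp_ms, "DateTime")]
    metrics += [encode_column(n, v, t) for (n, v, t) in columns]
    return {"metrics": metrics}

rec = encode_record(1689465600000, [("flow_rate", 123.4, "Double")])
assert rec["metrics"][0]["name"] == "timestamp"
assert rec["metrics"][1]["value"] == 123.4
```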


Conclusion

MQTT is a lightweight and flexible communications protocol that implements a publish/subscribe model. Sparkplug B is a serialization mechanism that leverages Protocol Buffers technology to bring interoperability to the MQTT protocol. Sparkplug B isn't just a serialization mechanism, however. It also brings some statefulness to the MQTT protocol through the birth and death mechanism. MQTT and Sparkplug B have recently been extended to allow for transmission of record based data.


The combination of MQTT and Sparkplug B is an excellent way to publish data to external systems. PLC Shift has a great MQTT implementation that allows for the publishing of high resolution contextualized data with just a few configuration settings.




