
DataChange Event using Store and Forward executes Subscription/Service with Duplicate Value

ST_10591134
4-Participant


Hi all, 

I'm testing the Store and Forward feature in KEPServerEX. The goal was to verify whether data stored on local disk while the connection between KEPServerEX and Thingworx is down can still trigger a DataChange event on Thingworx, and so execute a service, once the connection is restored. The service processes the raw data from Kepware and stores it in our DB. To check the accuracy of the data, we also logged it with a Value Stream in Thingworx. 

According to this article, this scheme should work. In our test, the Value Stream logged the data properly and the DataChange event triggered the execution of the service. However, for the data that was forwarded from storage (it had been stored because of the disconnection between Thingworx and Kepware), instead of the service executing once per forwarded value, we saw duplicate results in our DB. It looks like the event fired multiple times, once per DataChange from Kepware, but every execution of the service processed the same single value.

My questions are:

  1. Does this have anything to do with the forward method? I'm not sure how the data is forwarded from Kepware to Thingworx when Store and Forward is used.
  2. Other than processing the data directly from the Value Stream, what is the best way to achieve this?

Note: We use Active Mode for Store and Forward, KEPServerEX V6, and Thingworx 9
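
For illustration, a minimal DataChange subscription of the kind described above might look like the sketch below (ThingWorx server-side JavaScript). The Thing and service names are placeholders rather than the real entities from this setup; the point is that eventData.newValue carries the value that fired this particular event, whereas reading the property on the Thing only returns its latest value.

// Hypothetical DataChange subscription on the remote Thing
// eventData.newValue is the value that fired this event; eventTime is its timestamp
var vtq = eventData.newValue;

// Hand the single value to a (hypothetical) database-insert service
Things["Database.Telemetry"].InsertTelemetryRow({
    thingName: me.name,
    value: vtq.value,
    timestamp: eventTime
});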

 

 

ACCEPTED SOLUTION

geva
15-Moonstone
(To:ST_10591134)

Hello @ST_10591134 

 

That article you referenced says "the Subscriptions still could get the ordered DataChange event and be triggered after Connection is restored"... that can indeed happen, but it would be more accurate to say that order may or may not be maintained. The reason is that Kepware was built with data-transmission performance in mind, and that sacrifices guaranteed ordering.

 

To achieve what you're trying to do, you actually need to subscribe to both the DataChange and HistoricalDataLogged events.  I made a video explaining this here.
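
A rough sketch of such a HistoricalDataLogged subscription is below (ThingWorx server-side JavaScript). The payload field names (values, value, time) and the database service are assumptions for illustration; check the event's actual data shape in Composer before relying on them.

// Hypothetical HistoricalDataLogged subscription
// Assumption: eventData.values is an InfoTable holding the replayed entries
var rows = eventData.values.rows;

for (var i = 0; i < rows.length; i++) {
    var row = rows[i];
    // Insert each replayed value individually via a (hypothetical) DB service
    Things["Database.Telemetry"].InsertTelemetryRow({
        thingName: me.name,
        value: row.value,      // assumed field name
        timestamp: row.time    // assumed field name
    });
}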

 

You can also use the Utilization Subsystem statistics to count the Subscriptions fired and the executions of the Service that inserts the data into your database. 

 

As for "what's the best way to do this without using Value Streams" question, I'd suggest using Value Streams, as that ingestion pipeline is very well rounded and will be far more reliable and performant than something that you'd likely spend the time to build.  However if I were going to set out to do this, I'd probably look have an InfoTable that would catch the updates in memory and then do batched writes every few hundred entries.  Note that such an approach would have a memory impact, as well as any Ignite cache synchronisation, however doing inserts value by value will be incredibly inefficient on the DB side.

 

Greg

 


Once upon a time I witnessed large chunks of missing telemetry data after having shut down my IoT Hub Connector for a while. Since then, my particular knack for dealing with escalated situations has given me the opportunity to dig deep into this subject while assisting a number of customers.
4 REPLIES
PaiChung
22-Sapphire I
(To:ST_10591134)

Within Thingworx there should be two options for what to do with the incoming data: I think either process each value individually or process only the last value received.

It's been a while though since I've worked with it.

ST_10591134
4-Participant
(To:PaiChung)

Hi, thanks for the answer!

Can you elaborate more on what you mean by the options? What exactly is the configuration that we have to change?

PaiChung
22-Sapphire I
(To:ST_10591134)
