Hi all,
I'm testing the Store and Forward feature in KEPServerEX. The goal was to verify that data stored on local disk while the connection between KEPServerEX and ThingWorx is down can, once the connection is restored, trigger a DataChange event on ThingWorx that executes a service. The service processes the raw data from Kepware and stores it in our DB. To check the accuracy of the data, we also logged it with a Value Stream in ThingWorx.
According to this article, this scheme should work. In our test, the Value Stream logged correctly and the DataChange event did trigger the service. However, for the data forwarded from storage (buffered while ThingWorx and Kepware were disconnected), instead of one service execution per forwarded value, we saw duplicate rows in our DB. It appears the event fired once per DataChange from Kepware, but each execution ran with only one (the same) value.
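For context, here's a simplified sketch of the kind of DataChange subscription we're using (the Thing and service names are placeholders for our actual entities):

```javascript
// DataChange subscription on the remote Thing.
// eventData is the implicit subscription input: eventData.newValue is
// the VTQ of the changed property (value, time, quality).
var value = eventData.newValue.value;
var timestamp = eventData.newValue.time;

// Hand the single value to the service that writes it to our DB.
// "DataIngestionThing" and "StoreRawValue" are placeholder names.
Things["DataIngestionThing"].StoreRawValue({
    value: value,
    timestamp: timestamp
});
```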
My questions are:
Note: We use Active Mode for Store and Forward, KEPServerEX V6, and ThingWorx 9.
Hello @ST_10591134
That article you referenced says "the Subscriptions still could get the ordered DataChange event and be triggered after Connection is restored"... that may well happen in practice, but it would be more accurate to say that order may or may not be maintained. The reason is that Kepware was built with data transmission performance in mind, and that design sacrifices guaranteed ordering.
To achieve what you're trying to do, you actually need to subscribe to both the DataChange and HistoricalDataLogged events. I made a video explaining this here.
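To make that concrete, here's a rough sketch of the HistoricalDataLogged side. I'm assuming the event hands you the forwarded entries as an InfoTable on eventData; verify the event's actual data shape in Composer, and note that DataIngestionThing / IngestValue are placeholder names:

```javascript
// HistoricalDataLogged subscription (store-and-forward values).
// ASSUMPTION: eventData carries the forwarded entries as an InfoTable;
// check the event's data shape in Composer for the real field names.
var rows = eventData.values.rows;
for (var i = 0; i < rows.length; i++) {
    // Route each forwarded entry through the same ingestion service
    // the DataChange subscription uses for live values, so both paths
    // end up identical on the DB side.
    Things["DataIngestionThing"].IngestValue({
        value: rows[i].value,
        timestamp: rows[i].timestamp
    });
}
```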
You can also use the Utilization Subsystem statistics to count the subscription firings and service executions that call your code to insert into the database.
As for the "what's the best way to do this without using Value Streams" question, I'd suggest using Value Streams anyway: that ingestion pipeline is very well rounded and will be far more reliable and performant than anything you'd likely build in the time available. However, if I were going to set out to do this, I'd probably have an InfoTable catch the updates in memory and then do batched writes every few hundred entries, as sketched below. Note that such an approach has a memory impact, plus any Ignite cache synchronisation cost, whereas doing inserts value by value will be incredibly inefficient on the DB side.
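As a minimal sketch of that batching idea (the buffer property, data shape, and bulk-insert service are all hypothetical names):

```javascript
// Accumulate updates in an in-memory InfoTable property, then flush in
// batches to cut per-row DB overhead. "BufferTable" is an InfoTable
// property on this Thing; "RawValueShape" and "BulkInsertValues" are
// placeholder names.
var BATCH_SIZE = 500;

if (me.BufferTable === undefined || me.BufferTable === null) {
    // Lazily create the buffer from a data shape.
    me.BufferTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
        infoTableName: "BufferTable",
        dataShapeName: "RawValueShape"
    });
}

// Append the incoming value (from the DataChange subscription's eventData).
me.BufferTable.AddRow({
    value: eventData.newValue.value,
    timestamp: eventData.newValue.time
});

// Flush once the batch threshold is reached.
if (me.BufferTable.rows.length >= BATCH_SIZE) {
    Things["DatabaseThing"].BulkInsertValues({ values: me.BufferTable });
    me.BufferTable.RemoveAllRows();
}
```

Bear in mind that in a clustered deployment that buffer property is what gets synchronised through the Ignite cache, which is the cost I mentioned above.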
Greg
Within ThingWorx there should be two options for what to do with the queued data: I think either process each value individually or process only the last value received.
It's been a while since I've worked with it, though.
Hi, thanks for the answer!
Can you elaborate more on what you mean by the options? What exactly is the configuration that we have to change?
Hopefully this article helps: https://www.ptc.com/en/support/article/CS252792?source=search