
Best practices for Thingworx and MQTT

trungkien2104
7-Bedrock


Hello everybody,

I have an application that monitors a list of rooms with attributes such as temperature, humidity, power consumption, etc., and I will use MQTT to send the data to ThingWorx Foundation. Currently, I have two candidate solutions:

 - Solution 1: Use the MQTT Extension from the Marketplace. The workflow is: Sensor -> Gateway (publish) -> MQTT Broker -> MQTT Extension (subscriber). I plan to have only one MQTT client instance subscribing to all topics; each message includes the thing name and attribute/value pairs in JSON. After decoding, I will set the thing attributes directly, e.g. Things[A].attribute = MQTTThing.message.attribute_value. I was facing a problem (see link), but I assume that I can do this without any issue.

 - Solution 2: Use ThingWorx Industrial Connectivity (Kepware): Sensor -> Gateway (publish) -> MQTT Broker -> Kepware -> ThingWorx. Kepware acts as the MQTT client and sends data to ThingWorx over an "AlwaysOn" connection.
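The decoding step in Solution 1 could be sketched in plain Python as below. The payload shape ("thing" and "values" fields) is an assumption for illustration, not the MQTT Extension's actual message format:

```python
import json

def decode_message(payload: str):
    """Decode a hypothetical payload of the form
    {"thing": "Room_101", "values": {"temperature": 21.5, ...}}
    into (thing_name, property_dict). The field names here are
    assumptions, not the extension's real schema."""
    msg = json.loads(payload)
    return msg["thing"], msg["values"]

# In ThingWorx the result would then be written to the Thing's
# properties, e.g. Things[name].temperature = values["temperature"]
name, values = decode_message(
    '{"thing": "Room_101", "values": {"temperature": 21.5, "humidity": 48.0}}'
)
```

With a single subscriber, one such handler can serve every room, since the thing name travels inside the message rather than in the topic.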

So, which solution is the best practice for using MQTT with ThingWorx?

Given the scale of the system, a PTC technical support engineer recommended solution 2. According to him, the MQTT Extension receives data from the broker and then updates the Thing attributes via the REST API, which causes high load on the server because of the HTTP request headers. But I haven't found any documentation that mentions this. I hope someone who knows the ThingWorx architecture can confirm whether this is right or wrong.

And with solution 2, are there any tools to scale the setup? In my experiments with Kepware and a PLC, I had to map each PLC tag to a Thing attribute manually. If I scale to 1000 rooms, doing that by hand would be a nightmare.

18 REPLIES 18

Hello @trungkien2104,

 

Just my 2 cents:

  1. You shouldn't use MQTT (and similar) extensions for any sizeable load simply because they run in ThingWorx JVM and compete with the platform for computational resources.
  2. This extension does not use HTTP to set property values.

Unfortunately I don't know Kepware enough to comment on the recommended best practices.

 

Regards,
Constantine

Thanks for your response. I also think the extension does not use HTTP requests. Do you have any documentation that mentions this? I tried running tcpdump on the ThingWorx Foundation port and didn't see any HTTP requests modifying attributes from the extension.

Hi,

 

I'm not sure how REST comes into play in the MQTT extension (the extension is running in the platform JVM).

 

In addition to the MQTT extension and KEPServer integration, you may want to consider the ThingWorx Protocol Adapter Toolkit. It has an out-of-the-box MQTT channel and is very convenient for data ingestion if your message payload includes the thing name and attribute/value pairs.

Thanks for your suggestion. I'm not familiar with Java (I'm a Python developer), so I prefer using an available solution over developing a new one with the SDK.

Hi @trungkien2104.

 

For something of this nature, we recommend reaching out to Sales to engage with one of our Global Services consultants. They can assess your needs and develop a scalable solution.

 

Regards.

 

--Sharon

iguerra
14-Alexandrite
(To:trungkien2104)

The MQTT Extension does not use the REST API.

It will work as you described in solution 1: you'll get a JSON message from the broker/subscriber thing, and after decoding it you can update the Thing property directly.
For scalability I can't say what the limit is; it seems quite fast. You certainly won't have problems with hundreds of things at a modest update rate each (MQTT is not ideal for very fast data changes / streamed data), but the limit is surely much higher.

The limitation of the MQTT Extension used "as is" is that it does not handle incoming messages in a FIFO queue. It is event-driven, so for fast incoming messages (consecutive messages within a few milliseconds) the sequential execution of incoming data is not guaranteed. This is not a problem in most cases, but if it is, external queuing should be implemented.

 

Thanks for your response. I used tcpdump to monitor the ThingWorx port and confirmed that the MQTT Extension does not use the REST API to update values. In my use case, missing data in some intervals is not critical, so I don't think the queue is necessary; if it is, it needs to be implemented in the broker, not on the ThingWorx side. I'm concerned about scale because I don't have enough devices to run a benchmark. I prefer the MQTT Extension, as in the linked post, because binding each thing property from Kepware to a ThingWorx Thing is a really bad experience. In my case, with 1000 Things (12 properties per Thing), doing it manually is not a good idea. In your experience, to limit the number of MQTT connections, should I use a single JSON message, e.g. topic /device with payload {"temperature": 20, "humidity": 50, "set_point": 20, "power": 40}, or one topic per property, e.g. /device/temperature, /device/humidity, /device/setpoint?
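To make the two topic schemes concrete, here is a minimal sketch of what each publisher would emit for one room; the "devices/..." topic prefix and property names are hypothetical:

```python
import json

device = "room_17"  # hypothetical device/thing name
reading = {"temperature": 20, "humidity": 50, "set_point": 20, "power": 40}

# Option A: one topic per device, the whole reading as one JSON payload
single_topic = [("devices/" + device, json.dumps(reading))]

# Option B: one topic per property, scalar payloads
per_property = [("devices/%s/%s" % (device, prop), str(value))
                for prop, value in reading.items()]
```

Option A sends one PUBLISH per reading; option B sends one PUBLISH per property, which multiplies per-packet overhead but maps naturally onto the extension's per-property auto mapping.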

 

 

 

iguerra
14-Alexandrite
(To:trungkien2104)

About the queue: I think the broker sends messages in sequence (I checked and it does). The problem is on the ThingWorx side, where processing is event-driven: it is not guaranteed that the message handler for #msg1 finishes before the handler for #msg2 if they start almost at the same time rather than in sequence. In any case, this is not a problem for you.
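For cases where ordering does matter, one simple mitigation is to have the publisher embed a monotonically increasing sequence number in each payload, so the consumer can at least detect out-of-order handling. A minimal stdlib sketch (the "seq" field is an assumption added by the publisher, not part of the extension):

```python
import json

def make_detector():
    """Return a handler that flags messages whose "seq" field is not
    strictly increasing, i.e. messages handled out of order."""
    state = {"last_seq": -1}
    out_of_order = []

    def handle(payload: str) -> None:
        msg = json.loads(payload)
        if msg["seq"] <= state["last_seq"]:
            out_of_order.append(msg["seq"])
        else:
            state["last_seq"] = msg["seq"]

    return handle, out_of_order

handle, bad = make_detector()
for seq in (1, 2, 4, 3):  # message 3 is handled after message 4
    handle(json.dumps({"seq": seq, "value": 0}))
# bad now contains [3]
```

Whether the consumer then drops, reorders, or just logs the late message is application-specific.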

 

About the implementation: I use a topic with the ThingName, and the payload is bound to a single JSON variable containing the whole message; this message is parsed by a single service that runs on the DataChange event. Having a subscription for each variable could be done, but it is less flexible for me, and the subsystem has to manage many more subscriptions. Moreover, with a single JSON message and a service handler you can do much more with the incoming data (arrays of data, structured data, or anything else contained in the JSON).

 

 

I agree with you that "the subsystem has to manage much more subscriptions". But it is practical if you use the "auto mapping" feature of the ThingWorx MQTT Extension. This feature subscribes to all topics in the format "/thing name/thing property" without manual declaration. When you need to change a property (add a new one, modify one, etc.), you just change it on the Thing and restart the Thing, and it works without writing any code. According to this link https://www.hivemq.com/blog/mqtt-essentials-part-5-mqtt-topics-best-practices/, the author also recommends using specific topics. So, have you tested the performance of a single JSON topic versus multiple topics?

iguerra
14-Alexandrite
(To:trungkien2104)

Your approach is correct, and if you have just a few properties (simple values) you may do very well with auto mapping.

 

I also use auto mapping, but not for all properties. I have many properties and most of them are structured (arrays, alarms, etc.), not just single numeric values, so I do better with an auto-mapped JSON message that I parse with a "switch-case" on the "msg_type" tag from the JSON message. That gives good control over the incoming data.
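Outside ThingWorx, the "switch-case on msg_type" idea might look like the following sketch; the msg_type values, handler names, and body fields are all hypothetical:

```python
import json

def handle_scalar(body):
    # Hypothetical handler for a plain numeric reading
    return {"kind": "scalar", "value": body["value"]}

def handle_alarm(body):
    # Hypothetical handler for a structured alarm message
    return {"kind": "alarm", "code": body["code"], "active": body["active"]}

# The "switch-case": msg_type tag -> parsing service
HANDLERS = {"scalar": handle_scalar, "alarm": handle_alarm}

def dispatch(payload: str):
    """Route an incoming JSON message by its msg_type tag."""
    msg = json.loads(payload)
    handler = HANDLERS.get(msg["msg_type"])
    if handler is None:
        raise ValueError("unknown msg_type: %r" % msg["msg_type"])
    return handler(msg["body"])

result = dispatch('{"msg_type": "alarm", "body": {"code": 7, "active": true}}')
```

In ThingWorx the dispatch table would live in a single service subscribed to the DataChange event of the JSON property, as described above.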

 

I didn't test performance, but I think there is not much difference.

 

Yes, I totally agree with you. It depends on your data. I will run a test to compare the performance of a single JSON topic versus multiple topics. Thank you for sharing your experience. I will leave this post open in case someone can contribute a performance comparison between the MQTT Extension and Kepware. For my part, I will try to find the best way to work with the MQTT Extension (single JSON topic or multiple topics).

Just a little benchmark from my case, for other people wondering whether to use one topic (JSON message) or multiple topics (one per attribute). I use a message with 12 properties (all floats). Every 10 s, I send 200 messages, corresponding to 200 different things, from 2 local brokers to a central broker and ThingWorx. I use a Python script to generate the concurrent connections and measure CPU load, memory usage, and network bandwidth on the central broker and on ThingWorx. Using a single JSON message gives higher CPU load than using 12 separate messages (one per property), but lower memory and network usage on both the central broker and ThingWorx, because a single JSON message creates fewer connections. Also, because there is a limit on connections (I already tried several sysctl and limits.conf configurations for Mosquitto), some received data is always missing. A single JSON message loses less data because it sends all the data in one message. So, for my case, a single JSON message is the better solution.
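The bandwidth side of this can be roughly estimated from payload and topic bytes alone (ignoring MQTT fixed headers, which only add further per-packet cost to the split scheme). A sketch with hypothetical topic and property names:

```python
import json

props = {"p%d" % i: float(i) for i in range(12)}  # 12 float properties
thing = "room_001"                                # hypothetical thing name

# Single JSON message: one topic string + one JSON payload
json_bytes = len("devices/" + thing) + len(json.dumps(props))

# Per-property messages: 12 topic strings + 12 scalar payloads
split_bytes = sum(len("devices/%s/%s" % (thing, p)) + len(str(v))
                  for p, v in props.items())

# Each MQTT PUBLISH also carries a fixed header (at least 2 bytes) plus
# the topic length field, so 12 packets pay that overhead 12 times.
```

Repeating topic prefixes across 12 messages is what makes the split scheme heavier on the wire, consistent with the benchmark's lower network usage for the single-JSON case.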

When you send the properties one by one, where do you lose the data: in the broker, or in the platform?

I think I lost some data at the broker. When I tested, I used QoS 0 for the messages (I wanted to stress the server, DDoS-style, to see the peak), so delivery to the MQTT Extension is not guaranteed. It seems the Mosquitto broker limits the number of concurrent connections, so when I open 200 concurrent connections, some QoS 0 messages are lost.

Just a guess: you might be "losing" data because the primary key in your table is based on the timestamp, which has limited granularity. So you're not "losing" data, but overwriting existing rows. An easy way to verify would be to log all records you receive (just before saving them) and compare that log with the data in the table.

 

...but anyway, the approach of processing the complete JSON at once should give you better overall performance and make your life much easier.

Hi Constantine,

I'm sure it is not overwriting, because I use InfluxDB as the persistence store. Each Thing (ThingWorx) is stored as a measurement (like a "table" in PostgreSQL or MySQL). As in the picture I posted, I number the things from 1 to 200 (I send 200 messages every 10 seconds, one message per thing). So it is fully parallel: parallel messages saved into different tables. For each Thing, I subscribe only to topics of the form "Thingname/data_input" (for the JSON message) and "Thingname/Property name" (using the auto-mapping feature of the MQTT Extension). I think I could avoid the missing data by using QoS level 1 or 2, because configuring Mosquitto to raise the connection limit is very hard (I changed the Mosquitto configuration file and the OS network configuration but still had losses; since missing data in some intervals is not critical for my case, I didn't push further). Anyway, I think my test is helpful for others choosing a topic scheme for their case. Now I hope someone who has already tried MQTT with Kepware can compare Kepware and the MQTT Extension to close this topic; then it can serve as a guide for choosing the right MQTT solution.

Hi @trungkien2104.

 

Please let us know if you have found a solution to your issue.  If so, please post it here and mark it as the Accepted Solution for the benefit of others on the community.

 

Regards.

 

--Sharon

 

 

Hi Sharon,

I am waiting for someone who has tested both the MQTT Extension and Kepware to see which one performs better.
