
Performance - query large value streams & downloading of data

SN_10359527
4-Participant


We get timeout issues when we try to retrieve a lot of data from a value stream. This causes the service to fail, and then the data cannot be displayed in the mashup.


1. How can we improve our services that use queries to handle large value streams?
2. Are there limits on the query size that ThingWorx can handle, which we should stick to?
3. Are there settings we can configure in Influx or ThingWorx to improve the performance or allow larger queries?

4. What best practices can we use to handle large data queries?

5. Sometimes we see in the Dev Tools network monitor that a service fails before it reaches the 30-second timeout. Is ThingWorx estimating how long it will take and then stopping it from running?

 

Some of the errors we are getting from the application logs:


1. We get an error 2006 - Influxdb2DataExceptionTranslator
2. Execution error in service script [*****] :: com.thingworx.common.exceptions.DataAccessException: [2,006] Unknown error occurred. Contact the administrator if this re-occurs.
3. Content Length (216861088) of 'service name' bigger then max allowed

 

Additional Info:
ThingWorx 9.4.0-b49
InfluxDB 2.7

1 ACCEPTED SOLUTION

Accepted Solutions

The problem you explained is one of the most common causes of slowness people encounter when they first start their ThingWorx journey.

The main reason is that you are simply displaying or querying too much data, and you need to understand clearly where the bottleneck is.

You will observe that QueryNamedPropertyHistory does exactly what you ask it to do: if you put a large time range, you'll get absolutely all the rows in that time range.

The most common bottlenecks are these:

  1. The response from the server is too large and takes too long to be received. In the browser's Developer Tools you can observe the timings of any specific HTTP request: e.g. how much time the server takes to process it and how much time the response takes to reach the client (see the attached screenshot: VladimirRosu_0-1689318561209.png).

    The key point (and there is no way around it) is to understand how long the data takes to reach your client browser (the Receiving time). If that takes 10 seconds, the only things you can do are to make the data smaller (more on this below) or to improve the network connectivity between the server and the users.

  2. Displaying the data is another potential source of slowness. Specifically, if you're trying to display 30k rows, each with 6 columns, that will not work at all. The charts are optimized to display a number of lines that makes sense to the user looking at the screen. If you try to display 30k points, I doubt there will be anything actionable for the end user there. From long experience, I remember that a maximum of around 2,000 points makes sense for a user to see. The key question here is whether the chart itself is the reason for the slowness: maybe the data transfer above took 1-2 seconds (= very good infra) and the chart rendering is what is slow.

What I usually do:

  1. If possible, I try to pre-aggregate the historical property values to whatever makes sense for that use case, e.g. Temperature, Temperature_daily, Temperature_monthly (or _shift, etc.). That by itself is always the best approach, because you no longer spend time aggregating on demand (otherwise, what is going to happen when 100 users simultaneously request data for a large time range?).
  2. If pre-aggregation is not possible, then I try to down-sample in the following ways:
    1. If you are using Influx (which by default you should always do in these scenarios), you can use InfluxDB's continuous queries to do that (e.g. GROUP BY time(12m)).
    2. If you are using a classic DB to store historical property values (I recommend this only for small time ranges), you can down-sample in JS in a ThingWorx service; see the sketch after this list. There is plenty of code out there to help with this, and note that there are different down-sampling strategies to choose from. Note, however, that this will obviously impact ThingWorx's CPU usage.
    3. There are situations (though this does not apply to your use case) where historical data is stored in historians. Those historians already have down-sampling services built in, and you can use them in a similar way to Influx.
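
For the down-sample-in-JS option (2.2 above), here is a minimal sketch in plain ES5 JavaScript, so it also runs inside a ThingWorx service script. It only illustrates bucket averaging: the rows array, the values and the bucket size are placeholders, and in a real service you would build the input from the InfoTable returned by QueryNamedPropertyHistory.

// Bucket-averaging down-sample. "rows" is assumed to be an array of
// { timestamp: Date, value: Number } objects.
function downsampleAverage(rows, bucketMs) {
    var buckets = {};
    for (var i = 0; i < rows.length; i++) {
        var key = Math.floor(rows[i].timestamp.getTime() / bucketMs) * bucketMs;
        if (!buckets[key]) {
            buckets[key] = { sum: 0, count: 0 };
        }
        buckets[key].sum += rows[i].value;
        buckets[key].count += 1;
    }
    var out = [];
    for (var k in buckets) {
        out.push({
            timestamp: new Date(Number(k)),
            value: buckets[k].sum / buckets[k].count
        });
    }
    // Sort by time so the chart receives the points in order.
    out.sort(function (a, b) { return a.timestamp - b.timestamp; });
    return out;
}

// Example: raw points collapsed into 12-minute buckets (mirrors GROUP BY time(12m)).
var rawRows = [
    { timestamp: new Date("2023-07-14T10:00:00Z"), value: 20.1 },
    { timestamp: new Date("2023-07-14T10:05:00Z"), value: 20.5 },
    { timestamp: new Date("2023-07-14T10:20:00Z"), value: 21.0 }
];
var reduced = downsampleAverage(rawRows, 12 * 60 * 1000); // -> 2 averaged points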

Now, a good source of inspiration for how proper chart display should be implemented is Dynatrace, and you can replicate this approach in ThingWorx. In that monitoring tool, the agents read data and send it to the server at a 2-minute interval. When the user selects, for example, the last 30 minutes, the charts display the raw data (2m resolution). However, if the user selects "last 90 days", the chart displays down-sampled data at a 6-hour interval, which means at most 360 points. It's a very common pattern across these tools.
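
As a rough sketch of that pattern (illustrative only, these are not ThingWorx APIs): derive the down-sampling interval from the requested time range so that the chart never has to render more than a fixed number of points.

// Choose a bucket size so any requested range renders as at most maxPoints
// chart points, but never finer than the raw sampling interval.
function chooseBucketMs(startDate, endDate, maxPoints, rawIntervalMs) {
    var rangeMs = endDate.getTime() - startDate.getTime();
    var bucketMs = Math.ceil(rangeMs / maxPoints);
    return Math.max(bucketMs, rawIntervalMs);
}

var RAW_MS = 2 * 60 * 1000; // 2-minute raw data, as in the Dynatrace example

// "Last 30 minutes" stays at the raw 2-minute resolution (15 points);
// "last 90 days" lands on a 6-hour bucket, i.e. at most 360 points.
var last30Min  = chooseBucketMs(new Date(Date.now() - 30 * 60 * 1000), new Date(), 360, RAW_MS);
var last90Days = chooseBucketMs(new Date(Date.now() - 90 * 24 * 60 * 60 * 1000), new Date(), 360, RAW_MS);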

You can, if you want, design custom charts that display 30k points with fast rendering, but those won't have the same capabilities as the built-in chart, and visually they will most probably be extremely tough for the user to digest - imagine seeing every max and min as a spike...


5 REPLIES

Article - "How to increase the size of accepted files sent to ThingWorx and Flow": https://www.ptc.com/en/support/article/CS334518

SN_10359527
4-Participant
(To:VladimirN)

Hi @VladimirN, does this solution apply to ThingWorx 9.4, and does it also apply to exporting data to CSV?

slangley
23-Emerald II
(To:SN_10359527)

Hi @SN_10359527.

 

Can you share the details of your use case?  How are you presenting the data to your end users and how many rows of data are you pulling back in the response?

 

Regards.

 

--Sharon

 

 

SN_10359527
4-Participant
(To:slangley)

Hi @slangley 

 

I am using the QueryNamedPropertyHistory service to get data that can range from 0 to 30,000 rows and 6 columns/properties (daily). Note that the number of properties can also increase.

 

I am displaying these data points on a chart (which by default is set to show the daily data). The problem begins when the user requests data for a period of 3 weeks or more: the response time drops significantly.
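
For reference, the service call is roughly of this shape (a simplified sketch; the Thing and property names are placeholders, and the exact parameter names and data shape should be checked against the service definition in Composer):

// Simplified sketch of the call; "MyMachineThing", "Temperature" and "Pressure"
// are placeholders, and startDate/endDate are the service's DATETIME inputs.
var propertyNames = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "PropertyNames",
    dataShapeName: "EntityList"
});
propertyNames.AddRow({ name: "Temperature" });
propertyNames.AddRow({ name: "Pressure" });

var result = Things["MyMachineThing"].QueryNamedPropertyHistory({
    propertyNames: propertyNames,
    startDate: startDate,
    endDate: endDate,
    maxItems: 30000,
    oldestFirst: true
});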

