We get timeout issues when we try to retrieve a lot of data from a value stream. This causes the service to fail, and then the data cannot be displayed in the mashup.
1. How can we improve our services that use queries to handle large value streams?
2. Are there limits on query size that ThingWorx can handle, and that we should stay within?
3. Are there settings we can configure in Influx or ThingWorx to improve the performance or allow larger queries?
4. What best practices can we use to handle large data queries?
5. Sometimes we see in the dev tools network monitor that a service fails before it reaches the 30-second timeout. Is ThingWorx estimating how long the service will take and then stopping it from running?
Some of the errors we are getting from the application logs:
1. We get an error 2006 - Influxdb2DataExceptionTranslator
2. Execution error in service script [*****] :: com.thingworx.common.exceptions.DataAccessException: [2,006] Unknown error occurred. Contact the administrator if this re-occurs.
3. Content Length (216861088) of 'service name' bigger then max allowed
Additional Info:
ThingWorx 9.4.0-b49
InfluxDB 2.7
Article - "How to increase the size of accepted files sent to ThingWorx and Flow": https://www.ptc.com/en/support/article/CS334518
Hi @VladimirN, does this solution apply to ThingWorx 9.4, and does it also apply to exporting data to CSV?
Hi @SN_10359527.
Can you share the details of your use case? How are you presenting the data to your end users and how many rows of data are you pulling back in the response?
Regards.
--Sharon
Hi @slangley
I am using the QueryNamedPropertyHistory service to get data that can range from 0 to 30,000 rows across 6 columns/properties (daily). Note that the number of properties can also increase.
I am displaying these data points on a chart (which by default is set to show the daily data). The problem begins when the user requests data for a period of 3 weeks or more; the response time then degrades significantly.
The problem you describe is one of the most common causes of slowness that people encounter when they first start their ThingWorx journey.
The main reason is that you are simply displaying or querying too much data, and you need to understand clearly where the bottleneck is.
You will observe that QueryNamedPropertyHistory does exactly what you ask it to do: if you pass a large time range, you will get absolutely every row in that time range (see the bounded-query sketch below).
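To make that concrete, here is a minimal sketch of a bounded query, assuming a Thing named "MyAssetThing" that logs a NUMBER property "temperature" to the value stream (both names are hypothetical placeholders). The point is to always constrain the call with startDate/endDate and a maxItems cap, so a careless time range can never pull back an unbounded number of rows:

```javascript
// Build the list of properties to query (EntityList rows carry the property names)
var propertyNames = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "PropertyNames",
    dataShapeName: "EntityList"
});
propertyNames.AddRow({ name: "temperature" });

// Bounded query: the date range AND maxItems both limit the result size
var result = Things["MyAssetThing"].QueryNamedPropertyHistory({
    propertyNames: propertyNames,
    maxItems: 5000,        // hard cap; the service can never return more rows
    startDate: startDate,  // DATETIME inputs of your wrapper service
    endDate: endDate,
    oldestFirst: false
});
```

With a cap like this the worst case stays predictable, and the mashup can prompt the user to narrow the range instead of timing out.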
The most common blocking points are these:
The key one (and there is no way around it) is the time it takes for the data to reach the client browser (the Receiving time in the network monitor). If that takes 10 seconds, the only things you can do are to make the payload smaller (more on this below) or to improve the network connectivity between the server and the users.
What I usually do: a good source of inspiration for how proper chart display should be implemented is Dynatrace, and you can replicate its pattern in ThingWorx (see the sketch after this paragraph). In that monitoring tool, the agents read data and send it to the server at a 2-minute interval. When the user selects, for example, the last 30 minutes, the charts display the raw (2m) data. However, if the user selects the last 90 days, the chart displays down-sampled data at a 6-hour interval, which means at most 360 points. It's a very common pattern across all of these monitoring tools.
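As a rough illustration of that down-sampling idea (a sketch, not the exact implementation Dynatrace or ThingWorx uses), here is a service script that averages a raw history result down to a fixed number of points before handing it to the chart. The Thing and property names are hypothetical, and it groups by row count rather than by time bucket to keep the example short:

```javascript
// Pull the raw history for the requested range; maxItems is only a safety net,
// the real bound is the startDate/endDate pair supplied by the caller.
var raw = Things["MyAssetThing"].QueryPropertyHistory({
    maxItems: 100000,
    startDate: startDate,
    endDate: endDate,
    oldestFirst: true
});

// Aim for at most ~500 points on the chart, whatever the range is
var maxPoints = 500;
var step = Math.max(1, Math.ceil(raw.rows.length / maxPoints));

// Build the reduced infotable that the chart will actually receive
var result = Resources["InfoTableFunctions"].CreateInfoTable({ infoTableName: "downsampled" });
result.AddField({ name: "timestamp", baseType: "DATETIME" });
result.AddField({ name: "temperature", baseType: "NUMBER" });

for (var i = 0; i < raw.rows.length; i += step) {
    var sum = 0, count = 0;
    // Average one group of up to "step" consecutive raw rows
    for (var j = i; j < Math.min(i + step, raw.rows.length); j++) {
        sum += raw.rows[j].temperature;
        count++;
    }
    result.AddRow({ timestamp: raw.rows[i].timestamp, temperature: sum / count });
}
```

At 500 points, a 3-week pull of ~30,000 rows shrinks to something the chart renders instantly, at the cost of averaging away detail the user could not see at that zoom level anyway.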
You can design custom charts that display 30k points if you want speed at rendering time, but those won't have the same capabilities as the built-in chart, and visually they will most likely be extremely tough for the user to digest - imagine seeing every max and min as a spike...