I have 5,000 records to display in a Mashup at a time. I have the machine configured as required:
16 GB memory
4-core processor, etc.
But it still takes time to load that many records. I also tried increasing Xmx and Xms to 12 GB in the JVM variables, but no luck.
Kindly suggest a way to speed up loading records at this scale.
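For reference, the heap settings were applied along these lines (this assumes a standard Apache Tomcat deployment on Windows; the setenv.bat hook and CATALINA_OPTS variable are the usual Tomcat convention, not anything specific to this installation):

```
rem %CATALINA_HOME%\bin\setenv.bat -- standard Tomcat hook for JVM options
set "CATALINA_OPTS=-Xms12g -Xmx12g"
```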
There are many things to consider with slow performance. For example, do you experience the same slow response with different browsers? How about your environment? Are you running the database on the same machine as ThingWorx? Are you running other applications on the ThingWorx machine? Which O/S are you running?
For starters, take a look at the following Knowledge Base articles to see if these help:
- Performance is slow when using Microsoft Internet Explorer (IE) 11 to navigate ThingWorx compared with other browsers
- How to capture and view the full HTTP Request/Response sequence for performance investigation against a ThingWorx server
I'm using only the Chrome browser (the recommended one). I have ThingWorx installed with PostgreSQL on the same machine, and no other applications are installed on it. The OS is Windows.
Requirement: I'm querying 2,000 records at a time from a DataTable to display in a grid, which takes around 8-10 seconds. Is there any way to reduce this loading time?
The .har file requested above will let us know if the performance drag is due to the server processing (e.g. slow queries?), or if it's a rendering issue in our default grid. Note that we have a Grid Advanced extension on the PTC Marketplace that has been optimized for larger datasets (and will replace the built-in grid eventually). Thus, in parallel you may want to test the performance of the Grid-Advanced extension.
Please find attached the .har capture of the action.
Thanks for the output. It looks like the 9 seconds are spent fetching the data, not on the rendering side.
The why is a bit unclear from the screenshot: it could be a networking delay, or a delay fetching the data.
I'd recommend loading the .har file into a tool like Fiddler, which gives a better breakdown of where the delay in communication occurs.
I'm not familiar with Fiddler and have never used it, so its debugging output is hard for me to understand. Also, I'm not using any external DB, just ThingWorx DataTables (the default persistence provider, with PostgreSQL installed on the same machine).
Also, I need to maintain historic data, so I'm not sure about using the VACUUM operation.
I'm getting the data response in 9 seconds for 2-5K records. Could you please let me know if this is expected, or whether any configuration can be done to reduce it?
Thank you for confirming the setup.
9 seconds to fetch 2000 records is not expected behavior. We'd expect the service itself to fetch data in under a second from the DB.
At this point it sounds like we'd need to deep dive into how the data is returned from the DB and how quickly that operation completes. If you find the articles above on data table performance to be challenging, I'd advise opening a case with support so we can walk through the steps. But DB tuning/indexing as recommended in the article I posted above would be the first step here.
Are you still experiencing performance issues? If so, I will be happy to open a case on your behalf. To do so, you will need to drop me a private message with the name of your corporate account and customer number.
If you have resolved the issue, please post the solution and mark it as the Accepted Solution for the benefit of others with similar issues.
No, I didn't find any solution and couldn't improve the performance.
Before opening a case with PTC, I would like to try once more from my end.
I will try to divide the data set into smaller chunks with more drill-down (filters). I also want to try fetching the data from an external DB. I will share the solution here once I find any improvement from this strategy.
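The chunking idea can be sketched as client-side paging: request small pages instead of one 2,000-row fetch, so the grid shows the first page quickly. Here fetchPage, the page size, and the record counts are placeholders, not an actual ThingWorx API:

```javascript
// Sketch: load a result set in small pages instead of one big fetch.
// fetchPage(offset, limit) stands in for whatever service call returns
// one slice of the query result.
async function loadInPages(totalRecords, pageSize, fetchPage) {
  const rows = [];
  for (let offset = 0; offset < totalRecords; offset += pageSize) {
    // Ask the backend for just the next slice of rows.
    const limit = Math.min(pageSize, totalRecords - offset);
    rows.push(...await fetchPage(offset, limit));
  }
  return rows;
}
```

In a mashup, the equivalent would be binding the grid to a service that accepts an offset/limit pair, rather than pulling the whole set in one call.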
Please let us know if you have resolved your performance issues and provide the solution you found. If you are still experiencing issues, I recommend opening a case for further investigation.
The solution is on hold for now. I will post the solution, if any, or create a case with PTC once I resume working on this.