Slow response even after Enough Memory allocation

TanmeyTWX
17-Peridot


Hi Folks,

 

I have 5000 records to display in a Mashup at a time. The machine is configured as required:

16 GB memory

4-core processor, etc.

But it still takes a long time to load that many records. I also tried increasing Xmx and Xms to 12 GB in the JVM options, but no luck.

 

Kindly suggest a way to speed up the loading of this many records.

 

Thanks! 

slangley
23-Emerald II
(To:TanmeyTWX)


Hi @TanmeyTWX.

 

There are many things to consider with slow performance.  For example, do you experience the same slow response with different browsers?  How about your environment?  Are you running the database on the same machine as ThingWorx?  Are you running other applications on the ThingWorx machine?  Which O/S are you running?

 

For starters, take a look at the following Knowledge Base articles to see if these help:


- Performance is slow when using Microsoft Internet Explorer(IE) 11 to navigate ThingWorx comparing with other browsers
https://www.ptc.com/en/support/article?n=CS227037

- How to capture and view the full HTTP Request/Response sequence for performance investigation against a ThingWorx server
https://www.ptc.com/en/support/article?n=CS224691


Regards.

 

--Sharon

Hi Sharon,

I'm using only the Chrome browser (the recommended one). I have ThingWorx installed with PostgreSQL on the same machine, and no other application is installed on that machine. Windows is the OS.

 

Requirement: I'm querying 2,000 records from a Data Table at a time to display in a grid, which takes around 8-10 seconds. Is there any way to reduce this loading time?
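
For context, the fetch is essentially a single Data Table query along these lines. This is only a minimal sketch in ThingWorx server-side JavaScript; "MyDataTable" is a placeholder for my actual entity, and the parameter names should be checked against the QueryDataTableEntries service in your ThingWorx version:

    // Sketch of the service that feeds the grid (entity name is a placeholder).
    var entries = Things["MyDataTable"].QueryDataTableEntries({
        maxItems: 2000   // all rows for the grid are pulled in one call
    });

    // The resulting infotable is bound directly to the grid widget.
    result = entries;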

Tudor
12-Amethyst
(To:TanmeyTWX)

The .har file requested above will tell us whether the performance drag is due to server processing (e.g., slow queries) or a rendering issue in our default grid. Note that we have a Grid Advanced extension on the PTC Marketplace that has been optimized for larger datasets (and will eventually replace the built-in grid). Thus, in parallel, you may want to test the performance of the Grid Advanced extension.

TanmeyTWX
17-Peridot
(To:Tudor)

Hi Tudor,

 

Please find attached the .HAR details of the action.

 

 

Thanks!

Tudor
12-Amethyst
(To:TanmeyTWX)

Thanks for the output.  It looks like the 9 seconds are spent fetching the data, not on the rendering side.

 

The why is a bit unclear from the screenshot: it could be a networking delay, or a delay fetching the data.

 

I'd recommend loading the .har file in a tool like Fiddler, which gives a better breakdown of where the delay in communication occurs:

https://www.telerik.com/fiddler

 

Additionally:

  • Since this is a Data Table operation, please verify that the required index has been added: https://www.ptc.com/en/support/article?n=CS261063
  • Add logging to the custom service that fetches data from the DB, recording start/stop times, to see if the delay is on the DB side
    • This can be done by simply adding logging before and after the DB query and analyzing the Script log afterwards (see the sketch after this list)
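
As a rough illustration of that logging step (a sketch only; QueryDataTableEntries and the script-log logger are standard in ThingWorx service scripts, but adapt the entity and service names to your implementation):

    // Sketch: time the Data Table query and write the duration to the Script log.
    var start = new Date().getTime();

    var entries = Things["MyDataTable"].QueryDataTableEntries({
        maxItems: 2000
    });

    var elapsedMs = new Date().getTime() - start;
    logger.warn("QueryDataTableEntries returned " + entries.rows.length +
                " rows in " + elapsedMs + " ms");

    result = entries;

If the logged time is already close to the full 9 seconds, the delay is on the data-fetch side rather than in the network or the grid rendering.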
TanmeyTWX
17-Peridot
(To:Tudor)

Hi Tudor,

 

I am not familiar with Fiddler and have never used it, so its debugging output is hard for me to follow. Also, I'm not using any external DB, only ThingWorx Data Tables (the default persistence provider, with PostgreSQL installed on the same machine).

Additionally, I need to maintain historic data, so I'm not sure about using the VACUUM operation.

 

I'm getting the data response in 9 seconds for 2-5K records. Could you please let me know whether this is expected, or whether some configuration can be done to reduce it?

Tudor
12-Amethyst
(To:TanmeyTWX)

Thank you for confirming the setup.

 

9 seconds to fetch 2000 records is not expected behavior.  We'd expect the service itself to fetch data in under a second from the DB.

 

At this point it sounds like we'd need to dig deeper into how the data is returned from the DB and how quickly that operation completes.  If you find the data table performance articles above challenging, I'd advise opening a case with support so we can walk through the steps.  But DB tuning/indexing, as recommended in the article I posted above, would be the first step here.

slangley
23-Emerald II
(To:slangley)

Hi @TanmeyTWX.

 

Are you still experiencing performance issues?  If so, I will be happy to open a case on your behalf.  To do so, you will need to drop me a private message with the name of your corporate account and customer number.

 

If you have resolved the issue, please post the solution and mark it as the Accepted Solution for the benefit of others with similar issues.

 

Regards.

 

--Sharon

Hi Sharon,

 

No, I haven't found a solution yet and couldn't improve the performance.

Before opening a case with PTC, I would like to try once more from my end.

I will try to divide the data set into smaller clusters with more drill-down (filters), roughly as sketched below. I also want to try serving the data from an external DB. I will share the solution here once I see any improvement from this approach.
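
Roughly, the plan is to stop pulling everything in one call and instead pass a filter and a smaller maxItems to the Data Table query, along these lines (a sketch only; the "siteId" field and its value are made up for illustration, and the query JSON format should be verified against the ThingWorx documentation):

    // Sketch: fetch only one filtered "cluster" of the data per request.
    var queryJson = {
        filters: {
            type: "EQ",
            fieldName: "siteId",   // hypothetical drill-down field
            value: "Site-01"
        }
    };

    var page = Things["MyDataTable"].QueryDataTableEntries({
        maxItems: 500,       // smaller page size per request
        query: queryJson     // filter chosen from the mashup drill-down
    });

    result = page;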

 

Thanks!

slangley
23-Emerald II
(To:slangley)

Hi @TanmeyTWX.

 

Please let us know if you have resolved your performance issues and provide the solution you found.  If you are still experiencing issues, I recommend opening a case for further investigation.

 

Regards.

 

--Sharon

Hi Sharon,

 

The solution is on hold for now. I will post the solution here if I find one, or create a case with PTC, once I resume working on this.

 

Thanks!
