General Performance Recommendations and some perks (unofficial)

CarlesColl

Here are a few findings from my last two years of development with the TW Platform ( up to version 7.X ):

Performance Recommendations and Findings:

  • Javascript: Never do an implicit truthiness check on any data type ( Boolean, String, Thing, ... ), for example:
      • Instead of:
        var myThing = Things["whateverThing"];
        if (myThing) { // do whatever }
      • Do:
        if (myThing != null) { // do whatever }
      • This can improve performance ( up to 4x ) and also prevents RHINO engine warnings.
  • DataTables: Use Indexes and FindDataTableEntries on any DataTable whenever you can; speed may be 10x better. You can also split a complex query: first a FindDataTableEntries to fetch a subset of the data, then a Query on that subset in memory. To enable an Index correctly you must set it, restart the Thing and then execute Reindex ( Neo4j; not needed anymore on PostgreSQL ). Note that FindDataTableEntries doesn't return more than 500 entries, so it's better to use QueryDataTableEntries with the values parameter, which seems to have the same performance as FindDataTableEntries. See the sketch after the localization code below.
  • In-memory InfoTables: querying them with Query is faster than with Filter.
  • DataTables: Never set maxItems to a "super high" number like 1000000. Why? It pre-allocates an array of that size in memory and also makes queries slower. Always pass maxItems: me.GetDataTableEntryCount(); getting the row count costs almost 0 ms ( Neo4j; on PostgreSQL this service has some cost** ).
  • Javascript: Avoid triggering Try/Catch exceptions. You can use Try/Catch, but maximize the cases where exceptions aren't actually thrown; for instance, an iterator service that triggers a lot of exceptions will be super slow. See the sketch after the localization code below.
  • 7.X / Localization: Don't use the standard GetEffectiveTokens server side. Starting with 7.X ( and at least until 7.3 and 7.2.6 ) it's extremely slow; build a hashtable in order to query it faster. Sample code below ( tokensCache is a JSON, non-persistent property ). Update 2017/11/18: there's an official article that talks about this: https://www.ptc.com/en/support/article?n=CS274193

var language = Resources["CurrentSessionInfo"].GetCurrentUserLanguage();

// -- Hash table solution with one property ( "token" is the service's input parameter )
var tCache = me.tokensCache[language];

if (tCache == undefined) {
    tCache = {};
    var tokens = Resources["RuntimeLocalizationFunctions"].GetEffectiveTokens();
    for each (var t in tokens.rows) tCache[t.name] = t.value;
    me.tokensCache[language] = tCache;
}

result = tCache[token];
if (result == undefined) {
    result = "???";
}
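
As a rough illustration of the DataTables advice above, here is a minimal sketch ( the DataTable, its indexed assetType field and the numeric lastReading field are all hypothetical ): narrow the result set with an indexed lookup first, then refine it in memory with Query instead of running a second DataTable query.

// Minimal sketch; field names are placeholders, "assetType" is assumed to be an indexed field
var values = me.CreateValues();                    // empty InfoTable matching this DataTable's DataShape
values.AddRow({ assetType: "PUMP" });              // only indexed fields speed up the lookup

// QueryDataTableEntries with the values parameter avoids the ~500-entry cap of FindDataTableEntries
var subset = me.QueryDataTableEntries({
    values: values,
    maxItems: me.GetDataTableEntryCount()          // cheap on Neo4j; consider caching it on PostgreSQL
});

// Refine the already-fetched subset in memory ( Query tends to be faster than Filter )
var result = Resources["InfoTableFunctions"].Query({
    t: subset,
    query: { filters: { type: "GT", fieldName: "lastReading", value: 100 } }
});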
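
And a minimal sketch of the Try/Catch point, assuming a "data" InfoTable whose rows carry a thingName and Things that expose a numeric temperature property ( all hypothetical names ): guard explicitly so the exception path is rarely taken, instead of letting it fire on every bad row.

// Exception-heavy iterator: every missing Thing forces an expensive throw/catch in Rhino
var total = 0;
for each (var row in data.rows) {
    try {
        total += Things[row.thingName].temperature;
    } catch (err) {
        // swallowed, but the cost of throwing is still paid for every bad row
    }
}

// Same logic with an explicit guard, so exceptions are (almost) never thrown
total = 0;
for each (var row in data.rows) {
    var sourceThing = Things[row.thingName];
    if (sourceThing != null) {
        total += sourceThing.temperature;
    }
}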

  • PostgreSQL**: GetDataTableEntryCount is super slow compared to Neo4j; we implemented a count cache ( see the sketch after this list ).
  • PostgreSQL is freaking slow compared to Neo4j ( 3x slower* ). If you are migrating from Neo4j you will have to tweak a lot of things performance-wise; after a lot of tweaks and code refactoring we ended up at 1.75x slower*.
  • 7.X is slower than 6.X ( 1.14x* on both Neo4j and PostgreSQL ).
  • Platform startup on PostgreSQL is slower than on Neo4j ( 1.3x* ).

* We did extensive testing to end up with these numbers.
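
For the PostgreSQL count issue above, a minimal sketch of one possible count cache ( entryCountCache is a hypothetical non-persistent NUMBER property that would also need to be refreshed whenever entries are added or deleted ):

// Pay the expensive GetDataTableEntryCount() only when the cache is empty
if (me.entryCountCache == null || me.entryCountCache == 0) {
    me.entryCountCache = me.GetDataTableEntryCount();
}

// Reuse the cached count instead of re-counting on every query
var entries = me.QueryDataTableEntries({
    maxItems: me.entryCountCache,
    query: { filters: { type: "EQ", fieldName: "assetType", value: "PUMP" } }
});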

Perks:

  • Long-lasting services ( >10 minutes, just an approximation ): Never execute a service that lasts longer than 10 minutes; if you can keep services to at most 1 minute, you will be better off. Why? Long-lasting services usually cause a lot of blocking conditions on Things and Resources; the system then starts to hit deadlocks and ends up blocking the whole platform, and the original service never finishes. All our code that can run long has a timeout break that prevents it from exceeding the 10-minute limit. See the sketch after this list.
  • Infotable properties: When updating a persistent InfoTable property, never update it in place ( me.infotableProperty.AddRow() ). Always clone it first, update the clone, and then set the InfoTable property again with the cloned one. See the sketch after this list.
  • Neo4j is not an option; if you are on it, you should be preparing a migration. Period.
  • Build a queue system. Period.
  • When you have complex mashups ( too many services/bindings ), split them into smaller mashups and use Contained Mashups to build up the complex one.
  • The use of Async = true on a service should be really scarce, and you should be 200% sure it won't have edge cases ( which it surely will, and they will happen ). As it's an async process, don't think you can predict its behavior; the graveyard is full of Async futurists.
  • Add/remove permissions with the built-in services ( or better, custom code ), never with Composer. Composer causes the whole entity to restart, which is slower and can have side effects on a running system due to an unnecessary Thing restart.
  • Use naming conventions in your solution. The clearest case is ThingShapes: you must prevent two services on two different ThingShapes from having the same name, which would mean a Thing can't implement both ThingShapes. Oops.
  • Dynamic subscriptions are lost when a Thing restarts ( the ones related to that Thing, of course; and if the subscription is from one Thing to another, it happens in both directions ).
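
A minimal sketch of the timeout-break idea from the first perk, assuming a batch service that walks a "work" InfoTable ( hypothetical names ): it stops well under the 10-minute mark and leaves the rest for the next run.

var startedAt = new Date().getTime();
var maxRunMillis = 8 * 60 * 1000;                        // hard stop well below the 10-minute danger zone

for each (var row in work.rows) {
    // ... process one item ...
    if (new Date().getTime() - startedAt > maxRunMillis) {
        logger.warn("Timeout break reached; remaining items are left for the next run");
        break;                                           // release Thing/Resource locks instead of blocking the platform
    }
}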
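
And a sketch of the clone-then-reassign pattern for persistent InfoTable properties ( historyTable and its fields are hypothetical ):

// Work on a copy, never on the live persistent property
var updated = Resources["InfoTableFunctions"].Clone({ t: me.historyTable });
updated.AddRow({ timestamp: new Date(), value: 42 });

// A single reassignment writes the change back to the property
me.historyTable = updated;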

Hope it helps,

Carles Coll

4 Replies

May I add that, for the point "When having complex mashups", you need to take care not to pass too many large infotables as Mashup Parameters. I have just seen an example following this exact path, with 1867-row infotables being passed as Mashup Parameters.

Hi Vlad,

Where is the problem with passing InfoTables through Mashup parameters? It's all done client side in Javascript, so it should not be a big performance issue.

Best Regards,

Carles.

So, the story is that those large infotables were passed as input parameters to at least 5 or 6 other services on the contained mashup ( it was a 2-level chain of contained mashups ). The original service returning the InfoTable took 4-5 seconds, all download time, and the other services were doing separate POSTs with the received infotable payload... which resulted in an additional 4 seconds * 5 services = 20 seconds just for the upload part, not counting the server processing time.

OK, but then the problem is not passing infotables through Mashups; it's passing "big" infotables to the server, and I don't fully agree that that's a bad approach.

Imagine that you have a server-side InfoTable that takes 10 seconds to calculate ( a CPU-intensive process ) and you already have it back in the browser. Now imagine that you need to totalize this InfoTable: if you do it directly on the server, you will have to run the 10-second calculation again just to totalize it, whereas if you send the already-calculated InfoTable back to the server to totalize, it will be less resource intensive.
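
For example, a totalizing service could simply take the already-calculated rows as an INFOTABLE input parameter ( calculatedRows and the value column are hypothetical names here ) and aggregate them, instead of re-running the 10-second calculation:

// "calculatedRows" is the INFOTABLE input parameter posted back from the browser
var result = Resources["InfoTableFunctions"].Aggregate({
    t: calculatedRows,
    columns: "value",                  // numeric column to totalize
    aggregates: "SUM",
    groupByColumns: undefined
});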
