ThingWorx is good at ingesting data from systems and devices and persisting that data in Value Streams.
When users build mashups or services that pull back anything more than small amounts of data, or when many users make queries at the same time, they encounter slow performance and/or server failure.
Let's take a typical example that many of our end users may encounter:
User A is currently gathering data from their edge devices and placing it in a Value Stream. About 80 properties share the same Value Stream, and they estimate roughly 10 million rows of data per month flowing into the platform.
Their application requires querying large sections of this data and displaying aggregate info in certain mashups. A ton of work has been done to optimize these queries to make them as efficient as possible. However, querying with results larger than a few thousand rows causes RAM usage to spike so high that the JVM runs out of heap space and brings down the server.
1) The two query services (QueryPropertyHistory and QueryNamedPropertyHistory) are optimized by:
(1) Before ThingWorx 8.2, we queried property history from the database once for each property (imagine the 80 properties in our example above), then combined the resulting datasets (80 datasets in our example!) and displayed them in one table.
(2) In ThingWorx 8.2, the "query for each property" approach has been replaced with a single query for all properties.
2) Moved most of the query work to the database level; this work was formerly done as post-processing after grabbing all the datasets from the database, including:
(1) Moved the Combiner logic to the database
(2) Moved the Filter functionality to the database
Note: This improvement is only implemented for MS SQL and PostgreSQL; other persistence providers are not covered in this version, though they may be planned for a future release.
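The shift from "one query per property, combine afterwards" to "one query for everything" can be sketched in plain Python. This is not ThingWorx code; the table layout and in-memory SQLite database are illustrative assumptions only.

```python
# Conceptual sketch of the 8.2 change: N round trips plus an in-memory
# combine vs. a single query that returns every property at once.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE value_stream (ts INTEGER, property TEXT, value REAL)")
conn.executemany(
    "INSERT INTO value_stream VALUES (?, ?, ?)",
    [(1, "temp", 21.5), (1, "pressure", 101.3),
     (2, "temp", 22.0), (2, "pressure", 100.9)],
)

def query_per_property(props):
    """Pre-8.2 style: one database query per property, combined afterwards."""
    datasets = {
        p: conn.execute(
            "SELECT ts, value FROM value_stream WHERE property = ?", (p,)
        ).fetchall()
        for p in props                      # N round trips (80 in the example!)
    }
    combined = {}                            # the "combiner" merges on timestamp
    for p, rows in datasets.items():
        for ts, v in rows:
            combined.setdefault(ts, {})[p] = v
    return combined

def query_all_properties():
    """8.2 style: a single query returns every property."""
    combined = {}
    for ts, p, v in conn.execute("SELECT ts, property, value FROM value_stream"):
        combined.setdefault(ts, {})[p] = v
    return combined

# Both approaches yield the same normalized result; the second makes one trip.
assert query_per_property(["temp", "pressure"]) == query_all_properties()
```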
Memory reduction for these services is implemented by storing only one copy of each distinct string.
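The "one copy of each distinct string" idea is essentially string interning. A minimal sketch in plain Python (the pool class and sample rows are assumptions for illustration, not the platform's internals):

```python
# Keep one canonical instance per distinct string value instead of a
# separate copy in every result row.
class StringPool:
    """Returns the canonical instance for each distinct string."""
    def __init__(self):
        self._pool = {}

    def intern(self, s):
        # setdefault stores s on first sight and returns the stored copy after.
        return self._pool.setdefault(s, s)

pool = StringPool()
raw_rows = [("deviceA", "RUNNING"), ("deviceB", "RUNNING"), ("deviceA", "STOPPED")]
rows = [(pool.intern(d), pool.intern(state)) for d, state in raw_rows]

# Every "RUNNING" cell now references the same object rather than its own copy,
# so repeated values in a large result set cost one string, not thousands.
assert rows[0][1] is rows[1][1]
assert len(pool._pool) == 4  # deviceA, deviceB, RUNNING, STOPPED
```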
QueryPropertyHistory and QueryNamedPropertyHistory Services
In our internal test scenario we have seen approximately a 20% reduction in ThingWorx platform memory utilization for these services with no Filter applied.
Filtering may reduce memory further.
In addition, approximately 10% of execution time is saved as a result of this improvement!
Query<Basetype>PropertyHistory services
Memory reduction is highly dependent on the stored data.
In our internal test scenario, this improvement provides up to a 10% memory reduction on the platform.
Note: This improvement is only implemented for MS SQL and PostgreSQL, so the memory reduction does not apply to other persistence providers.
Beyond the Query service improvements in 8.2, following a few rules and choosing the most suitable service can bring additional performance gains, depending on the use scenario. In that sense, this section applies to all ThingWorx versions, not just 8.2.
Repeatedly calling QueryPropertyHistory to display data may cause severe performance problems. Try to clean up unnecessary service calls in the mashup; there should be a significant improvement to the system.
Note: that improvement can be achieved even without any of the 8.2 changes described above.
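One common way to cut down repeated calls is to cache a query result and reuse it until it goes stale. A minimal sketch, assuming a made-up `fetch` function and cache key; this is a general pattern, not a ThingWorx API:

```python
# Serve repeated requests for the same history from a short-lived cache
# instead of re-running the expensive query every time.
import time

_cache = {}

def cached_history(key, fetch, ttl_seconds=60):
    """Return cached data for key, refreshing via fetch() when older than ttl."""
    entry = _cache.get(key)
    now = time.monotonic()
    if entry is None or now - entry[0] > ttl_seconds:
        entry = (now, fetch())          # only hit the backing store when stale
        _cache[key] = entry
    return entry[1]

calls = []
def fetch():
    calls.append(1)                     # counts how often the "query" really runs
    return [("t1", 21.5)]

a = cached_history("thing/temp", fetch)
b = cached_history("thing/temp", fetch)
assert a == b and len(calls) == 1       # second call was served from the cache
```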
Unless you need to create a normalized dataset for multiple properties, try using QueryIntegerPropertyHistory, QueryStringPropertyHistory, etc., as they return smaller datasets and do not use the combiner to normalize the data across a large number of properties.
Similarly, QueryNamedPropertyHistory can provide a smaller dataset if you don't need the values from all the properties on the Thing. For example, if a Thing has 10 logged properties but you only need 3 returned, using QueryNamedPropertyHistory and identifying the three properties specifically needed will return a significantly reduced dataset.
Where/when it makes sense, use a query filter to narrow down the dataset; as with the previous suggestion, the returned dataset will be smaller.
Note: this has more limited use scenarios, though.
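Narrowing with a query condition means only the matching rows ever reach the platform. A minimal sketch with fabricated readings and an assumed predicate:

```python
# Apply the condition as part of the query so only matching rows are returned,
# instead of fetching everything and discarding most of it afterwards.
history = [(ts, 20.0 + ts % 5) for ts in range(100)]   # fabricated (ts, value) rows

def query(history, predicate=None):
    """Return only the rows the predicate accepts (all rows if none given)."""
    return [row for row in history if predicate is None or predicate(row)]

everything = query(history)
recent_hot = query(history, lambda r: r[0] >= 90 and r[1] > 22.0)

assert len(everything) == 100
assert len(recent_hot) == 4          # only ts 93, 94, 98, 99 qualify
```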
All the changes are transparent to the user!
We improved the underlying implementation of the existing services, so users won't need to do anything after upgrading to see the improvements.
There are no changes to the databases or schema.