This article explores an approach to optimizing ThingWorx thread usage during service execution. It addresses a common scenario in which the Event Processor executor thread pool becomes saturated, leading to queuing and slow responsiveness for users. The article proposes a way to configure and optimize services, illustrated with an example.
Please use the post comments to provide feedback or suggestions.
All capabilities used are 100% ThingWorx OOTB.
In what follows, the acronym SETO (Service Execution Thread Optimization) refers to the entities and their capabilities.
During service execution, events trigger subscriptions, and each subscription runs in a new thread. In addition to following the best practices for timers and schedulers, SETO uses this mechanism to parallelize the execution of a service across subdivisions of a population of Things.
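To make the subdivision idea concrete, here is a minimal sketch in plain JavaScript (not ThingWorx service code; the names and chunk count are illustrative assumptions). It shows how a population of Things could be split into chunks so that, in ThingWorx, firing one event per chunk would let each chunk's subscription run in its own thread:

```javascript
// Split a list of Thing names into `n` roughly equal chunks.
// Each chunk would be handed to one subscription thread.
function chunkPopulation(thingNames, n) {
  const chunks = Array.from({ length: n }, () => []);
  thingNames.forEach((name, i) => chunks[i % n].push(name));
  return chunks.filter((c) => c.length > 0);
}

// Example: 7 Things spread across 3 worker chunks.
const population = ["T1", "T2", "T3", "T4", "T5", "T6", "T7"];
const chunks = chunkPopulation(population, 3);
// chunks → [["T1","T4","T7"], ["T2","T5"], ["T3","T6"]]
```

Round-robin assignment keeps the chunks balanced even when the population size is not a multiple of the chunk count.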
When using ThingWorx, clients often encounter issues with regularly executed services such as "get data" or "update KPIs". When executed frequently, these services can lead to several challenges.
Data processing requirements often push teams to reduce execution time without properly considering the impact on the stability and availability of other required services. This post provides explanations and examples of how to think differently about executing data processing quickly and reliably while also safeguarding the system's stability.
To address these issues, Service Execution Thread Optimization (SETO) is essential. It enables effective management of resource saturation and its consequences by letting you control the percentage or number of threads used, as well as a safety inter-execution delay.
Prioritized processing sorts tasks from oldest to newest, focusing on older tasks so they do not fall too far behind. However, it is crucial not to limit execution to old tasks only, but to find a balance that processes all eligible tasks. If not all Things manage to execute the service before the inter-execution delay expires, SETO skips the remainder; those Things are prioritized at the next timer event.
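The oldest-first selection with deferral can be sketched as follows. This is plain JavaScript, not the actual SETO implementation; the property name `lastExecuted` and the per-run budget are illustrative assumptions:

```javascript
// Pick the Things to process this timer event, oldest execution first.
// Anything beyond the budget is deferred and will be oldest next time,
// so it naturally rises to the front of the following run.
function selectEligible(things, maxPerRun) {
  const sorted = [...things].sort((a, b) => a.lastExecuted - b.lastExecuted);
  return {
    toProcess: sorted.slice(0, maxPerRun), // handled this timer event
    deferred: sorted.slice(maxPerRun),     // skipped; prioritized next event
  };
}

const things = [
  { name: "T1", lastExecuted: 300 },
  { name: "T2", lastExecuted: 100 },
  { name: "T3", lastExecuted: 200 },
];
const { toProcess, deferred } = selectEligible(things, 2);
// toProcess → T2 (oldest) and T3; deferred → T1
```

Because each processed Thing updates its last-execution timestamp, the deferred Things automatically become the oldest entries at the next timer event, which is exactly the balancing behavior described above.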
Selectively orchestrating work execution lets you design your data-processing workflow so that it matches the functional requirements of your use cases while being tuned for performance and your provisioned resources. The example here assumes a functional requirement that data be processed within a one-hour window, so it sorts the work oldest first and allows execution windows to be skipped when the system is busy. Your own application may call for a more elaborate approach, for example processing sensitive assets with more urgency depending on their type or location. This is where multiple copies of the SETO entity can be adapted to your use cases: selection criteria, work sorting, timing, thread-usage configuration, and so on.
SETO has been designed to avoid touching the existing model as much as possible. The only prerequisite is a Datetime property on each Thing to track the last execution of the service, one property per service.
1. Create a Thing from the PTCSC.SETO.WorkProcessManager_TT ThingTemplate.
2. Configure the AllocatedThreadsPerService Table:
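The exact schema of the AllocatedThreadsPerService table is defined by the attached entities; purely as an illustration, assuming a row carries either a percentage of the Event Processor pool or an absolute thread count (the field names `percentOfPool` and `fixedThreads` are hypothetical), the effective per-service budget might be derived like this:

```javascript
// Illustrative sketch only: derive a per-service thread budget from a
// configuration row. Field names are assumptions, not the actual schema.
function threadBudget(row, eventProcessorPoolSize) {
  if (row.fixedThreads != null) {
    // An absolute count wins, capped at the pool size.
    return Math.min(row.fixedThreads, eventProcessorPoolSize);
  }
  // Otherwise take a percentage of the pool, with at least one thread.
  return Math.max(1, Math.floor((eventProcessorPoolSize * row.percentOfPool) / 100));
}

threadBudget({ serviceName: "UpdateKPIs", percentOfPool: 25 }, 16); // → 4
threadBudget({ serviceName: "GetData", fixedThreads: 2 }, 16);      // → 2
```

Capping the budget well below the pool size is what leaves headroom for other subscriptions, which is the stability goal discussed earlier.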
3. Create a Subscription in the new Thing.
Create a Subscription to link the Timer to the Service to execute.
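As a rough sketch of what that subscription body might do, the snippet below forwards the timer tick to a work-processing service. In ThingWorx, `me` is the subscribed Thing; here it is mocked so the sketch is self-contained and runnable outside the platform, and the service name `ProcessWork` is a hypothetical placeholder:

```javascript
// Mock of the subscribed Thing (in ThingWorx, `me` is provided by the platform).
const me = {
  processed: [],
  ProcessWork: function (params) {
    // In the real entity this would dispatch work to the eligible Things.
    this.processed.push(params.serviceName);
  },
};

// Subscription body: on each timer event, hand off to the processing service.
function onTimerEvent(eventData) {
  me.ProcessWork({ serviceName: "UpdateKPIs", eventTime: eventData.eventTime });
}

onTimerEvent({ eventTime: new Date(0) });
// me.processed → ["UpdateKPIs"]
```

Keeping the subscription body to a simple hand-off keeps the timer thread short-lived; the heavier orchestration stays inside the service itself.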
The following example demonstrates how logging is seamlessly integrated into SETO during the execution optimization process.
Logging is part of the ServiceExecution_SUB subscription in PTCSC.SETO.WorkProcessManager_TT ThingTemplate.
By optimizing Service execution, businesses can improve the performance and reliability of their applications built on the ThingWorx Platform. Proper configuration helps prevent common issues and ensures smooth service execution.
Attached to this article are the SETO entities, valid from version 9.3.
The term "resource" is used generically here, because it is not only the thread pool that can become saturated in this context; tightly coupled downstream components, such as the database, can saturate as well.
NB: thanks to Greg EVA for sharing knowledge and review.