
Service execution thread optimization


Abstract

This article explores an approach to optimizing ThingWorx thread usage during service execution. It addresses a common scenario where the Event Processor thread pool becomes saturated, which can lead to queuing and slow responsiveness for users. The article proposes a way to configure and optimize services, illustrated with an example.

Please use the post comments to provide feedback or suggestions.

All capabilities used are 100% out-of-the-box (OOTB) ThingWorx.

In the following, the acronym SETO (Service Execution Thread Optimization) is used to refer to the provided entities and their capabilities.

 

Introduction

During the execution of a service, firing an event triggers a subscription that runs in a new thread. In addition to the best practices for Timers and Schedulers, SETO uses this mechanism to parallelize the execution of a service across subdivisions of a population of Things.

When using ThingWorx, clients often encounter issues related to the regular execution of services such as "get data" or "update KPIs". When executed frequently, these services can lead to several challenges:

  1. Single-threaded processing: All processing happens in a single thread, which can lead to long execution times.
  2. Thread pool saturation: When too many services execute simultaneously, the thread pool can become saturated, slowing down the entire system.
  3. Queue overflows: Thread pool saturation can lead to queue overflows, where new tasks can no longer be processed in time.
  4. Task loss: Overflows can result in the loss of important tasks, affecting service reliability.
  5. Unresponsive server: Under overload, the server can become unresponsive, impacting user experience and operational continuity.

Data processing requirements often push toward reducing execution time without due consideration of the impact on the stability and availability of the other required services. This post provides explanations and examples of how to think differently about executing data processing quickly and reliably while also keeping the system stable and available.

 

Why Use Service Execution Thread Optimization?

To address these issues, Service Execution Thread Optimization (SETO) is essential. It allows for effective management of resource saturation and its consequences. SETO lets you control the percentage or number of threads used and the safety delay between executions.

[Image: QuentinR_0-1747817033651.jpeg]

 

Selective Execution Management

Prioritized processing involves sorting tasks from oldest to newest, focusing on older tasks to prevent them from falling too far behind. However, it is crucial not to limit execution to old tasks only, but to find a balance so that all eligible tasks get processed. If not all Things manage to execute the service before the safety delay expires, SETO skips the remaining ones; those Things are prioritized at the next timer event.

Selectively orchestrating work execution lets you design your data processing workflow so that it is coherent with the functional requirements of your various use cases while also being tuned for performance and for your provisioned resources. The example here covers a functional requirement that data be processed within a 1-hour period; it therefore sorts the processing oldest first and allows execution windows to be skipped if the system is busy at that time. Your application of this approach may use a more sophisticated strategy suited to your needs, for example treating sensitive assets with more urgency depending on their type or location. This is where multiple copies of the SETO entity can be adapted to your use cases: selection criteria, work sorting, timing, thread usage configuration, and so on.
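To make this concrete, below is a minimal, illustrative sketch of such oldest-first selection as it could be written in a ThingWorx service script. The ThingTemplate name (Pump_TT), property name (lastKPIUpdate), service name (UpdateKPIs), and the sequential loop are assumptions for this example; the attached SETO entities dispatch the work across multiple threads rather than running it sequentially.

```javascript
// Illustrative oldest-first selection (assumed names: Pump_TT, lastKPIUpdate, UpdateKPIs).
var updateOlderThanMinutes = 60;                                   // "Update older than"
var cutoff = dateAddMinutes(new Date(), -updateOlderThanMinutes);

// List implementing Things (QueryImplementingThingsOptimized is preferred at scale).
var implementing = ThingTemplates["Pump_TT"].QueryImplementingThings({ maxItems: 10000 });

// Keep only Things whose last execution is older than the cutoff, then sort oldest first.
var eligible = [];
for (var i = 0; i < implementing.rows.length; i++) {
    var name = implementing.rows[i].name;
    var last = Things[name].lastKPIUpdate || new Date(0);          // never executed -> oldest
    if (last < cutoff) {
        eligible.push({ name: name, last: last });
    }
}
eligible.sort(function (a, b) { return a.last - b.last; });

// Stop dispatching once the safety window closes; skipped Things keep their old timestamp
// and therefore come first at the next timer event (3-minute timer, 10-second safety timeout).
var timerPeriodMs = 3 * 60 * 1000;
var safetyTimeoutMs = 10 * 1000;
var hardStop = new Date().getTime() + timerPeriodMs - safetyTimeoutMs;
for (var j = 0; j < eligible.length && new Date().getTime() < hardStop; j++) {
    Things[eligible[j].name].UpdateKPIs();                         // no-parameter service
    Things[eligible[j].name].lastKPIUpdate = new Date();           // stamp the execution
}
```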

[Image: QuentinR_1-1747817033658.jpeg]

 

Applicability and Ease of Use

SETO has been designed to avoid touching the existing model as much as possible. The only prerequisite is a DATETIME property on each Thing to track the last execution of the service, one property per service.
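For instance, assuming the service to optimize is called UpdateKPIs, a DATETIME property such as lastKPIUpdate (both names are assumptions for this example) can be added to the Things and stamped whenever the service has run:

```javascript
// Stamp the tracking property after the tracked service has run on this Thing.
// "lastKPIUpdate" is the assumed "Last update property name" configured in SETO;
// SETO reads it to decide which Things are eligible at the next timer event.
me.lastKPIUpdate = new Date();
```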

[Image: QuentinR_2-1747817033667.jpeg]

 

How to use it? Configuration of Allocated Threads Per Service

1. Create a Thing from the PTCSC.SETO.WorkProcessManager_TT ThingTemplate.

2. Configure the AllocatedThreadsPerService Table:

[Image: QuentinR_3-1747817033669.jpeg]

 

  • TS or TT: The entity type that carries the service. Accepted values are "TT" for ThingTemplate and "TS" for ThingShape.
  • TS or TT name: The name of the entity that carries the service.
  • Service name: The name of the service whose execution you want to streamline.
  • Safety timeout: A safety margin, in seconds, that defines a hard stop before the next timer trigger. Minimum of 5 seconds. Example: if the timer is set to 3 minutes and the safety timeout is set to 10, the service is no longer dispatched to remaining Things after 2 minutes and 50 seconds. The safety timeout must be less than the timer update rate.
  • Last update property name: The name of the DATETIME property on Things used to store the last service execution timestamp.
  • Update older than: The minimum number of minutes since the last update before the service is triggered on a Thing. If set to 0, all Things are taken into account every time.
  • Percent available thread: The percentage of available threads to use. Value constrained between 0 and 99. The result is truncated to a whole number. Example: 13 threads available, 60% requested → 13 × 0.6 = 7.8, so 7 threads will be used (see the sketch after this list).
  • Use number of threads: Selects how the requested number of threads is determined. TRUE, use the percent of available threads / FALSE, use the Number of threads column.
  • Number of threads: Maximum number of threads allowed to run in parallel to process the list of Things. This number should be set so that at least a few threads remain free. SETO always leaves at least one thread free, so the number of threads actually used may be lower than requested, possibly zero. The remaining threads can be used by other executions outside SETO.
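As a hedged illustration of the thread-budget arithmetic described above (the attached implementation may differ in detail), the requested percentage of available threads is truncated to a whole number and at least one thread is always left free:

```javascript
// Illustrative thread-budget computation (not the attached SETO code).
// availableThreads: currently free Event Processor threads; percent: "Percent available thread".
function computeThreadBudget(availableThreads, percent) {
    var requested = Math.floor(availableThreads * percent / 100);  // truncated: 13 * 60% = 7.8 -> 7
    return Math.max(0, Math.min(requested, availableThreads - 1)); // always leave one thread free
}

// Example from the column description above: 13 available threads, 60% requested -> 7 threads used.
var budget = computeThreadBudget(13, 60);
```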

3. Create a Subscription in the new Thing.

 

Create a Subscription to link the Timer to the Service to execute.
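As an illustration, the subscription on the new Thing listens to the Timer event of the chosen Timer (here assumed to be named SETO_Timer_3min) and simply delegates to the work-processing service of the template. The dispatcher service name used below (ProcessWork) is a placeholder; check the attached entities for the actual name.

```javascript
// Subscription body on the SETO Thing (source: SETO_Timer_3min, event: Timer).
// "ProcessWork" is a placeholder for the dispatcher service exposed by
// PTCSC.SETO.WorkProcessManager_TT; refer to the attached entities for the real name.
logger.debug("SETO timer event received at " + new Date());
me.ProcessWork();   // reads the AllocatedThreadsPerService table and dispatches the configured services
```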

[Image: QuentinR_4-1747817033678.jpeg]

 

Result example

The following example demonstrates how logging is seamlessly integrated into SETO during the execution optimization process.

Logging is part of the ServiceExecution_SUB subscription in PTCSC.SETO.WorkProcessManager_TT ThingTemplate.
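For illustration only (the actual logging in ServiceExecution_SUB may differ), such a log line typically reports how many Things were processed and how many were skipped; the names and counts below are example values, not real output:

```javascript
// Illustrative logging in the spirit of ServiceExecution_SUB; names and counts are examples.
var serviceName = "UpdateKPIs";
var eligibleCount = 120, processedCount = 112, skippedCount = 8;
logger.info("SETO - " + serviceName + ": " + processedCount + " of " + eligibleCount
    + " eligible Things processed, " + skippedCount + " skipped before the safety timeout");
```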

[Image: QuentinR_5-1747817033690.jpeg]

 

Important Notes

  • Appropriate entity selection, sorting, and scheduling algorithms depend on the specific use cases and workload. SETO uses QueryImplementingThingsOptimized for better performance, so indexing is encouraged.
  • Processing needs to handle the resulting status, and potentially needed retries, as part of the scheduling algorithm. You will want to add use-case-specific logic that ensures entities whose processing fails are only attempted a limited number of times and that their error state is notified to and addressed by a system administrator (see the sketch after this list).
    • Example: simply sorting by oldest last execution time will cause anomalous, non-working entities to surface to the top and take priority over all other processing (an anti-pattern to avoid, which calls for a circuit breaker).
  • SETO can only trigger Services without input parameters. You could evolve the concept to pass Service parameters through JSON property event data.
  • In this article, a Timer is used to trigger a regular event. As a best practice, it is better to use a Scheduler (which can fire just as regularly as a Timer).
    • Indeed, when the server starts, all Timers start with it, whereas a Scheduler's trigger depends on the current datetime and not on server startup.
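As an example of such a guard, the sketch below caps the number of attempts per Thing. The failureCount property, the names used, and the whole approach are assumptions layered on top of SETO for illustration, not part of the attached entities.

```javascript
// Hedged sketch of a per-Thing retry cap ("circuit breaker"). Assumes each Thing also carries
// a NUMBER property failureCount, and reuses the oldest-first "eligible" list from the
// selection sketch earlier in this article. Not part of the attached SETO entities.
var MAX_ATTEMPTS = 3;

for (var i = 0; i < eligible.length; i++) {
    var thing = Things[eligible[i].name];
    if (thing.failureCount >= MAX_ATTEMPTS) {
        continue;                                       // circuit open: leave it to an administrator
    }
    try {
        thing.UpdateKPIs();                             // the no-parameter service being optimized
        thing.failureCount = 0;                         // reset on success
        thing.lastKPIUpdate = new Date();
    } catch (err) {
        thing.failureCount = thing.failureCount + 1;
        logger.warn("SETO - " + eligible[i].name + " failed (attempt " + thing.failureCount
            + "/" + MAX_ATTEMPTS + "): " + err);
        // Once the cap is reached, raise an alert or notification for an administrator.
    }
}
```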

 

Conclusion and provided file

By optimizing Service execution, businesses can improve the performance and reliability of their applications built on the ThingWorx Platform. Proper configuration helps prevent common issues and ensures smooth service execution.

Find attached to this article the SETO entities, valid from ThingWorx 9.3 onward.

 

Note: the term "resource" is used generically above because, in this context, it is not only the thread pool that can become saturated; other tightly coupled downstream components, such as the database, can saturate as well.

 

 

NB: thanks to Greg EVA for sharing knowledge and for the review.
