
IoT & Connectivity Tips

Basic Mashup Widgets Guide Part 1

Overview

This project will introduce how to use some basic Widgets in a Mashup. Following the steps in this guide, you will create a Mashup that reacts to user input using a Button, Toggle Button, and Slider Widget. We will also teach you how to display data to users with the Grid Advanced, Gauge, and Property Display Widgets.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 3 parts of this guide is 60 minutes.

Step 1: Create Mashup

Build Mashup

1. Click the Browse folder icon on the top left of ThingWorx Composer.
2. Select Mashups in the left-hand navigation, then click + New to create a new Mashup.
3. For Mashup Type, select Responsive.
NOTE: A Responsive Mashup scales with a browser's screen size. In the steps below we will create 5 containers, one for each widget, to organize how the widgets are presented.
4. Click OK.
5. Enter a name for your Mashup.
6. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
7. Click Save.
8. Select the Design tab to display Mashup Builder.
9. Select the Layout tab in the upper panel of the left dock.
10. Click Add Bottom to split the Mashup canvas into two halves.
11. Click inside the bottom container to select it, then click Add Left.
12. Click inside the bottom-right container to select it, then click Add Right.
13. Click inside the top container to select it, then click Add Right again.

You should now have 5 containers in two rows, ready to have widgets added.

Step 2: Button

A Button allows users to trigger an action, or stop and start long-running processes.

1. Select the Widgets tab in the upper panel of the left dock, then enter button inside the Filter field in the top-left.
2. Drag-and-drop the Button widget onto the upper-left container.
3. Click the drop-down arrow on the left side of the Button widget.
4. Click and drag the Clicked service shown in the drop-down onto a free area of the Mashup canvas.
5. When the Select Service pop-up appears, click the ResetInputsToDefaultValues service that is provided by the Container.
6. Click Save.
7. Click View Mashup, then click Show/Hide Debug Info.
8. Click the Trace tab and click Start Trace.
9. Click the button you created in your Mashup, then click Stop Trace to see the log of the Clicked event triggering the ResetInputsToDefaultValues service.

Many properties are available that give control over how a Button widget is displayed. Many properties can be changed both statically when designing the Mashup, and dynamically in response to changes in property values.

Bindable Properties

Name | Type | Default | Direction | Description
ContextId | String | None | Input/Output | User-definable value that can be used by downstream triggered widgets
Disabled | Boolean | False | Input | Widget is not usable and is displayed greyed-out if set to true
Label | String | Button | Input | The text that appears on the button
ToolTipField | String | None | Input | Text shown when the user hovers over the widget
Visible | Boolean | True | Input | Widget is visible if set to true
CustomClass | String | None | Input/Output | User-definable CSS class applied to the top div of the button

Static Properties

Name | Type | Default | Description
DisplayName | String | auto-generated | Descriptor used for referring to the widget in Composer and Mashup Builder
Description | String | None | Description used for the widget in user-facing interactions
TabSequence | Number | 0 | Tab sequence index
Height | Number | Autosize | Height of the button
Width | Number | Autosize | Width of the button
Z-index | Number | 10 | Controls widget placement on top of or below other widgets

Widget Events

Name | Description
Clicked | Fired when the user clicks the button

Click here to view Part 2 of this guide.
Install ThingWorx Kepware Server Guide

Overview

This guide will walk you through the steps to install ThingWorx Kepware Server.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Learning Path Overview

This guide is the first on the Connect and Monitor Industrial Plant Equipment Learning Path, and it explains how to get up and running with ThingWorx Kepware Server. If you want to learn to install ThingWorx Kepware Server, this guide will be useful to you, and it can be used independently from the full Learning Path. In the next guide on the Learning Path, we will create an Application Key, which is used to secure the connection between Kepware Server and ThingWorx Foundation. Later in the Learning Path, we will send information from ThingWorx Kepware Server into ThingWorx Foundation. In other guides in this Learning Path, we will use Foundation's Mashup Builder to construct a website dashboard that displays information from ThingWorx Kepware Server. We hope you enjoy this Learning Path.

Step 2: Install ThingWorx Kepware Server

ThingWorx Kepware Server includes over 150 factory-automation protocols. ThingWorx Kepware Server communicates between industrial assets and ThingWorx Foundation, providing streamlined, real-time access to OT and IT data, whether that data is sourced from on-premise web servers, off-premise cloud applications, or at the edge. This step will download and install ThingWorx Kepware Server.

1. Download the ThingWorx Kepware Server executable installer.
2. Select your Language and click OK.
3. On the "Welcome" screen, click Next.
4. Accept the End-User License Agreement and click Next.
5. Set the destination folder for the installation and click Next.
6. Set the Application Data Folder location and click Next. Note that it is recommended NOT to change this path.
7. Select whether you'd like a Shortcut to be created and click Next.
8. On the "Vertical Suite Selection" screen, keep the default of Typical and click Next.
9. On the "Select Features" screen, keep the defaults and click Next.
10. The "External Dependencies" screen simply lists everything that will be installed; click Next.
11. On the "Default Application Settings" screen, leave the default of Allow client applications to request data through Dynamic Tag addressing and click Next.
12. On the "User Manager Credentials" screen, set a unique, strong password for the Administrator account and click Next. Note that skipping setting a password can leave your system less secure and is not recommended in a production environment.
13. Click Install to begin the installation.
14. Click Finish to exit the installer.

Step 3: Open ThingWorx Kepware Server

Now that ThingWorx Kepware Server is installed, you will need to open it.

1. In the bottom-right Windows Taskbar, click Show hidden icons.
2. Double-click the ThingWorx Kepware Server icon.
3. ThingWorx Kepware Server is now installed and running.
4. For additional information on ThingWorx Kepware Server, click Server Help on the Menu Bar.

Step 4: Next Steps

Congratulations! You've successfully completed the Install ThingWorx Kepware Server guide. In this guide, you learned how to:

Download, install, and open ThingWorx Kepware Server

The next guide in the Connect and Monitor Industrial Plant Equipment learning path is Connect Kepware Server to ThingWorx Foundation.

The next guide in the Using an Allen-Bradley PLC with ThingWorx learning path is Connect to an Allen-Bradley PLC.
Data Model Introduction

Overview

This project will introduce the ThingWorx Foundation Data Model. Following the steps in this guide, you will consider data interactions based on user needs and requirements, as well as application modularity, reusability, and future updates. We will teach you how to think about a properly constructed foundation that will allow your application to be scalable, flexible, and more secure.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Benefits

A Data Model creates a uniform representation of all items that interact with one another. There are multiple benefits to such an approach, and the ability to break up items and reuse components is considered a best practice. ThingWorx has adopted this model at a high level to represent individual components of an IoT solution.

Feature | Benefit
Flexibility | Once a model has been created, it is simple to update, modify, or remove components without needing to rework the system or retest existing components.
Scalability | It's easy to clone and modify devices that are either identical or similar when changing from a Proof of Concept or Pilot Program to a Scaled Business Model.
Interoperability | Seamlessly plug into other applications.
Collaboration | A Data Model allows pre-defined links between components, meaning that various parts can be defined when designing the model so that multiple people can work on those individual parts without compromising the interoperability of the components.
Seamless platform | A Data Model allows for seamless integration with other systems. A properly-formed model will make it easier to create high-value IoT capabilities such as analytics, augmented/virtual reality, industrial connectivity, etc.
Step 2: Entities

Building an IoT solution in Foundation begins with defining your Data Model: the collection of Entities that represent your connected devices, business processes, and your application. Entities are the highest-level objects created and maintained in Foundation, as explained below.

Thing Shape

Thing Shapes provide a set of characteristics represented as Properties, Services, Events, and Subscriptions that are shared across a group of physical assets. A Thing Shape is best used for composition, to describe relationships between objects in your model. Thing Shapes promote reuse of contained Properties and business logic that can be inherited by one or more Thing Templates. In Foundation, the model allows a Thing Template to implement one or more Thing Shapes, which is similar to a class definition in C++ that has multiple inheritance. When you make a change to a Thing Shape, the change is propagated to the Thing Templates and Things that implement that Thing Shape, so maintaining the model is quick and easy.

Thing Template

Thing Templates provide base functionality with Properties, Services, Events, and Subscriptions that Thing instances use in their execution. Every Thing is created from a Thing Template. A Thing Template can extend another Thing Template. When you release a new version of a product, you simply add the additional characteristics of the new version without having to redefine the entire model. This model configuration provides multiple levels of generalization of an asset. A Thing Template can derive one or more additional characteristics by implementing Thing Shapes. When you make a change to a Thing Template, the change is propagated to the Things that implement that Thing Template, so again, maintaining the model is quick and easy. A Thing Template can be used to classify the kind of a Thing or asset class, or as a specific product model with unique capabilities.
If you have two product models whose interaction with the solution is the same (same Properties, Services, and Events), you could model them as one Thing Template. Classifying Thing Templates is useful for aggregating Things into collections, which are useful in Mashups. You may want separate Thing Templates for indexing, searching, and future evolutions of the products.

Thing

Things are representations of physical devices, assets, products, systems, people, or processes that have Properties and business logic. All Things are based on Thing Templates (inheritance) and can implement one or more Thing Shapes (composition). A Thing can have its own Properties, Services, Events, and Subscriptions, and can inherit other Properties, Services, Events, and Subscriptions from its Thing Template and Thing Shape(s). How you model the interconnected Things, Thing Templates, and Thing Shapes is key to making your solution easy to develop and maintain in the future as the physical assets change. End users will interface with Things for information in applications and for reading/writing data.

Best Practice: Create a Thing Template to describe a Thing, then create an instance of that Thing Template as a Thing. This practice leverages inheritance in your model and reduces the amount of time you spend maintaining and updating your model.

Step 3: Inheritance Model

Defining Things, Thing Templates, and Thing Shapes in your Data Model allows your application to handle both simple and complex scenarios.

Entity | Function
Thing Shapes | Assemble individual components.
Thing Templates | Combine those components into fully functional objects.
Thing | Unique representation of a set of identical components defined by the Thing Template.

In this example, there is a Parent/Child model between two related Thing Templates.

NOTE: Things and Thing Templates may only inherit ONE Thing Template. Both Things and Thing Templates may inherit any number of Thing Shapes.
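The inheritance rules in the NOTE above map loosely onto ordinary class-based code. The sketch below is an analogy only: ThingWorx Entities are not JavaScript classes, and every name in it is invented for illustration. It models a Thing Template as a class (single inheritance), Thing Shapes as mixins (any number, composition), and a Thing as an instance.

```javascript
// Analogy only: a Thing Shape behaves like a mixin, reusable across Templates.
const GPSShape = (Base) => class extends Base {
  getLocation() { return { lat: 0, lon: 0 }; } // placeholder reading
};

// A Thing Template is like a class; it may extend exactly ONE other Template...
class TractorTemplate {
  constructor(serialNumber) { this.serialNumber = serialNumber; }
}

// ...while implementing any number of Thing Shapes (composition).
class ConnectedTractorTemplate extends GPSShape(TractorTemplate) {}

// A Thing is an instance of a Template, representing one physical asset.
const tractor123 = new ConnectedTractorTemplate("SN-123");
console.log(tractor123.serialNumber);  // inherited from the Template
console.log(tractor123.getLocation()); // inherited from the Shape
```

As in ThingWorx, changing the Shape (mixin) or Template (class) immediately changes every Thing (instance) built from them.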
Thing Templates employ a linear relationship, while Thing Shapes employ a modular relationship. Any Thing or Thing Template may have any number of sub-components (i.e. Thing Shapes), but each Thing or Thing Template is just one description of one object as a whole. How you decide to compartmentalize your Data Model into Thing Shapes and Thing Templates to create the actual Things that you'll be using is a custom design that will be specific to each implementation.

Step 4: Scenario

The ThingWorx Data Model provides a way for you to describe your connected devices and match the complexity of a real-world scenario. Things, Thing Templates, and Thing Shapes are building blocks that define your data model.

You can define the components of Things, Thing Templates, and Thing Shapes, including Properties, Services, Events, and Subscriptions.

Component | Definition
Properties | Each Property has a name, description, and a data type (Base Type). Depending on the base type, additional fields may be enabled. A simple scalar type, like a number or string, adds basic fields like default value. More complex base types have more options. Properties can be static (e.g., Model Number) or dynamic (e.g., Temperature).
Services | A Service is a method/function defined by a block of code that performs logic specifying actions a Thing can take. There are several implementation methods, or handlers (for example: Script, SQLQuery, and SQLCommand), for Services depending on the template you use. The specific implementation of a user-defined Service is done via a server-side script. The Service can then be invoked through a URL, a REST-client-capable application, or by another Service in ThingWorx. When you create a new Service, you can define input properties and an output. You can define individual runtime permissions for each Service.
Events | Events are triggers that define changes of state (for example: device is on, temperature is above/below threshold) of an asset or system, and often require an action to correct or respond to a change. Business logic and actions in a ThingWorx application are driven by Events.
Subscriptions | A Subscription is the action associated with an Event; it is the primary method to set up intelligence in the ThingWorx model, enabling you to optimize and automate. Subscriptions use JavaScript code to define what you want your application to do when the Event occurs.

NOTE: Anything inherited by a Thing Template or Thing will inherit the associated Components.

This diagram shows what a specific Inheritance Model might look like for a connected Tractor. There is one master Template at the top. In this case, it's a collection of similar types of tractors. The parent Template inherits a few Shapes: an Engine and a Deck that have been used in previous designs. Importing them as Shapes allows us to reuse previous design work and expedite the development process. One of the child Templates incorporates another Shape, this time in the form of a GPS tracking device. Then, at the bottom, there are the specific tractors with individual serial numbers that will report their connected data back to an IoT application.

Step 5: Next Steps

Congratulations! You've successfully completed the Data Model Introduction, and learned about:

The function of a data model for your IoT application
Data model components, including Thing, Thing Shape, and Thing Template
How ThingWorx components correspond to connected devices

Please comment on this post so we can improve this guide in future ThingWorx version iterations.

This guide is part of 2 learning paths:

The next guide in the Getting Started on the ThingWorx Platform learning path is Configure Permissions.
The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Design Your Data Model.
Create An Application Key Guide

Overview

In order for a device to send data to the Platform, it needs to be authenticated. One authentication method is to use an Application Key. Application Keys, or appKeys, are security tokens used for authentication in ThingWorx. They are associated with a given User and have all of the permissions granted to the User with which they are associated. This is one of the most common ways of assigning permission control to a connected device.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Learning Path Overview

This guide explains the steps to create a ThingWorx Application Key, and is part of a Learning Path. You can use this guide independently from the full Learning Path; if you want to learn to create a ThingWorx Application Key, this guide will be useful to you. When used as part of the Industrial Plant Learning Path, you should already have installed ThingWorx Kepware Server. We will use the Application Key to send information from ThingWorx Kepware Server into ThingWorx Foundation. Other guides demonstrate Foundation's Mashup Builder to construct a website dashboard that displays information from ThingWorx Kepware Server. We hope you enjoy this Learning Path.

Step 2: Create Application Key

Application Keys are assigned to a specific User for secure access to the platform. Using the Application Key for the default User (Administrator) is not recommended. If administrative access is absolutely necessary, create a User and place the User as a member of the SecurityAdministrators and Administrators User Groups. Create the User the Application Key will be assigned to, then:

1. On the Home screen of Composer, click + New.
2. In the dropdown list, click Application Key.
3. Give your Application Key a name (e.g., MyAppKey).
4. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject.
5. Set the User Name Reference to the User you created.
6. Update the Expiration Date field; otherwise, it will default to 1 day.
7. Click Save.

A Key ID has been generated and can be used to make secure connections.

IP Whitelisting for Application Keys

One of the features of an Application Key is the ability to set an IP whitelist. This allows the server to specify that only certain IP addresses should be able to use a given Key ID for access. This is a great way to lock down security on the platform for anything that will maintain a static IP address. For example, connected Web-based business systems may have a static IP from which all calls should be made. Similarly, you can use wildcards to specify a network, narrowing the range of IP addresses allowed while still offering some flexibility for devices with dynamic IP addresses. Extremely mobile devices should likely not use this feature, however, as they will often change networks and IP addresses and may lose the ability to connect when the IP whitelist feature is used.

Interact with Application Keys Programmatically

Service Name | Description
GetKeyID | Returns the ID of this Application Key
GetUserName | Gets the username associated with this Application Key
IsExpired | Returns whether this Application Key is expired
ResetExpirationDateToDefault | Resets the expiration date of the Application Key to the default time, based on configuration in the UserManagement subsystem
SetClientName | Sets the client name for this Application Key
SetExpirationDate | Sets the expiration date of this Application Key to a provided date
SetIPWhiteList | Sets the values for the IP whitelist for this Application Key
SetUserName | Sets the associated user name for this Application Key

Tip: To learn more about Application Keys, refer to the Help Center.

Step 3: Next Steps

Congratulations! You have successfully created an Application Key. We hope you found this guide useful.
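To make the Key ID's role concrete, the sketch below builds a REST request that invokes a ThingWorx service using the appKey request header. This is a non-authoritative sketch: the server URL, Thing name, service name, and key value are placeholders, and the endpoint follows the commonly documented /Thingworx/Things/{name}/Services/{service} pattern; verify both against the Help Center for your version before relying on it.

```javascript
// Sketch: build (but do not send) an appKey-authenticated service invocation.
function buildServiceRequest(server, thingName, serviceName, appKey, params) {
  return {
    url: `${server}/Thingworx/Things/${encodeURIComponent(thingName)}` +
         `/Services/${encodeURIComponent(serviceName)}`,
    options: {
      method: "POST",                 // ThingWorx services are invoked via POST
      headers: {
        appKey: appKey,               // the Key ID generated in Step 2
        "Content-Type": "application/json",
        Accept: "application/json",
      },
      body: JSON.stringify(params || {}),
    },
  };
}

// Placeholder values for illustration only.
const req = buildServiceRequest(
  "https://localhost:8443", "MyThing", "GetPropertyValues",
  "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", {}
);
console.log(req.url);
// fetch(req.url, req.options) would perform the actual call.
```

Because the key carries the User's permissions, treat it like a password: pass it in a header as above rather than embedding it in URLs that may be logged.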
The next guide in the Connect and Monitor Industrial Plant Equipment learning path is Install ThingWorx Kepware Server.
The next guide in the Azure MXChip Development Kit learning path is Connect Azure IoT Devices.
The next guide in the Medical Device Service learning path is Use the Edge MicroServer (EMS) to Connect to ThingWorx.
The next guide in the Using an Allen-Bradley PLC with ThingWorx learning path is Model an Allen-Bradley PLC.
Create Custom Business Logic

Overview

This project will introduce you to creating your first ThingWorx Business Rules Engine. Following the steps in this guide, you will know how to create your own business rules engine and have an idea of how you might want to develop it further. We will teach you how to use your data model with Services, Events, and Subscriptions to establish a rules engine within the ThingWorx platform.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 60 minutes.

Step 1: Completed Example

Download the attached, completed files for this tutorial: BusinessLogicEntities.xml. The BusinessLogicEntities.xml file contains a completed example of a Business Rules Engine. Use this file to see a finished example, and return to it as a reference if you become stuck during this guide and need some extra help or clarification. Keep in mind, this download uses the exact names for entities used in this tutorial. If you would like to import this example and also create entities on your own, change the names of the entities you create.

Step 2: Rules Engine Introduction

Before implementing a business rules engine from scratch, there are a number of questions that should first be answered. There are times when a business rules engine is necessary, and times when the work can be done entirely within regular application coding.

When to Create a Rules Engine:

- When there are logic changes that will often occur within the application. This can be decisions on how to do billing based on the state, or how machines in factories should operate based on a release.
- When business analysts are directly involved in the development or utilization of the application. In general, these roles are often non-technical, but being involved with the application directly will mean the need for a way to make changes.
- When a problem is highly complex and no obvious algorithm can be created for the solution. This often covers scenarios in which an algorithm might not be the best option, but a set of conditions will suffice.

Advantages of a Rules Engine:

- The key reward is having an outlet to express solutions to difficult problems that can be easily verified.
- A consolidated knowledge base for how a part of a system works, and a possible source of documentation. This source of information gives people with varying levels of technical skill insight into a business model.

Business Logic with ThingWorx Core Platform:

- A centralized location for development, data management, versioning, tagging, and utilization of third-party applications.
- The ability to create the rules engine either within or outside of the ThingWorx platform. Because the rules engine can be created outside of the ThingWorx platform, third-party rules engines can be used.
- The ThingWorx platform provides customizable security and built-in services that can decrease development time.

Step 3: Establish Rules

In order to design a business rules engine and establish rules before starting the development phase, you must capture requirements and designate rule characteristics.

Capture Requirements

The first step to building a business rules engine is to understand the needs of the system and capture the rules necessary for success.

- Brainstorm and discuss the conditions that will be covered within the rules engine.
- Construct a precise list.
- Identify exact rules and tie them to specific business requirements.

Each business rule, and the set of conditions within it, will need to be independent of other business rules. When there are several scenarios involved, it is best to create multiple rules, one handling each. When business rules are related to similar scenarios, the best methodology is to group the rules into categories.
Category | Description
Decision Rules | Set of conditions regarding business choices
Validation Rules | Set of conditions regarding data verifications
Generation Rules | Set of conditions used for data object creation in the system
Calculation Rules | Set of conditions that handle data input utilized for computing values or assessments

Designate Rule Characteristics

Characteristics for the rules include, but are not limited to:

- Naming conventions/identifiers
- Rule grouping
- Rule definition/description
- Priority
- Actions that take place in each rule

After this is completed, you will be ready to tie business requirements to business rules, and those directly to creating your business rules engine within the platform.

Rules Translation to ThingWorx

There are different methods for making one-to-one connections between rules and ThingWorx. The simplified method below shows one way that all of this can be done within the ThingWorx platform:

Characteristic | ThingWorx Aspect
Rule name/identifier | Service Name
Ruleset | Thing/ThingTemplate
Rule definition | Service Implementation
Rule conditions | Service Implementation
Rule actions | Service Implementation
Data management | DataTables/Streams

Much of the rule implementation is handled by ThingWorx Services using JavaScript. This allows for direct access to data, other provided Services, and a central location for all information pertaining to a set of rules. The design provided above also allows for easier testing and security management.

Step 4: Scenario Business Rules Engine

An important aspect to think about before implementing your business rules engine is how the Service implementation will flow.

- Will you have a singular entry path for the entire rules engine, or several entries based on what is being requested of it?
- Will you create only Services to handle each path, or will you create Events and Subscriptions (Triggers and Listeners) in addition to Services to split the workload?
How you answer those questions dictates how you will need to break up your implementation. The business rules for the delivery truck scenario are below. Think about how you would break down this implementation.

High Level Flow
1. Customer makes an order with a Company (Merchant).
   1.A. Customer-to-Merchant order information is created.
2. The Merchant creates an order with our delivery company, PTCDelivers.
   2.A. Merchant order information is populated.
   2.B. Merchant sets the delivery speed requested.
   2.C. Merchant sets customer information for the delivery.
3. The package is added to a vehicle owned by PTCDelivers.
4. The vehicle makes the delivery to the merchant's customer.

Lower Level: Vehicles
1. Package is loaded onto a vehicle.
   1.i. Based on the speed selected, add to a truck or plane.
   1.ii. The Ground speed option uses a truck.
   1.iii. The Air and Expedited speed options are based on plane usage, with trucks when needed.
2. The delivery system handles the deliveries of packages.
3. The delivery system finds the best vehicle option for delivery.
4. An airplane or truck can be fitted with a limited number of packages.

Lower Level: Delivery
1. Delivery speed is set by the customer and passed on to PTCDelivers.
2. Delivery pricing is set based on a simple formula of (Speed Multiplier * Weight) + $1 (Flat Fee).
   2.i. Ground arrives in 7 days. The Ground speed multiplier is $2.
   2.ii. Air arrives in 4 days. The Air speed multiplier is $8.
   2.iii. Expedited arrives in 1 day. The Expedited speed multiplier is $16.
3. Deliveries can be prioritized based on a number of outside variables.
4. Bulk rate pricing can be implemented.

How would you implement this logic and add in your own business logic for added profits? Logic such as finding the appropriate vehicle to make a delivery can be handled by regular Services.
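A rule like the pricing formula above translates directly into a small Service implementation. The sketch below implements only the stated formula, (Speed Multiplier * Weight) + $1, as plain JavaScript; the function and constant names are invented for illustration, and in ThingWorx the body would live in a Service on a Thing such as PTCDeliversBusinessLogic.

```javascript
// Speed multipliers from the scenario's delivery rules.
const SPEED_MULTIPLIERS = {
  Ground: 2,     // arrives in 7 days
  Air: 8,        // arrives in 4 days
  Expedited: 16, // arrives in 1 day
};

// price = (speed multiplier * weight) + $1 flat fee
function calculateDeliveryPrice(speed, weight) {
  const multiplier = SPEED_MULTIPLIERS[speed];
  if (multiplier === undefined) {
    throw new Error("Unknown delivery speed: " + speed);
  }
  return multiplier * weight + 1;
}

console.log(calculateDeliveryPrice("Ground", 10)); // 21
console.log(calculateDeliveryPrice("Air", 5));     // 41
```

Keeping each rule in its own small function like this mirrors the earlier advice that each business rule should be independent, so it can be changed or tested without touching the others.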
Bulk rates, prioritizing merchants and packages, delivery pricing, and how orders are handled would fall under Business Logic. The MerchantThingTemplate Thing contains a DataChange Subscription for its list of orders. This Subscription triggers an Event in the PTCDelivers Thing.

The PTCDelivers Thing contains an Event for new orders coming in and a Subscription for adding orders and merchants to their respective DataTables. This Subscription can be seen as the entry point for this scenario. Nevertheless, you can create a follow-up Service to handle your business logic. We have created the PTCDeliversBusinessLogic Thing to house your business rules engine.

Step 5: Scenario Data Model Breakdown

This guide will not go into detail about the data model of the application, but here is a high-level view of the roles played within the application.

Thing Shapes
- ClientThingShape: Shape used to represent the various types of clients the business faces (merchants/customers).
- VehicleThingShape: Shape used to represent different forms of transportation throughout the system.

Templates
- PlaneThingTemplate: Template used to construct all representations of a delivery plane.
- TruckThingTemplate: Template used to construct all representations of a delivery truck.
- MerchantThingTemplate: Template used to construct all representations of a merchant from which goods are purchased.
- CustomerThingTemplate: Template used to construct all representations of a customer who purchases goods.

Things/Systems
- PTCDeliversBusinessLogic: This Thing will hold a majority of the business rule implementation and convenience services.
- PTCDelivers: A Thing that will provide helper functions in the application.

DataShapes
- PackageDeliveryDataShape: DataShape used with the package delivery Event. Will provide necessary information about deliveries.
- PackageDataShape: DataShape used for processing a package.
- OrderDataShape: DataShape used for processing customer orders.
- MerchantOrderDataShape: DataShape used for processing merchant orders.
- MerchantDataShape: DataShape used for tracking merchants.

DataTables
- OrdersDatabase: DataTable used to store all orders made with customers.
- MerchantDatabase: DataTable used to store all information for merchants.

Step 6: Next Steps

Congratulations! You've successfully completed the Create Custom Business Logic guide, and learned how to:

- Create business logic for IoT with resources provided in the ThingWorx platform
- Utilize the ThingWorx Edge SDK platforms with a pre-established business rules engine

We hope you found this guide useful.

The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Implement Services, Events, and Subscriptions.
ThingWorx v10.1 is officially here, and it's the perfect end to a massive year! This release brings together AI Assistants, Model Context Protocol (MCP) for agent-to-system communication, advanced MQTT/Sparkplug B support to simplify industrial data management, robust solutions, and more.

Reflecting on the year, our team delivered 80+ product releases and 1,000+ valuable improvements, impacting 7,000+ manufacturers worldwide. This milestone belongs to our incredible team, our vibrant community, and our partners. But what truly sets us apart is the deep AI expertise we've cultivated together. We've already moved past the "experimental" phase by shipping real-world AI capabilities, and our roadmap for next year is even more ambitious. The bottom line: AI isn't a threat to low-code; it's a force multiplier. We aren't just keeping pace; we're supercharging the platform, and we're only just getting started.

Have a safe and happy holiday, everyone! See you on the other side.

P.S.: Also find attached the downloadable version of the slide that highlights our major on-prem releases, though it doesn't include the nearly double number of regular updates delivered through our SaaS offerings.

Cheers,
Ayush Tiwari
Director, Product Management
Introduction

ThingWorx 10.1 takes a major step toward intelligent interoperability by introducing the Model Context Protocol (MCP) in Public Preview. As industries accelerate their use of AI-assisted operations, MCP provides a standardized way for AI agents to securely connect with real-world industrial data and perform actions across systems.

Our key goal is enabling agent-driven automation along with intelligent interoperability. MCP allows AI models and agents to interact with external systems such as ERP, MES, CRM, and analytics platforms in a structured and secure way. It removes the need for custom connectors and integrations, enabling developers to use a single, open standard to bridge AI and operational data. In essence, MCP allows AI to work natively with ThingWorx services and data models.

ThingWorx and MCP

Low-Code Enablement for Agentic Automation

ThingWorx now embeds an MCP Server directly into the platform. This means that your existing ThingWorx services can be instantly made "AI-ready" without the need for external deployments. Through a simple low-code interface called MCPServices, users can define, manage, and expose three core MCP entities — Tools, Resources, and Prompts — that form the building blocks of contextual AI workflows.

MCP Services

Tools represent actions or functions AI can call (such as retrieving machine KPIs), Resources represent contextual data the AI can reference, and Prompts are reusable templates that guide AI on how to interact with them. This model creates a foundation for secure, context-aware automation that can be reused across multiple agents and applications.

Secure by Design, Open by Standard

MCP in ThingWorx follows industry-standard security protocols by introducing OAuth-protected metadata endpoints, compliant with RFC 9728. These endpoints let clients authenticate and discover the resources they're authorized to use — ensuring data access remains secure while supporting open interoperability.
This aligns with ThingWorx's broader goal: creating an ecosystem where AI agents can safely access contextualized industrial data across systems. Whether connecting to SAP, Salesforce, or another MCP-compatible server, your ThingWorx instance can now participate in a larger agentic ecosystem.

Seamless Interoperability and Scalability

All MCP configurations — tools, resources, and prompts — are stored within ThingWorx's persistence layer, ensuring that your MCP setup scales with your enterprise environment. Agents can connect to multiple ThingWorx servers deployed globally, retrieve contextual data from each, and feed it into AI-driven workflows that span factories, regions, or entire business units.

This design lays the groundwork for domain-specific large language models (DSLMs) and enterprise AI assistants that understand and act on operational data directly from ThingWorx.

For instance, you may have ThingWorx Server 1 supporting factory sites in the USA, another in Germany, and a third in Mexico. These local ThingWorx deployments, powered by MCP capabilities, can expose rich contextualized data for agentic automation.

MCP clients could then be used to perform enterprise-wide KPI calculations to benchmark performance across factory sites over standard metrics such as OEE.

As local systems (ERP, MES, or factory systems) adopt MCP, ThingWorx MCP clients can directly access their data, enabling seamless integration without building custom REST connectors — saving significant development effort in API integration and data mapping.

MCP Usage with TWX - Concept

Feature Summary

- Embedded MCP Server: Native support inside ThingWorx; no extra setup required.
- MCPServices Resource: Low-code interface to manage Tools, Resources, and Prompts.
- OAuth Security: RFC 9728-compliant protected metadata for secure AI access.
- Persistence Layer: Stores MCP configurations across databases for scalability.
- AI Integration: Enables context-aware, agentic automation via MCP Tools.

Feature Benefits

Overall, MCP support with ThingWorx delivers several key benefits:
- Standardized, secure, and discoverable data access via the ThingWorx MCP Server, making operations AI-ready.
- Dynamic population of MCP tools without the need for custom code.
- Ability for AI agent developers to create tailored agents that use selective ThingWorx services enriched with context for higher accuracy.
- Seamless alignment with agentic AI architectures for automated workflows.
- Direct interoperability with enterprise systems already supporting MCP servers, allowing AI agents to connect and retrieve data easily.

Upgrade to ThingWorx 10.1 to try MCP

The introduction of MCP marks the start of AI-native industrial automation in ThingWorx. By adopting 10.1, you gain early access to a framework that will power the next generation of connected, intelligent systems — helping your business stay ahead as AI integration accelerates across industries.

We encourage feedback from users who have tried the MCP preview — what capabilities would you like to see next, and how can we improve interoperability and automation through ThingWorx?

If you're new to ThingWorx release phases, read about Public Preview and other stages here.

Vineet Khokhar
Principal Product Manager, IoT Security

Stay tuned for more updates as we approach the release of ThingWorx 10.1, and as always, in case of issues, feel free to reach out to <support.ptc.com>
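Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages, so a Tool exposed through MCPServices is ultimately invoked via a `tools/call` request as defined by the MCP specification. Here is a minimal sketch of that request framing (the tool name `GetMachineKPIs` and its arguments are purely hypothetical, not an actual ThingWorx service):

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request, the message an MCP
    client sends to invoke a tool exposed by an MCP server."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical tool name and arguments; the actual tools are whatever
# you expose through MCPServices in Composer.
payload = build_tool_call("GetMachineKPIs", {"site": "USA", "metric": "OEE"})
print(payload)
```

An MCP client would send this payload over its transport (for example, HTTP) after completing the OAuth flow described above; the details of authentication and transport are deployment-specific.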
Introduction

With ThingWorx 10.1, we're excited to announce MQTT support for IoT Streams data egress as a Private Preview. This is a significant value add that allows ThingWorx to continue to be used as a DataOps layer for end-to-end industrial data management. With its ability to ingest data from disparate sources, contextualize and transform it, and send data in a custom-defined format over Kafka, and now MQTT, ThingWorx enables real-time consumption of data by end applications, providing the necessary interoperability without the worry of data lock-in.

Especially in the age of AI, customers can use ThingWorx not only to ingest data from connected machines, systems, and people, but also to push processed, contextualized, AI-ready data outward. This enables external systems, data lakes, cloud systems, or UNS layers to consume the data in real time, making IT/OT convergence seamless.

With the built-in Sparkplug B connector, users can have ThingWorx transform contextualized data into Sparkplug B format and publish it directly to MQTT brokers, helping them establish a decoupled, modular, and modern manufacturing architecture with complete access to, and traceability of, their own data.

Modern Manufacturing Architecture

Built for Reliability and Scale

MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe protocol designed for IoT efficiency. The ThingWorx 10.1 implementation uses the Durable Queue Framework, the same foundation behind IoT Streams, to ensure reliable message delivery even under network fluctuations. Developers can configure MQTTQueueProviders to define topics and manage data publishing with minimal setup. Each queue can dynamically format its topic names, ensuring scalable routing for multiple devices or templates. The system supports both MQTT 3.1.1 and 5.0 for broad compatibility.
Seamless Setup, Strong Integration

New ThingWorx entities such as the SparkplugBQueueProvider, the SparkplugBDevice ThingShape, and related services simplify setup and connection management. Administrators can create and associate IoT Streams with MQTT queues in a few clicks, route property updates to external topics, and monitor connection health directly within Composer. This unified workflow makes it easy to connect ThingWorx data to external analytics systems, industrial data lakes, or cloud-based applications using low-code tools from ThingWorx.

Adding Structure with Sparkplug B

Beyond raw MQTT data, ThingWorx now supports Sparkplug B, an open industrial data specification that brings structure and state awareness to MQTT communications. Sparkplug B defines a common topic format and uses birth and death messages to indicate device lifecycle events, ensuring that systems always know which devices are online or offline. Payloads are serialized using Protocol Buffers (Protobuf) for lightweight, high-performance communication, making it ideal for bandwidth-constrained environments.

Together, these enhancements position ThingWorx as both a data receiver and publisher, a core participant in UNS-aligned ecosystems where structured MQTT data flows freely across IT and OT systems, simplifying IT/OT convergence.

Feature Summary

- MQTT Publishing: Send Thing property data to brokers like HiveMQ or Mosquitto.
- Sparkplug B Support: Structured MQTT with standardized topics and lifecycle management.
- Durable Queue Integration: Ensures reliable, ordered message delivery.
- New Queue Providers: MQTTQueueProvider and SparkplugBQueueProvider for outbound data.
- Unified Namespace Alignment: Enables ThingWorx to act as a UNS-compliant data source.

Join the Private Preview

The MQTT egress capability extends ThingWorx interoperability beyond ingestion, allowing contextualized data to be shared across the enterprise or to the cloud.
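To make the Sparkplug B topic convention concrete: the specification defines topics of the form `spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]`, where the message type is one of the birth, death, data, or command messages mentioned above. Below is a minimal sketch of a topic builder (the group, edge node, and device names are hypothetical; in ThingWorx the actual topic formatting is handled by the SparkplugBQueueProvider):

```python
def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic:
    spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]"""
    valid = {"NBIRTH", "NDEATH", "DBIRTH", "DDEATH",
             "NDATA", "DDATA", "NCMD", "DCMD"}
    if message_type not in valid:
        raise ValueError("unknown Sparkplug B message type: " + message_type)
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

# Device-level data message for a hypothetical press on a hypothetical line:
print(sparkplug_topic("PlantA", "DDATA", "Line1", "Press42"))
# -> spBv1.0/PlantA/DDATA/Line1/Press42
# In production, the payload published on this topic is a Protobuf-encoded
# Sparkplug B payload sent over an MQTT 3.1.1 or 5.0 connection.
```

The birth/death message types (NBIRTH, NDEATH, DBIRTH, DDEATH) are what give consumers the device lifecycle awareness described above: a subscriber always knows which edge nodes and devices are online.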
You can now join the Private Preview to gain early access to this key building block of the future UNS-enabled ThingWorx ecosystem and contribute feedback that helps shape its evolution. To understand what Private Preview means in ThingWorx, check this overview here Vineet Khokhar Principal Product Manager, IoT Security Stay tuned for more updates as we approach the release of ThingWorx 10.1, and as always, in case of issues, feel free to reach out to <support.ptc.com>
Introduction

The JavaScript Debugger, first introduced in ThingWorx 10.0, is now generally available with the 10.1 release. This feature is now production-ready and provides a fully integrated debugging experience within the ThingWorx platform. It allows developers to build, test, and troubleshoot logic directly in Composer with greater efficiency and control.

The debugger is built into ThingWorx Composer, enabling developers to step through code, set breakpoints, and view live variable data within the same environment. This integrated approach simplifies development workflows and improves both code quality and delivery speed.

Bringing Real-Time Debugging to the Platform

The feature is powered by a dedicated Debugging Subsystem that manages sessions, breakpoints, and state inspection in a structured way. Developers can initiate and control debugging sessions directly, while the platform automatically handles performance and stability through session timeouts and resource management. If a session becomes idle or runs for too long, ThingWorx automatically terminates it to preserve performance.

The Variable Scope Window displays both ThingWorx base and complex variable types in real time. Developers can inspect and modify variable values as code executes, removing the need to rerun scripts for minor changes. Watch Windows enable dynamic tracking of expressions, improving visibility into logic flow.

Security is built into the debugger. Access is limited to authorized users, with design-time and runtime permission controls that protect sensitive runtime data and maintain system integrity during testing.

Smart, Intuitive, and Streamlined

The debugger is designed to make troubleshooting more transparent and efficient. The call stack view provides detailed execution traces in JSON format, helping developers understand service-to-service interactions across complex workflows.
Breakpoints can be defined across services, ThingTemplates, and ThingShapes, allowing visibility across the model. Developers can use keyboard shortcuts to control debugging steps such as step-in, step-over, and continue. JSON and InfoTable visualization capabilities allow structured data to be viewed and edited directly in the debugger interface. This provides a more interactive and contextual testing environment.

All functionality is available within the ThingWorx Composer interface. There is no external dependency or additional setup required: developers can enable the debugger in platform settings, restart the environment, and begin debugging immediately.

Feature Summary

JS Debugger features with ThingWorx 10.1:

- Integrated Debugging: Native JavaScript debugging directly within ThingWorx Composer.
- Debugging Subsystem: Manages sessions, breakpoints, and execution state efficiently.
- Variable Scope Window: Real-time variable inspection and modification.
- Secure Permissions: Role-based access to debugging tools and logs.
- Breakpoint & Call Stack View: Trace code execution across entities in JSON format.
- Auto Timeout Management: Sessions terminate automatically to preserve performance.
- JSON & InfoTable Viewer: Visual inspection and editing of data structures.

Why Upgrade to ThingWorx 10.1

The JavaScript Debugger in ThingWorx 10.1 enhances the development process with an integrated, secure, and modern debugging environment. It helps teams reduce development time, improve reliability, and simplify issue resolution.

Upgrade to ThingWorx 10.1 to experience a streamlined, secure, and developer-first debugging workflow that will transform how you build and maintain your industrial IoT applications.
You can read about what Private Preview, Public Preview, and General Availability mean in ThingWorx here Vineet Khokhar Principal Product Manager, IoT Security Stay tuned for more updates as we approach the release of ThingWorx 10.1, and as always, in case of issues, feel free to reach out to <support.ptc.com>
Monitor the application that monitors your connected assets!

Hello ThingWorx Community! We are excited to announce the release of a new Best Practices Guide on monitoring your ThingWorx-based applications, complete with an accompanying enablement video!

We understand that it is critical to keep the IoT application that helps increase uptime for your factory and critical assets up and running. In recent ThingWorx releases, particularly since 2024, we've dedicated significant focus to strengthening IT admin and developer productivity. A major improvement is the native support for scraping all system metrics and logs through OpenTelemetry. This enhancement simplifies ThingWorx monitoring using industry-standard, open-source tools like Prometheus and Grafana, which are the de facto standard for modern deployments and DevOps processes.

The comprehensive guide attached to this post, published by our field expert @geva, details all the best practices for monitoring your ThingWorx applications to ensure maximum uptime and smoother IT operations.

The guide covers essential topics including:
- Monitoring tools and layered architecture.
- Pre-built dashboards with concrete examples.
- FAQs and best practices for common issues.

To further ease enablement and adoption, check out the deep dive video below, where the guide's author, Greg Eva, walks through the key concepts and demonstrates the ThingWorx Foundation Dashboard for Grafana:

https://youtu.be/9efOeDUpUo4

This guide is especially valuable for customers running ThingWorx in a self-hosted environment. Of course, for those seeking a hassle-free experience, consider ThingWorx SaaS, where we fully manage and maintain your platform for you. We encourage you to review the guide, share your feedback on the content, and let us know if there are any other topics you'd like us to cover next!

Cheers,
Ayush
Director Product Management, ThingWorx
Hello ThingWorx community!

I'd like to invite you to join me and our ThingWorx experts for an exclusive webinar on October 30 at 11:00 AM (EDT) to explore the transformative potential of Agentic AI within the ThingWorx Industrial IoT and AI platform. In this session, you'll gain insights into ThingWorx's AI strategy, focused on delivering actionable intelligence and accelerating digital transformation across the product lifecycle. You'll also see in action:

- Hands-on demos integrating Large Language Models (LLMs) with ThingWorx 10.X
- How to enable agentic automation using Model Context Protocol (MCP)
- Ways to enhance industrial apps with real-time decisions, natural language, and computer vision AI

So, register here: [Webinar]: ThingWorx Innovation With Industrial IoT and AI

Also, here is a personal video invitation message to you all!

Post-webinar, please provide your feedback and comments below!

We look forward to having you join us!

Cheers,
Ayush Tiwari
Director Product Management
IoT Streams for your end-to-end Industrial Data management with ThingWorx

Since its launch with ThingWorx 10.0 in June 2025, the IoT Streams feature has been a hit! It has exceeded my expectations to see the creative ways it adds value to the ThingWorx ecosystem.

IoT Streams utilizes Apache Kafka or Azure Event Hubs as a distributed streaming platform, serving as a robust message broker and data pipeline. It empowers industrial data management by providing access to ThingWorx contextualized data for analytics, reporting, and generative AI. It enables streaming of ThingWorx data to data lakes and cold archival storage while maintaining hot data availability for real-time insights. It also enhances the platform's scalability and reliability with robust processing of high event volumes through a durable message broker.

Working with early adopters, our services team has authored a best practices guide along with sample code to help you get started with IoT Streams. The GitHub repository at https://github.com/PTCInc/iot_stream provides practical guidance for leveraging ThingWorx IoT Streams to:
- Export data from ThingWorx to external systems
- Implement durable queues for reliability
- Design scalable streaming architectures
- Integrate with Kafka and Azure Event Hubs

In addition, with the recent ThingWorx 10.0.1 maintenance release, users can send custom JSON payloads to external messaging systems using the new WriteJSONToQueue service.

Lastly, I'm also enclosing an excerpt from our ThingWorx 10.0 webinar, where you can see a demo of IoT Streams and hear directly from ThingWorx and Microsoft on how it enables integration between ThingWorx and Microsoft Fabric. For the full video, check out the replay of the ThingWorx 10.0 webinar here: https://www.ptc.com/en/resources/industrials/webcast/thingworx-10-launch-replay

So, check it out, try IoT Streams in your projects, and share how you're using it in the comments.
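On the consuming side, an external application reads the streamed events from the Kafka topic (or Event Hub) and parses the JSON. The payload shape below is a simplified, hypothetical example of a property-update event; the actual structure depends on your IoT Stream configuration, or on whatever you pass to WriteJSONToQueue:

```python
import json

# Hypothetical event shape; the real JSON emitted by an IoT Stream (or a
# custom WriteJSONToQueue payload) depends on your configuration.
SAMPLE_EVENT = json.dumps({
    "thingName": "Press42",
    "property": "temperature",
    "value": 87.5,
    "timestamp": "2025-06-01T12:00:00Z",
})

def parse_event(raw):
    """Extract (thing, property, value) from one streamed event."""
    event = json.loads(raw)
    return event["thingName"], event["property"], float(event["value"])

print(parse_event(SAMPLE_EVENT))
# A real consumer would run this inside, e.g., a kafka-python KafkaConsumer
# loop subscribed to the topic the IoT Stream publishes to.
```

This is where the "no data lock-in" point pays off: any downstream system that can consume Kafka or Event Hubs messages can work with the contextualized data directly, with no ThingWorx-specific client required.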
For questions or support, please do not hesitate to log a technical support ticket.   Cheers, Ayush Tiwari Director Product Management
ThingWorx 10.1: Practical Tools to Power Your Industrial IoT and AI

Hello ThingWorx Community! Thanks for the warm response to ThingWorx 10.0 and all your feedback, it's been invaluable! I'm excited to share what's coming with ThingWorx 10.1, rolling out in private preview this fall. We've focused on delivering practical updates to help you get more from your IoT investments, with new AI capabilities, stronger solutions, and tools to make developers' lives easier. Here's what's new:

Industrial AI That Works for You

We're doubling down on AI to make your operations smarter and faster:

- ThingWorx AI Assistant powered by Industrial AI Agents: Our private preview of AI agents lets you embed chat assistants directly into your mashup applications. These agents tap into ThingWorx's contextualized IoT data, enabling quicker, data-driven decisions without complex setups.
- Model Context Protocol Support: MCP support with ThingWorx allows you to run centralized agents on a hub ThingWorx server, pulling data from connected systems or spoke ThingWorx instances, streamlining agentic AI automation across your operations. While there are several benefits to MCP, one of the top use cases is achieving enterprise-wide KPI calculations using agents and Large Language Models (LLMs). ThingWorx's low-code platform makes it quick and easy to start your AI automation journey.
- MQTT Egress for IoT Streams: Building on IoT Streams from 10.0, we're introducing preview support for MQTT-based egress in formats like Sparkplug B. This helps you manage industrial data end-to-end and aligns with your Unified Namespace strategy for seamless interoperability.
- ThingWorx AI Accelerator V2: Now with predictive capabilities, this tool connects your LLMs to private data, letting you build tailored AI assistants for predictive and interactive experiences, at no cost.
Check out the post here for details: https://community.ptc.com/t5/ThingWorx-Developers/Using-an-LLM-with-ThingWorx-Latest-Version-V2-Update/td-p/1038099

Solutions Built for the Factory Floor

Our out-of-the-box applications have become more powerful to tackle real-world challenges:

- Real-Time Production Performance Monitoring (RTPPM): A revamped event model powering the KPI calculations, plus UI-based configuration, make it reliable and easier to set up and use. It delivers accurate, real-time factory performance data to keep your shop floor running smoothly.
- Asset Monitoring and Utilization (AMU): For high-scale environments, this application now supports up to 1800% better performance, handling massive data ingestion from connected machines with ease.
- Digital Performance Management (DPM) & Connected Work Cell (CWC): We've also made several targeted fixes and improvements to boost manufacturing efficiency and worker productivity.
- Common Building Blocks (BBs): In line with our commitment to the Building Blocks strategy, we're making common BBs (e.g., PTC.Base, PTC.DBConnection, PTC.ModelManagement, PTC.UserManagement) available to all users, including Windchill Navigate and non-SCO customers. Previously exclusive to SCO licenses, these pre-built components simplify app development, speed up time to value, and streamline app management.

Empowering Developers

We know developer experience is critical, so we've added tools to streamline your work:

- JavaScript Debugger (GA): Now available with 10.1, it offers debugging from Thing Templates, JSON and InfoTable display, expression editing, improved layouts, keyboard shortcuts, a dedicated debugging subsystem, and more!
- Developer Productivity Enhancements: The new Rich Text Widget lets you embed a custom rich text editor to create sophisticated user interfaces (for example, mimicking those in Windchill).
It has controls that allow users to format text and lists, enable undo and redo, allow file uploads, add dividers, and more. Collection Widget enhancements also enable effortless drag-and-drop reordering of collection cells. Concurrency fixes for user and organization management improve the reliability of ThingWorx solutions.

- Security Updates: Updated libraries and tech stack enhance security and performance. For regulated industries, we've updated OpenSSL support for Axeda Edge and the C-SDK, ensuring FIPS 140-3 compliance to meet strict standards.

Join the Private Preview

We're opening the private preview for features like AI agents, Model Context Protocol, MQTT egress, and updated RTPPM capabilities. Interested? Leave a comment below, and we'll reach out with next steps to get you started. Not ready for the preview? Start planning your upgrade to the ThingWorx 10.1 GA release this winter to take advantage of these features in production.

Thanks for being part of the ThingWorx journey. Let's keep building smarter industrial solutions together!

Cheers,
Ayush Tiwari, Director of Product Management, ThingWorx
Often when we think about monitoring an application's health, we look to performance metrics and observing changes over time. However, when it comes to critical issues which need immediate attention, alerts set up on relevant ThingWorx logs are the way to notify Ops teams of events. Logs provide contextualized detail of an event that has occurred, allowing for triage and directing troubleshooting.

Let me illustrate an example: ThingWorx is a database-backed application and requires that DB for proper function. A log message indicating that the DB connection has been severed, and another one indicating that a connection to the database cannot be established, immediately tells you that your problem is with the DB - right when it occurs, no analysis required.

Given this, here is a list of some log message substrings to use as examples to build out your own production system monitoring, aimed at detecting common critical or high severity issues using your log management system (Splunk, Loki, DataDog, ElasticSearch, etc.).

ThingWorx Platform
- Apparent Deadlock
- org.postgresql.util.PSQLException: Connection to *:* refused
- Unable to write entry in stream
- Data store unknown error
- Error getting database connection
- Unable to connect to the PTC license server
- Unable To Initialize Entity
- Unable to persist metric
- Unable to persist entries
- Error executing batch
- Too many open files
- CRITICAL ERROR: EventRouter is over capacity
- OutOfMemoryError
- Client timed out while waiting to acquire a resource
- (2,002) No connection
- Acquisition Attempt Failed

Connection Servers
- io.vertx.core.VertxException: Thread blocked
- network unavailable
- Lost connection to platform

Have any log messages that you've found that could be added here? Post them in the comments and I'll add them to the list.
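If your log management system does not already offer substring alerting, the same idea can be implemented in a few lines. The sketch below checks log lines against a subset of the substrings listed above; note that wildcard entries such as `Connection to *:* refused` are matched on their literal prefix here:

```python
# Subset of the critical log substrings listed above. Wildcard entries
# like "Connection to *:* refused" are matched on their literal prefix.
CRITICAL_SUBSTRINGS = [
    "Apparent Deadlock",
    "org.postgresql.util.PSQLException: Connection to",
    "Unable to write entry in stream",
    "Data store unknown error",
    "Error getting database connection",
    "Unable to connect to the PTC license server",
    "Too many open files",
    "CRITICAL ERROR: EventRouter is over capacity",
    "OutOfMemoryError",
    "Lost connection to platform",
]

def find_critical(log_line):
    """Return the first critical substring found in the line, or None."""
    for needle in CRITICAL_SUBSTRINGS:
        if needle in log_line:
            return needle
    return None

line = "2025-01-01 12:00:00 ERROR CRITICAL ERROR: EventRouter is over capacity"
print(find_critical(line))
# -> CRITICAL ERROR: EventRouter is over capacity
```

In practice you would run this over a tailed log stream and fire an alert (PagerDuty, email, webhook) whenever `find_critical` returns a match; dedicated tools like Splunk or Loki do the same matching at scale.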
Overview

A global leader in chemical processing and industrial manufacturing, with a strong international footprint and multiple production sites worldwide, set out to transform its production ecosystem by adopting Industrial IoT (IIoT). The objective was to unify fragmented factory data, enable real-time analytics, and drive operational efficiency through AI-powered insights. Based on detailed use case documentation and architectural workshop findings, this reference architecture outlines a robust, scalable solution designed to integrate factory systems, deliver AI-supported insights in real time, and empower teams through self-service applications.

The solution leverages PTC's ThingWorx suite, along with Microsoft Azure services and complementary technologies, to address key challenges in production, quality, and efficiency across engineering, manufacturing, and operations.

About Beyond the Pilot series

Use Case

A. Engineering – Process Optimization & Quality Control

Problem: Resolving Data Integration & Visibility Challenges

Customer's engineering teams struggled with fragmented data across various factory systems, limiting their ability to analyze process performance and optimize production parameters. Without a unified data platform, engineers could not effectively compare historical and real-time machine centerlining values, making it difficult to maintain consistent production quality.

Solution: Unified Data Integration & Advanced Process Analytics

The reference architecture establishes a central, cloud-based data platform that aggregates and correlates machine data from various sources in real time. By integrating OPC Aggregators and Kepware with Azure IoT Hub, factory data is ingested, processed, and made accessible via ThingWorx applications. Engineers can now visualize mechanical and digital process values, set dynamic thresholds, and receive alerts when deviations occur, ensuring precise process control and quality optimization.
Role of PTC Products:

- PTC Kepware: Standardizes and integrates machine data from disparate factory systems, ensuring a seamless flow of real-time process variables.
- ThingWorx Platform: Provides a robust dashboard for analyzing centerlining data, visualizing production trends, and enabling data-driven decision-making.
- ThingWorx Digital Performance Management (DPM): Automates the identification of process inefficiencies, allowing engineers to fine-tune machine settings dynamically.

B. Manufacturing – Scrap Reduction & Production Efficiency

Problem: Enhancing Scalability and Reducing Operational Inefficiencies

Customer faced challenges in scaling its IIoT solution as new sensors and data sources were introduced. Traditional systems struggled with the increased volume of factory data, leading to slow system response times and ineffective real-time analytics. Additionally, manual process adjustments resulted in inconsistencies, contributing to increased scrap rates and wasted materials.

Solution: Cloud-Scalable Infrastructure with Real-Time Process Optimization

To address these issues, the architecture leverages Azure IoT Hub, Azure Data Explorer (ADX), and InfluxDB to handle massive data streams and provide low-latency analytics. This ensures that production trends, environmental conditions, and machine parameters are continuously monitored and optimized in real time. Advanced machine learning models predict process inefficiencies, enabling operators to make automatic adjustments to reduce scrap and optimize yield.

Role of PTC Products:

- ThingWorx Platform: Acts as the central command hub, enabling real-time decision-making based on factory data trends.
- ThingWorx Digital Performance Management (DPM): Uses historical data to provide AI-supported recommendations for reducing material waste and improving overall equipment effectiveness (OEE).
- PTC Kepware: Ensures reliable, high-speed data acquisition from sensors, production lines, and environmental monitoring systems, feeding critical information into ThingWorx for optimization.

C. Driving Digital Transformation & Quality Optimization

Problem: Lack of Digital Process Automation & AI-Powered Decision Making

Customer's previous factory systems relied on manual reporting and fixed thresholds for process control, limiting the ability to detect and respond to process inefficiencies in real time. Operators needed a system that could provide intelligent, self-service applications with AI-driven recommendations for optimal production performance.

Solution: AI-Driven Automation & Dynamic Quality Control

The IIoT architecture integrates AI-powered predictive analytics to analyze deviations in real time and suggest automatic machine adjustments. Real-time applications, customizable process recipes, and dynamic alerting systems empower production teams with actionable insights. By embedding self-service applications in ThingWorx, engineers and operators can fine-tune process settings and receive automated recommendations for improving quality and efficiency.

Role of PTC Products:

- ThingWorx Platform: Serves as the central analytics hub, delivering AI-powered insights for continuous process improvement.
- ThingWorx DPM: Uses machine learning to correlate scrap rates with process variables, recommending changes that minimize waste and enhance quality.
- PTC Kepware: Captures real-time process data, ensuring that AI models receive accurate inputs for predictive analysis.

Customer's digital transformation journey is now backed by a robust, PTC-powered IIoT ecosystem that delivers continuous improvement, higher production efficiency, and proactive maintenance capabilities, ultimately driving the future of smart manufacturing.
Technical Architecture and Implementation Details

This section combines detailed technical descriptions with the overall reference architecture. It describes the core components, integration points, and implementation strategies that deliver a robust IIoT solution for the customer.

A. Architecture Overview Diagram

High-level architecture diagram for the final solution

B. Detailed Technical Components

- OPC Aggregators & Kepware: Stream and bridge machine data from production, DEV, and QA environments to Azure IoT Hub for real-time processing in ThingWorx. Key features: scalable ingestion; latency monitoring; secure device connectivity; segregated closed environments for DEV/QA.
- Azure IoT Hub: Ingests and secures machine telemetry data for analytics. Key features: centralized data ingestion; integration with Azure services.
- ThingWorx on VMs: Hosts the core IIoT application that processes data, provides end-user applications, and manages workflows. Key features: high performance; disaster recovery via VM snapshots; enhanced security through Azure AD integration and SSL support.
- Managed PostgreSQL: Provides high availability for persistent application data through replication and failover. Key features: data redundancy; managed service benefits; automated backup and recovery.
- Azure Data Explorer / InfluxDB: Handles advanced analytics, timeseries visualization, and predictive insights for telemetry data. Key features: real-time analytics; anomaly detection; cost-effective long-term storage.
- Monitoring & Logging Tools: Ensure comprehensive observability and prompt incident response across all components. Key features: real-time application monitoring; alerting; centralized log aggregation.
- RESTful APIs: Enable seamless integration with ERP systems, legacy data sources, and other IoT devices. Key features: secure data exchange; standardized connectivity protocols.

C.
User Personas

The success of this solution relies on a well-defined team of technical experts responsible for deployment and ongoing management:

Plant Manager: Oversees overall factory performance, uses data insights for strategic decision-making, and drives process improvements and efficiency.
Digital Transformation Lead: Analyzes and prioritizes valuable use cases for the business, implements IIoT solutions across factory operations, scales AI-driven automation and data analytics, and ensures long-term digital innovation and adoption.
Operations Manager: Oversees production lines, ensures efficiency, optimizes machine settings based on real-time insights, and troubleshoots and resolves process issues quickly.
Quality Assurance Engineer: Monitors production quality in real time, ensures compliance with quality standards, and reduces scrap and rework by addressing deviations early.
Maintenance Engineer: Monitors equipment health, responds to alerts, and performs predictive maintenance to prevent failures, minimizing downtime through proactive repairs.
Software Engineer: Develops and maintains IIoT backend and frontend systems, ensures seamless data integration and API connectivity, and optimizes system performance and scalability.
Cloud Architect: Designs and manages the IIoT cloud infrastructure, ensures scalable and secure cloud deployments, and optimizes data storage and processing in the cloud.
Security Analyst: Implements and monitors security measures for IIoT systems, conducts risk assessments and threat analysis, and ensures compliance with cybersecurity standards.
DevOps Engineer: Manages CI/CD pipelines for IIoT applications, automates deployments and infrastructure management, and optimizes system performance and reliability.

NOTE: Although all of these personas were required, their needs were fulfilled by a team of only 4–5 developers effectively playing multiple roles.
Outcome

Optimized Production Efficiency
By unifying machine telemetry, process parameters, and historical trends, the customer empowers engineers with real-time insights. AI-driven recommendations and automated adjustments replace trial-and-error, enabling precise, dynamic optimizations. Bottlenecks and inefficiencies are identified instantly, allowing rapid corrective actions for peak performance.

Reduced Waste & Enhanced Quality
Real-time process optimization and automated quality control significantly reduce material waste and variability. The system detects deviations at the source, enabling instant adjustments and ensuring consistent product quality, minimizing scrap, rework, and compliance risks.

Seamless Data Visibility & Collaboration
A centralized dashboard provides real-time access to critical metrics, eliminating fragmented reports and delays. Engineers and operators can compare production data across sites, standardize best practices, and drive continuous improvements across the network.

Future-Ready Innovation
Beyond immediate gains, this IIoT transformation lays the foundation for scalable sensor integration, AI-driven automation, and advanced predictive analytics. This reference architecture is not just about solving today's challenges: it establishes a long-term, adaptive framework that will continue to evolve, enabling our customer to remain at the forefront of smart manufacturing and industrial digitalization.

Additional Information

This section provides further insights into the project implementation and future strategic direction.

Parameter: Time to First Go-Live
Description: Estimated duration from project initiation to initial production deployment.
Example/Notes: Approximately 16 weeks

Parameter: Partner Involvement
Description: Key strategic and technical partners collaborating on the deployment.
Example/Notes: Microsoft, Ansys, and Deloitte supported the digital transformation initiative centered around ThingWorx.

Parameter: Customer Roadmap
Description: Future enhancements planned by the customer, such as AI-based predictive analytics and further automation.
Example/Notes: An expansion to incorporate AI and advanced machine learning–driven insights is planned.

Vineet Khokhar
Principal Product Manager, IoT Security

Disclaimer: These reference architectures are based on real-world implementations; however, specific customer details and proprietary information are omitted or generalized to maintain confidentiality.

Stay tuned for more updates, and as always, in case of issues, feel free to reach out to <support.ptc.com>
  What Is “Beyond the Pilot” and Why it matters?   Beyond the Pilot is a reference architecture series that highlights how real-world manufacturing challenges are addressed at enterprise scale across engineering, manufacturing, and service lines of business.   These stories reflect solutions built using — but not limited to — industrial IoT (IIoT), AI, and cloud technologies. Each entry captures how a specific use case — ranging from quality optimization to predictive maintenance — was solved using a combination of proven tools and repeatable design patterns.   Our goal is simple: To provide a blueprint for repeatable success, enabling technical leaders, architects, and operations teams to move from isolated wins to sustained value — securely, scalably, and strategically. Because technology doesn’t create impact in the lab — it creates impact when it’s scaled across the enterprise. Because innovation is only as powerful as its ability to sustain value across engineering, manufacturing, and service. Because manufacturers everywhere face the same question: How do we take what works in a pilot and make it work at scale? That’s exactly what this series will explore.     What You’ll Find in This Series   Each blog will highlight: The Context – The real-world problem we set out to solve The Architecture – The design patterns and integrations that made it possible The Execution – How we turned a concept into a production-ready system The Outcome – Tangible business results, from efficiency gains to cost savings The Lessons Learned – Best practices and insights you can reuse in your own journey   What’s Next   Upcoming posts will dive into: Manufacturing efficiency gains through connected operations Secure, cloud-native deployments that balance scale with compliance How digital twins are reshaping design, operations, and customer experience And much more…   Each story will link back here as the anchor introduction to this series. 
So stay tuned — and join us as we explore what it really takes to go beyond the pilot. Because that’s where innovation becomes transformation. Vineet Khokhar Principal Product Manager, IoT Security   Disclaimer: These reference architectures will be based on real-world implementation; however, specific customer details and proprietary information will be omitted or generalized to maintain confidentiality.   Stay tuned for more updates, and as always, in case of issues, feel free to reach out to <support.ptc.com>  
Hello ThingWorx community members!

First off, we greatly appreciate the feedback and enthusiasm for ThingWorx 10.0. Thank you!

With the release of 10.0, we've worked with customers to enhance their application performance using the caching feature. Based on this collaboration, our field services team (@DeShengXu, @gregeva, and @ssauvage-2) has developed the ThingWorx Cache Thing Guide v1.0.0, a comprehensive resource to help developers significantly improve query performance with in-memory caching.

This guide empowers you to optimize performance, reduce system load, and enhance user experience.

Why Use Cache Thing?
Boost Performance: The caching feature in 10.0 improves repetitive data access performance by orders of magnitude (~700x) compared to fetching the data from the database on every request.
Reduce Costs: Minimize calls to external services by storing results, saving on resource usage.
Improve Scalability: Handle high concurrency with ease and reduce the strain on the database.

Dive into the PDF guide attached for step-by-step instructions, code samples, and key design considerations. Follow the DOs and DON'Ts to create efficient cache keys, manage memory, and optimize database operations.

We'd love to hear your feedback as you upgrade to 10.0 and explore the caching capability. Feel free to share your thoughts here! Thank you!
Cheers,
Ayush Tiwari
Director Product Management
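The core idea behind the guide can be illustrated outside ThingWorx with a minimal sketch. This is not the Cache Thing API: the `SimpleCache` class and `buildCacheKey` helper below are hypothetical stand-ins that show why a stable, specific cache key (entity + service + parameters) lets repeated reads skip the expensive database call.

```javascript
// Illustrative sketch only: Cache Thing semantics approximated with a plain Map.
class SimpleCache {
  constructor(ttlMs) { this.ttlMs = ttlMs; this.store = new Map(); }
  getOrCompute(key, computeFn) {
    const hit = this.store.get(key);
    if (hit && Date.now() - hit.at < this.ttlMs) return hit.value; // cache hit
    const value = computeFn(); // expensive lookup (e.g., a database query)
    this.store.set(key, { value, at: Date.now() });
    return value;
  }
}

// A good cache key is stable and specific: entity + service + parameters.
function buildCacheKey(thingName, serviceName, params) {
  return `${thingName}::${serviceName}::${JSON.stringify(params)}`;
}

const cache = new SimpleCache(60000); // 60 s time-to-live
let dbCalls = 0;
const key = buildCacheKey("Pump01", "GetLatestKPIs", { window: "1h" });
const v1 = cache.getOrCompute(key, () => { dbCalls++; return { kpi: 42 }; });
const v2 = cache.getOrCompute(key, () => { dbCalls++; return { kpi: 42 }; });
console.log(dbCalls); // 1 — the second call is served from memory
```

The same reasoning applies inside a Cache Thing: the second lookup never touches the database, which is where the order-of-magnitude speedups come from.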
Hello ThingWorx'ers!   First off, we’re pleased to see the excitement around ThingWorx 10.0! Your feedback on features like the Debugger, Caching, and the Windchill Navigate View Work Instructions app has been fantastic. I’m especially amazed by the rapid adoption of IoT Streams over the past two months. I'm glad it’s delivering value!   Join us for an exclusive ThingWorx 10.0 Launch Webinar on September 9, 2025, at 11:00 AM EDT. We’ll dive into the latest features, including real-time IoT Streams, enhanced scalability, stronger security, and exciting AI updates that will power your Industrial IoT journey. Register now: ThingWorx 10.0 Webinar   After the webinar, we’d love to hear your thoughts! Share what you’re loving about 10.0 and any feedback you may have.     Keep pushing the boundaries of innovation!   Cheers, Ayush Director Product Management, ThingWorx        
Containerization has been a cornerstone of modern software deployment, offering well-known benefits like portability, scalability, and consistency across environments. In the industrial IoT (IIoT) space, these advantages are particularly valuable, enabling organizations to manage complex systems with greater agility and efficiency. At PTC, we recognize that while containerization is not new, its application in IIoT continues to evolve, and our platforms, ThingWorx and Kepware, are designed to help you harness its full potential in practical, impactful ways.

ThingWorx: Streamlining IIoT with Containerization
ThingWorx has supported containerization for some time now, allowing users to build ThingWorx Docker container images and deploy applications with ease, whether on-premises, in the cloud, or in hybrid setups. This approach simplifies the deployment process, reduces configuration overhead, and ensures that your IIoT solutions can scale as your needs grow. For those already familiar with containerization, ThingWorx offers Dockerfiles allowing customers to build, run, and deploy ThingWorx as Docker containers for development and production use cases. See our help center for already available information on this: https://support.ptc.com/help/thingworx/platform/r9.7/en/index.html#page/ThingWorx/Help/Installation/ThingWorxDockerGuide/thingworx_docker_landing_page.html

New Resource: Deploying ThingWorx on Kubernetes
As container adoption matures, so does the need for robust orchestration tools. That's why we're excited to introduce a new best practices guide for deploying ThingWorx containers on Kubernetes, with a focus on Azure Kubernetes Service (AKS). This guide is designed to help you take the next step in managing your containerized applications at scale, offering information on:
Setting up and managing Helm chart repositories.
Preparing your Azure environment, including resource groups, virtual networks, and container registries.
Creating and managing content repositories for Docker images.
Deploying and configuring Azure Kubernetes Service (AKS) clusters.
Implementing essential supporting components, such as monitoring stacks, certificate managers, ingress controllers, Azure PostgreSQL databases, and storage accounts, to facilitate ThingWorx deployment.
Detailed steps to deploy ThingWorx in various configurations, including standalone, high availability (HA), and with the eMessage Connector (eMC).
Procedures for upgrading ThingWorx deployments.

You can access this guide on our GitHub repository: ThingWorx Kubernetes Deployment (twx-k8s). Whether you're scaling to support thousands of devices or simply looking for more efficient management of your IIoT infrastructure, this guide provides the best practices you need to succeed in your containerization efforts for ThingWorx.

Kepware Edge: Connectivity at the Source
On the connectivity front, Kepware Edge brings the power of containerization directly to the edge of your operations. By packaging industrial-grade connectivity into a lightweight, container-friendly solution, Kepware Edge allows you to deploy secure, reliable data access right where your machines and devices are located. For more details on Kepware Edge, check out our recent announcement: PTC Announces Kepware Edge, and stay tuned for more updates on its availability.

Practical Tools for Your IIoT Journey
Improving DevOps for applications built on ThingWorx is a key priority for us at PTC, and containerization is a critical piece of it. We invite you to explore these resources and see how they can fit into your existing IIoT solution development workflows. Visit the ThingWorx Kubernetes guide on GitHub and let us know your feedback or any questions around containerization by posting on the IoT community.

Cheers,
Ayush Tiwari
Director Product Management, ThingWorx
Abstract
This article explores an approach to optimizing ThingWorx thread usage during service execution. It addresses a common scenario where the Event Processor thread pool becomes saturated, which can lead to queuing and slow responsiveness for users. This article is a proposal for configuring and optimizing services, with an example. Please use the post comments to provide feedback or suggestions. All capabilities used are 100% ThingWorx OOTB. In the following, the acronym SETO (Service Execution Thread Optimization) refers to the provided entities and their capabilities.

Introduction
During the execution of a service, events trigger a subscription that runs in a new thread. In addition to the best practices for Timers and Schedulers, SETO uses this mechanism to parallelize the execution of a service across subdivisions of a population of Things. When using ThingWorx, clients often encounter issues related to the regular execution of services such as "get data" or "update KPIs". When executed frequently, these services can lead to several challenges:

Single-thread processing: Processing occurs in only one thread, which can lead to long execution times.
Thread pool saturation: When too many services execute simultaneously, the thread pool can become saturated, slowing down the entire system.
Queue overflows: Thread pool saturation can lead to queue overflows, where new tasks cannot be processed in time.
Task loss: Overflows can result in the loss of important tasks, affecting service reliability.
Unresponsive server: Under overload, the server can become unresponsive, impacting user experience and operational continuity.

Data processing requirements often drive teams to decrease execution time without proper consideration of the impact on the stability and availability of the other required services.
This post provides explanations and examples of how to think differently about executing data processing quickly and reliably while also safeguarding the system's stability.

Why Use Service Execution Thread Optimization?
To address these issues, Service Execution Thread Optimization (SETO) manages resource saturation and its consequences. SETO lets you control the percentage or number of threads used and a safety inter-execution delay.

Selective Execution Management
Prioritized processing sorts tasks from oldest to newest, focusing on older tasks to prevent them from falling too far behind. However, it is crucial not to limit execution to only old tasks but to find a balance that processes all eligible tasks. If some Things do not execute the service before the inter-execution delay expires, SETO skips them; those remaining Things are prioritized at the next timer event. Selectively orchestrating work execution lets you design your data processing workflow so that it is coherent with your use cases' functional requirements while being tuned for performance and your provisioned resources. The example here covers functional requirements where data must be processed within a 1-hour period; it therefore sorts the processing oldest first and allows execution windows to be skipped when the system is busy. Your application of this approach may use a more complex strategy suited to your needs, for example treating sensitive assets with more urgency depending on asset type or location. This is where multiple copies of the SETO entity can be adapted to your use cases: selection criteria, work sorting, timing, thread usage configurations, and so on.

Applicability and Ease of Use
SETO has been designed to avoid touching the existing model as much as possible.
The only prerequisite is a Datetime property on each Thing to track the last execution of the service, one property per service.

How to use it?
Configuration of Allocated Threads Per Service
1. Create a Thing from the PTCSC.SETO.WorkProcessManager_TT ThingTemplate.
2. Configure the AllocatedThreadsPerService table:

TS or TT: The entity type that carries the service. Accepted values are "TT" for ThingTemplate and "TS" for ThingShape.
TS or TT name: The name of the entity that carries the service.
Service name: The name of the service whose execution you want to streamline.
Safety timeout: A safety margin, in seconds, that defines a hard stop before the next timer trigger. Minimum of 5 seconds. Example: if the Timer is set to 3 minutes and the safety timeout to 10 seconds, the service will not be executed on remaining Things after 2 minutes and 50 seconds. The safety timeout must be less than the Timer update rate.
Last update property name: The name of the Datetime property on Things used to store the last service execution timestamp.
Update older than: The minimum number of minutes since the last update before the service is triggered on a Thing. If set to 0, all Things are taken into account every time.
Percent available threads: The percentage of available threads to use. The value is constrained from 0 to 99, and the result is truncated to a whole number. Example: 13 threads available, 60% requested: 13 × 0.6 = 7.8, so 7 threads will be used.
Use number of threads: Selects how the requested thread count is determined. TRUE: use the percent of available threads. FALSE: use the Number of threads value column.
Number of threads: The maximum number of threads allowed to run in parallel to process the list of Things. This number should leave at least a few threads free. SETO always leaves at least one thread free, so the number of threads actually used may be lower than requested. The remaining threads can be used by executions outside SETO.
3.
Creating a Subscription in the new Thing.

Create a Subscription to link the Timer to the Service to execute.

Result example
The following example demonstrates how logging is integrated into SETO during the execution optimization process. Logging is part of the ServiceExecution_SUB subscription in the PTCSC.SETO.WorkProcessManager_TT ThingTemplate.

Important Notes
The appropriate entity selection, sorting, and scheduling algorithms depend on the specific use cases and workload.
SETO uses QueryImplementingThingsOptimized for better performance, so indexing is encouraged.
Processing needs to address the resulting status, and any needed retries, as part of the scheduling algorithm. You will want to add use-case-specific logic that ensures entities with failing processing are only attempted a few times, and that their error state is notified to and addressed by a system administrator. For example, simply sorting by oldest last execution time will cause anomalous non-working entities to surface to the top and take priority over all other processing (an anti-pattern to avoid, requiring a circuit breaker).
SETO can only trigger Services without input parameters. You could evolve the concept to pass Service parameters through JSON property event data.
In this article, a Timer is used to trigger a regular event. As a best practice, it is better to use a Scheduler: when the server starts, all Timers start immediately, whereas a Scheduler triggers based on the current date and time, not on server startup.

Conclusion and provided file
By optimizing service execution, businesses can improve the performance and reliability of their applications built on the ThingWorx Platform. Proper configuration helps prevent common issues and ensures smooth service execution. Attached to this article are the SETO entities, valid from version 9.3.
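The selection logic described above can be sketched in plain JavaScript. This is an illustrative approximation, not the attached SETO entities: `planExecution` and its configuration fields are hypothetical names mirroring the table columns (a minimum-age filter, oldest Things first, the percent of available threads truncated to a whole number, and at least one thread always left free).

```javascript
// Hypothetical sketch of SETO-style work planning: pick eligible Things,
// oldest first, and compute how many worker threads to use.
function planExecution(things, nowMs, cfg) {
  // Eligible: last run older than cfg.updateOlderThanMin minutes (0 = everything).
  const cutoff = nowMs - cfg.updateOlderThanMin * 60 * 1000;
  const eligible = things
    .filter(t => cfg.updateOlderThanMin === 0 || t.lastRunMs <= cutoff)
    .sort((a, b) => a.lastRunMs - b.lastRunMs); // oldest first

  // Thread budget: percent of available threads (truncated) or a fixed count,
  // always leaving at least one thread free for work outside SETO.
  let threads = cfg.usePercent
    ? Math.floor(cfg.availableThreads * cfg.percentAvailableThreads / 100)
    : cfg.numberOfThreads;
  threads = Math.max(1, Math.min(threads, cfg.availableThreads - 1));

  return { eligible, threads };
}

const plan = planExecution(
  [{ name: "A", lastRunMs: 0 }, { name: "B", lastRunMs: 500000 }, { name: "C", lastRunMs: 100000 }],
  600000, // "now", 10 minutes into the epoch for the example
  { updateOlderThanMin: 5, availableThreads: 13, percentAvailableThreads: 60, usePercent: true }
);
// 13 × 0.60 = 7.8, truncated to 7 threads; A and C are eligible (older than
// 5 minutes) and are processed oldest first, while B waits for the next window.
console.log(plan.threads, plan.eligible.map(t => t.name));
```

Things skipped in one window (like B above) surface at the top of the next one, which is exactly why the circuit-breaker note matters for permanently failing entities.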
I've used the more generic term "resource" here, as it is not only the thread pool that can become saturated in this context; other tightly coupled downstream aspects, like the database, can saturate as well.

NB: thanks to Greg EVA for sharing knowledge and review.
Durable Queues Are Here by Tori Firewind, Principal Cloud Architect   Introduction Well folks, the durable queue is here, and it is… durable! We tried everything in our dev ops arsenal to bring it down, but no matter what we threw at it, Event Hub stayed up. No data was lost in any test scenario. ThingWorx 10.0 is a remarkably more mature and stable offering than ever before with its new use of Kafka to prevent data loss, as well as internal queue management and queue diagnostics features.   As we announced last quarter, new diagnostics features allow us to record diagnostic data from the moment a problem starts, ensuring RCA can begin immediately, without further time spent waiting for issues to occur again. These are highly configurable, and PTC is ready to support customers opting-in to acceleration-based diagnostics!   Coming out now is the new internal throttling mechanism within ThingWorx, which ensures that even when queues max out, regardless of what those queues are doing, ThingWorx remains up and capable of other activity. In some of our failed scale test scenarios, the event queue was maxed out for many hours, without any subsequent out of memory crash of the Platform. It was remarkably durable!   Even better if the durable queue is opted-in, because then those events also happen faster and more reliably. The durable events fire immediately within the Platform when a durable property is updated. Both of these go to Event Hub simultaneously. The load within Event Hub is balanced independently and processed more quickly than by ThingWorx, improving overall performance of both property updates and events, while still leaning heavily on ThingWorx for the web access and data redirect and storage.   When the lag is well controlled, the subject of most of the rest of this article, the property values go in, they come out within milliseconds, and the latency is not significant in spite of the added component. 
And of course, if something happens to the Platform in the meantime, never fear, for the data truly is preserved and accessible within Event Hub.

Data loss is a thing of the past with ThingWorx 10.0!

Configuration Situation
The one sacrifice of durability is scale, which can be challenging with Event Hub. There are some key considerations when optimizing ThingWorx for throughput, which should be considered necessary, as well as when sizing Event Hub.

ThingWorx Throughput Optimization
Within ThingWorx, go to the system object and edit it (this may require admin permissions). Modify the Configuration to reduce the overall number of threads, which in turn reduces the distribution of the Event Hub load, allowing each thread's share to be processed more quickly. Also lower the buffer scan rate for persistent properties so they flush more often.

Especially lower the maximum number of items before flushing, the buffer that usually delays writes to the database so as not to overwhelm it. That is less of a factor here, as Event Hub has an internal load balancer and handles throughput better than a database would. These are the settings to apply to all opt-in queues for optimal performance.

Event Hub Sizing and Partition Optimization
Within Event Hub, there are several types of processing units. We will focus on the lower two tiers, as the highest tier is very expensive and less commonly used, and the concept is the same as for the Premium (mid) tier.

The Standard Tier uses TUs, a.k.a. “Throughput Units”, which are less performant but also less resource intensive, and so much less expensive. There is a maximum of 40 TUs overall, and 32 partitions per Event Hub in Standard. There is one Event Hub each for Logged Properties, Persistent Properties, and Unordered Events in both Standard and Premium.

The Premium Tier instead uses PUs, a.k.a.
“Processing Units”, which are more performant and more resource intensive, with lower commit request latency, meaning commits within Event Hub happen faster. The data is received faster and the cost is greater, but the stability is also greater, and the risk of runaway lag or eventual data loss is much lower. The risks are much milder than before, and recovery is discussed below.

In Premium, there is a maximum of 100 partitions per Event Hub, with 200 total per PU. There is a maximum of 16 PUs, and these go only in increments of 2. There are diminishing returns with more resources, however, directly proportional to the number of things: more things overall will reduce the write capabilities within Event Hub, as more CPU resources have to be spent on the network communication portion of the data exchange.

(Figures: lag comparison at low, medium, and high partition counts)

It is better to use more partitions than fewer, and a higher number of partitions will result in less latency and lower mean lag. There is always some lag, however, as it is calculated from the number of items queued versus completed. Both of these queues are very active, and healthy lag is usually between 60–80% of the total property wps (writes per second), with peaks that do not increase over time. Sometimes the lag can be spiky, which must be considered in the alerting infrastructure.

The mean load should be significantly less than 16 PUs and load tested, so that there is room to scale up and recover any lag that accrues from the unpredictable nature of production systems. Always leave room for spikes!

Recovery of the Event Hub
The short version: do not modify the partitions to resolve runaway lag.

If more partitions are added while the lag is falling behind, then instead of helping to catch up, they significantly delay the recovery.
Anything currently in Event Hub will not be distributed across the new partitions, only new things that are added later, but all of the partitions will still be polled for data, including the new ones, which slows things down even more.

The right way to deal with runaway lag is to temporarily increase the TUs or PUs to a decently higher setting, let the lag catch up, then increase the number of partitions, and then wait and see how the server responds before finally downsizing once again. It is important to consider that there is a maximum size for processing data and a maximum number of partitions per Event Hub, creating a hard upper limit for performance and scale.

Make sure any Event Hub instance is sized small enough to allow for an upsize in the case of runaway lag. Edge load is not guaranteed to remain perfectly steady; generally speaking, there can be surprise disconnections, reconnections, and spikes in utilization. There really is no other way to ensure no data loss occurs due to runaway lag, especially since there usually is no way to turn off the Edge load at will in production.

(Figure: lag grew into the hundreds of thousands quickly and was beyond recovery at this size. The partitions were increased at 11:45 to demonstrate the poor distribution of data processing within Event Hub; recovery took around 15 hours.)

(Figure, close-up: every partition is doing only a tiny amount of work, and recovery takes quite a long time.)

With too much lag, the data will be lost in one of two ways: by not being added to an already full queue within Event Hub, or by erroring out as Event Hub tries to pass it back with a variety of errors. If Event Hub backs up too much to be recovered by upsizing, or it cannot be upsized enough, it can be deleted; only the affected type of data sees loss, with no downtime for ThingWorx.

A Healthy Example
An XL ThingWorx deployment was used to ensure that the Platform was not the limiting factor.
The required TUs and PUs are the figures calculated by the Grafana dashboard, coming from the Kafka metrics. The average latency for subscriptions is calculated by having a start Datetime property (not logged or persisted) update when the rest of the property updates fire, and an end Datetime property update when the subscriptions to the persistent properties run; the timespan is then calculated and written to the script log.

This example was an XL-sized application: 80k Things, each Thing with 20 properties total (10 Logged and 10 Persistent), writing to Event Hub twice per minute. There were 5 events as well to measure the latency, but due to the design of the test (property updates fired from a timer subscription), opting in to Durable Events causes performance issues that affect the test results. That is why events show up in the Event Queue, which does not happen in opt-in tests.

These are calculated by the Kafka Metrics dashboard:
Required TUs: 115
Required PUs: 2

These were configured for this test:
TUs Configured: N/A
PUs Configured: 16
Partitions (respectively): 100, 100, 0
Average Latency for Subscriptions: < 100 ms

The test begins at 11 am. Lag is steady and the spikes are not increasing over time (though they come close).

The property write rate includes the 20 properties that go to Event Hub, plus the 10 Datetime properties for measuring latency, and one additional infotable property for a more realistic load.

(Figure: Platform performance looks the same as it usually does; there is no change.)

(Figure: the Event Queue is high because of the design of the test, as all of the Things update on ThingTemplate-level timer subscriptions. This is much lower with opt-in for Durable Events.)

How durable!
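The latency measurement described above reduces to a simple timespan calculation. This is a hypothetical sketch, not the test harness itself: `measureLatencyMs` stands in for the subscription logic that compares the start and end Datetime properties and writes the result to the script log.

```javascript
// Illustrative sketch: latency is the difference between the timestamp recorded
// when the property updates fire (start) and the timestamp recorded when the
// persistent-property subscription runs (end).
function measureLatencyMs(startIso, endIso) {
  return new Date(endIso).getTime() - new Date(startIso).getTime();
}

const latency = measureLatencyMs(
  "2025-01-01T00:00:00.000Z", // start Datetime property update
  "2025-01-01T00:00:00.087Z"  // end Datetime property update in the subscription
);
console.log(`subscription latency: ${latency} ms`); // 87 ms in this sample
```

Averaging such samples over the test window is what yields the reported "< 100 ms" average latency figure.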