
IoT & Connectivity Tips

Containerization has been a cornerstone of modern software deployment, offering well-known benefits like portability, scalability, and consistency across environments. In the industrial IoT (IIoT) space, these advantages are particularly valuable, enabling organizations to manage complex systems with greater agility and efficiency. At PTC, we recognize that while containerization is not new, its application in IIoT continues to evolve, and our platforms—ThingWorx and Kepware—are designed to help you harness its full potential in practical, impactful ways.

ThingWorx: Streamlining IIoT with Containerization

ThingWorx has supported containerization for some time now, allowing users to build ThingWorx Docker container images and deploy applications with ease, whether on-premises, in the cloud, or in hybrid setups. This approach simplifies the deployment process, reduces configuration overhead, and ensures that your IIoT solutions can scale as your needs grow. For those already familiar with containerization, ThingWorx offers Dockerfiles that allow customers to build, run, and deploy ThingWorx as Docker containers for development and production use cases. See our help center for the available documentation: https://support.ptc.com/help/thingworx/platform/r9.7/en/index.html#page/ThingWorx/Help/Installation/ThingWorxDockerGuide/thingworx_docker_landing_page.html

New Resource: Deploying ThingWorx on Kubernetes

As container adoption matures, so does the need for robust orchestration tools. That's why we're excited to introduce a new best practices guide for deploying ThingWorx containers on Kubernetes, with a focus on Azure Kubernetes Service (AKS). This guide is designed to help you take the next step in managing your containerized applications at scale, offering information on:

- Setting up and managing Helm chart repositories.
- Preparing your Azure environment, including resource groups, virtual networks, and container registries.
- Creating and managing content repositories for Docker images.
- Deploying and configuring Azure Kubernetes Service (AKS) clusters.
- Implementing essential supporting components such as monitoring stacks, certificate managers, ingress controllers, Azure PostgreSQL databases, and storage accounts to facilitate ThingWorx deployment.
- Detailed steps to deploy ThingWorx in various configurations, including standalone, high availability (HA), and with eMessage Connector (eMC).
- Procedures for upgrading ThingWorx deployments.

You can access this guide on our GitHub repository: ThingWorx Kubernetes Deployment (twx-k8s). Whether you're scaling to support thousands of devices or simply looking for more efficient management of your IIoT infrastructure, this guide provides the best practices you need to succeed in your containerization efforts for ThingWorx.

Kepware Edge: Connectivity at the Source

On the connectivity front, Kepware Edge brings the power of containerization directly to the edge of your operations. By packaging industrial-grade connectivity into a lightweight, container-friendly solution, Kepware Edge allows you to deploy secure, reliable data access right where your machines and devices are located. For more details on Kepware Edge, check out our recent announcement, PTC Announces Kepware Edge, and stay tuned for more updates on its availability.

Practical Tools for Your IIoT Journey

Improving DevOps for applications built on ThingWorx is a key priority for us at PTC, and containerization is a critical piece of it.
We invite you to explore these resources and see how they can fit into your existing IIoT solution development workflows. Visit the ThingWorx Kubernetes guide on GitHub and let us know your feedback or any questions around containerization by posting on the IoT community.

Cheers,
Ayush Tiwari
Director Product Management, ThingWorx
View full tip
Platform Support

Windows Server 2008 R2 SP1, Windows Server 2012 R2, and CentOS 7.1 (paid version only) are recommended and fully tested for production.

Server Support

• KEPServerEX v6.2, which includes the ThingWorx Native Interface. Note: Non-Kepware OPC servers and earlier versions of KEPServerEX can be connected to KEPServerEX v6.2, functioning as an aggregator (OPC UA Server). KEPServerEX and ThingWorx can be installed on the same machine; however, for production, separate machines are recommended.
• ThingWorx 8.0 with PostgreSQL 9.4.10-1 database, Express
• ThingWorx 8.0, with the ThingWorx Manufacturing Apps imported as a ThingWorx extension

Minimum Recommended Hardware

• OS — Windows 2008 R2 SP1 / Windows 2012 R2
• Disk Space — 100 GB
• RAM — 7 GB
• CPU — 3 Core

Client Browser Support - Paid Version

• Chrome 44
• Firefox 35+
• Safari 6.1.6+
• Internet Explorer 11+

For more information on the installation requirements, see the Product Requirements section of the install guide.
View full tip
Question: What are some best practices around building IIoT solutions with ThingWorx?

Meet Ward. Ward works on the product management team for our Manufacturing Apps (i.e., Asset Advisor, Operator Advisor, Production Advisor, etc.). He's a super cool and smart guy, and he always has an answer to my ThingWorx questions. He has so many answers, in fact, that he worked closely with other ThingWorx experts like Sangeeta to create the ThingWorx Application Development Guide.

I sat down with him to hear his top few tips from the guide. And, just in case we don't have enough fun around here on "Ask Kaya," we decided to list his top tips not by "1"-"2"-"3", but by "W"-"A"-"R"-"D".

Without further ado, here are Ward's top tips from the ThingWorx Application Development Guide.

Whitelist your IPs for application keys. (See page 67.)
Auto Refresh widget vs. GetProperties service? How should I update live data to my mashup? (See page 25.)
Reuse components to increase efficiency and improve your application design. (See page 69.)
Don't use a Thing Template when you really should use a Thing Shape. (See page 10.)

To see more, check out the full ThingWorx Application Development Guide here!

Look out for our next release of the App Development Guide in July! It'll feature our Manufacturing Apps to share even more ThingWorx best practices!

Reach out with any questions and stay connected!
- Kaya
View full tip
Hi everyone,

In anticipation of ThingWorx 9.0's biggest feature, active-active clustering, we'd like to provide an architectural overview of a sample active-active configuration and its underlying components. If you haven't already seen it, we invite you to read our previous Community tech tip, where we introduce the concept of active-active clustering for ThingWorx Foundation, which enables you to:

- significantly reduce unplanned downtime for your mission-critical services and apps
- support horizontal scalability of the ThingWorx Server, where you can scale your services up and down based on your requirements
- easily run, package, deploy, and operate advanced apps and services with the help of intuitive browser-based navigation, interactive monitoring and debugging tools, and more
- deploy anywhere - public cloud, private datacenter, on-premise, hybrid, or even locally on your laptop - with deep optimizations for Azure

Now, before we go too deep, we'd like to let you know that you can continue to seamlessly upgrade from previous versions of ThingWorx releases to upcoming ThingWorx 9.0 releases. Previously, you could deploy ThingWorx Foundation in a "single server" mode and, for high availability, in "active-passive cluster" mode (see here for details). From the ThingWorx 9.0 release onwards, you'll be able to continue to deploy ThingWorx Foundation in "single server" mode, and, for high availability scenarios, via our new "active-active cluster" mode. Please note that the active-passive clustering configuration will no longer be supported in ThingWorx 9.0 or onwards.

Let's start with a quick recap of what ThingWorx Foundation 9.0 looks like in a single server deployment.

ThingWorx 9.0 Deployed in a "Single-Server" Architecture

Below is a high-level diagram depicting the main architectural layers and components of ThingWorx Foundation deployed in single server mode.

Deployment Components

Below is a brief summary of all major architectural components and their purpose in the deployment architecture.

The Client Layer

This layer is comprised of everything that connects with, sends data to, and receives content from the ThingWorx platform. It can be broken down into two groups:
- Devices/Things: Things, devices, agents, and other assets.
- Users/Clients: People and the respective products (primarily web browsers) they use to access ThingWorx.

The Application Layer

This layer is where ThingWorx Foundation and other applications deployed with ThingWorx Foundation reside, such as ThingWorx Analytics, ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and others. This layer provides connectivity to the client layer, performs authentication and authorization checks, ingests/processes/analyzes content, and reacts to conditions by sending alerts. For a specific ThingWorx Foundation deployment that needs basic device data ingestion, processing, and storage, you can set up only the ThingWorx Foundation server. In some cases, with a large number of device connections, you may want to set up a ThingWorx Connection Server alongside ThingWorx Foundation for further scalable connectivity.
- ThingWorx Foundation: ThingWorx Foundation is a Java-based application that serves as a rapid, model-based application development platform.
- Shared File Storage: Shared disk space to contain ThingWorx storage repositories and to store and archive log files accessed by all ThingWorx Foundation servers. A NAS file storage, AWS Elastic File System, Azure Files, or similar could be used for this purpose.

The Data Layer

ThingWorx Foundation includes several persistence provider implementations that enable you to choose a database option that best fits your use case. A persistence provider enables the connection to a data store and the ability to perform CRUD operations on that data. See here for more information. Currently, there are two basic variations of persistence providers:
- Model Provider: Responsible for ThingWorx model metadata and system data.
- Data Provider: Responsible for runtime data ingested against the model elements, including streams, value streams, data tables, etc.

ThingWorx supports H2 (in-memory database), PostgreSQL, MS SQL Server, and Azure SQL as both model and data providers, and InfluxDB as a data provider only. Please see here for model and data best practices.

ThingWorx 9.0 Deployed in an Active-Active Clustering Reference Architecture

Below is a reference architecture diagram for ThingWorx 9.0 with multiple ThingWorx Foundation servers configured in an active-active cluster deployment. Please note that this is only one reference example of how ThingWorx 9.0 can be deployed in an active-active clustered environment; other architectural configurations are possible depending on the needs of the specific deployment.

Deployment Components

Once you have developed an understanding of the basic architectural components in single server mode, below are the additional components required to run ThingWorx in active-active cluster mode.

The Client Layer

This is similar to what is described in the single server configuration above.

The Application Layer

In this layer, if you're familiar with the ThingWorx active-passive cluster configuration, then you may be aware of most of the components used below, with the exception of a new component, Apache Ignite, which provides distributed caching for the horizontally scalable ThingWorx Foundation servers.
- Load Balancer: A third-party device that receives network traffic and distributes requests among available servers. In an active-active cluster configuration, the load balancer is used to direct WebSocket-based traffic to the ThingWorx Connection Servers, while user request (http/https) traffic is distributed directly to the ThingWorx Foundation servers. Users can continue to use a load balancer with ThingWorx 9.0 that they might already be using for their existing active-passive or single server deployments with ThingWorx 8.X or previous releases. Some example load balancers include, but are not limited to: HAProxy, Azure Application Gateway, and AWS Application Load Balancer.
- ThingWorx Connection Services: These services handle all message routing to and from devices, providing scalable connectivity to the ThingWorx Foundation server. With the ThingWorx 9.0 release, ThingWorx Connection Services have been upgraded with many additional features to support active-active clustering of the ThingWorx Foundation servers; they now route all WebSocket traffic in a round-robin fashion to the connected ThingWorx Foundation servers. Depending on the use case, you can choose among the multiple ThingWorx Connection Services available, such as ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and ThingWorx Protocol Adapter Toolkit. Please see here for further details. Please note that for ThingWorx 9.0 releases, a ThingWorx Connection Server is required in an active-active configuration to support all WebSocket-based traffic routing, including egress of files and device messages from multiple ThingWorx Foundation servers back to the devices; it also serves WebSocket communication from ThingWorx Mashup-based applications to ThingWorx Composer.
- ThingWorx Foundation: With ThingWorx 9.0 and onwards, you can set up ThingWorx Foundation servers in an N-active-active cluster model to provide higher availability for your applications and horizontally scale the Foundation server nodes up and down based on your scalability needs.
- Apache ZooKeeper: Apache ZooKeeper is a centralized service for maintaining configuration information and naming, as well as providing distributed synchronization and group services. It is a coordination service for distributed applications that enables synchronization across a cluster. Specific to ThingWorx, ZooKeeper is used for distributed locking, selecting a singleton server during server initialization, and service discovery for Apache Ignite, allowing it to find instances of ThingWorx Foundation servers.
- Apache Ignite: This offers a distributed cache for the active-active cluster setup. It is used by ThingWorx Foundation servers to share state. It may be embedded with each ThingWorx instance or run as a standalone cluster for larger scale. In this configuration, Ignite is set up as a standalone cluster, but it can also run embedded within ThingWorx Foundation. Running Ignite as a standalone cluster is more suitable for larger scale, as it supports higher vertical scale of memory in the deployment setup.

The Data Layer

ThingWorx is largely database agnostic. You can continue to use the officially supported persistence providers that you may already be using in your existing deployments based on ThingWorx 8.X or previous releases. Please look out for an upcoming ThingWorx update, as well as enhanced installation documents, to help with your upgrade and migration questions at the general availability of ThingWorx 9.0. Please note that the diagram does not make the distinction between model and data providers; depending on your data ingestion needs, separate model and data providers can be used. As a reminder, all databases should be deployed in a high-availability configuration to help eliminate any single point of failure.

In closing, we can't wait to launch active-active clustering in 9.0 to help you:
- dramatically reduce application downtime
- scale your deployments and more efficiently manage your apps, regardless of where they're deployed

If you have any questions about active-active clustering or its architecture, please do not hesitate to reach out!

Stay connected!
Kaya
View full tip
Sunshine, beach chairs & ThingWorx 9.2. What more could you need for your summer essentials?

Targeted for June 2021, our next release features intelligent one-click deploy with Solution Central, new web components, and an enhanced IAM integration!

Let's dive deeper into each.

Deploy an entire solution in one click with Solution Central's intelligent one-click deploy. Good news: you followed a modular design pattern and broke up your application into smaller libraries and components. You can now enjoy easier maintenance and re-use of your app. Bad news: your app now has 10 different dependencies, with differing versions, each with a required order to import into ThingWorx. Now, try to share these modules with colleagues, or use them on environments where code may already exist. Not exactly a day at the beach, right? Fear not, one-click deploy has you covered. You click the button; we spin through and find the right dependencies, the right versions, and the right order, and load them all into the target platform upon a deploy request. Solution Central one-click deploy means more sun and sand for you! Check out this post to learn more about what's available in Solution Central 3.0!

Intelligent One-Click Deploy with Solution Central

Enhance your solutions with our latest web components! Imagine this: you're a systems developer at a large parts manufacturer and your boss has asked for a detailed analysis of downtime over the last six months. Not to worry! ThingWorx 9.2 features a new waterfall chart that can be leveraged to understand dynamics in defect counts, loss reasons, time bottlenecks, and other conditions. Be sure to try it out! And, while you're at it, try out our new web components that are available now as preview: a toolbar to add key actions like filtering at the top of your screen or to data-intensive widgets (e.g., grids), a more flexible grid, and a fancy new paradigm for interface developers. These three preview widgets are fully functional and tested in 9.2. Preview widgets will graduate in a future release when we add all planned functionality or address any perceived usability feedback. Don't be afraid—it just means more good things are coming. Surf's up, you can use these widgets safely now!

New Waterfall Widget Coming in ThingWorx 9.2

Leverage new integrations with Azure Active Directory for more seamless user management. In prior releases we have offered integration to Azure Active Directory and SSO through Central Authorization Service type products or through custom authenticator extensions to ThingWorx. With our new Azure AD integration, you can cut the custom extensions and additional software out of the picture. We now accept SAML assertions from Azure AD directly to the ThingWorx platform, which makes it that much easier to deploy your app in your organization's SSO flow. It's as smooth as that frosty tropical drink when the sun goes down.

Like what you see? Want to try it out for yourself? ThingWorx 9.2 is targeted for June 2021, so be sure to keep a lookout on the horizon. Bump, set, spike!

Stay cool & connected,
Kaya
View full tip
This example provides the ability to generate a simple entity structure and some historical data for each entity. The historical data is run through a ThingWorx service to generate histogram data for display in a bar chart. The attached ThingWorx entities and PDF document contain the example as well as its documentation.
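As a rough illustration of the binning step such a service performs, here is a minimal, hedged sketch in ThingWorx server-side JavaScript. The DataShape name "myHistogramShape" (fields: "bin" STRING, "count" INTEGER) and the input infotable "values" (field: "value" NUMBER) are illustrative assumptions; the attached entities contain the actual implementation.

// Hedged sketch of a histogram service - names are illustrative, not from
// the attached example.
var binCount = 10;

// Find the value range of the input data
var min = Number.MAX_VALUE;
var max = -Number.MAX_VALUE;
for (var i = 0; i < values.rows.length; i++) {
    var v = values.rows[i].value;
    if (v < min) { min = v; }
    if (v > max) { max = v; }
}
var width = (max - min) / binCount || 1; // guard against a zero-width range

// Count how many values fall into each bin
var counts = [];
for (var b = 0; b < binCount; b++) { counts[b] = 0; }
for (var i = 0; i < values.rows.length; i++) {
    var idx = Math.min(Math.floor((values.rows[i].value - min) / width), binCount - 1);
    counts[idx]++;
}

// Build the result infotable for the bar chart
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "histogram",
    dataShapeName: "myHistogramShape"
});
for (var b = 0; b < binCount; b++) {
    result.AddRow({
        bin: (min + b * width).toFixed(2) + " - " + (min + (b + 1) * width).toFixed(2),
        count: counts[b]
    });
}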
View full tip
Parquet Data Format used in ThingWorx Analytics

Starting with ThingWorx Analytics version 8.1, data storage no longer requires the installation of a PostgreSQL database. Instead, uploaded CSV data is converted to the optimized Apache Parquet format and stored directly in the file system. This blog explains some of the features of Apache Parquet that justify this transition in ThingWorx Analytics data storage.

What is Apache Parquet?

Apache Parquet is a column-oriented data store of the Apache Hadoop ecosystem. It is compatible with most of the data processing frameworks in the Hadoop environment. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Below is an illustration of the columnar storage model.

Apache Parquet Features and Benefits

Apache Parquet is implemented using the record shredding and assembly algorithm, taking into account the complex data structures that can be used to store the data. Apache Parquet stores data so that the values in each column are physically stored in contiguous memory locations. Due to this columnar storage, Apache Parquet provides the following benefits:

- Column-wise compression is efficient and saves storage space
- Compression techniques specific to a type can be applied, as the column values tend to be of the same type
- Queries that fetch specific column values need not read the entire row data, thus improving performance
- Different encoding techniques can be applied to different columns

Some advantages of using Parquet for ThingWorx Analytics

Apart from the above benefits of using Parquet, which amount to higher efficiency and increased performance, below are some advantages that apply specifically to ThingWorx Analytics:

- The change in ThingWorx Analytics from using a database to using Parquet removes the limitations on the number of data columns the system can handle.
- It also streamlines the dataset creation process. Since the data is converted to the Parquet format, there is no need to separately optimize the dataset. Even when new data is appended to an existing dataset, a new partition is added, and re-optimization is optional but not required.
- Data can be appended easily, so there is no longer a need to re-load the full dataset when new data values are added.

The illustration below shows the transition from the row-based data storage model to the columnar storage of Parquet.
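To make the columnar idea concrete, here is a minimal, hedged illustration in plain JavaScript (this is not Parquet's actual on-disk encoding; the field names and values are invented for the example):

// Three sensor records in a row-oriented layout: reading any one field
// still means touching every record object in full.
var rows = [
    { timestamp: "2017-10-01T00:00:00Z", temperature: 20.1, status: "OK" },
    { timestamp: "2017-10-01T00:01:00Z", temperature: 20.4, status: "OK" },
    { timestamp: "2017-10-01T00:02:00Z", temperature: 35.9, status: "ALARM" }
];

// The same data in a column-oriented layout: values of one column sit
// together, so they compress well (same type, similar values) and a
// query over "temperature" never reads "timestamp" or "status".
var columns = {
    timestamp:   ["2017-10-01T00:00:00Z", "2017-10-01T00:01:00Z", "2017-10-01T00:02:00Z"],
    temperature: [20.1, 20.4, 35.9],
    status:      ["OK", "OK", "ALARM"]
};

// An average over temperature only scans one contiguous array:
var sum = 0;
for (var i = 0; i < columns.temperature.length; i++) {
    sum += columns.temperature[i];
}
var avg = sum / columns.temperature.length; // about 25.47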
View full tip
Please find here a LabVIEW implementation to connect to ThingWorx via REST calls. Have fun using it - any feedback is appreciated. https://github.com/Seppel1985/LabVIEW_TWX_RestAPI
View full tip
Raise your hand if you're ready for seamless, rapid deployment of your ThingWorx applications with visibility into your various environments! It's time to say goodbye to error-prone deployments with manual dependency tracking and hello to Solution Central!

Releasing this fall as part of 8.5, Solution Central is a brand-new cloud service coming to the ThingWorx platform to enable you to efficiently manage your ThingWorx applications across the enterprise.

With Solution Central, you'll no longer be caught chasing missing dependencies (like ThingShapes, Mashups, templates or library extensions). Solution Central automatically identifies and packages up the dependencies required for your application. No more manual dependency madness!

Whether you're managing many apps deployed to a few environments or a single app deployed to hundreds of environments, Solution Central allows you to accelerate your deployment through an intuitive UI or powerful APIs for automation.

Here's how it works:
1. Begin by creating your application in Composer with a project.
2. Let Solution Central automatically package up all the artifacts and dependencies required for your application.
3. Allow Solution Central to publish your solution package to the cloud.
4. Deploy your application to your various environments (local servers, data centers, cloud systems) directly from Solution Central. It's like your company has its own private app store.

Here's a sneak peek of the Solution Central UI!

Keep an eye out for the release of ThingWorx 8.5 at the end of September 2019 and begin accelerating your app deployment! Check out the presentation and demo my fellow PM Chris Baldwin and I delivered at LiveWorx19—and be sure to attend LiveWorx20! To navigate to our session recording, search for "Introducing Solution Central: Your Gateway to Accelerated IIoT Value Across the Enterprise" here.

Sound interesting? Message me directly to discover how you can become part of the Solution Central Private Preview Program!

-Kaya
View full tip
Hi everyone,

As everyone knows already, the main way to define Properties inside the EMS Java SDK is to use annotations at the beginning of the VirtualThing class implementation. There are some use cases where we need to define those properties dynamically, at runtime - for example, when we use a VirtualThing to push a sensor's data from a device cloud to the ThingWorx server, for multiple customers. In this case, the number of properties differs from customer to customer, and due to the large number of variations, we need to be able to define the Properties programmatically. The following code will do just that:

// Requires classes from the ThingWorx Edge Java SDK (PropertyDefinition, AspectCollection,
// BaseTypes, Aspects, DataChangeType and the *Primitive types), org.w3c.dom.Node for the
// XML nodes, and Apache Commons Lang's NumberUtils.
for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    // Assumption: the sample value used to infer the property's base type
    // is the text content of the XML node.
    String str_NodeValue = nNode.getTextContent();
    PropertyDefinition pd;
    AspectCollection aspects = new AspectCollection();
    // Infer the base type from the sample value
    if (NumberUtils.isNumber(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.NUMBER);
    } else if (str_NodeValue.equals("true") || str_NodeValue.equals("false")) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.BOOLEAN);
    } else {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.STRING);
    }
    aspects.put(Aspects.ASPECT_DATACHANGETYPE, new StringPrimitive(DataChangeType.VALUE.name()));
    // Add the dataChangeThreshold aspect
    aspects.put(Aspects.ASPECT_DATACHANGETHRESHOLD, new NumberPrimitive(0.0));
    // Add the cacheTime aspect
    aspects.put(Aspects.ASPECT_CACHETIME, new IntegerPrimitive(0));
    // Add the isPersistent aspect
    aspects.put(Aspects.ASPECT_ISPERSISTENT, new BooleanPrimitive(false));
    // Add the isReadOnly aspect
    aspects.put(Aspects.ASPECT_ISREADONLY, new BooleanPrimitive(true));
    // Add the pushType aspect
    aspects.put("pushType", new StringPrimitive(DataChangeType.ALWAYS.name()));
    // Add the isLogged aspect
    aspects.put(Aspects.ASPECT_ISLOGGED, new BooleanPrimitive(true));
    // Add the defaultValue aspect if needed...
    // aspects.put(Aspects.ASPECT_DEFAULTVALUE, new BooleanPrimitive(true));
    pd.setAspects(aspects);
    super.defineProperty(pd);
}
// You need to comment out initializeFromAnnotations() and use initialize()
// instead for this to work:
// super.initializeFromAnnotations();
super.initialize();

Please put this code in the constructor method of your VirtualThing-extending implementation. It needs to run exactly once, at instance creation. The code relies on the manual discovery of the sensor properties that you perform beforehand. Depending on the implementation, you can either do the discovery of the properties inside this method (too slow) or pass the discovered properties as a parameter to the constructor (better).

Hope it helps!
View full tip
ThingWorx 10.1: Practical Tools to Power Your Industrial IoT and AI

Hello ThingWorx Community!

Thanks for the warm response to ThingWorx 10.0 and all your feedback - it's been invaluable! I'm excited to share what's coming with ThingWorx 10.1, rolling out as a private preview this fall. We've focused on delivering practical updates to help you get more from your IoT investments, with new AI capabilities, stronger solutions, and tools to make developers' lives easier. Here's what's new:

Industrial AI That Works for You

We're doubling down on AI to make your operations smarter and faster:
- ThingWorx AI Assistant powered by Industrial AI Agents: Our private preview of AI agents lets you embed chat assistants directly into your mashup applications. These agents tap into ThingWorx's contextualized IoT data, enabling quicker, data-driven decisions without complex setups.
- Model Context Protocol Support: MCP support with ThingWorx allows you to run centralized agents on a hub ThingWorx server, pulling data from connected systems or spoke ThingWorx instances, streamlining agentic AI automation across your operations. While there are several benefits to MCP, one of the top use cases is achieving enterprise-wide KPI calculations using agents and Large Language Models (LLMs). ThingWorx's low-code platform makes it quick and easy to start your AI automation journey.
- MQTT Egress for IoT Streams: Building on IoT streams from 10.0, we're introducing preview support for MQTT-based egress in formats like Sparkplug B. This helps you manage industrial data end-to-end and aligns with your Unified Namespace strategy for seamless interoperability.
- ThingWorx AI Accelerator V2: Now with predictive capabilities, this tool connects your LLMs to private data, letting you build tailored AI assistants for predictive and interactive experiences - at no cost. Check out the post here for details: https://community.ptc.com/t5/ThingWorx-Developers/Using-an-LLM-with-ThingWorx-Latest-Version-V2-Update/td-p/1038099

Solutions Built for the Factory Floor

Our out-of-the-box applications have become more powerful to tackle real-world challenges:
- Real-Time Production Performance Monitoring (RTPPM): A revamped event model powering the KPI calculations, plus UI-based configuration, makes it reliable and easier to set up and use. It delivers accurate, real-time factory performance data to keep your shop floor running smoothly.
- Asset Monitoring and Utilization (AMU): For high-scale environments, this application now delivers up to 1800% better performance, handling massive data ingestion from connected machines with ease.
- Digital Performance Management (DPM) & Connected Work Cell (CWC): We've also made several targeted fixes and improvements to boost manufacturing efficiency and worker productivity.
- Common Building Blocks (BBs): In line with our Building Blocks strategy, we're making common BBs (e.g., PTC.Base, PTC.DBConnection, PTC.ModelManagement, PTC.UserManagement) available to all users, including Windchill Navigate and non-SCO customers. Previously exclusive to SCO licenses, these pre-built components simplify app development, speed up time to value, and streamline app management.

Empowering Developers

We know developer experience is critical, so we've added tools to streamline your work:
- JavaScript Debugger (GA): Now available with 10.1, it offers debugging from Thing Templates, JSON and infotable display, expression editing, improved layouts, keyboard shortcuts, a dedicated debugging subsystem, and more!
- Developer Productivity Enhancements: The new Rich Text widget lets you embed a custom rich text editor to create sophisticated user interfaces (e.g., mimicking those in Windchill). It has controls that allow users to format text and lists, enable undo and redo, allow file uploads, add dividers, and more. Collection widget enhancements also enable effortless drag-and-drop reordering of collection cells. Concurrency fixes for user and organization management improve the reliability of ThingWorx solutions.
- Security Updates: Updated libraries and tech stack enhance security and performance. For regulated industries, we've updated OpenSSL support for Axeda Edge and the C-SDK, ensuring FIPS 140-3 compliance to meet strict standards.

Join the Private Preview

We're opening the private preview for features like AI agents, Model Context Protocol, MQTT egress, and updated RTPPM capabilities. Interested? Leave a comment below, and we'll reach out with next steps to get you started. Not ready for the preview? Start planning your upgrade to the ThingWorx 10.1 GA release this winter to take advantage of these features in production.

Thanks for being part of the ThingWorx journey. Let's keep building smarter industrial solutions together!

Cheers,
Ayush Tiwari, Director of Product Management, ThingWorx
View full tip
Pre-built apps for manufacturing operations that rapidly deliver value.

Overview

Think Big, Start Small, Scale Fast

Getting started on an industrial IoT project can be daunting, especially deciding where and how to begin. After collecting the experiences gained by working with over 1,500 companies on IIoT applications, PTC has made the process of getting up and running easier. By grouping IIoT capabilities together in functionally oriented programs, PTC built out-of-the-box applications that rapidly deploy and scale across new or existing system infrastructure. Instead of starting from scratch, you can use ThingWorx ready-to-configure apps to quickly lay the foundation for industrial transformation and implement IIoT solutions in as little as 90 days. ThingWorx Apps offer a comprehensive, basic IIoT scheme to connect to your equipment, collect real-time data, create notification workflows, deliver role-based dashboards, and more. Need something more tailored to your operation? No problem. You can iteratively extend and customize ThingWorx Apps into additional use cases for continuous innovation.

Manufacturing Apps

Manufacturers are under constant pressure to minimize downtime, improve quality, and respond faster to individual customer requirements, all while lowering costs. The ThingWorx Manufacturing Apps are pre-built solutions that can be tried in less than 60 minutes without disrupting production. These apps provide manufacturers with real-time visibility into factory floor operations, from individual PLCs to assets to plants to enterprise-wide operations.

ThingWorx Connected Work Cell

Connected Work Cell streamlines how information is delivered to frontline workers by aggregating critical data from multiple data silos into a simplified visual application. It presents step-by-step work instructions with accurate, up-to-date information to drive efficiency, links instructions to work orders, assigns resources, and validates proper execution to ensure quality.

Capabilities
• Lite work instruction authoring with multiple step types and versioning
• Workpiece routes editor and work order scheduling
• Step-by-step tracking of operator execution
• Smart tool configuration
• Work station dashboard display
• File storage and document management

Benefits
• Present accurate, up-to-date work instructions using 3D models
• Aggregate work requirements from multiple sources into one simplified display at the work station
• Increase workforce flexibility by reducing upfront training before being assigned to a new work cell
• Improve quality by collecting actual tool use data
• Rapid implementation and fast time to value

ThingWorx Real-Time Production and Performance Monitoring

Real-Time Production and Performance Monitoring provides manufacturing executives and plant managers with top-down, real-time visibility into consistent KPIs such as overall equipment effectiveness, mean time between failure, and mean time to repair.

Capabilities
• Connect existing assets and gather real-time data
• View overall equipment effectiveness (OEE), mean time between failure (MTBF), and mean time to repair (MTTR) in real time
• Compare geographically separated assets, lines, or products based on date, time, shift, or crew

Benefits
• Improve operational performance of existing assets by increasing throughput and decreasing waste
• Balance labor vs. capital expenditures to meet production needs
• Determine true overall equipment effectiveness for multiple facilities

ThingWorx Asset Monitoring and Utilization

Asset Monitoring and Utilization helps manufacturers connect to existing assets, remotely monitor them in real time, generate alerts based on abnormal conditions, and deliver critical insights with data trending and analysis tools.

Capabilities
• Performance dashboards with real-time access to open alarms
• Email and SMS distribution rules for messaging and alarm acknowledgment
• Integration to maintenance systems
• Detailed screens showing asset health, configuration parameters, and sensor trends

Benefits
• Quickly identify anomalous data trends and perform root cause analysis
• Maximize asset uptime and availability with alerts on critical issues before they impact performance
• Rapidly connect to and catalog existing assets
View full tip
This is a basic troubleshooting guide for ThingWorx. It covers the importance, types, and levels of logs, and how to get started troubleshooting the Composer, Mashups, and Remote Connectivity.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
The Asset Simulator can simulate actual device behavior without having to connect to a physical asset. It does this by replaying data sequences derived from mathematical distributions or actual asset data imported as CSV files. Virtual assets can be configured to reference these data sequences and expose them as asset behavior.   The Asset Simulator communicates with KepServerEX in the same way that a real device does. The simulated asset behavior is controlled through an administration console. If you would like to test with the Asset Simulator 8.2.0, please find attached a guide and the actual files necessary.   Notes: The attached Asset Simulator applies to both Manufacturing and Service Apps If using ThingWorx Manufacturing Apps, import the Manufacturing Apps demo data If using ThingWorx Service Apps, import the Service Apps demo data
View full tip
Let's assume I collect timeseries data from two temperature sensors located next to each other. This is done for redundancy and to ensure the quality of measures. Each sensor is logged into its own Property in ThingWorx, and I can create a timeseries for each individual sensor. However, I would like to create a combined InfoTable that holds information for both sensors but averages out their values.

Instead of reading values from a stream, I just create some custom data for both InfoTables. After this I use the UNION function to combine the two tables and sort them. Once they are sorted, the INTERPOLATE function allows grouping the InfoTable by timestamp.

With this, I have combined the two sensor results into one result set. Taking the average of the numbers will give results closer to the real value (as both sensors might not be 100% accurate). In case one sensor does not have data for a given point in time, it will still be considered in the final output.

InfoTable1:

2018-12-18 00:00:00.000    2
2018-12-19 00:00:00.000    3
2018-12-20 00:00:00.000    5
2018-12-21 00:00:00.000    7

InfoTable2:

2018-12-18 00:00:00.000    1
2018-12-19 12:00:00.000    2
2018-12-20 00:00:00.000    3
2018-12-21 00:00:00.000    4

Combined Result:

2018-12-18 00:00:00.000    1.5
2018-12-19 00:00:00.000    3
2018-12-19 12:00:00.000    2
2018-12-20 00:00:00.000    4
2018-12-21 00:00:00.000    5.5

This can be done with the following code:

// Required DataShape "myInfoTableShape": "timestamp" = DATETIME, "value" = NUMBER
// The Service Output is an InfoTable based on the same DataShape
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "myInfoTableShape"
};

// Create two InfoTables, representing the data of each sensor
var infoTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
var infoTable2 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

var newEntry = new Object();

// Create custom data for InfoTable1
newEntry.timestamp = 1545091200000; newEntry.value = 2; infoTable1.AddRow(newEntry);
newEntry.timestamp = 1545177600000; newEntry.value = 3; infoTable1.AddRow(newEntry);
newEntry.timestamp = 1545264000000; newEntry.value = 5; infoTable1.AddRow(newEntry);
newEntry.timestamp = 1545350400000; newEntry.value = 7; infoTable1.AddRow(newEntry);

// Create custom data for InfoTable2
newEntry.timestamp = 1545091200000; newEntry.value = 1; infoTable2.AddRow(newEntry);
newEntry.timestamp = 1545220800000; newEntry.value = 2; infoTable2.AddRow(newEntry);
newEntry.timestamp = 1545264000000; newEntry.value = 3; infoTable2.AddRow(newEntry);
newEntry.timestamp = 1545350400000; newEntry.value = 4; infoTable2.AddRow(newEntry);

// Combine the two InfoTables via the UNION function
var unionTable = Resources["InfoTableFunctions"].Union({
    t1: infoTable1,
    t2: infoTable2
});

// Optional: Sort the table by timestamp
var sortedTable = Resources["InfoTableFunctions"].Sort({
    sortColumn: "timestamp",
    t: unionTable,
    ascending: true
});

// Interpolate the (sorted) table by interval, take average values, and build the result
var result = Resources["InfoTableFunctions"].Interpolate({
    mode: "INTERVAL",
    timeColumn: "timestamp",
    t: sortedTable,
    ignoreMissingData: undefined,
    stats: "AVG",
    endDate: 1545609600000,
    columns: "value",
    count: undefined,
    startDate: 1545004800000
});
View full tip
Here is a demo that uses the Repeater widget and the Smart Grid widget to display a nested table. Note: This demo is for testing use.

1. Create a DataShape for the nested datatable, named DataTableStructure.
2. Create a DataShape named InfoStructure for Obj.
3. Create the nested datatable, named DataTableTest, as below.
4. Create a service named AddObjInput in this datatable, as below (a minimal sketch of such a service follows at the end of this post).
5. Create a Mashup named RepeaterMsh.
6. Add widgets like textbox, label, button, and Smart Grid to the mashup. There are some parameters we added before; please bind them to the widgets that we just added to the mashup.
7. Add the DataTableTest service AddOrUpdateDataTableEntry to the mashup. Bind the textbox and the Smart Grid edited table to the AddOrUpdateDataTableEntry parameters. Bind the button's Clicked event to trigger the AddOrUpdateDataTableEntry service.
8. The Smart Grid widget needs a special attribute named TableDefinition. In this demo sample it is: {"columns":[{"name":"PrimarySchool","type":"text","display":"PrimarySchool"},{"name":"SecondarySchool","type":"text","display":"SecondarySchool"},{"name":"HighSchool","type":"text","display":"HighSchool"}]}
9. Create the mashup which has the Repeater widget. The mashup overview is like below. The first part is for adding a new record; the second part (the Repeater widget) displays table data and also updates data.
10. In the first part, there are textbox, label, Smart Grid, and button widgets. They are bound to the service named AddDataTableEntry, which adds a new record to the table. (The Smart Grid is used to add the nested table data.) There is a button named AddRecords; bind its Clicked event to the AddDataTableEntry service, and bind the AddDataTableEntry service's ServiceInvokeCompleted event to GetDataTableEntries. The Smart Grid widget here is an empty grid, but it should have a structure at the beginning. That's why we created the AddObjInput service earlier: it returns an empty table, but with the DataShape.
11. The second part is a Repeater widget. The Repeater widget has an attribute named Mashup; bind the mashup RepeaterMsh to it. Bind GetDataTableEntries' All Data to the Repeater widget. There are some parameters that need to be bound, like gender, InfoTable, Sname...
12. Test the demo.

Note: The Smart Grid widget is not ThingWorx OOTB functionality. The Smart Grid widget can be edited flexibly; please download it from this link: https://marketplace.thingworx.com/Items/Smart%20Grid%20Widget
There is a thread teaching how to use the Smart Grid: https://community.thingworx.com/message/53379#53379
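For reference, here is a minimal, hedged sketch of what an AddObjInput-style service can look like in ThingWorx server-side JavaScript. It uses the InfoStructure DataShape from this demo; adapt the DataShape name to your setup.

// Returns an empty InfoTable that carries the InfoStructure DataShape, so the
// Smart Grid has a column structure to render before any rows exist.
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "InfoStructure"
};
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);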
View full tip
This document provides API information for all 51.0 releases of ThingWorx Machine Learning.
View full tip
There have been a number of questions from customers and partners about when they should use different tools for the calculation of descriptive analytics within ThingWorx applications. The platform includes two different approaches for the implementation of many common statistical calculations on data for a property: descriptive services and property transforms. Both of these tools are easy to implement and orchestrate as part of a ThingWorx application. However, these tools are targeted at handling different scenarios and also differ in their utilization of compute resources. When choosing between these two approaches, it is important to consider the specific use case being implemented along with how the implemented approach will fit into the overall design and architecture of the ThingWorx environment. This article provides some guidance on the scenarios in which to use each of these approaches in ThingWorx applications and things to consider with each approach.

Let's look at the two different approaches and some guidelines for when they should be used.

Descriptive services (click for more details) provide a set of ThingWorx services to analyze a data set and perform many common data transformations. These services are targeted at performing calculations and transformations on the recent operating history of a single property. Descriptive services are called on demand to perform batch calculations.

Scenarios to use descriptive services:
- On-demand calculations performed within a mashup, a service call, or an event to determine action, where calculation results are not (always) stored
- Regularly occurring calculations on logged property values or generated datasets (batch calculations)
- Calculations are done regularly in minutes, hours, or days on a discrete set of data. Examples: average value in the last hour, median value in the last day, or max value in the last half hour. Time between data creation and analysis is minutes or hours, and some latency in the calculation result is acceptable for the use case.
- The input data set has 10s to 100s to 1000s of values. Keep the size of the input data at 10,800 values or less. If larger data sizes are required, then break them into micro batches if possible, or use other tools to handle the processing.
- Multiple calculations need to be done from the same set of input data. Example: average value in the last hour, max value in the last hour, and standard deviation value in the last hour are all required.

Things to consider when using descriptive services:
- Requires the input dataset to be in the specific datashape format used by descriptive services. If property values are logged in a value stream, there is a service to query the values and prepare the dataset for processing. In scenarios where the data is not from a logged property, another service or SQL query can be used to prepare the dataset for processing.
- Requires JavaScript development work to implement. This includes creation of a service to execute the descriptive services and usage of subscriptions and events to orchestrate calculations. An example of the JavaScript to execute descriptive services is available in the help center (here); a hedged sketch also follows at the end of this article.
- Typically, retrieval of the input data from the value stream (QueryTimedValuesForProperty) is the slowest part of the process.
- The input data is sent to an out-of-process platform analytics service for all calculations.
- A broader set of calculation services is available (see the comparison at the end of this article).
- Remember that these services are not meant to be used for big data calculations or big data preparation. Look for other approaches if the input data sets grow larger than 10,800 values.

Property transforms (click for more details) provide a set of transformation services for streaming data as it enters ThingWorx. Property transforms are targeted at performing continuous calculations on recent values in the stream of a single property and delivering results in (near) real time. Since property transforms are continuous calculations, they are always running and using compute resources. Before implementing property transforms, review the information in the property transform sizing guide to better understand the factors that impact the scaling of property transforms.

Scenarios to use property transforms:
- Continuous calculations on a stream for a single property as new data comes into ThingWorx
- New values enter the stream faster than one value per minute (as a general guideline)
- Calculations are required to be done in seconds or minutes. Examples: average electrical current in the last 10 seconds, median pressure in the last 10 readings, or max torque in the last minute
- Time between data creation and analysis is small (in seconds); results of the property transform are required for rapid decisions and action, so reducing latency is critical
- Data sets used for calculation are small and contain 10s to 100s of values
- Calculated results are stored in a new property in the ThingModel

Things to consider when using property transforms:
- Codeless process to create new property transforms on a single property in the ThingModel
- Does not require input property values to be logged, as calculations are performed on streaming data as it enters ThingWorx
- Unlike descriptive services, which only execute when called, each property transform creates a continuously running job that is always using compute resources. Resource allocations for property transforms must be included in the overall system architecture. Before selecting the property transform approach, refer to the Property Transform Sizing Guide for more information about how different parameters affect the performance of property transforms and for the results of performance load test scenarios.

Let's apply these guidelines to a few different use cases to determine which approach to select.

1. Mashup application that allows users to calculate and view median temperature over a selected time window
In this scenario, the calculation will be executed on demand with a user-defined time window. Descriptive services are the only option here, since there is no pre-defined schedule and the user can select which data to use for the calculation.

2. Calculate the max torque (readings arriving one per second) on a press over each minute without storing all of the individual readings
In this scenario, the calculation will be executed without storing the individual readings coming from the machine. The transformation is made to the data on its way into ThingWorx, continuously calculating based on new values. Property transforms are the only option here, since the individual values are not being stored.

3. Calculation of the average pressure value (readings arriving one per second) over a five-minute window to monitor conditions and raise an alert when the median value is more than two standard deviations from expected
In this scenario, both descriptive services and property transforms can perform the required calculation. The calculation is going to occur every 5 minutes, and each data set will have about 300 values. The selection of batch (descriptive services) or streaming (property transforms) will likely be determined by the usage of the result. In this case, the calculation result will be used to raise an alert for a specific five-minute window, which likely requires immediate action. Since the alert needs to be raised as soon as possible, property transforms are the best option (although descriptive services will handle this case also, with lower compute resource requirements).

4. Calculation of median temperature (readings every 20 seconds) over a 48-hour period to use as input to predict error conditions in the next week
In this scenario, the calculation will be performed relatively infrequently (once every 48 hours) over a larger data set (about 8,640 values). Descriptive services are the best option in this case due to the data size and calculation frequency. If property transforms were used, then compute resources would be tied up holding all of the incoming values in memory for an extended period before performing a calculation. With descriptive services, the calculation will only consume resources when needed, i.e., once every 48 hours.

Hopefully the information above provides some more insight and guidelines to help choose between property transforms and descriptive services. The comparison below provides some additional contrasts between the two approaches.

Purpose
- Descriptive Services: Provide a set of ThingWorx services to analyze a data set and perform many common data transformations.
- Property Transforms: Provide a set of prescribed transformation services for streaming data as it enters ThingWorx.

Processing Mode
- Descriptive Services: Batch
- Property Transforms: Streaming / continuous

Delivery
- Descriptive Services: API / service
- Property Transforms: Composer interface, API / service

Input Data
- Descriptive Services: Discrete data set; must be logged; single property; configurable by time or lookback
- Property Transforms: Rolling data set on property X; persistence is optional; single property; configurable by time or lookback

Output Data
- Descriptive Services: Return object handled programmatically; single output for a discrete data set
- Property Transforms: New property f_X in the input model; continuous output at a configurable frequency; output time-aligned with input data

Available Services
- Descriptive Services: Statistics (min, max, mean, median, mode, std deviation); SPC calculations (# continuous data points: above threshold, in / out of range, increasing / decreasing, alternating); data distribution: count by bins (histogram); five numbers (min, lower quartile, median, upper quartile, max); confidence interval; sampling frequency; frequency transform (FFT)
- Property Transforms: Statistics (min, max, mean, median, mode, std deviation); SPC calculations (# continuous data points: above threshold, in / out of range, increasing / decreasing, alternating)
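As referenced above, here is a minimal, hedged sketch of orchestrating a descriptive service from a ThingWorx service or subscription. Only QueryTimedValuesForProperty is named in this article; the statistics Thing name, service name, parameter names, and output field below are illustrative placeholders - check the help center example for the exact signatures in your release.

// Hedged sketch - "StatisticsHelper", "CalculateStatistics", and all
// parameter names are placeholders, not documented APIs.

// 1) Prepare the input dataset from the value stream in the datashape format
//    that descriptive services expect (service name taken from this article).
var data = me.QueryTimedValuesForProperty({
    propertyName: "temperature",
    startDate: dateAddHours(new Date(), -1), // last hour of data
    endDate: new Date()
});

// 2) Hand the prepared dataset to a descriptive (statistics) service.
var stats = Things["StatisticsHelper"].CalculateStatistics({
    data: data,
    columnName: "temperature"
});

// 3) Act on the batch result, e.g. persist it or raise an alert.
me.lastHourMean = stats.mean; // hypothetical output field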
View full tip
There are many choices in life, and ThingWorx offers some persistence provider options as well. As of ThingWorx release 8.2, five database options are provided:

1. PostgreSQL 9.4.5 minimum
2. DataStax Enterprise Edition 4.6.3, 5
3. SAP HANA SPS 11, 12
4. Microsoft SQL Server 2014 and later
5. H2 (version info is not available, maybe because it's embedded?)

H2 is for small scale, mainly for testing purposes; PostgreSQL and Microsoft SQL Server are for middle scale; and finally DataStax Enterprise Edition is for big scale. I don't have enough information about SAP HANA, so I would like to leave it untouched in my comment... I don't have a number as to how many customers are using which database, but my gut feeling tells me that PostgreSQL is a popular option, especially cost-wise. PostgreSQL offers powerful tools, such as logging and utilities, to troubleshoot issues.

In this post I would like to cover some useful information you can retrieve by using the pgstattuple and pgstatindex functions of the contrib module. By default, PostgreSQL takes good care of fragmentation and reindexing by itself. But in some cases you may want to review the status of the database to narrow down the cause of the issue you are troubleshooting. There are many ways to achieve this, but the contrib module is provided to review the stats of tables and indexes. As explained in this article, it is recommended to keep the number of records in value_stream and stream less than 100,000. That means you'll insert and delete many records when running ThingWorx. What happens then?

- If you delete (or update) a record in a table, PostgreSQL keeps the previous record in a page but marks it as deleted (and, for an update, inserts a new record).
- If the number of those logically deleted records increases, PostgreSQL needs to access many pages of the table to obtain the records which meet the criteria, and users might experience slow performance because of this.
- Those logically deleted records will ultimately be removed from pages when vacuum is run.

If you have installed the contrib module and enabled it (e.g., with CREATE EXTENSION pgstattuple), you can review the stats of tables with the commands below:

select * from pgstattuple('stream');                  -- returns the stats of the stream table
select * from pgstatindex('stream_id_time_index');    -- returns the stats of an index on the stream table

pgstattuple returns the information below (I modified the format to make it more readable in this post); the meaning of each item is explained in the documentation.

table_len:           8192
tuple_count:         1
tuple_len:           33
tuple_percent:       0.4
dead_tuple_count:    3
dead_tuple_len:      97
dead_tuple_percent:  1.18
free_space:          8004
free_percent:        97.71

Before obtaining the stats, I inserted 4 records and deleted 3 records; therefore it shows tuple_count = 1 (one active record), dead_tuple_count = 3 (three logically deleted records), and dead_tuple_percent = 1.18. If dead_tuple_percent is high, that means the table has not been vacuumed, or many DML statements were executed after the last vacuum operation, and this could be the cause of slow system performance.

* IMPORTANT: pgstattuple and pgstatindex consume resources, so it's recommended to run them during the maintenance window.

Takaaki
View full tip
In our interactions with PTC customers we often learn they have previously performed analytics modeling in Python, Matlab, R, or have even built home-grown analyses in languages such as Java or C++. As expected, when adopting an Industrial Innovation Platform such as ThingWorx, which also has its own ThingWorx Analytics module, customers do not want to reimplement everything from scratch and would rather integrate their previous work into the smart applications built in ThingWorx, leveraging a combination of their existing toolset together with ThingWorx Analytics modeling. That is certainly possible, and there are multiple ways to do it. In this article we will focus on several general approaches, but it is important to keep in mind that language-specific approaches are also possible, and we are happy to discuss those in the specific context of the customer.

Here are five different ways to bring existing analytics into ThingWorx:

1. If the task is to reuse an existing predictive model developed in a language such as Python/R/Matlab, typically one can export that model in PMML (Predictive Model Markup Language), an XML format, and import it into ThingWorx Analytics using the AnalyticsServer_ResultsThing -> UploadModel service. Libraries such as sklearn2pmml and r2pmml can be utilized towards that goal. The imported model can then be used in the same fashion as a model developed in ThingWorx Analytics to power smart applications built in ThingWorx. If the analysis involves more complex tasks than predictive modeling, such as custom data normalizations, non-standard machine learning models, or home-grown algorithms, one can use the options below.
2. Call the ThingWorx exposed REST Web API from Python/Matlab/R/Java/JavaScript. Every service in ThingWorx can be called that way, and the API can also be used to push analysis results into ThingWorx for further consumption, perhaps together with other sources of data such as sensor readings, in the smart applications built there. The documentation for the ThingWorx REST API can be found here. (A hedged sketch of such a call follows after this list.)
3. Expose the existing analytics via a thin layer of REST web services. For example, in Python, this can be done using Flask, with few lines of code. The orchestration can then happen from ThingWorx by calling the exposed web service and weaving the results back into smart applications.
4. Often our customers' current architecture involves a relational database (e.g., SQL Server, Oracle, etc.) that powers the existing analytics and stores the end results (predictions, correlations, etc.). In this scenario, we can connect ThingWorx directly to that database to read these results.
5. Finally, in the case of complex analytics, where a tighter integration with ThingWorx is desired, existing analytics/algorithms can be wrapped into a ThingWorx Extension or an Analytics Provider using the corresponding PTC SDKs.

When choosing an integration option, customers need to carefully balance the complexity of integration, the constraints of their architecture, the complexity of the analytics modeling, and the end user consumption requirements.
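To make option 2 concrete, here is a minimal, hedged sketch in JavaScript (Node 18+, using the built-in fetch). The server URL, Thing name, service name, and application key are placeholders; the URL pattern and headers follow the standard ThingWorx REST conventions documented in the help center.

// Minimal sketch: invoke a ThingWorx service over REST. "twx.example.com",
// "MyAssetThing", and "SetPrediction" are placeholders.
const BASE = "https://twx.example.com/Thingworx";
const APP_KEY = "your-application-key"; // created in Composer; placeholder here

async function callTwxService(thing, service, inputs) {
    const res = await fetch(`${BASE}/Things/${thing}/Services/${service}`, {
        method: "POST",
        headers: {
            appKey: APP_KEY,                // ThingWorx application key header
            "Content-Type": "application/json",
            Accept: "application/json"      // ask for JSON instead of HTML
        },
        body: JSON.stringify(inputs)
    });
    if (!res.ok) throw new Error(`ThingWorx returned HTTP ${res.status}`);
    const text = await res.text();
    return text ? JSON.parse(text) : null;  // some services return no content
}

// Example: push an externally computed prediction into ThingWorx via a
// (hypothetical) service on a Thing.
callTwxService("MyAssetThing", "SetPrediction", { prediction: 0.87 })
    .then(result => console.log("Result:", result))
    .catch(err => console.error(err));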
View full tip
Remember that when you are calling an external URL to fetch data via an API call to another system, you must encode special characters explicitly. For example, the URL that you may type into a browser to test may look like this:

https://someserver.somwhere.com:443/apicall?parameter1=test string&parameter2=test^number

but when scripting that into a string variable you'll need to replace the space and the caret with the proper encoded values (%20 and %5E):

var headers = {}; // add any required HTTP headers here
var params = {
    username: "me",
    password: "password",
    url: "https://someserver.somwhere.com:443/apicall?parameter1=test%20string&parameter2=test%5Enumber",
    ignoreSSLErrors: false,
    timeout: 60,
    headers: headers
};
var result = Resources["ContentLoaderFunctions"].LoadXML(params);

Also note that in this instance we're making a secure connection; therefore port 443 (typically the default) was explicitly specified...
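Rather than hand-encoding each character, you can let standard JavaScript do it with encodeURIComponent, which is available in ThingWorx service scripts. A small sketch, reusing the example host and parameter values from above:

// encodeURIComponent escapes spaces, carets, and other reserved characters,
// so query parameter values built at runtime stay valid.
var p1 = "test string";
var p2 = "test^number";
var url = "https://someserver.somwhere.com:443/apicall" +
          "?parameter1=" + encodeURIComponent(p1) +
          "&parameter2=" + encodeURIComponent(p2);
// url is now "https://someserver.somwhere.com:443/apicall?parameter1=test%20string&parameter2=test%5Enumber"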
View full tip