
IoT Tips

This is part of the continuing series of blog posts regarding troubleshooting the application. This article discusses more advanced issues that some clients and customers have encountered while building or using ThingWorx Analytics.

Packer Script Error – Unable to Download CentOS Image

Because the application is developed and built inside a CentOS image, the ThingWorx Analytics Packer Script tool for Virtual Machine Appliance creation uses the CentOS mirror repository during the creation process. When the end user attempts to build the Virtual Machine Appliance with the Packer Script media creation tool, part of the process is to download the CentOS 7 ISO image file as the basis for the operating system on which the ThingWorx Analytics Server software will be installed. If CentOS updates or changes its mirror links for the source ISO file, you may encounter the following error:

==> virtualbox-iso: Downloading or copying Guest additions
virtualbox-iso: Downloading or copying: file:///C:/Program%20Files/Oracle/VirtualBox/VBoxGuestAdditions.iso
==> virtualbox-iso: Downloading or copying ISO
virtualbox-iso: Downloading or copying: file:///local-file-repo/CentOS-7-x86_64-Minimal-1511.iso
virtualbox-iso: Error downloading: open local-file-repo/CentOS-7-x86_64-Minimal-1511.iso: The system cannot find the path specified.
virtualbox-iso: Downloading or copying: http://mirror.spro.net/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso
virtualbox-iso: Error downloading: checksums didn't match expected: 88c0437f0a14c6e2c94426df9d43cd67
==> virtualbox-iso: ISO download failed.
Build 'virtualbox-iso' errored: ISO download failed.
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: ISO download failed.
==> Builds finished but no artifacts were created.

Solution

Method 1: Configuration File Replacement
We have created a custom JSON configuration file that resolves the mirror issue for CentOS 7 v1611. You can download the JSON file here; you may have to right-click and "save link as" a JSON extension file. Also note that you will have to save/rename this JSON file as neuron-solo-variables.json. Navigate to your Packer Script builder directory, usually found in the following path: <PATH>\ThingWorx-Analytics-Server-Standalone\components\vm-builder\neuron-vm-builder. Copy the new JSON file into this directory, replacing the existing copy. You can now re-run the Packer Script for your desired Virtual Machine Appliance output.

Method 2: Manual Configuration File Adjustment
First, locate an active mirror for CentOS 7. A list of current active mirrors can be found here. When selecting a mirror, select the Minimal ISO install, as this is the base image used for the VM creation. Next, open the current neuron-solo-variables.json configuration file located in the <PATH>\ThingWorx-Analytics-Server-Standalone\components\vm-builder\neuron-vm-builder directory. Replace the os_image_download_url value with an active mirror URL from the list above. Then, for the os_iso_md5_checksum variable, replace the entry with the new SHA256 checksum from CentOS, which can be located here. Save changes and close the neuron-solo-variables.json configuration file.

Note that CentOS has switched from MD5 to SHA256 checksums. Even though the variable name above contains "MD5" in the string, a second JSON configuration file must be modified to address this. In the same directory, open the neuron-solo.json configuration file and change the iso_checksum_type attribute to sha256. Save changes and close the neuron-solo.json configuration file. You can now re-run the Packer Script for your desired Virtual Machine Appliance output.
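For reference, here is a rough sketch of the two Method 2 edits. The variable names come from the configuration files described above, but the mirror URL and checksum shown are placeholders; substitute the active mirror link you selected and the current SHA256 value published by CentOS:

In neuron-solo-variables.json:

"os_image_download_url": "http://ACTIVE-MIRROR.example/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso",
"os_iso_md5_checksum": "PASTE-THE-SHA256-CHECKSUM-FROM-CENTOS-HERE"

In neuron-solo.json:

"iso_checksum_type": "sha256"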
View full tip
Let's consider that we have two Streams, Stream1 and Stream2, with the same DataShape StreamDS. DataShape StreamDS has two fields: Id (NUMBER) and Name (STRING). We want to copy all the entries from Stream1 to Stream2.

Steps:
1. Open the Stream1 Stream in Composer and run the GetStreamEntriesWithData service.
2. In the popup, click the Create DataShape from Result option to create a new DataShape, GetStreamEntriesDS.
3. Create a Service and use JavaScript like the following (comments added for detail):

// Create a temporary infotable to hold the output of the GetStreamEntriesWithData service
var paramsForInfotable = {
    infoTableName: "InfoTable" /* STRING */,
    dataShapeName: "GetStreamEntriesDS" /* DATASHAPENAME */
};
// result: INFOTABLE
var InfotableForCopy = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(paramsForInfotable);

// Save the output of the GetStreamEntriesWithData service to the temporary infotable InfotableForCopy
var paramsForGetStreamEntriesWithDataService = {
    oldestFirst: false /* BOOLEAN */,
    maxItems: 10000 /* NUMBER */
};
// result: INFOTABLE dataShape: "GetStreamEntriesDS"
InfotableForCopy = Things["Stream1"].GetStreamEntriesWithData(paramsForGetStreamEntriesWithDataService);

// Read the data from the infotable row by row and add it to the new Stream
var tableLength = InfotableForCopy.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = InfotableForCopy.rows[x];
    // values: INFOTABLE (DataShape: StreamDS)
    var values = Things["Stream2"].CreateValues();
    values.Id = row.Id; // NUMBER
    values.Name = row.Name; // STRING
    var paramsForAddStreamEntryService = {
        sourceType: row.sourceType /* STRING */,
        values: values /* INFOTABLE */,
        location: row.location /* LOCATION */,
        source: row.source /* STRING */,
        timestamp: row.timestamp /* DATETIME */,
        tags: row.tags /* TAGS */
    };
    // AddStreamEntry(tags:TAGS, timestamp:DATETIME, source:STRING, values:INFOTABLE(StreamDS), location:LOCATION):NOTHING
    Things["Stream2"].AddStreamEntry(paramsForAddStreamEntryService);
}
var result = InfotableForCopy;
View full tip
In this video we cover the installation of the UploadThing module. This video applies to ThingWorx Analytics 52.2 through 8.0; it is no longer applicable as of ThingWorx Analytics 8.1.

Useful links: How to copy files from Windows to Linux

Updated link for access to this video: Installing Thingworx Analytics Builder: Part 3 of 3
View full tip
In this video we cover the different configuration steps required for the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1.

Note, though:
- This video uses Classic Composer; the same operations can be done using the New Composer starting with version 8.0, as illustrated in the Help Center.
- For release 8.1, the Settings menu differs from previous versions; see Video Link: 2079 between 00:12 and 00:40 for the up-to-date menu selection.

Updated link for access to this video: Installing Thingworx Analytics Builder: Part 2 of 3
View full tip
Overview

The Axeda Platform is a secure and scalable foundation on which to build and deploy enterprise-grade applications for connected products, both wired and wireless. This article provides a detailed feature overview and helpful links to more in-depth articles and tutorials for a deeper dive into key areas.

Types of Connected Product Applications

M2M applications can span many vertical markets and industries. Virtually every aspect of business and personal life is being transformed by ubiquitous connectivity. Some examples of M2M applications include:
Vehicle Telematics and Fleet Management - Monitor and track the location, movements, status, and behavior of a vehicle or fleet of vehicles
Home Energy Monitoring - Smart energy sensors and plugs providing homeowners with remote control and cost-saving suggestions
Smart Television and Entertainment Delivery - Integrated set-top box providing in-view interaction with other devices – review your voicemails while watching a movie, chat with your Facebook friends, etc.
Family Location Awareness - Set geofences on teenagers, apply curfews, locate family members in real time, with vehicle speed and condition
Supply Chain Optimization - Combine status at key inspection points with logistics and present distribution managers with an interactive, real-time control panel
Telemedicine - Self-monitoring/testing, telecardiology, teleradiology

Why Use a Platform?

Have you ever built a Web application? If so, you probably didn't create the Web server as part of your project. Web servers or Web application servers like Apache HTTPd or JBoss manage TCP sockets, threads, the HTTP protocol, the compilation of scripts, and thousands of other base services. By leveraging the work done by the dedicated individuals who produced those core services, you were able to rapidly build an application that provided value to your users. The Axeda Platform is analogous to a Web server or a Web application server, but it provides services and a design paradigm that make connected product development streamlined and efficient.

Anatomy of an Axeda Connected Product

Connected Products can be anything that your product and imagination can offer, but it is helpful to pause for some common considerations that apply to most, if not all, of these types of solutions.
Getting Connected - Bring your product or equipment to the network so that it can provide information to the solution and react to commands and configuration changes orchestrated by the application logic.
Manage and Orchestrate - Script your business logic in the cloud to tie together remote information with information from other business systems or cloud-based services, react to real-time conditions, and facilitate batch operations to synchronize, analyze, and integrate.
Present and Report - Build your user experiences, enabling people to interact with your connected product, manage workflows around business processes, or facilitate data analysis.
Let's take a look at the Axeda Platform services that help with each of these solution considerations.

Getting Connected

Wired & Wireless

Getting connected can take many shapes, depending on the environment of your product and the economics of your solution. The Axeda Platform makes no assumptions about connectivity, but instead provides features and functionality to help you connect. For wireless applications, especially those which may use cellular or satellite communications, the speed and cost of communication will be an important factor.
For this reason, Axeda has created the Adaptive Machine Messaging Protocol (AMMP), a cross-platform, efficient HTTP-based protocol that you can use to communicate bi-directionally with the platform. The protocol is expressive, robust, secure, and most importantly, able to be implemented on a wide range of hardware and programming environments. When you are faced with connecting legacy products that may be communicating with a proprietary messaging protocol, the Axeda Platform can be extended with Codecs to "learn" your protocol by translating your device's communication format into a form that the platform can understand. This is a great option for retrofitting existing, deployed products to get connectivity and value today, while designing your next generation of products with AMMP support built in.

Manage and Orchestrate

The Data Model defines the information and its behavior in the Axeda Platform.

Rules

Rules form the heart of a dynamic connected application. There are three types of rules that can be leveraged for your orchestration layer:

Expression rules run in the cloud, and are configured and associated with your assets through the Axeda Admin UI or SDK. These rules have an If-Then-Else structure that's easy to create and understand. They're like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature, and you define a rule along these lines:

IF temperature > 80 THEN CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When the condition is met, the rule creates an alarm with name "High Temp" and severity 100. Learn more about Expression Rules.

State Machines help organize Expression Rules into manageable groups that apply to assets when the assets are in a certain state. For example, if your asset were a refrigerated truck, and you were interested in receiving an alert when the temperature within the cargo area rose above a preset threshold, you would not want this rule to be applied when your truck asset is empty and parked in the distribution center lot. In this case, you might organize your rules into a state machine called "TruckStatus". The TruckStatus state machine would then consist of a set of states that you define, and the rules that execute when the truck is in a particular state. For example:

State "Parked": IF doorOpen THEN …
State "In Transit": IF temperature > 40 THEN …
State "Maintenance": <no rules>

You can learn more about state machines in an upcoming technical article.

Scripting

Using Axeda Custom Objects, you can harness the power of the Axeda SDK to gain access to the complete set of platform data and functionality, all within a script that you customize. Custom Object scripts can be invoked in an Expression Rule to provide customized and flexible business logic for your application. Custom Object scripts are written in a powerful scripting language called Groovy, which is 100% Java-syntax compatible. Groovy also offers very modern, concise syntax options that make your code simple and easy to understand. Groovy can also implement the body of a web service method. We call this Scripto. Scripto allows you to write code and call that code by name through a REST web service. This allows a client application to call a set of customized web services that return exactly the information and format needed by the application. Here is a beginning tutorial on Scripto. This site includes many Scripto examples called by different Rich Internet Applications (RIA).
Location-Based Services

Knowing where something is, and where it has been, opens a world of possible application features. The Axeda Platform can keep track of an asset's current and historical location, which allows your applications to plot the current position of your assets on a map, or show a breadcrumb trail of where a particular asset has been.

Geofences are virtual perimeters around geographic locations. You can define a geofence around a physical location by describing a center point and radius, or by "drawing" a polygon around an arbitrary shape. For instance, you may have a geofence around the Boston metro that is defined as the center of town with a 10-mile radius. You may then compare an asset's location to this geofence in a rule and trigger events to occur:

IF InNamedGeofence("Boston") THEN CreateAlarm(…)

You can learn more about geofences and other location-oriented rule features in an upcoming tutorial.

Integration Queue

In today's software landscape, almost no complete solution is an island unto itself. Business systems need to interoperate by sharing data and events, so that specialized systems can do their jobs without tight coupling. Messaging is a robust and capable pattern for bridging the gap between systems, especially those that are hosted in the cloud. The Axeda Platform provides a message queue that can be subscribed to by external systems to trigger processes and workflows in those systems, based on events that are being tracked within the platform. A simple Expression Rule can react to a condition by placing a message in the integration queue as follows:

IF Alarm.severity > 100 THEN PublishObject()

A message is then placed in the queue describing the platform event, and another business system may subscribe to these messages and react accordingly.

Web Services

Web Services are at the heart of a cloud-based API stack, and the Axeda Platform aims to provide both comprehensiveness and flexibility. The platform exposes Web Service operations for all platform data and configuration metadata. As a result, you can determine the status of an asset, query historical data for assets, search for assets based on their current location, or even configure expression rules and other configuration settings, all through a modern Web Service API using standard SOAP and REST communication protocols.

Scripto

Web Service APIs simplify system integration in a loosely coupled, secure way, and we have a commitment to offering a comprehensive collection of standard APIs into the Axeda Platform. But we can't have an API that suits every need exactly. You may want data in a particular format, such as CSV, JSON, or XML. Or some logic should be applied, and it's inefficient to query lots of data to look for the bit you want. Wouldn't you rather make the service on the other side do exactly what you want, and give it to you in exactly the format you need? That is Scripto – the bridge between the power and efficiency of the Axeda Custom Object scripting engine and a Web Service client. Using Scripto, you can code a script in the Groovy language, using the Axeda SDK and potentially mashing up results from other systems, all within the platform, and expose your script to an external consumer via a single, REST-based Web Service operation. You create your own set of Web Services that do exactly what you want. This powerful combination lets you simplify your Web Service client code and gives you easy access to, and maintainability of, the scripted logic.
Present

Rich Internet Applications are a great way to build engaging, information-rich user experiences. By exposing platform data and functions via Web Services and Scripto, you can use your tool of choice for developing your front end. In fact, if you choose a technology that doesn't require a server-side rendering engine, such as HTML + AJAX, Adobe Flash, or Microsoft Silverlight, then you can upload your application UI files to the Axeda Platform and let the platform serve your URL!

Additional references for using RIAs on the Axeda Platform:
Axeda Sample Application: Populating A Web Page with Data Items
Extending the Axeda Platform UI - Custom Tabs and Modules

Far-Front-Ends and Other Systems

If a client-only RIA user interface is not an option for you, you can still use Web Services to integrate platform information into other server-side presentation technologies, such as a Microsoft SharePoint portal or a Java EE Web Application. You can also get lightning-fast updates for your users with Axeda Machine Streams, which uses ActiveMQ JMS technology to stream data in real time from the Axeda Platform to your custom data warehouses. While your users are viewing a drill-down on a set of assets, they can receive asynchronous notifications about real-time data changes, without the need to constantly poll Web Services.

Summary

Building Connected Products and applications for them is an incredibly rewarding job. Axeda provides a set of core services and a framework for bringing your application to market.
View full tip
As of Dec. 23, 2021, ThingWorx 9.3.0 is available for download, and on Jan. 7, 2022, it will be available for Cloud Services.

What's new in ThingWorx 9.3?

Composer and Mashup Builder
- Enhanced ability to find entities, code references, and project dependencies with a new Referenced-by report feature.
- New Grid widget allows for improved visualization of row/columnar data, with improved performance, styling, and configuration experiences.
- Composer enhancements allow for faster configuration and application management in the areas of test execution and dynamic use of Master Mashups.

Analytics
- New configuration parameters (binningStrategy and allowOverlap) available for profiles to tailor application of the algorithm to the needs of domain use cases.
- New option to utilize K-fold cross validation to improve the quality of predictive models created on limited data sets, using an industry-standard technique.
- Automated treatment of missing values to reduce data preparation before applying advanced analytics algorithms.

Foundation
- Query performance improvements with indexing enable quick "search" queries across thousands of connected assets and pre-filtering of query result sets for better overall query performance.
- Upgrade script improvements for logging, script execution, and error handling, which simplify the upgrade process.

Remote Access and Control
- Improvements to Remote Access and Control include Auto & Click to Launch capabilities that streamline the remote access startup process by starting the support application you need for you. New connectivity options enable user control and selection of the local IP and port addresses used in session creation.

Software Content Management
- Instructions can now be configured to execute either synchronously or asynchronously for a streamlined execution process.
- Added support for migrating older audit data from the audit data table into the format used by the new audit database.

View release notes here and be sure to upgrade to 9.3!

Stay Connected,
Rachel
View full tip
Distributed Timer and Scheduler Execution in a ThingWorx High Availability (HA) Cluster
Written by Desheng Xu and edited by Mike Jasperson

Overview

Starting with the 9.0 release, ThingWorx supports an "active-active" high availability (or HA) configuration, with multiple nodes providing redundancy in the event of hardware failures as well as horizontal scalability for workloads that can be distributed across the cluster.

In this architecture, one of the ThingWorx nodes is elected as the "singleton" (or lead) node of the cluster. This node is responsible for managing the execution of all events triggered by timers or schedulers – they are not distributed across the cluster.

This design has proved challenging for some implementations, as it presents the potential for a ThingWorx application to generate an imbalanced workload if complex timers and schedulers are needed. However, your ThingWorx applications can overcome this limitation and still use timers and schedulers to trigger workloads that will distribute across the cluster. This article will demonstrate both how to reproduce this imbalanced-workload scenario and the approach you can take to overcome it.

Demonstration Setup

For purposes of this demonstration, a two-node ThingWorx cluster was used, similar to the deployment diagram below:

Demonstrating Event Workload on the Singleton Node

Imagine this simple scenario: You have a list of vendors, and you need to process some logic for one of them at random every few seconds.

First, we will create a timer in ThingWorx to trigger an event – in this example, every 5 seconds.

Next, we will create a helper utility that has a task that will randomly select one of the vendors and process some logic for it – in this case, we will simply log the selected vendor in the ThingWorx ScriptLog.

Finally, we will subscribe to the timer event and call the helper utility.

Now with that code in place, let's check where these services are being executed in the ScriptLog.

Look at the PlatformID column in the log… notice that the Timer and the helper utility are always running on the same node – in this case Platform2, which is the current singleton node in the cluster.

As the complexity of your helper utility increases, you can imagine how the workload will become unbalanced, with the singleton node handling the bulk of this timer-driven workload in addition to the other workloads being spread across the cluster.

This workload can be distributed across multiple cluster nodes, but a little more effort is needed to make it happen.

Timers that Distribute Tasks Across Multiple ThingWorx HA Cluster Nodes

This time let's update our subscription code – using the PostJSON service from the ContentLoader entity to send the service requests to the cluster entry point instead of running them locally.
const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "appKey": "INSERT-YOUR-APPKEY-HERE"
};
const url = "https://testcluster.edc.ptc.io/Thingworx/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";
let result = Resources["ContentLoaderFunctions"].PostJSON({
    proxyScheme: undefined /* STRING */,
    headers: headers /* JSON */,
    ignoreSSLErrors: undefined /* BOOLEAN */,
    useNTLM: undefined /* BOOLEAN */,
    workstation: undefined /* STRING */,
    useProxy: undefined /* BOOLEAN */,
    withCookies: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: url /* STRING */,
    content: {} /* JSON */,
    timeout: undefined /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: undefined /* STRING */,
    domain: undefined /* STRING */,
    username: undefined /* STRING */
});

Note that the URL used in this example - https://testcluster.edc.ptc.io/Thingworx - is the entry point of the ThingWorx cluster. Replace this value to match your cluster's entry point if you want to duplicate this in your own cluster.

Now, let's check the result again. Notice that the helper utility TimerBackend_Service is now running on both cluster nodes, Platform1 and Platform2.

Is this Magic? No! What is Happening Here?

The timer or scheduler itself is still executed on the singleton node, but now, instead of triggering the helper utility locally, the PostJSON service call from the subscription is routed back to the cluster entry point – the load balancer. As a result, the request is routed (usually round-robin) to any available cluster node that sits behind the load balancer and reports as healthy.

Usually, the load balancer will be configured with cookie-based affinity: the load balancer will route the request to the node that has the same cookie value as the request. Since this PostJSON service call is a RESTful call, any cookie value associated with the response will not be attached to the next request. As a result, cookie-based affinity will not impact the round-robin routing in this case.

Considerations to Use this Approach

Authentication: As illustrated in the demo, make sure to use an Application Key with an appropriate user assigned in the header. You could alternatively use username/password or a token to authenticate the request, but this could be less ideal from a security perspective.

App Deployment: The hostname in the URL must match the hostname of the cluster entry point. As the URL of your implementation is now part of your code, if you deploy this code from one ThingWorx instance to another, you will need to modify the hostname/port/protocol in the URL. Consider creating a variable in the helper utility which holds the hostname/port/protocol value, making it easier to modify during deployment (see the sketch below).

Firewall Rules: If your load balancer has firewall rules which limit traffic to specific known IP addresses, you will need to determine which IP addresses will be used when a service is invoked from each of the ThingWorx cluster nodes, and then configure the load balancer to allow the traffic from each of these public IP addresses. Alternatively, you could configure an internal IP address endpoint for the load balancer and use the local /etc/hosts name resolution of each ThingWorx node to point to the internal load balancer IP, or register this internal IP in an internal DNS as the cluster entry point.
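As a sketch of that deployment suggestion: assuming a hypothetical STRING property named clusterEntryPoint on the helper Thing (holding, say, "https://testcluster.edc.ptc.io/Thingworx"), the subscription could build the URL instead of hard-coding it:

// "clusterEntryPoint" is a hypothetical STRING property on this Thing holding
// the protocol, hostname, and port of the cluster entry point.
const url = me.clusterEntryPoint + "/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";

This way, only the property value needs to change when the code is deployed to another ThingWorx instance.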
View full tip
ThingWorx DevOps with Azure: The Comprehensive DevOps Guide
Written by Tori Firewind, IoT EDC

As promised in a previous post, attached here is a comprehensive guide to DevOps in ThingWorx, including tutorials and instructions for creating a continuous integration, continuous deployment (CI/CD) process for application development. There are also updated scripts and entities attached, including an entire sample application for importing, exporting, and testing an application in ThingWorx. From Docker and GitHub to Azure DevOps and Solution Central, this guide has it all. Learn how to perform your role in the DevOps process, whether administrator or developer, automate your deployments and testing, and create a more efficient process for publishing changes to production. A complete DevOps process like this really does facilitate faster and easier updates with fewer risks, fewer delays, and a better pathway to success.
View full tip
Hey Community Members – I have exciting news to share! Last week, PTC announced that ThingWorx was recognized as a front-runner IIoT platform in four leading analyst research reports on the industrial software market.

In alphabetical order, the reports PTC/ThingWorx was named in were:
ABI Research's Smart Manufacturing Platforms Competitive Ranking
The Forrester Wave™: Industrial IoT Software Platforms, Q3 2021
The Gartner® Magic Quadrant™ for Industrial IoT Platforms
IDC MarketScape for Industrial IoT Platforms and Applications for Manufacturing

Not only is PTC the only company included in all four reports, but it also placed as a leader in all four, which hopefully makes you feel confident in your choice to use ThingWorx. The research criteria the analyst firms use to make selections are unique to each firm, but to us, all of them measure the value ThingWorx can bring to an industrial business.

What value does ThingWorx bring to you?

Stay connected,
Rachel

Gartner, Magic Quadrant for Industrial IoT Platforms, Alfonso Velosa, Ted Friedman, Katell Thielemann, Emil Berthelsen, Peter Havart-Simkin, Eric Goodness, Matthew Flatley, Lloyd Jones, Kevin Quinn, 18 October 2021. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
View full tip
To simplify the development of IIoT applications and solutions on the ThingWorx platform, we introduce the concept of Building Blocks. The intent of Building Blocks is to ease the creation of your own solutions and the customization of PTC's solutions. These Building Blocks are domain-specific business logic premade for reusability, which means you won't need to build from scratch on ThingWorx and can accelerate your time to value.

What do we mean by Building Blocks? Building Blocks are premade components that enable modular software development. They are reusable, replaceable packages of functionality that can be connected into an architecture framework. Building Blocks allow for quicker development and customization of solutions and applications.

What are the different types of Building Blocks?

Connectors - Leverage the same connectors we use for PTC solutions for better overall application performance and seamless transfer of data from disparate devices and systems. Identify the devices and systems you would like to monitor and let the connector do the rest.

Domain Models - Incorporate behavior and data from your devices and systems into a conceptual model of the domain, prepackaged based on common use cases. You can also leverage our out-of-the-box models to connect and build dependencies between domains.

Business Logic - Encode real-world business rules that determine how data can be created, stored, and changed. Create KPIs for your devices and systems with these rules, and create alerts based on your unique parameters.

UI - Construct widgets to view or analyze key data points in a graphical user interface that you can customize and leverage to extend functionality. Created with manufacturing and service use cases in mind, the UI building blocks are predesigned to make it easy to view and understand data.

Building Blocks build upon the ThingWorx platform and are the base of all of PTC's current and future solutions. We will continue to discuss Building Blocks in future posts, but in the meantime: How will you leverage Building Blocks in your own solutions? Is there more you want to know?

Stay connected,
Rachel
View full tip
Integrating LDAP authentication into ThingWorx is fairly simple. Since release 5.0 and later, the out-of-the-box (OOTB) ThingWorx authenticators already include the necessary code to validate a user's credentials against an LDAP server. These authenticators look to see if an LDAP server is connected every time a user attempts a login, and then further check to see if this user exists in the LDAP server. If the username does exist in LDAP, then ThingWorx will check whether the password entered matches the password stored within LDAP. If the password entered does not match the password stored in LDAP, then ThingWorx will next check whether the password matches the one stored in ThingWorx for that user. So in order for a user to log in to ThingWorx, they must have a user Thing created for them within ThingWorx Composer (this can be done programmatically, see below), and a valid password which matches either an LDAP account password or the password as it is set for that user on the Thing in ThingWorx Composer.

The first thing a developer needs to do to integrate LDAP is configure their ThingWorx instance so that it can find the LDAP server and access its contents. This is done by importing an XML file which will allow the developer to see a Thing that comes with the ThingWorx platform (see attached file "directoryServices.xml"). The Thing that needs configuring is called ApacheDS3, and it is a DirectoryServices Thing.

The largest task in integrating LDAP into ThingWorx involves importing the LDAP users into ThingWorx. Getting the LDAP usernames out of the LDAP server will vary depending on which distribution of LDAP is in use. However, once the developer acquires this information, using it to create users in ThingWorx is simple. The developer will need to create a Thing Service which creates a dummy password and assigns the LDAP username in the parameters. Then they can pass the parameters into the CreateUser service of the "EntityServices" resource:

var params = {
    password: "SOMETHING_COMPLICATED", // dummy password does not matter, but you don't want an accidental match, so make it something very complicated, and standard to your company's LDAP users
    name: ldap_username, // retrieve from LDAP
    description: "This user was created as part of LDAP import", // can be whatever you'd like
    tags: undefined
};
Resources["EntityServices"].CreateUser(params); // no return

Any users created in this way will be redirected to Squeal if there is no home mashup assigned, so you will have to add an additional bit of code which assigns the home mashup to each user, looping through something like this:

var params = {
    name: "dashboard" // replace this with the String name of a dashboard (must exist)
};
Users[username].SetHomeMashup(params);

For full steps on integrating LDAP and ThingWorx, including instructions on how to set up an ApacheDS test LDAP server, see the ThingWorx support article titled "Integrate LDAP Authentication and Import LDAP User Directory into Thingworx" (reference document – CS221840).
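Putting the two snippets together, a minimal sketch of an import service might look like the following. The infotable ldapUsers (a single STRING column, username) is a hypothetical stand-in for however your LDAP distribution returns its user list, and "dashboard" is a placeholder mashup name:

// Loop over a hypothetical infotable of LDAP usernames, creating a ThingWorx user
// for each and assigning a home mashup so new users are not redirected to Squeal
for (var i = 0; i < ldapUsers.rows.length; i++) {
    var ldap_username = ldapUsers.rows[i].username;
    Resources["EntityServices"].CreateUser({
        password: "SOMETHING_COMPLICATED", // dummy password, see note above
        name: ldap_username,
        description: "This user was created as part of LDAP import",
        tags: undefined
    });
    Users[ldap_username].SetHomeMashup({ name: "dashboard" }); // mashup must exist
}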
View full tip
Thundering Herd Scenarios in ThingWorx
Written by Jim Klink, Edited by Tori Firewind

Introduction

The thundering herd topic is quite vast, but it can be broken down into two main categories: the "data flood" and the "reconnect storm". One category involves what happens to the business logic (the "data flood" scenario) and affects both Factory and Connected Products use cases. The other category involves bringing many, many devices back online in a short time (the "reconnect storm" scenario), which largely influences Connected Products scenarios.

Citation: https://gfp.sd.gov/buffalo-roundup/

Think of Connected Products as a thundering stampede of many small buffalo, which makes a Factory thundering herd scenario a stampede of a couple of massive brontosauruses: much fewer in number, but still with lots of persisted data to send back in. This article focuses on how to manage the "reconnect storm" scenario by delaying the return of individual buffalo to reduce the intensity of the stampede. Here you will find the necessary insights on how to configure your ThingWorx edge applications to minimize the effect of a server-down scenario.

The C-SDK will be used for examples, but the general principles will apply to any of the ThingWorx edge options (EMS, .NET SDK, Java SDK). This article also references the ExampleAgent application, which is built using the C-SDK. The ExampleAgent is available for download as an attachment to this post. It offers an easily configurable edge solution for Windows and Linux that can be used for the following purposes:
Foundation for rapid development of a robust custom edge application based on the ThingWorx C-SDK, for use by customers and partners.
Full-featured, well-documented 'C' source code example of developing an application using the ThingWorx C-SDK.

A "local" issue is one which affects a single agent, such as a loss of connectivity due to hardware malfunction or local network issues. Local issues are quite common in the IoT world, and recovery usually isn't too much of a challenge. A "global" issue occurs when many agents disconnect simultaneously, usually because there is an issue with the ThingWorx server itself (though the load balancer, Connection Server, or web-hosting software could also be the source). Perhaps it is a scheduled software update, perhaps it is unexpected downtime due to issues, but either way, it's important to consider how the fleet of agents will respond if ThingWorx suddenly becomes unavailable.

There are two broad issues to consider in a situation like this. One is maintaining the agent's data so that it can be sent when the connection becomes available again. This can be done in the C-SDK using an offline file storage system, which includes properties, events, and services. Offline storage is configured in the twConfig.h file in the C-SDK. The second issue is the number of agents seeking to reconnect to the server in a short period of time once the server is available again.

Of course, if revenue is based on uptime, perhaps persisting data is less critical and can be lost, making things simple. However, in most cases, this data will need to be stored on the edge device until reconnect. Then, once the server comes back up, suddenly all of this data comes streaming in from all of the many edge devices simultaneously.
This flood of data and the simultaneous reconnection of a multitude of agents can create what is called a "thundering herd" scenario, in which ThingWorx can become backlogged with data-processing requests, data can be lost if the queues are overwhelmed, or, worst case, the Foundation server can become unresponsive once again. This is when outages become costly and drag on longer than necessary. Several factors can lead to a thundering herd scenario, including the number of agents in the fleet, the amount of stored data per agent, the amount of data ordinarily sent by these devices (which is sent side-by-side with the stored data upon reconnection), and how much processing occurs once all of this data is received on the Foundation server.

The easiest way to mitigate a potential thundering herd scenario, and this is considered a ThingWorx best practice as well, is to randomize the reconnection of devices. Each agent can be configured to delay itself by a random amount of time before attempting to reconnect after a loss of connectivity. This random delay then distributes the number of assets connecting at a time over a longer period, thus minimizing the impact of the reconnections on ThingWorx. There are several configuration settings that help in this regard.

Configuring the Herd (C-SDK)

The C-SDK is great at managing agent connectivity, with many options for fine-tuning the connections. The web-socket connection is managed by the SDK layer of the edge device (which also manages the retry process). To review the source code for how connections are made, see the C-SDK file src\api\twApi.c, specifically the function called twApi_Connect().

The ExampleAgent uses custom configuration files to manage this process from the application layer, a more robust and complete solution. Detailed here are the configuration options in the ExampleAgent attached to this post, most of which can be found in its ws_connection.json configuration file:
connect_timeout is used throughout the C-SDK as the time to wait for a web-socket connection to be established (i.e., the 'timeout' value). This is the maximum delay for the socket to be established or to send and receive data. If the connection is established sooner, a success code is returned; if it is not established within the configured timeout period, an error is returned. Setting this value to 10 seconds is reasonable, for reference.
connect_retries is the number of times the SDK will attempt to establish a connection before the twApi_Connect() function returns an error. Setting this to -1 will cause the SDK to stay in the loop indefinitely until a connection is established.
connect_retry_interval is the delay between connection retries.
max_connect_delay is used as a delay before even entering the loop that uses the connect_retries and connect_retry_interval parameters to establish the connection. The SDK function twAddConnectionDelay() is called, which delays by a random amount of time between 0 seconds and the value given by this parameter. This random delay is only used once per call to twApi_Connect(). It is therefore the parameter most critical to preventing thundering herd scenarios (as discussed above).

Configuring the SDK agents to reconnect in this way is critical, but there are also some drawbacks, namely that while the twApi_Connect() function is running, there is no clean way to shut the agent down.
Likewise, the agent only performs ONE randomized delay per call of the twApi_Connect() function, meaning that if reconnection cannot occur immediately, it is still possible for many agents to try to reconnect at once. Consider this when determining what values to assign to these parameters.

ExampleAgent Design

The ExampleAgent provided here is a fully implemented, configurable application, like the EMS in terms of functionality, but containing only simulated data. The data-capture component is missing here and has to be custom developed. Attached alongside this source code is extensive documentation that explains how to get the application set up and configured. This is not meant to be used directly in a production environment.

Please note that the ExampleAgent is provided as-is; it is not an officially released product by PTC. This disclaimer covers the ExampleAgent source code, build process, documentation, and deliverables, as well as any ExampleAgent modifications to the official releases of the C-SDK or the SCM extension product. Full and sole responsibility for the use, deployment, reliability, and accuracy of any ExampleAgent-related code, documentation, etc. falls to the user, and any use of the ExampleAgent is an implicit agreement with this disclaimer. The ExampleAgent was developed by PTC sales and services to help in the Edge application development process. For assistance, support, or additional development, an authorized statement of work is needed. Please note: PTC support is not aware of the existence of the ExampleAgent and cannot provide assistance.

Because of the small downside to configuring the twApi_Connect() function directly, as discussed above, an alternative approach is given here as well. The ExampleAgent module ConnectionMgr.c controls the calling of twApi_Connect() on a dedicated connection thread. The ConnectionMgrThreadFunction() contains the source code necessary for understanding this process.

The ConnectionMgr.c workflow and source code visualization via Microsoft Visual Studio are shown in the diagrams below. The ExampleAgent defines its own randomized delay to mitigate the thundering herd scenario while still deploying an edge system that responds to shutdown requests cleanly. In this case, the randomized delay is configured by the parameter reconnect_random_delay_seconds in the agent_config.json file. Since the ConnectionMgrThreadFunction() controls the calling of twApi_Connect(), the ConnectionMgrThreadFunction() will delay by the randomized value EVERY time before calling this reconnect function. A separate thread is created to call the reconnect function so that there are still resources available for data processing and to check for shutdown signals and other conditions.

Recommended Values

These recommendations are based around managing the reconnection process from the application layer. They may be different if the C-SDK is configured directly, but creating application-layer management is recommended and provided in the ExampleAgent attached. The ExampleAgent is configured by default to simplify the SDK layer's involvement: its settings tell the SDK layer to try to connect just once, after a delay of just 1 second. There is no official recommendation for these values, because every use case is different and will require different fine-tuning to work well.
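As a rough illustration only (the parameter names come from the ws_connection.json file described above, but the values and their units are assumptions, not recommendations), such a default configuration might look like:

{
    "connect_timeout": 10000,
    "connect_retries": 1,
    "connect_retry_interval": 1000,
    "max_connect_delay": 0
}

Here max_connect_delay is shown as 0 on the assumption that the ExampleAgent leaves the randomization to the application layer, as described next.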
The reconnect_random_delay_seconds setting in agent_config.json then handles the randomized retry delay from within the application layer of the ExampleAgent.

Conclusion

To reduce the chances of a thundering herd scenario, configure the fleet to reconnect after differing random delays. The larger the random delay times, the longer it takes for the fleet to come back online and for fleet data to be received. While more complex ThingWorx deployment architectures (such as container-based deployments like Kubernetes or ThingWorx High Availability (HA) clusters) can also help address the increased peak load during a thundering herd event, randomized reconnect delays remain an effective tool.
View full tip
Sometimes you need the values from different ThingTemplate members in ONE grid. Therefore it would be great if you could join two "GetImplementedThingsWithData" results into a common one. Here is a script that works generally, as long as you don't mix datatypes on same-named columns. I'm very interested to see if someone can find a much easier solution. The Union function was the only one I found suited for the task, but it needs preparation of the infotables upfront.

Input:
Table1: Infotable
Table2: Infotable
Output: Infotable

Here is the snippet:

// Define params for an infotable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};
// Define column 1
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';
// Two 1-column infotables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);
// Define the cell to add to the infotables
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";
// Loop through Table1's DataShape fields
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}
// Loop through Table2's DataShape fields
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}
// Use inner-join functionality to keep only the columns that exist in both
var params = {
    columns2: "field" /* STRING */,
    columns1: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);
// Loop over the result to build a column list string
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}
// Reduce Table1 to the common columns only
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);
// Reduce Table2 to the common columns only
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);
// At the END, JOIN the tables together (does not work if columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
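For context, here is a sketch of how the snippet above might be fed; the two ThingTemplate names are placeholders for your own templates:

// Build the two input infotables from two different ThingTemplates (names are placeholders)
var Table1 = ThingTemplates["TemplateA"].GetImplementedThingsWithData();
var Table2 = ThingTemplates["TemplateB"].GetImplementedThingsWithData();
// ...then run the join script above; "result" holds the rows restricted to the shared columns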
View full tip
To set up Single Sign-On with Windchill, we can just follow the steps in the Windchill extension guide. However, there is a significant problem with using WebSocket connections for the EMS or Edge SDKs from devices, since the Apache server in front of Windchill blocks the "ws" and "wss" protocols; it behaves like a proxy-server issue. There are a couple of ways to avoid this, but I suggest changing the filter mappings for the SSO filter. The Windchill extension guide says to set filters for all incoming ThingWorx URLs by using the "/*" filter mapping. Instead, use the settings below in the ThingWorx server's web.xml to avoid the problem stated above. They look long and complicated, but they are basically the default filter mappings for "AuthenticationFilter", minus the WebSocket-related URLs.

<!-- Windchill Extension SSO Start-->
<filter>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderAuthenticationFilter</filter-class>
  <init-param>
    <param-name>idpLoginUrl</param-name>
    <param-value>http(s)://<SERVERHOSTURL>/Windchill/wtcore/jsp/genIdKey.jsp</param-value>
  </init-param>
</filter>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/extensions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-authenticate/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-login/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-confirm-creds/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/action-change-password/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingworxMain.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingworxMain.html/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Server/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ApplicationKeys/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Networks/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Dashboards/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DirectoryServices/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Authenticators/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/PersistenceProviderPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/tunnel/wsadapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/tunnel/adapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Logs/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Resources/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Subsystems/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Users/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Home/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/StateDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/StyleDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ScriptFunctionLibraries/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/AtomFeedService/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Importer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ImageEncoder/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Exporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExportTheme/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExportDefaultEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ImportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataExporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataImporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Widgets/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Groups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Things/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingTemplates/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ThingShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/DataTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ModelTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Composer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Squeal/index.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Runtime/index.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Mashups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Menus/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/MediaEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/loaders/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/demos/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExtensionPackageUploader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/ExtensionPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/FileRepositoryUploader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/FileRepositoryDownloader/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/FileRepositories/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/xmpp/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/LocalizationTables/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/Organizations/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/RemoteTunnel/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderAuthenticationFilter</filter-name><url-pattern>/PersistenceProviders/*</url-pattern></filter-mapping>
<filter>
  <filter-name>IdentityProviderKeyValidationFilter</filter-name>
  <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderKeyValidationFilter</filter-class>
  <init-param>
    <param-name>keyValidationUrl</param-name>
    <param-value>http(s)://<SERVERHOSTURL>/Windchill/login/validateIdKey.jsp</param-value>
  </init-param>
</filter>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/extensions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-authenticate/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-login/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-confirm-creds/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/action-change-password/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingworxMain.html</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingworxMain.html/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Server/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ApplicationKeys/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Networks/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Dashboards/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DirectoryServices/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Authenticators/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/PersistenceProviderPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/tunnel/wsadapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/tunnel/adapter.jsp</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Logs/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Resources/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Subsystems/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Users/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Home/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/StateDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/StyleDefinitions/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ScriptFunctionLibraries/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/AtomFeedService/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Importer/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ImageEncoder/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Exporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExportTheme/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ExportDefaultEntities/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ImportDatabase/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataExporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataImporter/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Widgets/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Groups/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingPackages/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/Things/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingTemplates/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ThingShapes/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/DataTags/*</url-pattern></filter-mapping>
<filter-mapping><filter-name>IdentityProviderKeyValidationFilter</filter-name><url-pattern>/ModelTags/*</url-pattern></filter-mapping>
<filter-mapping>
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <!-- Windchill Extension SSO End-->
Those who have been working with ThingWorx for many years will have noticed the work done around ingress stress testing and performance optimization. Adding InfluxDB as a time-series data persistence provider really helped level up these capabilities while simultaneously decreasing the overall resources required by the infrastructure. However, with this ease of ingestion comes a hidden challenge: the query and processing performance needed to work that data into something useful.

Often It's Too Much Data
In general, most customers that I work with want to collect far too much data - without knowing what it will be used for, or what processing will be required to make it usable and useful. This is a common trap in how people envision IoT projects: infrastructure providers say that cloud storage and compute resources are abundant and cheap, so teams are encouraged to collect as much data as possible. This buildup of data means that more effort needs to be spent working it into something useful (data engineering/feature extraction) and addressing common data issues (quality, gaps, precision, etc.). That might be fine for mature companies with large data analytics teams; however, that is a makeup I've only seen in the largest of our customers. Some advice - figure out what you need and how you'll use it, and then collect that. Work on extracting value today rather than hoping that extra data collected now will provide some insights years from now.

Example - Problem Statement
You've got your Thing Model designed and your edge devices connected. Now you've got data flowing in and being stored every 5 seconds in InfluxDB. Great progress! Now on to building the applications which cover the various use cases.

The raw data will most likely need to be processed, and potentially even significantly transformed into other information, in order to make it useful. Turning a "powered on and running" BOOLEAN into an "hour meter" INTEGER is a simple example. Then you may need to provide a report showing equipment run time hours by day over a month. The maintenance team may also have asked to look for usage patterns which lead to breakdowns, requiring extracting other data points from the initial one, like the number of daily starts, average daily run time, and average time between restarts.

The problem here is that unless you have prepared these new data points and stored them as well (say, in a Stream), you are going to have to build these data sets on the fly, which can be time and resource intensive and will not give you the response time expected. As you can imagine, repeatedly querying and processing large volumes of unchanging raw data has both resource and time implications - so this is why data collection and data use need to be thought about separately.

Data Engineering
In the above examples, the key is creating new data points which are calculated progressively throughout normal operation. This not only makes the information that you want available when you need it - in the right format - but it also significantly reduces resource requirements by avoiding the constant reprocessing of raw data. It also helps with managing data purging: as you create and store usable insights, you can eventually just archive away your old raw data streams.

Direct Database Queries vs. ThingWorx Data Services
Despite the above rule of thumb, sometimes a simple, well-structured database query can get you exactly what you need, and do so quite quickly.
This is especially true for InfluxDB when working with extremely large time-series datasets. The challenge here is that ThingWorx persistence providers abstract away the complexity of writing one's own database queries, so we can't easily get at the database's raw power; we are forced to query back more data than needed and work it into a usable format in memory (which is not fast).

Leveraging the InfluxDB API using the ContentLoader Technique
As InfluxDB's API is 100% REST, we can access it using the built-in ThingWorx ContentLoader services; a minimal sketch of the approach follows at the end of this post. Check out this demonstration and explanation video where I talk about how to interact directly with InfluxDB in order to crush massive time-series data and get back much more usable and manageable data sets. Note that you should use a read-only database user for this, as you should never modify the ThingWorx databases directly - doing so creates untested scenarios which may lead to data corruption.

Optimizing ThingWorx query performance with the InfluxDB REST API - YouTube
InfluxToolBox ThingWorx demo project (by T. Wobben)
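For reference, here is a minimal sketch of the ContentLoader technique described above, written as ThingWorx service code. The InfluxDB host, the "thingworx" database, the "history" measurement, the field name, and the read-only "influx_ro" user are all assumptions - adapt them to your own persistence provider configuration.

    // Minimal sketch of querying InfluxDB 1.x directly from a ThingWorx service.
    // Host, database, measurement, field and user below are assumptions.
    var query = 'SELECT MEAN("value") FROM "history" WHERE time > now() - 30d GROUP BY time(1d)';
    var params = {
        url: "http://influxdb-host:8086/query?db=thingworx&q=" + encodeURIComponent(query),
        username: "influx_ro",   // read-only user: never write to the ThingWorx databases
        password: "********",
        timeout: 60
    };
    // GetJSON returns the parsed response; InfluxDB nests the rows under
    // results[0].series[0].values as [timestamp, value] pairs
    var response = Resources["ContentLoaderFunctions"].GetJSON(params);
    var rows = response.results[0].series[0].values;

The point of the design is that InfluxDB does the heavy aggregation work (here, a daily mean over 30 days) and only the small result set crosses the wire, instead of pulling back every raw row and reducing it in ThingWorx memory.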
Recently I have been accompanying an integration partner and end customer around an issue experienced with ThingWorx resource exhaustion. Early on it seemed like this was an issue with the ThingWorx Azure IoT Hub Connector, as it would freeze up and become unresponsive. Following a root cause analysis, it became clear that the actual cause was the absence of a number of standard cloud design patterns which, had they been used, would have automatically adapted the operation of the overall solution to be far more resilient as well as resource optimized.

The way that the logic was structured, it prioritized job execution on entities with the oldest last success time and would continue to retry these executions (IoT Direct Methods) every few seconds until successful. There were a number of problems here, but I'll unpack a few in order to tie the problem to the solution via design patterns.

1) No exception handling
When the direct method execution failed/timed out, or the system reported being unable to execute the remote service, this response was not used to adapt the solution's behavior.
2) No backoff retry mechanism
As exceptions were not caught, an adaptive retry mechanism with incremental or exponential backoff could not be leveraged to limit the impact of the build-up of failing retries (see the sketch at the end of this post).
3) No exception tracking
Tracking and counting exceptions would allow powering an exponential backoff retry algorithm (with jitter), a Cancel or Circuit Breaker pattern (stop doing something which is just broken), as well as provide alerting to address the specific areas of the distributed solution experiencing issues.
4) Conflicting priorities
It was interesting to see how the desire for checks and balances (having all needed data) conflicted with system resiliency. Retries and resource usage built up exponentially due to the transient error instead of being backed off. Trying so hard to get the needed data from failing sensors meant that operational sensors were deprioritized and their data was not received either - spreading the localized issue to the whole system.

Around the time that I shared my recommendations and some examples of how to make the solution more resilient, one of my technical colleagues at Microsoft shared some extremely interesting and relevant design patterns documented by Microsoft as part of the "Microsoft Azure Well-Architected Framework". This framework, with its included Design Patterns for specific cloud application goals, allows applying well-known, industry-standard approaches to the challenges of large-scale distributed enterprise systems (reliability, performance, cost optimization). She later shared this blog post describing exactly the exponential backoff retry with jitter pattern which we had together recommended to the systems integrator.

What's interesting for us ThingWorx people is that this framework from Microsoft is about well-architected cloud solutions in general and does not specifically reference the Azure stack, and as such many of these approaches and design practices can be employed in your ThingWorx applications. What are you waiting for? Go check them out!
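As a concrete illustration of point 2 above, here is a minimal sketch of exponential backoff with "full jitter" written as ThingWorx service code. The me.failureCount and me.nextRetryTime properties and the ExecuteDirectMethod service are hypothetical stand-ins for whatever call is failing - this sketches the pattern, it is not a drop-in fix for the Azure IoT Hub Connector.

    // Minimal sketch of exponential backoff with "full jitter".
    // me.failureCount (INTEGER) and me.nextRetryTime (DATETIME) are
    // hypothetical Thing properties maintained by this logic.
    var baseDelayMs = 2000;            // first retry roughly 2 seconds out
    var maxDelayMs = 15 * 60 * 1000;   // cap the backoff at 15 minutes

    try {
        me.ExecuteDirectMethod();      // hypothetical call that may fail or time out
        me.failureCount = 0;           // success resets the backoff window
    } catch (err) {
        me.failureCount = me.failureCount + 1;   // track exceptions instead of ignoring them
        var cap = Math.min(maxDelayMs, baseDelayMs * Math.pow(2, me.failureCount));
        var delayMs = Math.floor(Math.random() * cap);   // "full jitter": uniform in [0, cap)
        me.nextRetryTime = new Date(new Date().getTime() + delayMs);
        // the scheduling logic skips this entity until nextRetryTime has passed;
        // after many consecutive failures, a Circuit Breaker could stop retries entirely
    }

The jitter matters as much as the backoff: without it, a fleet of failing entities all retries at the same instants, producing synchronized load spikes instead of a smooth trickle.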
Interested in learning how others using and/or hosting ThingWorx solutions can comply with various regulatory and compliance frameworks?   Based on inquiries regarding the ability of customers to meet a wide range of obligations – ranging from SOC 2 to ISO 27001 to the Department of Defense’s Cybersecurity Maturity Model Certification (CMMC) – PTC's IoT Product Management and EDC teams have collaborated on a set of detailed articles explaining how to do so.   Please check out the ThingWorx Compliance Hub (support.ptc.com login required) for more information!
We will host a live Expert Session: "Top 5 ThingWorx environment monitoring best practices" on March 25th, 10h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Top 5 ThingWorx environment monitoring best practices
Date and Time: March 25th, 10h00 EST
Duration: 1 hour
Host: Tori Firewind, Tim Atwood and Dave Bernbeck from the Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/top-5-thingworx-monitoring-best-practices

In this session, we will be reviewing the main monitoring practices to keep a healthy environment and discuss the main issues raised by the audience. Bring your questions!

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

ThingWorx Active Active Clustering - This session covers the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link
Upgrade to ThingWorx 9 – How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to ThingWorx 9. Recording Link
Top 5 items to check for ThingWorx Performance Troubleshooting - How to troubleshoot performance issues in a ThingWorx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
This project was developed out of curiosity about how ThingWorx communicates with sensors and vice versa. A Smart Parking system idea immediately came to mind, and I started working on it. While heading from home to the office, I always worry about finding a parking space at the office, especially in the rainy season. This project helps users find a parking space. The project has 4 sections, as follows:

1) Smart Parking system: A system application developed in ThingWorx that guides the user to an empty parking space. Sensors placed at each parking slot sense the presence of a car. A program running on a Raspberry Pi board collects the sensor information and sends it to the Smart Car Parking System application in ThingWorx. The data received from the sensors is displayed on a ThingWorx dashboard/mashup.
2) Live Traffic: This embeds a Google Map and shows the traffic around the user's current location.
3) Traffic Blog: If users are visiting a place and have questions regarding parking, traffic conditions, etc., they can post their questions here, and people around that area can answer them. Questions are not restricted to parking; they can also cover the best places to visit in an area, restaurants, shops, and so on.
4) Automobile Wiki: This page provides documented help on anything related to automobiles, e.g., how to change car tyres, how to change car wipers, etc.
Troubleshooting platform issues is generally done using a layered approach, similar to a simplified OSI model. From bottom to top, the following layers represent the areas to analyze during each step:

1. Physical (server, power, wired connections): check the server status and condition, CPU and memory levels.
2. Software (operating system, Tomcat, Java versions, compatibility, and configuration): refer to the compatibility matrix to ensure the requirements are met; verify the Tomcat Java configuration. Note: Tomcat manager's server status conveniently provides this information in one place.
3. Network: ensure proper connectivity, port availability, firewall configuration, and additional security, if applicable.
4. Application.

The main focus of this blog post is step 4. As the ThingWorx application is driven by Tomcat, the first tool available "out of the box" is the built-in Tomcat manager app. Clicking on "Server Status" provides information on versions, memory usage, processes, times, and thread counts. Keep in mind that the default Tomcat maximum thread count is 200.

Some additional tools that can assist in troubleshooting Java applications and gathering performance metrics are JavaMelody, New Relic, and Profiler4j. These have to be obtained, installed, and configured separately.

JavaMelody: a free and lightweight monitoring tool which does not do any profiling, and is safe to use in production environments. It comes with a series of plug-ins, including for Grails, Jenkins, and Jira.
New Relic: real-time Java application monitoring, featuring code deployment reports, transaction tracing across different tiers, and the ability to create alerts. A subscription fee applies.
Profiler4j: a free, open-source tool for profiling in Java. It is enabled by passing an argument at start-up with a path to the Profiler4j .jar file. It comes with several graphs and charts showing a call graph with method details, a call tree, a memory monitor, a class list, and thread monitoring.

From the application perspective, ThingWorx Composer provides the PlatformSubsystem and LoggingSubsystem:

PlatformSubsystem contains such services as GetPerformanceMetrics, GetSummaryInformation, and GetThingworxVersion, providing fundamental information for any troubleshooting scenario (see the sketch at the end of this post).
LoggingSubsystem contains the logs, log settings, and other monitoring values.

List of recommended tools for troubleshooting all layers:
Wireshark: monitors network traffic
Jstack: monitors memory consumption of specific threads
Dynatrace: system performance and web application performance
Jconsole: system or application performance
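As a quick illustration, here is a minimal sketch of calling those subsystem services from within a ThingWorx service. The service names are the ones listed above; the exact fields on the returned InfoTables vary by platform version, so inspect the results before relying on specific field names.

    // Minimal sketch of pulling health data from the built-in subsystems.
    // Field names on the returned InfoTables vary by platform version.
    var version = Subsystems["PlatformSubsystem"].GetThingworxVersion();
    var summary = Subsystems["PlatformSubsystem"].GetSummaryInformation();   // InfoTable
    var metrics = Subsystems["PlatformSubsystem"].GetPerformanceMetrics();   // InfoTable

    // write a quick snapshot into the ScriptLog for correlation with the other layers
    logger.info("ThingWorx " + version + " - performance metric rows: " + metrics.rows.length);

Running something like this on a schedule and logging (or persisting) the output gives a baseline to compare against when a performance issue is reported.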
How to Scale Vertically and Horizontally, and When to Use Sharding
Written by Mike Jasperson, VP of IOT EDC

Deployment architecture describes the way in which an IOT application is deployed, or where each of the components is hosted on the network. There are deployment architecture considerations to make when scaling up an application. Each approach to deployment expansion can be described by the “eggs in a basket” analogy: vertical scale is like one person carrying a bigger basket, horizontal scale is like one person carrying more baskets, and sharding is like more than one person carrying the baskets (see below).

All of these approaches result in the eggs getting from point A to point B (they all satisfy the use case), but the simplest (vertical scale) is not necessarily the best. Sure, it makes sense on paper for one person to carry everything in one big basket, but that doesn’t ensure that all of the eggs arrive intact. Selecting the right deployment architecture is a way to ensure the use cases are satisfied in the best and most efficient ways possible, with the least amount of application downtime or data loss.

Vertical Scale - a.k.a. "one person carrying a bigger basket"
The most common scalability approach is to simply size the IOT server larger, or scale up the server. This might mean the server is given additional CPU cores, faster CPU clock speeds, more memory, faster disks, additional network bandwidth or improved network cards, and so on. This is a very good idea when the application logic increases in complexity, when more data therefore needs to be held in memory at a time, or when the processing of said data has to occur as quickly as possible. For example, adding additional devices to the fleet increases the size of the “Thing Model” in the process and will require additional heap memory be available to the Foundation server.

However, there are limitations to this approach. Only so many concurrent operations and threads can be performed at once by a single server. Operations trying to read and write to the disks at once can introduce bottlenecks and reduce server performance. Likewise, “one person, one basket” introduces a single-point-of-failure operating risk. If for some reason the server’s performance does degrade or cease altogether, then all of the “eggs” go down with it. Therefore, this approach is important, but usually not sufficient on its own for empowering an enterprise-level deployment.

Horizontal Scale - a.k.a. "one person, with two or more baskets"
As of ThingWorx version 9.0, Foundation servers can be deployed in clusters, meaning more baskets to carry the eggs. More baskets means that if even one of these servers is active, the application remains up in the event of an individual node failure or maintenance. So clustered deployments are those which facilitate High Availability.

Clustered servers save on some resources, but not others. For instance, every server in a cluster will need to have the same amount of memory, enough to store the entire Thing Model. Each of the multiple baskets in our analogy has to have the same type of eggs. One basket can’t have quail eggs if the rest have chicken eggs. So, each server has to have an identical version of the application, and therefore enough memory to store the entire application.

Also keep in mind that not all application business logic can scale horizontally.
Event queues are local to each ThingWorx node, so the events generated within each node are processed locally by that particular node, and not by the entire network (examples are timer- and scheduler-based activities). Likewise, data ingestion done through an extension or other background process, like MQTT, emits events within a node that must therefore be processed by that particular node, since that's where the events are visible.

On the other hand, load distribution that happens external to ThingWorx in either the Connection Server (for AlwaysOn-based data coming from ThingWorx SDKs, EMS, eMessage agents, or Kepware) or REST API calls through a load balancer (i.e. user activity) will be distributed across the cluster, facilitating greater scaling potential in terms of userbase and mashup complexity. Also note that batched data will be processed by the node that received it, but different batches coming through a connection server or load balancer will still be distributed.

Another consideration with clusters pertains to failure modes. While each node in the cluster shares a cache for many things, Stream and Value Stream queues are only stored locally. In the event of a node failure, other nodes will pick up subsequent requests, but any activity already queued on the failed node will be lost. For use cases where each and every data point is critical, it is important to size each node large enough (in other words, to vertically scale each node) such that queue sizes are constantly kept low and the data within them processed as quickly as possible. Ensuring sufficient network and database throughput to handle concurrent writes from the many clustered nodes is key as well.

Once each node has enough resources to handle local queues, the system is highly available with low risk for outages or data loss. However, when multiple use cases become necessary on a single deployment, horizontal scaling may no longer be enough to ensure things run smoothly. If one use case is logic-heavy - something non-time-critical which processes data for later consumption - it can use too many resources and interfere with other, lighter but more time-critical use cases. Clustering alone does not provide the flexibility to prioritize specific operations or use cases over others, but sharding does.

Sharding - a.k.a. "more than one person carrying baskets"
“Sharding” generally refers to breaking up a larger IOT enterprise implementation into smaller ones, each with its own configuration and resources. More server maintenance and administration may be required for each ThingWorx implementation, but the reduction in risk is worth it. If each of the use cases mentioned above has its own implementation, then any unexpected issues with the more complex, analytical logic will not affect the reaction time of operators to time-sensitive matters in the other use case. In other words, “don’t keep all your eggs in one basket”.

The best places to break up an implementation lie along logical boundaries already accepted by the business. Breaking things down in other ways might look nice on paper, but encouraging widespread adoption in those cases tends to be an uphill battle.

In connected products use cases, options for boundaries could be regional, tied more towards business vertical, or centered around different products or models. These options can be especially beneficial when data needs to stay in particular countries or regions due to regulatory requirements.
In connected operations use cases, the most common logical boundaries would be site-based, with smaller IOT implementations serving just a smaller number of related factories in a particular area. Use-case or product-line boundaries can also make sense here, in line with the above comments about keeping production-critical or time-sensitive use cases isolated from interference from business-support and analysis use cases.

Ideally, a shard model will put the IOT implementation “closer” to both the devices communicating with it and the users that interact with the data. This minimizes the amount of data to be sent or received over long distances, reducing the impact of bandwidth and latency on performance. When determining which approach is best, consider that smaller, more focused implementations offer more flexibility, but are harder to manage. Having different versions of the same applications deployed in multiple places can easily become a maintainability nightmare. It’s therefore best not to combine a regional model with a use case model when it comes to determining sharding boundaries. Also consider using deployment automation tools like Solution Central. These enable tracking and managing version-controlled deployments to multiple IOT implementations, whether they are deployed on-premise or in the cloud, giving one central location for all source code.

Another benefit to sharding is the focused investment of server resources in a more targeted way. For instance, if one region is larger than another, it may need more CPU and memory. Or perhaps only part of an application requires High Availability: the time-critical use cases which are best suited to small, clustered deployments. The larger, analysis-centered use cases can then remain non-clustered (but still vertically scaled, of course). Sharding can also make access control simpler, as those who need access to only one region or use case can just be given a user account on that particular shard.

However, certain use cases need data from more than one shard in order to operate, turning the data storage and access control benefits into challenges. Luckily, ThingWorx has an excellent toolkit for overcoming such issues. For one thing, REST API calls are readily available in ThingWorx, allowing each shard to exchange information with the others, as well as with other enterprise data systems, like ERP or Service Ticketing. Sometimes, lower-level data replication strategies are the way to go, say if downsampling or data transfer from one store to another is necessary and built-in database tools can more easily handle the workload. Most of the time, however, REST API calls are used to define the business logic within the application layer, so that actively copying data between shards is unnecessary, using fewer resources to control what information is shared overall.

There are several design approaches for REST API communication between shards, the two most common being the “peer” model and the “layered” model. In a peer model, one shard may call upon another shard using REST whenever it needs more information. In a layered model, there are “front-line” shards which handle most (if not all) of the device communication and time-critical use cases, things which require only the information in one shard to operate. Then there are also “back-line” shards that aggregate data from the many front-line shards, performing any operations that are less time-sensitive and more complex or analytical. A sketch of this kind of shard-to-shard hand-off is shown below.
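Here is a minimal sketch of such a hand-off, written as ThingWorx service code running on a front-line shard. The back-line URL, the appKey value, and the AggregatorThing/IngestDailySummary entity and service names are all hypothetical - substitute your own entities and credentials.

    // Minimal sketch of a "layered" hand-off between shards: a front-line shard
    // pushes a pre-computed daily summary to a back-line shard over REST.
    // URL, appKey, Thing and service names below are hypothetical.
    var params = {
        url: "https://backline.example.com/Thingworx/Things/AggregatorThing/Services/IngestDailySummary",
        headers: { "appKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" },
        content: { siteId: "plant-07", runHours: 18.5, dailyStarts: 42 },
        timeout: 60
    };
    // PostJSON sends "content" as the JSON body and returns the parsed response
    var result = Resources["ContentLoaderFunctions"].PostJSON(params);

Note that only the small, already-engineered summary crosses the shard boundary; the raw telemetry stays on the front-line shard, in keeping with the data engineering guidance above.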
For any of these approaches, it remains important to keep your data archival and purge strategy in mind. It is a best practice in ThingWorx to only retain as much data as is absolutely necessary, purging the rest periodically. If the front-line shards only ever need the last 7 days of raw data for 5 properties, plus the last 52 weeks of min/max/average data, then the ideal approach would be for each shard to compute the min/max/average values and archive the older raw data to a shared “data lake” before purging it. This data lake then serves as the data store for all back-line shard operations.

There is also the option of sharing some infrastructure between ThingWorx instances when using sharding in a deployment, which can create more flexible, scalable architectures, but can also mean that more than one shard is affected when an issue occurs on just one of them. For instance, a common shared infrastructure piece is at the database level: each ThingWorx instance needs its own database, but a single database server instance (such as a PostgreSQL HA cluster) could serve separate database namespaces to multiple ThingWorx instances. This is an attractive option where an existing enterprise-scale database infrastructure with experienced DBAs is already in place.

Similarly, load balancers can often be configured to manage load for multiple servers or URLs. A properly configured load balancer can direct traffic for multiple applications, but it can also create a bottleneck for inbound connections if not properly sized. Load balancers designed for High Availability can also be considered. Apache ZooKeeper is another tool, often deployed once for an entire cluster to monitor the health and availability of individual components, or to vote them in or out of operation if problems are detected. With all of these options, remember that while sharing infrastructure can reduce the overall infrastructure complexity, it increases the chances of issues spreading from one ThingWorx system to the next, at the cost of increased administrative complexity.

Bringing it All Together
Vertical and horizontal scale are both effective ways to add more capacity and availability to software implementations, but there are typically diminishing returns on the investment in additional infrastructure. For example, consider two large, vertically-scaled implementations – one running on a VM with 64 vCPUs and 256 GiB RAM, and another running on a VM with 96 vCPUs and 384 GiB of RAM. While the 96-core server has 1.5 times the compute capacity, in sizing tests with 1 million simulated assets, these two systems tend to fall behind on WebSocket execution at approximately the same point.

In a horizontal scale example, with two nodes each sized the same (64 vCPUs and 256 GiB RAM), one would expect High Availability, where one node picks up the other’s slack in a failover scenario. However, what if that single remaining node can’t handle the entire workload? Should both machines be sized vertically such that either can take on the full load, and if so, what is the cost-benefit of that? It would be less expensive in this case to have a third server.

Optimizing the deployment architecture for a ThingWorx application will therefore usually involve a blended approach. With more than two nodes, High Availability is more readily obtained, as two servers can almost certainly share the load of the third, failed node.
Likewise, some workload aspects do not scale well until multiple additional nodes are added. For instance, spreading out the user load from mashup requests across multiple nodes, to give the singleton more resources for the tasks it alone can perform, doesn’t have much benefit if there are just two nodes.

However, with horizontal scaling alone, the servers may still need to be vertically scaled larger than is ideal in terms of cost. Each one has to hold the entire Thing Model in memory, which means that sometimes, some of the nodes may be oversized for the tasks actually performed there. Sharding allows each node to have a different Thing Model as necessary, based on the boundaries selected, which can mean saving on costs by sizing each server only as large as it really needs to be.

So, a combination of approaches is typically best when it comes to deployment architecture. The key is to break things up as much as possible, but in ways that make sense. Determine where the boundaries of the shards will be such that each machine can be as light and focused as possible, while still not introducing more work in terms of user effort (having to access two systems to get the job done), application development (extra code used to maintain multiple systems or exchange information between them), and system administration (monitoring and maintaining multiple enterprise systems).

Find the right balance for your systems, and you’ll maximize your cost-benefit ratio and get the most out of your ThingWorx application. Happy developing!