
IoT & Connectivity Tips

Today on Ask Kaya, I have a riddle.

I was effective and trendy, but now I can be annoying. I sometimes tend to look out of place and I get in others' space. I look easy to learn, but few have truly mastered my intricacies. Am I the floss dance?

No, I'm not the floss dance. I'm the expression and validator widgets.

It's time to say goodbye to those pesky widgets that were super useful but super annoying. Yep, those widgets that littered your design canvas but were "Invisible at Runtime." I'm talking about expressions, validators, status messages and event routers. In our next release, expression and validator widgets will no longer appear on the canvas at build time.

You may remember our previous post, "Ask the Expert: What are the top three features in ThingWorx 8.4 that I might not know about?" In that post, we discussed the concept of Data Helpers, now known as "Functions."

What do Functions do? Functions give you the ability to add custom logic and bindings to improve UI application functionality. Before we describe how to configure them, let's first explain what they are.

An expression widget runs any expression you give it. That piece of logic might be something like result = a + b. While an expression can run any type of logic—not just numbers—you must specify the output base type to be the same as the input base type.

Expressions can also be used to run a different service based on an event. For example, a user may write an expression to run a service if a value changes. They may not care about the value itself but rather just want to know that the value changed.

A validator widget is similar to an expression widget; the key difference is that a validator only outputs a Boolean. When the result is true, you bind to one service; when it is false, another. Unlike an expression widget, the validator widget does not have to have matching input and output datatypes because the output datatype will always be Boolean.

The input for a validator can be anything. You can create a scenario in which a validator widget outputs a status message that reads "the value is within the acceptable range" when the validator returns true or "the value is outside of the acceptable range" when the validator returns false.

Ready for the extra good stuff? We're introducing a new editor for you to create, add and configure expressions and validators.

How can I use Functions? Let's walk through an example using the following steps.

Create a new Mashup. You'll see a new tab called "Functions," which, by default, appears in the bottom right panel. Click the "+" arrow in the top right of the "Functions" panel. Choose "expression." Use the new Functions editor to write your expression. In this example, we'll say that result = a + b;

New Functions Editor

We'll then set the default values for a and b to be 2 and 3, respectively, to output a result of 5.

Expressions are just as powerful as they were before, but they no longer take up space on your mashup during design time, and they can now be configured in our brand new editor! (To spread the joy even more, the same holds true for validator widgets.)

Reach out with any questions or thoughts below!

Stay connected and keep floss dancing,
Kaya
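To make the distinction concrete, here is a minimal sketch of the kind of logic you might put in an expression Function versus a validator Function. The input names (a, b, value, lowerLimit, upperLimit) and the sample values are illustrative only, not taken from the product:

```javascript
// Two separate Function bodies, shown side by side for comparison.
// Sample inputs; in ThingWorx these would come from the Function's input parameters.
var a = 2, b = 3;
var value = 75, lowerLimit = 32, upperLimit = 90;

// Expression Function: runs any logic you give it; you choose the output base type.
var result = a + b; // 5 with the defaults above

// Validator Function: same idea, but the output is always a Boolean.
// Bind the true outcome to one service (e.g. "the value is within the acceptable range")
// and the false outcome to another ("the value is outside of the acceptable range").
result = (value >= lowerLimit) && (value <= upperLimit);
```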
Get Started with ThingWorx for IoT Guide Part 3

Step 7: Create Alerts and Subscriptions

An Event is a custom-defined message published by a Thing, usually when the value of a Property changes. A Subscription listens for a specific Event, then executes JavaScript code. In this step, you will create an Alert, which is a quick way to define both an Event and the logic for when the Event is published.

Create Alert

Create an Alert that will be sent when the temperature property falls below 32 degrees. Click Thing Shapes under the Modeling tab in Composer, then open the ThermostatShape Thing Shape from the list. Click the Properties and Alerts tab. Click the temperature property. Click the green Edit button if not already in edit mode, then click the + in the Alerts column. Choose Below from the Alert Type drop-down list. Type freezeWarning in the Name field. Enter 32 in the Limit field. Keep all other default settings in place. NOTE: This will cause the Alert to be sent when the temperature property is at or below 32. 8. Click the ✓ button above the new alert panel. 9. Click Save.

Create Subscription

Create a Subscription to this event that uses JavaScript to record an entry in the error log and update a status message. Open the MyHouse Thing, then click the Subscriptions tab. Click Edit if not already in edit mode, then click + Add. Type freezeWarningSubscription in the Name field. After clicking the Inputs tab, click the Event drop-down list, then choose Alert. In the Property field drop-down, choose temperature. Click the Subscription Info tab, then check the Enabled checkbox.

Create Subscription Code

Follow the steps below to create code that sets the message property value and writes a Warning message to the ThingWorx log. Enter the following JavaScript in the Script text box to the right to set the message property.

me.message = "Warning: Below Freezing";

2. Click the Snippets tab. NOTE: Snippets provide many built-in code samples and functions you can use. 3. Click inside the Script text box and hit the Enter key to place the cursor on a new line. 4. Type warn into the snippets filter text box or scroll down to locate the warn Snippet. 5. Click All, then click the arrow next to warn, and JavaScript code will be added to the script window. 6. Add a warning message in between the quotation marks.

logger.warn("The freezeWarning subscription was triggered");

7. Click Done. 8. Click Save.

Step 8: Create Application UI

With ThingWorx you can create customized web applications that display and interact with data from multiple sources. These web applications are called Mashups and are created using the Mashup Builder. The Mashup Builder is where you create your web application by dragging and dropping Widgets such as grids, charts, maps, and buttons onto a canvas. All of the user interface elements in your application are Widgets. We will build a web application with three Widgets: an interactive map showing your house's location, a gauge displaying the current value of the watts property, and a graph showing the temperature property value trend over time.

Build Mashup

Start on the Browse (folder icon) tab of ThingWorx Composer. Select Mashups in the left-hand navigation, then click + New to create a new Mashup. For Mashup Type, select Responsive. Click OK.
Enter widgetMashup in the Name text field. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject. Click Save. Select the Design tab to display Mashup Builder.

Organize UI

On the upper left side of the design workspace, in the Widget panel, be sure the Layout tab is selected, then click Add Bottom to split your UI into two halves. Click in the bottom half to be sure it is selected before clicking Add Left. Click anywhere inside the lower left container, then scroll down in the Layout panel to select Fixed Size. Enter 200 in the Width text box that appears, then press the Tab key to record your entry. Click Save.

Step 9: Add Widgets

Click the Widgets tab on the top left of the Widget panel, then scroll down until you see the Gauge Widget. Drag the Gauge widget onto the lower left area of the canvas on the right. This Widget will show the simulated watts in use. Select the Gauge object on the canvas, and the bottom left side of the screen will show the Widget properties. Select Bindable from the Category drop-down, enter Watts for the Legend property value, and then press Tab. Click and drag the Google Map Widget onto the top area of the canvas. NOTE: The Google Map Widget has been provisioned on PTC Cloud hosted trial servers. If it is not available, download and install the Google Map Extension using the step-by-step guide for using Google Maps with ThingWorx. Click and drag the Line Chart Widget onto the lower right area of the canvas. Click Save.
Connect Kepware Server to ThingWorx Foundation Guide Part 1

Overview

This guide has step-by-step instructions for connecting ThingWorx Kepware Server to ThingWorx Foundation. This guide will demonstrate how easily industrial equipment can be connected to ThingWorx Foundation without installing any software on production equipment. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete both parts of this guide is 30 minutes.

Step 1: Learning Path Overview

This guide explains the steps to connect ThingWorx Kepware Server with ThingWorx Foundation and is part of the Connect and Monitor Industrial Plant Equipment Learning Path. You can use this guide independently from the full Learning Path. If you want to learn to connect ThingWorx Kepware Server to ThingWorx Foundation, this guide will be useful to you. When used as part of the Industrial Plant Learning Path, you should have already installed ThingWorx Kepware Server and created an Application Key. In this guide, we will send information from ThingWorx Kepware Server into ThingWorx Foundation. Other guides in this learning path will use Foundation's Mashup Builder to construct a website dashboard that displays information from ThingWorx Kepware Server. We hope you enjoy this Learning Path.

Step 2: Create Gateway

To make a connection between ThingWorx Kepware Server and Foundation Server, you must first create a Thing. WARNING: To avoid a timeout error, create a Thing in ThingWorx Foundation BEFORE attempting to make the connection in ThingWorx Kepware Server. In ThingWorx Foundation Composer, click Browse. On the left, click Modeling -> Things. Click + NEW. In the Name field, enter IndConn_Server, including matching capitalization. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject. In the Description field, enter an appropriate description, such as Industrial Gateway Thing to connect to ThingWorx Kepware Server. In the Base Thing Template field, enter indus, then select the IndustrialGateway Thing Template from the sorted list. Click Save.

Step 3: Connect to Foundation

Now that you've created an IndustrialGateway Thing and an Application Key, you can configure ThingWorx Kepware Server to connect to ThingWorx Foundation. Return to the ThingWorx Kepware Server Windows application. Right-click Project. Select Properties…. 4. In the Property Editor pop-up, click ThingWorx. 5. In the Enable field, select Yes from the drop-down. 6. In the Host field, enter the URL or IP address of your ThingWorx Foundation server. Do not enter http://. 7. Enter the Port number. If you are using the "hosted" Developer Portal trial, enter 443. 8. In the Application Key field, copy and paste the Application Key you just created. 9. In the Trust self-signed certificates field, select Yes from the drop-down. 10. In the Trust all certificates field, select Yes from the drop-down. 11. In the Disable encryption field, select No from the drop-down if you are using a secure port. Select Yes if you are using an http port. 12. Type IndConn_Server in the Thing name field, including matching capitalization. 13. If you are connecting with a remote instance of ThingWorx Foundation and you expect any breaks or latency in your connection, enable Store and Forward. 14. Click Apply in the pop-up. 15. Click OK. In the ThingWorx Kepware Server Event window at the bottom, you should see a message indicating Connected to ThingWorx.
NOTE: If you do not see the "Connected" message, repeat the steps above, ensuring that all information is correct. In particular, check the Host, Port, and Thing name fields for errors.

Step 4: Bind Industrial Tag

Now that you've established a connection, you can use ThingWorx Foundation to inspect all available information on ThingWorx Kepware Server. ThingWorx Kepware Server includes some information by default to assist you with verifying a valid connection with ThingWorx Foundation.

Create New Thing

Return to ThingWorx Foundation. Click Browse. Click Modeling -> Industrial Connections. Click IndConn_Server. At the top, click Discover. The Discover option is exclusive to Things inheriting the IndustrialGateway Thing Template and displays information coming from ThingWorx Kepware Server. Expand Simulation Examples. Click Functions. On the right, you'll see several pre-defined Tags to assist with connectivity testing. Click the checkbox next to Random3. Click Bind to New Entity. In the Choose Template pop-up, select RemoteThing and click OK.

Finalize New RemoteThing

You'll now be in an interface to create a new Thing with a predefined Property based on the ThingWorx Kepware Server Tag. Type IndConn_Tag1 in the Name field. If Project is not already set, click the + in the Project text box and select the PTCDefaultProject. In the Description field, enter an appropriate description, such as Thing with a property fed from a Kepware Server Tag. The Base Thing Template has been automatically set to RemoteThing. The Implemented Shapes field has been automatically set to IndustrialThingShape. 4. Click Save.

Test Connection

The IndConn_Tag1 Thing you created now has a Property with a value that will change with each update from ThingWorx Kepware Server. The Tag we utilized is a 'ramp', and therefore the value will increase at regular intervals. At the top, click Properties and Alerts. Under Inherited Properties, you will see entries for both RemoteThing and IndustrialThingShape. The Property isConnected is checked, indicating a connection from Foundation to ThingWorx Kepware Server. The Property IndustrialThing has been automatically set to IndConn_Server. Notice the predefined Property named Simulation_Examples_Functions_Random3. Click Refresh repeatedly. You'll see the value increase with each Refresh. This represents data being simulated in ThingWorx Kepware Server.

Click here to view Part 2 of this guide.
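As a side note, if you would rather check the value from a script than click Refresh, a short ThingWorx JavaScript snippet along the lines of the following (a sketch; it assumes the Thing and property names created above) can be run from a Service on any Thing:

```javascript
// Read the current value of the Tag-backed property on IndConn_Tag1 (sketch).
var currentValue = Things["IndConn_Tag1"].Simulation_Examples_Functions_Random3;
logger.info("Current value of the bound Tag: " + currentValue);
```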
When to Include InfluxDB in the ThingWorx Development Lifecycle (this article is also available for download as a PDF attached)

The Short Answer

InfluxDB is a time series database designed specifically for data ingestion. Historically, InfluxDB has been viewed as a high-scale expansion option for ThingWorx: a way to ensure the application works as intended, even when scaled up to the enterprise level. This is certainly one way to view it, because when there are many, many remote things, each with a lot of properties writing to the Platform at short intervals, then InfluxDB is a sure choice. However, what about in smaller applications? Is there still a benefit to using an optimized data ingestion tool in any case? The short answer is: yes, there is!

Using InfluxDB for optimized data ingestion is a good idea even in smaller-sized applications, especially if there are plans to scale the application up in the future. It is far better to design the application around InfluxDB from the start than to adjust the data model of the application later on when an optimized data ingestion process is required. PostgreSQL and InfluxDB simply handle the storage of data in different ways, with the former functioning better with many Value Streams, and the latter with fewer Value Streams. Switching the way data is retained and referenced later, when the application is already on the larger side, causes delays in growing the application and adding more devices. Likewise, if the Platform reaches its ingestion limits in a production environment, there can be costly downtime and data loss while a proper solution (which likely involves reworking the application to work optimally with InfluxDB) is implemented.

Don't think that InfluxDB is for expansion only; it is an optimized ingestion database that has benefits at every level of the application development lifecycle. From end to end, InfluxDB can ensure reliable data ingestion, reduced risk of data loss, and reduced memory and CPU used by the deployment overall. Preliminary sizing and benchmark data are provided in this article to explain these recommendations. Consider how ThingWorx is ingesting data now and how much CPU and other resources are used just for acquiring the data; InfluxDB may well be a benefit to application performance.

The Long Answer

In order to uncover just how beneficial InfluxDB can be in any size application, the IoT Enterprise Deployment Center has run some simulations with small and medium sized applications. The use case in the simulation is simple, with user requests coming from a collection of basic mashups and data ingestion coming from various numbers of things, each with a collection of "fast" and "slow" properties which update at different rates. This synthetic load of data does not include a more complete application scenario, so the memory and CPU usage shown here should not be used as sizing recommendations. For those types of recommendations, stay tuned for the soon-to-release ThingWorx 9.0 Sizing Guide (or check out the current 8.5 Sizing Guide).

Comparing Runs

When determining the health of the ThingWorx Platform, there are several categories to inspect: Value Stream Queue Rate and Queue Size, HTTP Requests, and the overall Memory and CPU use for each server. Using Grafana to store the metrics results in charts like those below, which can easily be compared and contrasted and used to evaluate which hardware configuration results in the best performance.
The size of the numbers on the vertical axis indicates the total resources used for that metric, while the slope or trend of each chart indicates bottlenecks and inadequate resource allocation for the use case.

In this case, all darker charts represent data from PostgreSQL ONLY configurations, while the lighter charts represent the InfluxDB instances. Because this is not a sizing guide, whether each of these charts comes from the small or medium run is unimportant as long as they match (for valid comparisons between with Influx and without it). The smaller run had something like 20k Things, and the larger closer to 60k, both with 275 total Platform users (25 Admins) and 3 mashups, which were each called at various refresh rates over the course of the 1-hour testing period. Note that in the PostgreSQL ONLY instances, there were more Thing Templates and corresponding Value Streams. This change is necessary between runs because only with fewer Value Streams does InfluxDB begin to demonstrate notable improvements.

The most important thing to note is that the lighter charts clearly demonstrate better performance for both size runs. Each section below will break down what the improvement looks like in the charts to show how to use Grafana to verify the best performance.

Value Stream Queue

The vertical axis on the Value Stream Queue Rate chart shows how many total writes per second (WPS) the Platform can handle. The average is 10 WPS higher using InfluxDB in both scenarios, and InfluxDB is also much more stable, meaning that the writes happen more reliably. The Value Stream Queue Size chart demonstrates how well the writes within the queue are processed. Both of these are necessary to determine the health of data ingestion.

If the queue size were to increase and trend upward in the lighter Queue Size chart, then that would mean the Platform couldn't handle the higher ingestion rate. However, since the Queue Size is stable and close to 0 the entire time, it is clear that the Platform is capable of clearing out the Value Stream Queue immediately and reliably throughout the entire test.

FIGURE 1 – THESE REPRESENT THE DATA GETTING STORED INTO THE DATA PROVIDER. NOTE: THE FORMER IS MUCH LOWER THAN THE LATTER.

FIGURE 2 – NOTE THE DATA LOSS IN THE NON-INFLUX INSTANCE (THE QUEUE IN GREEN REACHES THE MAX IN YELLOW). THE INFLUX INSTANCE HAS LESS TROUBLE CLEARING OUT THE QUEUE, AS DEMONSTRATED BY THE CONSISTENTLY LOW QUEUE SIZE.

HTTP Requests

Taking the strain of ingestion off of the Platform's primary database frees its resources up for other activities. This in turn improves the performance and reliability of the Platform to respond to HTTP requests, those which in a typical application are used to aggregate data into smaller data stores (depending on the use case) and which render the mashups for the end users. The business logic and mashups can be more complex when there is one database designated for ingestion (InfluxDB) and one for everything else (PostgreSQL).

FIGURE 3 – THE DARKER CHART SHOWS A LOT OF CHOPPINESS, MEANING THAT WHILE THE PLATFORM WAS RESPONDING THE WHOLE TIME, IT WAS NOT DOING SO RELIABLY. THE SMOOTHER SECOND CHART SHOWS HOW MUCH EASIER THE PLATFORM CAN HANDLE THESE REQUESTS WHEN THE LOAD IS DISTRIBUTED INTELLIGENTLY ACROSS MULTIPLE SERVERS, EACH OPTIMIZED FOR THE TYPE OF DATA THEY RECEIVE. THE "STAIRCASE" SHAPE OCCURS BECAUSE THE SIMULATOR INCREASES THE WORK LOAD EVERY 10 MINUTES UNTIL IT BREAKS.
Likewise, the nature of Postgres lends itself well to this differentiation, given that there are many more database tables required for supporting the HTTP requests, something Postgres does well. That leaves Influx to handle the time-series data and ingestion, and those are the primary strengths of that software as well. So, splitting the load across multiple servers in this way results in smaller server sizes overall, each of which is streamlined and optimized to handle exactly what it is given by the Platform.

Note that in both of these charts, there are no bad requests, so both would seem to be successful runs. However, as later charts will demonstrate more clearly, there is a catastrophic failure when the load is increased around 12:30p. The simulation ends before the server begins to show any real symptoms of the issue, and that is why there are no bad requests. The maximum Operations Per Second (OPS) in the Hardware Specifications and Performance section is taken from before the failure begins.

Clearly the InfluxDB instance has better performance given that the average Operations Per Second (OPS) is substantially higher, nearly 4 times what is seen in the PostgreSQL ONLY instance. Obviously how well the Platform manages the business logic and mashup loading will depend on a lot of factors. In this test scenario, the OPS was increased by increasing the mashup refresh rate on the InfluxDB instances (which could handle over double the operations). Likewise, the number of Stream writes to the PostgreSQL database could be double what it was when PostgreSQL was the only database. Therefore, configuring InfluxDB for the data ingestion and leaving Postgres for the rest of the application certainly makes the load much easier on the Platform, and the same would be true even in a much more complex scenario.

Memory and CPU

The important thing here is to keep the memory use low enough that any spikes in usage won't cause a server malfunction. CPU usage should stay at or below around 75%, and memory should never exceed around 80% of the total allocated to the server. The sizing guides can help determine what this allocation of memory needs to be.

Of note in these charts is the slight, upward slope of the CPU usage in the darker chart, indicating the start of a catastrophic failure, and the difference in the total memory needed for the ThingWorx Platform and Postgres servers when Influx is used or not. As is apparent, the servers use much less memory when the database load is split up intelligently across multiple servers.

FIGURE 4 – THE THINGWORX CPU IS ABOUT THE SAME HERE AS IN THE INFLUXDB CONFIGURATION BELOW, BUT LOOK AT HOW MUCH MORE MEMORY BOTH THE PLATFORM AND THE POSTGRES DATABASE NEED ALLOCATED TO THEM IN THIS CONFIGURATION (64 GB APIECE). ALSO NOTE THE JUMP IN CPU AND MEMORY USAGE AFTER 12:30P. THIS IS REFERENCED IN THE PREVIOUS SECTION, AND THE SLOPE UPWARD OF THE USAGE AFTER THAT POINT INDICATES THE START OF A CATASTROPHIC FAILURE. THE TEST ENDS TOO SOON TO SEE ANY SYMPTOMS OF FAILURE, BUT IT IS A SURE THING AFTER THE INCREASE IN LOAD AROUND 12:30P.

FIGURE 5 – INFLUX NEEDS AN EXTRA SERVER, BUT THE SIZE OF THE INFLUX AND POSTGRES SERVERS TOGETHER IS LESS THAN HALF THE SIZE OF THAT REQUIRED FOR THE SINGLE POSTGRES DATABASE IN THE POSTGRES ONLY CONFIGURATION (8 GB). THINGWORX IS SMALLER TOO (32 GB).

Hardware Specifications and Performance

These are the exact specifications for each simulated instance, broken down by size and whether InfluxDB is configured or not.
Note that some of the hardware specifications may be more than is necessary, depending on the real-world use case. As stated previously, this document is not a sizing guide (use the official ThingWorx Sizing Guide). Note that the maximum numbers of WPS and OPS are shown here. The maximums are higher in the InfluxDB scenarios, meaning that even with smaller-sized servers, the InfluxDB configurations can handle much greater loads.

Summary

In conclusion, if InfluxDB may at some point be needed in the lifecycle of an application, because the expected number of things or the number of properties on each thing is large enough that it will otherwise max out the limitations of the Platform, then InfluxDB should be used from the very start. There are benefits to using InfluxDB for data ingestion at every size, from performance to reliability, and of course the obviously improved scalability as well.

Reworking the application for use with InfluxDB later on can be costly and cause delays. This is why the benefits and costs associated with an InfluxDB-centric hardware configuration should be considered from the start. More servers are required for InfluxDB, but each of these servers can be sized smaller (depending on the use case), and all of this will affect the overall cost of hosting the ThingWorx application. The benefits of InfluxDB are especially pronounced when used in conjunction with clusters, which will be demonstrated fully in the 9.0 Sizing Guide (soon to be released). If InfluxDB is used to interface with the clusters, then there are even more resources to spare for user requests.

It is considered ThingWorx best practice for high-ingestion customers to make use of InfluxDB in applications of any size. Note, though, that this will mean the number of Value Streams per Influx database will need to be limited to single digits. We hope this helps, and from everyone here at the EDC, happy developing!
This document contains information that should be reviewed before installing or upgrading to the latest version of ThingWorx for both new and existing customers. Note that many of the links in this document require that you have created and validated an account on the PTC website.

Account Creation

For users who do not yet have an active maintenance agreement, an account can be created by accessing the Basic Account Creation page. With a basic account, you will have access to the ThingWorx Community, product documentation, and support articles. Also, a basic-level account will grant access to our new Developer eSupport Portal, which is a great resource for users of all levels to become more proficient with ThingWorx and the emerging world of IoT in general. For more details on the new Developer eSupport Portal, please refer to our Getting Started with the New eSupport Portal guide. For users having an active maintenance agreement with PTC that have not yet created an account on the PTC website, a new customer account can be created by accessing the New Customer Account page. With a customer-level account, you will have access to all basic account links listed above, plus access to download all licensed PTC products. You will also have access to our dedicated application support team. In order to create a customer account, you will be asked to provide your Customer Number, and one of your Service Contract Number (SCN), Sales Order Number (SON), or Site Number. This information will have been included in documentation sent to your company by your PTC Sales Representative. If you have any questions or concerns in relation to the account creation process, please contact us using the Web Account Case Logger.

Document Structure

Throughout this document, references will be made to specific documentation related to the ThingWorx Platform. A listing of links to this supporting documentation will be provided at the end of this article for all supported releases of the ThingWorx Platform. Please reference this link to review documentation specific to the release version being installed in your local environment.

Overview of Changes

For a complete listing of new features and enhancements introduced in the latest version of the ThingWorx Platform, please refer to the Release Notes documentation, which is included within the platform downloadable zip file. In addition to providing brief descriptions of each enhancement, this document indicates where you can find more comprehensive coverage where applicable. Release notes are also available online for review. For a complete listing of release notes for all supported releases of ThingWorx, please refer to the link at the end of this document.

Required Software

The following components are required for a complete installation of the ThingWorx Platform:
Oracle JDK – Oracle JDK Download Page
Apache Tomcat – Apache Tomcat Download Page
PostgreSQL – PostgreSQL Download Page

Full details on installing and configuring the above components are provided in the ThingWorx Installation Guide for all supported OS environments. Please also take note of any version requirements for the above components based on the version of the ThingWorx Platform being installed. Version requirements are noted in the ThingWorx System Requirements guide.

ThingWorx Installation Files

The desired version of the ThingWorx platform can be obtained through the PTC eSupport Site (reminder: maintenance agreement required).
The file is provided in a zip format whose name starts with "MED-XXXXXX-CD…", where each X is a digit. Once the directory is unarchived, the Tomcat-deployable file is Thingworx.war. Release notes are included within the zip archive.

Installation and Reference Documentation

In addition to the listing at the end of this guide, all installation and reference documentation is also available online from the PTC Reference Documents page. To locate documentation within this link related to the latest release of ThingWorx, follow these steps. Set the Product field to ThingWorx. Set the Reported Release appropriately. Narrow the search results by setting the Document Type field if desired, or leave this set to All Document Types. Leave the User Role field set to the default All User Roles selection. It is strongly recommended to review the following documents at minimum: ThingWorx Platform System Requirements, Installing ThingWorx (for new installations), and Upgrading ThingWorx (for customers migrating from an older release).

Upgrade Planning for Returning Customers

For existing customers who are upgrading from a prior release of the ThingWorx Platform, PTC offers an Upgrade Planning Guide that can help in preparing for the upgrade process. This guide provides a checklist of activities that are critical to performing a successful upgrade to the latest version of ThingWorx. PTC recommends reviewing this guide for assistance in planning for the upgrade process.

Additional References and Troubleshooting

The following reference and troubleshooting material involves the installation of the platform:
Frequently Seen Errors upon launching the ThingWorx application – Link
"HTTP Status 401 - Could not handle request" error when attempting to access a new PostgreSQL-based installation of ThingWorx – Link

Contacting Technical Support

Should you have any questions about the installation process, or if you encounter any issues during the process, our qualified team of technical support engineers is available to assist you. With an active maintenance agreement for ThingWorx, you will have access to web-based technical assistance as well as live phone-based support. Contact details vary depending on your region. For comprehensive information on how to obtain technical support, please refer to our online Customer Support Guide.

Links to Documentation for Supported Releases of ThingWorx

For links to supporting documentation for current and legacy releases of ThingWorx, please refer to the following article: Where to Find ThingWorx Documentation
April 22, 2025

Hello ThingWorx community members! We are excited to announce the preview release of ThingWorx 10.0, the latest evolution of our IIoT platform for all your Industrial Data Management needs. This release focuses on delivering a powerful, secure, and intelligent foundation for industrial innovation, empowering businesses to achieve more with their IoT solutions.

Powerful & Secure: A New Standard in IoT Platforms

ThingWorx 10.0 sets a new benchmark for scalability and security in IoT with features like IoT Streams to enhance enterprise industrial data access and reliability, caching improvements to increase server scale and response times, and security updates for TLS, Tomcat, Java, and others to ensure top-tier performance and protection. These advancements make ThingWorx 10.0 the most mature and secure platform yet, giving businesses the confidence to scale their IoT deployments while safeguarding their data.

Industrial Solutions: Ready to Drive Performance

ThingWorx 10.0 enhances our industrial solutions, advancing connected worker, manufacturing efficiency, and quality use cases. Windchill Navigate View Work Instructions, a powerful, feature-rich app, launches with ThingWorx 10.0. Built on ThingWorx and integrated with Windchill PLM, this task-based solution delivers real-time work instructions, enhancing enterprise collaboration and boosting worker productivity with seamless, intuitive guidance. Alongside Connected Work Cell (CWC), both applications strengthen the connected worker experience in manufacturing with real-time instructions and data. Additionally, enhancements to Real-Time Production Performance Monitoring (RTPPM) and Digital Performance Management (DPM) improve manufacturing performance by optimizing workflows, enhancing service quality, and providing operators with clear, data-driven insights.

Data Insights: Unlocking Intelligence from Edge to Cloud

ThingWorx 10.0 empowers businesses to harness data-driven intelligence like never before. With advanced analytics and integration with third-party generative AI tools, the platform enables seamless management of industrial data from edge to cloud. Unlock actionable insights and make smarter decisions to stay ahead in a competitive landscape.

Get started today

The preview release of ThingWorx 10.0 is now available for evaluation. Discover how a powerful, secure, and intelligent IoT platform can transform your industrial operations. Please reach out to your account reps or customer success team to get preview access for this release. Alternatively, drop a comment on this post or submit a Tech Support ticket, and we'll get in touch with you to discuss onboarding to the ThingWorx 10.0 Private Preview Program.

Lastly, watch this space as we roll out additional details about the ThingWorx 10.0 release and other announcements!

Cheers!
Ayush Tiwari
Director Product Management
ThingWorx, a PTC Technology.
What Is "Beyond the Pilot" and Why Does It Matter?

Beyond the Pilot is a reference architecture series that highlights how real-world manufacturing challenges are addressed at enterprise scale across engineering, manufacturing, and service lines of business.

These stories reflect solutions built using — but not limited to — industrial IoT (IIoT), AI, and cloud technologies. Each entry captures how a specific use case — ranging from quality optimization to predictive maintenance — was solved using a combination of proven tools and repeatable design patterns.

Our goal is simple: to provide a blueprint for repeatable success, enabling technical leaders, architects, and operations teams to move from isolated wins to sustained value — securely, scalably, and strategically. Because technology doesn't create impact in the lab — it creates impact when it's scaled across the enterprise. Because innovation is only as powerful as its ability to sustain value across engineering, manufacturing, and service. Because manufacturers everywhere face the same question: How do we take what works in a pilot and make it work at scale? That's exactly what this series will explore.

What You'll Find in This Series

Each blog will highlight:
The Context – The real-world problem we set out to solve
The Architecture – The design patterns and integrations that made it possible
The Execution – How we turned a concept into a production-ready system
The Outcome – Tangible business results, from efficiency gains to cost savings
The Lessons Learned – Best practices and insights you can reuse in your own journey

What's Next

Upcoming posts will dive into:
Manufacturing efficiency gains through connected operations
Secure, cloud-native deployments that balance scale with compliance
How digital twins are reshaping design, operations, and customer experience
And much more…

Each story will link back here as the anchor introduction to this series. So stay tuned — and join us as we explore what it really takes to go beyond the pilot. Because that's where innovation becomes transformation.

Vineet Khokhar
Principal Product Manager, IoT Security

Disclaimer: These reference architectures are based on real-world implementations; however, specific customer details and proprietary information will be omitted or generalized to maintain confidentiality.

Stay tuned for more updates, and as always, in case of issues, feel free to reach out to <support.ptc.com>
Get Started with ThingWorx for IoT Guide Part 4

Step 10: Display Data

Now that you have configured the visual part of your application, you need to bind the Widgets in your Mashup to a data source, and enable your application to display data from your connected devices.

Add Services to Mashup

Click the Data tab in the top-right section of the Mashup Builder. Click on the green + symbol in the Data tab. Type MyHouse in the Entity textbox. Click MyHouse. In the Filter textbox below Services, type GetPropertyValues. Click the arrow to the right of the GetPropertyValues service to add it. Select the checkbox under Execute on Load. NOTE: If you check the Execute on Load option, the service will execute when the Mashup starts. 8. In the Filter textbox under Services, type QueryProperty. 9. Add the QueryPropertyHistory service by clicking the arrow to the right of the service name. 10. Click the checkbox under Execute on Load. 11. Click Done. 12. Click Save.

Bind Data to Widgets

We will now connect the Services we added to the Widgets in the Mashup that will display their data.

Gauge

Configure the Gauge to display the current power value. Expand the GetPropertyValues Service as well as the Returned Data and All Data sections. Drag and drop the watts property onto the Gauge Widget. When the Select Binding Target dialogue box appears, select # Data.

Map

Configure Google Maps to display the location of the home. Expand the GetPropertyValues service as well as the Returned Data section. Drag and drop All Data onto the map widget. When the Select Binding Target dialogue box appears, select Data. Click on the Google Map Widget on the canvas to display properties that can be configured in the lower left panel. Set the LocationField property in the lower left panel by selecting building_lat_lng from the drop-down menu.

Chart

Configure the Chart to display property values changing over time. Expand the QueryPropertyHistory Service as well as the Returned Data section. Drag and drop All Data onto the Line Chart Widget. When the Select Binding Target dialogue box appears, select Data. In the Property panel in the lower left, select All from the Category drop-down. Enter series in the Filter Properties text box, then enter 1 in NumberOfSeries. Enter field in the Filter Properties text box, then click XAxisField. Select the timestamp property value from the XAxisField drop-down. Select temperature from the DataField1 drop-down.

Verify Data Bindings

You can see the configuration of data sources bound to widgets displayed in the Connections pane. Click on GetPropertyValues in the data source panel, then check the diagram at the bottom of the screen to confirm a data source is bound to the Gauge and Map widgets. Click on the QueryPropertyHistory data source and confirm that the diagram shows the Chart is bound to it. Click Save.

Step 11: Simulate a Data Source

At this point, you have created a Value Stream to store changing property value data and applied it to the BuildingTemplate. This guide does not cover connecting edge devices; another guide covers choosing a connectivity method. We will import a pre-made Thing that creates simulated data to represent types of information from a connected home. The imported Thing uses JavaScript code saved in a Subscription that updates the power and temperature properties of the MyHouse Thing every time it is triggered by its timer Event.

Import Data Simulation Entities

Download the attached sample: Things_House_data_simulator.xml.
In Composer, click the Import/Export icon at the lower-left of the page. Click Import. Leave all default values and click Browse to select the Things_House_data_simulator.xml file that you just downloaded. Click Open, then Import, and once you see the success message, click Close.

Explore Imported Entities

Navigate to the House_data_simulator Thing by using the search bar at the top of the screen. Click the Subscriptions tab. Click Event.Timer under Name. Select the Subscription Info tab. NOTE: Every 30 seconds, the timer event will trigger this subscription and the JavaScript code in the Script panel will run. The running script updates the temperature and watts properties of the MyHouse Thing using logic based on both the temperature property from MyHouse and the isACrunning property of the simulator itself. 5. Expand the Subscription Info menu. The simulator will send data when the Enabled checkbox is checked. 6. Click Done, then Save to save any changes.

Verify Data Simulation

Open the MyHouse Thing and click the Properties and Alerts tab. Click the Refresh button above where the current property values are shown. Notice that the temperature property value changes every 30 seconds when the triggered service runs. The watts property value is 100 when the temperature exceeds 72 to simulate an air conditioner turning on.

Step 12: Test Application

Browse to your Mashup and click View Mashup to launch the application. NOTE: You may need to enable pop-ups in your browser to see the Mashup. 2. Confirm that data is being displayed in each of the sections.

Test Alert

Open the MyHouse Thing. Click the Properties and Alerts tab. Find the temperature Property and click the pencil icon in the Value column. Enter a temperature value of 29 in the upper right panel. Click the check mark icon to save the value. This will trigger the freezeWarning alert. Click Refresh to see the value of the message property automatically set. 7. Click the Monitoring icon on the left, then click ScriptLog to see your message written to the script log.

Click here to view Part 5 of this guide.
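The actual script ships inside the imported simulator, but the behavior described above can be sketched roughly like this (the threshold of 72 and the watts value of 100 come from this guide; the step sizes and the idle wattage are illustrative assumptions):

```javascript
// Rough sketch of the House_data_simulator Timer subscription (not the shipped code).
var house = Things["MyHouse"];

if (me.isACrunning) {
    // Simulated air conditioner running: temperature drifts down, power draw is high.
    house.temperature = house.temperature - 1; // illustrative step size
    house.watts = 100;
} else {
    // Air conditioner off: temperature drifts up, power draw is low.
    house.temperature = house.temperature + 1; // illustrative step size
    house.watts = 10;                          // illustrative idle load
}

// Turn the simulated air conditioner on whenever the house rises above 72 degrees.
me.isACrunning = (house.temperature > 72);
```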
Build a remote monitoring application with our developer toolkit for real-time insight into a simulated SMT assembly line.

Guide Concept

This project will introduce methods for creating your IoT application, with the ability to analyze real-time information as the goal. Following the steps in this guide, you will create an IoT application with the ThingWorx Java SDK that is based on the functionality of an SMT assembly line. We will teach you how to use the ThingWorx Java SDK, ThingWorx Composer, and the ThingWorx Mashup Builder to connect and build a fully functional IoT application running numerous queues and "moving parts".

You'll learn how to

Use ThingWorx Composer to build an application that uses simulated data
Track diagnostics and performance in real-time

NOTE: The estimated time to complete this guide is 60 minutes

Step 1: Completed Example

Download the completed files for this tutorial attached here: ManagementApplication.zip.

In this tutorial, we walk through a real-world scenario for a Raspberry Pi assembly line. The ManagementApplication.zip file provided to you contains a completed example of an SMT application. Utilize this file to see a finished example and return to it as a reference if you become stuck creating your own fully fleshed-out application. Keep in mind, this download uses the exact names for Entities used in this tutorial. If you would like to import this example and also create Entities on your own, change the names of the Entities you create. The download contains the following Java classes that support this scenario:

Name - Description
Motherboard - Abstract representation of a Thing inheriting from a MotherboardTemplate
AssemblyLine - Abstract representation of a Thing inheriting from a SMTAssemblyLineTemplate
AssemblyMachine - Abstract representation of a Thing inheriting from an AssemblyMachineTemplate

Once you complete the Java environment setup by installing a Java JDK, import the Entities/ThingWorxEntities.xml file into ThingWorx Composer. This file contains various Data Shapes, Mashups, Value Streams, Things, and Thing Templates necessary to support the application. The more important Entities are as follows:

Feature - Entity Type - Description
RaspberryPi 1 - 6 - Thing - Things that inherit from the motherboard template
SolderPasteAssemblyMachine - Thing - A Thing that inherits from the assembly machine template
PickPlaceAssemblyMachine - Thing - A Thing that inherits from the assembly machine template
ReflowSolderAssemblyMachine - Thing - A Thing that inherits from the assembly machine template
InspectionAssemblyMachine - Thing - A Thing that inherits from the assembly machine template
RaspberryPiSMTAssemblyLine - Thing - A Thing that inherits from the assembly line template
MotherboardTemplate - ThingTemplate - A template used for building motherboard devices
AssemblyMachineTemplate - ThingTemplate - A template used to create the various types of SMT assembly machines
SMTAssemblyLineTemplate - ThingTemplate - A template used to represent the entire assembly line and all devices in it
Advisor - User - User created to be used with the Java SDK examples

NOTE: An Application Key is NOT included in the zip file you downloaded. You will need to create your Application Key and assign it to the Advisor user provided in the ThingWorxEntities.xml file, the Administrator (which is not recommended for production applications), or any user you've created.
If you do not know how to create one or just need a refresher, visit the Create An Application Key guide, then come back to this guide.

Step 2: Run Application

The Java code provided in the download is pre-configured to run and connect to the entities in the ThingWorxEntities.xml file. Open the Executable/Script in a text editor, and edit the script with your host and port.

Operating System - File Name
Mac/Linux - Script.sh
Windows - Script.bat

Update the <HOST> and <PORT> arguments to those of your ThingWorx Composer and update the Application Key argument to the one you have created. Use the examples in the file for assistance. NOTE: If you are using the hosted trial server, follow the HTTPS example and use 443 as the port. After updating the script that pertains to your operating system, double-click or run Script.sh (Linux, Mac) or Script.bat (Windows) to run the Java program. In your browser, proceed to the following URL (replace the host field with your ThingWorx Composer host) in order to see the application work:

<host>/Thingworx/Runtime/index.html#master=AssemblyLineMaster&mashup=RaspberryPiAssemblyLine

You can also open the RaspberryPiAssemblyLine Mashup in the Composer and click View Mashup.

You should be able to see rows of assembly machines with buttons. Click the Start button to start the assembly line. Click the Add Board button to add Raspberry Pi motherboards.

NOTE: The screen will not update and properties cannot be changed until the Java backend starts running. Ensure the connection is made before attempting to start the assembly line.

Functional Breakdown

At runtime, the Mashup executes the following functions:

Mashup Component - Function
1. Assembly Machines - Selecting an assembly machine will provide you with information on the diagnostic status of that assembly machine and access to charts highlighting its performance.
2. Start Button - Start up the assembly line and all assembly machines.
3. Shutdown Button - Stop the assembly line and shut down all assembly machines. Queues will not be purged.
4. Motherboard Add Dropdown - A dropdown that shows the available motherboards that can be added to the assembly line.
5. Add Boards Button - If a MotherboardTemplate Entity is selected in the Motherboard Add dropdown, that Raspberry Pi will be added to the assembly line. If no Motherboard is selected, this will add a new Raspberry Pi Thing to the assembly line.
6. Motherboard Image - Show all motherboards currently inside the assembly line queue of Raspberry Pi.
7. Motherboard Pick Up Dropdown - A dropdown that shows the motherboards in the assembly line that are not in a Complete Stage.
8. Add Pick Up Button - If a MotherboardTemplate entity is selected in the Motherboard Pick Up dropdown, that Raspberry Pi will be removed from the assembly line and no longer be available. This can be done if a Raspberry Pi is slowing down the other queues.
9. Box Image - Show all motherboards currently in the Complete Stage.

Step 3: Services and Java Implementation

JavaScript using ThingWorx Services

To support and run the application quickly, ThingWorx Services are utilized as much as possible. This ensures the speed and quality of the application are maintained while also ensuring code changes can be made quickly.

Opening and Starting Up

Open the RaspberryPiAssemblyLine Mashup by going to the URL provided in the last section. The machines will all be in a shut-down (RED) state.
This is ensured by a call to the Shutdown service within the SMTAssemblyLineTemplate ThingTemplate. This method begins the process of resetting the Motherboards to their default states and the AssemblyMachines to a shutdown state.

Click the Start button to call the StartUp Service. This call will notify the Java code to turn the simulated machines on and begin waiting for any motherboards to be added to the queue.

INFO: The StartUp and Shutdown services call other services, some of which can be overridden. If you would like to make a change to the implementation, make the change in an implementation of the SMTAssemblyLineTemplate ThingTemplate. You can use RaspberryPiSMTAssemblyLine as an example.

New Raspberry Pi Names

The CommonServices Entity provides services that can be reused by other entities easily. The GenerateRandomThingName service is utilized to create a pseudo-random name for a new Motherboard. You can use this service to create names - names may start with "Raspberry," but not necessarily - they are based on how you set the parameters.

Creating and Adding Boards

Select the Add Board button to make a call to the AddBoard service of the SMTAssemblyLineTemplate ThingTemplate. This service will call the CommonServices Thing to create a new name for the Motherboard, then begin the process of creating, enabling, and adding that Motherboard to the simulated devices in the Java code (a sketch of such a service appears at the end of this section).

Pickup Boards

Select the Pickup Board button to make a call to the PickUpMotherboard service of the SMTAssemblyLineTemplate ThingTemplate. This service will remove a Motherboard from the assembly line, update the status to having been picked up, and ensure the simulated devices are updated with this new information.

Queue Processing

Add a Motherboard to the available queue of a machine when the Motherboard is ready to be worked on by that machine. A machine will NOT know information about a Motherboard until that motherboard is ready for that stage of processing.

The Motherboard is then added to the internal queue of the machine based on the size of the internal queue of that machine. Being in the internal queue of a machine does not mean it is being worked on. The Motherboard is ONLY being worked on when the machine has added the Motherboard to its working queue. The size of the working queue is based on the machine's placement heads. You can play with these values to increase or decrease queue performance.

INFO: The heads, speeds, and queue sizes of the machines are created in the RaspberryPiSMTAssemblyLine Thing. To change these configurations, update the AddStartingMachines service with new values or new machines.

Java Implementation using ThingWorx Java SDK

The Java code we created for the Assembly Line scenario creates a connection to the ThingWorx Composer as any ThingWorx SDK utility would. This code is used to allow extended functionality for the application, and mimics the behavior of devices or machines connected to the ThingWorx Composer.

Motherboard Class

The Motherboard class contains several methods to ensure the location of the motherboard is known at all times. It also updates the status level from 0 to 100 as the motherboard is being assembled.

AssemblyMachine Class

The fields in the AssemblyMachine class ensure that the queues handled by the machine are working correctly. When an AssemblyMachine is created, it will load both the available queue and the internal queue if the machine will be the first stage in the assembly line (Soldering).
If not a solder machine, the queues will be empty, as no device is pending its task. If the machine is on, it will continue to work based on the current status of the motherboards in its queue. When a machine is turned on or the current task is complete, the AssemblyMachine will re-evaluate the queues to optimize timing and decrease idle time.

Challenge: Find a way to improve the timing of the queue and reduce the idle time even more. Think of a problem an assembly line might have when machines are waiting on a prior machine to complete a task.

SMTAssemblyLine Class

The SMTAssemblyLine class handles the overall process and controls how motherboards are handled when entering and exiting the assembly line. There are also listeners to start up the assembly machines.

When a board is added to the queue of the assembly line, it will instantly be added to the available queue for a solder machine to begin processing. This is the only machine that will have immediate access to the motherboard. When a board is picked up from the assembly line queue, the status of the board is set to "PICKED UP". That motherboard will be available later for processing by the assembly line.

Click here to view Part 2 of this guide.
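To tie the two halves together, here is a rough sketch of how an AddBoard-style wrapper service on the assembly line Thing might look in ThingWorx JavaScript. The entity names come from this guide, but the parameter names and the final hand-off service are illustrative assumptions; the shipped implementation in SMTAssemblyLineTemplate may differ:

```javascript
// Sketch of an AddBoard-style service (illustrative, not the shipped implementation).
// Generate a pseudo-random name for the new board via the CommonServices Thing.
var newName = Things["CommonServices"].GenerateRandomThingName({ prefix: "Raspberry" });

// Create, enable, and restart the new Motherboard Thing so it can be used immediately.
Resources["EntityServices"].CreateThing({
    name: newName,
    thingTemplateName: "MotherboardTemplate"
});
Things[newName].EnableThing();
Things[newName].RestartThing();

// Hand the new board to the first stage of the line (illustrative hand-off service).
me.AddToAvailableQueue({ motherboardName: newName });
```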
Create a new Thing using the Timer Thing Template. The Timer Thing will fire a Timer Event when the Timer's Update Rate has expired. The event is automatically present and does not need to be added manually.

Configuration

The Timer configuration is quite straightforward. It can be accessed via the Thing's Entity Configuration. Configuration allows for:

Enabling the Timer on Thing startup - whenever the Thing is started, e.g. when restarting ThingWorx or via the RestartThing Generic Service, the Timer is also enabled and will fire Events.
Changing the Update Rate - the interval at which the Events will be fired (by default every minute [60000 milliseconds]).
Changing the User Context - the user under which the Events will be handled. The user will need visibility and permission on e.g. executing Services or dependent Things, which are required to run the Service triggered by the Event.

Services

Timer Things inherit two Services by default from the Thing Template:

DisableTimer
EnableTimer

These will activate / de-activate the Timer and allow / disallow firing Events once the Update Rate has expired. Whether a Timer is currently enabled or disabled can be seen in its properties.
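To act on the Timer Event you add a Subscription to it, either on the Timer Thing itself or on another Thing that subscribes to it. A minimal sketch of such a Subscription body (the Thing and property names are illustrative):

```javascript
// Subscription to the Timer Event: runs each time the Update Rate expires.
// 'me' is the Thing that owns the Subscription; the names below are illustrative.
var target = Things["MyCounterThing"];
target.pollCount = target.pollCount + 1;
logger.info("Timer fired; pollCount is now " + target.pollCount);
```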
View full tip
Get MQTT (like mosquitto) operating with SSL.

1. Use http://rockingdlabs.dunmire.org/exercises-experiments/ssl-client-certs-to-secure-mqtt as your primary guide for building out your self-signed CA certificate and your server certificate and key. Simply follow the directions there, with the one caveat of setting the IPLIST and HOSTLIST environment variables prior to executing the generate-CA.sh script. This is necessary for hosted environments like AWS, where the actual IP address of the system cannot be used to access the server from the internet. Put the external-facing IP address in IPLIST and the external-facing fully qualified domain name (FQDN) in HOSTLIST. If you have multiple usable IP addresses or hostname aliases, enclose them in quotes and separate them with spaces (export IPLIST="1.2.3.4 5.6.7.8"). Complete steps 1-3 of that guide. This is sufficient to get the MQTT traffic encrypted and use it with ThingWorx.

2. Do not proceed until you can make a mosquitto_pub and mosquitto_sub pass data using the --cafile option, and get an error if you do not supply the --cafile option. Make sure you have a copy of the ca.crt file generated by the script above to reference in the commands. Note that it may be necessary to use the IP address rather than the FQDN.

mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic
mosquitto_pub --cafile path/to/ca.crt -h ipaddr -t topic -m message

3. Create an MQTT Thing in ThingWorx based on the MQTT ThingTemplate. Create a property in the new Thing for sending messages to the MQTT broker.

4. In the configuration page for the new MQTT Thing, set the serverName and serverPort, and check the useSSL checkbox. In the Property to MQTT topic mappings, create a publish entry that points to the property you created in the Thing and set the topic to the MQTT topic on which you want to publish.

5. The ca.crt file created by the script above is the certificate for a new Certificate Authority (self-signed, so not really official). Clients may have to import this certificate into their trusted CA root store in order to make the encryption work. Add the ca.crt file from the MQTT broker system to a keystore file that will become Tomcat's truststore (the list of CAs trusted by the server). See the Tomcat documentation if you need to configure HTTPS on Tomcat as well. Create a new keystore if one does not already exist as a truststore:

keytool -import -trustcacerts -file /path/to/ca/ca.crt -alias CA_ALIAS -keystore path/to/TrustStore -storepass mypassword

Replace CA_ALIAS with some identifying string like MyPrivateCertificateAuthority (it did not appear to matter which CA_ALIAS value is used). Replace path/to/TrustStore to point to the file that already exists or that you want to create.

6. Add the following to the CATALINA_OPTS for starting Tomcat:

-Djavax.net.ssl.trustStore=path/to/TrustStore -Djavax.net.ssl.trustStorePassword=xxxx

Replace path/to/TrustStore with the pathname of the file you created or updated with keytool above. Replace xxxx with whatever password you used in the keytool command above.

7. Restart Tomcat. Check the MQTT Thing for its isConnected property. It should now be true. If it is not, check the log files for mosquitto and for Tomcat, looking for SSL issues.

8. Change the property value and see it appear in the output of a properly constructed mosquitto_sub command (as in step 2, e.g. mosquitto_sub --cafile path/to/ca.crt -h ipaddr -t topic).
View full tip
Announcing the Final Installment

JMeter for ThingWorx: the Comprehensive Guide and Best Practice Tips

This is the final post on using JMeter for ThingWorx. Below are best practice tips for using JMeter and for load testing in general. Attached to this post is a comprehensive guide that includes all of the information from every post we've made on JMeter, including the tutorials. For a more central source, feel free to download the guide, or see the past posts here:

JMeter for ThingWorx (original post)
Building More Complex Tests in JMeter
Distributed Testing with JMeter
Generating and Reviewing JMeter Results

JMeter Best Practice Tips

Use Distributed Testing
As already mentioned in a previous post, each JMeter client can only handle about 150-250 threads depending on the complexity of the tests, and each client will need around 1 CPU and up to 8 GB of RAM for the Java heap. Some test plans will run with fewer host resources, so resizing the test client VM up or down is often required during test development. Create a batch or shell script to start the multiple JMeter clients for greater ease of use.

Use Non-Graphical Mode
Non-graphical mode allows the system to scale up higher; client processing uses up resources just to keep the simulation running, but with graphical mode turned off, there is less of an impact on the response times and other results. Graphical mode is essentially only used for debugging.

Turn off Embedded Resources
This setting reloads all of the typically cached requests over and over; it produces far more download requests than is helpful, to the exclusion of other requests. Ensure this box is not checked, especially in the HTTP Request Defaults element.

Browser caching means that this setting doesn't actually simulate a proper user load, given that many of the reloaded resources would not be reloaded by actual users. Use it incrementally, for one or two HTTP requests only, if there is a reason why those requests might need to download fresh images, scripts, or other resources with each call; for instance, simulate page timeouts using this once per hour or something similar. Using it across the whole project will prevent the test from scaling well, while not actually simulating real-world conditions.

Avoid Using Listeners
Listeners such as the "View Results Tree" use additional resources that may skew the results in disingenuous ways, reflecting the needs of the clients themselves rather than the actual response times of the server. Many listeners are only for debugging a handful of threads while designing the tests. A list of recommended listeners for different purposes is in the JMeter documentation. Summary Report is the only one you want enabled, as it exports the results as a CSV or similarly formatted file, which can then be used to build reports.

JMeter CAN Handle SSO
JMeter can authenticate into and test an SSO-enabled system. Sometimes the SSO configuration is essential for customers, and they may therefore be quick to assume that they cannot use JMeter, but that's not entirely true. Some external tools that might help with this are BlazeMeter (mentioned again in just a moment) and Fiddler, a good tool for decoding what data a particular SSO setup is exchanging during the authentication process.
Use Logic Controllers for Parametrization
Parametrization is critical to mirroring a proper user load and allowing different data sets to be queried or created; the load should seem organic, random in the right ways, with actions occurring at random times rather than predictable ones, to prevent artificial peaks of usage that don't represent real usage of the Foundation server.

Random Order Controllers direct the threads down different paths based on random dice rolls, allowing for a randomized collection of user activity each time, not something that has to be regenerated like a set of Boolean values specified in an input CSV and used to navigate a series of true-or-false switches. Switches just look for an environment variable to be either 1 or 0, and when a thread hits a switch that's a 1, it triggers the switch below it, running the requests in the order given under the Transaction Controller that goes with the switch. In the example test plan, the 1's and 0's are given in the CSV input file; randomizing that input file therefore randomizes the execution of the switches too.

Use Commercial Add-Ons
There are many external, add-on tools and plugins which enhance JMeter's capabilities. One external tool that can enhance JMeter's capabilities is BlazeMeter, which has some free and some paid options to help create better reports, automatically removing much of the "garbage" REST calls (which would otherwise need to be manually deleted) and providing more consumable test reports right out of the box. Other tools and plugins include:

Maven
Netbeans
SonarQube
Jenkins
Autometer
Gradle
Amazon EC2
Lightning
IntelliJ IDEA
Cassandra
Grafana

For more best practice information, see the JMeter Best Practice Manual.

General Load Testing Guidelines

Concurrency Requirements – How to Properly Estimate the Size of the Load Test
Take a brand new ThingWorx-based app: how will people be accessing the system, and how often? How many are business users? How many are engineers? What do they do? Many assume that every named user in the corporate LDAP will need access to the server, often tens of thousands of users; this generally drastically oversizes the system. Load testing for many thousands of users is very hard and requires a lot of set-up, tuning, and optimization to get right; so if it seems that thousands of users are expected, then validating this claim is important: most customers don't really have that many concurrent users in an engineering system. Use estimates based on how many people work at which offices, which time zones those people are in, and what kinds of users they are. Do they need access to engineering data? Perhaps there are simpler mashups for them that use fewer resources. One tool PTC offers for these sorts of estimations is the office time zone overlap Windchill Sizing Calculator.

Other ways to estimate include:

Analyzing the business processes: things like how long workloads typically take to complete and how many workloads are generated per day, converted into an hourly, per-minute, or per-second rate as desired for the peak duration, i.e. the length of the test.

"Day in the Life" modelling, or considering things like "what does user X do in a day?" Maybe user X checks out some drawings, edits them, and then checks them back in at 4:30. Maybe user Y actually digs into the underlying parts and assemblies, putting in change requests or orders throughout the day, instead of waiting for the end. Models are made based around the types of users.

Also consider:

What are the worst case scenarios?
What are the longest running activities? What produces the largest data transfers? What activities have large, heavy database queries? When is the peak overlap of usage? Beginning- and end-of-day downloads and check-ins? Reports that are generated regularly? How do these impact the foreground users?

For a simpler estimate, start with a percentage of the named user count; anywhere from 5-15% is a good ballpark. Don't overestimate just to feel like the application has been worth the investment; even if everyone is logged in and using it all at once, which is unlikely, load testing for every single user doesn't take into account the fact that people pause in between clicking on things to think, type emails, get coffee, and so forth. Fewer people than expected are actually doing concurrent activities like loading web pages and updating data streams. Whenever possible, use concurrency data from existing customer systems to guide the estimate for the new system. Legacy systems are great places to start.

Use Grafana to monitor the system side throughout the load test, which is also required to know the test has been successful; also set up Grafana to monitor the application once it goes live, both to prevent technical issues with the server and to mitigate them more rapidly when they occur.

Also remember that PTC Technical Support is here to help! Provide thread dumps with an open case to any TSE, and they will help troubleshoot the tests and review any errors in the ThingWorx or Tomcat logs.
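As a quick sanity check on whichever estimation approach you use, the arithmetic itself is simple. A minimal sketch in JavaScript (all of the numbers are purely illustrative placeholders, not recommendations):

// Illustrative concurrency estimate - replace the inputs with your own figures
var namedUsers = 2000;              // total named users in the corporate LDAP
var concurrencyRatio = 0.10;        // 5-15% of named users is a reasonable ballpark
var concurrentUsers = Math.round(namedUsers * concurrencyRatio);      // 200 threads to simulate

// Business-process view: workloads generated during the peak window
var workloadsPerDay = 4800;         // e.g. check-ins, report runs, service calls
var peakWindowHours = 8;            // length of the peak period being tested
var workloadsPerSecond = workloadsPerDay / (peakWindowHours * 3600);  // ~0.17 per second

console.log("Concurrent users: " + concurrentUsers);
console.log("Workloads per second: " + workloadsPerSecond.toFixed(2));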
View full tip
Exciting news! ThingWorx now has improved support for Docker containers to help you manage CI/CD, improve development efficiency in your organization, and save costs. Check out these FAQs below and, as always, reach out to me if you have any additional questions.

Stay connected,
Kaya

FAQs: ThingWorx Docker Containers

What are Docker containers?
From Docker.com: "a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings". Learn more here.

What's the difference between Docker containers and VMs?
Containers are an abstraction at the app layer that packages code and dependencies together, whereas Virtual Machines (VMs) are an abstraction of physical hardware, turning one server into many servers. There are some great discussions on this on Stack Overflow.

How can I build ThingWorx Docker images?
Check out the Building ThingWorx 8.3 Docker Images Guide or watch the accompanying video on how to build and test Docker containers.

How does PTC support building ThingWorx Docker images?
PTC provides the ability for customers and partners to build ThingWorx Docker images. A customer can download the Dockerfiles and scripts, packaged as a zip folder, from the PTC Software Downloads Portal under "ThingWorx Platform," then "Release 8.3," then "ThingWorx Dockerfiles." (Please note that you must be logged in for the link to function properly.) The zip folder contains the Dockerfiles, a template jar, and scripts to fetch the Tomcat and ThingWorx WAR files using the CLI. Java must be downloaded manually from the vendor's website. We also provide an instructional guide called "Build ThingWorx Docker Images," available on the Reference Documents page on the Support Portal.

How are ThingWorx Docker images different from the usual delivery media of WAR files?
The WAR file delivery is typically accompanied by an installation guide that contains the manual steps for creating the VM or bare-metal environment. That guide includes instructions for the administrator to manually install the prerequisites, including Tomcat, Java, and the ThingWorx platform settings files. To deploy and run the WAR file, the administrator follows the guide to create the runtime environment on an OS. In contrast, the Dockerfile build in this delivery automates the creation of a Docker image once supplied with the prerequisites.

Do you have any reference deployment and guidance?
Yes, you can refer to our blog post to learn how to deploy and run ThingWorx Docker containers on your existing Kubernetes environment.

Is there any recommendation on which Container Orchestrator as a Service (CaaS) a customer should run ThingWorx Foundation Docker container images on?
You can use Docker Compose for testing, but it is generally not suggested for production deployment use cases. In a production environment, customers should use container orchestrators such as Kubernetes, OpenShift, Azure Kubernetes Service (AKS), or Amazon Elastic Container Service for Kubernetes (Amazon EKS) to deploy and manage ThingWorx Docker images.

What are the skill sets required?
Familiarity with the OS CLI and Docker tools is required to build the ThingWorx Docker images. Familiarity with Docker Compose is needed to run the resulting Docker containers and test the builds.
We don't recommend Docker Compose for production use, but when using it for local testing and demo purposes, users can rapidly install ThingWorx and get it up and running in minutes. We expect PTC partners and customers who want to run ThingWorx containerized instances in their production environment to possess the required skill sets within their DevOps team.

How is ThingWorx licensing handled with the Docker images?
By default, the container created from these Docker images starts up in a limited mode with no license supplied. You can configure your username and password for the PTC licensing portal to automatically load a license via environment variables passed into the container on startup. Additionally, you can mount a volume to the /ThingworxPlatform directory, which contains your license file, or to retrieve a license request. To keep your Host ID consistent, ensure that the /ThingworxStorage and /ThingworxPlatform directories are persisted and not removed with individual container restarts. More detailed instructions can be found in the build guide or in a Kubernetes blog post.

Is Docker free? What version of Docker does PTC support for ThingWorx?
Docker is open source and licensed under the Apache 2 license. Information on Docker licensing can be found here. The following Docker versions are required:

Docker Community Edition (docker-ce): version 18.05.0-ce is recommended. To install the Docker Community Edition on your system, follow the instructions for your operating system on the Docker website.
Docker Compose (docker-compose): version 1.17.1 is recommended. To install Docker Compose on your system, follow the instructions for your operating system on the Docker website.

What persistence providers are currently supported?
PTC provides the ability to build ThingWorx Foundation containers for the following supported persistence providers:

H2
Microsoft SQL Server
PostgreSQL

Additional persistence providers will be added to the Docker build delivery as the ThingWorx Foundation Platform adds support for new databases in future releases.

What are some of the security best practices?
For production use, customers are strongly advised to secure their Docker environments by following all the recommendations provided by Docker. Review and implement the best practices detailed at https://docs.docker.com/engine/security/security/.

Can we build Docker images for ThingWorx High Availability (HA) architecture?
Yes. ThingWorx Dockerfiles are provided for both the basic ThingWorx deployment architecture and the HA ThingWorx deployment architecture.

How easy is the rehosting and upgrading of ThingWorx releases on Docker with existing data?
In a Kubernetes environment, data is kept in a separate volume and can be attached to different containers. When one container dies, the data can be attached to a different container and the container should start without issue. For more information, please refer to the upgrade section of the Building ThingWorx 8.3 Docker Images Guide.

Is it okay to use Docker exec to access the bash shell and make config changes, or should I always rebuild the image and redeploy?
Although using Docker exec to gain access to the container internals is useful for testing and troubleshooting issues, any changes made will not be saved after a container is stopped. To configure a container's environment, variables are passed in during the start process.
This can be done with Docker start commands, using compose files with environment variables defined, or with Helm charts. More detailed instructions can be found in the build guide or in this blog post.

What if there are issues? Should I call PTC Technical Support?
We are providing the scripts and reference documents solely to empower our community to build ThingWorx Docker images. We believe that customers using Docker in their production processes have the expertise to manage running Docker containers themselves. If there are any issues or questions regarding the build scripts provided in the PTC official downloads portal, customers can contact PTC Technical Support at 1-800-477-6435 or visit us online at http://support.ptc.com. PTC does not provide support for orchestration troubleshooting.

What can you share about future roadmap plans?
As we are enabling our customers and partners to build ThingWorx Foundation Platform Docker images, we plan to do the same for upcoming products such as ThingWorx Integration & Orchestration and ThingWorx Analytics, for upcoming persistence providers such as InfluxDB, and many more. We also plan to provide additional reference architecture examples and use cases to help developers understand how to use Docker containers in their DevOps and production environments.

Where can I learn more about Docker containers and container orchestrators?
See these resources for additional information:
https://training.docker.com/
https://kubernetes.io/docs/tutorials/online-training/overview/
View full tip
ThingWorx DevOps with Jenkins

DevOps as a topic is vast and has been addressed many times throughout the history of the PTC Community. Previous posts address what DevOps is, teach how to make use of DevOps like a pro, announce updates to the PTC Git Extension, and explain why this extension is so helpful for achieving continuous Git integration with ThingWorx.

This post provides a PDF guide on Jenkins integration with ThingWorx, including tutorials with detailed information on how to set up your ThingWorx instance and how to configure your Jenkins Pipeline. The PDF is listed for download separately, but it is also included in the zip with the other files required for the tutorial. The Jenkins Pipeline provided here is intended as an example / starting point for managing your DevOps in ThingWorx and can easily be extended. Please note that this Pipeline is not officially supported by PTC.
View full tip
Events

Timers and Schedulers both come with a specific Event inherited from their Thing Template:

Timer
ScheduledEvent

Both have a Data Shape that captures the timestamp of when the Event was actually fired. Events in ThingWorx are triggered when a specific condition is met. In this context, the condition is met and the Event is fired when a Timer has expired or a Scheduler's scheduled time is reached. Once an Event is triggered, Subscriptions take care of executing custom Services to react to the Event.

Subscriptions

Subscriptions listen for Events and can be used to react to certain Events by running custom Service scripts. To follow up on Timers and Schedulers, a new Subscription must be created that listens for any related Event fired. Add a new Subscription to the Thing.

As the Subscription usually listens to the Thing that it is configured on, the Source has to be left empty. When listening to another Entity's Events, the corresponding Entity can be picked in the Source Entity picker. Ensure the Enabled checkbox is checked, so that the Subscription actually executes the code in its Script area. For example, a Subscription script can write an entry to the ScriptLog whenever the Timer Event or the ScheduledEvent Event fires, as sketched below.
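A minimal sketch of such Subscription scripts, assuming the standard logger object that ThingWorx makes available inside Subscription scripts:

// Subscription listening to the Timer Event
logger.warn("The Timer Event was fired");

// Subscription listening to the ScheduledEvent Event
logger.warn("The ScheduledEvent Event was fired");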
View full tip
Let's consider that we have two Streams, Stream1 and Stream2, with the same DataShape StreamDS. The DataShape StreamDS has two fields: Id (NUMBER) and Name (STRING). We want to copy all the entries from Stream1 to Stream2.

Steps:
1. Open the Stream1 Stream in Composer and run the GetStreamEntriesWithData service.
2. In the popup, click the Create DataShape from Result option to create a new DataShape, GetStreamEntriesDS.
3. Create a Service and use JavaScript like the code below (comments added for details):

// Create a temporary Infotable to hold the output of the GetStreamEntriesWithData service
var paramsForInfotable = {
    infoTableName: "InfoTable" /* STRING */,
    dataShapeName: "GetStreamEntriesDS" /* DATASHAPENAME */
};
// result: INFOTABLE
var InfotableForCopy = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(paramsForInfotable);

// Save the output of the GetStreamEntriesWithData service to the temporary Infotable InfotableForCopy
var paramsForGetStreamEntriesWithDataService = {
    oldestFirst: false /* BOOLEAN */,
    maxItems: 10000 /* NUMBER */
};
// result: INFOTABLE dataShape: "GetStreamEntriesDS"
InfotableForCopy = Things["Stream1"].GetStreamEntriesWithData(paramsForGetStreamEntriesWithDataService);

// Read the data from the Infotable row by row and add each entry to the new Stream
var tableLength = InfotableForCopy.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = InfotableForCopy.rows[x];

    // values: INFOTABLE (DataShape: StreamDS)
    var values = Things["Stream2"].CreateValues();
    values.Id = row.Id; // NUMBER
    values.Name = row.Name; // STRING

    var paramsForAddStreamEntryService = {
        sourceType: row.sourceType /* STRING */,
        values: values /* INFOTABLE */,
        location: row.location /* LOCATION */,
        source: row.source /* STRING */,
        timestamp: row.timestamp /* DATETIME */,
        tags: row.tags /* TAGS */
    };
    // AddStreamEntry(tags:TAGS, timestamp:DATETIME, source:STRING, values:INFOTABLE(StreamDS), location:LOCATION):NOTHING
    Things["Stream2"].AddStreamEntry(paramsForAddStreamEntryService);
}

var result = InfotableForCopy;
View full tip
1. Use Postman or any other software capable of making REST API calls to ThingWorx.
2. Create a request in Postman with the following parameters:

Type: POST
URL: https://<IP>:<PORT>/Thingworx/Users/<UserName>/Services/AssignNewPassword
<IP>: IP of the server where ThingWorx is installed.
<PORT>: Port on which ThingWorx is running (if required).
<UserName>: User name of the user whose password is to be reset.

Headers:
appkey: your Administrator App Key, or the App Key of a user who has permission to run the AssignNewPassword service for that user.
Content-Type: application/json

Body:
{
    "newPassword": "NewPasswordHere",
    "newPasswordConfirm": "NewPasswordHere"
}

3. Send the request.
4. Log in using the new password.
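The same call can also be scripted outside of Postman. A minimal sketch in JavaScript (assuming a Node.js 18+ runtime with the built-in fetch API; the server address, user name, app key, and password below are placeholders you must replace):

// Placeholder values - substitute your own server, user, app key, and password
var url = "https://myserver:8443/Thingworx/Users/SomeUser/Services/AssignNewPassword";

fetch(url, {
    method: "POST",
    headers: {
        "appkey": "your-administrator-app-key",
        "Content-Type": "application/json",
        "Accept": "application/json"
    },
    body: JSON.stringify({
        newPassword: "NewPasswordHere",
        newPasswordConfirm: "NewPasswordHere"
    })
}).then(function (response) {
    // A 200 status indicates the password was reset successfully
    console.log("Status: " + response.status);
});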
View full tip
DevOps. It's not just a buzzword. It's a true development methodology that can make all the difference in your application quality and release time. Today, I'll walk you through how you can continuously integrate and deploy your ThingWorx applications to achieve CI/CD objectives as part of a DevOps-focused culture. At the end, I'll provide you a sneak peek of what you can expect in a future release (hint: we're working on some awesome new CI/CD functionality).

Overview of ThingWorx DevOps and Common Tools

I'll start by providing an overview of the DevOps cycle, and then I'll provide more details around each step of the cycle. Before we can start, you'll need to define your high-level architecture and functional requirements as part of the "Plan" phase.

Now, let's build your ThingWorx app. Ready? Here we go!

Code

As with any software platform, developers can start working in any number of areas of the IoT application, from edge to visualization, rules authoring to data modelling. For the purpose of this article, we'll start with the UI, but much of the same steps can be applied in any order. Also, we'll just call out the high-level steps of development; for more info on building out each aspect of your application, please visit developer.thingworx.com.

In ThingWorx Composer, build out your user interface with Mashups. Starting with the UI can help you think about the types of data you want to collect from devices and systems and how you want to solve your unique requirements for the business. Starting at this point can also help you show live POCs and functional mockups to stakeholders. Once you've built some starter screens and a skeleton of app navigation, you can start adding in data through configuration in Composer by creating your Things, Templates, properties and services.

[Optional] We offer 65 out-of-the-box widgets for the UI in the ThingWorx platform. There are times when you have specific visualization requirements for your application and the out-of-the-box widgets don't quite satisfy them. We have a path for that, through our custom widget extensions. If you choose to develop your own widget extensions, you can do so through other IDEs like Eclipse or WebStorm. Custom development and extensions are not just for UI. We also allow you to define Thing entities and their custom services in Java. If you are developing extensions in this way, we'd recommend you do so using Eclipse to code and Gradle to build and drive tests. For instructions on how to create your own extension, see "Creating Customized ThingWorx Widgets" on page 42 of the ThingWorx Application Development Guide posted on Ask Kaya.

With a good start on the data model, business logic and UI, some quick testing and validation is in order. You'll probably also want to save all of this work to share with colleagues or move to other integration environments. Capture all of your entity and code artifacts (Mashups, style definitions, Thing shapes, Thing templates, JavaScript, etc.) by using the "Export to Source Control" feature from ThingWorx Composer to write entities to the file system. You can use Git or other source systems to monitor the file system and push to the remote repository of your choice (e.g. GitHub, Bitbucket, etc.). Again, if you are developing extensions outside of Composer, you'll want to source control those items, too, from Eclipse or the file system directly.
Build

[Optional] You can build an application package as an extension, with all entities and code, from Eclipse using the ThingWorx Eclipse plugin. When you build the project, it will create an extension zip file. Again, more info is in the Application Development Guide.

Make your life easy by using tools like Gradle or Maven. ThingWorx is very similar to other Java development systems, so Gradle and Maven track your dependencies and create a package with all of the referenced extensions you may be using, putting them into one single zip file package. Once you have a package built, you can import it into test or integration environments.

For added automation, create repeatable tasks like a job in Jenkins so that every time your code is changed in the source repository (e.g. Git), it triggers a job to increment the version, build the project and create the package deliverables. Consider also configuring the Jenkins jobs to push artifacts to a central repository like Artifactory.

Test

Once your code has been built, we can't forget about testing! Automation is king for DevOps! For ThingWorx apps, you should still design a test strategy for your application, and then define and create your tests. These can run in your local developer environment, as well as be triggered via build tasks/changes in the source repository. Tools like JUnit for your entities and Java-backed services or Selenium for testing the Mashup UIs can be used. You can create separate jobs in Jenkins along with the build to run the integration and unit tests against an instance of ThingWorx that has the latest artifacts deployed into it. You can also do static code analysis using tools like PMD to find bugs, check style issues or identify inefficient code paths. To round out your app with performance and load testing, JMeter is one tool that you can leverage.

Release

Releasing is the culmination of the team's great work! If the test results pass and the builds are green, you are good to go, and it's time to establish your release build. Make sure that you consider a versioning scheme for your application and its artifacts. Semantic versioning is a pattern that can be implemented for your ThingWorx application. Correct versioning of ThingWorx packages affects your upgrade plans and is a signal to your users on the intent and content of the release. Again, see the Application Development Guide.

Once a release milestone is met, you can create a source branch in Git for that milestone, which will have all the changes encompassed in that release. Configure a Jenkins job to create builds from that milestone branch for maintenance purposes.

Deploy + Operate + Monitor

If you've tested and released your application, it's time for production and real users! Using the build and testing infrastructure you've set up earlier in the development process, you can also deploy your release builds to your target staging and production ThingWorx environments with Jenkins jobs, Artifactory and automated steps.

Finally, as with anything, it is important to measure success and monitor performance via KPIs, trends and logs. You can also extract application insights and recommendations from the PTC System Monitor (PSM) tool, which uses Dynatrace; here is a guide on how to install and deploy PSM.

There are many different paths through the platform and options for developers to match your local team processes and tools; this was simply a quick overview. Congrats!
You're now equipped to build ThingWorx apps while leveraging software best practices and incorporating a DevOps culture!

What can I expect in a future release of ThingWorx?

Coming in a near-term release of ThingWorx, we'll make it easier for you to continuously integrate and deploy your ThingWorx applications. How? Through new functionality that bolsters our packaging concepts, new cloud services to assist in deployment to environments and an error-proof way to integrate applications with an automated dependency awareness.

Stay tuned for more info about this exciting new deployment and application management functionality targeted for Fall 2019!

Reach out with any questions and stay connected.

-Kaya
View full tip
PTC was recognized as an outright leader in the global IoT market.
View full tip
As of May 24, 2023, ThingWorx 9.4.0 is available for download! Here are some of the highlights from the recent ThingWorx release.

What's new in ThingWorx 9.4?

Composer and Mashup Builder
New Combo chart and Pie chart based on the modern Web Components architecture, along with several other widget enhancements such as Grid toolbar improvements for custom actions, highlighting of newly added rows in the Grid widget, and others
Ability to style the focus on each widget when it is selected, to get specific styling for different components
Absolute positioning option, with a beta flag to use it by default, is now available
Several other Mashup improvements, such as Export function support; an improved migration dialogue (backported to 9.3 as well) to allow customers to move to new widgets; and legacy widget language changes

Foundation
Heavily subscribed events can now be distributed across the ThingWorx HA cluster nodes and scaled horizontally for better performance and processing of ThingWorx subscriptions
Azure Database for PostgreSQL Flexible Server GA support with ThingWorx 9.4.1 for customers deploying ThingWorx Foundation on their own Azure Cloud infrastructure
Improved ThingWorx APIs for InfluxDB to avoid data loss at high scale, with additional monitoring metrics in place as guardrails
Several security fixes and key third-party stack updates

Remote Access and Control (RAC)
Remote Service Edge Extension (RSEE) for the C-SDK, currently in Preview and planned to be Generally Available (GA) by September 2023, will allow ThingWorx admins to use Remote Access and Control with C-SDK-based connected devices and the Global Access Server (GAS) for a stable, secure, and large number of remote sessions
With 9.4, a self-signed certificate or a certificate generated by a trusted certificate authority can be used for RAC with the ThingWorx platform
Ability to use parameter substitution to update Auto Launch commands dynamically, based on data from the ThingWorx platform, is now available

Software Content Management
Support for 100k+ assets with redesigned and improved performance for Asset Search
Better control over package deployment with redesigned package tracking, filtering, and management
Improved overall SCM stability, performance, and scalability, and a more user-friendly and intuitive experience

Analytics
Improvements to the scalability of property transforms to enable stream processing of a larger number of properties within a ThingWorx installation
Refactoring of the Analytics Builder UI to use updated widgets and align with the PTC design system
Updates to underlying libraries to enable the creation and scoring of predictive models in the latest version of PMML (v4.4)
End of Support for the Analytics Manager .NET Agent SDK

View the release notes here and be sure to upgrade to 9.4!

Cheers,
Ayush Tiwari
Director, Product Management, ThingWorx
View full tip
Starting with the 8.1 release, the architecture of ThingWorx Analytics has changed from being a single server to being split into several independent microservices. This has been done to allow services to run concurrently; it also prevents issues with one microservice from affecting the others.

Overview

The new Analytics Server architecture consists of a suite of 9 microservices:

Data
Clustering
Profiling
Signals
Training
Prediction
Validation
Prescriptive
Results

All of the microservices work together to create an experience similar to what users had in the past. The data that is uploaded and generated by the Analytics Server is stored directly in a file system, instead of a Postgres database as it was in the past.

Closer Integration with ThingWorx

Please note that ThingWorx Foundation is required to be installed and operating before installing Analytics. During the install you will be asked to supply the IP address of the ThingWorx instance that will be used for Analytics. At this step, the AnalyticsServerThing is configured, which allows the user to interact with the Analytics Server through ThingWorx. All of the configured microservices are represented as Things under the AnalyticsServerThing. This is because ThingWorx Analytics has become a native part of ThingWorx Foundation functionality and is dependent on ThingWorx for user interaction. Because of these changes, there is no longer a direct ThingWorx Analytics Server REST API. Support for accessing the services via REST calls is now provided through the ThingWorx Core REST API layer, and a new URI pattern is required moving forward. One other update from the older versions is that application keys and Application IDs are no longer necessary. This should come as a welcome relief, as the application keys and IDs were a source of issues for users who misplaced them.

Less Data-Centric

In the old versions, jobs, models, signals, etc. were all tied to the dataset, so there was no way to move a model from one dataset to another. With the new architecture, this is no longer the case: you are able to move a model from one dataset to another seamlessly. Please note that when moving a model from one dataset to another, the two datasets must share the same metadata. This is because a model created to increase efficiency in a factory would provide no insight on a dataset that monitors the soil moisture in a corn field.

Updates to Metadata

Although going over the exact changes to the metadata is out of scope for this post, it is worth mentioning. For more details on the changes, please follow this link.

Summary

In conclusion, the new architecture of ThingWorx Analytics was designed to increase scalability and produce a more robust system. The new release is much more integrated into the ThingWorx Platform to increase ease of use compared to previous releases. It is much less data-centric than it was in the past and geared more toward the solutions themselves.
View full tip