
IoT & Connectivity Tips

In this video we show the setup for anomaly detection (ThingWatcher) in release 8.4. We also show how to create an anomaly alert.  
View full tip
In this video we show a simple use case on how to set up a transformed property to collect statistical values.
View full tip
In this video we cover the installation of the Platform Analytics services, which include the Descriptive Services and Property Transform services.
View full tip
In this video we introduce the Descriptive Services and Property Transform services that are found on the Platform Analytics media.
View full tip
This video shows the steps to install ThingWorx Analytics Server 8.4 as well as the ThingWorx Analytics Extension.
View full tip
We are excited to announce that ThingWorx 8.4 is now available for download!

Key functional highlights: ThingWorx 8.4 covers the following areas of the product portfolio: ThingWorx Analytics and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

ThingWorx Foundation
- Next Generation Composer:
  - File Repository Editor added for application file management
  - New entity Config Table Editor to enable application configurability and customization
  - Localization support for new languages: Italian, Japanese, Korean, Spanish, Russian, Chinese/Taiwan, Chinese/Simplified
- Mashup Builder:
  - Responsive Layout with new Layout Editor
  - 13 new and updated widgets (beta)
  - Theming Editor (beta)
  - New Functions Editor
  - New Personalized Workspace
- Platform:
  - Added support for AzureSQL, a relational database-as-a-service (DBaaS), as a new persistence provider - a PaaS database that is always running on the latest stable version of the SQL Server Database Engine and a patched OS, with 99.99% availability
  - Added support for InfluxData, a leading time series storage platform, as a new ThingWorx persistence provider - supports ingesting large amounts of IoT data and offers high availability with a clustering setup
  - Remote Access Extension: new extension for Remote Access and Control; supports VNC and RDP desktop sharing for any remote device, with HTTP and SSH connectivity also supported
  - Query Microservice: an optional microservice to offload the ThingWorx server by allowing query execution to occur in a separate process on the same or on a different physical machine
  - Click and Go installers for Windows and Linux (RHEL) covering the Postgres, AzureSQL and InfluxDB versions of ThingWorx
  - Thing Presence feature introduced, which indicates whether the connection of a thing is "normal" based on the expected behavior of the device
- Security: Major investments include updating 3rd-party libraries, handling of data to address cross-site scripting (XSS) issues, and enhancements to the password policy, including a password blacklist. A significant number of security issues have been fixed in this release; it is recommended that customers upgrade as soon as possible to take advantage of these important improvements.
- Docker Support:
  - Added Dockerfile as a distribution medium for ThingWorx Foundation and Analytics
  - Allows building a Docker container image that unlocks the potential of Dev and Ops
- Note: Legacy Composer has been removed and replaced with the New Composer.

Documentation:
- ThingWorx 8.4 Reference Documents
- ThingWorx Platform 8.4 Release Notes
- ThingWorx Platform Help Center
- ThingWorx Analytics Help Center
- ThingWorx Connection Services Help Center
View full tip
Beginning with version 8.4.0, ThingWorx Analytics Manager is able to delete Jobs by filter. The video below demonstrates this capability.
View full tip
Beginning with version 8.4.0, ThingWorx Analytics Server can automatically create the metadata (JSON file) based on the uploaded CSV file. The video below demonstrates the steps for automated metadata detection.
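For readers on an earlier release, or anyone curious what such a metadata file contains, here is a minimal, hypothetical sketch of inferring a ThingWorx Analytics-style metadata description from a dataframe with pandas. The field layout mirrors the ordinal-goal example shown in the time-to-failure post further down this page; the type-mapping rules and sample column names are illustrative assumptions, not the server's own detection logic.

```python
# Sketch: infer a ThingWorx Analytics-style metadata description from a dataframe.
# The type-mapping rules below are illustrative assumptions, not the server's
# own automated detection logic introduced in 8.4.0.
import json
import pandas as pd

def infer_metadata(df: pd.DataFrame) -> list:
    fields = []
    for name in df.columns:
        col = df[name]
        if pd.api.types.is_bool_dtype(col):
            data_type, op_type = "BOOLEAN", "BOOLEAN"
        elif pd.api.types.is_numeric_dtype(col):
            data_type, op_type = "DOUBLE", "CONTINUOUS"
        else:
            data_type, op_type = "STRING", "CATEGORICAL"
        fields.append({
            "fieldName": name,
            "values": None,
            "range": None,
            "dataType": data_type,
            "opType": op_type,
            "timeSamplingInterval": None,
            "isStatic": False,
        })
    return fields

if __name__ == "__main__":
    # Hypothetical sample columns, only to show the shape of the output
    sample = pd.DataFrame({"temperature": [20.5, 21.0], "status": ["ok", "fault"]})
    print(json.dumps(infer_metadata(sample), indent=2))
```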
View full tip
This post covers how to build and operationalize a time series model using ThingWorx Analytics. A lookback window is used to read multiple previous rows before the current one and base the prediction on those lookback rows. In this example we use time series data to predict water flow for different water pumps in a system. A full explanation of the method is attached, and all necessary resources are included in the attached files.
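To make the lookback idea concrete, here is a minimal pandas sketch that turns a raw time series into lookback rows, the kind of transformation ThingWorx Analytics performs internally when a lookback size is configured. The column names (pump_id, timestamp, flow) are illustrative assumptions, not taken from the attached dataset.

```python
# Sketch: build lookback features for a time series, grouped per asset.
# Column names are illustrative; ThingWorx Analytics does this internally
# when a lookback size is configured, so this only shows the idea.
import pandas as pd

def add_lookback(df: pd.DataFrame, value_col: str, lookback: int) -> pd.DataFrame:
    df = df.sort_values(["pump_id", "timestamp"]).copy()
    for i in range(1, lookback + 1):
        # value i rows before the current one, per pump
        df[f"{value_col}_lag{i}"] = df.groupby("pump_id")[value_col].shift(i)
    # rows at the start of each pump's history have no full lookback window
    return df.dropna()

if __name__ == "__main__":
    data = pd.DataFrame({
        "pump_id":   [1, 1, 1, 1, 2, 2, 2, 2],
        "timestamp": [1, 2, 3, 4, 1, 2, 3, 4],
        "flow":      [10.0, 11.0, 12.5, 13.0, 8.0, 8.2, 8.9, 9.4],
    })
    print(add_lookback(data, "flow", lookback=2))
```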
View full tip
Predicting time to failure (TTF) or remaining useful life (RUL) is a common need in the IIoT world. We look here at some ways to implement it, using one of the publicly available NASA datasets that simulates turbofan engine degradation (https://c3.nasa.gov/dashlink/resources/139/). The original dataset has 26 features:
- Column 1 – asset id
- Column 2 – cycle/time of sensor data collection
- Columns 3-5 – operational settings
- Columns 6-26 – sensor measurements
In the training dataset the sensor measurements end when the failure occurs.

Data Collection
Since the prediction model is based on historic data, data collection is a critical point. In some cases the data will already have been collected in the past and you need to make the best of it; see the Data Preparation chapter below. If you are still collecting data, a few points are good to keep in mind (some may or may not apply depending on the type of data collected):
- More frequent (higher-frequency) collection is usually better, especially for electronic measurements.
- When one or more specific sensor values are known to impact the TTF, take measurements at different values of that sensor, running until failure each time, without artificially modifying the values mid-run. For example, for a light bulb with a normal working voltage of 1.5V, take measurements at, say, 1V, 1.5V, 2V, 3V and 4V, but each time run until failure. Do not start at 1.5V and switch to 4V after 1h; this would compromise what the model can learn.
- More variation is better, as it helps the prediction model generalize. In the same voltage example, it is better to collect data at 1V, 1.5V, 2V, 3V and 4V rather than just 1.5V, which would be the normal running condition. This also depends on the use case: if we know for sure that the voltage will always be between 1.45V and 1.55V, we could focus data collection on that range only.
- Once the failure is reached, stop collecting data. We are not interested in what happens after the failure, and collecting data after the failure will also lower prediction model accuracy.
- Each failure run should be a separate cycle in the dataset. In other words, from a metadata standpoint, each failure run should be represented by a different ENTITY_ID.

TTF business need
Before going into data preparation and model creation, we need to understand what information matters for our business need in terms of TTF prediction. There are several ways to conceive the TTF, for example:
- Exact time value when the failure might occur. This is probably the most challenging to predict, and one should consider whether it is really necessary: do we need to know that a failure might occur in 12 minutes as opposed to 14 minutes? Very often, knowing that the time to failure is less than X minutes is what matters, not the exact time, so the following options are often more appropriate.
- Threshold. For some applications, knowing that a critical threshold has been reached is all that is needed. In this case a Boolean goal can be used, for example lessThan30min or healthy with yes/no values. This is usually much easier than predicting the exact value above.
- Range. For other applications we may need a bit more insight and try to predict ranges, for example lessThan30min, 30to60min, 60to90min and moreThan90min. In this case we define an ordinal goal. The caveat is that ThingWorx Analytics Builder does not currently support ordinal goals, though ThingWorx Analytics Server does, which simply means that model creation needs to be done through the API. This is the option we take with the NASA dataset.
The picture below shows the 3 different types of TTF listed above.

Data Preparation

General feature engineering
Data preparation is always a very important step in any machine learning work: the data must be presented in the way best suited for the algorithms to give the best results. There are many practices that can be used, but they are beyond the scope of this post; the Feature Engineering post gives a starting point, and many resources are available on the Internet, though the help of a data scientist may be necessary. As an example, a few features in the original NASA dataset have a constant value, are therefore unlikely to impact the prediction, and will be removed. This frees computational resources and prevents confusion in the model.

Sensor data resampling
The sampling across the different sensors should be uniform. In a real-world scenario, sensor data may be collected at different time intervals, so data transformation/extrapolation should be done so that all sensor values are at the same frequency in the uploaded dataset.

TTF feature
Since we want to predict the time to failure, we need a column in the dataset that represents this value for the data we have. In a real-world scenario we obviously cannot measure the time to failure, but we usually have sensor data up to the point of failure, which we can use to derive the TTF values. This is the case in the NASA dataset: the last cycle corresponds to the time when the failure occurred. We can therefore derive a new TTF feature that starts at 0 for the last cycle, when failure occurred, and is incremented by 1 back to the very first measurement, as shown below.

Once this TTF column is defined, we may need to transform it further depending on the path chosen for TTF prediction, as described in the TTF business need chapter. For the NASA dataset we choose a range TTF with the values more100, 50to100, 10to50 and less10 to represent the number of remaining cycles until the predicted failure; this is the information we need to predict in order to plan a suitable maintenance action. Our transformed TTF column looks as below (a short pandas sketch of this derivation appears at the end of this Data Preparation section).

Once the CSV data is ready, we need to create the JSON file that represents the metadata. In the case of a range TTF this is defined as an ordinal goal as below (see the attachment for the full metadata JSON file):

{
    "fieldName": "TTF",
    "values": ["less10",
               "10to50",
               "50to100",
               "more100"],
    "range": null,
    "dataType": "STRING",
    "opType": "ORDINAL",
    "timeSamplingInterval": null,
    "isStatic": false
}
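To make the TTF derivation and range binning concrete, here is a minimal pandas sketch. It assumes the raw file has columns named unit (asset id) and cycle, following the dataset description above; the exact column names in the attached CSV and the precise bin boundaries are assumptions for illustration.

```python
# Sketch: derive a TTF column (remaining cycles before failure) per asset
# and bin it into the ordinal ranges used in this post.
# Column names ("unit", "cycle") and bin boundaries are assumptions.
import pandas as pd

def add_ttf_ranges(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # The last cycle of each unit is the failure point: TTF = max(cycle) - cycle,
    # so TTF is 0 on the failure row and increases back toward the first measurement.
    df["TTF_cycles"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]
    # Bin remaining cycles into the ordinal goal values less10 / 10to50 / 50to100 / more100
    bins = [-1, 9, 49, 99, float("inf")]
    labels = ["less10", "10to50", "50to100", "more100"]
    df["TTF"] = pd.cut(df["TTF_cycles"], bins=bins, labels=labels)
    return df

if __name__ == "__main__":
    raw = pd.DataFrame({
        "unit":  [1, 1, 1, 2, 2],
        "cycle": [1, 2, 3, 1, 2],
    })
    print(add_ttf_ranges(raw))
```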
Model creation
Once the data is ready, it can be uploaded into ThingWorx Analytics and work on the prediction model can start. ThingWorx Analytics is designed to make machine learning easy and accessible to non-data-scientists, so these steps are easier than with many other solutions. However, some trial and error is needed to refine the model, which may also involve reworking the dataset. Important considerations:
- When dealing with time to failure prediction, it is usually necessary to unset Use Goal History in the Advanced parameters of the model creation wizard. If using the API, the equivalent is to set the virtualSensor parameter to true.
- Tests with the Redundancy Filter enabled should be done, as this has been shown to give better results.
- On a first attempt, it is a good idea to keep the lookback size at 0. This tells ThingWorx Analytics to find the best lookback size among 2, 4, 8 and 16. If you need a different value, or know that a different value is better suited, you can change it accordingly. However, bear in mind the following:
  - A larger lookback size leaves less data available for training, since more rows are needed to predict one goal.
  - A larger lookback leads to a significant memory increase – see https://www.ptc.com/en/support/article?n=CS294545

In the case of the NASA dataset, since we are using an ordinal goal, we need to execute the job through the API. For a more productive workflow this can be done through a mashup and services (see How to work with ordinal and categorical data in ThingWorx Analytics ? for an example). As a test, the TrainingThing.CreateJob service can be called from Composer directly, as shown below.

Once the model is created, we can check some performance statistics in ThingWorx Analytics Builder or, in the case of an ordinal goal, via the ValidationThing.RetrieveResults service. The most relevant output for an ordinal goal is the confusion matrix. Here is the confusion matrix I get.

Another validation is to compute PVA (Predicted Vs Actual) results for some validation data. ThingWorx Analytics runs validation automatically when using ThingWorx Analytics Builder and presents useful performance metrics and graphs. With an ordinal goal we still get this automatic validation run (hence the confusion matrix above), but no PVA graph or data is available. It can be done manually if some data is kept aside and not passed to the training microservice: once the model is completed, we score this validation dataset (using PredictionThing.RealTimeScore or BatchScore for an ordinal goal, or the Builder UI for other goals) and compare the prediction results with the actual values (a small sketch of this comparison appears after the Resources list below). Here is one example.

Depending on the business case, this model can be deemed acceptable or may need rework, such as changing the range values, changing learners' parameters, or modifying the dataset… There is certainly a fair amount of experimentation before arriving at the optimal model, but hopefully this post gives some good starting points.

Resources:
- Original dataset attached as train_FD001-original.csv
- Transformed dataset attached as train_FD001-TTF-transformed.csv
- JSON metadata file for the transformed dataset attached as train_FD001-ttford.json
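As an illustration of the manual Predicted-vs-Actual check described above, here is a minimal sketch that compares scored ordinal predictions against the actual TTF values of a held-out validation set and tabulates a confusion matrix. The column names (TTF, TTF_predicted) and the tiny in-memory example are assumptions for illustration, not the output format of a specific ThingWorx service.

```python
# Sketch: compare predicted vs. actual ordinal TTF values for a held-out
# validation set and tabulate a confusion matrix. Column names are assumptions.
import pandas as pd

LABELS = ["less10", "10to50", "50to100", "more100"]

def pva_report(actual: pd.Series, predicted: pd.Series) -> pd.DataFrame:
    # rows = actual class, columns = predicted class
    matrix = pd.crosstab(actual, predicted).reindex(index=LABELS, columns=LABELS, fill_value=0)
    accuracy = (actual == predicted).mean()
    print(f"Overall accuracy on held-out data: {accuracy:.2%}")
    return matrix

if __name__ == "__main__":
    validation = pd.DataFrame({
        "TTF":           ["less10", "10to50", "50to100", "more100", "10to50"],
        "TTF_predicted": ["less10", "10to50", "10to50",  "more100", "10to50"],
    })
    print(pva_report(validation["TTF"], validation["TTF_predicted"]))
```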
View full tip
Applicable releases: 8.3 and above

Description: In this video, we will see:
- Basic concepts of SVM
- Some scenarios for a better understanding
View full tip
I have created a mashup which allows you to easily use and test the Prescriptions functionality in ThingWorx Analytics (TWA). This is where you choose one or more fields for optimization, and TWA tells you how to adjust those fields to get an optimal outcome. The functionality is based on a public sample dataset for concrete mixtures; full details are included in the attached documentation.
View full tip
How to score new data with ThingWorx Analytics?

The following is valid starting with ThingWorx Analytics (TWA) 8.3.0.

Overview
Once a training model has been created, one of the main objectives is to score new data to predict the value of the goal. ThingWorx Analytics can score new data in 2 ways:
- Batch scoring
- Real time scoring

Batch scoring
Batch scoring is used when a large amount of data needs to be scored. To perform a batch scoring we usually follow steps similar to these:
1. Upload the historic data
2. Create a new model with this historic data
3. Upload new data – the data to be scored
4. Perform a prediction job to score the new data
5. Retrieve the prediction job result
Uploading the new data can be done in different ways. For a large amount of data, it can be easier to upload the data via a CSV file, in a similar way to the historic data; this is the approach used in ThingWorx Analytics Builder. If the amount of data is more limited, it can be sent in the body of the scoring request; the post Analytics: Prediction Methods Mashup shows a good example of how to do this using the PredictionThing.BatchScore service. We focus below on ThingWorx Analytics Builder, that is, uploading new data via a CSV file.
In order to perform the scoring job only on the new data in step 4 above, we need to be able to filter the added data. If the dataset already has a suitable column/feature, such as a timestamp, we can use it to score only new data where timestamp > newdate, assuming all data are in chronological order. If the dataset has no such feature, we have to add one beforehand, when we first upload the historic data in step 1 above. We often use a new column/feature named record_purpose for this. The initial data can take a value of training for this record_purpose feature, since it is used to create the initial model; newly added data to be scored can then get any value that identifies those rows only (a small sketch of this tagging appears at the end of this post). It is important to note that this record_purpose feature needs to be set with the optType INFORMATIONAL so that it is not taken into account by the learning algorithms.
The video below shows those steps using ThingWorx Analytics Builder.

Real time scoring
Real time scoring is better suited to small amounts of data. Real time scoring can be done either via the Analytics Server PredictionThing.RealTimeScore service or using the Analytics Manager framework. The posts How to work with ordinal and categorical data in ThingWorx Analytics and Analytics: Prediction Methods Mashup give examples of the use of the RealTimeScore service.
We concentrate below on the Analytics Manager. The process involves the following steps:
In Analytics Manager:
- Create an Analysis Provider that uses the AnalyticsServerConnector connector
- Publish the model created in ThingWorx Analytics Builder to Analytics Manager
- Enable the published model
- Create an Analysis Event
- Map the properties to the datashape fields
- Enable the Event
In ThingWorx Composer:
- Relevant properties of the Thing used in the Analysis Event are updated in some way
- This triggers the analysis job to be executed
- The scoring result is populated into the result property mapped in the Analysis Event
The Help Center has more detail about this process. The following video shows those steps.
The following articles can also be of interest for this topic:
- How to use ThingPredictor in release 8.3 of ThingWorx Analytics Server ?
- Publish model from Analytics Builder into Analytics Manager using TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector
- Creating Template For Thing, And Configure Analysis Event For Real-Time Scoring via Analytics Manager
Note that the AnalyticsServerConnector connector in release 8.3 replaces the ThingPredictor connector from previous releases.
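As a small illustration of the record_purpose approach described in the batch scoring section above, here is a minimal pandas sketch of tagging historic and new rows before upload. The column values, sample data and file names are illustrative assumptions.

```python
# Sketch: tag rows with a record_purpose feature so a later batch scoring job
# can be filtered to the newly added rows only. Values and file names are
# illustrative. Remember to declare record_purpose with optType INFORMATIONAL
# in the metadata so the learning algorithms ignore it.
import pandas as pd

# Historic data used to build the initial model
historic = pd.DataFrame({"sensor_1": [1.2, 1.4, 1.1], "goal": [0, 1, 0]})
historic["record_purpose"] = "training"
historic.to_csv("historic_data_tagged.csv", index=False)

# New data to be scored later: any value that identifies these rows will do
new_rows = pd.DataFrame({"sensor_1": [1.3, 1.5], "goal": [None, None]})
new_rows["record_purpose"] = "scoring_batch_1"
new_rows.to_csv("new_data_tagged.csv", index=False)

# The prediction job is then restricted to rows where
# record_purpose == "scoring_batch_1".
```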
View full tip
To help explain some of the different ways in which a prediction can be triggered from a ThingWorx Analytics model, I've built a mashup which allows you to easily trigger these types of prediction:
- API Realtime Prediction
- Analytics Manager: Event
- API Batch Prediction
For information on setting up this environment to use the mashup with some sample data, please see the attached instructions document, Prediction-Methods-Mashup.pdf. The referenced resource files can be found inside resources.zip.
For more information on prediction scoring, please see this related post: How to score new data with ThingWorx Analytics 8.3.x
View full tip
The video below is about ThingWorx Analytics and walks through the following functions: uploading a dataset, creating a training model, and creating a scoring job.
View full tip
The video below walks through how to publish a model from Analytics Builder into Analytics Manager using the connector named TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
View full tip
Continuing our series on troubleshooting ThingWorx Analytics installations, in this IoT Tech Tip we cover two items that have been appearing for many users.

Error 1069 encountered with a native Windows installation of ThingWorx Analytics 8.2

In some instances, after a user successfully installs ThingWorx Analytics Server (TWAS) on a Windows Server operating system, TWAS will report Error 1069: The service did not start due to logon failure. This can occur with any individual service created by the installation; the following fix should address the issue.

Primary reason this happens:
This error can be encountered when the user provides incorrect credentials for associating the services with an account during installation. In TWAS 8.2, there is a utility that enables the user to change the account associated with the services. It is important that the user provides the password for the Windows user account, and not the user/password combination for the ThingWorx Foundation Platform Server.

Steps to fix the issue

Solution 1:
Open a Command Prompt as Administrator, via Start Menu > Run > type CMD, then right-click cmd.exe and choose Run As Administrator. In the elevated command prompt, change to the ThingWorxAnalyticsServer/bin directory; for example, with the default installation path this would be:
cd C:\Program Files (x86)\ThingWorxAnalyticsServer\bin
Then execute changeServiceUserAccount.bat <username>, for example:
changeServiceUserAccount.bat user1
You will be prompted to change the password for the user.

Solution 2:
If Solution 1 does not resolve the issue, you can alternatively change the Log On properties manually for each of the services (changeServiceUserAccount.bat does this via script, but on occasion the manual route may work). Open the Control Panel and navigate to Services, for example: Control Panel > All Control Panel Items > Administrative Tools. Right-click each individual service, go to Properties > Log On tab, and enter the account name and password for the local account. Note: the Local System account will not resolve this issue.

This issue was resolved in the ThingWorx Analytics Server 8.3 release, where all services are associated with the Network Service account. More information can be found in this Knowledge Article.

Uploading of a dataset hangs or does not complete in ThingWorx Analytics 8.3

On occasion, after a fresh installation of ThingWorx Analytics Server 8.3 on a Windows Server operating system, a dataset will not complete its upload. Typically no error message is displayed, and the upload wizard UI will just hang on the upload progress after:
Creating copy of Configuration File...
Submitting Create Dataset request...
Creating copy of Data File...

Primary reason this happens:
This is caused by the twas-zookeeper service being stuck in a PAUSED state, which means that twas-zookeeper did not start after installation.

Steps to fix the issue
Double-check that the JAVA_HOME variable is defined as a System Variable. In the ThingWorx Analytics Installation Guide, pages 12-14 outline the steps required as prerequisites. You can change this in Control Panel > System > Advanced Settings > Environment Variables, and add a new variable named JAVA_HOME under System Variables. Its value should be the location of your Java deployment.
Typically this is located in C:\Program Files\Java\<jre or jdk>_<version number>. More information can be found in this Knowledge Article.
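As a quick pre-flight check before retrying the upload, a short script like the hypothetical sketch below can confirm that JAVA_HOME is set and points at an existing Java installation. It is only a convenience, not part of the documented fix.

```python
# Sketch: verify that JAVA_HOME is set and points to a Java installation.
# This is a convenience check only, not part of the documented fix.
import os
from pathlib import Path

java_home = os.environ.get("JAVA_HOME")
if not java_home:
    print("JAVA_HOME is not set - define it as a System Variable, then restart the services.")
else:
    java_exe = Path(java_home) / "bin" / ("java.exe" if os.name == "nt" else "java")
    if java_exe.exists():
        print(f"JAVA_HOME looks valid: {java_home}")
    else:
        print(f"JAVA_HOME is set to {java_home} but {java_exe} was not found.")
```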
View full tip
Precision and Recall are evaluation metrics used to evaluate a machine learning algorithm. This post assumes some prior understanding of the confusion matrix, and we recommend going through it here.

Example of animal image recognition
Consider the confusion matrix below for animal images, where the algorithm tries to identify each animal correctly (rows are the actual animal, columns are the predicted animal):

Actual \ Predicted   Cat   Dog   Leopard   Tiger   Jaguar   Puma
Cat                   62     2        0        0        1      0
Dog                    1    50        1        0        4      0
Leopard                0     2       98        4        0      0
Tiger                  0     0       10       78        2      0
Jaguar                 0     1        8        0       46      0
Puma                   2     0        0        1        1     42

Explaining a few cells:
- [Cat, Cat]: the cell has the value 62, meaning the image of a cat was identified as a cat 62 times.
- [Cat, Dog]: the cell has the value 2, meaning the image of a cat was identified as a dog twice.
- [Leopard, Tiger]: the cell has the value 4, meaning the image of a leopard was identified as a tiger 4 times.

Some questions and answers

Q. How many times did our algorithm predict the image to be a Tiger?
A. Looking at the Tiger column: 0 + 0 + 4 + 78 + 0 + 1 = 83.

Q. What is the probability that a Puma will be classified correctly?
A. Looking at the matrix above, Puma was classified correctly 42 times, but twice it was classified as Cat, once as Tiger and once as Jaguar. So the probability comes down to: 42 / (42 + 2 + 1 + 1) = 42/46 = 0.91.
This concept is called RECALL. It is the fraction of correctly predicted positives out of all actual positives, so we can say that Recall = (True Positives) / (True Positives + False Negatives).

Q. What is the probability that when our algorithm identifies an image as Cat, it is actually a Cat?
A. Looking at the matrix above, our algorithm identified a Dog as a Cat once, a Puma as a Cat twice, and a Cat as a Cat 62 times. So the probability comes down to: 62 / (62 + 1 + 2) = 0.95.
This concept is called PRECISION. It is the fraction of correctly predicted positives out of all predicted positives, so we can say that Precision = (True Positives) / (True Positives + False Positives).
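To tie the two formulas to the table above, here is a small Python sketch that computes per-class recall (row-wise) and precision (column-wise) from the same matrix. It is a plain illustration of the arithmetic shown in the answers, not a ThingWorx service.

```python
# Sketch: compute per-class recall and precision from the confusion matrix above.
# Rows are actual classes, columns are predicted classes.
labels = ["Cat", "Dog", "Leopard", "Tiger", "Jaguar", "Puma"]
matrix = [
    [62,  2,  0,  0,  1,  0],   # actual Cat
    [ 1, 50,  1,  0,  4,  0],   # actual Dog
    [ 0,  2, 98,  4,  0,  0],   # actual Leopard
    [ 0,  0, 10, 78,  2,  0],   # actual Tiger
    [ 0,  1,  8,  0, 46,  0],   # actual Jaguar
    [ 2,  0,  0,  1,  1, 42],   # actual Puma
]

for i, label in enumerate(labels):
    true_positives = matrix[i][i]
    actual_total = sum(matrix[i])                    # all actual instances of this class
    predicted_total = sum(row[i] for row in matrix)  # all predictions of this class
    recall = true_positives / actual_total
    precision = true_positives / predicted_total
    print(f"{label:8s} recall={recall:.2f} precision={precision:.2f}")

# For example, Puma: recall = 42/46 = 0.91, and Cat: precision = 62/65 = 0.95,
# matching the worked answers above.
```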
View full tip
This video gives an introduction to the Descriptive Services:
- what they are
- how to install them
- how to configure them
- how to use them
View full tip
This video shows the steps to install ThingWorx Analytics release 8.3  
View full tip
ThingWorx 8.3 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities, and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

Highlights of the release include:

ThingWorx Foundation
- Next Generation Composer:
  - Now the default admin and developer interface
  - Full feature parity with legacy Composer
  - New capability for User and Group administration, Authorization and permissions, Export, Monitoring and Logging. More in the Help Center
  - Localization support for German and French
- Mashup Builder:
  - jQuery 3 upgrade
  - Grid Advanced Extension now supports Cell Editing and Footers
- Platform:
  - Active Directory (AD) integration enhancements for larger AD forests and user extension field mapping
  - Upgrade in-place enhancements for Java SDK developers
- Developer Enablement:
  - Service Utilization Statistics: capture usage statistics, such as the time taken to execute a ThingWorx service and the number of times a service runs in ThingWorx, powered by the all-new and efficient Utilization Subsystem
  - ThingWorx Support Package tool: collect ThingWorx system data such as ESAPI configuration, ThingworxStorage logs, licensing, and JVM information to better diagnose system issues
- Administrator Password and Password Length:
  - New installations of ThingWorx will be required to supply an initial Administrator password of the installer's choice. That password must be supplied via a new entry in the platform-settings.json file. After the initial installation, the Administrator password should then be changed to a strong password to be used going forward.
  - As a step toward industry best practices, the Administrator password and all new passwords will need to be at least 10 characters. When upgrading to 8.3, passwords from older versions of the platform will not need to be modified, but any new passwords being created will need to be at least 10 characters long. See the installation instructions for complete details.

ThingWorx Analytics
- New Descriptive Services:
  - Core statistics (min, max, deviation, etc.), data distribution (binning), confidence intervals, and other useful calculations
  - Frequency analysis and transformation (via fast Fourier transform) for troubleshooting use cases and predictive analytics applications
  - Improves users' ability to apply logic and derive insights from streaming data without constructing complex models or accessing machine learning
  - Enables platform developers to easily process platform data in their applications and prepare the data for predictions
- Statistical Process Control (SPC) Services:
  - Provides industry-standard calculations that allow IoT developers to implement SPC "control chart rules" in their applications. Useful in manufacturing and in monitoring equipment and processes.
  - Supports a wide assortment of rules, including number of points continuously above/below a range, in and out of range, increasing or decreasing trends, or alternating directions
- Analytics Workbench:
  - Bundles the two Analytics interfaces (Analytics Builder and Manager) into a new Analytics section in Composer
- Predictive Analytics improvements:
  - Reduces overall install and administration complexity
  - Improves handling of time series data when used in predictive scoring
  - Includes a new learner, Support Vector Machines, enhancing the platform's utility in building Boolean predictions
  - Includes a new ensemble method, Majority Vote, that improves generated model accuracy
  - Provides redundancy filtering, which can optionally remove redundant information to improve explanatory analytics (Signals) and predictive model training
  - Now supports time series lookahead configuration, simplifying this type of prediction
  - Replaces ThingPredictor predictive scoring in Analytics Manager with native Analytics Server scoring, improving the scalability of concurrent jobs

Axeda Compatibility Package
- IDM Connector support: ACP v1.1.0 introduces the IDM Connector, which enables Axeda customers to connect their Axeda IDM agents to the ThingWorx platform. The IDM Connector provides support for registration requests, property updates, faults, events, file uploads and downloads.
- Axeda ThingWorx Entity Exporter update: ACP v1.1.0 also includes an updated version of Axeda-ThingWorx Entity Exporter (ATEE), which now supports exporting Axeda IDM assets from the Axeda application into a format that can be imported into the ThingWorx Platform.
- eMessage Connector improvements: Additionally, ACP v1.1.0 includes support for instruction-based Software Content Management packages for the eMessage Connector, which allow you to download file(s), execute instruction(s) and optionally restart the agent. The Axeda Compatibility Extension (ACE) has new entities to support the IDM Connector and SCM for the eMessage Connector.
- Finally, updated versions of the Axeda Compatibility Extensions (ACE) and the Connection Services Extension (CSE) are included in ACP v1.1.0 and provide an improved workflow for granting permissions to the eMessage and IDM Connectors.

ThingWorx Extension Updates
- Websocket Tunnel Extension: updated for 8.3 to support the upgrade to jQuery 3
- Grid Advanced 4.0.0 comes with 2 key features:
  - Editing - cell editing support for all basetypes. The previous version had Boolean editing only; 4.0.0 now includes support for all basetypes.
  - Footers - a footer section can now be added to the Grid to display rolled-up Grid totals. You can perform client-side calculations like count, min, max and average, and it includes support for custom functions.
  - Note - Grid Advanced 4.0.0 only supports ThingWorx 8.3 and above.
- Custom Charts 3.0.1: 12 bug fixes
- Google Maps 3.0.1: general bug fixes

ThingWorx Utilities
With the 8.3 release, ThingWorx Utilities functionality is being repackaged into ThingWorx Foundation and ThingWorx Asset Advisor. ThingWorx Workflow will now be available with Foundation. The functionality from the Asset and Alert Management Utilities will be delivered in ThingWorx Asset Advisor. ThingWorx Software Content Management capabilities will continue to be available for customers to manage the delivery of software to their connected products. The naming of "Utilities" is being phased out of the ThingWorx Platform packaging, but the key functionality formerly described as ThingWorx Utilities continues to be delivered with version 8.3.

ThingWorx 8.3 Reference Documents
- ThingWorx Analytics 8.3 Reference Documents
- ThingWorx Platform 8.3 Release Notes
- ThingWorx Platform Help Center
- ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
- ThingWorx Connection Services Help Center
- ThingWorx Analytics Help Center
- ThingWorx Industrial Connectivity Help Center
- ThingWorx Utilities Help Center
- ThingWorx Utilities Installation Guide
- ThingWorx eSupport Portal
- ThingWorx Developer Portal
- PTC Marketplace

The following items will be available for download from the PTC Software Download site on June 8, 2018:
- ThingWorx Platform – select Release 8.3
- ThingWorx Utilities – select Release 8.3
- ThingWorx Analytics – select Release 8.3
- ThingWorx Extensions – select individual extensions for download; will be available with the next Marketplace refresh
View full tip