IoT & Connectivity Tips

We are excited to announce that ThingWorx 8.4 is now available for download!

Key functional highlights

ThingWorx 8.4 covers the following areas of the product portfolio: ThingWorx Analytics and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

ThingWorx Foundation

Next Generation Composer:
- File Repository Editor added for application file management
- New entity Config Table Editor to enable application configurability and customization
- Localization support for new languages: Italian, Japanese, Korean, Spanish, Russian, Chinese/Taiwan, Chinese/Simplified

Mashup Builder:
- Responsive Layout with new Layout Editor
- 13 new and updated widgets (beta)
- Theming Editor (beta)
- New Functions Editor
- New Personalized Workspace

Platform:
- Added support for AzureSQL, a relational database-as-a-service (DBaaS), as a new persistence provider: a PaaS database that always runs on the latest stable version of the SQL Server Database Engine and a patched OS, with 99.99% availability.
- Added support for InfluxData, a leading time series storage platform, as a new ThingWorx persistence provider: supports ingesting large amounts of IoT data and offers high availability with a clustering setup.
- Remote Access Extension: a new extension for Remote Access and Control that supports VNC and RDP desktop sharing for any remote device, with HTTP and SSH connectivity also supported.
- Query Microservice: an optional microservice, with click-and-go installers for Windows and Linux (RHEL), that offloads the ThingWorx server by allowing query execution to occur in a separate process on the same or a different physical machine.
- Installers for ThingWorx running on Windows or RHEL against Postgres, AzureSQL, or InfluxDB.
- Thing Presence feature introduced, which indicates whether the connection of a thing is "normal" based on the expected behavior of the device.

Security:
- Major investments include updating third-party libraries, handling of data to address cross-site scripting (XSS) issues, and enhancements to the password policy, including a password blacklist.
- A significant number of security issues have been fixed in this release. It is recommended that customers upgrade as soon as possible to take advantage of these important improvements.

Docker Support:
- Added a Dockerfile as a distribution medium for ThingWorx Foundation and Analytics
- Allows building a Docker container image that unlocks the potential of Dev and Ops

Note: Legacy Composer has been removed and replaced with the New Composer.

Documentation:
- ThingWorx 8.4 Reference Documents
- ThingWorx Platform 8.4 Release Notes
- ThingWorx Platform Help Center
- ThingWorx Analytics Help Center
- ThingWorx Connection Services Help Center
View full tip
A confusion matrix is a technique for summarizing the performance of a classification algorithm. Classification accuracy alone can be misleading if you have an unequal number of observations in each class or if you have more than two classes in your data set. Calculating a confusion matrix can give you a better idea of what your classification model is getting right and what types of errors it is making.

Classification Accuracy and its Limitations:

Classification Accuracy = Correct Predictions / Total Predictions

The main problem with classification accuracy is that it hides the detail you need to better understand the performance of your classification model. Below are two examples:
1. When your data has more than 2 classes. With 3 or more classes you may get a classification accuracy of 80%, but you don't know if that is because all classes are being predicted equally well or whether one or two classes are being neglected by the model.
2. When your data does not have an even class distribution. You may achieve an accuracy of 90% or more, but this is not a good score if 90 records out of every 100 belong to one class, because you can achieve that score by always predicting the most common class value.

Classification accuracy can hide the detail you need to diagnose the performance of your model. Thankfully, we can tease apart this detail by using a confusion matrix.

Confusion Matrix Terminology:

A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. Let's start with an example for a binary classifier:

N = 165        Predicted no    Predicted yes
Actual no      50              10
Actual yes     5               100

What can we learn from this confusion matrix?
- There are two possible predicted classes: "yes" and "no". If we were predicting the presence of a disease, for example, "yes" would mean they have the disease, and "no" would mean they don't have the disease.
- The classifier made a total of 165 predictions (e.g., 165 patients were being tested for the presence of that disease).
- Out of those 165 cases, the classifier predicted "yes" 110 times and "no" 55 times.
- In reality, 105 patients in the sample have the disease, and 60 patients do not.

Let's now define the most basic terms, which are whole numbers (not rates):
- True positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
- True negatives (TN): We predicted no, and they don't have the disease.
- False positives (FP): We predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
- False negatives (FN): We predicted no, but they actually do have the disease. (Also known as a "Type II error.")

N = 165        Predicted no    Predicted yes   Total
Actual no      TN = 50         FP = 10         60
Actual yes     FN = 5          TP = 100        105
Total          55              110             165

This is a list of rates that are often computed from a confusion matrix for a binary classifier:
- Accuracy: Overall, how often is the classifier correct? (TP+TN)/total = (100+50)/165 = 0.91
- Misclassification Rate: Overall, how often is it wrong? (FP+FN)/total = (10+5)/165 = 0.09. Equivalent to 1 minus Accuracy; also known as "Error Rate".
- True Positive Rate: When it's actually yes, how often does it predict yes? TP/actual yes = 100/105 = 0.95. Also known as "Sensitivity" or "Recall".
- False Positive Rate: When it's actually no, how often does it predict yes? FP/actual no = 10/60 = 0.17
- Specificity: When it's actually no, how often does it predict no? TN/actual no = 50/60 = 0.83. Equivalent to 1 minus False Positive Rate.
- Precision: When it predicts yes, how often is it correct? TP/predicted yes = 100/110 = 0.91
- Prevalence: How often does the yes condition actually occur in our sample? Actual yes/total = 105/165 = 0.64
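To make these definitions concrete, here is a minimal JavaScript sketch (the scripting language used by ThingWorx services) that computes the rates above from the four cell counts of the example matrix; the variable names are illustrative, and the commented logger call assumes the code runs inside a ThingWorx service.

// Cell counts taken from the example confusion matrix above
var TP = 100, TN = 50, FP = 10, FN = 5;
var total = TP + TN + FP + FN;           // 165 predictions in total

var accuracy = (TP + TN) / total;        // (100+50)/165 = 0.91
var errorRate = (FP + FN) / total;       // (10+5)/165 = 0.09, i.e. 1 - accuracy
var recall = TP / (TP + FN);             // true positive rate: 100/105 = 0.95
var falsePositiveRate = FP / (FP + TN);  // 10/60 = 0.17
var specificity = TN / (TN + FP);        // 50/60 = 0.83, i.e. 1 - falsePositiveRate
var precision = TP / (TP + FP);          // 100/110 = 0.91
var prevalence = (TP + FN) / total;      // 105/165 = 0.64

// Inside a ThingWorx service you could log the results, e.g.:
// logger.info("Accuracy: " + accuracy + ", Recall: " + recall);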
View full tip
In this blog we will have a look at the installation of the ThingWorx Analytics Builder extension. This is intended as a guideline; make sure to check the Help Center for your release, as steps do vary between versions. The installation has been divided into 3 parts:

1. Introduction and import of the extension into the ThingWorx Platform
Video Link : 1568

2. Configuration of the extension
Note: For release 8.1, the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1 between times 00:12 and 00:40 for the up-to-date menu selection.
Video Link : 1572

3. Installation of the UploadThing module
Note: this step no longer applies as of ThingWorx Analytics 8.1
Video Link : 1573

Useful links:
PTC Download page for ThingWorx Analytics
PTC Reference Document page for ThingWorx Analytics
How to copy files from Windows to Linux?
View full tip
Overview

Time-series predictive models generated by ThingWorx Analytics Server in Analytics Manager will have additional columns in the Data Shape generated by ThingPredictor. These columns are known as "transformation fields" and are used for internal processing, but they are not necessary for inclusion in the Data Shape. There is no need to worry about mapping all these additional fields, since they are handled internally by ThingPredictor. There is one additional step that the user must take, detailed below.

Step to Import:

Edit the Data Shape generated by ThingPredictor to match the format of the data that was provided during the model training process. In other words, remove all the transformation fields from the Data Shape.
View full tip
Continuing our series of Troubleshooting ThingWorx Analytics installations, in this IoT Tech Tip we will cover two items that have been appearing for many users.

Error 1069 Encountered with Native Windows Installation of ThingWorx Analytics 8.2

In some instances, when a user successfully installs ThingWorx Analytics (TWAS) on a Windows Server operating system, they will encounter an error where TWAS reports Error 1069: The Service did not start due to logon failure. This can occur with any individual Service created by the installation; the following fix should address the issue.

Primary Reason This Happens:

This error can be encountered when the user provides incorrect credentials for associating the Services with an account during installation. In TWAS 8.2, there is a utility that enables the user to change the user associated with the Services. It is important that the user provides the password for the User Account on Windows, and not the user/password combination for the ThingWorx Foundation Platform Server.

Steps to Fix Issue

Solution 1:

1. Open a Command Prompt as Administrator: via Start Menu > Run, type CMD, then right-click cmd.exe and select Run As Administrator.
2. In the elevated command prompt, change your directory to the ThingWorxAnalyticsServer\bin directory; in the default installation path, for example: cd C:\Program Files (x86)\ThingWorxAnalyticsServer\bin
3. Execute changeServiceUserAccount.bat <username>, for example: changeServiceUserAccount.bat user1
4. You will be prompted to change the password for the user.

Solution 2:

If Solution 1 does not resolve the issue, you can alternatively change the Log On properties manually for each of the services. The changeServiceUserAccount.bat script does this automatically, but on occasion the manual approach may succeed where the script does not.

1. Open the Control Panel and navigate to Services, for example: Control Panel > All Control Panel Items > Administrative Tools.
2. Right-click each individual service, go to Properties > Log On tab, and enter the account name and password for the local account. Note: the Local System account will not resolve this issue.

This issue was resolved in the ThingWorx Analytics Server 8.3 release, where all Services are associated with the Network Service account.

More information can be found in this Knowledge Article.

Uploading of a Dataset hangs or does not complete in ThingWorx Analytics 8.3

On occasion, after a fresh installation of ThingWorx Analytics Server 8.3 on a Windows Server operating system, a dataset will not complete its upload. Typically no error message is displayed, and the upload wizard UI will just hang on the upload progress after:

Creating copy of Configuration File...
Submitting Create Dataset request...
Creating copy of Data File...

Primary Reason This Happens:

This is caused by the twas-zookeeper service being stuck in a PAUSED state, meaning that twas-zookeeper did not start after installation.

Steps to Fix Issue

Double-check that the JAVA_HOME variable was defined as a System Variable. In the ThingWorx Analytics Installation Guide, pages 12-14 outline the steps required as prerequisites. You can change this in Control Panel > System > Advanced Settings > Environment Variables, and add a new variable named JAVA_HOME under System Variables. The value should be the location of your Java deployment, typically C:\Program Files\Java\<jre or jdk>_<version number>.

More information can be found in this Knowledge Article.
View full tip
Video Author: Christophe Morfin
Original Post Date: September 13, 2016
Applicable Releases: ThingWorx Analytics 52.2 to 8.0

Description: In this video we cover the installation of the UploadThing module.

Useful Links: How to copy files from Windows to Linux
View full tip
ThingWorx 8.3 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities, and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

Highlights of the release include:

ThingWorx Foundation

Next Generation Composer:
- Now the default admin and developer interface
- Full feature parity with legacy Composer
- New capabilities for User and Group administration, Authorization and permissions, Export, Monitoring, and Logging. More in the Help Center
- Localization support for German and French

Mashup Builder:
- jQuery 3 upgrade
- Grid Advanced Extension now supports Cell Editing and Footers

Platform:
- Active Directory (AD) Integration enhancements for larger AD forests and user extension field mapping
- Upgrade-in-place enhancements for Java SDK developers

Developer Enablement:
- Service Utilization Statistics: capture usage statistics, such as the time taken to execute a ThingWorx service and the number of times a service runs, powered by the new and efficient Utilization Subsystem.
- ThingWorx Support Package tool: collect ThingWorx system data, such as ESAPI configuration, ThingworxStorage logs, licensing, and JVM information, to better diagnose system issues.

Administrator Password and Password Length:
- New installations of ThingWorx will be required to supply an initial Administrator password of the installer's choice. That password must be supplied via a new entry in the platform-settings.json file. After the initial installation, the Administrator password should then be changed to a strong password to be used going forward. Additional information.
- As a step toward industry best practices, the Administrator password and all new passwords will need to be at least 10 characters. When upgrading to 8.3, passwords from older versions of the platform will not need to be modified, but any new passwords being created will need to be at least 10 characters long. See the installation instructions for complete details.

ThingWorx Analytics

New Descriptive Services:
- Core statistics (min, max, deviation, etc.), data distribution (binning), confidence intervals, and other useful calculations.
- Frequency analysis and transformation (via fast Fourier transform) for troubleshooting use cases and predictive analytics applications.
- Improves users' ability to apply logic and derive insights from streaming data without constructing complex models or accessing machine learning; enables platform developers to easily process platform data in their applications and prepare the data for predictions.

Statistical Process Control (SPC) Services:
- Provides industry-standard calculations that allow IoT developers to implement SPC "control chart rules" in their applications. Useful in manufacturing and in monitoring equipment and processes.
- Supports a wide assortment of rules, including number of points continuously above/below a range, in and out of range, increasing or decreasing trends, or alternating directions.

Analytics Workbench:
- Bundles the two Analytics interfaces (Analytics Builder and Manager) into a new Analytics section in Composer.

Predictive Analytics Improvements:
- Reduces overall install and administration complexity.
- Improves handling of time series data when used in predictive scoring.
- Includes a new learner, Support Vector Machines, enhancing the platform's utility in building Boolean predictions.
- Includes a new ensemble method, Majority Vote, that improves generated model accuracy.
- Provides redundancy filtering, which can optionally remove redundant information to improve explanatory analytics (Signals) and predictive model training.
- Now supports time series lookahead configuration, simplifying this type of prediction.
- Replaces ThingPredictor predictive scoring in Analytics Manager with native Analytics Server scoring, improving the scalability of concurrent jobs.

Axeda Compatibility Package

IDM Connector Support:
- ACP v1.1.0 introduces the IDM Connector, which enables Axeda customers to connect their Axeda IDM agents to the ThingWorx platform. The IDM Connector provides support for registration requests, property updates, faults, events, file uploads and downloads.

Axeda ThingWorx Entity Exporter Update:
- ACP v1.1.0 also includes an updated version of the Axeda-ThingWorx Entity Exporter (ATEE), which now supports exporting Axeda IDM assets from the Axeda application into a format that can be imported into the ThingWorx Platform.

eMessage Connector Improvements:
- ACP v1.1.0 includes support for instruction-based Software Content Management packages for the eMessage Connector, which allows you to download file(s), execute instruction(s) and optionally restart the agent. The Axeda Compatibility Extension (ACE) has new entities to support the IDM Connector and SCM for the eMessage Connector.
- Finally, updated versions of the Axeda Compatibility Extensions (ACE) and the Connection Services Extension (CSE) are included in ACP v1.1.0 and provide an improved workflow for granting permissions to the eMessage and IDM Connectors.

ThingWorx Extension Updates

Websocket Tunnel Extension Update:
- The Websocket Tunnel Extension was updated for 8.3 to support the upgrade to jQuery 3.

Grid Advanced 4.0.0 comes with 2 key features:
- Editing: we now have cell editing support for all base types. The previous version had Boolean editing; 4.0.0 now includes support for all base types.
- Footers: a footer section can now be added to the Grid to display rolled-up Grid totals. You can perform client-side calculations like count, min, max and average, and it includes support for custom functions.
- Note: Grid Advanced 4.0.0 only supports ThingWorx 8.3 and above.

Custom Charts 3.0.1: 12 bug fixes
Google Maps 3.0.1: general bug fixes

ThingWorx Utilities

With the 8.3 release, ThingWorx Utilities functionality is being repackaged into ThingWorx Foundation and ThingWorx Asset Advisor. ThingWorx Workflow will now be available with Foundation. The functionality from the Asset and Alert Management Utilities will be delivered in ThingWorx Asset Advisor. ThingWorx Software Content Management capabilities will continue to be available for customers to manage the delivery of software to their connected products. The naming of "Utilities" is being phased out of the ThingWorx Platform packaging, but the key functionality formerly described as ThingWorx Utilities continues to be delivered with version 8.3.

Documentation:
- ThingWorx 8.3 Reference Documents
- ThingWorx Analytics 8.3 Reference Documents
- ThingWorx Platform 8.3 Release Notes
- ThingWorx Platform Help Center
- ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center
- ThingWorx Connection Services Help Center
- ThingWorx Analytics Help Center
- ThingWorx Industrial Connectivity Help Center
- ThingWorx Utilities Help Center
- ThingWorx Utilities Installation Guide

Additional resources:
- ThingWorx eSupport Portal
- ThingWorx Developer Portal
- PTC Marketplace

The following items will be available for download from the PTC Software Download site on June 8, 2018:
- ThingWorx Platform – Select Release 8.3
- ThingWorx Utilities – Select Release 8.3
- ThingWorx Analytics – Select Release 8.3
- ThingWorx Extensions – Select individual extensions for download; will be available with the next Marketplace refresh
View full tip
Design and Implement Data Models to Enable Predictive Analytics Learning Path

Design and implement your data model, create logic, and operationalize an analytics model.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 390 minutes.

1. Data Model Introduction
2. Design Your Data Model (Part 1, Part 2, Part 3)
3. Data Model Implementation (Part 1, Part 2, Part 3)
4. Create Custom Business Logic
5. Implement Services, Events, and Subscriptions (Part 1, Part 2)
6. Build a Predictive Analytics Model (Part 1, Part 2)
7. Operationalize an Analytics Model (Part 1, Part 2)
View full tip
This video concludes Module 9: Anomaly Detection of the ThingWorx Analytics Training videos. It gives an overview of the Statistical Process Control (SPC) Accelerator.
View full tip
PLEASE NOTE: DataConnect has now been deprecated and is no longer in use or supported.

We are regularly asked in the community how to send data from the ThingWorx platform to ThingWorx Analytics in order to perform some analytics computation. There are different ways to achieve this, and the right one depends on the business needs. If the analytics need is about anomaly detection, the best way forward is to use ThingWatcher without ThingWorx Analytics Server. The ThingWatcher Help Center is an excellent place to start, and a quick start-up program can be found in this blog. If the requirement is to perform a full-blown analytics computation, then sending data to ThingWorx Analytics is required. This can be achieved by:
- Using ThingWorx DataConnect, which is what this blog will cover
- Using a custom implementation. I will be very pleased to get your feedback on your experience in implementing a custom solution, as this could give some good ideas to others too.

In this blog we will use the example of a smart tractor in ThingWorx where we collect data points on:
- Speed
- Distance covered since last tyre change
- Tyre pressure
- Amount of gum left on the tyre
- Tyre size

From an analytics perspective, the gum left on the tyre is the goal we want to analyse in order to know when the tyre needs changing.

We are going to cover the following:
- Background
- Workflow
- DataConnect configuration
- ThingWorx configuration
- Data Analysis Definition configuration
- Data Analysis Definition execution
- Demo files

Background

For people not familiar with ThingWorx Analytics, it is important to know that ThingWorx Analytics only accepts a single data file in .csv format. Each column of the .csv file represents a feature that may have an impact on the goal to analyse. For example, in the case of the tyre wear, the distance covered, the speed, the pressure and the tyre size will be our features. The goal is also included as a column in this .csv file. So any solution sending data to ThingWorx Analytics needs to prepare such a .csv file. DataConnect performs this activity, in addition to some transformation.

Workflow

1. Decide on the properties of the Thing to be collected that are relevant to the analysis.
2. Create service(s) that collect those properties.
3. Define a Data Analysis Definition (DAD) object in ThingWorx. The DAD uses a Data Shape to define each feature that is to be collected and sends them to ThingWorx Analytics. Part of the collection process requires the use of the services created in step 2.
4. Upon execution, the DAD creates one skinny csv file per feature and sends those skinny .csv files to DataConnect. In the case of our example the DAD will create speed.csv, distance.csv, pressure.csv, gumleft.csv, tyresize.csv and id.csv.
5. DataConnect processes those skinny csv files to create a final single .csv file that contains all these features. During the processing, DataConnect performs some transformation and synchronisation of the different skinny .csv files.
6. The resulting dataset csv file is sent to ThingWorx Analytics Server, where it can then be used as any other dataset file.

DataConnect configuration

As seen in this workflow, a ThingWorx server, a DataConnect server and a ThingWorx Analytics server need to be installed and configured. Thankfully, the installation of DataConnect is rather simple and well described in the ThingWorx DataConnect User's Guide. Below I have provided a sample of a working dataconnect.conf file for reference, as this is one place where syntax can cause a problem:

ThingWorx configuration

The Platform Subsystem needs to be configured to point to the DataConnect server. This is done under SYSTEM > Subsystems > PlatformSubsystem.

DAD configuration

The most critical part of the process is to properly configure the DAD, as this is what dictates the format and values filled in the skinny csv files for the specific features. The first step is to create a Data Shape with as many fields as features/properties collected. Note that one field must be defined as the primary key. This field is the one that uniquely identifies the Thing (more on this later). We can then create the DAD using this Data Shape.

For each feature, a datasource needs to be defined to tell the DAD how to collect the values for the skinny csv files. This is where custom services are usually needed. Indeed, the Out Of The Box (OOTB) services, such as QueryNumberPropertyHistory, help to collect logged values, but the id returned by those services is continuously incremented. This does not work for the DAD. The id returned by each service needs to be what uniquely identifies the Thing, and it needs to be the same for all records for this Thing amongst the different skinny csv files. It is indeed this field that is then used by DataConnect to merge all the skinny files into one master dataset csv file. A custom service can make use of the OOTB services; however, it will need to override the id value. For example, a custom GetHistory service can use QueryNumberPropertyHistory to retrieve the logged values and timestamps but then override the id with the Thing's name (see the sketch at the end of this post).

The returned values of the service then need to be mapped in the DAD to indicate which output corresponds to the actual property's value, the Thing id and the timestamp (if needed). This is done through the Edit Datasource window (by clicking on the Add Datasource link, or on the Datasource itself if already defined in the Define Feature window). On the left-hand side, we define the origin of the datasource; here we have selected the service GetHistory from the Thing Template smartTractor. On the right-hand side, we define the mapping between the service's output and the skinny .csv file columns. A skinny csv file will have 1 to 3 columns, as follows:
- One column for the ID. Simply tick the radio button corresponding to the service output that represents the ID.
- One column representing the value of the Thing property. This is indicated by selecting the link icon on the left-hand side in front of the returned data which represents the value (in our example the output data from the service is named value).
- One column representing the timestamp. This is only needed when a property is time-dependent (for example, a time series dataset). In our example the property Distance, the distance covered by the tyre, does depend on time; however, we would not have a timestamp for the TyreSize property, as the wheel size remains the same.

How many columns should we have (and therefore how many outputs should our service have)?
- The .csv file representing the ID will have one column, and therefore the service collecting the ID returns only one output (the Thing name in our smartTractor example – not shown here but available in the download).
- Properties that are not time-bound will have a csv file with 2 columns: one for the ID and one for the value of the property.
- Properties that are time-bound will have 3 columns: one for the ID, one for the value and one for the timestamp. The service will therefore have 3 outputs.

Additionally, the input for the service may need to be configured by clicking on the icon.

Once the datasources are configured, you should configure the Time Sampling Interval in the General Information tab. This sampling interval is used by DataConnect to synchronize all the skinny csv files. See the Help Center for a good explanation of this.

DAD execution

Once the above configuration is done, the DAD can be executed to collect property values already logged on the ThingWorx platform. Select Execution Settings in the DAD and enter the time range for property collection. A dataset with the same name as the DAD is then created in DataConnect as well as in ThingWorx Analytics Server. The dataset can then be processed as any other dataset inside ThingWorx Analytics.

Demo files

For convenience I have also attached a ThingWorx entities export that can be imported into a ThingWorx platform for you to take a closer look at the setup discussed in this blog. Attached is also a small simulator to populate the properties of the Tractor_1 Thing. The usage is:
java -jar TWXTyreSimulatorClient.jar hostname_or_IP port AppKey
For example:
java -jar TWXTyreSimulatorClient.jar 192.168.56.106 8080 d82510b7-5538-449c-af13-0bb60e01a129

Again, feel free to share your experience in the comments below as it will be very interesting for all of us. Thank you.
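For reference, here is a minimal sketch of what such a custom GetHistory service could look like. This is not the service from the attached download: the data shape name GetHistoryShape, the property name Distance and the maxItems value are illustrative assumptions, and the service is assumed to run on a Thing based on the smartTractor Thing Template.

// Query the logged values of the property (OOTB service on the Thing)
var history = me.QueryNumberPropertyHistory({
    propertyName: "Distance",   // assumed logged NUMBER property
    maxItems: 10000,
    oldestFirst: true
});

// Build the output infotable; GetHistoryShape is an assumed data shape
// with fields id (STRING), value (NUMBER) and timestamp (DATETIME)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "result",
    dataShapeName: "GetHistoryShape"
});

// Copy the rows, overriding the id with the Thing name so that all
// skinny csv files share the same unique identifier for this Thing
for (var i = 0; i < history.rows.length; i++) {
    var row = history.rows.getRow(i);
    result.AddRow({
        id: me.name,
        value: row.value,
        timestamp: row.timestamp
    });
}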
View full tip
Thing Subscription

This post is intended for novice ThingWorx users who want to understand what a Thing Subscription is and the overall purpose of using Thing Subscriptions.

What is a Thing Subscription?

A Thing Subscription is a script (JavaScript) that is called each time an event occurs. Events are property states of interest to end users (e.g. temperature) and therefore indicators to kick off some functionality in a Thing Subscription when action is needed. Events can, for example, be triggered by an Alert that detects a change or an anomaly in property values. The Thing Subscription is explicitly linked to an event, and when the event fires the data is passed to the subscriber.

Why use a Thing Subscription?

Imagine a machine that runs 24 hours a day, 7 days a week, and requires human supervision. If a pump temperature exceeds the accepted value, it needs to be regulated by the manufacturing department. But no one in the department knows when the temperature will exceed the accepted value or drop suddenly; therefore the machine is continuously, physically supervised by humans, which leads to heavy costs for the manufacturer. With a Thing Subscription, a notification email can be sent directly to the department manager, who acts based on the email notification.

A Thing Subscription must have a defined rule which gets executed when an event occurs. The definition of the rule may accommodate any appropriate business logic.

Thing Subscription example process

In this scenario the Thing Subscription uses a predictive analytics model to detect data changes or anomalous values going through a Thing property. Based on historical data, including failure information, a predictive analytics model analyzes run-time values from individual Things/properties on the analytics server. The model detects patterns associated with past failures; when it predicts a failure/event based on the analyzed patterns, an action is fired via a Thing Subscription. That action could be for ThingWorx to create a service ticket or send a notification email to the service department.

Example of a simple Thing Subscription set-up, using a built-in ThingWorx alert instead of an analytics model

The following example of a Thing Subscription will send a notification email when the temperature exceeds the values defined in the ThingWorx alert configuration. Prerequisite: it is necessary to have a mail server extension imported into the ThingWorx Composer; this enables the service department to receive the email notification when an event has occurred. The extension can be downloaded from the Marketplace.

1. Create a Thing with the MailServer[i] as the Base Thing Template.
2. Create a new Thing and add Properties, together with an alert that is triggered when the value exceeds a user-defined temperature.
3. Enable the Thing Subscription:
- Select Subscriptions and click +Add
- Make sure to mark the checkbox Enabled
- Select your Event name and your Property name
- On the right side of the screen, enter the script/function that will call the ThingWorx email service to create the email notification
- Select Done and Save
4. Enable email notification:
- Select Services
- Provide a name
- Select Me/Entities
- Mark Other entity
- Find your Thing where the MailServer is the Thing Template
5. Find the SendMessage snippet/script and fill out the snippet with your own information (a minimal sketch follows below the footnote).
[i] View this blog for more information on how to install the MailServer
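As an illustration of step 3, a minimal subscription script could look like the sketch below. The Thing name MailServerThing, the email addresses and the Temperature property are all assumptions made for this example; replace them with your own entities.

// Runs each time the temperature alert event fires on this Thing
var message = "Alert triggered on " + me.name +
              ": temperature is " + me.Temperature;

// MailServerThing is the Thing created in step 1 (assumed name)
Things["MailServerThing"].SendMessage({
    from: "noreply@factory.example.com",
    to: "manager@factory.example.com",
    subject: "Temperature alert on " + me.name,
    body: message
});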
View full tip
In this part of the Troubleshooting blog series, we will review how to restart the individual services essential to the ThingWorx Analytics Application within the Virtual Machine Appliance.

Services have stopped, and I cannot run my Analytics jobs!

In some cases, users encounter issues where a system or process has halted and they are unable to proceed with their tasks. This can be due to a myriad of reasons, ranging from OS hanging issues to memory issues with certain components.

As we covered earlier in Part II, the ThingWorx Analytics Application is installed on a CentOS (Linux) operating system. As with most Linux operating systems, you can manually check and restart processes as needed.

Steps to Restart Services

With how the Application is installed and configured, the services should auto-start when you boot up the VM. You can verify that the Appliance is functional by running your desired API call. If a system is not functioning as expected, you will receive an error in your output when you POST an API call. Some errors are very specific, and you can search the Knowledge Database for any existing Knowledge Articles that may solve the issue. For error messages that do not have an existing article, you may want to attempt the following.

Method 1:

If you are encountering issues and are unsure which process is not working correctly, we recommend a full Application restart. This involves restarting the Virtual Machine Appliance via the command line terminal. We recommend the following command, run as the root user or using sudo, as this performs a "graceful" restart:

sudo reboot -h now

This restarts the virtual machine safely, and once you are back up and running you can run your API calls to verify functionality. This should resolve any incremental issues you may have faced.

Method 2:

If you want to restart an individual service, there is a particular start order that needs to be followed to make sure the Application operates as expected. The first step is to check which services are not running. You can use the following command to check what is running and its current status:

service --status-all

The services you are looking for are the following:
- Zookeeper
- PostgreSQL Server
- GridWorker(s)
- Tomcat

If a particular service is not on the running list, you will have to start it manually by using the service start command (you may be prompted for the root password):

service [name of service] start
e.g. service tomcat start

You can verify that the services are operating by running the status check command described above. If you need to restart all services, follow the specific start order below; this is important because of dependencies, such as Postgres being required by the GridWorker(s) and Tomcat.

The start order is as follows:
1. Zookeeper
2. PostgreSQL Server
3. GridWorker(s)
4. Tomcat

After completing the restart, and verifying that the services are running, run your desired API call.
View full tip
Anomaly Detection (also known as Outlier Detection) is a set of techniques that identify unusual occurrences in data. The premise is that such occurrences may be early indicators of future negative events (e.g. failure of assets or production lines). Data science algorithms for Anomaly Detection include both supervised and unsupervised methods. In unsupervised Anomaly Detection, the algorithms assume that most of the data points are "normal" (e.g. normal operation of the asset) and look for data points that are most dissimilar to the remainder of the dataset. Supervised Anomaly Detection requires a labeled set of anomalies, in which case predictive algorithms can be applied directly to this data.

ThingWorx employs a number of algorithms in support of Anomaly Detection:
- Simple threshold alerts. These are easy to set up on Thing properties but require a domain expert to provide the thresholds. The alert then fires automatically when the value of the monitored property goes outside the predefined range of values, often seen as the "bad" side of the threshold.
- Statistical Process Control (SPC). This can be implemented using ThingWorx Analytics Property Transforms. Most companies use a subset of SPC charts and rules to monitor production processes. Examples include the X-bar and R charts, as well as the Western Electric rules (e.g. one point outside the average +/- three sigma range, illustrated in the sketch at the end of this post). Explainable and widely accepted, SPC can also provide an earlier warning system compared to simple threshold alerts, in that it captures more complex patterns.
- Clustering. Using ThingWorx Analytics, one can build an optimal clustering for the available data points. Under the assumption that the data represents mostly normal operation and that there is no significant pre-defined pattern of anomalies forming its own cluster, one can identify outliers by looking at the distance between points and their corresponding cluster centers. Points that are very far from their corresponding cluster center can be labeled as anomalies.
- Semi-supervised Anomaly Alerting (formerly known as ThingWatcher). This functionality identifies single-property time series behavior that is statistically different from what was seen in a finite window of "known normal operation". As such, it does not identify a "bad" event, or even a precursor to a "bad" event; rather, it points the end user to further investigate a situation which may lead to a "bad" event. Anomaly Alerts can be set up like any other Alert on a Thing property, and multiple Anomaly Alerts can be set up on the same or different properties of a Thing. Behind the scenes, the platform builds a time series neural network model for the known normal operation data, which is then applied to incoming data; if the errors are significantly different from those on known normal operation over a period of time, an Anomaly Alert is produced.

The techniques mentioned above are either unsupervised or semi-supervised. If the dataset contains labeled anomalies (e.g. asset faults, or suspicious patterns), then supervised predictive techniques (such as regression, decision trees, neural nets, or ensemble methods) are available to model the relationships between such anomalies (dependent variables) and various variables of interest (independent variables). These models can then be employed to monitor assets or production lines for upcoming anomalies. In many real-world use cases, anomalies are relatively rare, so care needs to be taken when building such predictive models; techniques such as up-sampling can prove beneficial in these situations.

What constitutes an anomaly depends on the observed data and the current context. If only a few data points are initially available, then it is possible that a lot of future data is predicted as anomalous, despite being normal operation. Also, in terms of context, if anomaly detection is trained on a connected product in the winter, it is likely to flag all summer operation as anomalous. This can be tackled by implementing multiple anomaly detection alerts, one for each context of operation (e.g. season, recipe being manufactured, operation done by a robot).

Another consideration is lead time vs. explainability. For example, when a threshold alert fires, it is obvious why, but it may not be early enough to take action. As more advanced methods are employed, more complex patterns can be captured, hence more lead time, but typically at the expense of explainability. For example, semi-supervised Anomaly Alerting (formerly known as ThingWatcher) uses time windows, aggregations, and derivatives of up to the third order, resulting in significantly less explainability when an anomaly is presented to the end user.

Choosing the appropriate Anomaly Detection technique is use-case dependent, balancing the desired lead time and explainability. If historical data on failures/anomalies is not available, a good place to start is Statistical Process Control, as it provides a balanced approach between the two dimensions, in addition to being already in use across many manufacturing companies.
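To make the SPC idea tangible, here is a minimal JavaScript sketch of the simplest Western Electric rule (one point outside the average +/- three sigma range). The control limits are computed from a window of known normal operation, and incoming values are then checked against them; all data values are illustrative.

// Baseline window of known normal operation (illustrative data)
var baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.1, 9.7];

// Mean of the baseline
var mean = 0;
for (var i = 0; i < baseline.length; i++) { mean += baseline[i]; }
mean /= baseline.length;

// Standard deviation of the baseline
var variance = 0;
for (var j = 0; j < baseline.length; j++) {
    variance += Math.pow(baseline[j] - mean, 2);
}
var sigma = Math.sqrt(variance / baseline.length);

// Rule 1: flag any incoming point outside mean +/- 3 sigma
var incoming = [10.0, 10.4, 14.2];
var anomalies = [];
for (var k = 0; k < incoming.length; k++) {
    if (Math.abs(incoming[k] - mean) > 3 * sigma) {
        anomalies.push({ index: k, value: incoming[k] });
    }
}
// anomalies now contains the out-of-control point 14.2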
View full tip
This video goes through the steps required to use the Creo Insight extension:
- Download and install the required extension
- Set the required config.pro options
- Create a provider in Analytics Manager
- Publish a sensor from Creo
- Create an analysis Event in Analytics Manager
- Retrieve sensor values from ThingWorx in Creo

See also:
- https://www.ptc.com/en/support/article?n=CS277514 for a written version of those steps
- Creo Help Center
View full tip
Best Practices in Data Preparation for ThingWorx Analytics

Data preparation is an important phase in the process of data analysis when using ThingWorx Analytics. Basically, it takes your data from the raw state in which you gathered it (through your operational system or from your data warehouse) to data that is ready to be analyzed. In this document we will use "Talend Data Preparation Free Desktop" as a tool to illustrate some examples of the data preparation process. The tool can be downloaded from the following link: https://www.talend.com/products/data-preparation (you could also choose to use another tool). We will also use the BeanPro dataset in our examples and illustrations.

Checking data formats

The analysis starts with a raw data file. The user needs to make sure that the data files can be read. Raw data files come in many different shapes and sizes. For example, spreadsheet data is formatted differently from web data or data collected from sensors, and so forth. In ThingWorx Analytics (TWA), the accepted data format is CSV, so the retrieved data needs to be converted into that format before it can be uploaded to TWA. After that is done, the user needs to actually look at what each field contains. For example, a field listed as a character field could actually contain non-character data.

Verify data types

Verify the data types for each feature or field in the dataset used. All data falls into one of four categories that affect what sort of analytics can be applied to it:
- Nominal data is essentially just a name or an identifier.
- Ordinal data puts records into order from lowest to highest.
- Interval data represents values where the differences between them are comparable.
- Ratio data is like interval data except that it also allows for a value of 0.

It's important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example, when doing predictive analytics, TWA will not accept a nominal data field as the goal. The goal feature must be of a numerical, non-nominal type, so this needs to be confirmed at an early stage.

Creating a Data Dictionary

A data dictionary is a metadata description of the features included in the dataset. When displayed, it consists of a table with 3 columns:
- The first column represents a label: that is, the name of a feature, or a combination of multiple (up to 3) features which are fields in the dataset used. It points to "fieldname" in the configuration json file.
- The second column is the data type attached to the label (Integer, String, Datetime, ...). It points to "dataType" in the configuration json file.
- The third column is a description of the feature related to the label used in the first column. It points to "description" in the configuration json file.

In the context of TWA, this metadata is represented by a data configuration "json" file that is uploaded before the dataset itself.

Verify data accuracy

Once it is confirmed that the data is formatted in a way acceptable to TWA, the user still needs to make sure it's accurate and that it makes sense. This step requires some knowledge of the subject area that the dataset relates to. There isn't really a cut-and-dried approach to verifying data accuracy. The basic idea is to formulate some properties that you think the data should exhibit and test the data to see if those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you're trying to figure out whether the data really is what you've been told it is.

Identifying outliers

Outliers are data points that are distant from the rest of the distribution: either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the training models that TWA generates. A single outlier can have a huge impact on the value of the mean; because the mean is supposed to represent the center of the data, in a sense this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them.

Deal with missing values

Missing values are one of the most common (and annoying) data problems you will encounter. In TWA, dealing with null values is done by one of the methods below:
- Dropping records with missing values from your dataset. The problem with this is that missing values are frequently not just random little data glitches, so this should be considered a last resort.
- Replacing the null values with the average of the responses from the other records of the same field (see the sketch at the end of this post).

Transforming the dataset
- Selecting only certain columns to load, for example those relevant to records where salary is not present (salary = null)
- Translating coded values (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F")
- Deriving a new calculated value (e.g., sale_amount = qty * unit_price)
- Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
- Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns)

Please note: issues with Talend should be reported to the Talend team. Data preparation is outside the scope of PTC Technical Support, so please treat this article as a best-practices guide.
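As a minimal illustration of the averaging strategy for missing values, the following JavaScript sketch fills null entries of one field with the average of the non-null entries; the field name and data are illustrative.

// Records with a missing (null) salary value (illustrative data)
var records = [
    { id: "A", salary: 52000 },
    { id: "B", salary: null },
    { id: "C", salary: 61000 },
    { id: "D", salary: 49000 }
];

// Average over the non-null values only
var sum = 0, count = 0;
for (var i = 0; i < records.length; i++) {
    if (records[i].salary !== null) {
        sum += records[i].salary;
        count++;
    }
}
var average = sum / count;   // (52000 + 61000 + 49000) / 3 = 54000

// Replace each null with the computed average
for (var j = 0; j < records.length; j++) {
    if (records[j].salary === null) {
        records[j].salary = average;
    }
}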
View full tip
This post covers how to build and operationalize a time series model using ThingWorx Analytics. A lookback window is used to read multiple previous rows before the current one and base the prediction on those lookback rows. In this example we use time series data to predict water flow for different water pumps in a system. A full explanation of the method is attached, and all necessary resources are included in the attached files.
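To illustrate what the lookback window does to the data, here is a minimal JavaScript sketch that turns a single time series into training rows of lookback features plus the value to predict; the series values and the window size of 3 are illustrative.

// A univariate time series, e.g. water flow readings (illustrative data)
var series = [4.1, 4.3, 4.0, 4.6, 4.8, 5.1, 5.0];
var lookback = 3;   // number of previous rows used for each prediction

// Each training row holds the previous 3 values as features
// and the current value as the goal to predict
var rows = [];
for (var t = lookback; t < series.length; t++) {
    rows.push({
        features: series.slice(t - lookback, t),
        goal: series[t]
    });
}
// rows[0] is { features: [4.1, 4.3, 4.0], goal: 4.6 }, and so on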
View full tip
A feature is a piece of information that is potentially useful for prediction. Any attribute could be a feature, as long as it is useful to the model.

Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data. It is a loosely defined space of tasks related to designing feature sets for machine learning applications. It has two components: first, understanding the properties of the task you're trying to solve and how they might interact with the strengths and limitations of the model you are going to use; second, experimental work where you test your expectations and find out what actually works and what doesn't.

Feature engineering as a technique has three subcategories: feature selection, dimension reduction and feature generation.

Feature Selection: Sometimes called feature ranking or feature importance, this is the process of ranking the attributes by their value to the predictive ability of a model. Algorithms such as decision trees automatically rank the attributes in the data set; the top few nodes in a decision tree are considered the most important features from a predictive standpoint. As part of a process, feature selection using entropy-based methods like decision trees can be employed to filter out less valuable attributes before feeding the reduced dataset to another modeling algorithm. Regression-type models usually employ methods such as forward selection or backward elimination to select the final set of attributes for a model. An example is a project-development decision tree.

Dimension Reduction: This is sometimes called feature extraction. The most classic example of dimension reduction is principal component analysis (PCA). PCA allows us to combine existing attributes into a new data frame consisting of a much reduced number of attributes by utilizing the variance in the data. The attributes which "explain" the highest amount of variance in the data form the first few principal components, and we can ignore the rest of the attributes if data dimensionality is a problem from a computational standpoint.

Feature Generation or Feature Construction: Quite simply, this is the process of manually constructing new attributes from raw data. It involves intelligently combining or splitting existing raw attributes into new ones which have higher predictive power. For example, a date stamp may be used to generate 2 new attributes, such as AM and PM, which may be useful in discriminating whether day or night has a higher propensity to influence the response variable. Feature construction is essentially a data transformation process.

Tips for Better Feature Engineering

Tip 1: Think about inputs you can create by rolling up existing data fields to a higher/broader level or category. As an example, a person's title can be categorized as strategic or tactical. Those with titles of "VP" and above can be coded as strategic; those with titles of "Director" and below become tactical. Strategic contacts are those that make high-level budgeting and strategic decisions for a company; tactical contacts are those in the trenches doing day-to-day work. Other roll-up examples include:
- Collating several industries into a higher-level industry: collate oil and gas companies with utility companies, for instance, and call it the energy industry, or fold high-tech and telecommunications industries into a single area called "technology".
- Defining "large" companies as those that make $1 billion or more and "small" companies as those that make less than $1 billion.

Tip 2: Think about ways to drill down into more detail in a single field. As an example, a contact within a company may respond to marketing campaigns, and you may have information about his or her number of responses. Drilling down, we can ask how many of these responses occurred in the past two weeks, one to three months, or more than six months in the past. This creates three additional binary (yes=1/no=0) data fields for a model. Other drill-down examples include:
- Cadence: number of days between consecutive marketing responses by a contact: 1–7, 8–14, 15–21, 21+
- Multiple responses on same day flag (multiple responses = 1, otherwise = 0)

Tip 3: Split data into separate categories, also called bins. For example, annual revenue for companies in your database may range from $50 million (M) to over $1 billion (B). Split the revenue into sequential bins: $50–$200M, $201–$500M, $501M–$1B, and $1B+. Whenever a company falls within a revenue bin it receives a one; otherwise the value is zero. There are now four new data fields created from the annual revenue field (see the sketch at the end of this post). Other examples:
- Number of marketing responses by contact: 1–5, 6–10, 10+
- Number of employees in company: 1–100, 101–500, 501–1,000, 1,001–5,000, 5,000+

Tip 4: Think about ways to combine existing data fields into new ones. As an example, you may want to create a flag (0/1) that identifies whether someone is a VP or higher and has more than 10 years of experience. Other examples of combining fields include:
- Title of director or below and in a company with less than 500 employees
- Public company and located in the Midwestern United States
You can even multiply, divide, add, or subtract one data field by another to create a new input.

Tip 5: Don't reinvent the wheel – use variables that others have already fashioned.

Tip 6: Think about the problem at hand and be creative. Don't worry about creating too many variables at first; just let the brainstorming flow.
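As a minimal sketch of Tip 3 in practice, the following JavaScript turns an annual-revenue value into the four sequential bin flags described above; the revenue figure (in millions) is illustrative.

// Annual revenue in millions of dollars (illustrative value)
var revenue = 320;

// One binary flag per bin; exactly one of the four will be 1
var rev50to200   = (revenue >= 50  && revenue <= 200)  ? 1 : 0;
var rev201to500  = (revenue >= 201 && revenue <= 500)  ? 1 : 0;
var rev501to1000 = (revenue >= 501 && revenue <= 1000) ? 1 : 0;
var revOver1000  = (revenue > 1000) ? 1 : 0;

// For revenue = 320 the four new model inputs are 0, 1, 0, 0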
View full tip
ThingWorx Foundation Flow Enable customers using Azure to take advantage of Azure services Access hundreds of Azure system connectors by invoking Azure Logic Apps from within ThingWorx Flow Execute Azure functions to leverage Azure dynamic, serverless scaling and pay just for processing power needed Access Azure Cognitive AI services for image recognition, text to voice/voice to text, OCR and more Easily integrate with homegrown and commercial solutions based on SQL databases where explicit APIs or REST services are not exposed Automatically trigger business process flows by subscribing to Windchill object class and instance events Provide visibility to mature PLM content (such as when a part is released) to downstream manufacturing and supply chain roles and systems Easily add new actions by extending functionality from existing connectors to create new actions to facilitate common tasks Inherit or copy functionality from existing actions and change only what is necessary to support new custom action Azure Connector SQL Database Connector Windchill Event Trigger Custom Action Improvements Platform Composer: Horizontal tab navigation is back!  Also new Scheduler editor. Security: TLS 1.2 support by default, new services for handling expired device connections New support for InFlux 1.7 and MSSQL 2017 * New* Solution Central Package, publish and upload your app with version info and metadata to your tenancy of Solution Central in the PTC cloud Identify missing dependencies via automatic dependency management to ensure your application is packaged with everything required for it to run on the target environments Garner enterprise-wide visibility of your ThingWorx apps deployed across the enterprise via a cloud portal showcasing your company’s available apps, their versions and target environments to foster a holistic view of your entire IIoT footprint across all of your servers, sites and use cases Solution Central is a brand-new cloud-based service to help enterprises package, store, deploy and manage their ThingWorx apps Accelerate your application deployment Initially targeted at developers and admins in its first release, Solution Central enables you to: Mashup Builder 9 new widgets, 5 new functions. 
Theme Editor with swappable Mashup Preview Responsive Layout enhancements including new settings for fixed and range sizes New Builder for custom screen sizes, new Widget and Style editors, Canvas Zoom Migration utility available for legacy applications to help move to latest features Security 3 new built-in services for WebSocket Communications Subsystem: QueryEndpointSessions, GetBoundThingsForEndpoint, and CloseEndpointSessions Provide greater awareness of Things bound to the platform Allow for mass termination of connections, if necessary Can be configured to automatically disconnect devices with expired authentication methods Encrypting data-in-motion (using TLS 1.2) is a best practice for securely using ThingWorx For previous versions, the installer defaulted to not configuring TLS; ThingWorx 8.5 and later installers will default to configuring TLS ThingWorx will still allow customers to decline to do so, if desired Device connection monitoring & security TLS by default when using installer   ThingWorx Analytics Confidence Model Training and Scoring (ThingWorx Analytics APIs) Deepens functionality by enabling training and scoring of confidence models to provide information about the uncertainty in a prediction to facilitate human and automated decision making Range Property Transform and Descriptive Service Improves ease of implementation of data transformations required for common statistical process control visualizations Architecture Simplification Improves cost of ownership by reducing the number of microservices required by Analytics Server to reduce deployment complexity Simplified installation process enables system administrators to integrate ThingWorx Analytics Server with either (or both) ThingWorx Foundation 8.5 and FactoryTalk Analytics DataFlowML 3.0.   ThingWorx Manufacturing and Service Apps & Operator Advisor Manufacturing common layer extension - now bundling all apps as one extension (Operator Advisor, Asset Advisor, Production KPIs, Controls Advisor) Operator Advisor user interface for work instruction delivery Shift and Crew data model & user interface Enhancements to Operator Advisor MPMLink connector Flexible KPI calculations Multiple context support for assets   ThingWorx Navigate New Change Management App, first in the Contribute series, allows a user to participate in change request reviews delivered through a task list called “My Tasks” BETA Release of intelligent, reusable components that will dramatically increase the speed of custom App development Improvements to existing View Apps Updated, re-usable 3D viewing component (ThingView widget) Support for Windchill Distributed Vaults Display of Security Labels & Values   ThingWorx Azure IOT Hub Connector Seamless compatibility of Azure devices with ThingWorx accelerators like Asset Advisor and custom applications developed using Mashup Builder. Ability to update software and firmware remotely using ready-built Software Content Management via “ThingWorx Azure Software Content Management” Module on Azure IoT Edge. Quick installation and configuration of ThingWorx Azure IoT Hub Connector, Azure IoT Hub and Azure IoT Edge SCM module.   
Documentation

ThingWorx Platform
ThingWorx Platform 8.5 Release Notes
ThingWorx Platform Help Center
ThingWorx 8.5 Platform Reference Documents
ThingWorx Connection Services Help Center

ThingWorx Azure IoT Hub Connector
ThingWorx Azure IoT Hub Connector Help Center

ThingWorx Analytics
ThingWorx Platform Analytics 8.5.0 Release Notes
Analytics Server 8.5.1 Release Notes
ThingWorx Analytics Help Center

ThingWorx Manufacturing & Service Apps and ThingWorx Operator Advisor
ThingWorx Apps Help Center
ThingWorx Operator Advisor Help Center

ThingWorx Navigate
ThingWorx Navigate 8.5 Release Notes
Installing ThingWorx Navigate 8.5
Upgrading to ThingWorx Navigate 8.5
ThingWorx Navigate 8.5 Tasks and Tailoring
Customizing ThingWorx Navigate 8.5
PTC Windchill Extension Guide 1.12.x
ThingWorx Navigate 8.5 Product Compatibility Matrix
ThingWorx Navigate 8.5 Upgrade Support Matrix
ThingWorx Navigate Help Center

Additional Information
Help Center
ThingWorx eSupport Portal
ThingWorx Developer Portal
PTC Marketplace
The National Instruments Connector can be found on PTC Marketplace
View full tip
In this video we cover the installation of the UploadThing module. This video applies to ThingWorx Analytics 52.2 through 8.0; it is no longer applicable to ThingWorx Analytics 8.1.

Useful links:
How to copy files from Windows to Linux

Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 3 of 3
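As a companion to the "How to copy files from Windows to Linux" link above, here is a minimal sketch of copying a module archive over to the Analytics machine; the file name UploadThing.zip, the user, the host, and the target directory are placeholders to replace with your own:

  # From Windows, using pscp (ships with PuTTY); paths are placeholders
  pscp.exe C:\Downloads\UploadThing.zip twxuser@analytics-host:/tmp/

  # From macOS/Linux (or Windows with OpenSSH), using scp
  scp ./UploadThing.zip twxuser@analytics-host:/tmp/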
View full tip
The ThingWorx Analytics appliance can be built on multiple operating systems. In this post, we will discuss common issues that other users have encountered during the build. Permissions Denied – Read/Write Access to Third-Party Components This is encountered when executing the desired shell script to begin the creation process. On macOS and Linux you may encounter a "Permission Denied" error on the two components required by the creation process: packer-post-processor-vhd and packer. Error Message This results in a Terminal message that reads "Process Completed, No Artifacts Created". It indicates that the Packer script failed to complete the task, and the desired appliance images were not created. To correct this issue, change the permissions of the packer-post-processor-vhd and packer components so that they are readable and executable by the user account attempting to create the appliance. Solution Run the following commands in the Virtual Machine terminal (you may need to run them with sudo or as root): chmod +x packer-post-processor-vhd chmod +x packer After running the above commands, run the shell script for the desired VM appliance output. This should resolve the "Permission Denied" issue while executing the build scripts (a consolidated command sketch appears at the end of this post). Error Starting Appliance in VirtualBox Users have experienced this issue on the first run of the appliance, right after it has been assembled. This issue is unique to VirtualBox versions 5.0 and above. Error Message – Dialog Box If you encounter the error depicted below, check the settings of the imported OVA for any errors: This issue is the result of invalid settings in the appliance configuration. Check for invalid settings by navigating to the Settings menu for the appliance: The "Invalid settings detected" warning indicates that when the product was assembled, some configuration settings were not applied correctly by the creation tool scripts. Solution Hover your mouse over the highlighted settings and VirtualBox will point you to the cause; in this case it is the remote display setup. Change the settings under Display (Remote Display tab) by unchecking the Enable Server checkbox. Press OK after unchecking "Enable Server", and start the appliance.
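Putting the permissions fix together, a minimal end-to-end sketch, assuming the build is driven by a shell script in the same directory (the name build-appliance.sh is a placeholder; use whichever script matches your desired VM output):

  # Make the Packer components executable (prepend sudo if your user lacks ownership)
  chmod +x packer-post-processor-vhd
  chmod +x packer

  # Confirm the execute bits are set before re-running the build
  ls -l packer packer-post-processor-vhd

  # Re-run the build script for the desired appliance output (placeholder name)
  ./build-appliance.sh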
View full tip
Operationalize an Analytics Model Guide Part 1

Overview

This project will introduce ThingWorx Analytics Manager. Following the steps in this guide, you will learn how to deploy the model which you created in the earlier Builder guide. We will teach you how to utilize this deployed model to investigate whether or not live data indicates a potential engine failure. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete both parts of this guide is 60 minutes.

Step 1: Analytics Architecture

You can leverage product-based analysis Models developed using PTC and third-party tools while building solutions on the ThingWorx platform:
Use simulation as a historical basis for predictive Models
Create a virtual sensor from simulation
Design-time versus operational-time intelligence
It is important to understand how Analytics Manager interacts with the ThingWorx platform.

Build Model

In an IoT implementation, multiple remote Edge devices feed information into the ThingWorx Foundation platform. That information is stored, organized, and operated upon in accordance with the application's Data Model. Through Foundation, you will upload your dataset to Analytics Builder. Builder will then create an Analytics Model.

Operationalize Model

Analytics Manager tests new data through the use of a Provider, which applies the Model to the data to make a prediction. The Provider generates a predictive result, which is passed back through Manager to ThingWorx Foundation. Once Foundation has the result, you can perform a variety of actions based on the outcome. For instance, you could set up Events/Subscriptions to take immediate and automatic action, as well as alerting stakeholders of an important situation.

Step 2: Simulate Data Source

For any ThingWorx IoT implementation, you must first connect remote devices via one of the supported connectivity options, including Edge MicroServer (EMS), REST, or Kepware Server. Edge connectivity is outside the scope of this guide, so we'll use a data simulator instead. This simulator acts like an engine with a vibration sensor, as described in Build a Predictive Analytics Model. The sensor data is subdivided into five frequency bands, s1_fb1 through s1_fb5. From this data, we will attempt to predict (through the engine's vibrations) when a low grease emergency condition is occurring.

Import Entities

Import the engine simulator into your Analytics Trial Edition server.
1. Download and unzip the attached amqs_entities.zip file.
2. At the bottom-left of ThingWorx Composer, click Import/Export > Import.
3. Keep the default options of From File and Entity, click Browse, and select the amqs_entities.twx file you just downloaded.
4. Click Import, wait for the Import Successful message, and click Close.
5. From Browse > All, select AMQS_Thing from the list.
6. At the top, click Properties and Alerts to see the core functionality of the simulator.
NOTE: The InfoTable Property is used to store data corresponding to the s1_fb1 through s1_fb5 frequency bands of the vibration sensor on our engine. The values in this Property change every ten seconds through a Subscription to the separate AMQS_Timer Thing. The first set of values is good, in that it does NOT correspond to a low grease condition. The second set of values is bad, in that it DOES correspond to a low grease condition. The values swap whenever the ten-second timer fires.
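If you would like to confirm the simulator's behavior from outside Composer, a Thing Property can be read over the standard ThingWorx REST API. A minimal sketch, assuming the simulator's InfoTable Property is literally named InfoTable (check the actual Property name under Properties and Alerts) and that you have a valid application key:

  # Read the simulator's current frequency-band values (Property name is an assumption)
  curl -H "appKey: <your-app-key>" \
       -H "Accept: application/json" \
       https://<your-server>/Thingworx/Things/AMQS_Thing/Properties/InfoTable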
View Mashup

We have created a sample Mashup to make it easier to visualize the data, since analyzing data values in the Thing Properties is cumbersome. Follow these steps to access the Mashup.
1. On the ThingWorx Composer Browse > All tab, click AMQS_Mashup.
2. At the top, click View Mashup.
3. Observe the Mashup for at least ten seconds. You'll see the values in the Grid Advanced Widget change from one set to another at each ten-second interval.
NOTE: These values correspond to data entries from the vibration dataset we utilized in the prerequisite Analytics Builder guide. Specifically, the good entry is number 20,400, while the bad entry is number 20,600. You can see in the dataset that 20,400 corresponds to a no low grease condition, while 20,600 corresponds to a yes, low grease condition.

Step 3: Configure Provider

In ThingWorx terminology, an Analysis Provider is a mathematical analysis engine. Analytics Manager can use a variety of Providers, such as Excel or Mathcad. In this quickstart, we use the built-in AnalyticsServerConnector, an Analysis Provider that has been specifically created to work seamlessly in Analytics Manager and to use Builder Models.
1. From the ThingWorx Composer Analytics tab, click ANALYTICS MANAGER > Analysis Providers, New....
2. In the Provider Name field, type Vibration_Provider.
3. In the Connector field, search for and select TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
4. Leave the rest of the options at default and click Save.

Step 4: Publish Analysis Model

Once you have configured an Analysis Provider, you can publish Models from Analytics Builder to Analytics Manager.
1. On the ThingWorx Composer Analytics tab, click ANALYTICS BUILDER > Models.
2. Select vibration_model_s1_only and click Publish.
3. On the Publish Model pop-up, click Yes. A new browser tab will open with the Analytics Manager's Analysis Models menu.
4. Close that new browser tab, and instead click Analytics Manager > Analysis Models in the ThingWorx Composer Analytics navigation. This is the same interface as the auto-opened tab which you closed.

False Test

It is recommended to test the published Model with manually-entered sample data prior to enabling it for automatic analysis. For this example, we will use entry 20,400 from the vibration dataset. If the Model is working correctly, then it will return a no low grease condition.
1. In Analysis Models, select the model you just published and click View.
2. Click Test.
3. In the causalTechnique field, type FULL_RANGE.
4. In the goalField field, type low_grease.
5. For _s1_fb1 through _s1_fb5, enter the following values:

Data Row Name    Data Row Value
_s1_fb1          161
_s1_fb2          180
_s1_fb3          190
_s1_fb4          176
_s1_fb5          193

6. Click Add Row.
7. Select the newly-added row in the top-right, then click Set Parent Row.
8. Click Submit Job.
9. Select the top entry in the bottom-left Results Data Shape field.
10. Click Refresh Job. Note that _low_grease is false and _low_grease_mo is well below 0.5 (the threshold for a true prediction).

You have now successfully submitted your first Analytics Manager job and received a result from the Analysis Provider. The Provider took the published Model, used the no low grease data as input, and provided a correct analysis of false for our prediction.

True Test

Now, let's test a true condition where our engine grease IS LOW, and confirm that Analytics Manager returns a correct prediction.
1. In the top-right, select the false data row we've already entered and click Delete Row.
2. For _s1_fb1 through _s1_fb5, change to the following values:

Data Row Name    Data Row Value
_s1_fb1          182
_s1_fb2          140
_s1_fb3          177
_s1_fb4          154
_s1_fb5          176

3. Click Submit Job.
4. Select the top entry in the bottom-left Results Data Shape field.
5. Click Refresh Job. Note that _low_grease is true and _low_grease_mo is above 0.5.

You've now manually submitted another job and received a predictive score. Just like dataset entry 20,600, Analytics Manager predicts that the second set of s1_fb1 through s1_fb5 vibration frequencies corresponds to a low grease condition, which needs to be addressed before the engine suffers a catastrophic failure.

Enable Model

Since both the false and true predictions made by the Model match our dataset, let's now enable the Model so that it can be used for automatic predictions in the future.
1. In the top-left, expand the Actions... drop-down box.
2. Select Enable.

Click here to view Part 2 of this guide.
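Once the Model is enabled, your application will typically submit scoring jobs programmatically rather than through the Test dialog. As a rough sketch of that round trip, assuming a hypothetical wrapper service SubmitVibrationJob on a hypothetical helper Thing AMQS_ScoringHelper that you would implement yourself (Analytics Manager's own Thing and service names vary by version, so none of the names below are shipped APIs):

  # Submit the "true test" frequency bands for scoring via a hypothetical wrapper service
  curl -X POST \
    -H "appKey: <your-app-key>" \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -d '{"s1_fb1": 182, "s1_fb2": 140, "s1_fb3": 177, "s1_fb4": 154, "s1_fb5": 176}' \
    https://<your-server>/Thingworx/Things/AMQS_ScoringHelper/Services/SubmitVibrationJob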
View full tip