IoT Tips

Overview: Time-series predictive models generated by ThingWorx Analytics Server in Analytics Manager will have additional columns in the DataShape generated by ThingPredictor. These columns, known as "transformation fields", are used for internal processing and do not need to be included in the DataShape. There is no need to map all these additional fields, since they are handled internally by ThingPredictor. There is, however, one additional step the user must take, detailed below. Step to import: Edit the DataShape generated by ThingPredictor to match the format of the data that was provided during the model training process. In other words, remove all the transformation fields from the DataShape.
View full tip
In this blog we will have a look at the installation of the ThingWorx Analytics Builder extension. This is meant as a guideline; make sure to check the Help Center for your release, as steps vary between versions. The installation is divided into 3 parts:
1. Introduction and import of the extension into the ThingWorx Platform (Video Link: 1568)
2. Configuration of the extension (Video Link: 1572). Note: For release 8.1, the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1 between 00:12 and 00:40 for the up-to-date menu selection.
3. Installation of the UploadThing module (Video Link: 1573). Note: this step no longer applies as of ThingWorx Analytics 8.1.
Useful links: PTC Download page for ThingWorx Analytics | PTC Reference Document page for ThingWorx Analytics | How to copy files from Windows to Linux?
View full tip
ThingWorx 8.3 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities, and ThingWorx Foundation, which includes Connection Server and Edge capabilities.

Highlights of the release include:

ThingWorx Foundation
- Next Generation Composer: Now the default admin and developer interface, with full feature parity with legacy Composer. New capabilities for user and group administration, authorization and permissions, export, monitoring, and logging (more in the Help Center). Localization support for German and French.
- Mashup Builder: jQuery 3 upgrade. Grid Advanced Extension now supports cell editing and footers.
- Platform: Active Directory (AD) integration enhancements for larger AD forests and user extension field mapping. Upgrade-in-place enhancements for Java SDK developers.
- Developer Enablement:
  - Service Utilization Statistics: Capture usage statistics, such as the time taken to execute a ThingWorx service and the number of times a service runs, powered by the new and efficient Utilization Subsystem.
  - ThingWorx Support Package tool: Collect ThingWorx system data such as ESAPI configuration, ThingworxStorage logs, licensing, and JVM information to better diagnose system issues.
- Administrator Password and Password Length: New installations of ThingWorx will be required to supply an initial Administrator password of the installer's choice. That password must be supplied via a new entry in the platform-settings.json file. After the initial installation, the Administrator password should be changed to a strong password to be used going forward. As a step toward industry best practices, the Administrator password and all new passwords must be at least 10 characters. When upgrading to 8.3, passwords from older versions of the platform will not need to be modified, but any new passwords being created will need to be at least 10 characters long. See the installation instructions for complete details.

ThingWorx Analytics
- New Descriptive Services: Core statistics (min, max, deviation, etc.), data distribution (binning), confidence intervals, and other useful calculations. Frequency analysis and transformation (via fast Fourier transform) for troubleshooting use cases and predictive analytics applications. Improves users' ability to apply logic and derive insights from streaming data without constructing complex models or accessing machine learning, and enables platform developers to easily process platform data in their applications and prepare the data for predictions.
- Statistical Process Control (SPC) Services: Industry-standard calculations that allow IoT developers to implement SPC "control chart rules" in their applications. Useful in manufacturing and in monitoring equipment and processes. Supports a wide assortment of rules, including number of points continuously above/below a range, in and out of range, increasing or decreasing trends, and alternating directions.
- Analytics Workbench: Bundles the two Analytics interfaces (Analytics Builder and Manager) into a new Analytics section in Composer.
- Predictive Analytics Improvements: Reduces overall install and administration complexity. Improves handling of time series data when used in predictive scoring. Includes a new learner, Support Vector Machines, enhancing the platform's utility in building Boolean predictions. Includes a new ensemble method, Majority Vote, that improves generated model accuracy. Provides redundancy filtering, which can optionally remove redundant information to improve explanatory analytics (Signals) and predictive model training. Now supports time series lookahead configuration, simplifying this type of prediction. Replaces ThingPredictor predictive scoring in Analytics Manager with native Analytics Server scoring, improving scalability of concurrent jobs.

Axeda Compatibility Package
- IDM Connector Support: ACP v1.1.0 introduces the IDM Connector, which enables Axeda customers to connect their Axeda IDM agents to the ThingWorx platform. The IDM Connector provides support for registration requests, property updates, faults, events, file uploads and downloads.
- Axeda ThingWorx Entity Exporter Update: ACP v1.1.0 also includes an updated version of the Axeda-ThingWorx Entity Exporter (ATEE), which now supports exporting Axeda IDM assets from the Axeda application into a format that can be imported into the ThingWorx Platform.
- eMessage Connector Improvements: ACP v1.1.0 includes support for instruction-based Software Content Management packages for the eMessage Connector, which allows you to download files, execute instructions, and optionally restart the agent. The Axeda Compatibility Extension (ACE) has new entities to support the IDM Connector and SCM for the eMessage Connector. Finally, updated versions of the Axeda Compatibility Extensions (ACE) and the Connection Services Extension (CSE) are included in ACP v1.1.0 and provide an improved workflow for granting permissions to the eMessage and IDM Connectors.

ThingWorx Extension Updates
- Websocket Tunnel Extension: Updated for 8.3 to support the upgrade to jQuery 3.
- Grid Advanced 4.0.0 comes with 2 key features: Editing — cell editing is now supported for all base types (the previous version supported only Boolean editing). Footers — a footer section can now be added to the Grid to display rolled-up Grid totals; you can perform client-side calculations like count, min, max and average, and it includes support for custom functions. Note: Grid Advanced 4.0.0 only supports ThingWorx 8.3 and above.
- Custom Charts 3.0.1: 12 bug fixes.
- Google Maps 3.0.1: General bug fixes.

ThingWorx Utilities
With the 8.3 release, ThingWorx Utilities functionality is being repackaged into ThingWorx Foundation and ThingWorx Asset Advisor. ThingWorx Workflow will now be available with Foundation. The functionality from the Asset and Alert Management Utilities will be delivered in ThingWorx Asset Advisor. ThingWorx Software Content Management capabilities will continue to be available for customers to manage the delivery of software to their connected products. The "Utilities" naming is being phased out of the ThingWorx Platform packaging, but the key functionality formerly described as ThingWorx Utilities continues to be delivered with version 8.3.

Reference Documents and Resources
ThingWorx 8.3 Reference Documents | ThingWorx Analytics 8.3 Reference Documents | ThingWorx Platform 8.3 Release Notes | ThingWorx Platform Help Center | ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center | ThingWorx Connection Services Help Center | ThingWorx Analytics Help Center | ThingWorx Industrial Connectivity Help Center | ThingWorx Utilities Help Center | ThingWorx Utilities Installation Guide | ThingWorx eSupport Portal | ThingWorx Developer Portal | PTC Marketplace

The following items will be available for download from the PTC Software Download site on June 8, 2018:
ThingWorx Platform – Select Release 8.3
ThingWorx Utilities – Select Release 8.3
ThingWorx Analytics – Select Release 8.3
ThingWorx Extensions – Select individual extensions for download; will be available with the next Marketplace refresh.
View full tip
Continuing our series on troubleshooting ThingWorx Analytics installations, in this IoT Tech Tip we will cover two items that have been appearing for many users.

Error 1069 encountered with native Windows installation of ThingWorx Analytics 8.2

In some instances, after a user successfully installs ThingWorx Analytics Server (TWAS) on a Windows Server operating system, TWAS will report Error 1069: The service did not start due to logon failure. This can occur with any individual service created by the installation; the following fix should address the issue.

Primary reason this happens: This error can be encountered when the user provides incorrect credentials for associating the services with an account during installation. In TWAS 8.2, there is a utility that enables the user to change the user associated with the services. It is important that the user provides the password for the user account on Windows, and not the user/password combination for the ThingWorx Foundation Platform Server.

Steps to fix the issue

Solution 1: Open a Command Prompt as Administrator: via the Start Menu, type CMD, then right-click cmd.exe and select Run As Administrator. In the elevated command prompt, change to the ThingWorxAnalyticsServer/bin directory; for example, with the default installation path: cd "C:\Program Files (x86)\ThingWorxAnalyticsServer\bin". Then execute changeServiceUserAccount.bat <username>, for example: changeServiceUserAccount.bat user1. You will be prompted to change the password for the user.

Solution 2: If Solution 1 does not resolve the issue, you can manually change the Log On properties for each of the services. The changeServiceUserAccount.bat script does this automatically, but on occasion the manual approach may be needed. Open the Control Panel and navigate to Services, for example: Control Panel > All Control Panel Items > Administrative Tools. Right-click each individual service, go to Properties > Log On tab, and enter the account name and password for the local account. Note: the Local System account will not resolve this issue.

This issue was resolved in the ThingWorx Analytics Server 8.3 release, where all services are associated with the Network Service account. More information can be found in this Knowledge Article.

Uploading of a dataset hangs or does not complete in ThingWorx Analytics 8.3

On occasion, after a fresh installation of ThingWorx Analytics Server 8.3 on a Windows Server operating system, a dataset will not complete its upload. Typically no error message is displayed, and the upload wizard UI will hang on the upload progress after:

Creating copy of Configuration File...
Submitting Create Dataset request...
Creating copy of Data File...

Primary reason this happens: This is caused by the twas-zookeeper service being stuck in a PAUSED state, meaning that twas-zookeeper did not start after installation.

Steps to fix the issue: Double-check that the JAVA_HOME variable was defined as a system variable. In the ThingWorx Analytics installation guide, pages 12-14 outline the required prerequisite steps. You can change this in Control Panel > System > Advanced Settings > Environment Variables, and add a new variable named JAVA_HOME under System Variables. Its value should be the location of your Java deployment, typically C:\Program Files\Java\<jre or jdk>_<version number>.

More information can be found in this Knowledge Article.
View full tip
In this part of the troubleshooting blog series, we will review how to restart the individual services essential to the ThingWorx Analytics application within the Virtual Machine Appliance.

Services have stopped, and I cannot run my Analytics jobs! In some cases, users have encountered issues where a system or process has halted and they are unable to proceed with their tasks. This can be due to a myriad of reasons, ranging from OS hangs to memory issues with certain components. As we covered earlier in Part II, the ThingWorx Analytics application is installed on a CentOS (Linux) operating system. As with most Linux operating systems, you have the ability to manually check and restart processes as needed.

Steps to restart services

With how the application is installed and configured, the services should start automatically when you boot up the VM. Verify that the Appliance is functional by running your desired API call. If a system is not functioning as expected, you will receive an error in the output when you POST an API call. Some errors are very specific, and you can search the Knowledge Base for existing Knowledge Articles that may solve the issue. For error messages that do not have an existing article, you may want to attempt the following.

Method 1: If you are encountering issues and are unsure which process is not working correctly, we recommend a full application restart, i.e. restarting the Virtual Machine Appliance from the command-line terminal. Run the following as root or using sudo for a graceful restart: sudo reboot. This restarts the virtual machine safely; once it is back up, run your API calls to verify functionality. This should resolve any incremental issues you may have faced.

Method 2: If you want to restart an individual service, a particular start order needs to be followed to make sure the application operates as expected. The first step is to check which services are not running, using the following command: service --status-all. The services you are looking for are: Zookeeper, PostgreSQL Server, GridWorker(s), and Tomcat. If a particular service is not on the running list, start it manually with the service start command: service [name of service] start, e.g. service tomcat start. You may be prompted for the root password. You can verify that the services are operating by running the status check command described above.

If you need to restart all services, the specific start order is important, as there are dependencies, such as PostgreSQL being required by the GridWorker(s) and Tomcat. The start order is: 1. Zookeeper, 2. PostgreSQL Server, 3. GridWorker(s), 4. Tomcat. After completing the restart and verifying that the services are running, run your desired API call.
View full tip
Thing Subscriptions. This post is intended for novice ThingWorx users who want to understand what a Thing Subscription is and the overall purpose of using Thing Subscriptions.

What is a Thing Subscription? A Thing Subscription is a script (JavaScript) that is called each time an event occurs. Events signal property states that are of interest to end users (e.g. temperature) and are therefore the triggers that kick off functionality in a Thing Subscription when action is needed. Events can, for example, be triggered by an Alert that detects a change or an anomaly in property values. The Thing Subscription is explicitly linked to an event, and when the event fires the data is passed to the subscriber.

Why use a Thing Subscription? Imagine a machine running 24 hours a day, 7 days a week, under human supervision. If a pump temperature exceeds the accepted value, it needs to be regulated by the manufacturing department. But no one in the department knows when the temperature will exceed the accepted value or drop suddenly; therefore the machine is always sporadically, physically supervised by humans, which leads to heavy costs for the manufacturer. With a Thing Subscription, a notification email can be sent directly to the department manager, who acts on the notification.

What a Thing Subscription must have: A Thing Subscription must define a rule that is executed when an event occurs. The definition of the rule may accommodate any appropriate business logic.

Thing Subscription example process: In this scenario a Thing Subscription uses a predictive analytics model to detect data changes or anomalous values flowing through a Thing property. Based on historical data, including failure information, a predictive analytics model analyzes run-time values from individual Things/properties on the analytics server. The model learns patterns associated with past failures; when it predicts a failure/event based on those patterns, an action is fired via a Thing Subscription. That action could be for ThingWorx to create a service ticket or send a notification email to the service department.

Example of a simple Thing Subscription set-up using a built-in ThingWorx alert instead of an analytics model: The Thing Subscription below sends a notification email when temperature exceeds the values defined in the ThingWorx alert configuration. Prerequisite: a mail server extension must be imported into ThingWorx Composer; this enables the service department to receive the email notification when an event has occurred. The extension can be downloaded from the Marketplace.
1. Create a Thing with MailServer[i] as the Base Thing Template.
2. Create a new Thing and add Properties, together with an alert that is triggered when the value exceeds a user-defined temperature.
3. Enable the Thing Subscription by selecting Subscriptions and clicking +Add. Make sure to mark the Enabled checkbox. Select your event name and your property name. On the right side of the screen, enter the script/function that will call the ThingWorx email service to create the email notification. Select Done and Save.
4. Enable email notification by selecting Services. Provide a name, select Me/Entities, mark Other entity, and find the Thing that uses MailServer as its Thing Template.
5. Then find the SendMessage snippet/script and fill out the snippet with your personal information.
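As an illustration of step 3, a minimal subscription script might look like the sketch below. The Thing name MailServerThing, the email addresses, and the temperature property are hypothetical placeholders, and the SendMessage parameter names can vary by mail extension version, so check the service definition in Composer for your installation.

```javascript
// Minimal sketch of a subscription script (assumes a Thing named
// "MailServerThing" based on the MailServer Thing Template, and an
// Alert event on a "temperature" property -- adjust names to your setup).
// "me" is the Thing owning the subscription; "eventTime" is provided
// to subscription scripts by the platform.
Things["MailServerThing"].SendMessage({
    from: "noreply@example.com",   // sender address (placeholder)
    to: "manager@example.com",     // recipient (placeholder)
    subject: "Temperature alert on " + me.name,
    body: "Temperature exceeded the accepted value. Current reading: " +
          me.temperature + " at " + eventTime
});
```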
[i] View this blog for more information on how to install the MailServer
View full tip
PLEASE NOTE: DataConnect has been deprecated and is no longer in use or supported.

We are regularly asked in the community how to send data from the ThingWorx platform to ThingWorx Analytics in order to perform analytics computation. There are different ways to achieve this, and the right one depends on the business needs. If the analytics need is anomaly detection, the best way forward is to use ThingWatcher, without the ThingWorx Analytics server. The ThingWatcher Help Center is an excellent place to start, and a quick start-up program can be found in this blog. If the requirement is a full-blown analytics computation, then sending data to ThingWorx Analytics is required. This can be achieved by:
- Using ThingWorx DataConnect, which is what this blog covers.
- Using a custom implementation. I will be very pleased to get your feedback on your experience implementing a custom solution, as this could give good ideas to others too.

In this blog we use the example of a smart tractor in ThingWorx where we collect data points on: speed; distance covered since the last tyre change; tyre pressure; amount of gum left on the tyre; and tyre size. From an analytics perspective, the gum left on the tyre is the goal we want to analyse in order to know when the tyre needs changing.

We are going to cover the following: Background, Workflow, DataConnect configuration, ThingWorx configuration, Data Analysis Definition configuration, Data Analysis Definition execution, and Demo files.

Background: For people not familiar with ThingWorx Analytics, it is important to know that ThingWorx Analytics only accepts a single data file in .csv format. Each column of the .csv file represents a feature that may have an impact on the goal to analyse. For example, in the case of tyre wear, the distance covered, the speed, the pressure and the tyre size are our features. The goal is also included as a column in this .csv file. So any solution sending data to ThingWorx Analytics needs to prepare such a .csv file; DataConnect performs this activity, in addition to some transformations.

Workflow:
1. Decide on the properties of the Thing to be collected that are relevant to the analysis.
2. Create service(s) that collect those properties.
3. Define a Data Analysis Definition (DAD) object in ThingWorx. The DAD uses a Data Shape to define each feature that is to be collected, and sends them to ThingWorx Analytics. Part of the collection process requires the services created in step 2.
4. Upon execution, the DAD creates one skinny .csv file per feature and sends those skinny .csv files to DataConnect. In the case of our example the DAD creates speed.csv, distance.csv, pressure.csv, gumleft.csv, tyresize.csv and id.csv.
5. DataConnect processes those skinny .csv files to create a final single .csv file that contains all these features. During processing, DataConnect performs some transformation and synchronisation of the different skinny .csv files.
6. The resulting dataset .csv file is sent to the ThingWorx Analytics Server, where it can be used like any other dataset file.

DataConnect configuration: As seen in this workflow, a ThingWorx server, a DataConnect server and a ThingWorx Analytics server need to be installed and configured. Thankfully, the installation of DataConnect is rather simple and well described in the ThingWorx DataConnect User's Guide. Below I have provided a sample of a working dataconnect.conf file for reference, as this is one place where syntax can cause a problem.

ThingWorx configuration: The Platform Subsystem needs to be configured to point to the DataConnect server. This is done under SYSTEM > Subsystems > PlatformSubsystem.

DAD configuration: The most critical part of the process is to properly configure the DAD, as this dictates the format and values filled into the skinny .csv files for the specific features. The first step is to create a Data Shape with as many fields as features/properties collected. Note that one field must be defined as the primary key; this field is the one that uniquely identifies the Thing (more on this later). We can then create the DAD using this Data Shape.

For each feature, a datasource needs to be defined to tell the DAD how to collect the values for the skinny .csv files. This is where custom services are usually needed. Indeed, Out Of The Box (OOTB) services such as QueryNumberPropertyHistory help to collect logged values, but the id returned by those services is continuously incremented. This does not work for the DAD: the id returned by each service needs to be what uniquely identifies the Thing, and it needs to be the same for all records for this Thing across the different skinny .csv files. It is this field that DataConnect uses to merge all the skinny files into one master dataset .csv file. A custom service can make use of the OOTB services, but it needs to override the id value. For example, a service can use QueryNumberPropertyHistory to retrieve the logged values and timestamps and then override the id with the Thing's name (a hedged sketch of such a service appears at the end of this post).

The returned values of the service then need to be mapped in the DAD to indicate which output corresponds to the property's value, the Thing id, and the timestamp (if needed). This is done through the Edit Datasource window (by clicking the Add Datasource link, or the datasource itself if already defined, in the Define Feature window). On the left-hand side, we define the origin of the datasource; here we have selected the service GetHistory from the Thing Template smartTractor. On the right-hand side, we define the mapping between the service's outputs and the skinny .csv file columns. A skinny .csv file has 1 to 3 columns, as follows:
- One column for the ID. Simply tick the radio button corresponding to the service output that represents the ID.
- One column representing the value of the Thing property. This is indicated by selecting the link icon on the left-hand side in front of the returned data that represents the value (in our example the output data from the service is named value).
- One column representing the timestamp. This is only needed when a property is time-dependent (for example, a time series dataset). In the example the property is Distance: the distance covered by the tyre does depend on time; however, we would not have a timestamp for the TyreSize property, as the wheel size remains the same.

How many columns should we have (and therefore how many outputs should our service have)?
- The .csv file representing the ID has one column, so the service collecting the ID returns only one output (the Thing name in our smartTractor example — not shown here but available in the download).
- Properties that are not time-bound have a .csv file with 2 columns: one for the ID and one for the value of the property.
- Properties that are time-bound have 3 columns: one for the ID, one for the value and one for the timestamp. The service therefore has 3 outputs.

Additionally, the inputs for the service may need to be configured by clicking on the icon. Once the datasources are configured, configure the Time Sampling Interval in the General Information tab. This sampling interval is used by DataConnect to synchronize all the skinny .csv files; see the Help Center for a good explanation of this.

DAD execution: Once the above configuration is done, the DAD can be executed to collect property values already logged on the ThingWorx platform. Select Execution Settings in the DAD and enter the time range for property collection. A dataset with the same name as the DAD is then created in DataConnect as well as in the ThingWorx Analytics Server. The dataset can then be processed like any other dataset inside ThingWorx Analytics.

Demo files: For convenience I have also attached a ThingWorx entities export that can be imported into a ThingWorx platform, for you to take a closer look at the setup discussed in this blog. Also attached is a small simulator to populate the properties of the Tractor_1 Thing. The usage is: java -jar TWXTyreSimulatorClient.jar hostname_or_IP port AppKey. For example: java -jar TWXTyreSimulatorClient.jar 192.168.56.106 8080 d82510b7-5538-449c-af13-0bb60e01a129

Again, feel free to share your experience in the comments below as it will be very interesting for all of us. Thank you.
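As referenced above, here is a hedged sketch of what such an id-overriding datasource service could look like. The Data Shape name HistoryDataShape is a hypothetical placeholder, and the output column names of QueryNumberPropertyHistory should be checked against its result Data Shape in your release; the actual GetHistory service is included in the attached entities export.

```javascript
// Hedged sketch of a DAD datasource service that overrides the id with
// the Thing's name (assumes a logged numeric property "Distance" and a
// hypothetical Data Shape "HistoryDataShape" with id, value, timestamp).
var history = me.QueryNumberPropertyHistory({
    propertyName: "Distance",
    maxItems: 10000
});

var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "History",
    dataShapeName: "HistoryDataShape" // placeholder Data Shape
});

// Copy each logged row, replacing the auto-incremented id with the
// Thing name so DataConnect can merge the skinny csv files per Thing.
for (var i = 0; i < history.rows.length; i++) {
    var row = history.rows[i];
    result.AddRow({
        id: me.name,             // same id for all records of this Thing
        value: row.value,        // the logged property value
        timestamp: row.timestamp // needed for time-dependent properties
    });
}

result; // service output (base type: INFOTABLE)
```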
View full tip
This video is Module 11: ThingWorx Analytics Mashup Exercise of the ThingWorx Analytics Training videos. It shows you how to create a ThingWorx project and populate it with entities that collectively comprise a functioning application. 
View full tip
Build a Predictive Analytics Model Guide — Part 1

Overview: This project will introduce ThingWorx Analytics Builder. Following the steps in this guide, you will create an analytical model and then refine it based on further information from the Analytics platform. We will teach you how to determine whether or not a model is accurate and how you can optimize both your data inputs and the model itself. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete both parts of this guide is 60 minutes.

Step 1: Scenario. MotorCo manufactures, sells, and services commercial motors. Recently, MotorCo has been developing a new motor, and they already have a working prototype. However, they've noticed that the motor has a chance to FAIL CATASTROPHICALLY if it's not properly serviced to replace lost grease on a key moving part. In order to prevent this type of failure in the field, MotorCo has decided to instrument their motors with sensors which record vibration. The hope is that these sensors can detect certain vibrations which indicate required maintenance before a failure occurs. MotorCo has decided to utilize ThingWorx Analytics to scan their prototype data for any insights they can gain to address this issue. Challenges:
- There is a known failure mode of the prototype, but it is not currently possible to predict when the failure might happen.
- Until this failure mode is mitigated, the prototype cannot move into full production.
- While connected data is being monitored, what to do with that data is not currently known.
- The additional sensors add to the overall cost of the product.

Step 2: Settings. If you manually installed the Analytics extension, you need to complete the steps below to finalize the connection.
1. On the ThingWorx Composer Analytics tab, click ANALYTICS BUILDER > Settings.
2. In the Analytics Server field, search for and select your Server Thing Entity, such as AnalyticsServer_Thing or localhost-AnalyticsServer.
3. Click Verify Configuration.
4. Click Save. WARNING: You MUST CLICK SAVE or the configuration will be lost when you move to Data in the next step.

Step 3: Upload Data. We will now load the data that ThingWorx Analytics will use to generate a model.
1. Download the attached analytics_vibration.zip to your computer.
2. Unzip the analytics_vibration.zip file to access the vibration_data_and_header.csv and vibration_metadata.json files.
3. On the left, click ANALYTICS BUILDER > Data.
4. Under Datasets, click New.... The New Dataset pop-up will open.
5. In the Dataset Name field, enter vibration_dataset.
6. In the File Containing Dataset Data section, search for and select vibration_data_and_header.csv.
7. In the File Containing Dataset Field Configuration section, search for and select vibration_metadata.json.
8. Click Submit. Note that the upload will take a variable amount of time to complete, depending on the size of your dataset.

Step 4: Signals. The Signals section of ThingWorx Analytics looks for the most statistically correlated single field in the dataset relative to your selected goal. This doesn't necessarily indicate the cause of your goal, whether maximizing or minimizing; it just means that the dataset indicates that this single field happens to correlate with the goal that you desire.
1. On the left, click ANALYTICS BUILDER > Signals.
2. At the top, click New.... A New Signal pop-up will open.
3. In the Signal Name field, enter vibration_signal.
4. In the Dataset field, select vibration_dataset.
5. Leave the Goal field set to the default of low_grease.
6. Leave the Filter field set to the default of all_data.
7. Leave the Excluded Fields from Signal field set to the default of empty.
8. Click Submit.
9. Wait ~30 seconds for the Signal State to change to COMPLETED. The results will be displayed at the bottom.

The results show that the five frequency bands for Sensor 1 are the most highly correlated with determining our goal of detecting a low grease condition. For Sensor 2, only bands one and four seem to be at all related, and bands two, three, and five are hardly related at all.

Click here to view Part 2 of this guide.
View full tip
Anomaly Detection (also known as Outlier Detection) is a set of techniques that identify unusual occurrences in data. The premise is that such occurrences may be early indicators of future negative events (e.g. failure of assets or production lines). Data science algorithms for Anomaly Detection include both supervised and unsupervised methods. In unsupervised Anomaly Detection, the algorithms assume that most of the data points are "normal" (e.g. normal operation of the asset) and look for data points that are most dissimilar to the remainder of the dataset. Supervised Anomaly Detection requires a labeled set of anomalies, in which case predictive algorithms can be applied directly to this data.

ThingWorx employs a number of algorithms in support of Anomaly Detection:
- Simple threshold alerts. These are easy to set up on Thing properties but require a domain expert to provide the thresholds. The alert then fires automatically when the value of the monitored property goes outside the predefined range of values, often seen as the "bad" side of the threshold.
- Statistical Process Control (SPC). This can be implemented using ThingWorx Analytics Property Transforms. Most companies use a subset of SPC charts and rules to monitor production processes. Examples include the X-bar and R charts, as well as the Western Electric rules (e.g. one point outside the average +/- three sigma range). Explainable and widely accepted, SPC can also provide an earlier warning system compared to simple threshold alerts, in that it captures more complex patterns.
- Clustering. Using ThingWorx Analytics, one can build an optimal clustering for the available data points. Under the assumption that the data represents mostly normal operation and that there is no significant pre-defined pattern of anomalies forming their own cluster, one can identify outliers by looking at the distance between points and their corresponding cluster centers. Points that are very far from their corresponding cluster center can be labeled as anomalies.
- Semi-supervised Anomaly Alerting (formerly known as ThingWatcher). This functionality identifies single-property time series behavior that is statistically different from what was seen in a finite window of "known normal operation". As such, it does not identify a "bad" event, or even a precursor to a "bad" event; rather, it points the end user to further investigate a situation which may lead to a "bad" event. Anomaly Alerts can be easily set up like any other Alert on a Thing property, and multiple Anomaly Alerts can be set up on the same or different properties of a Thing. Behind the scenes, the platform builds a time series neural network model of the known normal operation data, which is then applied to incoming data; if the errors are significantly different from those on known normal operation over a period of time, an Anomaly Alert is produced.

The techniques mentioned above are either unsupervised or semi-supervised. If the dataset contains labeled anomalies (e.g. asset faults, or suspicious patterns) then supervised predictive techniques (such as regression, decision trees, neural nets, or ensemble methods) are available to model the relationships between such anomalies (dependent variables) and various variables of interest (independent variables). These models can then be employed to monitor assets or production lines for upcoming anomalies.
In many real-world use cases, anomalies are relatively rare; care needs to be taken when building such predictive models. Techniques such as up-sampling can prove beneficial in such situations.   What constitutes an Anomaly depends on the observed data and the current context. If only few data points are initially available, then it is possible that a lot of future data is predicted as an Anomaly, despite being normal operation. Also, in terms of context, if an Anomaly Detection is trained on a connected product in the Winter, it is likely to say all Summer operation is anomalous. This can be tackled by having multiple anomaly detection alerts implemented, one for each different context of operation (e.g. season, recipe being manufactured, operation done by a robot).   Another consideration is lead time vs explainability. For example, when a threshold alert fires, it is obvious why, but it may not be early enough to take action. As more advanced methods are employed, more complex patterns can be captured, hence more lead time, but typically at the expense of explainability. For example, semi supervised Anomaly Alerting (formerly known as ThingWatcher) uses time windows, aggregations, and derivatives of up to the third order, resulting in significantly less explainability when an Anomaly is presented to the end user.   Choosing the appropriate Anomaly Detection technique is use case dependent, balancing the desired lead time and explainability. If historical data on failures/anomalies is not available, a good place to start is Statistical Process Control, as it provides a balanced approach between the two dimensions, in addition to being already in use across many manufacturing companies.
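To make the SPC idea concrete, here is a minimal JavaScript sketch of the simplest Western Electric rule mentioned above (one point outside the average +/- three sigma range). The function names and the array-based inputs are illustrative only, not a ThingWorx API; in practice the limits would come from a known in-control baseline period, exactly as SPC prescribes.

```javascript
// Hedged sketch: Western Electric rule 1 -- flag any point outside
// mean +/- 3 sigma. Control limits are computed from a baseline window
// of known in-control readings, as in standard SPC practice.
function computeControlLimits(baseline) {
    var sum = 0;
    for (var i = 0; i < baseline.length; i++) {
        sum += baseline[i];
    }
    var mean = sum / baseline.length;

    var sumSq = 0;
    for (var j = 0; j < baseline.length; j++) {
        sumSq += (baseline[j] - mean) * (baseline[j] - mean);
    }
    var sigma = Math.sqrt(sumSq / baseline.length);

    return { lower: mean - 3 * sigma, upper: mean + 3 * sigma };
}

function findRule1Violations(readings, limits) {
    var violations = [];
    for (var i = 0; i < readings.length; i++) {
        if (readings[i] < limits.lower || readings[i] > limits.upper) {
            violations.push(i); // index of the out-of-control point
        }
    }
    return violations;
}

// Example: limits from an in-control baseline, then test new readings
var limits = computeControlLimits([5.1, 5.0, 4.9, 5.2, 5.0, 5.1, 4.8, 5.3]);
var flagged = findRule1Violations([5.0, 5.2, 9.8, 5.1], limits); // -> [2]
```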
View full tip
This video goes through the steps required to use the Creo Insight extension:
- Download and install the required extension
- Set the required config.pro options
- Create a provider in Analytics Manager
- Publish a sensor from Creo
- Create an analysis event in Analysis Manager
- Retrieve sensor values from ThingWorx in Creo
See also: https://www.ptc.com/en/support/article?n=CS277514 for a written version of these steps, and the Creo Help Center.
View full tip
Best Practices in Data Preparation for ThingWorx Analytics

Data preparation is an important phase in the process of data analysis when using ThingWorx Analytics. Basically, it is getting your data from the raw form you might have gathered through your operational system or data warehouse into a form ready to be analyzed. In this document we will be using "Talend Data Preparation Free Desktop" as a tool to illustrate some examples of the data preparation process. The tool can be downloaded from the following link: https://www.talend.com/products/data-preparation (you could also choose to use another tool). We will also use the BeanPro dataset in our examples and illustrations.

Checking data formats: The analysis starts with a raw data file. The user needs to make sure that the data files can be read. Raw data files come in many different shapes and sizes; for example, spreadsheet data is formatted differently than web data or sensor-collected data, and so forth. ThingWorx Analytics accepts data in CSV format, so the retrieved data needs to be converted into that format before it can be uploaded. After that is done, the user needs to actually look at what each field contains. For example, a field listed as a character field could actually contain non-character data.

Verify data types: Verify the data types for each feature or field in the dataset used. All data falls into one of four categories that affect what sort of analytics can be applied to it:
- Nominal data is essentially just a name or an identifier.
- Ordinal data puts records into order from lowest to highest.
- Interval data represents values where the differences between them are comparable.
- Ratio data is like interval data except that it also allows for a value of 0.
It's important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example, when doing predictive analytics, ThingWorx Analytics will not accept a nominal data field as the goal; the goal feature must be of a numerical, non-nominal type, so this needs to be confirmed at an early stage.

Creating a data dictionary: A data dictionary is a metadata description of the features included in the dataset. When displayed, it consists of a table with 3 columns:
- The first column is a label: the name of a feature, or a combination of multiple (up to 3) features, which are fields in the dataset. It points to "fieldName" in the configuration JSON file.
- The second column is the data type attached to the label (integer, string, datetime, ...). It points to "dataType" in the configuration JSON file.
- The third column is a description of the feature related to the label in the first column. It points to "description" in the configuration JSON file.
In the context of ThingWorx Analytics, this metadata is represented by a data configuration JSON file that is uploaded before the dataset itself (a minimal sketch of such a file appears at the end of this article).

Verify data accuracy: Once it is confirmed that the data is formatted in a way acceptable to ThingWorx Analytics, the user still needs to make sure it's accurate and that it makes sense. This step requires some knowledge of the subject area the dataset relates to. There isn't really a cut-and-dried approach to verifying data accuracy. The basic idea is to formulate some properties that you think the data should exhibit and test the data to see if those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you're trying to figure out whether the data really is what you've been told it is.

Identifying outliers: Outliers are data points that are distant from the rest of the distribution; they are either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the training models that ThingWorx Analytics generates. A single outlier can have a huge impact on the value of the mean; because the mean is supposed to represent the center of the data, in a sense this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them. (The original post illustrates the effect of an outlier in the "AVG Technician Tenure" feature of the BeanPro dataset, with and without the outlier.)

Dealing with missing values: Missing values are one of the most common (and annoying) data problems you will encounter. In ThingWorx Analytics, dealing with null values is done by one of the methods below:
- Dropping records with missing values from your dataset. The problem with this is that missing values are frequently not just random little data glitches, so this should be considered the last option.
- Replacing the null values with the average of the values from the other records of the same field.

Transforming the dataset:
- Selecting only certain columns or records to load, e.g. excluding records where salary is not present (salary = null).
- Translating coded values (e.g. if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F").
- Deriving a new calculated value (e.g. sale_amount = qty * unit_price).
- Transposing or pivoting (turning multiple columns into multiple rows or vice versa).
- Splitting a column into multiple columns (e.g. converting a comma-separated list, specified as a string in one column, into individual values in different columns).

Please note: issues with Talend should be reported to the Talend team, and data preparation is outside the scope of PTC Technical Support, so please treat this article as an advisory best-practices document.
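As referenced in the data dictionary section above, a minimal sketch of a dataset configuration JSON file might look like the fragment below. Only the three keys named in this article (fieldName, dataType, description) are shown, and the field names are illustrative placeholders; the exact schema expected by your release (additional keys such as the operational type of each field) should be confirmed in the ThingWorx Analytics Help Center.

```json
[
  {
    "fieldName": "avg_technician_tenure",
    "dataType": "DOUBLE",
    "description": "Average tenure of technicians servicing the machine"
  },
  {
    "fieldName": "goal_field",
    "dataType": "DOUBLE",
    "description": "Numerical, non-nominal goal to predict"
  }
]
```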
View full tip
A feature is a piece of information that is potentially useful for prediction. Any attribute could be a feature, as long as it is useful to the model.

Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data. It's a loosely defined space of tasks related to designing feature sets for machine learning applications, with two components: first, understanding the properties of the task you're trying to solve and how they might interact with the strengths and limitations of the model you are going to use; second, experimental work where you test your expectations and find out what actually works and what doesn't.

Feature engineering as a technique has three sub-categories: feature selection, dimension reduction, and feature generation.

Feature selection: Sometimes called feature ranking or feature importance, this is the process of ranking attributes by their value to the predictive ability of a model. Algorithms such as decision trees automatically rank the attributes in the data set; the top few nodes in a decision tree are considered the most important features from a predictive standpoint. As part of a process, feature selection using entropy-based methods like decision trees can be employed to filter out less valuable attributes before feeding the reduced dataset to another modeling algorithm. Regression-type models usually employ methods such as forward selection or backward elimination to select the final set of attributes for a model. (The original post shows an example project-development decision tree.)

Dimension reduction: This is sometimes called feature extraction. The most classic example of dimension reduction is principal component analysis, or PCA. PCA allows us to combine existing attributes into a new data frame consisting of a much-reduced number of attributes by utilizing the variance in the data. The attributes which "explain" the highest amount of variance in the data form the first few principal components, and we can ignore the rest of the attributes if data dimensionality is a problem from a computational standpoint.

Feature generation or feature construction: Quite simply, this is the process of manually constructing new attributes from raw data. It involves intelligently combining or splitting existing raw attributes into new ones which have a higher predictive power. For example, a date stamp may be used to generate 2 new attributes such as AM and PM, which may be useful in discriminating whether day or night has a higher propensity to influence the response variable. Feature construction is essentially a data transformation process.

Tips for better feature engineering

Tip 1: Think about inputs you can create by rolling up existing data fields to a higher/broader level or category. As an example, a person's title can be categorized into strategic or tactical. Those with titles of "VP" and above can be coded as strategic; those with titles of "Director" and below become tactical. Strategic contacts are those that make high-level budgeting and strategic decisions for a company; tactical contacts are those in the trenches doing day-to-day work. Other roll-up examples include:
- Collating several industries into a higher-level industry: collate oil and gas companies with utility companies, for instance, and call it the energy industry, or fold the high-tech and telecommunications industries into a single area called "technology."
- Defining "large" companies as those that make $1 billion or more and "small" companies as those that make less than $1 billion.

Tip 2: Think about ways to drill down into more detail in a single field. As an example, a contact within a company may respond to marketing campaigns, and you may have information about his or her number of responses. Drilling down, we can ask how many of these responses occurred in the past two weeks, one to three months, or more than six months in the past. This creates three additional binary (yes=1/no=0) data fields for a model. Other drill-down examples include:
- Cadence: number of days between consecutive marketing responses by a contact: 1–7, 8–14, 15–21, 21+.
- Multiple responses on the same day flag (multiple responses = 1, otherwise = 0).

Tip 3: Split data into separate categories, also called bins. For example, annual revenue for companies in your database may range from $50 million (M) to over $1 billion (B). Split the revenue into sequential bins: $50–$200M, $201–$500M, $501M–$1B, and $1B+. Whenever a company falls within a revenue bin it receives a one; otherwise the value is zero. There are now four new data fields created from the annual revenue field. Other examples are:
- Number of marketing responses by contact: 1–5, 6–10, 10+.
- Number of employees in company: 1–100, 101–500, 501–1,000, 1,001–5,000, 5,000+.

Tip 4: Think about ways to combine existing data fields into new ones. As an example, you may want to create a flag (0/1) that identifies whether someone is a VP or higher and has more than 10 years of experience. Other examples of combining fields include:
- Title of director or below and in a company with less than 500 employees.
- Public company and located in the Midwestern United States.
You can even multiply, divide, add, or subtract one data field by another to create a new input (a short sketch illustrating Tips 3 and 4 appears at the end of this post).

Tip 5: Don't reinvent the wheel – use variables that others have already fashioned.

Tip 6: Think about the problem at hand and be creative. Don't worry about creating too many variables at first; just let the brainstorming flow.
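As referenced in Tips 3 and 4, the following JavaScript sketch shows how binned and combined features could be derived from raw fields. The record layout and field names are hypothetical placeholders, not from a real dataset.

```javascript
// Hedged sketch: derive binned and combined features from a raw record
// (field names are illustrative only).
function engineerFeatures(record) {
    var revenueM = record.annualRevenueMillions;
    return {
        // Tip 3: one binary field per revenue bin
        rev_50_200M:  (revenueM >= 50  && revenueM <= 200)  ? 1 : 0,
        rev_201_500M: (revenueM >= 201 && revenueM <= 500)  ? 1 : 0,
        rev_501M_1B:  (revenueM >= 501 && revenueM <= 1000) ? 1 : 0,
        rev_1B_plus:  (revenueM > 1000)                     ? 1 : 0,
        // Tip 4: combination flag -- VP or higher AND 10+ years experience
        senior_veteran: (record.isVpOrAbove && record.yearsExperience > 10) ? 1 : 0
    };
}

// Example usage
var features = engineerFeatures({
    annualRevenueMillions: 750,
    isVpOrAbove: true,
    yearsExperience: 12
}); // -> rev_501M_1B: 1, senior_veteran: 1
```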
View full tip
ThingWorx Foundation

Flow
- Enable customers using Azure to take advantage of Azure services (Azure Connector): access hundreds of Azure system connectors by invoking Azure Logic Apps from within ThingWorx Flow; execute Azure Functions to leverage Azure's dynamic, serverless scaling and pay for just the processing power needed; access Azure Cognitive AI services for image recognition, text-to-voice/voice-to-text, OCR and more.
- Easily integrate with homegrown and commercial solutions based on SQL databases where explicit APIs or REST services are not exposed (SQL Database Connector).
- Automatically trigger business process flows by subscribing to Windchill object class and instance events, providing visibility into mature PLM content (such as when a part is released) to downstream manufacturing and supply chain roles and systems (Windchill Event Trigger).
- Easily add new actions by extending functionality from existing connectors, inheriting or copying functionality from existing actions and changing only what is necessary to support the new custom action (Custom Action Improvements).

Platform
- Composer: Horizontal tab navigation is back! Also a new Scheduler editor.
- Security: TLS 1.2 support by default; new services for handling expired device connections.
- New support for InfluxDB 1.7 and MSSQL 2017.

*New* Solution Central
Solution Central is a brand-new cloud-based service to help enterprises package, store, deploy and manage their ThingWorx apps, accelerating application deployment. Initially targeted at developers and admins in its first release, Solution Central enables you to:
- Package, publish and upload your app with version info and metadata to your tenancy of Solution Central in the PTC cloud.
- Identify missing dependencies via automatic dependency management to ensure your application is packaged with everything required for it to run on the target environments.
- Garner enterprise-wide visibility of your ThingWorx apps deployed across the enterprise via a cloud portal showcasing your company's available apps, their versions and target environments, fostering a holistic view of your entire IIoT footprint across all of your servers, sites and use cases.

Mashup Builder
- 9 new widgets, 5 new functions.
- Theme Editor with swappable Mashup preview.
- Responsive Layout enhancements, including new settings for fixed and range sizes.
- New Builder for custom screen sizes, new Widget and Style editors, Canvas Zoom.
- Migration utility available for legacy applications to help move to the latest features.

Security
- 3 new built-in services for the WebSocket Communications Subsystem: QueryEndpointSessions, GetBoundThingsForEndpoint, and CloseEndpointSessions. These provide greater awareness of Things bound to the platform, allow for mass termination of connections if necessary, and can be configured to automatically disconnect devices with expired authentication methods (device connection monitoring & security).
- TLS by default when using the installer: encrypting data-in-motion (using TLS 1.2) is a best practice for securely using ThingWorx. For previous versions, the installer defaulted to not configuring TLS; ThingWorx 8.5 and later installers default to configuring TLS. ThingWorx will still allow customers to decline to do so, if desired.

ThingWorx Analytics
- Confidence Model Training and Scoring (ThingWorx Analytics APIs): Deepens functionality by enabling training and scoring of confidence models, which provide information about the uncertainty in a prediction to facilitate human and automated decision making.
- Range Property Transform and Descriptive Service: Improves ease of implementation of the data transformations required for common statistical process control visualizations.
- Architecture Simplification: Improves cost of ownership by reducing the number of microservices required by Analytics Server, reducing deployment complexity. The simplified installation process enables system administrators to integrate ThingWorx Analytics Server with either (or both) ThingWorx Foundation 8.5 and FactoryTalk Analytics DataFlowML 3.0.

ThingWorx Manufacturing and Service Apps & Operator Advisor
- Manufacturing common layer extension – now bundling all apps as one extension (Operator Advisor, Asset Advisor, Production KPIs, Controls Advisor).
- Operator Advisor user interface for work instruction delivery.
- Shift and crew data model & user interface.
- Enhancements to the Operator Advisor MPMLink connector.
- Flexible KPI calculations.
- Multiple context support for assets.

ThingWorx Navigate
- New Change Management App, the first in the Contribute series, allows a user to participate in change request reviews delivered through a task list called "My Tasks".
- BETA release of intelligent, reusable components that will dramatically increase the speed of custom app development.
- Improvements to existing View Apps: an updated, re-usable 3D viewing component (ThingView widget), support for Windchill Distributed Vaults, and display of security labels & values.

ThingWorx Azure IoT Hub Connector
- Seamless compatibility of Azure devices with ThingWorx accelerators like Asset Advisor and custom applications developed using Mashup Builder.
- Ability to update software and firmware remotely using ready-built Software Content Management via the "ThingWorx Azure Software Content Management" module on Azure IoT Edge.
- Quick installation and configuration of the ThingWorx Azure IoT Hub Connector, Azure IoT Hub and Azure IoT Edge SCM module.

Documentation
- ThingWorx Platform: ThingWorx Platform 8.5 Release Notes | ThingWorx Platform Help Center | ThingWorx 8.5 Platform Reference Documents | ThingWorx Connection Services Help Center
- ThingWorx Azure IoT Hub Connector: ThingWorx Azure IoT Hub Connector Help Center
- ThingWorx Analytics: ThingWorx Platform Analytics 8.5.0 Release Notes | Analytics Server 8.5.1 Release Notes | ThingWorx Analytics Help Center
- ThingWorx Manufacturing & Service Apps and ThingWorx Operator Advisor: ThingWorx Apps Help Center | ThingWorx Operator Advisor Help Center
- ThingWorx Navigate: ThingWorx Navigate 8.5 Release Notes | Installing ThingWorx Navigate 8.5 | Upgrading to ThingWorx Navigate 8.5 | ThingWorx Navigate 8.5 Tasks and Tailoring | Customizing ThingWorx Navigate 8.5 | PTC Windchill Extension Guide 1.12.x | ThingWorx Navigate 8.5 Product Compatibility Matrix | ThingWorx Navigate 8.5 Upgrade Support Matrix | ThingWorx Navigate Help Center

Additional Information
Help Center | ThingWorx eSupport Portal | ThingWorx Developer Portal | PTC Marketplace. The National Instruments Connector can be found on the PTC Marketplace.
View full tip
This post covers how to build and operationalize a time series model using ThingWorx Analytics. A lookback window is used to read multiple previous rows before the current one and base the prediction on those lookback rows. In this example we use time series data to predict water flow for different water pumps in a system. A full explanation of the method is attached, and all necessary resources are included in the attached files.
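To illustrate what a lookback window does, here is a minimal JavaScript sketch that turns a single series of flow readings into rows whose features are the previous lookbackSize readings. The array-based input and field naming are illustrative only, not the format Analytics Builder uses internally.

```javascript
// Hedged sketch: build lookback rows from a series of flow readings.
// With lookbackSize = 3, each output row contains the 3 previous
// readings as features and the current reading as the value to predict.
function buildLookbackRows(series, lookbackSize) {
    var rows = [];
    for (var i = lookbackSize; i < series.length; i++) {
        rows.push({
            features: series.slice(i - lookbackSize, i), // previous readings
            target: series[i]                            // value to predict
        });
    }
    return rows;
}

// Example: [12, 14, 13, 15, 16] with lookbackSize 3 yields two rows:
// {features: [12, 14, 13], target: 15}, {features: [14, 13, 15], target: 16}
var rows = buildLookbackRows([12, 14, 13, 15, 16], 3);
```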
View full tip
This is part of the continuing series of Blog posts regarding Troubleshooting the Application, this article will discuss more advance issues that some clients and customer have encountered while building or using ThingWorx Analytics. Packer Script Error – Unable to Download CentOS Image As the application is developed and built inside a CentOS image, the ThingWorx Analytics Packer Script tool for Virtual Machine Appliance creation utilizes the CentOS mirror repository in the creation process. When the end user is attempting to build the Virtual Machine Appliance with the Packer Script media creation tool, part of the process is to download the CentOS 7 ISO image file as the basis for the operating system that the ThingWorx Analytics Server software will be installed to. If CentOS updates or changes their mirror links for the source file ISO, you may encounter the following error: ==> virtualbox-iso: Downloading or copying Guest additions virtualbox-iso: Downloading or copying: file:///C:/Program%20Files/Oracle/VirtualBox/VBoxGuestAdditions.iso ==> virtualbox-iso: Downloading or copying ISO virtualbox-iso: Downloading or copying: file:///local-file-repo/CentOS-7-x86_64-Minimal-1511.iso virtualbox-iso: Error downloading: open local-file-repo/CentOS-7-x86_64-Minimal-1511.iso: The system cannot find the path specified. virtualbox-iso: Downloading or copying: http://mirror.spro.net/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1511.iso virtualbox-iso: Error downloading: checksums didn't match expected: 88c0437f0a14c6e2c94426df9d43cd67 ==> virtualbox-iso: ISO download failed. Build 'virtualbox-iso' errored: ISO download failed. ==> Some builds didn't complete successfully and had errors: --> virtualbox-iso: ISO download failed. ==> Builds finished but no artifacts were created. Solution Method 1: Configuration File Replacement We have created a custom JSON configuration file that resolves the mirror issue for CentOS 7 v1611. You can download the JSON file here; you may have to right-click and “save link as” a JSON extension file. Also note, you will have to save/rename this JSON file as neuron-solo-variables.json. Using this file, navigate to your Packer Script builder directory, usually this is found in the following path: <PATH>\ThingWorx-Analytics-Server-Standalone\components\vm-builder\neuron-vm-builder Copy the new JSON file into this directory, and replace the current existing copy. You can now re-run the Packer Script for your desired Virtual Machine Appliance output. Method 2: Manual Configuration File Adjustment You will have to locate an active mirror for CentOS 7. A list of current active mirrors can be found here. When selecting a mirror, you will need to select the Minimal ISO install, as this is the base image that is used for the VM creation. Next, you will have to open the current neuron-solo-variables.json configuration file located in the <PATH>\ThingWorx-Analytics-Server-Standalone\components\vm-builder\neuron-vm-builder directory. You will have to replace the os_image_download_url value with an active Mirror URL from the list above. Next, for the os_iso_md5_checksum variable, you will need to replace the entry with the new SHA256 checksum from CentOS, which can be located here. Default Settings: New Settings: Save changes and close the neuron-solo-variables.json configuration file. CentOS has switched over from MD5 to SHA256 checksums. Even though in the following the variable name has “MD5” in the string, we will be modifying a second JSON configuration file to address this. 
In the same directory, open the neuron-solo.json configuration file and modify the iso_checksum_type attribute to sha256. (As above, the original post showed the default and new settings in screenshots.) Save changes and close the neuron-solo.json configuration file. You can now re-run the Packer Script for your desired Virtual Machine Appliance output. A scripted version of these two edits is sketched below.
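For reference, the two Method 2 edits can also be applied programmatically. The following is a minimal sketch, assuming the default builder directory layout described above; the mirror URL and checksum are placeholders you must replace with an active mirror and the matching SHA256 checksum published by CentOS, and the position of iso_checksum_type inside neuron-solo.json is an assumption based on standard Packer templates.

import json
from pathlib import Path

# Assumed location of the Packer Script builder directory (see Method 2 above).
BUILDER_DIR = Path("ThingWorx-Analytics-Server-Standalone/components/vm-builder/neuron-vm-builder")

# Placeholder values -- substitute an active CentOS 7 Minimal ISO mirror URL
# and the matching SHA256 checksum from the CentOS checksum list.
NEW_ISO_URL = "http://<active-mirror>/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso"
NEW_SHA256 = "<sha256-checksum-from-centos>"

# Update the mirror URL and checksum in neuron-solo-variables.json.
variables_path = BUILDER_DIR / "neuron-solo-variables.json"
variables = json.loads(variables_path.read_text())
variables["os_image_download_url"] = NEW_ISO_URL
# The variable name still says "md5", but it now holds the SHA256 value.
variables["os_iso_md5_checksum"] = NEW_SHA256
variables_path.write_text(json.dumps(variables, indent=2))

# Switch the checksum type to sha256 in neuron-solo.json.
solo_path = BUILDER_DIR / "neuron-solo.json"
solo = json.loads(solo_path.read_text())
# Assumes a single virtualbox-iso builder entry, as in a standard Packer template.
solo["builders"][0]["iso_checksum_type"] = "sha256"
solo_path.write_text(json.dumps(solo, indent=2))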
View full tip
In this video we cover the installation of the UploadThing module. This video applies to ThingWorx Analytics 52.2 through 8.0; it is no longer applicable as of ThingWorx Analytics 8.1.   Useful links: How to copy files from Windows to Linux   Updated link for access to this video: Installing Thingworx Analytics Builder: Part 3 of 3
View full tip
In this video we cover: a short introduction to ThingWorx Analytics Builder, and the import of the ThingWorx Analytics Builder extension.   This video applies to ThingWorx Analytics 52.1 through 8.1.   Updated link for access to this video: Installing Thingworx Analytics Builder: Part 1 of 3
View full tip
Parquet Data Format used in ThingWorx Analytics

Starting with ThingWorx Analytics 8.1, data storage no longer requires the installation of a PostgreSQL database. Instead, uploaded CSV data is converted to the optimized Apache Parquet format and stored directly in the file system. This blog explains some of the features of Apache Parquet that justify this transition in ThingWorx Analytics data storage.

What is Apache Parquet?

Apache Parquet is a column-oriented data store of the Apache Hadoop ecosystem. It is compatible with most of the data processing frameworks in the Hadoop environment, and it provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. (The original post included an illustration of the columnar storage model here.)

Apache Parquet Features and Benefits

Apache Parquet is implemented using the record shredding and assembly algorithm, taking into account the complex data structures that can be used to store the data. Apache Parquet stores data so that the values in each column are physically stored in contiguous memory locations. This columnar storage provides the following benefits:

- Column-wise compression is efficient and saves storage space.
- Compression techniques specific to a type can be applied, as the column values tend to be of the same type.
- Queries that fetch specific column values need not read the entire row data, improving performance.
- Different encoding techniques can be applied to different columns.

Advantages of using Parquet for ThingWorx Analytics

Apart from the benefits above, which amount to higher efficiency and increased performance, the following advantages apply specifically to ThingWorx Analytics:

- Moving from a database to Parquet removes the limitations on the number of data columns the system can handle.
- It streamlines the dataset creation process: because the data is converted to Parquet format, there is no need to separately optimize the dataset.
- When new data is appended to an existing dataset, a new partition is added; re-optimization is optional but not required.
- Data can be appended easily, so there is no longer a need to reload the full dataset when new data values are added.

(The original post closed with an illustration comparing the row-based storage model with the columnar storage of Parquet. A small sketch of this append-a-partition behavior follows.)
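To make the partitioning behavior concrete, here is a minimal sketch using the pyarrow engine in pandas. This is purely illustrative and is not the internal ThingWorx Analytics pipeline; the file names and the "temperature" column are hypothetical.

import os
import pandas as pd

os.makedirs("dataset", exist_ok=True)

# Hypothetical CSV upload converted to a first Parquet partition.
df = pd.read_csv("sensor_readings.csv")
df.to_parquet("dataset/part-000.parquet", engine="pyarrow")

# Appending new data simply adds another partition file alongside the first;
# existing partitions are untouched, so no full reload or re-optimization is needed.
new_rows = pd.read_csv("sensor_readings_new.csv")
new_rows.to_parquet("dataset/part-001.parquet", engine="pyarrow")

# A columnar read pulls only the requested columns, not entire rows.
subset = pd.read_parquet("dataset", engine="pyarrow", columns=["temperature"])
print(subset.describe())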
View full tip
Design and Implement Data Models to Enable Predictive Analytics Learning Path

Design and implement your data model, create logic, and operationalize an analytics model.

NOTE: Complete the following guides in sequential order. The estimated time to complete this learning path is 390 minutes.

1. Data Model Introduction
2. Design Your Data Model (Part 1, Part 2, Part 3)
3. Data Model Implementation (Part 1, Part 2, Part 3)
4. Create Custom Business Logic
5. Implement Services, Events, and Subscriptions (Part 1, Part 2)
6. Build a Predictive Analytics Model (Part 1, Part 2)
7. Operationalize an Analytics Model (Part 1, Part 2)
View full tip
One of the interesting features of ThingWorx Analytics Manager is its ability to run distributed models created in Excel (and more, of course). Most people who have been tasked with understanding data have built models in Excel, sometimes quite complex models (or even applications).

The ability to tie these models to real data coming from various systems connected through ThingWorx, and to operationalise their execution, is a really simple way for people to leverage their existing work and I.P. on a connected analytics journey.

To demonstrate this power and ease of implementation, I created a sample data set with historical data, a traffic profile, and a simple anomaly detection model to execute with Analytics Manager (files are attached; a sketch of the kind of logic such a model encodes follows below).

The online help center was quite helpful in explaining the process of Creating the Excel Workbook; however, I got stuck at the XML mapping stage. The Analytics and Excel documentation both neglect to mention one important detail -- you must be using the Windows version of Excel in order to get the XML Source functionality (and I use Mac). Once using Windows, it was easy to do - here is a video of the XML mapping part of the process (for the inputs and results).
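The attached workbook is not reproduced here, but as an illustration of the kind of logic a simple anomaly detection model typically encodes, here is a minimal sketch: flag traffic values that deviate from the historical mean by more than three standard deviations. The file and column names are hypothetical, and the attached Excel model may use different logic.

import pandas as pd

# Hypothetical inputs mirroring the attached sample data set.
history = pd.read_csv("historical_traffic.csv")   # past observations
current = pd.read_csv("traffic_profile.csv")      # values to score

mean = history["traffic"].mean()
std = history["traffic"].std()

# Flag any reading more than three standard deviations from the historical mean.
current["anomaly"] = (current["traffic"] - mean).abs() > 3 * std
print(current[current["anomaly"]])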
View full tip