IoT Tips

The following videos are provided to help users get started with ThingWorx:

ThingWorx Installation
- Installing ThingWorx (Neo4j) in Windows
- ThingWorx PostgreSQL Setup for Windows
- ThingWorx PostgreSQL for RHEL

ThingWorx Data Storage
- Introduction to Streams
- Introduction to Value Streams
- Introduction to DataTables
- Introduction to InfoTables

ThingWorx Concepts & Functionality
- Introduction to Media Entities
- Using State Formatting in a Mashup
- Configuring Properties

ThingWorx REST API
- REST API (Part 1)
- REST API (Part 2)

ThingWorx Edge SDK
- Configuring File Transfer with the .NET SDK

ThingWorx Analytics *new*
- Getting Started with ThingWorx Analytics Part 1
- Getting Started with ThingWorx Analytics Part 2
- Installing ThingWorx Analytics Builder Part 1 of 3
- Installing ThingWorx Analytics Builder Part 2 of 3
- Installing ThingWorx Analytics Builder Part 3 of 3
- Creating Signals in the Analytics Builder
- How to Access the ThingWorx Analytics Interactive API Guide

ThingWorx Widgets
- How to Create and Configure the Auto Refresh Widget
- How to Create and Define a Blog Widget
- How to Create and Configure a Button Widget
- How to Use the Divider and Shape Widgets
- How to Create and Configure a Chart Widget
- How to Use a Contained Mashup
- How to Use the Data Filter Widget
- How to Use an Expression Widget
- How to Create and Configure a Gauge Widget
- How to Create and Configure a Checkbox Widget
- How to Use a Contained Mashup Widget
- How to Use a Data Export Widget
- How to Use the DateTime Picker Widget
- How to Use the Editable Grid Widget
- Using Fieldset and Panel Widgets
- How to Use the File Upload Widget
- How to Use the Folding Panel Widget
- How to Use the Google Location Picker
- How to Use the Google Map Widget
- How to Use a Grid Widget
- How to Use an HTML TextArea Widget
- How to Use the List Widget
- How to Use a Label Widget
- How to Use the Layout Widget
- How to Use the LED Display Widget
- How to Use the Masked Textbox Widget
- Navigation in ThingWorx: Using Menus, the Navigation Widget, Link Widget, and Contained Mashups
- How to Use the Numeric Entry Widget
- How to Use the Pie Chart Widget
- How to Use the Property Display Widget
- How to Use the Radio Button Widget
- How to Use the Repeater Widget
- How to Use the Slider Widget
- How to Use the SQUEAL Search Widget
- How to Use the Responsive Tab Widget
- How to Use the Tag Cloud Widget
- How to Use the Tag Picker Widget
- How to Use the TextArea and TextBox Widgets
- How to Use the Time Selector Widget
- How to Use the Tree Widget
- How to Use the Value Display Widget
- How to Use the Web Frame Widget
- How to Create and Define a Wiki
- How to Use the XY Chart

Quick note: this thread will be updated with more videos as they are added.
There are four types of analytics. This post covers Prescriptive analytics: What should I do about it?

Prescriptive analytics is about using data and analytics to improve decisions, and therefore the effectiveness of actions. It is related to both Descriptive and Predictive analytics: while Descriptive analytics aims to provide insight into what has happened, and Predictive analytics helps model and forecast what might happen, Prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters. It has been defined as "any combination of analytics, math, experiments, simulation, and/or artificial intelligence used to improve the effectiveness of decisions made by humans or by decision logic embedded in applications."

These analytics go beyond descriptive and predictive analytics by recommending one or more possible courses of action. Essentially, they predict multiple futures and allow companies to assess a number of possible outcomes based upon their actions. Prescriptive analytics uses a combination of techniques and tools such as business rules, algorithms, machine learning and computational modelling procedures. It can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options.

Prescriptive analytics can be used in two ways:
- Inform decision logic with analytics: decision logic needs data as an input to make the decision. The veracity and timeliness of the data ensure that the decision logic operates as expected. It doesn't matter whether the decision logic is that of a person or embedded in an application; in both cases, prescriptive analytics provides the input to the process. It can be as simple as aggregate analytics about how much a customer spent on products last month, or as sophisticated as a predictive model that predicts the next best offer to a customer. The decision logic may even include an optimization model to determine how much, if any, discount to offer the customer.
- Evolve decision logic: decision logic must evolve to improve or maintain its effectiveness. In some cases, decision logic itself may be flawed or degrade over time. Measuring and analyzing the effectiveness (or ineffectiveness) of enterprise decisions allows developers to refine or redo the decision logic to make it even better. This can be as simple as marketing managers reviewing email conversion rates and adjusting the decision logic to target an additional audience, or as sophisticated as embedding a machine learning model in the decision logic of an email marketing campaign to automatically adjust what content is sent to target audiences.

Different technologies prescriptive analytics uses to create action:
- Search and knowledge discovery: information leads to insights, and insights lead to knowledge. That knowledge enables employees to make smarter decisions for the benefit of the enterprise. Developers can embed search technology in decision logic to find decision-relevant knowledge in large pools of unstructured big data.
- Simulation: simulation imitates a real-world process or system over time using a computer model. Because digital simulation relies on a model of the real world, the usefulness and accuracy of simulation for improving decisions depends heavily on the fidelity of the model. Simulation has long been used in multiple industries to test new ideas, or to test how modifications will affect an existing process or system.
- Mathematical optimization: mathematical optimization is the process of finding the optimal solution to a problem that has numerically expressed constraints.
- Machine learning: "learning" means that the algorithms analyze sets of data to look for patterns and/or correlations that result in insights. Those insights can become deeper and more accurate as the algorithms analyze new data sets. The models created and continuously updated by machine learning can be used as input to decision logic, or to improve the decision logic automatically.
- Pragmatic AI: enterprises can use AI to program machines to continuously learn from new information, build knowledge, and then use that knowledge to make decisions and interact with people and/or other machines.

Use of Prescriptive Analytics in ThingWorx Analytics

Thing Optimizer: Thing Optimizer functionality provides the prescriptive scoring and optimization capabilities of ThingWorx Analytics. While predictive scoring allows you to make predictions about future outcomes, prescriptive scoring allows you to see how certain changes might affect future outcomes. After you have generated a prediction model (also called training a model), you can modify the prescriptive attributes in your data (those attributes marked as levers) to alter the predictions. The prescriptive scoring process evaluates each lever attribute and returns an optimal value for that feature, depending on whether you want to minimize or maximize the goal variable. Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, original and optimal values are included in the prescriptive scoring results for each attribute identified in your data as a lever.

How to access Thing Optimizer functionality: ThingWorx Analytics prescriptive scoring can only be accessed via the REST API service. Using a REST client, you can access the Scoring service, which includes a series of API endpoints to submit scoring requests, retrieve results, list jobs, and more. This requires installation of the ThingWorx Analytics Server.

How to avoid mistakes: below are some common mistakes made while doing prescriptive analytics:
- Starting digital analytics without a clear goal
- Ignoring core metrics
- Choosing overkill analytics tools
- Creating beautiful reports with little business value
- Failing to detect tracking errors

Image source: Wikipedia. Content: go.forrester.com (partially).
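To make the REST-only access to prescriptive scoring concrete, here is a minimal sketch using Python's requests library against the ThingWorx Core REST API layer (8.1+ style). The Thing name (AnalyticsServer_PrescriptiveThing), service name (SubmitJob) and payload field names are illustrative assumptions; verify the actual endpoints in the ThingWorx Analytics Interactive API Guide for your release.

    import requests

    TWX = "http://twx.example.com:8080/Thingworx"    # assumption: your ThingWorx server
    HEADERS = {
        "appKey": "<application-key>",               # assumption: a valid application key
        "Content-Type": "application/json",
        "Accept": "application/json",
    }

    # Assumed payload shape: the trained model, the rows to optimize, the goal
    # direction, and the lever attributes the optimizer may adjust.
    payload = {
        "modelUri": "results:/models/<job-id>",
        "datasetUri": "dataset:/<dataset-name>",
        "goalField": "<goal>",
        "maximize": True,
        "levers": ["<lever-field-1>", "<lever-field-2>"],
    }

    resp = requests.post(
        f"{TWX}/Things/AnalyticsServer_PrescriptiveThing/Services/SubmitJob",
        headers=HEADERS, json=payload)
    resp.raise_for_status()
    print("prescriptive scoring job:", resp.json())  # poll the matching retrieve service for results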
I have created a mashup which allows you to easily use and test the Prescriptions functionality in ThingWorx Analytics (TWA). This is where you choose one or more fields for optimization, and TWA tells you how to adjust those fields to get an optimal outcome. The functionality is based on a public sample dataset for concrete mixtures; full details are included in the attached documentation.
Datasets with an ordinal or categorical goal cannot currently be used in ThingWorx Analytics Builder. However, this is only a UI limitation: ThingWorx Analytics Server can handle such data. It simply requires using the services from the AnalyticsServer-Training and AnalyticsServer-Prediction Things to perform the operations. This can be done in a mashup or via REST API calls (see https://www.ptc.com/en/support/article?n=CS271485). The video below expands on the mashup solution. Also attached are the entities used during the video and a sample dataset with an ordinal goal.

Update for ThingWorx 9.0: the API has changed in 9.0; use the entities file Entities-90-3Jun2020.xml for release 9.0.
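For the REST route, here is a minimal sketch of kicking off a training job on an ordinal-goal dataset through the ThingWorx service-invocation URI pattern. The Thing name (AnalyticsServer_TrainingThing), service name (CreateJob) and parameter names are assumptions based on the default naming; confirm them against your AnalyticsServerThing configuration and the article above.

    import requests

    TWX = "http://twx.example.com:8080/Thingworx"   # assumption: your ThingWorx server
    HEADERS = {"appKey": "<application-key>",       # assumption: a valid application key
               "Content-Type": "application/json",
               "Accept": "application/json"}

    # Assumed parameter names; the goal column is declared ORDINAL in the
    # dataset metadata, which is exactly what the Builder UI cannot do.
    params = {
        "datasetUri": "dataset:/<dataset-name>",
        "goalField": "<ordinal-goal-field>",
    }

    r = requests.post(
        f"{TWX}/Things/AnalyticsServer_TrainingThing/Services/CreateJob",
        headers=HEADERS, json=params)
    r.raise_for_status()
    print("training job:", r.json())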
This video shows the steps to install ThingWorx Analytics release 8.3  
Put together a quick example mashup in support of a couple of analytics projects, to demonstrate use of the new TWX Analytics 8.1 APIs within the TWX mashup builder. The intention here is for use in POCs, to provide a quick way of demonstrating customer-facing analytics outputs alongside the more detailed view available in Analytics Builder.

Required pre-requisites are:
- ThingWorx 8.1 + Analytics Extensions
- ThingWorx Analytics Server 8.1
- Carousel TWX UI widget (attached) imported into TWX
- Data set(s) loaded with signals / profiles generated

The demo can be installed by importing the attached entities file into TWX Composer, then launching the mashup 'EMEA.Analytics.CustomerInsightMashUp'.

A quick run-through of the functionality: on launching the mashup, data sets and models are displayed for selection on the left-hand side. On selecting a dataset and model, signals are presented in two tabs, the first being an overview of all signals. The list on the left can be expanded by changing the value for 'Top <n> Contributing Features'. On selecting a signal from the list, the 'Selected Signal Details' tab displays additional charting for value ranges, average goal, etc.; the number of 'bins' to display can be edited. Similarly, profiles can be viewed from the 'Profiles' tab, and each profile can be selected by dragging the upper carousel.

This is all done using the Analytics 8.1 "Things" in TWX, along with an additional custom Thing with some scripted services (EMEA.Analytics.Helper).

Thanks to Arian Van Huelsen & Tanveer Saifee at PTC for their support; all comments / feedback welcome.
Predicting time to failure (TTF) or remaining useful life (RUL) is a common need in the IIoT world. We look here at some ways to implement it, using one of the publicly available NASA datasets that simulates turbofan engine degradation (https://c3.nasa.gov/dashlink/resources/139/). The original dataset has 26 features:
- Column 1: asset id
- Column 2: cycle/time of sensor data collection
- Columns 3-5: operational settings
- Columns 6-26: sensor measurements
In the training dataset, the sensor measurements end when the failure occurs.

Data Collection

Since the prediction model is based on historic data, data collection is a critical point. In some cases the data will already have been collected in the past and you need to make the best of it; see the Data Preparation chapter below. In situations where you are still collecting data, a few points are good to keep in mind (some may or may not apply depending on the type of data to be collected):
- More frequent (higher-frequency) collection is usually better, especially for electronic measurements.
- In situations where one or more specific sensor values are known to impact the TTF, it is good to take measurements at different values of this sensor until the failure, without artificially modifying the values mid-run. For example, for a light bulb with a normal working voltage of 1.5V, it is good to take some measurements at, say, 1V, 1.5V, 2V, 3V and 4V, but each time run until the failure. Do not start at 1.5V and switch to 4V after one hour; this would compromise what the model can learn.
- More variation is better, as it helps the prediction model to generalize. In the same voltage example, it is best to collect data for 1V, 1.5V, 2V, 3V and 4V rather than just 1.5V, the normal running condition. This also depends on the use case: if we know for sure that the voltage will always be between 1.45V and 1.55V, then we could focus data collection on that range only.
- Once the failure is reached, stop collecting data. We are not interested in what happens after the failure, and collecting data past it will also lower prediction model accuracy.
- Each failure run should be a separate cycle in the dataset. In other words, from a metadata standpoint, each failure run should be represented by a different ENTITY_ID.

TTF business need

Before going into data preparation and model creation, we need to understand what information is important in terms of TTF prediction for our business need. There are several ways to conceive the TTF, for example:
- Exact time value when failure might occur. This is probably the most challenging to predict, and one should consider whether it is really necessary. Do we really need to know that a failure might occur in 12 minutes as opposed to 14 minutes? Very often, knowing that the time to failure is less than X minutes is what matters, not the exact time, so the following options are often more appropriate.
- Threshold. For some applications, knowing that a critical threshold has been reached is all that is needed. In this case a Boolean goal, for example lessThan30min or healthy with yes/no values, can be used. This is usually much easier to predict than the exact value above.
- Range. For other applications we may need a bit more insight and try to predict ranges, for example: lessThan30min, 30to60min, 60to90min and moreThan90min. In this case we define an ordinal goal. The caveat is that ThingWorx Analytics Builder does not currently support ordinal goals, though ThingWorx Analytics Server does; it only means that the model creation needs to be done through the API. This is the option we take with the NASA dataset.

Data Preparation

General feature engineering: data preparation is always a very important step in any machine learning work. It is important to present the data in the way best suited for the algorithms to give the best results. There are many practices that can be used, but they are beyond the scope of this post; the Feature Engineering post gives some starting points, and there are many resources available on the Internet, though the help of a data scientist may be necessary. As an example, in the original NASA dataset a few features have a constant value; they are unlikely to impact the prediction and will be removed. This frees computational resources and prevents confusion in the model.

Sensor data resampling: the data sampling across the different sensors should be uniform. In a real-case scenario, sensor data may be collected at different time intervals; data transformation/extrapolation should be done so that all sensor values are at the same frequency in the uploaded dataset.

TTF feature: since we want to predict the time to failure, we need a column in the dataset that represents this value for the data we have. In a real-case scenario we obviously cannot measure the time to failure, but we usually have sensor data up to the point of failure, from which we can derive the TTF values. This is the case in the NASA dataset, where the last cycle corresponds to the time when the failure occurred. We can therefore derive a new TTF feature in this dataset: it starts at 0 for the last cycle, when the failure occurred, and is incremented by 1 back to the very first measurement.

Once this TTF column is defined, we may need to transform it further, depending on the path chosen for TTF prediction as described in the TTF business need chapter. For the NASA dataset we choose a range TTF with values of more100, 50to100, 10to50 and less10, representing the number of remaining cycles until the predicted failure. This is the information we need in order to plan a suitable maintenance action.

Once the data in CSV form is ready, we need to create the JSON file for the metadata. In the case of a range TTF, this is defined as an ordinal goal, as below (see attachment for the full metadata JSON file):

    {
        "fieldName": "TTF",
        "values": ["less10",
                   "10to50",
                   "50to100",
                   "more100"],
        "range": null,
        "dataType": "STRING",
        "opType": "ORDINAL",
        "timeSamplingInterval": null,
        "isStatic": false
    }

Model creation

Once the data is ready, it can be uploaded into ThingWorx Analytics and work on the prediction model can start. ThingWorx Analytics is designed to make machine learning easy and accessible to non-data scientists, so these steps will be easier than with other solutions.
However, some trial and error is needed to refine the model, which may also involve reworking the dataset. Important considerations:
- When dealing with time-to-failure prediction, it is usually necessary to unset Use Goal History in the Advanced parameters of the model creation wizard. When using the API, the equivalent is to set the virtualSensor parameter to true.
- Tests with the Redundancy Filter enabled should be done, as this has been shown to give better results.
- In a first attempt, it is a good idea to keep the lookback size at 0. This tells ThingWorx Analytics to find the best lookback size among 2, 4, 8 and 16. If you need a different value, or know that a different value is better suited, you can change it accordingly. However, bear in mind that a larger lookback size leads to less data being available for training (since more data is needed to predict one goal) and to a significant memory increase; see https://www.ptc.com/en/support/article?n=CS294545.

In the case of the NASA dataset, since we are using an ordinal goal, we need to execute the training through the API. This can be done through a mashup and services (see How to work with ordinal and categorical data in ThingWorx Analytics ? for an example) for a more productive way. As a test, the TrainingThing.CreateJob service can be called from the Composer directly.

Once the model is created, we can check some performance statistics in ThingWorx Analytics Builder or, in the case of an ordinal goal, via the ValidationThing.RetrieveResults service. The most relevant metric in the case of an ordinal goal is the confusion matrix.

Another validation is to compute Predicted vs Actual (PVA) results for some validation data. ThingWorx Analytics performs validation automatically when using ThingWorx Analytics Builder and presents useful performance metrics and graphs. In the case of an ordinal goal, we still get this automatic validation run (hence the confusion matrix above), but no PVA graph or data is available. This can be done manually if some data is kept aside and not passed to the training microservice. Once the model is completed, we can then score this validation dataset (using PredictionThing.RealTimeScore or BatchScore for an ordinal goal, or the Builder UI for other goals) and compare the prediction results with the actual values.

Depending on the business case, this model can be deemed acceptable or may need rework, such as changing the range values, changing learners' parameters, or modifying the dataset. There is certainly a fair amount of experimentation before creating the optimal model, but hopefully this post gives some good starting points.

Resources:
- Original dataset attached as train_FD001-original.csv
- Transformed dataset attached as train_FD001-TTF-transformed.csv
- JSON metadata file for the transformed dataset attached as train_FD001-ttford.json
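As an illustration of the TTF derivation and ordinal binning described above, here is a minimal Python/pandas sketch over the attached training file. It assumes the 26-column layout given at the top of this post (column 1 = asset id, column 2 = cycle); the exact bin edges are an assumption matching the range labels used here.

    import pandas as pd

    # Column names live in the metadata JSON, so the CSV itself has no header row.
    df = pd.read_csv("train_FD001-original.csv", header=None)
    df = df.rename(columns={0: "asset_id", 1: "cycle"})

    # TTF = cycles remaining before the failure that ends each asset's run:
    # 0 at the last recorded cycle, counting up toward the first measurement.
    df["TTF"] = df.groupby("asset_id")["cycle"].transform("max") - df["cycle"]

    # Bin the numeric TTF into the ordinal ranges used in this post
    # (assumed cut-offs: <10, <50, <100, otherwise more100).
    def ttf_range(cycles_left: int) -> str:
        if cycles_left < 10:
            return "less10"
        if cycles_left < 50:
            return "10to50"
        if cycles_left < 100:
            return "50to100"
        return "more100"

    df["TTF"] = df["TTF"].map(ttf_range)
    df.to_csv("train_FD001-TTF-transformed.csv", index=False, header=False)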
ThingWorx Analytics can be assembled on multiple operating systems. In this post, we discuss common issues that have been encountered by other users.

Permission denied: read/write access to third-party components

This is encountered when executing the shell script that begins the creation process. On macOS and Linux you may encounter a "Permission denied" error on the two components required for the creation, packer-post-processor-vhd and packer.

Error message: a Terminal message reading "Process Completed, No Artifacts Created". This indicates that the Packer script failed to complete the task and the desired appliance images were not created. To correct this issue, change the permissions of the packer-post-processor-vhd and packer components so that they are readable and executable by the user account attempting to create the appliance.

Solution: run the following commands in the virtual machine terminal (you may need to run as sudo or as root):

    chmod +x packer-post-processor-vhd
    chmod +x packer

After running the above commands, run the shell script for the desired VM appliance output. This should resolve the "Permission denied" error while executing the build scripts.

Error starting appliance in VirtualBox

Users have experienced this issue on the first run of the appliance, right after it has been assembled. This issue is unique to VirtualBox versions 5.0 and above.

Error message (dialog box): if you encounter this error dialog, check under the settings for the imported OVA for any errors. The issue is the result of invalid settings in the appliance configuration; check for them by navigating to the Settings menu for the appliance. The "Invalid settings detected" indicator means that when the product was assembled, some configuration settings were not applied correctly by the creation tool scripts.

Solution: hover your mouse over the settings and it will point you to the cause; in this case it is the remote monitor setup. Change the settings in Display (Remote Display tab) by unchecking the Enable Server option. Press OK after unchecking "Enable Server", and start the appliance.
In this blog we look at the installation of the ThingWorx Analytics Builder extension. This is intended as a guideline; make sure to check the Help Center for your release, as the steps vary between versions. The installation is divided into three parts:
1. Introduction and import of the extension into the ThingWorx platform. Video Link : 1568
2. Configuration of the extension. Note: for release 8.1, the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1 between 00:12 and 00:40 for the up-to-date menu selection. Video Link : 1572
3. Installation of the UploadThing module. Note: this step no longer applies as of ThingWorx Analytics 8.1. Video Link : 1573

Useful links:
- PTC Download page for ThingWorx Analytics
- PTC Reference Document page for ThingWorx Analytics
- How to copy files from Windows to Linux ?
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see How to score new data in ThingWorx Analytics 8.3.x ?

Overview

The main steps are as follows:
1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on the new data

TWA models are dataset-centric, which means a model created with one dataset cannot be reused with a different dataset. In order to score new data, a specific feature, record purpose in the example below, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset, but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only the new records.

Process

1. Create a dataset. This example uses the beanpro demo dataset. Creating the dataset is done through a POST on the datasets REST API.
2. Configure the dataset. This is done through a POST on the <dataset>/configuration REST API.
3. Upload data.
4. Optimize the dataset.
5. Create filters. The dataset includes a feature named record purpose, created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which allows a scoring job to be limited to those filtered new rows. Two filters are created: one for the training data and one for the new scoring data.
6. Train the model. This is done through a POST on the <dataset>/prediction API.
7. Score the training data. This is done through a POST on the <dataset>/predictive_scores API. Note the use of the TrainingData filter created earlier, which scores only the rows with training as the value of the record purpose feature. (Scoring could also be done without a filter at this stage, in which case all the data in the dataset is scored, not just the rows with training for record purpose.) Retrieving the scoring results shows all the records in the dataset.
8. Upload new data. The newly uploaded CSV file should contain only new records; these are appended to the existing ones. Note that the new record (there could be more than one) has the value scoringnew for the record purpose feature. This allows the previously created ScoringNewData filter to be used, so that a new scoring job only takes this new record into account.
9. Score the new data. A POST on the predictive_scores API is executed, this time using the ScoringNewData filter. As a result, only the newly added data is scored, which also gives a much quicker execution time. Retrieving the scoring results shows only the new record.
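A condensed sketch of this sequence with Python's requests library, using the endpoint names cited above. The base path, authentication, filter endpoint and payload shapes are assumptions; match them to the API documentation of your TWA installation.

    import requests

    BASE = "http://twa.example.com:8080/1.0/datasets"   # assumption: TWA server base path
    AUTH = {"Content-Type": "application/json"}          # assumption: add your auth header here

    # Steps 1-2: create and configure the dataset
    requests.post(BASE, headers=AUTH, json={"datasetName": "beanpro"})
    requests.post(f"{BASE}/beanpro/configuration", headers=AUTH,
                  json={})   # body: the dataset metadata JSON

    # Step 5: filters isolating training rows and new scoring rows
    # (endpoint and expression syntax are assumptions)
    for name, expr in [("TrainingData", "record_purpose = 'training'"),
                       ("ScoringNewData", "record_purpose = 'scoringnew'")]:
        requests.post(f"{BASE}/beanpro/filters", headers=AUTH,
                      json={"filterName": name, "filterExpression": expr})

    # Step 6: train, then steps 7/9: score through the filters created above
    requests.post(f"{BASE}/beanpro/prediction", headers=AUTH,
                  json={"goalField": "<goal>"})
    job = requests.post(f"{BASE}/beanpro/predictive_scores", headers=AUTH,
                        json={"filterName": "ScoringNewData"})
    print("scoring job:", job.json())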
Continuing our series on troubleshooting ThingWorx Analytics installations, in this IoT Tech Tip we cover two issues that many users have encountered.

Error 1069 encountered with native Windows installation of ThingWorx Analytics 8.2

In some instances, when a user successfully installs ThingWorx Analytics Server (TWAS) on a Windows Server operating system, they encounter an Error 1069: The service did not start due to logon failure. This can occur with any individual service created by the installation; the following fix should address the issue.

Primary reason this happens: this error can be encountered when the user provides incorrect credentials for associating the services to an account during installation. TWAS 8.2 includes a utility that enables the user to change the account associated with the services. It is important that the user provides the password for the user account on Windows, and not the user/password combination for the ThingWorx Foundation Platform server.

Steps to fix the issue:

Solution 1: open a Command Prompt as administrator, via Start Menu > Run > type CMD, then right-click cmd.exe and Run As Administrator. In the elevated command prompt, change your directory to the ThingWorxAnalyticsServer/bin directory; for example, with the default installation path:

    cd "C:\Program Files (x86)\ThingWorxAnalyticsServer\bin"

Then execute changeServiceUserAccount.bat <username>, for example:

    changeServiceUserAccount.bat user1

You will be prompted to change the password for the user.

Solution 2: if Solution 1 does not resolve the issue, you can alternatively change the Log On properties manually for each of the services (changeServiceUserAccount.bat would do this via script, but on occasion the manual route may work). Open the Control Panel and navigate to Services, for example: Control Panel > All Control Panel Items > Administrative Tools. Right-click each individual service, go to Properties > Log On tab, and enter the account name and password for the local account. Note: the Local System account will not resolve this issue.

This issue was resolved in the ThingWorx Analytics Server 8.3 release, where all services are associated with the Network Service account. More information can be found in this Knowledge Article.

Uploading of a dataset hangs or does not complete in ThingWorx Analytics 8.3

On occasion, after a fresh installation of ThingWorx Analytics Server 8.3 on a Windows Server operating system, a dataset will not complete its upload. Typically no error message is displayed, and the upload wizard UI simply hangs on the upload progress after:

    Creating copy of Configuration File...
    Submitting Create Dataset request...
    Creating copy of Data File...

Primary reason this happens: this is caused by the twas-zookeeper service being stuck in a PAUSED state, meaning that twas-zookeeper did not start after the installation.

Steps to fix the issue: double-check that the JAVA_HOME variable was defined as a system variable; pages 12-14 of the ThingWorx Analytics installation guide outline the required prerequisites. You can change this in Control Panel > System > Advanced Settings > Environment Variables, and add a new variable named JAVA_HOME under System Variables. Its value should be the location of your Java deployment, typically C:\Program Files\Java\<jre or jdk>_<version number>.

More information can be found in this Knowledge Article.
Starting with the 8.1 release, the architecture of ThingWorx Analytics has changed from a single server to several independent microservices. This has been done to allow services to run concurrently, and it prevents issues with one microservice from affecting the others.

Overview

The new Analytics Server architecture consists of a suite of 9 microservices:
- Data
- Clustering
- Profiling
- Signals
- Training
- Prediction
- Validation
- Prescriptive
- Results

All of the microservices work together to create an experience similar to what users had in the past. The data that is uploaded to and generated by the Analytics Server is stored directly in a file system, instead of a Postgres database as it was before.

Closer integration with ThingWorx

Please note that ThingWorx Foundation must be installed and operating before installing Analytics. During the install you will be asked to supply the IP address of the ThingWorx instance that will be used for Analytics. At this step, the AnalyticsServerThing is configured, which allows the user to interact with Analytics Server through ThingWorx. All of the configured microservices are represented as Things under the AnalyticsServerThing, because ThingWorx Analytics has become a native part of ThingWorx Foundation functionality and depends on ThingWorx for user interaction. As a result of these changes, there is no longer a direct ThingWorx Analytics Server REST API: support for accessing the services via REST calls is now provided through the ThingWorx Core REST API layer, so a new URI pattern is required moving forward. Another update from the older versions is that application keys and application IDs are no longer required. This should come as a welcome relief, as application keys and IDs were a source of issues for users who misplaced them.

Less data-centric

In the old versions, jobs, models, signals, etc. were all tied to the dataset, so there was no way to move a model from one dataset to another. With the new architecture this is no longer the case: you are able to move a model from one dataset to another seamlessly. Please note that when moving a model between datasets, both datasets must have the same metadata, since a model created to increase efficiency in a factory would provide no insight on a dataset that monitors soil moisture in a corn field.

Updates to metadata

Although going over the exact changes to the metadata is out of scope for this post, they are worth mentioning. For more details on the changes, please follow this link.

Summary

In conclusion, the new architecture of ThingWorx Analytics was introduced to increase scalability and to produce a more robust system. The new release is much more integrated into the ThingWorx platform to increase ease of use compared with previous releases, and it is much less data-centric than in the past, geared more toward the solutions themselves.
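To illustrate the new URI pattern, here is a minimal sketch of invoking an analytics microservice as a service on a Thing through the ThingWorx Core REST API, authenticated with a standard appKey (no separate TWA application ID). The Thing name follows the default naming under AnalyticsServerThing and the service name is illustrative; list the Things in your installation to confirm both.

    import requests

    resp = requests.post(
        "http://twx.example.com:8080/Thingworx"
        "/Things/AnalyticsServer_TrainingThing/Services/GetJobs",  # assumption: a job-listing service
        headers={"appKey": "<application-key>",
                 "Content-Type": "application/json",
                 "Accept": "application/json"},
        json={})
    resp.raise_for_status()
    print(resp.json())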
Overview

Time-series predictive models generated by ThingWorx Analytics Server in Analytics Manager will have additional columns in the DataShape generated by ThingPredictor. These columns are known as "transformation fields" and are used for internal processing, but they are not necessary for inclusion in the DataShape. There is therefore no need to map all these additional fields, as this is handled internally by ThingPredictor. There is one additional step that the user must take, detailed below.

Step to import: edit the DataShape generated by ThingPredictor to match the format of the data that was provided during the model training process. In other words, remove all the transformation fields from the DataShape.
Here is a tutorial explaining the process of uploading a PMML file from an external system to ThingWorx Analytics. The tutorial steps are explained in the attached PDF, and all referenced files can be found in the attached ZIP.
This blog presents a simple Java utility to validate the deployment of ThingWatcher. It is important to note that the utility is not a real-life situation; the intent is to keep it as simple as possible in order to achieve its aim: validation of the deployment. An understanding of a Java IDE (such as Eclipse) is necessary in order to run the utility with the relevant dependency and classpath setup; those are beyond the scope of this posting.

We will cover the following points:
- Pre-requisites
- Using the sample utility
- Code walkthrough
- Validate training job creation
- Validate model job creation
- Update for ThingWorx Analytics 8.0

Pre-requisites

Strict adherence to the ThingWatcher deployment guide is recommended in order to first deploy the training and model microservices, as well as to familiarize yourself with the ThingWatcher APIs. Prior to testing ThingWatcher, both the training and model microservices should be up and running. The media for ThingWatcher (including the model and training microservices) should be downloaded from the PTC Software Download page. The commands to deploy the microservices vary depending on the platform used and are presented in the ThingWatcher deployment guide. As a reference example, on Windows the commands will be similar to the following:

1. Start Docker: Start > Program > Docker > Docker Quick Start Terminal

2. Load the model microservice tar:

    $ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\ModelService\ModelService\model-service.tar"

3. Install the model service:

    $ docker run -d -p 8080:8080 -v '/d/TWatcherStorage/model:/data/models' -v '/d/TWatcherStorage/db:/tmp/' twxml/model-service:1.0 -Dfile.storage.path=/data/models -jar maven/model-1.0.jar server maven/standalone-evaluator.yml

4. Load the training microservice tar file:

    $ docker load < "D:\PTC\MED-61147-CD-522_F000_ThingWorx-Analytics-ThingWatcher-52-2\components\TrainingService\TrainingService\training-service.tar"

5. Install the training service:

    $ docker run -d -p 8090:8080 twxml/training-service:1.0.0 -Dmodel.destination.uri=model://192.168.99.100:8080/models -jar maven/training-standalone-1.0.0-bin.jar server /maven/training-standalone-single.yml

Note: -Dmodel.destination.uri points here to the model microservice host. To find the IP address, enter docker-machine ip on the model microservice Docker machine.

6. Validate the microservices deployment: execute docker ps and confirm that both services are up, as in the following example:

    CONTAINER ID   IMAGE                          COMMAND                  CREATED      STATUS          PORTS                              NAMES
    5b6a29b95611   twxml/training-service:1.0.0   "java -Dmodel.destina"   13 days ago  Up 44 minutes   8081/tcp, 0.0.0.0:8090->8080/tcp   modest_albattani
    8c13c0bc910e   twxml/model-service:1.0        "java -Dfile.storage."   2 weeks ago  Up 44 minutes   0.0.0.0:8080->8080/tcp, 8081/tcp   thirsty_ptolemy

Using the sample utility

1. Download the attachment Main.java.
2. Import Main.java into Eclipse (or the IDE of your choice) with the ThingWatcher dependencies added to the classpath.
3. Update the trainingBaseURI (see below) to point to the training microservice.
4. The utility should be ready to execute.
Code walkthrough

The code declares a ThingWatcher in the following snippet:

    ThingWatcher thingwatcher = new ThingWatcherBuilder()
        .certainty(90.0)
        .trainingDataDuration(60)
        .trainingDataDurationUnit(DurationUnit.SECOND)
        .trainingBaseURI("http://192.168.99.100:8090/training")
        .getThingWatcher();

In the above code, it is important to update the trainingBaseURI argument with the correct IP address and port of the training microservice host. The code then loops 10000 times and sends a new value, simulating sensor data at a 100 ms interval. The value is computed as Math.sin(i) for the whole calibrating phase and for most of the monitoring phase too. We artificially introduce an anomaly by sending a value of Math.incrementExact(i) between the 9000th and 9900th iterations. During the monitoring phase, the code logs the value, the anomalous status, and the ThingWatcher state.

It is advised to save the output to a file in order to review the logging once the utility has run. In Eclipse this can be done by right-clicking Main.java > Run As... > Run Configuration > Common, ticking Output File under Standard Input and Output, and specifying a location for the output file. A review of the output log file will show that somewhere between timestamps 900000 and 990000, isAnomalousValue is true. Note that this does not start and end exactly at 900000 and 990000, as ThingWatcher needs a few occurrences before reporting an anomaly. Sample output indicating an anomalous state:

    [main] INFO com.thingworx.analytics.demo.Main - Value = 901700,9017.0,-9016.403802019577
    [main] INFO com.thingworx.analytics.demo.Main - isAnomalousValue = true
    [main] INFO com.thingworx.analytics.demo.Main - ThingWatcherStat = MONITORING

As part of validating the successful deployment of ThingWatcher, it is recommended to validate the correct creation of a training job and a model job.

Validate training job creation

In order to validate the successful creation of a training job, execute a GET request to the training microservice: http://192.168.99.100:8090/training (update the IP address to the one on your system). This should return a COMPLETED job.

Validate model job creation

In order to validate the successful creation of a model job, execute a GET request to http://192.168.99.100:8080/models (update the IP address to the one on your system) to see all the models that have been created. Alternatively, use the URI reported in the training job output, here http://192.168.99.100:8080/models/6/pmml.xml, to see the complete model definition. When this sample test runs correctly, the ThingWatcher deployment has been validated.

Update for ThingWorx Analytics 8.0

For deploying the microservices, see Video Link : 1937. For updated Java code, see Does anyone know how to use java api to achieve anomaly detection with Thingwatcher8.0?

To note: the utility provided is for testing purposes only. The code does not represent any kind of best practice and is not meant to be a perfect Java coding example. It is provided as is, with no guarantee.
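As a small companion to the two validation steps above, here is a sketch that queries both microservices and prints the results, assuming the same docker-machine IP and ports used in this post (Python with the requests library; adjust the addresses to your system).

    import requests

    TRAINING = "http://192.168.99.100:8090/training"   # training microservice (as above)
    MODELS = "http://192.168.99.100:8080/models"       # model microservice (as above)

    # A COMPLETED entry here confirms the training job was created and finished.
    print("training jobs:", requests.get(TRAINING).json())

    # Each model listed here links to its full PMML definition
    # (e.g. .../models/6/pmml.xml, as in the training job output).
    print("models:", requests.get(MODELS).json())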
In this video we cover the installation of the UploadThing module. This video applies to ThingWorx Analytics 52.2 through 8.0; it is no longer applicable as of ThingWorx Analytics 8.1.

Useful links:
- How to copy files from Windows to Linux
- Updated link for access to this video: Installing Thingworx Analytics Builder: Part 3 of 3
There are four types of analytics. This post covers Descriptive analytics: What happened?

Descriptive analytics is a preliminary stage of data processing that creates a summary of historical data to yield useful information and possibly prepare the data for further analysis. These analytics use data aggregation and data mining to provide insight into the past and answer: "What has happened?" Descriptive analysis, or statistics, does exactly what the name implies: it describes, or summarizes, raw data and makes it interpretable by humans.

Descriptive analytics describe the past, where the past refers to any point in time that an event has occurred, whether one minute ago or one year ago. They are useful because they allow us to learn from past behaviors and understand how they might influence future outcomes. The vast majority of the statistics we use fall into this category (think basic arithmetic like sums, averages, and percent changes). Usually, the underlying data is a count or aggregate of a filtered column of data to which basic math is applied; for all practical purposes, there is an infinite number of these statistics. Descriptive statistics are useful to show things like total stock in inventory, average dollars spent per customer, and year-over-year change in sales. Common examples of descriptive analytics are reports that provide historical insight into a company's production, financials, operations, sales, finance, inventory and customers.

Note: use descriptive analytics when you need to understand, at an aggregate level, what is going on in your company, and when you want to summarize and describe different aspects of your business.

Different techniques of descriptive analytics:
- Sampling
- Mean
- Mode
- Median
- Standard deviation
- Range and variance
- Stem and leaf diagram
- Histogram
- Quartiles
- Frequency distributions

Use of descriptive analytics in ThingWorx Analytics:

Signal detection: when analyzing volumes of data, it is helpful to know which data is actually useful and which is just noise. Signals are based on a correlation algorithm that examines historical data to identify the strength of a given input in predicting future outcomes; they can identify meaningful correlations within the data. Signals are useful during initial analysis to determine which features you want to curate in a given dataset for predictive model generation. For example, knowing the month of the year is more important to accurately predicting tomorrow's weather than knowing the day of the week: the month has a much stronger signal than the day of the week for this prediction. ThingWorx Analytics reports signal strength as a mutual information (MI) score that represents the probability of predicting the goal variable when a given feature is provided, and it can effectively capture non-linear relationships. ThingWorx Analytics evaluates each feature, or combination of features, to identify the top signals.

Cluster analysis: cluster analysis categorizes data into groups based on similarities relative to a goal variable. Like a clique, objects in a cluster minimize intra-distances (distances within the cluster) while maximizing inter-distances (distances between clusters). Clusters are mutually exclusive, meaning that each record can belong to only one cluster. However, ThingWorx Analytics supports a user-defined cluster hierarchy that can include sub-clusters inside other clusters.
The higher the number of clusters in the data, the smaller each cluster's population will be, but the stronger the potential insights can be.

How to access descriptive analysis functionality via ThingWorx Analytics:
- REST API service: using a REST client, you can access the Signals service and the Clusters service. Each service includes a series of API endpoints to submit analysis requests, retrieve results, list jobs, and more. Requires installation of the ThingWorx Analytics Server. A hedged example request is sketched after this list.
- Analytics Builder: as part of the ThingWorx Analytics extension, Analytics Builder provides a user interface for interacting with your data. In addition to generating and scoring predictive models in Analytics Builder, you can also run procedures to generate signals.

How to avoid mistakes; useful tips for the different techniques of descriptive analytics:
- Crystallize the research problem and make it operable.
- Read the literature on data analysis techniques.
- Evaluate the various techniques that can do similar things with respect to the research problem.
- Know what a technique does and what it doesn't.
- Consult people, especially your supervisor.
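As referenced above, here is a minimal sketch of submitting a signals request through the REST API service. The Thing name (AnalyticsServer_SignalsThing), service name (CreateJob) and parameter names are assumptions; check the ThingWorx Analytics Interactive API Guide for your release.

    import requests

    r = requests.post(
        "http://twx.example.com:8080/Thingworx"
        "/Things/AnalyticsServer_SignalsThing/Services/CreateJob",
        headers={"appKey": "<application-key>",     # assumption: a valid application key
                 "Content-Type": "application/json",
                 "Accept": "application/json"},
        json={"datasetUri": "dataset:/<dataset-name>",  # dataset to analyze
              "goalField": "<goal>"})                   # signals are ranked by MI against this goal
    r.raise_for_status()
    print("signals job:", r.json())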
In this video we cover the process of installing ThingWorx Analytics Server 52.1. Make sure you have reviewed the Part 1 video covering the prerequisites.

Updated link for access to this video: Installing ThingWorx Analytics Server: Part 2 of 2
The video below walks through how to publish a model from Analytics Builder into Analytics Manager, using the connector named TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
This video gives an introduction to the Descriptive Services:
- what they are
- how to install them
- how to configure them
- how to use them
Welcome to the ThingWorx Analytics Training Course! Through these 11 modules, you will learn all about the functionality of this software, as well as techniques to help you build a successful and meaningful predictive analytics application.