IoT Tips

In this video we cover: a short introduction to ThingWorx Analytics Builder, and the import of the ThingWorx Analytics Builder extension. This video applies to ThingWorx Analytics 52.1 through 8.1. Updated link for access to this video: Installing ThingWorx Analytics Builder: Part 1 of 3
Attached is a description of Ensemble Learning Techniques.
Best Practices in Data Preparation for ThingWorx Analytics

Data preparation is an important phase in the process of data analysis when using ThingWorx Analytics (TWA). Essentially, it is taking your data from the raw state in which it was gathered, through your operational system or from your data warehouse, to a state in which it is ready to be analyzed. In this document we use "Talend Data Preparation Free Desktop" as a tool to illustrate some examples of the data preparation process. The tool can be downloaded from the following link: https://www.talend.com/products/data-preparation (you could also choose to use another tool). We also use the BeanPro dataset in the examples and illustrations.

Checking data formats
The analysis starts with a raw data file, and the user needs to make sure that the data files can be read. Raw data files come in many different shapes and sizes; for example, spreadsheet data is formatted differently from web data or data collected from sensors, and so forth. The only data format ThingWorx Analytics accepts is CSV, so the retrieved data needs to be put into that format before it can be uploaded to TWA. (Data example: BeanPro dataset.) After that is done, the user needs to actually look at what each field contains; for example, a field listed as a character field could actually contain non-character data.

Verify data types
Verify the data types for each feature or field in the dataset used. All data falls into one of four categories that affect what sort of analytics can be applied to it:
- Nominal data is essentially just a name or an identifier.
- Ordinal data puts records into order from lowest to highest.
- Interval data represents values where the differences between them are comparable.
- Ratio data is like interval data except that it also allows for a value of 0.
It is important to understand which categories your data falls into before you feed it into ThingWorx Analytics. For example, when doing predictive analytics, TWA will not accept a nominal data field as the goal; the goal feature has to be of a numerical, non-nominal type, so this needs to be confirmed at an early stage.

Creating a data dictionary
A data dictionary is a metadata description of the features included in the dataset. When displayed, it consists of a table with 3 columns:
- The first column is a label: the name of a feature, or a combination of multiple (up to 3) features, which are fields in the dataset. It points to "fieldname" in the configuration json file.
- The second column is the data type attached to the label (Integer, String, Datetime, ...). It points to "dataType" in the configuration json file.
- The third column is a description of the feature related to the label used in the first column. It points to "description" in the configuration json file.
In the context of TWA, this metadata is represented by a data configuration "json" file that is uploaded before the dataset itself is uploaded. Sample of BeanPro dataset configuration file:
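As an illustration only (not the actual BeanPro sample), here is a rough sketch of how such a configuration file could be generated with Python. The keys fieldname, dataType and description are the ones described above; the feature names, the data type spellings and the overall file layout are assumptions that should be checked against the configuration format documented for your ThingWorx Analytics release.

import json

# Hypothetical data dictionary entries for a few BeanPro-style features.
# Only the keys fieldname / dataType / description come from the description above;
# everything else (feature names, type names, list layout) is illustrative.
fields = [
    {"fieldname": "AVG Technician Tenure", "dataType": "DOUBLE",
     "description": "Average tenure of the technicians, in years"},
    {"fieldname": "Region", "dataType": "STRING",
     "description": "Region the record belongs to"},
    # ... one entry per feature in the dataset
]

# Write the configuration json that is uploaded to TWA before the dataset itself.
with open("beanpro_config.json", "w") as f:
    json.dump(fields, f, indent=2)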
Verify data accuracy
Once it is confirmed that the data is formatted in a way that is acceptable to TWA, the user still needs to make sure it is accurate and that it makes sense. This step requires some knowledge of the subject area the dataset is related to. There isn't really a cut-and-dried approach to verifying data accuracy; the basic idea is to formulate some properties that you think the data should exhibit and test the data to see if those properties hold. Are stock prices always positive? Do all the product codes match the list of valid ones? Essentially, you are trying to figure out whether the data really is what you have been told it is.

Identifying outliers
Outliers are data points that are distant from the rest of the distribution: they are either very large or very small values compared with the rest of the dataset. Outliers are problematic because they can seriously compromise the training models that TWA generates. A single outlier can have a huge impact on the value of the mean, and because the mean is supposed to represent the center of the data, in a sense this one outlier renders the mean useless. When faced with outliers, the most common strategy is to delete them. (Example: the effect of an outlier in the feature "AVG Technician Tenure" in the BeanPro dataset, comparing the dataset with no outlier against the dataset with an outlier.)

Deal with missing values
Missing values are one of the most common (and annoying) data problems you will encounter. In TWA, dealing with null values is done by one of the methods below; a short code sketch follows this section.
- Dropping records with missing values from your dataset. The problem with this is that missing values are frequently not just random little data glitches, so this should be considered only as a last option.
- Replacing the null values with the average of the responses from the other records of the same field.

Transforming the dataset
- Selecting only certain columns or records to load that are relevant, e.g. skipping records where salary is not present (salary = null).
- Translating coded values (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F").
- Deriving a new calculated value (e.g., sale_amount = qty * unit_price).
- Transposing or pivoting (turning multiple columns into multiple rows or vice versa).
- Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns).

Please note: issues with Talend should be reported to the Talend team, and data preparation is outside the scope of PTC Technical Support, so please use this article as an advisory best-practices document.
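For readers who prepare their CSV files programmatically, here is a minimal sketch of the outlier and missing-value handling described above, using Python and pandas. The column name comes from the BeanPro example; the 3-standard-deviation cut-off, the mean imputation and the file names are illustrative assumptions only.

import pandas as pd

# Load the raw CSV (CSV is the only format TWA accepts); file name is a placeholder
df = pd.read_csv("beanpro.csv")

# Identify outliers: here, rows more than 3 standard deviations from the mean
col = "AVG Technician Tenure"
mean, std = df[col].mean(), df[col].std()
outliers = df[(df[col] - mean).abs() > 3 * std]
print(len(outliers), "outlier rows found in", col)
df = df.drop(outliers.index)              # most common strategy: delete them

# Deal with missing values: either drop the records (last option) ...
# df = df.dropna(subset=[col])
# ... or replace nulls with the average of the other records in the same field
df[col] = df[col].fillna(df[col].mean())

df.to_csv("beanpro_prepared.csv", index=False)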
The following is valid for ThingWorx Analytics (TWA) 52.0.2 through 8.0. For release 8.3.0 and above, see "How to score new data in ThingWorx Analytics 8.3.x?".

Overview
The main steps are as follows:
1. Create a dataset
2. Configure the dataset
3. Upload data to the dataset
4. Optimize the dataset
5. Create filters for training and scoring data
6. Train the model
7. Execute scoring on existing data
8. Upload new data to the dataset
9. Execute scoring on the new data

TWA models are dataset-centric, which means a model created with one dataset cannot be reused with a different dataset. In order to be able to score new data, a specific feature, record purpose in the example below, is included in the dataset. This feature needs to be included from the beginning, when the data is first uploaded to TWA. A filter on that feature can then be created to isolate the desired data. When new data comes in, it is added to the original dataset but with a specific value for the filtered feature (record purpose), which makes it possible to discriminate and score only those new records.

Process
1. Create a dataset. This example uses the BeanPro demo dataset. Creating the dataset is done through a POST on the datasets REST API.
2. Configure the dataset. This is done through a POST on the <dataset>/configuration REST API.
3. Upload data.
4. Optimize the dataset.
5. Create filters. The dataset includes a feature named record purpose, created especially to differentiate between the rows to be used for training and the rows to be used for scoring. New data to be added will have record purpose set to scoringnew, which makes it possible to execute a scoring job limited to those filtered new rows. Create one filter for the training data and one for the new scoring data.
6. Train the model. This is done through a POST on the <dataset>/prediction API.
7. Score the training data. This is done through a POST on the <dataset>/predictive_scores API. Note the use of the filter TrainingData created earlier; it restricts scoring to the rows whose record purpose value is training. Scoring could also be done without a filter at this stage, in which case all the data in the dataset will be scored, not just the rows with training as the record purpose. Retrieving the scoring result then shows all the records in the dataset.
8. Upload new data. The newly uploaded csv file should contain only new records; these are appended to the existing ones. Note that the new records (there can be more than one) have the value scoringnew for the record purpose feature, which allows the previously created filter ScoringNewData to be used so that a new scoring job only takes this new data into account.
9. Score the new data. A POST on the predictive_scores API is executed, this time using the filter ScoringNewData. As a result only the newly added data is scored, which also gives a much quicker execution time. Retrieving the scoring result shows only the new records.
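The same flow can be scripted instead of executed call by call; below is a rough sketch using Python and the requests library. The endpoint names mirror the APIs mentioned above (datasets, <dataset>/configuration, <dataset>/prediction, <dataset>/predictive_scores), but the base URL, credentials, payload fields and the upload, optimization and filter endpoints are assumptions; check them against the REST API documentation or the interactive API guide for your TWA release.

import requests

BASE = "http://twa-server:8080/1.0"   # hypothetical TWA server URL
AUTH = ("user", "password")           # hypothetical credentials

# 1. Create a dataset (POST on the datasets REST API); payload fields are assumptions
resp = requests.post(BASE + "/datasets", json={"datasetName": "beanpro"}, auth=AUTH)
dataset_uri = resp.json().get("uri", BASE + "/datasets/beanpro")   # assumed response shape

# 2. Configure the dataset (POST on <dataset>/configuration) with the metadata json
with open("beanpro_config.json", "rb") as cfg:
    requests.post(dataset_uri + "/configuration", data=cfg,
                  headers={"Content-Type": "application/json"}, auth=AUTH)

# 3. Upload the csv data and 4. optimize the dataset (endpoint names are assumptions)
requests.post(dataset_uri + "/data", files={"file": open("beanpro.csv", "rb")}, auth=AUTH)
requests.post(dataset_uri + "/optimization", auth=AUTH)

# 5. Create the two filters on the "record purpose" feature (payload is an assumption)
for name, value in [("TrainingData", "training"), ("ScoringNewData", "scoringnew")]:
    requests.post(dataset_uri + "/filters",
                  json={"filterName": name, "featureName": "record purpose", "value": value},
                  auth=AUTH)

# 6. Train the model (POST on <dataset>/prediction); goal field name is a placeholder
requests.post(dataset_uri + "/prediction", json={"goalField": "your_goal_feature"}, auth=AUTH)

# 7./9. Score, limited by a filter so that only the matching rows are scored
requests.post(dataset_uri + "/predictive_scores",
              json={"filterName": "ScoringNewData"}, auth=AUTH)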
This guide contains all the Linux commands that you may have to use for ThingWorx Analytics installation or day-to-day use.

Network/port
ip a - List the IPs of all of the network interfaces.
ssh - Jump from one machine to another.
ping - Send packets to a remote machine; useful for testing connectivity.
netstat -anp - Check active ports.
cat < /dev/tcp/localhost/8080 - Test the connection to a port. Replace localhost with the desired hostname or IP and 8080 with the desired port number (/dev/tcp/host/port).
exit - Exit the current sign-in. This lets one disconnect from remote ssh sessions, or return from a changed user (e.g. after switching to root).
scp - Retrieve something via ssh.

Resource usage
free -m - Check memory; -m gives the output in MB.
mpstat -P ALL - CPU usage.
top - Process usage.
jvmtop - Collect CPU usage of the JVM and its threads: https://github.com/patric-r/jvmtop (requires a JDK to be installed).

File interaction
cp / mv - Copy and move respectively; mv also deletes the source file. Usage: cp /source/location/file /output/location/file
cat - Mostly used to just print the contents of a file to the command line. Can also print multiple files at once: cat /var/log/gridworker/warning.log /var/log/gridworker/error.log
vim / vi - A command line text editor. Not the most user friendly (none of them are) but really useful. Here's a cheat sheet for the commands: https://www.fprintf.net/vimCheatSheet.html
rm - Remove files.
chmod - Change the access permissions of files.
chown - Change the user or group ownership of files.
grep - Text-based filtering. Useful for making a larger list smaller and more targeted. Almost always used after a pipe (see 'pipe' below).
less - Generally used to view the contents of a file with more friendly scrolling.
locate - Find a file by name.

Directory
ls - List what is in the directory. Defaults to the current directory, but you can also pass a directory, e.g. ls /var/log/tomcat. In the default coloring, black is files, blue is directories, and red is compressed files.
pwd - Print which directory you are currently in.
cd - Change directory. Provide the directory to change to, or just use cd to return to the user's home directory.
clear - Clear the terminal screen.
mkdir - Create directories.

Running processes
ps - Query what services are running. Usually use ps -aux to get a full, sorted list; combining it with grep is helpful.
systemctl - The correct way to interact with services that are running.

Package installation
yum install <packageName> - Install a package. More useful commands: https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s1-yum-useful-commands.html
yum list installed - List installed packages.
yum list <package> - List available packages.
yum --showduplicates list java-1.7.0-openjdk-devel - Use --showduplicates to see all versions; you can use * in the package name, e.g. *openjdk*.
rpm -ql <packageName> - Find where a package is installed. Note: works if the package was installed with yum.
yumdownloader --urls <packageName> - Find the URL a package is downloaded from. Note: requires the yum-utils package.
repoquery --requires <packageName> - Find the dependencies of a package. Note: requires the yum-utils package.
repoquery --qf=%{name} -g --list --grouppkgs=all [groups] | xargs repotrack -a x86_64 -p /repos/Packages - Download a package with all its dependencies. Requires the yum-utils package. From <http://unix.stackexchange.com/questions/50642/download-all-dependencies-with-yumdownloader-even-if-already-installed>

Other commands
curl http://localhost:8080/1.0/about/versioninfo - Send a REST call via the command line. Use -X POST (default is GET) for a POST (see the man page, https://curl.haxx.se/docs/manual.html, for examples). See also http://www.codingpedia.org/ama/how-to-test-a-rest-api-from-command-line-with-curl/
find / -type f -exec grep -I mystring {} \; - Search for a string in files.
sudo -u user command - Execute a command as a different user.

The helpers below are not commands themselves, but can be used in conjunction with the commands above.
'pipe' - The | character; lets one chain commands, e.g. ps -aux | grep java
./ - The shorthand way to refer to the current directory explicitly.
../ - The shorthand way to refer to the parent directory.
'tab completion' - Pressing Tab lets Linux guess what command/option best fits what is currently written. Very useful for navigating directories and long-named files. (Note: not necessarily Tab, depending on one's keyboard layout/language.)
'ctrl-r' - Look up the most likely command that matches what one is typing. So if one earlier used ps -aux | grep java | less and then hit ctrl-r and typed -aux, it would likely pull up that command, or at least the most recent one that matches.
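As a possible alternative to the curl example above, the same REST call can be sent from Python with the requests library; a minimal sketch follows, where the host, port and any required credentials are placeholders.

import requests

# Same URL as the curl example; add auth=("user", "password") if the endpoint requires it
response = requests.get("http://localhost:8080/1.0/about/versioninfo")
print(response.status_code, response.text)

# Where the curl example uses -X POST, use requests.post(url, json=payload) instead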
The following videos are provided to help users get started with ThingWorx:

ThingWorx Installation
Installing ThingWorx (Neo4j) in Windows
ThingWorx PostgreSQL Setup for Windows
ThingWorx PostgreSQL for RHEL

ThingWorx Data Storage
Introduction to Streams
Introduction to Value Streams
Introduction to DataTables
Introduction to InfoTables

ThingWorx Concepts & Functionality
Introduction to Media Entities
Using State Formatting in a Mashup
Configuring Properties

ThingWorx REST API
REST API (Part 1)
REST API (Part 2)

ThingWorx Edge SDK
Configuring File Transfer with the .NET SDK

ThingWorx Analytics *new*
Getting Started with ThingWorx Analytics Part 1
Getting Started with ThingWorx Analytics Part 2
Installing ThingWorx Analytics Builder Part 1 of 3
Installing ThingWorx Analytics Builder Part 2 of 3
Installing ThingWorx Analytics Builder Part 3 of 3
Creating Signals in the Analytics Builder
How to Access the ThingWorx Analytics Interactive API Guide

ThingWorx Widgets
How to Create and Configure the Auto Refresh Widget
How to Create and Define a Blog Widget
How to Create and Configure a Button Widget
How to Use the Divider and Shape Widgets
How to Create and Configure a Chart Widget
How to Use a Contained Mashup
How to Use the Data Filter Widget
How to Use an Expression Widget
How to Create and Configure a Gauge Widget
How to Create and Configure a Checkbox Widget
How to Use a Contained Mashup Widget
How to Use a Data Export Widget
How to Use the DateTime Picker Widget
How to Use the Editable Grid Widget
Using Fieldset and Panel Widgets
How to Use the File Upload Widget
How to Use the Folding Panel Widget
How to Use the Google Location Picker
How to Use the Google Map Widget
How to Use a Grid Widget
How to Use an HTML TextArea Widget
How to Use the List Widget
How to Use a Label Widget
How to Use the Layout Widget
How to Use the LED Display Widget
How to Use the Masked Textbox Widget
Navigation in ThingWorx: Using Menus, the Navigation Widget, Link Widget, and Contained Mashups
How to Use the Numeric Entry Widget
How to Use the Pie Chart Widget
How to Use the Property Display Widget
How to Use the Radio Button Widget
How to Use the Repeater Widget
How to Use the Slider Widget
How to Use the SQUEAL Search Widget
How to Use the Responsive Tab Widget
How to Use the Tag Cloud Widget
How to Use the Tag Picker Widget
How to Use the TextArea and TextBox Widgets
How to Use the Time Selector Widget
How to Use the Tree Widget
How to Use the Value Display Widget
How to Use the Web Frame Widget
How to Create and Define a Wiki
How to Use the XY Chart

Quick note: this thread will be updated with more videos as they are added.
Hi, I have attached a Postman collection, which can be used as a template and modified. Steps to import the collection into Postman:
1. In your Postman window, click Import.
2. Once you have clicked Import, choose your file.
3. The collection is now visible on the left side of the window.
This document provides API information for all 51.0 releases of ThingWorx Machine Learning.