
IoT & Connectivity Tips

1. Add a JSON input parameter. Example:

    {
        "rows": [
            {
                "email": "example1@ptc.com"
            },
            {
                "name": "Qaqa",
                "email": "example2@ptc.com"
            }
        ]
    }

2. Create an InfoTable with a DataShape using CreateInfoTableFromDataShape(params).

3. Using a for loop, iterate through each JSON object and add it to the InfoTable using InfoTableName.AddRow(yourRowObject). Example:

    var params = {
        infoTableName: "InfoTable",
        dataShapeName: "jsontest"
    };
    var infotabletest = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // Note the [i] index: each iteration copies one JSON row into the InfoTable
    for (var i = 0; i < json.rows.length; i++) {
        infotabletest.AddRow({
            name: json.rows[i].name,
            email: json.rows[i].email
        });
    }
View full tip
The following code snippet retrieves a month's worth of audit data from the system and returns it as a CSV document suitable for import into your spreadsheet or reporting tool of choice.

    import static com.axeda.sdk.v2.dsl.Bridges.*
    import com.axeda.drm.sdk.Context
    import com.axeda.common.sdk.id.Identifier
    import com.axeda.services.v2.*
    import com.axeda.sdk.v2.exception.*

    def ac = new AuditCriteria()
    ac.fromDate = Date.parse('yyyy-MM-dd', '2017-03-01')
    ac.toDate   = Date.parse('yyyy-MM-dd', '2017-03-31')

    def retString = ''
    tcount = 0
    // Page through the results until every audit record has been seen
    while ( (results = auditBridge.find(ac)) != null && tcount < results.totalCount ) {
      results.audits.each { res ->
        retString += "${res?.user?.id},${res?.asset?.serialNumber},${res?.category},${res.message},${res.date}\n"
        tcount++
      }
      ac.pageNumber = ac.pageNumber + 1
    }

    return retString
View full tip
This uses the simplest structure for looping through an InfoTable. It avoids row indexes entirely and cleans up the code for readability as well.

    // Assume an incoming InfoTable parameter named "thingList"
    for each (row in thingList.rows) {
        // Each row is already assigned to the row variable by the loop
        var thingName = row.name;
    }

You can also nest these loops (just use a different variable from "row" for the inner loop; see the sketch below). Also important: do not add or remove rows of the InfoTable inside the loop. If you do, you may end up skipping or repeating rows, since the indexes will have changed.
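As a minimal sketch of the nesting point above (the second InfoTable, "propertyList", and its fields are hypothetical):

    // The inner loop uses a different variable ("propRow") than the outer one
    for each (row in thingList.rows) {
        for each (propRow in propertyList.rows) {
            if (propRow.owner === row.name) {
                // Process the property rows belonging to this Thing
                var value = propRow.value;
            }
        }
    }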
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
1. Create a network and add all entities that implement a specific ThingShape to the network.
2. Create a ThingShape mashup as below. Note: bind the Entity parameter to the DynamicThingShapes_TractorShape service GetProperties input EntityName. Also bind the mashup's RefreshRequested event to that service.
3. Create a mashup named ContentShape, and add a Tree widget and a Contained Mashup widget to it.
4. Bind the GetNetworkConnection service's Selected Row(s) result and SelectedRowsChanged event to the Contained Mashup widget. Note: a Master mashup can entirely replace the ThingShape mashup; we suggest using a Master from ThingWorx 6.0 onward.
View full tip
One of the issues we encountered recently was that we could not establish a VNC remote session. The Edge device was located outside of the internal network where Tomcat was hosted, and all access to the instance went through an Apache reverse proxy. The EMS was able to connect successfully to the server because Apache had WebSocket forwarding set up correctly through the following directive:

    ProxyPass "/Thingworx/WS/" "wss://192.168.0.2/Thingworx/WS"

However, we saw that tunnels closed immediately after creation, and as a result (or so we thought) we could not connect from the HTML5 VNC viewer. More diagnostics revealed that you need ProxyPass directives for the following as well:

- The EMS makes calls to another WebSocket endpoint, called WSTunnelServer. Once this is set up, the EMS is able to create tunnels to the server.
- The HTML5 VNC page makes a WebSocket call to yet another endpoint, called WSTunnelClient. Only after this step can you successfully use tunnels through a reverse proxy.

A sketch of the resulting configuration follows below. Hope it helps!
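As a minimal sketch, the full set of directives might look like the following (the backend IP and the exact endpoint paths are assumptions; verify them against your ThingWorx version and deployment):

    # EMS <-> platform WebSocket traffic (the original directive)
    ProxyPass "/Thingworx/WS/" "wss://192.168.0.2/Thingworx/WS"
    # Endpoint the EMS calls to create tunnels
    ProxyPass "/Thingworx/WSTunnelServer/" "wss://192.168.0.2/Thingworx/WSTunnelServer"
    # Endpoint the HTML5 VNC viewer connects to from the browser
    ProxyPass "/Thingworx/WSTunnelClient/" "wss://192.168.0.2/Thingworx/WSTunnelClient"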
View full tip
Concepts of Anomaly Detection used in ThingWatcher

ThingWatcher is based on anomaly detection with the normal distribution. What does that mean? Normally distributed metrics follow a set of probabilistic rules. Upcoming values that follow those rules are recognized as "normal" or "usual", whereas values that break those rules are recognized as unusual.

What is a normal distribution?

A normal distribution is a very common probability distribution; in real life it approximates many natural phenomena. A data set is known as "normally distributed" when most of the data aggregates around its mean in a symmetric way, and its extreme values get less and less likely to appear.

Example: when a factory makes 1 kg sugar bags, it doesn't always produce exactly 1 kg. In reality the weight is around 1 kg: most of the time very close to it, and very rarely far from it. Indeed, the production of 1 kg sugar bags follows a normal distribution.

Mathematical rules

When a metric is normally distributed it follows some interesting laws, as the sugar bag example does:

- The mean and the median are the same (both equal to 1000 g here), because of the perfectly symmetric "bell shape".
- The standard deviation, called sigma (σ), defines how the normal distribution is spread around the mean. In this example, σ = 20 g.
- 68% of all values fall within [mean − σ; mean + σ]; for the sugar bags, [980; 1020].
- 95% of all values fall within [mean − 2σ; mean + 2σ]; for the sugar bags, [960; 1040].
- 99.7% of all values fall within [mean − 3σ; mean + 3σ]; for the sugar bags, [940; 1060].

The last three rules are known as the 68–95–99.7 rule, also called the three-sigma rule of thumb.

When the rules get broken: it's an anomaly

As stated above, once a metric has been shown to be normally distributed, it follows a set of rules, and those rules become the model representing the normal behavior of the metric. Under normal conditions, upcoming values match the normal distribution and the model is followed. But what happens when the rules get broken? That is when something unusual is happening. In theory, no value is impossible in a normal distribution: if the weights of the sugar bags were truly normally distributed, we would find a bag of 860 g roughly once every billion bags. In practice, we approximate the sugar bag example as normally distributed, and such near-impossible values are treated as impossible.

Techniques of anomaly detection

Technique 1: outlier values. A near-impossible value can be considered an anomaly. When a value deviates too much from the mean, say by more than ±4σ, we can treat it as anomalous (this limit can also be derived from percentiles). Sugar bags weighing less than 920 g or more than 1080 g are considered anomalous; chances are there is a problem in the production chain. This provides a simple way to define maximum and minimum thresholds.

Technique 2: detecting change in the normal distribution. Technique 1 is fast and needs only a few points, but it cannot catch anomalies that drift from one sigma band to another in a seemingly usual manner. To detect this kind of anomaly, we use a "window" of the n most recent values: if the mean and standard deviation of this window deviate too much from their usual values, we can infer an anomaly. The bigger the window, the more stable the detection becomes, but the longer it takes to detect an anomaly, since more values must be aggregated first.
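As a minimal sketch of both techniques (plain JavaScript, not ThingWatcher's actual implementation; the 4σ threshold, window contents, and tolerance are assumptions drawn from the text above):

    // Technique 1: flag values more than 4 sigma from the mean
    function isOutlier(value, mean, sigma) {
        return Math.abs(value - mean) > 4 * sigma;
    }

    // Helpers: mean and standard deviation of an array
    function mean(xs) {
        return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
    }
    function stdDev(xs) {
        var m = mean(xs);
        var variance = xs.reduce(function (a, x) { return a + (x - m) * (x - m); }, 0) / xs.length;
        return Math.sqrt(variance);
    }

    // Technique 2: compare a sliding window's mean/sigma to the baseline
    function windowShifted(windowValues, baseMean, baseSigma, tolerance) {
        var m = mean(windowValues);
        var s = stdDev(windowValues);
        // Flag an anomaly if either statistic drifts too far from baseline
        return Math.abs(m - baseMean) > tolerance * baseSigma ||
               Math.abs(s - baseSigma) > tolerance * baseSigma;
    }

    // Sugar bag example: baseline mean 1000 g, sigma 20 g
    isOutlier(1085, 1000, 20);                      // true (beyond 1080 g)
    windowShifted([1035, 1041, 1038, 1044, 1039],
                  1000, 20, 1.5);                   // true (window mean drifted)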
View full tip
I had a better example; this is my original one ... If I find the other, I'll upload it as well.
View full tip
Axeda Enterprise has long provided a feature to run custom code on the server side in response to end-user requests or events triggered by data sent in by remote agents. Version 6.6 introduced Axeda Artisan, an Apache Maven-based tool that brings modern best practices to developing Axeda-based solutions: modern code editors such as Eclipse and IntelliJ, and source control tools like Git or ClearCase. One downside to Artisan, however, is that it has no export tool: no way to take the entities that already exist in an Axeda instance and save them. The attached Groovy script, GetCustomObjects.groovy, solves that problem for custom objects. It iterates over an Axeda instance and saves any CustomObjects it finds to disk, for backup or to bootstrap an Artisan project from an existing instance.

    { / } » groovy GetCustomObjects.groovy
    usage: getCustomObjects
     -acceptBadSSL          Ignore any TLS validation issues
     -h                     help
     -instance <instance>   instance name - directory to store results
     -password <password>   password
     -url <url>             url of Axeda Machine Cloud
     -username <username>   username

An example call might look like:

    { / } groovy GetCustomObjects.groovy -instance prod-instance -url https://prod-instance.example.com -username <uname> -password <pwd>

This will save all custom objects in a directory called prod-instance.
View full tip
Scoring is the process of making a prediction on the basis of the available data: assigning a predicted outcome to an individual record by running that record's conditions through the trained model. It allows you to request and retrieve individual record-level prediction scores for a defined data set and a set of prediction topics. The accuracy of a score will likely be a direct reflection of the error rate produced by the trained model.

Why a score value can exceed the min/max range of a feature

There are a few concepts to address here.

Scoring outputs: when training an analytics model, the method is to create a generalizable model from a relatively small training dataset. By its nature, we expect the training process to see a limited subset, not an exhaustive list, of all possible values, for many reasons, especially time and practicality. As such, these generalized models are expected to handle unseen data in the form of new combinations, or of values outside previously observed ranges (more on this below). One common way to see scores that exceed the ranges observed in training, assuming the goals are continuous, is to use prescriptive scoring. Prescriptive scoring attempts to find optimal values for lever (i.e. tunable) features in order to maximize or minimize score values.

Min/max constraints: these are constraints placed upon the inputs for training and the expected inputs for scoring.

- For training: if these ranges were provided as part of the upload process, training will raise exceptions for invalid data. If the ranges are not provided, they will be inferred from the data, so training will not see values outside the observed ranges.
- For scoring: validation of the ranges is performed only on the inputs, not the outputs. It is very important to note that the handling of these "constraints" depends on the data type. For categorical data (e.g. colors) and ordinal data (e.g. shirt sizes), the constraints are strict, and values not observed in training will raise exceptions during scoring. For continuous values (e.g. temperature ranges), the constraints are informational in nature. For predictive scoring, the code accepts records with values outside those ranges; the rule of thumb is that values slightly outside the ranges are acceptable, and model accuracy degrades very quickly as values stray farther from them. For prescriptive scoring, the constraints determine the acceptable range of values to try when searching for optimal values; values outside these constraints will NOT be tried. (See the sketch at the end of this tip.)

How to handle goal values while scoring

What value should the goal (objective TRUE) column take in new data that will be scored using an existing prediction model?

Dataset used to build the prediction model:

    Independent value   goal field
    -0.65               0
    -0.75               0
    -0.85               0
     0.85               1
     0.45               1
     ~~~                ~~~

New data to be scored:

    Independent value   goal field
    -0.25               ??
     0.35               ??
    -0.45               ??
     0.95               ??
     0.15               ??
     ~~~                ~~~

Scoring, by definition, does not take the goal column into consideration when being run. Seeing as the goal column above is a Boolean, we can populate the yet-to-be-scored records with either a 0 or a 1 and it won't matter when it comes to scoring.
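As a minimal sketch of the constraint behavior described above (a hypothetical helper in plain JavaScript, not the actual ThingWorx Analytics implementation):

    // Hypothetical pre-scoring check mirroring the rules above:
    // categorical/ordinal values must have been observed in training,
    // while continuous values merely warn outside the observed range.
    function checkFeature(meta, value) {
        if (meta.type === "categorical" || meta.type === "ordinal") {
            if (meta.observedValues.indexOf(value) === -1) {
                throw new Error("Unseen " + meta.type + " value: " + value);
            }
        } else if (meta.type === "continuous" && (value < meta.min || value > meta.max)) {
            // Accepted, but model accuracy degrades quickly as the
            // value strays farther from the observed range
            return "warning: outside [" + meta.min + ", " + meta.max + "]";
        }
        return "ok";
    }

    checkFeature({ type: "continuous", min: -0.85, max: 0.85 }, 0.95);             // "warning: ..."
    checkFeature({ type: "categorical", observedValues: ["red", "blue"] }, "red"); // "ok"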
View full tip
Metrics for model evaluation used in ThingWorx Analytics

In ThingWorx Analytics, we consider different kinds of metrics to evaluate our models. The choice of metric depends entirely on the type of model and on how the model will be put to use. Once you have finished building your model, these three metrics will help you evaluate its accuracy.

1. The ROC curve

To understand the ROC (Receiver Operating Characteristic) curve, start from the confusion matrix. For a probabilistic model, we get a different value for each metric at each threshold; hence, for each sensitivity we get a different specificity, and the two vary together. The ROC curve is the plot of sensitivity against (1 − specificity). (1 − specificity) is also known as the false positive rate, and sensitivity is also known as the true positive rate.

Take a threshold of 0.5 as an example (refer to the confusion matrix). In the case at hand, the sensitivity at this threshold is 99.6% and (1 − specificity) is about 60%; this pair becomes one point on the ROC curve. To reduce the curve to a single number, we compute the area under the curve (AUC). Since the area of the entire square is 1 × 1 = 1, the AUC is the ratio of the area under the curve to the total area. For the case at hand, the AUC ROC is 96.4%. A few thumb rules for AUC:

    0.90 - 1.00 = excellent (A)
    0.80 - 0.90 = good (B)
    0.70 - 0.80 = fair (C)
    0.60 - 0.70 = poor (D)
    0.50 - 0.60 = fail (F)

We fall into the excellent band for the current model, but this might simply be over-fitting; in such cases it becomes very important to have in-time and out-of-time validations.

Points to remember: a model that outputs a class (rather than a probability) is represented by a single point in the ROC plot, and such models cannot be compared with each other, since the judgment needs to be made on a single metric rather than several. For instance, a model with parameters (0.2, 0.8) and a model with parameters (0.8, 0.2) can come from the same underlying model, so these metrics should not be compared directly.

2. Root Mean Squared Error (RMSE)

RMSE is the most popular evaluation metric for regression problems. It follows the assumption that errors are unbiased and normally distributed. Key points to consider:

- The square root empowers this metric to show large deviations.
- The squared errors prevent positive and negative error values from canceling out, delivering more robust results; in other words, the metric aptly displays the plausible magnitude of the error term.
- It avoids absolute error values, which are undesirable in many mathematical calculations.
- When we have more samples, reconstructing the error distribution using RMSE is considered more reliable.
- RMSE is highly affected by outliers, so make sure you have removed outliers from your data set before using this metric.
- Compared with mean absolute error, RMSE gives higher weight to, and so punishes, large errors.

3. Pearson correlation coefficient

This metric measures how highly correlated two variables are, on a scale from −1 to +1. A Pearson correlation coefficient of +1 indicates that the data objects are perfectly positively correlated, a coefficient of −1 that they are perfectly negatively correlated, and a coefficient of 0 that they are not correlated at all. In other words, the Pearson correlation score quantifies how well two data objects fit a line.

There are several benefits to using this metric. First, the score stays accurate even when the data is not normalized, so it can be used when quantities (i.e. scores) vary. Second, the Pearson correlation score corrects for any scaling within an attribute while the final score is being tabulated, so objects that describe the same data but use different value scales can still be compared. If graphed (for example, with the scores given by two labeled critics on the axes), the plot shows how similar the scores given by both critics are for particular items. In essence, the Pearson correlation score is the ratio between the covariance of the two objects and the product of their standard deviations. In the equivalent computational form:

    r = ( Σxy − (Σx)(Σy)/N ) / sqrt( (Σx² − (Σx)²/N) · (Σy² − (Σy)²/N) )

In this equation, x and y refer to the data objects and N is the total number of attributes. (See the sketch below.)
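As a minimal sketch of how two of these metrics are computed (plain JavaScript, not the Analytics server's implementation):

    // Root Mean Squared Error between predictions and actuals
    function rmse(predicted, actual) {
        var sumSq = 0;
        for (var i = 0; i < predicted.length; i++) {
            var err = predicted[i] - actual[i];
            sumSq += err * err;  // squaring prevents +/- errors from canceling
        }
        return Math.sqrt(sumSq / predicted.length);
    }

    // Pearson correlation coefficient: covariance over the
    // product of the standard deviations (computational form)
    function pearson(x, y) {
        var n = x.length, sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (var i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        var num = sxy - (sx * sy) / n;
        var den = Math.sqrt((sxx - sx * sx / n) * (syy - sy * sy / n));
        return den === 0 ? 0 : num / den;
    }

    rmse([2.5, 0.0, 2.1], [3.0, -0.5, 2.0]);  // ≈ 0.41
    pearson([1, 2, 3, 4], [2, 4, 6, 8]);      // 1 (perfectly correlated)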
View full tip
ThingWorx Analytics Builder - Upload Data

This video walks you through how to upload data and shows the configuration settings. Please be aware that the configuration settings page shown differs in version 8.1.

Updated link for access to this video: ThingWorx Analytics Builder: Upload Data
View full tip
This is the second part of Getting Started with ThingWorx Analytics. In this video we will be using Postman.

During this video you will learn:

- Creating a dataset
- Entering the dataset configuration
- Uploading the CSV data file to the TWA server

Updated link for access to this video: Getting Started with ThingWorx Analytics Part-2
View full tip