IoT Tips
Sampling Strategies

This blog post covers the four sampling strategies that are available in ThingWorx Analytics. For each strategy it explains how the sampling runs behind the scenes, when you may want to use it, and its pros and cons.

SAMPLE_WITH_REPLACEMENT

This strategy is not often used by professionals, but it may still be useful in certain circumstances. When you sample with replacement, each randomly selected value is returned to the sample pool, so the same record can appear multiple times in your sample.

Example: Say you have a hat that contains 3 cards with different people's names on them: John, Sarah and Tom. You make 2 random selections. On the first selection you pull out the name Tom. Sampling with replacement, you put the Tom card back into the hat and then randomly select a card again. On your second selection you could get another name, like Sarah, or the same one you selected, Tom.

Pros: May find improved models in smaller datasets with low row counts.
Cons: The accuracy of the model may be artificially inflated due to duplicates in the sample.

SAMPLE_WITHOUT_REPLACEMENT

This is the default setting in ThingWorx Analytics and the sampling strategy most commonly used by professionals. After a value is randomly selected from the sample pool, it is not returned, which ensures that all the values selected for the sample are unique.

Example: Using the same hat with 3 cards (John, Sarah, Tom), you make 2 random selections. On the first selection you pull out the name Tom. Sampling without replacement, you randomly select a card from the hat again without putting the Tom card back, so your second selection can only be the Sarah or John card.

Pros: This is the most commonly used sampling strategy, and it delivers the best results in most cases.
Cons: May not be the best choice if the desired goal is underrepresented in the dataset.

UPSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is useful when the desired goal is underrepresented in the dataset. The records that represent the desired outcome of the goal are copied multiple times so they make up a larger share of the total dataset.

Example: Say you are trying to discover whether a patient is at risk of developing a rare condition, like chronic kidney failure, that affects around 0.5% of the US population. In this case, the most "accurate" model generated would simply say that no one will get the condition, and according to the numbers it would be right 99.5% of the time. In reality this is not helpful at all for the use case, since you want to know whether the patient is at risk of developing the condition. To prevent this, copies are made of the records where the patient did develop the condition so that they represent a larger share of the dataset. Doing this gives ThingWorx Analytics more examples to help it generate a more accurate model.

Pros: Patterns from the original dataset remain intact.
Cons: Longer training time.

DOWNSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is also useful when the desired goal is underrepresented in the dataset. In downsample and sample without replacement, some of the records that do not represent the desired goal outcome are removed. This increases the desired records' percentage of the dataset.

Example: Continuing the medical example above, instead of creating copies of the desired records, undesired records are removed from the dataset. This causes the records where patients did develop the condition to occupy a larger percentage of the dataset.

Pros: Shorter training time.
Cons: Patterns from the original dataset may be lost.
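To make the difference between the two basic strategies concrete, here is a minimal illustrative sketch in plain service-style JavaScript. This is not ThingWorx Analytics internals, just a toy demonstration on an array of records:

// Illustrative only: how with- and without-replacement sampling differ.
var pool = ["John", "Sarah", "Tom"];

// Sample WITH replacement: the same record can be drawn more than once
function sampleWithReplacement(records, n) {
    var sample = [];
    for (var i = 0; i < n; i++) {
        sample.push(records[Math.floor(Math.random() * records.length)]);
    }
    return sample;
}

// Sample WITHOUT replacement: each drawn record is removed from the pool
function sampleWithoutReplacement(records, n) {
    var copy = records.slice(); // work on a copy so the original pool is untouched
    var sample = [];
    for (var i = 0; i < n && copy.length > 0; i++) {
        var idx = Math.floor(Math.random() * copy.length);
        sample.push(copy.splice(idx, 1)[0]); // remove the drawn record from the pool
    }
    return sample;
}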
In this particular scenario, the server is experiencing a severe performance drop. The first step is to check the overall state of the server -- CPU consumption, memory, disk I/O. Not seeing anything unusual there, the second step is to check the ThingWorx condition through the status tool available with the Tomcat manager. Per the observation: despite 12 GB of memory being allocated, only 1 GB is in use, and a large number of threads currently running on the server are experiencing long run times (up to 35 minutes). Checking the Tomcat configuration didn't show any errors or potential causes of the problem, so moving on to the second observation, the threads need to be analyzed. One thread had been running for 200,936 milliseconds -- more than 3 minutes for an operation that should take less than a second. It was also noted that there were 93 busy threads running concurrently. Cause: concurrency on writing session variable values to the server; the threads are kept alive and block the system. Tracing the issue back to a piece of code in a service recently added to the application, the problem was solved by adding an IF condition so that the session variable values are updated only when needed. As a result, the update now only happens once a shift. Conclusion: using Tomcat to view mashup-generated threads helped identify the service involved in the root cause. The modification required to resolve it was a small code change to reduce the frequency of the session variable update.
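The fix itself was just a guard around the write. A minimal sketch of the pattern -- the shift-calculation helper and session-update service here are hypothetical placeholders for the customer's actual code, not services from the original case:

// Compute the value first, then only perform the expensive session write
// when it actually changed -- once per shift instead of on every execution
var newShift = me.ComputeCurrentShift();   // hypothetical service
if (me.lastWrittenShift !== newShift) {    // guard: skip the write when nothing changed
    me.UpdateSessionShiftVariables({ shift: newShift });   // hypothetical service doing the session write
    me.lastWrittenShift = newShift;        // remember what was last written
}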
Those who have been working with ThingWorx for many years will have noticed the work done around ingress stress testing and performance optimization. Adding InfluxDB as a time-series data persistence provider really helped level up these capabilities while simultaneously decreasing the overall resources required by the infrastructure. With this ease, however, comes a hidden challenge: query and data processing performance to work the data into something useful.

Often It's Too Much Data

In general, most customers I work with want to collect far too much data -- without knowing what it will be used for, or what processing will be required to make it usable and useful. This is a general trap in how many people envision IoT projects, being told by infrastructure providers that cloud storage and compute resources are abundant and cheap and that they should collect as much data as possible. This buildup of data means that more effort must be spent working it into something useful (data engineering / feature extraction) and addressing common data issues (quality, gaps, precision, etc.). That might be fine for mature companies with large data analytics teams; however, this is a makeup I've only seen in the largest of our customers. Some advice: figure out what you need and how you'll use it, and then collect that. Work on extracting value today rather than hoping that extra data collected now will provide some insights years from now.

Example - Problem Statement

You got your Thing Model designed and edge devices connected. Now you've got data flowing in and being stored every 5 seconds in InfluxDB. Great progress! Now on to building the applications which cover the various use cases. The raw data will most likely need to be processed, and potentially even significantly transformed into other information, to make it useful. Turning a "powered on and running" BOOLEAN into an "hour meter" INTEGER is a simple example. Then you may need to provide a report showing equipment run-time hours by day over a month. The maintenance team may also have asked you to look for usage patterns which lead to breakdowns, requiring other data points to be extracted from the initial one, like number of daily starts, average daily run time, and average time between restarts. The problem is that unless you have prepared these new data points and stored them as well (say, in a Stream), you are going to have to build these data sets on the fly, and that can be time- and resource-intensive and not give you the response time expected. As you can imagine, repeatedly querying and processing large volumes of unchanging raw data has resource and time implications -- this is why data collection and data use need to be thought about separately.

Data Engineering

In the above examples, the key is creating new data points which are calculated progressively throughout normal operation. This not only makes the information you want available when you need it, in the right format, but it also significantly reduces resource requirements by avoiding constant reprocessing of raw data. It also helps with managing data purging: as you create and store usable insights, you can eventually just archive away your old raw data streams.

Direct Database Queries vs. ThingWorx Data Services

Despite the above rule of thumb, sometimes a simple, well-structured database query can get you exactly what you need, and do so quite quickly. This is especially true for InfluxDB when working with extremely large time-series datasets. The challenge is that ThingWorx persistence providers abstract away the complexity of writing one's own database queries, so we can't easily get at the database's raw power and are forced to query back more data than needed and work it into a usable format in memory (which is not fast).

Leveraging the InfluxDB API using the ContentLoader Technique

As InfluxDB's API is 100% REST, we can access it using the built-in ThingWorx ContentLoader services (a minimal sketch follows after the links below). Check out this demonstration and explanation video where I talk about how to interact directly with InfluxDB in order to crunch massive time-series data and get back much more usable and manageable data sets. It is important to use a read-only database user here: you should never modify the ThingWorx databases, to avoid untested scenarios which may lead to data corruption.

Optimizing ThingWorx query performance with the InfluxDB REST API - YouTube
InfluxToolBox ThingWorx demo project (by T. Wobben)
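A minimal sketch of the ContentLoader technique, assuming an InfluxDB 1.x instance, a database named thingworx, and a read-only user. The host, credentials, measurement and field names are placeholders -- adapt the InfluxQL to your own schema:

// Let InfluxDB do the aggregation server-side instead of pulling raw rows into ThingWorx
var query = "SELECT MEAN(TestProperty) FROM property_history " +
            "WHERE time > now() - 30d GROUP BY time(1h)";

var result = Resources["ContentLoaderFunctions"].GetJSON({
    url: "http://influxhost:8086/query?db=thingworx&q=" + encodeURIComponent(query),
    username: "readonly_user",   // read-only credentials only -- never a read/write user
    password: "readonly_pass"
});

// InfluxDB 1.x returns JSON where results[0].series[0].values is an array of [time, mean] pairs
var values = result.results[0].series[0].values;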
In our interactions with PTC customers we often learn they have previously performed analytics modeling in Python, Matlab, R, or even built home-grown analyses in languages such as Java or C++. As expected, when adopting an Industrial Innovation Platform such as ThingWorx, which has its own ThingWorx Analytics module, customers do not want to reimplement everything from scratch; they would rather integrate their previous work into the smart applications built in ThingWorx, leveraging a combination of their existing toolset and ThingWorx Analytics modeling. That is certainly possible, and there are multiple ways to do it. In this article we focus on several general approaches, but it is important to keep in mind that language-specific approaches are also possible, and we are happy to discuss those in the specific context of the customer.

Here are five different ways to bring existing analytics into ThingWorx:

1. If the task is to reuse an existing predictive model developed in a language such as Python/R/Matlab, typically one can export that model in PMML (Predictive Model Markup Language), an XML format, and import it into ThingWorx Analytics using the AnalyticsServer_ResultsThing -> UploadModel service. Libraries such as sklearn2pmml and r2pmml can be used toward that goal. The imported model can then be used in the same fashion as a model developed in ThingWorx Analytics to power smart applications built in ThingWorx. If the analysis involves more complex tasks than predictive modeling, such as custom data normalizations, non-standard machine learning models, or home-grown algorithms, one can use the options below.
2. Call the ThingWorx REST Web API from Python/Matlab/R/Java/JavaScript. Every service in ThingWorx can be called that way, and the API can also be used to push analysis results into ThingWorx for further consumption in the smart applications built there, perhaps together with other sources of data such as sensor readings. The documentation for the ThingWorx REST API can be found here (see the sketch after this list).
3. Expose the existing analytics via a thin layer of REST web services. For example, in Python this can be done using Flask with a few lines of code. The orchestration can then happen from ThingWorx by calling the exposed web service and weaving the results back into smart applications.
4. Often our customers' current architecture involves a relational database (e.g., SQL Server, Oracle) that powers the existing analytics and stores the end results (predictions, correlations, etc.). In this scenario, we can connect ThingWorx directly to that database to read these results.
5. Finally, in the case of complex analytics where a tighter integration with ThingWorx is desired, existing analytics/algorithms can be wrapped into a ThingWorx Extension or an Analytics Provider using the corresponding PTC SDKs.

When choosing an integration option, customers need to carefully balance the complexity of integration, the constraints of their architecture, the complexity of the analytics modeling, and the end-user consumption requirements.
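For option 2, here is a minimal sketch of invoking a ThingWorx service from outside the platform (Node.js 18+). The appKey header is the standard ThingWorx REST authentication mechanism; the Thing, service, and parameter names below are illustrative placeholders, not a fixed API:

// Node.js 18+: call a ThingWorx service from an external analytics pipeline
async function pushPrediction() {
    const response = await fetch(
        "https://twx-host/Thingworx/Things/AnalyticsResultsThing/Services/PushPrediction",
        {
            method: "POST",
            headers: {
                "appKey": "<application key>",   // created in Composer; avoids sending passwords
                "Content-Type": "application/json",
                "Accept": "application/json"     // ask for the result as JSON
            },
            body: JSON.stringify({ assetId: "pump-17", prediction: 0.83 }) // service inputs
        }
    );
    if (!response.ok) throw new Error("ThingWorx returned " + response.status);
    return response.json();   // the service result (e.g., an InfoTable) serialized as JSON
}

pushPrediction().then(console.log).catch(console.error);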
We will host a live Expert Session: "Thingworx Mashup 101 - Do's and Don'ts" on February 24th, 13h30 EST.   Please find below the description of the expert session and the registration link.   Expert Session: Thingworx Mashup 101 - Do's and Don'ts Date and Time: February 24th, 13h30 EST Duration: 1 hour Host: Aanjan Ravi - Technical Product Manager Registration Here: https://www.ptc.com/en/events/thingworx-mashup-101   Description: This session covers the most common and useful tips about how to correctly use Mashup builder, Widgets and Layouts – and what to avoid -  to create applications with good principles of UI/UX and easier to maintain.   Existing Recorded sessions can be found on support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.   Here are some recorded sessions that might be of your interest. You can find recordings for the full library of webinars using the keyword ‘Expert Sessions’ in PTC support portal search Thingworx Active Active Clustering This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release.   Recoding Link Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts This session highlights the key points you should evaluate to properly plan your upgrade to Thingworx 9. Recording Link Top 5 items to check for Thingworx Performance Troubleshooting How to troubleshoot performance issues in a Thingworx Environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support     Recording Link
Hello!

We will host a live Expert Session: "Understanding ThingWorx Navigate Licensing" on February 11th, 10h EST.

Please find below the description of the expert session and the registration link.

Expert Session: Understanding ThingWorx Navigate Licensing
Date and Time: February 11th, 10h EST
Duration: 1 hour
Host: Christoph Braeuchle, Emily Larkin and Steve Scheib - ThingWorx Navigate PM team
Registration Here: https://www.ptc.com/en/resources/plm/webcast/understanding-thingworx-navigate-licensing

Description: ThingWorx Navigate licensing gives many users a way to access PLM data and functionality at an attractive price tag when they don't need to use the full power of Windchill functionality. This licensing and packaging have changed over the past 1.5 years, and this is the perfect time to share an update on available license types and answer essential questions like: Which license types do my end users really need? What capabilities are provided by each license type? What are the best ways to understand and control license usage in my company? Don't miss this session if you want to understand how ThingWorx Navigate licensing works and which options are available.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Navigate - SSL & Windchill Authentication - This Expert Session takes you through a step-by-step approach for configuring authentication between Windchill and Navigate with SSL. Recording Link
Top 5 items to check for ThingWorx Performance Troubleshooting - How to troubleshoot performance issues in a ThingWorx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
ThingWorx 9.0 Component-Based App Development - Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of Navigate component-based app development and how to leverage it for your use cases. Recording Link
We will host a live Expert Session: "Windchill & Thingworx Navigate Authentication" on November 10th at 10:30 AM EST.   Please find below the description of the expert session and the registration link .   Expert Session: Windchill & Thingworx Navigate Authentication Date and Time: Tuesday, November 10th, 2020 10:30 am EST Duration: 1 hour Host: Arshad Imam, PLM Product Technology Lead   Description: This in Expert Session will take you through a step-by-step approach for configuring authentication between Windchill and Navigate with SSL. Plus, you can take advantage of a unique opportunity to ask questions in a live Q&A following the presentation.   Register here   Existing Recorded sessions can be found on support portal using the keyword ‘Expert Sessions’.   You can also suggest topics for upcoming sessions using this small form.   Here are some recorded sessions that might be of your interest. You can find recordings for the full library of webinars using the keyword ‘Expert Sessions’ in PTC support portal search   Navigate 9.0 – What’s New? This session is the intro of a series that will cover new capabilities of the recent Navigate 9 release and the value that each can bring to your implementation. Then we will have further sessions covering the details of some of them   Recoding Link Top 5 items to check for Thingworx Performance Troubleshooting How to troubleshoot performance issues in a Thingworx Environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support   Recording Link Thingworx 9.0 Component Based App Development Following the series of new capabilities released with Navigate 9.0, this session will focus in the details of Navigate Component Based app development and how to leverage this to your use cases Recording Link Thingworx Navigate 3D Viewer Following the series of new capabilities released with Navigate 9.0, this session focus in the details of Navigate 3D Viewer leverage this to your use cases Recording Link
We will host a live Expert Session: "ThingWorx Navigate Component-Based App Development" on Wednesday 09/30, 08:00 AM Eastern Daylight Time.

Please find below the description of the expert session as well as the link to register.

Expert Session: ThingWorx Navigate Component-Based App Development
Date and Time: Wednesday 09/30, 08:00 AM Eastern Daylight Time
Duration: 1 hour
Host: Pratibha Bhatnagar
Description: Following the series of new capabilities released with Navigate 9.0, this session focuses on the details of Navigate component-based app development and how to leverage it for your use cases.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.
Everywhere in the ThingWorx Platform (even the edge and extensions) you see the data structure called InfoTables. What are they? They are used to return data from services, map values in mashups, and move information around the platform. What they are is very simple; how they are set up and used is also simple, but there are a lot of ways to manipulate them. Simply put, InfoTables are JSON data -- that is all. However, they use a standard structure that the platform can recognize and use.

There are two pieces to an InfoTable: the dataShape definition and the rows array. The dataShape is the definition of each row value in the rows array. It is not accessible directly in service code, but there are functions and structures to manipulate it in services if needed.

Example InfoTable definition and values:

{
  dataShape: {
    fieldDefinitions: {
      ColOneName: { name: "ColOneName", baseType: "STRING" },
      ColTwoName: { name: "ColTwoName", baseType: "NUMBER" }
    }
  },
  rows: [
    { ColOneName: "FirstValue", ColTwoName: 13 },
    { ColOneName: "SecondValue", ColTwoName: 14 }
  ]
}

So you can see that the dataShape value is made up of a group of JSON objects under the fieldDefinitions element. Each field has a "name" element, which of course defines the field name, and a "baseType" element, which is the ThingWorx primitive type of the named field. Typically this structure is created automatically by using a DataShape object defined in the platform. This is also the reason DataShapes need to be defined: so that fields can be defined not only for InfoTables, but also for DataTables and Streams. This is how mashups know the structure of the data when creating bindings to widgets, and how other parts of the platform can display data in a structured format. The other piece is the "rows" element, which contains an array of JSON objects holding the actual data in the InfoTable.

Accessing the values in the rows is as simple as using standard JavaScript syntax for JSON. To access the number in the first row of the InfoTable referenced above (if the name of the InfoTable variable is "MyInfoTable"), use MyInfoTable.rows[0].ColTwoName. This returns a value of 13. As you can note, the JSON array index starts at zero.

Looping through an InfoTable in service script is also very simple. You can use the index in a standard "for" loop structure, but a slightly cleaner way is to use a "for each" loop like this:

for each (row in MyInfoTable.rows) {
    var colOneVal = row.ColOneName;
    ...
}

It is important to note that many base services in the platform return an InfoTable, and most of these have system-defined DataShapes built into the platform (such as QueryDataTableEntries, GetImplementingThings, QueryNumberPropertyHistory and many, many more). All results from query services accessing external databases are also returned in the structure of an InfoTable.

Manipulating an InfoTable in script is easy using various functions built into the platform. Many of these can be found in the "Snippets" tab of the service editor in Composer, in both the InfoTableFunctions Resource and the InfoTable Code Snippets. Some of my favorites and most commonly used:

Create a blank InfoTable:

var params = {
  infoTableName: "MyTable"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTable(params);

Add a new field to any InfoTable:

MyInfoTable.AddField({name: "ColNameThree", baseType: "BOOLEAN"});

Delete a field:

MyInfoTable.RemoveField("ColNameThree");

Add a data row:

MyInfoTable.AddRow({ColOneName: "NewRowValue", ColTwoName: 15});

Delete one or more data rows matching the values defined (note that you can define multiple fields in this statement):

//delete all rows that have a value of 13 in ColNameOne
MyInfoTable.Delete({ColNameOne: 13});

Create an InfoTable using a predefined DataShape:

var params = {
  infoTableName: "MyInfoTable",
  dataShapeName: "dataShapeName"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

There are many more functions built into the platform, including ones to filter, sort and query rows. These can be extremely useful when trying to return limited or more strictly structured InfoTable data. Hopefully this gives you a better understanding and use of this critical part of the ThingWorx Platform.
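To round this out, here is a short sketch combining the snippets above with the built-in Sort function. The DataShape name is an assumed example; the Sort parameter names follow the Composer snippet, so double-check them against your platform version:

// Build a small table from a DataShape and sort it by the numeric column, descending
var params = { infoTableName: "MyInfoTable", dataShapeName: "MyDataShape" }; // assumed DataShape
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

MyInfoTable.AddRow({ColOneName: "A", ColTwoName: 15});
MyInfoTable.AddRow({ColOneName: "B", ColTwoName: 13});
MyInfoTable.AddRow({ColOneName: "C", ColTwoName: 14});

var sorted = Resources["InfoTableFunctions"].Sort({
    t: MyInfoTable,        // table to sort
    name: "ColTwoName",    // field to sort on
    ascending: false
});

var result = sorted; // service output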
Hello everyone,

This post is meant to fill the gap that Basic Rules of ThingWorx Development left open. You can follow these rules even before starting the development process and keep them in mind to have an organized and easy-to-maintain application. I will update this post in the future with more best practices and advice.

Best practices and suggestions:

In order to make clean and quick progress in any project, the approach should be modular. If the modular approach is implemented, the development process should also be thought of in a modular way. This gives much-needed independence to each individual developer, especially if the team works concurrently on the same instance. Some rules need to be in place for the project to be as smooth as possible:

1. Every developer must have their own user. This is more important when developing on the same ThingWorx instance, but it's a good practice when developing on individual instances as well.
2. Every developer will be responsible for complete modules, from the respective screens of the GUI to the functionality services and business logic.
3. If concurrent work on the same Entity needs to happen, then communication between the developers and time-sharing on that Entity is needed, without developers overwriting each other's code. Don't go into edit mode if someone else is already editing; that will get you to a dead end.
4. For point no. 3 to work, after editing an Entity each user must press the Cancel Edit button and leave that Entity in View mode.
5. When searching for services or properties, developers should avoid clicking on the name of the Entity, which is a link that opens the Entity directly in Edit mode; they should instead use the button with the magnifying glass to the left of the name, which opens it in View mode.
6. As a result of the modular approach, each module will have its own Utility Thing that contains the services, properties, events and subscriptions that help develop the functionality for that module.
7. Each module will have its own tags, and the format could be: <Client_Name><GUI/Business><Module_Name>
8. The integration of the modules will be done in the Master, either by a single person in charge of that Master or by each developer in turn.
9. Depending on the case, the Data Model could be treated as a module in its own right, or it can be integrated into each module if the project permits.

How to manage multiple users working on the same code in Composer (thanks to Pai Chung):

ThingWorx currently allows you, within the development environment, to heavily document all your work; that includes 'Save with Comment'. We encourage the use of the Documentation field and the 'Save with Comment' option. However, development is generally not isolated to one environment. ThingWorx provides several ways to back up the information:

1. Backup - this is a true database backup that creates an additional database in ThingworxBackupStorage and basically can be used as a restore, by copying it back into ThingworxStorage.
2. Export to ThingworxStorage - this is a full model export (with or without data) that can be triggered at any time. It can use date filters to export according to modified date. This is server-side.
3. Export to File - this allows you to export a single entity or a group of entities/data according to a variety of filters. This is client-side.
4. Export to Source Controlled Entities - this allows you to export to a file folder structure or a zip that can easily be checked into a source control system.

How to approach source control:

1. After some initial modeling, Export to Source Controlled Entities and check this into your source control system.
2. From this point forward all developers have to follow a check-in/check-out process.
3. Every time an Entity Group security setting is made, Export to ThingworxStorage and also check that into source control, overwriting the previous one.
4. All in-use extensions should be in one zip and also reside in source control.
5. To do a restore or deploy: install the platform, install the extensions, import from ThingworxStorage the last export checked in, import each single entity file in the proper order, then import each single data file.
6. Clean up dead entities (if there is a reference list).

Additional steps to take to help safeguard the development:

1. Make sure the automatic backup is running.
2. Export the edited Entity to a subfolder named with the date of the edit.
3. Schedule a full export to ThingworxStorage to run every day before development starts - this can be scripted and triggered by a timer or scheduler subscription (<Server>/Thingworx/ExportDatabase/?WithData=true); a hedged sketch follows below. In this way you have a backup of everything that was there before you started working each day, so you can roll back if an error occurs.

CONTINUED 7 Sep 2015

How to organize wiring needs when developing the GUI:

Starting from the idea that we can divide the GUI elements into Display Elements and User Action Elements, I have created a common form to be filled in with the information necessary for the wiring of each element:

UI Element Type: Display Element / User Action Element
Thing Name: name of the Thing where the data / service is found
Service Name: service inside the Thing that returns the data / is the subject of the action
Property(ies) Name: Thing property / column name (when the service returns an infotable) for Display Elements; input parameters for the service to be run for User Action Elements
Additional Logic: additional information regarding the way the information sources change when preconditions are met. Usually this means new services or mashup logic is needed.

I suggest that a companion document to the GUI description document be created. This document will contain the previous form (table) for each screen/slide, so that the work on a specific screen/slide can be done independently.

To be continued...
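For the daily scheduled export mentioned above, here is a minimal sketch of a Scheduler subscription body. The host, the appKey, and the idea of triggering the export URL via ContentLoader are assumptions to adapt, not a fixed recipe:

// Scheduler subscription body: trigger a full server-side export once a day
var result = Resources["ContentLoaderFunctions"].GetText({
    url: "https://myserver/Thingworx/ExportDatabase/?WithData=true", // export endpoint from above
    headers: { appKey: "<application key with export permission>" }  // assumed auth mechanism
});
logger.info("Daily full export triggered: " + result);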
Remote Monitoring of Assets Benchmark

As @ttielebein introduced previously, one of the missions of the IOT Enterprise Deployment Center (EDC) is to publish benchmarks that showcase the ThingWorx Platform deployed to solve real-world IOT business problems.

Our goal is for these benchmarks to be used as a reference or baseline for architects working on their own implementations... showing not only a successful at-scale implementation, but also what happens when that same implementation is pushed to... or even past... its limits.

Please find the first installment attached - a reference benchmark demonstrating ThingWorx deployed to monitor 15,000 assets with a high volume of data properties per asset. Over 250 hours of simulations were conducted as part of producing this benchmark.

The IOT EDC team will be monitoring this post (as well as our other posts in the IOT Tech Tips forum) to answer any questions we can about the approaches taken in designing, deploying and simulating this implementation.

As the team publishes more benchmarks like this in the future, we also greatly value any feedback you have that can help us improve the content for future documents.
Smoothing Large Data Sets

Purpose

In this post, learn how to smooth large data sources down into something that can be rendered and processed more easily on mashups. Note that the Time Series Chart widget is limited to loading 8,000 points (hard-coded). This is because rendering more points than this is almost never necessary or beneficial, given that the human eye can only discern so many points and the average monitor can only render so many pixels. Reducing large data sources through smoothing is a recommended best practice for ThingWorx, and for data analysis in general.

To show how this is done, there are sample entities provided which can be downloaded and imported into ThingWorx. These demonstrate the capacity of ThingWorx to reduce tens of thousands of data points based on a "smooth factor" live on mashups, without much added load time. The tutorial below steps through setting these entities up, including the code used to generate the dummy data.

Smoothing the Data on Mashups

1. Create a Value Stream for storing the historical data.
2. Create a Data Shape for use in the queries. The fields should be:
   TestProperty - NUMBER
   timestamp - DATETIME
3. Create a Thing (TestChartCapacityThing) for simulating property updates and therefore Value Stream updates. There is one property:
   TestProperty - NUMBER - not persistent - logged
4. The custom query service on this Thing (QueryNamedPropertyHistory) will have the logic for smoothing the data. Essentially, many points are averaged into one point, reducing the overall size before the data is returned to the mashup. Unfortunately, there is no built-in (OOTB) service to do this. The code is here (input parameters are to - DATETIME; from - DATETIME; SmoothFactor - INTEGER):

// This is just for passing the property name into the query
var infotable = Resources["InfoTableFunctions"].CreateInfoTable({infotableName: "NamedProperties"});
infotable.AddField({name: "name", baseType: "STRING"});
infotable.AddRow({name: "TestProperty"});

var queryResults = me.QueryNamedPropertyHistory({
    maxItems: 9999999,
    endDate: to,
    propertyNames: infotable,
    startDate: from
});

// This will be filled in below, based on the smoothing calculation
var result = Resources["InfoTableFunctions"].CreateInfoTable({infotableName: "SmoothedQueryResults"});
result.AddField({name: "TestProperty", baseType: "NUMBER"});
result.AddField({name: "timestamp", baseType: "DATETIME"});

// If there is no smooth factor, then just return everything
if(SmoothFactor === 0 || SmoothFactor === undefined || SmoothFactor === "")
    result = queryResults;
else {
    // Increment by smooth factor
    for(var i = 0; i < queryResults.rows.length; i += SmoothFactor) {
        var sum = 0;
        var count = 0;
        // Increment by one to average all points in this interval
        for(var j = i; j < (i+SmoothFactor); j++) {
            if(j < queryResults.rows.length)
                if(j === i) {
                    // First time, set sum equal to the first property value
                    sum = queryResults.getRow(j).TestProperty;
                    count++;
                } else {
                    // All other times, add property values to the first value
                    sum += queryResults.getRow(j).TestProperty;
                    count++;
                }
        }
        // Use count because the last interval may not equal the smooth factor
        var average = sum / count;
        result.AddRow({TestProperty: average, timestamp: queryResults.getRow(i).timestamp});
    }
}

5. Create a Timer for updating the property values on the Thing.
6. The Timer should subscribe to itself and contain this code (ensure it is enabled as well):

var now = new Date();
if(now.getMilliseconds() % 3 === 0)
    // Randomly reset the number to simulate outliers
    Things["TestChartCapacityThing"].TestProperty = Math.random()*100;
else if(Things["TestChartCapacityThing"].TestProperty > 100)
    Things["TestChartCapacityThing"].TestProperty -= Math.random()*10;
else
    Things["TestChartCapacityThing"].TestProperty += Math.random()*10;

Don't forget to set the runAsUser in the Timer configuration. To generate many property values, set the updateRate to a small value, like 10 milliseconds. Disable the Timer after many thousands of values are logged in the Value Stream.
7. Create a Mashup for displaying the property data and the capacity of the query to smooth the data. The Mashup should run the service created in step 4 on load; the service inputs come from widgets on the mashup (bindings).
8. Place a Time Series Chart widget at the bottom of the Mashup layout and bind the data from the query to the chart.
9. View the Mashup and note the difference in the data: all points in one minute versus a smooth factor of 10 in one minute. The outliers still appear, and the peaks are much easier to see.

With fewer points, trends become easier to spot and data is easier to understand. For monitoring the specific nature of the outliers, utilize alerts and other types of displays. Alternative forms of data reduction could involve using the mean of each interval (given by the smoothing factor), or the min or max, as needed for the specific use case. Display multiple types of these options for an even more detailed view. Remember, though: the more data that needs to be processed, the slower the Mashup will load. As usual, ensure all mashups are load-tested and that the number of end users per Mashup is considered during application design.
Developing Great IoT Solutions

Brought to you once again by your EDC team, find attached here a brand-new, comprehensive overview of ThingWorx best practices! This guide was crafted by combining all available feedback, from support cases to PTC Community threads, and tapping all internal resources. Let this guide serve to bridge the knowledge gaps ThingWorx developers most commonly encounter.

The Developing Great IoT Solutions (DGIS) Guide is a great way to inform both business- and technically-minded folks about the capabilities of the ThingWorx Platform. Learn how to design good solutions from a high level, in an overview written specifically with the business audience in mind. Or learn how to implement good IoT designs through a series of technical examples: start from very little knowledge of the Platform and end up understanding data structures and aggregation, how to use the Collection widget, and how to build a fully functional rules engine for sending and acknowledging alerts in ThingWorx.

For the more advanced among us, check out the Appendix. Find there a handy list of do's and don'ts surrounding ThingWorx development best practice, with links to KCS, Help Center, and Community content.

Reinforce your understanding of the capabilities of the ThingWorx Platform with this guide, today!

A big thanks to all who were involved in this project! Happy developing!
Saw this great question in the Developers forum https://community.ptc.com/t5/ThingWorx-Developers/Thingworx-Permission-Hierarchy/m-p/556829#M29312. Answered it there; copying it here.

Question:

Hi, I have a few questions regarding the permissions model in ThingWorx. I can't find any documentation that explains it clearly. Hoping someone can help, or point me in the right direction for more in-depth documentation.

My understanding is that permissions can be set at a number of different levels:
- Collection level
- Template level
- Instance level
- Thing level

My question is, how do these levels interact with one another? Do they all get 'AND'ed together, or do those at the lower levels supersede the ones set at higher levels? E.g., if I set some visibility at Collection level, would this be overridden by setting a different visibility at, say, the Template instance level, or would both visibility permissions be valid? At each level there is the ability to override (e.g., for a particular property or service); how does that fit in the hierarchy? I have read that in ThingWorx 'deny' always supersedes an 'allow' permission. Is this still the case if I set deny at Collection level and then at a lower level give 'allow' permissions -- would the deny take precedence? As far as I can tell, 'Create' permissions can only be set at Collection level. Does this mean that I am unable to restrict one set of users to creating Things of one Template, and a different set of users to creating another type of Thing? Thanks in advance for any replies.

Answer:

Great question. Thing/Entity-level permissions always take precedence. So if you set permissions on the Collection, then on the Template, then on the Entity, it will first look at the Entity, then fill in with the Template, and then the Collection. For example:
- Collection says: can't execute any Service
- Template says: can execute Service 1 but not Service 2
- Entity says: can execute Service 2 and leaves Service 1 as inherited
The end result is that the user can execute Services 1 and 2.

On the Template and Entity you will find the Override ability, that is, the ability to specifically allow or disallow the execution of a Service or the read/write of a Property.

What is a BEST PRACTICE?
1. Give the System user all Service execute permissions at the Collection level.
2. Give User Groups 'blanket' permissions to Property read/write at the ThingTemplate level.
3. Give User Groups only Override permissions to execute Services at the ThingTemplate level.
4. Use Override to DENY User Groups Property read on the specific properties they are not supposed to read, at the ThingTemplate level.

Generally, most properties can be fully accessed by all users, so a blanket permission on a ThingTemplate is fine. It is very BAD to give User Groups blanket permission to Service execute; that should always be done by Override.

The Entity hierarchy overrides the Allow/Deny hierarchy, but within a single level (Collection / Template / Entity), Deny wins over Allow.

Create is indeed only set at the Collection level. The way to secure this is to give the System user the Create ability and create wrapper services that use the CreateThing service, which you can then secure for specific Groups. You could, for example, create a CreateNewThingType1 and a CreateNewThingType2 service and give User Group 1 permission to the Type 1 creation and User Group 2 permission to the Type 2 creation (see the sketch below).

Hope that helps.
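A minimal sketch of such a wrapper service. The Template name and the NewThingName service input are illustrative assumptions; the wrapper itself would be secured for the appropriate User Group and run with sufficient permissions:

// Wrapper around the generic CreateThing so that Create permission
// stays with the System user, while User Group 1 only sees this service
var params = {
    name: NewThingName,                 // STRING service input
    description: "Created via wrapper service",
    thingTemplateName: "Type1Template", // assumed Template for this user group
    tags: undefined
};
Resources["EntityServices"].CreateThing(params);

// Newly created Things must be enabled and restarted before use
Things[NewThingName].EnableThing();
Things[NewThingName].RestartThing();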
Events

Timers and Schedulers each come with a specific Event inherited from their Thing Template:
- Timer
- ScheduledEvent

Both have a Data Shape that captures the timestamp of when the Event was actually fired. Events in ThingWorx are triggered when a specific condition is met; in this context the condition is met, and the Event fired, when a Timer has expired or a Scheduler's time is reached. Once an Event is triggered, Subscriptions take care of executing custom Services to react to the Event.

Subscriptions

Subscriptions listen to Events and can be used to react to them by running custom Service scripts. To follow up on Timers and Schedulers, a new Subscription must be created that listens to the related Event being fired. Add a new Subscription to the Thing. As the Subscription usually listens to the Thing it is configured on, the Source has to be left empty; when listening to other Entities' Events, the corresponding Entity can be picked in the Source Entity picker. Ensure the Enabled checkbox is checked, to actually enable the Subscription and allow it to execute the code in the Script area. A typical first script simply logs to the ScriptLog once the Timer Event or the ScheduledEvent Event is fired.
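Since the original screenshots of those scripts are not reproduced here, a minimal sketch of what such a Subscription script could look like. The message text is arbitrary, and the eventData timestamp field follows the Event's Data Shape described above -- verify the field name in your Composer version:

// Subscription script on the Timer (or ScheduledEvent) Event:
// write a line to the ScriptLog each time the Event fires
logger.info("Event fired at " + eventData.timestamp);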
Timers and Schedulers can also be created and configured programmatically via custom services. The following service, which can be created on any Thing, will create a new Timer using the service inputs ThingName, updateRate and user:

// create new Thing
var params = {
    name: ThingName /* STRING */,
    description: undefined /* STRING */,
    thingTemplateName: "Timer" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
Resources["EntityServices"].CreateThing(params);

// read initial configuration
// result: INFOTABLE
var configtable = Things[ThingName].GetConfigurationTable({tableName: "Settings"});

// update configuration with service parameters
configtable.updateRate = updateRate;
configtable.runAsUser = user;

// set new configuration table
var params = {
    configurationTable: configtable /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "Settings" /* STRING */
};
Things[ThingName].SetConfigurationTable(params);

This code is an example which could also be used to create a new Scheduler. The configuration table for a Timer has the following attributes: updateRate, enabled, runAsUser. The configuration table for a Scheduler has the following attributes: schedule, enabled, runAsUser.
Original Post Date: June 6, 2016

Description: This tutorial video will walk you through the installation process for the PostgreSQL-based version of the ThingWorx Platform in a Windows environment. All required software components are covered in the video.
A common issue seen when trying to deploy, design or scale up a ThingWorx application is performance. Slow responses, delayed data and the application stopping have all been seen when a performance problem either slowly grows or suddenly pops up. There are some common themes, typically around the application model or design. Here are a few of the common problems and some thoughts on what to do about them or how to avoid them.

Service Execution

This covers a wide range of possibilities and is most commonly seen when trying to scale an application. Data access within a loop is one particular thing to avoid. Accessing data from a Thing, another service or a query may be fast when testing with only 100 loops, but when the application grows to 1000 it is suddenly slow. Access all data in one query and use that as an in-memory reference. Writing data to a data store (Stream, DataTable or ValueStream) and then querying that same data within one service can cause problems as well: run the query first, then use the data you already have in the service variables.

To troubleshoot service executions there are a few methods that can be used. Some will not be practical for a production system, since it is not always advisable to change code without testing first.

1. Use browser development tools to see the execution time of a service. This is especially helpful when a mashup is slow to load or respond, as it quickly identifies which of multiple services may be the issue.
2. Add logging in a service. Once a service is identified, adding simple logging points in the service can narrow down which code in the service causes the slowdown (it may be another service call). These logging statements show up in the script logs with timestamps (you can also log the current time with the logging statements). A sketch of this follows below.
3. Use the Test button in Composer. This is a simple one, but if the service does not have many parameters (or has defaults) it's a fast and easy way to see how long a service takes to return.
4. When all else fails, you can get thread dumps from the JVM. ThingWorx Support created an extension that assists with this; you can find it on the Marketplace with instructions on how to use it: https://marketplace.thingworx.com/tools/thingworx-support-tools. You can manually examine the output files or open a ticket with Support to allow them to assist. Just be careful about taking memory dumps: those are much larger, harder to analyze, and take a lot of memory.

Queries

These of course are services too, but of a specific type. Accessing data in ThingWorx storage structures or from external sources seems fairly straightforward, but can be tricky when dealing with large data sets. When designing and dealing with internal platform storage, refer to this guide as a baseline for deciding where to store data: Where Should I Store My Thingworx Data?

NEVER store historical data in infotable properties. These are held in memory (even if they are persistent), and as they grow so will the JVM memory use, until the application runs out of it. We all know what happens then. Finally, one other note that occasionally causes confusion: the setting on a custom or standard ThingWorx query service that limits the number of records returned controls how many records are returned from the service at the end of processing, not how many are processed or loaded in memory. That number may be much higher and could cause the same types of issues.
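For the logging approach in item 2, a minimal sketch; the service being timed is a hypothetical placeholder:

// Bracket the suspect call with timestamps; the delta shows up in the Script log
var start = new Date();
var data = me.SomeExpensiveQuery({maxItems: 1000});   // hypothetical service under test
logger.warn("SomeExpensiveQuery took " + (new Date() - start) + " ms, rows: " + data.rows.length);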
Subscriptions and Events

These are similar to services, however there is an added element: frequency. Typical events are data changes and timers/schedulers. This again is often an issue only when scaling up the number of Things or the amount of data that needs to be referenced. A general reference on timers and schedulers, which also describes some of the event processing that takes place on the platform, can be found here: Timers and Schedulers - Best Practice.

For data change events, be very cautious about adding these to very rapidly changing property values. When a property updates very quickly, for example twice each second, the subscription to that event must be able to complete in under 0.5 seconds to keep ahead of processing. Again, this may work for 5-10 Things with such properties, but will not work with 500, due to resources, speed, and the need to briefly lock the value to get an accurate current read. In these cases any data processing should be done at the edge when possible (or in the originating system) and pushed to the platform in a separate property or service call. This allows for more parallel processing, since it is decentralized.

A good practice that allows easier testing of this type of subscription code is to take all of the script/logic and move it to a service call, then pass any of the needed event data to parameters in the service (see the sketch below). This allows for easier debugging, since the event does not need to fire to make the logic execute; in fact, the logic can be run essentially standalone via the Test button in Composer.
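A minimal sketch of that pattern; the service name and the shape of eventData are illustrative, so check the actual event's Data Shape in Composer:

// Subscription body: keep it to a single delegating call...
me.HandleTemperatureChange({
    newValue: eventData.newValue.value,   // assumed DataChange event structure (VTQ)
    changeTime: eventData.newValue.time
});

// ...so the logic itself lives in a normal service (HandleTemperatureChange)
// that can be run standalone from the Test button in Composer.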
Mashup Performance

This one can be very tricky, since additional browser elements and rendering come into play. Sometimes service execution is the root of the issue, as reviewed above; other times UI elements and design cause the slowdown. The Repeater widget is a common culprit. The biggest thing to note here is that each repeater needs to render every element that is repeated, with all of the data and formatting for each of the widgets in the repeated mashup. So any complex mashup that is repeated many times may become slow to load. You can minimize this to a degree with the Load/Unload setting of the widget, based on when the slowness is more acceptable (when loading or when scrolling). When a mashup is launched from Composer it comes with some built-in debugging tools to see errors and execution; using these together with browser debug tools can be very helpful.

Scaling an Application

When initially modeling an application, scale must be considered from the start. It is a challenge (but not impossible) to modify an application after deployment or design to be very efficient. Many new developers on the ThingWorx platform fall into what I call the .NET trap. Back when .NET was released, one of the quotes I recall hearing about its inefficiencies was "memory is cheap": it was more cost-efficient to purchase and install more memory than to take extra development time to optimize memory use. This was absolutely true for installed applications, where all of the code was compiled and stored on every system. Web-based applications are not quite as forgiving, since most processing and execution is done on the single central web server. Keep this in mind especially when creating Shapes, Templates and Subscriptions: while you may be writing one piece of code, when that code is repeated on 1,000 Things they will all be in memory and all be executing this code in parallel.

You can quickly see how competition for resources, locks on databases and clean access to in-memory structures can slow everything down (and just think what happens when there are 10,000 pieces of that same code!). Two specific things around this must be stated again (though they were covered in the sections above). First, data held in properties has fast access since it is in JVM memory, but it is held in memory for each individual Thing: holding 5 MB of information in one Thing seems small, yet loading 10,000 Things means an instant use of 50 GB of memory! Second, service execution: when 10 Things are running, a service execution takes 2 seconds -- slow, but not too bad, and maybe not too noticeable in the UI. Now put 10,000 Things in competition for the same data structures and resources, and I have seen execution times jump to 2 minutes or more.

Aside from design, the best thing you can do is TEST on a scaled-up structure. If you will have 1,000 Things next year, test your application early at that level of deployment to help identify any potential bottlenecks early. Never assume more memory will alleviate the issue. Also do NOT test scale on your development system; that introduces edits, changes and other variables which can affect real-world results. Have a QA system set up that mirrors a production environment, and simulate data and execution load.

Additional suggestions are welcome in the comments, and this post will likely be updated as tools and the platform change.
New Generation Composer is available from ThingWorx 7.4 onward, and each subsequent release of ThingWorx contains additional New Composer features and functionality. This video focuses on the layout changes and new features implemented as of ThingWorx 7.4.

How to enable the New Composer:
1. In the top right-hand corner, click on the User Menu and select the Preferences option (Figure 1).
2. Click the checkbox for Turn on New Composer Features.
3. Click Done. A New Composer link is added to the menu bar at the top of the Composer window (Figure 2).
4. Click the New Composer link to open a new tab with the New Composer view (Figure 3).

What has changed in the New Composer layout?

Three-area layout (Figure 4):
- Menu bar on the left (Area 1): Set Project Context sets the default project name for new entities. There are two views, Recent and Browse: the Recent view quickly locates recently accessed entities, while Browse is almost the same as the old menu navigation bar. The menu bar is resizable and can be hidden; the main idea is to increase screen real estate and allow a bigger view/edit area.
- Main area for listing and editing entities in the middle (Area 2): provides a wider area to edit entities (author services, Mashup Builder, etc.).
- An extra area on the right for preview, properties/events editing, etc. (Area 3): gives you an easy, handy way to get a glance at an entity's basic information.

Layout changes on an entity's editing page:
- When you create a new Thing, you will find all the facets of the Thing (general information, properties, services, events, subscriptions) listed in a dropdown list (Figure 5). This saves more area for editing.
- Properties and Alerts are now listed separately, in two different tabs (Figures 6 and 7), and are edited in the right-hand area of the page (see Figure 4, Area 3).
- Services and Subscriptions get a bigger editor area.
- Events are edited in the right-hand area of the page (see Figure 4, Area 3).

What are the main functional changes in the New Composer?
- Industrial Connections: The Industrial Connections entity allows you to connect with and configure industrial things in ThingWorx. With the "Discover" feature, you can easily bind an industrial device's (e.g., Kepware) tags to a ThingWorx entity (Figure 8).
- From ThingWorx 8.0.0, the Anomaly alert type is supported. An anomaly alert is only useful if you have configured Anomaly Detection to monitor a stream of data. Anomaly metrics settings can be configured on the alert edit page (Figure 9).
- Subscriptions: You can now manually remove a subscription permanently from the system in the New Composer, which was not possible in the old Composer.
- Services: The New Composer provides assistive scripting tools like static code analysis and string search/replace (Figure 10).

How can I switch back to the old Composer when editing an entity? It is easy: whenever you have a ThingWorx entity open, you will notice an "Edit in Composer" button. It leads you back to the old Composer, and all edits that have been saved will show up in the old Composer.

A video demonstration of the New Composer is also available; feel free to review it here: New Composer Video
1. Create a Network and add all entities that implement a specific ThingShape to the network.
2. Create a ThingShape mashup as below. Note: bind the Entity parameter to the DynamicThingShapes_TractorShape service GetProperties input EntityName. Also bind the mashup's RefreshRequested event to that service.
3. Create a mashup named ContentShape, and add a Tree widget and a ContainedMashup widget to it.
4. Bind the GetNetworkConnection service's Selected Row(s) result and SelectedRowsChanged event to the ContainedMashup widget. Note: a Master can completely replace the ThingShape mashup; using a Master is suggested from ThingWorx 6.0 onward.
There's a reason why many "Hello World" tutorials begin with writing to the logging tool. Developers live in the console, inserting breakpoints and watching variables while debugging. Especially with interconnected, complex systems, logging becomes crucial to saving developer hours and shipping on time.

What this tutorial covers

This tutorial introduces the three principal methods of monitoring the status of assets and the output of operations on an Axeda instance:
1. Audit Logs
2. Custom Object Log
3. Reports

The Audit Log

You can filter the Audit Log by date range or by category. A list of the Audit Categories is available in the Help documentation at http://<<yourhost>>.axeda.com/help/en/audit_log_categories_for_filtering...

You can write to the Audit Log from an Expression Rule or from a Custom Object.

Writing to the Audit Log from an Expression Rule

Use the Audit Action to write to the Audit Log:

Audit("data-management", "The temperature " + DataItem.temp.value + " was reported at " + FormatDate(Now(), "yyyy/MM/dd hh:mm:ss"))

You can insert values from the Variables or Functions list by using the plus operator as a string concatenator.

Writing to the Audit Log from a Custom Object

import com.axeda.common.sdk.id.Identifier
import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.audit.AuditCategory
import com.axeda.drm.sdk.audit.AuditMessage

auditMessage(Context.getSDKContext(), "data_management", "Thread started timestamp has ended.", context.device.id)

private def auditMessage(Context CONTEXT, String category, String message, Identifier assetId) {
    AuditCategory auditCategory = AuditCategory.getByString(category) ?: AuditCategory.DATA_MANAGEMENT
    if (assetId == null) {
        new AuditMessage(CONTEXT, "com.axeda.drm.rules.functions.AuditLogAction", auditCategory, [message]).store()
    } else {
        new AuditMessage(CONTEXT, "com.axeda.drm.rules.functions.AuditLogAction", auditCategory, [message], assetId).store()
    }
}

In either case, a message written in the context of an Asset will be displayed on the Asset Dashboard (assuming the Audit module is enabled for the Asset Dashboard in the Model Preferences).

The Custom Object Log

The Configuration (6.1-6.5) / Manage (6.6+) tab provides access to the Custom Objects log when Custom Objects are selected from the View sub-menu. This link allows you to open or save a zip archive of text files called customobject.logX, where X is a digit indicating that the log has rolled over into the next file (e.g., customobject.log1); the most current is customobject.log, without a digit. These files contain logging information in chronological order, identified by Custom Object name. The log contains full stack traces of exceptions, as well as text written to the log:

ERROR 2013-06-20 18:26:02,613 [sstreBinaryReturn,ajp-10.16.70.164-8009-6] Exception occurred in sstreBinaryReturn.groovy: java.lang.NullPointerException
    at com.axeda.platform.sdk.v1.services.extobject.ExtendedObject.getPropertyByName(ExtendedObject.java:276)
    at com.axeda.platform.sdk.v1.services.extobject.ExtendedObject$getPropertyByName.call(Unknown Source)

The Logger object in Custom Objects is a custom class, ScriptoDebuggerLogger, that is injected into the script and does not need to be explicitly imported. The following methods are available on the Logger object:

logger.info()
logger.debug()
logger.error()

All objects can be converted to a String by using the dump() function:

logger.info(context.device.dump())

Additionally, a JSON utility can be used with all SDK v2 domain objects and some SDK v1 domain objects to get a JSON pretty-print string of their attributes:

import net.sf.json.JSONArray
logger.info(JSONArray.fromObject(context.device).toString(2))

// Outputs:
[{
  "buildVersion": "",
  "condition": {
    "detail": "",
    "id": "3",
    "label": "",
    "restUrl": "",
    "systemId": "3"
  },
  "customer": {
    "detail": "",
    "id": "2",
    "label": "Default Organization",
    "restUrl": "",
    "systemId": "2"
  },
  "dateRegistered": {
    "date": 31,
    "day": 4,
    "hours": 18,
    "minutes": 39,
    "month": 0,
    "seconds": 31,
    "time": 1359657571070,
    "timezoneOffset": 0,
    "year": 113
  },
  "description": "",
  "detail": "mwc_location_1",
  "details": null,
  "gateways": [],
  "id": "12345",
  "label": "",
  "location": {
    "detail": "Default Organization",
    "id": "2",
    "label": "Default Location",
    "restUrl": "",
    "systemId": "2"
  },
  "model": {
    "detail": "mwc_location",
    "id": "4321",
    "label": "standalone",
    "restUrl": "",
    "systemId": "4321"
  },
  "name": "mwc_location_1",
  "pingRate": 10000,
  "properties": [],
  "restUrl": "",
  "serialNumber": "mwc_location_1",
  "sharedKey": [],
  "systemId": "12345",
  "timeZone": "America/New_York"
}]

Custom Object logs may be retrieved by navigating to the Configuration (6.1-6.5) / Manage (6.6+) tab and selecting Custom Objects from the View sub-menu. Click the "Log" button at the bottom of the table and save, then view customobject.log in a text editor.

Reports

Reports provide a summary of data about the state of objects on the Axeda Platform. Report titles are generally indicative of what they report, such as Missing Devices Report, Auditing Report, and Users Report. A separate license is needed in order to use the Reports feature, and new report types can only be created by a Reports administrator. To run a report, click Run from the Reports tab. You can manage Reports from the Administration tab.

These three tools together offer a full view of the state of domain objects on the Axeda Platform. Make sure to take advantage of them while troubleshooting assets and applications.