IoT Tips

Welcome to the ThingWorx Manufacturing Apps Community! The ThingWorx Manufacturing Apps are easy-to-deploy, pre-configured, role-based starter apps that are built on PTC's industry-leading IoT platform, ThingWorx. These Apps provide manufacturers with real-time visibility into operational information, improved decision making, accelerated time to value, and unmatched flexibility to drive factory performance.

This Community page is open to all users - including licensed ThingWorx users, Express ("freemium") users, or anyone interested in trying the Apps. Tech Support community advocates serve users on this site and are here to answer your questions about downloading, installing, and configuring the ThingWorx Manufacturing Apps.

A. Sign up
ThingWorx Manufacturing Apps Community: PTC account credentials are needed to participate in the ThingWorx Community. If you have not yet registered a PTC eSupport account, start with the Basic Account Creation page.
Manufacturing Apps Web portal: Register a login for the ThingWorx Manufacturing Apps web portal, where you can download the free trial and navigate to the additional resources discussed below.

B. Download
Choose a download/packaging option to get started.
i. Express/Freemium Installer (best for users who are new to ThingWorx): If you want to quickly install the ThingWorx Manufacturing Apps (including ThingWorx), use the following installer: Download the Express/Freemium Installer
ii. 30-day Developer Kit trial: To experience the capabilities of the ThingWorx Platform with the Manufacturing Apps and create your own Apps: Download the 30-day Developer Kit trial
iii. Import as a ThingWorx Extension (for users with a Manufacturing Apps entitlement - including ThingWorx commercial customers, PTC employees, and PTC Partners): ThingWorx Manufacturing Apps can be imported as ThingWorx extensions into an existing ThingWorx Platform install (v8.1.0). To locate the download, open the PTC Software Download Page and expand the following folders: ThingWorx Platform | Release 8.x | ThingWorx Manufacturing Apps Extension | Most Recent Datacode

C. Learn
After downloading the installer or extensions, begin with Installation and Configuration. Follow the steps laid out in the ThingWorx Manufacturing Apps Setup and Configuration Guide 8.2. Find helpful getting-started guides and videos available within the 'Get Started' section of the ThingWorx Manufacturing Apps Portal.

D. Customize
Once you have successfully downloaded, installed, and configured the Manufacturing Apps, begin to explore the deeper potential of the Apps and the ThingWorx Platform. Follow along with the discussion and steps contained in the ThingWorx Manufacturing Apps and Service Apps Customization Guide 8.2. Also, within the 'Get Started' page of the ThingWorx Manufacturing Apps Portal, find the "Evolve and Expand" section, featuring:
- Custom Plant Layout application
- Custom Asset Advisor application
- Global Plant View application
- Thingworx Manufacturing Apps Technical Lab with Sigma Tile (Raspberry Pi application)
- Configuring the Apps with demo data set and simulator
- Additional Advanced Documentation

E. Get help / give feedback / interact
Use the ThingWorx Manufacturing Apps Community page as a resource to find documentation, peruse past forum threads, or post a question to start a discussion! For advanced troubleshooting, licensed users are encouraged to submit support tickets to the PTC My eSupport portal.
View full tip
Let us consider that we have two properties, Property1 and Property2, on a Thing called Thing1, which we want to update using the UpdatePropertyValues service. In our service we will use the system-defined DataShape NamedVTQ, which has the field definitions name, time, value, and quality (as used below). In Thing1, create a custom service like the following:

var params1 = {
    infoTableName : "InfoTable",
    dataShapeName : "NamedVTQ"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(NamedVTQ)
var InputInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params1);

var now = new Date();

// Thing1 entry object
var newEntry = new Object();
newEntry.time = now; // DATETIME - isPrimaryKey = true
newEntry.quality = undefined; // STRING
newEntry.name = "Property1"; // STRING
newEntry.value = "Value1"; // STRING
InputInfoTable.AddRow(newEntry);

newEntry.name = "Property2"; // STRING
newEntry.value = "Value2"; // STRING
InputInfoTable.AddRow(newEntry);

var params2 = {
    values: InputInfoTable /* INFOTABLE */
};

me.UpdatePropertyValues(params2);
View full tip
Thingworx provides a library of InfoTable functions, one of the most powerful ones being DeriveFields (besides that I use Aggregate and Query a lot and ... GetRowCount). DeriveFields can generate additional columns in your InfoTable and fill them with values that can be derived from ... nearly anything! Hard-coded, based on a Service you call, based on a Property value, based on other values within the InfoTable you are adding the column to. Just remember for this Service (as well as Aggregate): no spaces between different column definitions, and use a , (comma) as separator. Here are two powerful examples:

//Calling another function using DeriveFields
//Note that the value thingTemplate is the actual value in the row of column thingTemplate!
var params = {
    types: "STRING" /* STRING */,
    t: AllItems /* INFOTABLE */,
    columns: "BaseTemplate" /* STRING */,
    expressions: "Things['PTC.RemoteMonitoring.GeneralServices'].RetrieveBaseTemplate({ThingTemplateName:thingTemplate})" /* STRING */
};

// result: INFOTABLE
var AllItemsWithBase = Resources["InfoTableFunctions"].DeriveFields(params);

//Getting values from other Properties
//'to' in this case is the value of the row in the column 'to'
//Note the use of , and no spaces
//NOTE: You can make this even more generic with something like Things[to][propName]
var params = {
    types: "NUMBER,STRING,STRING,LOCATION" /* STRING */,
    t: AllAssets /* INFOTABLE */,
    columns: "Status,StatusLabel,Description,AssetLocation" /* STRING */,
    expressions: "Things[to].Status,Things[to].StatusLabel,Things[to].description,Things[to].AssetLocation" /* STRING */
};

// result: INFOTABLE
var AllAssetsWithStatus = Resources["InfoTableFunctions"].DeriveFields(params);
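For completeness, the hard-coded case mentioned above is the simplest possible expression - every row gets the same literal value (the column name here is just an example):

//Hard-coded value for every row
var params = {
    types: "BOOLEAN" /* STRING */,
    t: AllItems /* INFOTABLE */,
    columns: "Reviewed" /* STRING */,
    expressions: "false" /* STRING */
};

// result: INFOTABLE
var AllItemsWithFlag = Resources["InfoTableFunctions"].DeriveFields(params);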
View full tip
Hi everyone,

Maybe you got my email; I just wanted to post this also here.

I built a CSS generator for a few widgets, 8 at the moment. This is on a cloud instance accessible by anyone.

Advantages:
- you don't have to style buttons and apply style definitions over and over again
- greater flexibility in styling the widget
- you don't have to write any CSS code

The way this works: you use the configurator to style the widget as you want. Use also a class to define what that widget is, or how it's styled, for example "primary-btn", "secondary-btn", etc. Copy the generated CSS code into the CustomCSS tab in ThingWorx and, on the widget, put the CustomClass specified in the configurator.

As a best practice, I'd recommend placing all your CSS code into the Master mashup. Then all your mashups that use that master will also get the CustomCSS, so the only thing you have to do to your widgets in the mashups is fill the CustomClass property with the desired generated style. Also, comment your differently styled widgets by separating them with /* My red button */ for example.

The mashups for this won't be released; this will only be offered as a service. As you'll see in the configurator, they are not that pretty - the main goal was functionality.

Here is the link to the configurator: https://pp-18121912279c.portal.ptc.io/Thingworx/Runtime/index.html#master=CSSMaster&mashup=ButtonVariables
User: guest
Password: guest123123

Give it a go and have fun! 🙂

NOTE: I will add more widgets to this in the future and will not take any requests in making it for a specific widget; I make these based on usage and styling capabilities.
View full tip
Large files can cause slow response times. In some cases large queries might produce extensively large response files, e.g. calling a ThingWorx service that returns an extensively large result set as a JSON file.

Those massive files have to be transferred over the network and require additional bandwidth - for each and every call. The more bandwidth is used, the more time is taken on the network, and the bigger the impact on performance can be. Imagine transferring tens or hundreds of MB for service calls - for each and every call, over and over again.

To reduce the bandwidth, compression can be activated. Instead of transferring MBs per service call, the server only has to transfer a couple of KB per call (best-case scenario). This needs to be configured on the Tomcat level. There is some information available in the official Tomcat documentation at https://tomcat.apache.org/tomcat-8.5-doc/config/http.html - search for the "compression" attribute.

Gzip compression

Usually Tomcat compresses content in gzip. To verify if a certain response is in fact compressed or not, the Development Tools or Fiddler can be used. The Response Headers usually mention the compression type if the content is compressed.

(Figure: Left: no compression. Right: compression on Tomcat level.)

Not so straightforward - network vs. compression time trade-off

There's however a pitfall with compression on the Tomcat side. Each response will add additional strain on time and resources (like CPU) to compress on the server and decompress the content on the client. Especially for small files this might be an unnecessary overhead, as the time and resources to compress might take longer than just transferring a couple of uncompressed KB.

In the end it's a trade-off between network speed and the speed of compressing and decompressing response files on server and client. With the compressionMinSize attribute a compromise size can be set to find the best balance between compression and bandwidth.

This trade-off can be clearly seen for small content: while the Size of the content shrinks, the Time increases. For larger content files the Time will slightly increase as well due to the compression overhead, whereas the Size can potentially be reduced by a massive factor - especially for text-based files.

The above test was performed on a local virtual machine, which basically neglects most of the network-related traffic problems resulting in performance issues - therefore the overhead in Time is only a couple of milliseconds for the compression / decompression.

The default for compressionMinSize is 2048 bytes.

High potential performance improvement

Looking at the Combined.js, the content size can be reduced significantly from 4.3 MB to only 886 KB. For my simple Mashup showing a chart with Temperature and Humidity this also decreases total load time from 32 to 2 seconds - also decreasing the content size from 6.1 MB to 1.2 MB!

This decreases load time and size by a factor of 16x and 5x - the total time until the page finished rendering decreased by a factor of almost 22x! (for this particular use case)

Configuration

To configure compression, open Tomcat's server.xml. In the <Connector> definitions add the following:

compression="on" compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json"

This will use the default compressionMinSize of 2048 bytes.
In addition to the default MIME types I've also added application/json to compress ThingWorx service call results.

This needs to be configured for all Connectors that users will access - e.g. for the HTTP and HTTPS connectors. For testing purposes I have an HTTPS connector with compression while HTTP is running without it.

Conclusion

If possible, enable compression to speed up content download for the client. However there are some scenarios where compression is actually not a good idea - e.g. when using a WAN accelerator or other network components that bring their own content compression. This not only adds unnecessary overhead but also compresses twice, which might lead to errors on the client side when decompressing the content.

Especially when dealing with large responses, compression can help decrease the impact on performance. As compressing and decompressing adds some overhead, the minimum size limit can be experimented with to find the optimal compromise in the network vs. compression time trade-off.
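For reference, here is a minimal sketch of what a compression-enabled HTTPS <Connector> could look like in server.xml. The port, protocol and SSL attributes are placeholders that will differ per installation - the three compression attributes are the point of this example:

<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json" />
<!-- keystore / certificate attributes omitted for brevity -->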
View full tip
The AddStreamEntries snippet does not offer too much information, except that it needs an InfoTable as input. It is however based on the InfoTable for the AddStreamEntry service.

To use AddStreamEntries, an InfoTable based on sourceType, values, location, source, timestamp and tags must be used.

In this example, I started with a new Thing based on a Stream template and the following DataShape.

This DataShape must be converted into an InfoTable which is used as the values parameter. It's important that the timestamp parameter has distinct values! Otherwise values matching the same timestamp will be overwritten!

We don't really need the sourceType, as ThingWorx will automatically determine the type by knowing the source and which kind of Entity Type it is.

I created a new MyStreamThing with a new service, filling the InfoTable and the Stream. The result is the following code, which will add 5 rows to the Stream:

// *** SET UP META DATA FOR INFO TABLE ***

// create a new InfoTable based on AddStreamEntries parameters (timestamp, location, source, sourceType, tags, values)
var myInfoTable = { dataShape: { fieldDefinitions : {} }, rows: [] };

myInfoTable.dataShape.fieldDefinitions['timestamp']  = { name: 'timestamp', baseType: 'DATETIME' };
myInfoTable.dataShape.fieldDefinitions['location']   = { name: 'location', baseType: 'LOCATION' };
myInfoTable.dataShape.fieldDefinitions['source']     = { name: 'source', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['sourceType'] = { name: 'sourceType', baseType: 'STRING' };
myInfoTable.dataShape.fieldDefinitions['tags']       = { name: 'tags', baseType: 'TAGS' };
myInfoTable.dataShape.fieldDefinitions['values']     = { name: 'values', baseType: 'INFOTABLE' };

// *** SET UP ACTUAL VALUES FOR INFO TABLE ***

// create new meta data
var tags = new Array();
var timestamp = new Date();
var location = new Object();
location.latitude = 0;
location.longitude = 0;
location.elevation = 0;
location.units = "WGS84";

// add rows to InfoTable (5 times)
for (var i = 0; i < 5; i++) {

    // create new values based on Stream DataShape
    var params = {
        infoTableName : "InfoTable",
        dataShapeName : "Cxx-DS"
    };
    var values = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

    // add something to the values to make them unique
    // create and add new row based on Stream DataShape
    // only a single row allowed!
    var newValues = new Object();
    newValues.a = "aaa" + i; // STRING - isPrimaryKey = true
    newValues.b = "bbb" + i; // STRING
    newValues.c = "ccc" + i; // STRING
    values.AddRow(newValues);

    // create new InfoTable row based on meta data & values
    // add 10 ms to each entry to make its timestamp unique
    // otherwise entries with the same timestamp will be overwritten
    var newEntry = new Object();
    newEntry.timestamp = new Date(Date.now() + (i * 10));
    newEntry.location = location;
    newEntry.source = me.name;
    newEntry.tags = tags;
    newEntry.values = values;

    // add the new row to the InfoTable's rows array
    myInfoTable.rows.push(newEntry);
}

// *** ADD myInfoTable (HOLDING MULTIPLE STREAM ENTRIES) TO STREAM ***

// add stream entries in the InfoTable
var params = {
    values: myInfoTable /* INFOTABLE */
};

// no return
Things["MyStreamThing"].AddStreamEntries(params);

To verify the values have been added correctly, call the GetStreamEntriesWithData service on the MyStreamThing.
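As a quick sketch of that verification step (maxItems is a standard parameter on the stream query services):

// read back the latest entries, including the values InfoTable
var params = {
    maxItems: 10 /* NUMBER */
};
var entries = Things["MyStreamThing"].GetStreamEntriesWithData(params);
logger.info("Stream returned " + entries.getRowCount() + " entries");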
View full tip
Alerts are a special type of event. Alerts allow you to define rules for firing events. Like events, you must define a subscription to handle a change in state. All properties in a Thing Shape, Thing Template, or Thing can have one or more alert conditions defined. You can even define several of the same type of alert. When an alert condition is met, ThingWorx throws an event. You can subscribe to the event and define the response to the alert using JavaScript. Events also fire when a property alert is acknowledged and when it goes out of alert condition.

Alert Types

Alerts have conditions which describe when the alert is triggered. The types of conditions available depend upon the property type. For example, string alerts may be triggered when the string matches pre-set text. A number alert may be set to trigger when the value of the number is within a range.

- EqualTo: Alert is triggered when the defined Value is reached. Applies to Boolean, DateTime, Infotable (in regard to number of rows), Integer, Long, Location, Number, and String base types.
- NotEqualTo: Alert is triggered when the defined Value is not reached. Applies to Boolean, DateTime, Infotable (in regard to number of rows), Integer, Long, Location, Number, and String base types.
- Above: Alert is triggered when the defined Limit is exceeded or met (if the Limit is included). By default, the Limit is included. Applies to DateTime, Infotable, Integer, Long, and Number base types.
- Below: Alert is triggered when the alert value is below the defined Limit or meets it (if the Limit is included). By default, the Limit is included. Applies to DateTime, Infotable, Integer, Long, and Number base types.
- InRange: Alert is triggered when a value is between a defined range. By default, the minimum value is included, but the maximum can be included as well. Applies to DateTime, Integer, Long, and Number base types.
- OutofRange: Alert is triggered when a value is outside a defined range. By default, the minimum value is included, but the maximum can be included as well. Applies to DateTime, Integer, Long, and Number base types.
- DeviationAbove: Alert is triggered when the property value minus the alert Value is greater than the alert Limit ((property value - alert value) > alert Limit). If the Limit is included, the alert is triggered when the property value minus the alert Value is greater than or equal to the alert Limit ((property value - alert value) >= alert Limit). By default, the Limit is included. Applies to DateTime, Integer, Long, Location, and Number base types.
- DeviationBelow: Alert is triggered when the property value minus the alert Value is less than the alert Limit ((property value - alert value) < alert Limit). If the Limit is included, the alert is triggered when the property value minus the alert Value is less than or equal to the alert Limit ((property value - alert value) <= alert Limit). By default, the Limit is included. Applies to DateTime, Integer, Long, Location, and Number base types.
- Anomaly: Alert is triggered when the property value falls outside of an expected pattern as defined by a predictive model. Applies to Integer, Long, and Number base types.

Alert types are specific to the data type of the property.
Properties configured as the following base types can be used for alerts: Boolean, Datetime, Infotable, Integer, Location, Number, String.

Creating an Alert

When creating an alert:
- You can set it to be enabled or disabled.
- Alerts must have a ThingWorx-compatible name and can optionally contain a description.
- You must set the limit(s) to determine when the event fires. If an Include Limit is included, the event fires when the Limit Condition is met; not including the Limit causes the event to fire when the Limit Condition is surpassed.
- The priority is a metadata field that enables the addition of a priority. It does not impact the Event/Subscription handling or sequence, because the system fires events off asynchronously.

Steps to create or modify an Alert:
1. Select an existing Property or create a new Property for a Thing, Thing Shape, or Thing Template for which to create/update the Alert.
2. Click Manage Alerts.
3. Click the New Alert drop-down and select the appropriate Alert Type. (Note: the available fields will vary depending on the data type of the Property.)
4. Deselect Enabled if you do not wish to make the Alert enabled at the present time (the Alert is enabled by default).
5. Provide a Name and optional Description for the Alert.
6. Enter a Limit (numeric properties).
7. Select Include Limit? if the value entered in the Limit field should trigger the Alert.
8. Select the appropriate Priority. (The Priority is a metadata field for searching and categorization only. It does not affect the order of processing, CPU or memory usage.)
9. After defining an Alert, you can click New Alert to add additional alerts of either the same or a different condition. You can also click Add New to add additional alerts of the same condition.
10. When all Alerts have been created, click Update.
11. Click Done.
12. Once all Properties have been updated as needed, click Save.

Once Alerts are defined, they appear on the Properties page (while in Edit mode).

After an Alert is defined, a Subscription to that Alert can be configured to launch the appropriate business logic, such as notifying a user of an Event through email or text message.

Monitoring Alerts

When an Alert condition is met, ThingWorx fires off an Alert. You can create a Subscription to the Alert so that you are automatically notified when an Alert is triggered. Alerts are written to the alert history file and can be viewed through the Alert Summary and Alert History Mashups. The system tracks acknowledged and unacknowledged alerts. Alerts do not fire redundant events. For example, if a numeric property has a rule defined that generates an alert when the value is greater than 50, and a value = 51, an alert is generated and an alert event will fire. If another value comes in at 53 before the original alert is acknowledged, another event will not be fired because the current state is still greater than 50.

The Alert History and Alert Summary streams provide functionality to monitor alerts in the system. Alert History is a comprehensive log that records all information recorded into the alert stream, where the data is stored until manually removed.

The Alert Summary provides the ability to filter by all alerts, unacknowledged alerts, or acknowledged alerts. You can also acknowledge alerts on a selected property or all alerts from a particular source (thing).

This information can be retrieved using Scripts as well, so you can create your own Alert Summary and History mashups.

From the ThingWorx header, choose Monitoring > Alert History. All Alerts are listed here.
1. Click the Alert Summary.
2. Click the Unacknowledged tab to view alerts that have not been acknowledged.
3. Choose to acknowledge an alert on a property or on the source.
4. Type a message in the corresponding field.
5. Click Acknowledge.

For each alert, the following displays:
- Property name.
- Source thing – lists the thing that contains this property with the alert.
- Timestamp – indicates when the alert was triggered.
- Name and type of alert.
- Duration – details how long the alert has been active.
- AckBy – indicates if the alert has been acknowledged and, if so, by whom and when.
- Message – defaults to the condition but is overwritten with the acknowledge message if one exists.
- Alert description.

The Alert History screen displays all Alerts that were once in an alert condition but have moved out of that alert condition. A Data Filter is provided at the top of the mashup to more easily find a particular Source, Property, or Alert.

The Alert History report is a ThingWorx Mashup created using standard ThingWorx functionality. This means that any developer has the ability to re-create this report or a modification of this report.

Acknowledging Alerts

An acknowledgement (ack) is an indication that someone has seen the alert and is dealing with it (for example, low helium in an MRI machine and someone is filling it). Alert History shows when alerts were acknowledged and any comments.

You can acknowledge an alert on a property or on the source. A source acknowledgment acknowledges all alerts on the source Thing for the selected alert in Monitoring > Alert Summary. A property acknowledgment (ack) only acknowledges the alerts on the property for the selected alert in Alert Summary.

For example, you create a Thing with two properties that have alerts set up. You put both properties in their alert states. View Alert Summary and select the Unacknowledged tab. You should see two alerts. Select one, and do a property acknowledgement. The alert you selected moves to the Acknowledged tab and is removed from the Unacknowledged tab. Put both properties in their alert states again, select one of the alerts on the Unacknowledged tab, and this time do a source acknowledgement. In this case, both alerts move to the Acknowledged tab, even though you only selected one of them.

For more information about Alerts, click here. To view a tutorial video on alerts, click here. Refer to this article for best practices affecting alerts.
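Since the response to an Alert is defined in JavaScript via a Subscription, here is a minimal sketch of such a subscription body. source, sourceProperty and eventTime are standard subscription variables, AcknowledgeAlert is the standard Thing service for property-level acknowledgement, and the auto-acknowledge message is just an example:

// runs when the subscribed Alert event fires
logger.warn("Alert fired on " + source + "." + sourceProperty + " at " + eventTime);

// example response: acknowledge the property alert programmatically
Things[source].AcknowledgeAlert({
    property: sourceProperty /* STRING */,
    message: "Auto-acknowledged by subscription" /* STRING */
});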
View full tip
This is using the simplest structure to do a loop through an InfoTable. It's simple, but it avoids having to use row indexes and cleans up the code for readability as well.

//Assume incoming InfoTable parameter named "thingList"
for each (row in thingList.rows) {
    // Now each row is already assigned to the row variable in the loop
    var thingName = row.name;
}

You can also nest these loops (just use a different variable from "row") - see the sketch below. Also important to note: do not add or remove row entries of the InfoTable inside the loop. In that case you may end up skipping or repeating rows in the loop, since the indexes will be changed.
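A minimal sketch of the nesting mentioned above, assuming a second hypothetical InfoTable parameter called propertyList:

//Outer loop over thingList, inner loop over propertyList
for each (row in thingList.rows) {
    for each (propRow in propertyList.rows) {
        // a different variable (propRow) keeps the outer row accessible
        logger.info(row.name + " -> " + propRow.name);
    }
}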
View full tip
Original Post Date:     June 6, 2016   Description: This is a video tutorial on creating a DataTable with a DataShape, and adding and retrieving an entry.  
View full tip
This blog post provides information on the technical changes in ThingWorx 8.0: New Technical Changes in ThingWorx 8.0.0. Here are some common questions and answers in regards to the licensing change:

Does that mean all the extensions in the marketplace won't be free anymore? It depends on the extension. The main extensions we are licensing for 8.0 are Navigate, Manufacturing and Utilities. We are not licensing the MailExtension on the marketplace, for example. Partners and customers can still import their custom apps/extensions.

If TWX connects to an RP (remote platform) which has its own subscription-based Flexera license (InService, for example), how does this interaction work for license validation? Does a server-to-server connection count as a user login directly to the PTC product? License files are per TWX instance. For RP, each would have their own license files. User counts (if entitled and enforced) are generic to each system.
View full tip
Those who have been working with ThingWorx for many years will have noticed the work done around ingress stress testing and performance optimization. Adding InfluxDB as a time-series data persistence provider really helped level up these capabilities while simultaneously decreasing the overall resources required by the infrastructure. However, with this ease comes a hidden challenge: the query and data processing performance needed to work the data into something useful.

Often It's Too Much Data

In general, most customers that I work with want to collect far too much data - without knowing what it will be used for, or what processing will be required in order to make it usable and useful. This is a trap in general with how many people envision IoT projects, being told by infrastructure providers that cloud storage and compute resources are abundant and cheap and that they should get as much data as possible. This buildup of data means that more effort needs to be spent working it into something useful (data engineering/feature extraction) and addressing common data issues (quality, gaps, precision, etc.). This might be fine for mature companies with large data analytics teams; however this is a makeup that I've only seen in the largest of our customers. Some advice: figure out what you need and how you'll use it, and then collect that. Work on extracting value today rather than hoping that extra data collected now will provide some insights years from now.

Example - Problem Statement

You got your Thing Model designed and edge devices connected. Now you've got data flowing in and being stored every 5 seconds in InfluxDB. Great progress! Now on to building the applications which cover the various use cases. The raw data is most likely going to need to be processed, and potentially even significantly transformed, into other information in order to make it useful. Turning a "powered on and running" BOOLEAN into an "hour meter" INTEGER is a simple example. Then you may need to provide a report showing equipment run-time hours by day over a month. The maintenance team may also have asked to look for usage patterns which lead to breakdowns, requiring extracting other data points from the initial one, like number of daily starts, average daily run time, and average time between restarts.

The problem here is that unless you have prepared these new data points and stored them as well (say in a Stream), you are going to have to build these data sets on the fly, and that can be time and resource intensive and not give you the response time expected. As you can imagine, repeatedly querying and processing large volumes of unchanging raw data is going to have resource and time implications - so this is why data collection and data use need to be thought about separately.

Data Engineering

In the above examples, the key is actually creating new data points which are calculated progressively throughout normal operation. This not only makes the information that you want available when you need it - in the right format - but it also significantly reduces resource requirements by avoiding constantly reprocessing raw data. It also helps with managing data purging, because as you create and store usable insights, you can eventually just archive away your old raw data streams.

Direct Database Queries vs. Thingworx Data Services

Despite the above being a rule of thumb, sometimes a simple, well-structured database query can get you exactly what you need and do so quite quickly.
This is especially true for InfluxDB when working with extremely large time-series datasets. The challenge here is that ThingWorx persistence providers abstract away the complexity of writing one's own database queries, so we can't easily get at the database's raw power and are forced to query back more data than needed and work it into a usable format in memory (which is not fast).

Leveraging the InfluxDB API using the ContentLoader Technique

As InfluxDB's API is 100% REST, we can access it using the built-in ThingWorx ContentLoader services. Check out this demonstration and explanation video where I talk about how to interact directly with InfluxDB in order to crush massive time-series data and get back much more usable and manageable data sets. It is important to note that you should use a read-only database user for this, as you should never modify the ThingWorx databases - to avoid untested scenarios which may lead to data corruption.

Optimizing ThingWorx query performance with the InfluxDB REST API - YouTube
InfluxToolBox ThingWorx demo project (by T. Wobben)
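As a rough sketch of the technique (not the exact code from the video): a ThingWorx service can call the InfluxDB 1.x /query endpoint with GetJSON. The host, database, measurement and credentials below are placeholders, and the user should be read-only as noted above:

// let InfluxDB do the heavy lifting: daily means over 30 days instead of raw rows
var query = 'SELECT MEAN("value") FROM "temperature" WHERE time > now() - 30d GROUP BY time(1d)';

var params = {
    url: "http://influx-host:8086/query?db=thingworx&q=" + encodeURIComponent(query) /* STRING */,
    username: "readonly_user" /* STRING */,
    password: "secret" /* STRING */
};

// parsed JSON response containing the aggregated series
var result = Resources["ContentLoaderFunctions"].GetJSON(params);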
View full tip
We will host a live Expert Session: "Thingworx Mashup 101 - Do's and Don'ts" on February 24th, 13h30 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Thingworx Mashup 101 - Do's and Don'ts
Date and Time: February 24th, 13h30 EST
Duration: 1 hour
Host: Aanjan Ravi - Technical Product Manager
Registration Here: https://www.ptc.com/en/events/thingworx-mashup-101

Description: This session covers the most common and useful tips about how to correctly use Mashup builder, Widgets and Layouts – and what to avoid – to create applications with good principles of UI/UX that are easier to maintain.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Thingworx Active Active Clustering - This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link
Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts - This session highlights the key points you should evaluate to properly plan your upgrade to Thingworx 9. Recording Link
Top 5 items to check for Thingworx Performance Troubleshooting - How to troubleshoot performance issues in a Thingworx Environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
View full tip
We will host a live Expert Session: Thingworx Navigate Component Based App Development on Wednesday 09/30, 08:00 AM Eastern Daylight Time.

Please find below the description of the expert session as well as the link to register.

Expert Session: Thingworx Navigate Component Based App Development
Date and Time: Wednesday 09/30, 08:00 AM Eastern Daylight Time
Duration: 1 hour
Host: Pratibha Bhatnagar
Description: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of Navigate Component Based app development and how to leverage this for your use cases.

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'.

You can also suggest topics for upcoming sessions using this small form.
View full tip
Maintain cookies and security information by implementing session parameters in your application.

Guide Concept

This project will introduce creating and accessing session data for a User logged into your application. Session data consists of global, session-specific parameters that can be used on the Client and Server side.

Following the steps in this guide, you will be able to access the logged-in User's information and their set values. We will teach you how to access session data that can later be used to provide Users with unique experiences and a more robust application.

You'll learn how to:
- Create Session Data
- Access Stored Session Data

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete this guide is 30 minutes.

Step 1: Completed Example

Download the completed files for this tutorial: Sessions.xml.

The Sessions.xml file contains a completed example of session parameters. Utilize this file to see a finished example and return to it as a reference if you become stuck during this guide. Keep in mind, this download uses the exact names for entities used in this tutorial. If you would like to import this example and also create entities on your own, change the names of the entities you create.

1. In the bottom-left of Composer, click Import/Export.
2. Click IMPORT.
3. In the Import pop-up, keep the default values and click Browse. Navigate to the Sessions.xml file you downloaded.
4. Select it and click Open.
5. Click Import in the Import pop-up.
6. Click Close to close the pop-up.

Step 2: Create Session Parameters

1. Click the Browse folder on the left-hand side. Under System, select Subsystems.
2. Filter for UserManagementSubsystem and open it in Edit mode.
3. Select Services.
4. Filter for the AddSessionShape Service.
5. Click the Play button to open the Execute window.
6. Enter UserLogin (the provided ThingShape) in the name input field.
7. Click Execute.
8. Click Done.

You've just created your first Session Parameter. These values are used for content held in a cookie for a website, or information that might be static for the User or session.

Best Practice: For information that will be static for the entire application and not based on the session, use a database option or a stored value in a Thing.

Step 3: Access Session Parameters

1. Click the Browse folder on the left-hand side. Under System, select Resources.
2. Filter for CurrentSessionInfo and open it.
3. Select Services.
4. Filter for the GetGlobalSessionValues Service.
5. Click the Play button to open the Execute window.
6. Click Execute. You will notice the result is a list of the properties in the UserLogin ThingShape. Your result might differ from mine.
7. Click Done.

NOTE: There is a difference between Session parameters and Mashup parameters. Mashups can have input values that will be used for services or content of that Mashup ONLY. Session parameters are based on the user using the application in a session. This data will be accessible throughout the application and last until they have completed their usage. This guide shows how to create Session parameters that are considered global session parameters.

Step 4: Next Steps

Congratulations!
You've successfully completed the Create Session Parameters guide, and learned how to:
- Access a logged-in user's information and their set values
- Use session data to provide users with unique experiences and a more robust application

Learn More

We recommend the following resources to continue your learning experience:

Capability: Build - Guide: Create Custom Business Logic
Capability: Build - Guide: Data Model Introduction

Additional Resources

If you have questions, issues, or need additional information, refer to:

Community - Developer Community Forum
Support - Session Parameter Help Center
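As a minimal script-side sketch of Step 3, a custom service can read the same information the Execute window shows. GetCurrentUser and GetGlobalSessionValues are services on the CurrentSessionInfo resource named above; the columns returned mirror whatever session shape (here UserLogin) has been added:

// who is logged in for this session
var userName = Resources["CurrentSessionInfo"].GetCurrentUser();

// the global session values - columns mirror the UserLogin session shape
var session = Resources["CurrentSessionInfo"].GetGlobalSessionValues();

logger.info("Session for " + userName + " returned " + session.getRowCount() + " row(s)");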
View full tip
Thingworx extensions are a great place to explore UI ideas and get that special feature you want.

Here is a quick primer on Widgets (note: there is comprehensive documentation here which explores the complete development process). The intention is not to explain every detail, but just the most important points to get you started. I will explore more in additional posts. I also like images rather than lots of words to read. I have attached the simple Hello World example as a starting point, and I'm using Visual Code as my editor of choice.

The attached zip, when unzipped, will contain a folder called ui and a metadata xml file. Within the ui folder there needs to be a folder that has the same name as the widget name. In this case it's helloworld.

Metadata file - The 3 callouts are the most important:
- Package version: the current version; each time a change is made the value needs to be updated.
- name: a unique name used throughout the widget definition.
- UIResources: the source locations for the widget definition.

The UIResources files are used to define the widget in the ide (Composer) and runtime (Mashup). These 2 environments, ide and runtime, have matching pairs of css (cascading style sheets) and js (javascript) files.

The js files are where most of the work is done. There are a number of functions used inside the javascript file, but just to get things going we will focus on the renderHtml function. This is the function that will generate the HTML to be inserted at the widget location.

renderHtml (helloWorld.ide.js) - In this very simple case the renderHtml in the runtime (helloWorld.runtime.js) is the same as in the ide.

Hopefully you can see that the HTML is pretty easy: just some div and span tags with some code to get the Property called Salutation.

So we have the very basics, and we are not worried too much about all the other things not mentioned. To get the simple extension into Thingworx we use the Import -> Extensions menu option. The ui folder and metadata.xml file need to be zipped up (as per attachment). Below is an animated gif that shows how to import and use the widget.

Very Quick Steps to import and use in mashup. Video Link : 2147

The next blog will explore functions and allow a user to click the label and display a random message. This will show how to use events.

Widget Extensions Click Event
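For orientation, a minimal sketch of what the helloworld ide.js could look like, following the standard widget pattern from the ThingWorx documentation (the widgetProperties values and the Salutation default are assumptions for this example):

TW.IDE.Widgets.helloworld = function () {

    // describes the widget to Composer: display name and its properties
    this.widgetProperties = function () {
        return {
            name: 'Hello World',
            description: 'A minimal example widget',
            properties: {
                Salutation: { baseType: 'STRING', defaultValue: 'Hello World' } // assumed default
            }
        };
    };

    // generates the HTML inserted at the widget's location
    this.renderHtml = function () {
        return '<div class="widget-content widget-helloworld">' +
               '<span>' + this.getProperty('Salutation') + '</span>' +
               '</div>';
    };
};

The runtime file (helloWorld.runtime.js) registers the same renderHtml under TW.Runtime.Widgets.helloworld instead of TW.IDE.Widgets.helloworld.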
View full tip
A common issue that is seen when trying to deploy, design or scale up a ThingWorx application is performance. Slow response, delayed data and the application stopping have all been seen when a performance problem either slowly grows or suddenly pops up. There are some common themes that are seen when these occur, typically around application model or design. Here are a few of the common problems and some thoughts on what to do about them or how to avoid them.

Service Execution

This covers a wide range of possibilities and is most commonly seen when trying to scale an application. Data access within a loop is one particular thing to avoid (see the sketch below). Accessing data from a Thing, other service or query may be fast when only testing it on 100 loops, but when the application grows and you have 1000, suddenly it's slow. Access all data in one query and use that as an in-memory reference. Writing data to a data store (Stream, DataTable or ValueStream) and then querying that same data in one service can cause problems as well. Run the query first, then use all the data you have in the service variables.

To troubleshoot service executions there are a few methods that can be used. Some of them will not be practical for a production system, since it is not always advisable to change code without testing first.
- Use browser development tools to see the execution time of a service. This is especially helpful when a mashup is slow to load or respond. It allows quickly identifying which of multiple services may be the issue.
- Add logging in a service. Once a service is identified, adding simple logging points in the service can narrow down what code in the service causes the slowdown (it may be another service call). These logging statements show up in the script logs with timestamps (you can also log the current time within the logging statements).
- Use the test button in Composer. This is a simple one, but if the service does not have many parameters (or has defaults) it's a fast and easy way to see how long a service takes to return.
- When all else fails you can get thread dumps from the JVM. ThingWorx Support created an extension that assists with this. You can find it on the Marketplace with instructions on how to use it. You can manually examine the output files or open a ticket with support to allow them to assist. Just be careful of doing memory dumps; they are much larger, harder to analyse and take a lot of memory. https://marketplace.thingworx.com/tools/thingworx-support-tools

Queries

These of course are services too, but a specific type. Accessing data in ThingWorx storage structures or from external sources seems fairly straightforward but can be tricky when dealing with large data sets. When designing and dealing with internal platform storage, refer to this guide as a baseline to decide where to store data: Where Should I Store My Thingworx Data?

NEVER store historical data in InfoTable properties. These are held in memory (even if they are persistent), and as they grow so will the JVM memory use, until the application runs out of it. We all know what happens then.

Finally, one other note that causes occasional confusion: the setting on a query service or standard ThingWorx query service that limits the number of records returned. This is how many records are returned from the service at the end of processing, not how many are processed or loaded in memory. That number may be much higher and could cause the same types of issues.
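Tying the Service Execution and Queries notes together, here is a sketch of the query-outside-the-loop pattern; the DataTable name and the assetName field are hypothetical:

// ANTI-PATTERN: calling QueryDataTableEntries once per loop iteration
// multiplies the data-access cost by the row count

// PREFERRED: query once, then reference the result in memory
var allEntries = Things["AssetDataTable"].QueryDataTableEntries({
    maxItems: 10000 /* NUMBER */
});

// build an in-memory lookup keyed by a field (assetName is hypothetical)
var lookup = {};
for each (entry in allEntries.rows) {
    lookup[entry.assetName] = entry;
}

// the loop itself now does no data access at all
for each (row in thingList.rows) {
    var match = lookup[row.name];
    if (match) {
        // work with match here
    }
}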
Subscriptions and Events

This is similar to services, however there is an added element: frequency. Typical events are data change and timers/schedulers. This again is often an issue only when scaling up the number of Things or the amount of data that needs to be referenced. A general reference on timers and schedulers can be found here, which also describes some of the event processing that takes place on the platform: Timers and Schedulers - Best Practice

For data change events, be very cautious about adding these to very rapidly changing property values. When a property is updating very quickly, for example two times each second, the subscription to that event must be able to complete in under 0.5 seconds to stay ahead of processing. Again, this may work for 5-10 Things with properties but will not work with 500, due to resources, speed and the need to briefly lock the value to get an accurate current read. In these cases any data processing should be done at the edge when possible (or in the originating system) and pushed to the platform in a separate property or service call. This allows for more parallel processing since it is de-centralized.

A good practice for allowing easier testing of this type of subscription code is to take all of the script/logic and move it to a service call, then pass any of the needed event data as parameters to the service. This allows for easier debugging, since the event does not need to fire to make the logic execute. In fact, it can essentially be tested stand-alone via the test button in Composer.

Mashup Performance

This one can be very tricky, since additional browser elements and rendering can come into play. Sometimes service execution is the root of the issue (reviewed above); other times it is UI elements and design that cause the slowdown. The Repeater widget is a common culprit. The biggest thing to note here is that each repeater will need to render every element that is repeated, and all of the data and formatting for each of those widgets in the repeated mashup. So any complex mashup that is repeated many times may become slow to load. You can minimize this to a degree based on the Load/Unload setting of the widget and when the slowness is more acceptable (when loading or when scrolling). When a mashup is launched from Composer it comes with some debugging tools built in to see errors and execution. Using these together with browser debug tools can be very helpful.

Scaling an Application

When initially modeling an application, scale must be considered from the start. It is a challenge (but not impossible) to modify an application after deployment or design to be very efficient. Many times, new developers on the ThingWorx platform fall into what I call the .NET trap. Back when .NET was released, one of the quotes I recall hearing about its inefficiencies was "memory is cheap": it was more cost efficient to purchase and install more memory than to take extra development time to optimize memory use. This was absolutely true for installed applications where all of the code was compiled and stored on every system. Web-based applications are not quite as forgiving, since most processing and execution is done on the single central web server. Keep this in mind especially when creating Shapes, Templates and Subscriptions. While you may be writing one piece of code, when this code is repeated on 1,000 Things they will all be in memory and all be executing this code in parallel.
You can quickly see how competition for resources, locks on databases and clean access to in-memory structures can slow everything down (and just think when there are 10,000 pieces of that same code!!). Two specific things around this must be stated again (though they were covered in the above sections). Data held in properties has fast access since it is in JVM memory, but it is held in memory for each individual Thing: holding 5 MB of information in one Thing seems small, yet loading 10,000 Things means instant use of 50 GB of memory!! Next, execution of a service. When 10 Things are running, a service execution takes 2 seconds. Slow, but not too bad, and it may not be too noticeable in the UI. Now 10,000 Things compete for the same data structure and resources. I have seen execution time jump to 2 minutes or more.

Aside from design, the best thing you can do is TEST on a scaled-up structure. If you will have 1,000 Things next year, test your application early at that level of deployment to help identify any potential bottlenecks early. Never assume more memory will alleviate the issue. Also do NOT test scale on your development system. This introduces edits, changes and other variables which can affect actual real-world results. Have a QA system set up that mirrors a production environment and simulate data and execution load.

Additional suggestions are welcome in comments, and I will likely update this as additional tool and platform updates change things.
View full tip
We will host a live Expert Session: "Thingworx Navigate 3D Viewer" on October 9th at 11:00 AM EST.

Please find below the description of the expert session and the registration link.

Expert Session: Thingworx Navigate 3D Viewer
Date and Time: Friday, October 9th, 2020 11:00 am EST
Duration: 1 hour
Host: Robbie Morrison, Product Management Senior Manager
Description: Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of the Navigate 3D Viewer and how to leverage it in your use cases.

Register here

Existing recorded sessions can be found on the support portal using the keyword 'Expert Sessions'. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword 'Expert Sessions' in the PTC support portal search.

Navigate 9.0 – What's New? - This session is the intro of a series that will cover new capabilities of the recent Navigate 9 release and the value that each can bring to your implementation. Further sessions will cover the details of some of them. Recording Link
Top 5 items to check for Thingworx Performance Troubleshooting - How to troubleshoot performance issues in a Thingworx Environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
Thingworx 9.0 Component Based App Development - Following the series of new capabilities released with Navigate 9.0, this session will focus on the details of Navigate Component Based app development and how to leverage this for your use cases. Recording Link
View full tip
Reminder (and for some, announcement!) that the new ThingWorx 8 sizing guide is available here  https://www.ptc.com/en/support/refdoc/ThingWorx_Platform/8.0/ThingWorx_Platform_8_x_Sizing_Guide
View full tip
This is a lessons-learned write-up that I proposed to present at LiveWorx, but it didn't make the cut; I did want to share it with all the developer folks, though. Please note that this was written before we added Influx and Micro Services, which help improve the landscape. Oh, and it's long 🙂

------------------------------------------

This is written as of Thingworx 8.2.

Different ways to scale Data and Processing with Thingworx

Two main issues are targeted:
- Data Storage
- Platform processing

Data Storage in Thingworx

Background

The issue around storage is that, due to the limited indexing in the Persistence Provider, the actual values according to the DataShape live in a JSON blob. So when you look in the Persistence Provider you'll see: Source, sourceType, Location, entityID, Datetime, Tags, ValueJSONBlob.

The first six carry an index; the JSON blob, which holds the values according to the DataShape, does not. It can read something like {value1:firstvalue,value2:secondvalue,value3:[ .... ]} etc. This means that any queries beyond the standard keys – date/time, entityID (name of Stream or DataTable), source, sourceType, tags, location – become very inefficient, because the platform will query the records and then apply the DataShape query server side. Potentially this can cause you to pull way more records over from the Persistence Provider to the Platform than intended. I.e., a query on Temperature in my data that should return 25 records for a given month will perhaps first return 250K records and then filter down to 25.

The second issue with storage is that all Streams are stored in one table in the Persistence Provider, using entityID as an additional key to figure out which Stream the record is for. This means that your record count per table goes up much faster than you'd expect. I.e., if I have defined 5 ValueStreams for 5 different asset types, ultimately all that data is still in one table in the Persistence Provider. So if each has 250K records, a query against a ValueStream will in actuality be a query against 1.25 million records. I think both of these issues are well known and documented by now, and Dev is working on it.

Solution approaches

So if you are expecting to store a lot of records, what can you do?

Archive
The easiest solution is to keep a limited set and archive off the rest of the data, preferably into a client's data lake that is not part of the Persistence Provider. Remember: archiving from one Stream to another Stream is not a solution! Unless ... you use Multiple Persistence Providers.

Multiple Persistence Providers
Thingworx does support multiple Persistence Providers for storing data. So you can spin up extra schemas (potentially even in the same database server) to be the store for additional Persistence Providers, which are then mapped to a specific Stream/ValueStream/DataTable/Blog/Wiki. You still have to deal with the query challenge, but you now have fewer records per data store to query through.

Direct queries in the Persistence Provider
If you have full access to your Persistence Provider (NOTE: PTC Cloud Services does NOT provide this right now), you can create an additional JDBC connection to the Persistence Provider and query the stream directly. This allows you to query on the indexed records with, in addition, a text search through the JSON blob, all server side. With this approach, a query that at times took several minutes on the Platform side using QueryStreamEntries took only a few seconds. The biggest saving was the fact that you didn't have to transfer so many records back to the Platform server. A sketch follows below.
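As an illustration only - the physical table and column names below are placeholders matching the columns listed in the Background section, not a documented schema (it differs per persistence provider and version), and a read-only database user should be used:

-- one round trip: index-friendly filters plus a server-side text match on the JSON blob
SELECT entity_id, time, source, tags, value_json
FROM   stream
WHERE  entity_id = 'MyValueStream'
  AND  time BETWEEN '2018-01-01' AND '2018-01-31'
  AND  value_json LIKE '%"Temperature"%';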
Additional Schemas
You can create your own schema (either within the Persistence Provider DB - again, not supported by PTC Cloud Services) in a database server of your choice and connect to it with JDBC/REST. (NOTE: I believe PTC Cloud Services may offer a standalone server with actual root access.) This does mean you have to create your own getter/setter services to retrieve and store information, plus you'll need some event to trigger the store (like DataChange). This approach right now is probably a common, if not best-practice, recommendation if historical information is required for the solution and the record count looks to go over 1 million records and can't just be queried based on timestamp.

Thingworx Event Processing

Background

Thingworx will consistently deal with many Things that have many Properties, and often there will be Alerts/Rules that need to run based on value changes. When you are using straight-up Alerts based on a limit value, this isn't such a challenge, but what if you need to add some latch/lock/debounce logic, need to check against historical values, or check multiple conditions? How can you design something that can handle evaluating these complex rules, hold some historical or derived values, avoid race conditions and be responsive?

Potential Problems
- Race conditions: multiple events may need to update the same permanent or temporary store for the determination of a condition.
- Duplicates: if you don't have some 'central' tracker, you may possibly trigger the same rule multiple times.
- Slow response: you are potentially triggering thousands or more events at the same time; depending on how you've set up your logic, your response could become so slow that the next event will be firing before you finish, and you'll overload the system.
- System queue overrun: if your events trigger faster than you can handle them, you will slowly build up and finally overrun the event queue.
- System thread count overrun: based on the number of cores in your system, you can overrun the number of threads that can be handled.
- Connection pool overrun: each read/write to a Stream/DataTable, but also Property persistence, is a usage of the connection pool to your Persistence Provider. If you fire a lot at once, you can stack up requests and cause deadlocks.
- System out of memory: potentially, in handling the events, you are depending on in-memory information; if that is something that grows over time, you could hit an 'Out of Memory' issue.

Solution Approaches

Batch processing
Especially with agents/sources that write a set of property updates, you potentially trigger multiple threads that all may need the same source information or update the same target information. If you are able to process this as a batch, you can take all values into account and only process it as a single event, with just a single read from source or a single write to target. This will be difficult to achieve when using something like Kepserver, unless it is transferring as something non-standard like MQTT. But if you can have the data come in as a single REST POST, this approach becomes possible.

In-Memory vs. Table/Stream Storage
To speed up response time, you can put necessary information into memory instead of into a DataTable or Stream. For example, if you need the most current received record together with some historical values, you could:
- Use a Stream but carry the current value separately, because the Stream updates asynchronously
In Memory vs. Table/Stream Storage
To speed up response time, you can keep the necessary information in memory instead of in a DataTable or Stream. For example, if you need the most recently received record together with some historical values, you could:

- Use a Stream, but carry the current value separately, because the Stream updates asynchronously (i.e. adding the current value to the Stream doesn't guarantee that it has already been committed when you read from the Stream).
- Use a DataTable, because writes are synchronous, but this can make execution slow, especially once you reach 100K records or more.
- Use an InfoTable or JSON Property: this information is in memory, runs the fastest, and is synchronous. Note that in some speed testing, a JSON object was faster than an InfoTable and far faster than a DataTable.

One challenge is that you have to do a full overwrite if you need to persist this information. Doing a full write opens up the danger of a race condition if the information is being updated by multiple threads at the same time. If it is OK to keep the information in memory only, then an InfoTable is nice because you can just add/delete rows in memory. I sadly haven't figured out yet how to do this directly on a JSON object property :(. It is important to consider disaster recovery scenarios if you are keeping this information in memory only.

Centralized Processing vs. Distributed Processing
Think about whether you can execute some logic within the context of the Entity itself (logic within the ThingShape/ThingTemplate) vs. having it fire into a centralized service (sync or async) on a separate Entity.

Scheduler or Timer
As much as Schedulers and Timers are often the culprit of too many threads running at the same time, a well-set-up piece of logic triggered by a Scheduler or Timer can be the solution to avoid race conditions. If you are working with multiple Timers, you may want to consider multiple Schedulers that trigger at specific times, which lets you eliminate concurrence (several Timers firing at the same time). Think about staggering execution if necessary, by using the hated, looked-down-upon ... but oft necessary ... pause() function!

Synchronous vs. Asynchronous
Asynchronous execution can give great savings on the processing speed of a thread, since it kicks off the async parts and continues on. The terrible drawback: you can't tell when it has finished, nor what the resulting output is. As you mix and match sync/async vs. processing speed, you may need to consider other ways to detect when an async process finishes - for example, a Property elsewhere that triggers a DataChange.

Interesting examples

Batch Process
With one client there was a batch process that would post several hundred results at once, all of which had to be evaluated. The evaluation also relied on historical information. So with some logic, these properties were processed as a batch, related to each other, and compared against information held in memory, in addition to historically storing the incoming information. This utilized several in-memory objects and ultimately also an eval() statement, for the greatest flexibility and performance.

Mix and Match
Another client had a requirement for latch/lock and escalation logic. Some information needed to be persisted; however, because all of the several hundred properties per asset come in through Kepware once a second, it also had to be very fast. The approach here was to have the DataChange place information into an in-memory InfoTable, which was then picked up by a separate latch/lock/escalation Timer that moved it over to the persistent side. This allowed for instantaneous processing of DataChange and Alerts, but also more deliberate, persistent processing of the latch/lock/escalation logic. A rough sketch of this pattern follows below.
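As a minimal sketch of that buffer-and-drain pattern (the property name 'buffer' and the Stream name 'historyStream' are made up for illustration; CreateValues and AddStreamEntry are the standard Stream services):

```javascript
// --- In the DataChange subscription (fast path: in memory, synchronous) ---
// 'buffer' is an assumed INFOTABLE property on this Thing;
// eventData.newValue.value is the standard DataChange payload.
me.buffer.AddRow({
    value: eventData.newValue.value,
    received: new Date()
});

// --- In a service fired by the latch/lock/escalation Timer (slow path: persistence) ---
for (var i = 0; i < me.buffer.rows.length; i++) {
    var row = me.buffer.rows[i];
    // ... latch/lock/escalation checks would go here ...

    // Persist to a Stream; 'historyStream' is a made-up Stream name.
    var values = Things["historyStream"].CreateValues(); // empty row set matching the Stream's datashape
    values.AddRow({ value: row.value, received: row.received });
    Things["historyStream"].AddStreamEntry({
        values: values,
        timestamp: row.received,
        source: me.name
    });
}
me.buffer.RemoveAllRows(); // reset the in-memory buffer for the next cycle
```

Note the small window between draining the loop and RemoveAllRows: a row arriving in between would be lost, which is exactly the kind of race condition discussed above. One mitigation is to first swap the property to a fresh InfoTable and drain the detached copy.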
In Conclusion
Remember that PTC created its software for specific purposes. I don't think there will ever be a perfect, magical platform that does everything we need and want. ThingWorx started out on a specific path: a very high-speed data ingest and event platform with agnostic all-around connectivity, a very nice holistic modeling approach, and a simple way to build UI/UX. Our use cases will sometimes go right past all of that, and at times to the final frontier, a.k.a. the bleeding edge - and few are a carbon copy of another. This means we need to be innovative and creative. Hopefully all of you can use the expert knowledge you have about our products to create those solutions - but then also be proactive and please share with everyone else!
Warning
This post is quite long, has various chapters, and you might get bored reading it. If you just want a summary, read the "Use case" and "Conclusion" chapters - and maybe the "Required Logic" chapter, because I made a cool graph for it. The rest is all about implementation...

Introduction
I recently had the opportunity to deliver a ThingWorx training for Saint Gobain. One of the use cases for their ThingWorx application is monitoring machine errors and outages on the production line. If an outage or error status is triggered, the machine operator will see a popup on the monitoring screen where he is forced to select a root cause. This root cause will then be persisted in ThingWorx for further data transformation, analytics and reporting - like cost analysis or optimization opportunities. During the training we were also discussing how such forced root cause monitoring can be implemented via Mashups and the use of modal popups. I've compiled the details into this post as it might also interest other developers. The ThingWorx Entities I'm using in this example can be downloaded from here.

Note: I'm using the word "Alert" here - but not in the context of a ThingWorx Property Alert... just beware, so as not to be confused by the wording.

Use Case
One of the requirements for Saint Gobain's IoT solution was interactive alert monitoring directly in the factory on the production machines. Let's say the machine has stopped; the root cause should be recorded. For this, an interactive popup will be displayed on the machine's monitoring display, and an employee has to choose the root cause from a pre-defined list. This could be a planned outage, e.g. for maintenance, or an unplanned outage, e.g. a material jam. The root cause will then be recorded, and a history of outage causes can be stored in a ThingWorx Value Stream. This can later be analyzed with e.g. ThingWorx Analytics capabilities to understand and optimize the machine's production capabilities and efficiency.

As the root cause must be entered, the popup will be forced to display when a certain condition / criterion is met - and it will only disappear when a root cause is chosen. The user should not be able to interact with any other elements of the Mashup, and should not be able to simply close the popup. The popup will close itself and reset the initial condition once the root cause has been identified and chosen.

Requirements
- Required Entities
- Required Logic

Note: Just to make it easier to manage and export Entities, I will add all of the created elements to a new Project called RootCausePopups. All of the elements will have "rcp_" added in front of their names - just to make it easier for me to find and identify them.

Implementation
- Before we start - set a Project Context
- Create Entities
- Create a Popup Mashup
- Create the Main Mashup
- Testing the Mashups

Conclusion
Certain conditions (like the state of a checkbox) can be used to trigger modal popups. A modal popup forces a user interaction and will not offer any other option until a choice is made. With these parameters, it's easy to get a mandatory reaction from users when it's important to capture data that relies on the judgment of an engineer or user - e.g. reasons for machine outages. Using this technique, there's not much training required for staff, other than pushing a button with an option of their choice - this saves quite some time compared to capturing data in any other way (e.g. updating Excel files or manual pen-and-paper techniques).
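To make the persistence side concrete, here is a minimal sketch of the service the popup's confirm button could invoke - assuming a machine Thing whose root-cause property is logged to a Value Stream; the service name and both property names are made up for illustration:

```javascript
// Hypothetical service 'rcp_SetRootCause' on the machine Thing.
// Assumed input parameter: selectedCause (STRING) - the option chosen in the popup.

// 'rootCause' is an assumed logged STRING property; because it is logged,
// every write is recorded automatically in the Thing's associated Value Stream.
me.rootCause = selectedCause;

// Reset the trigger condition so the modal popup closes and the Mashup unlocks.
// 'outagePending' is an assumed BOOLEAN property bound to the popup's visibility.
me.outagePending = false;
```

The Grid holding the historic outage causes could then be fed by something like the Thing's standard QueryPropertyHistory service.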
As this data is now part of the ThingWorx instance, it can be used for further transformation, analysis, or just for monitoring purposes. There are of course more possibilities when it comes to states and formatting, which would exhaust the scope of this post - but feel free to explore...

- In the example we wouldn't need the textbox, but it's there to demonstrate whether the correct values are persisted or not.
- In the example we could of course also set the visibility of the checkbox to false, so that we would only see the popup and the Grid holding historic information.
- We could also create different StateDefinitions to color-format / text-format the input differently from the output in the Grid.

If you found this interesting (and actually made it to the end of this post) - feel free to play with this concept a bit more... The dependencies might seem a bit difficult, but it should be easy to implement and to adjust to your own ideas and requirements.