
IoT & Connectivity Tips

Performance and memory issues in ThingWorx often appear at the Java Virtual Machine (JVM) level, and we can frequently detect them by monitoring JVM garbage collection logs and thread dumps. In this first part of a multi-part series, let's discuss how to analyze JVM garbage collection (GC) logs to detect memory issues in our application.

What are GC logs?
When enabled on the Apache Tomcat server, the logs show memory usage over time and any memory allocation issues (see "How to enable GC logging" in our KB for details on enabling this logging level). Garbage collection logs capture all memory cleanup operations performed by the JVM. By analyzing GC logs we can determine the proper memory configuration and whether we have memory issues.

Where do I find the GC logs?
When configured as per our KB article, GC logs are written to your Apache Tomcat logs folder. Check both the gc.out and the gc.restart files for issues if present, especially if the server was restarted while experiencing problems.

How do I read GC logs?
There are a number of free tools for GC log analysis (e.g. http://gceasy.io, GCViewer), but GC logs can also be analyzed manually in a text editor for common issues. Generally, each memory cleanup operation is printed as a single line showing the timestamp (seconds since JVM start), the heap occupancy before and after the collection, the total heap size in parentheses, and the pause duration. Additional information on the specific GC operations you might see is available from Oracle. A GC analysis tool will convert the text representation of the cleanups into a more easily readable graph.

Three key scenarios indicating memory issues:
Let's look at some common scenarios that might indicate a memory issue. A case should be opened with Technical Support if any of the following are observed: extremely long Full GC operations (30 seconds or more), Full GCs occurring in a loop, or memory leaks.

Full GCs:
Full GCs are undesirable because they are a 'stop-the-world' operation. The longer the JVM pause, the more impact end users will see: all other active threads are stopped while the JVM attempts to make more memory available, and Full GCs take considerable time (sometimes minutes), during which the application will appear unresponsive.
Example – this Full GC takes 46 seconds, during which time all other user activity would be stopped:
272646.952: [Full GC: 7683158K->4864182K(8482304K) 46.204661 secs]

Full GC Loop:
Full GCs occurring in quick succession are referred to as a Full GC loop. If the JVM is unable to clean up any additional memory, the loop can potentially go on indefinitely, and ThingWorx will be unavailable to users while the loop continues.
Example – four consecutive garbage collections take nearly two minutes to resolve:
16121.092: [Full GC: 7341688K->4367406K(8482304K), 38.774491 secs]
16160.11: [Full GC: 4399944K->4350426K(8482304K), 24.273553 secs]
16184.461: [Full GC: 4350427K->4350426K(8482304K), 23.465647 secs]
16207.996: [Full GC: 4350431K->4350427K(8482304K), 21.933158 secs]
In a GC analysis tool, red triangles occurring at the end of the graph are a visual indication that we're in a Full GC loop.

Memory Leaks:
Memory leaks occur in the application when an increasing number of objects are retained in memory and cannot be cleaned up, regardless of the type of garbage collection the JVM performs. When mapping memory consumption over time, a memory leak appears as ever-increasing memory usage that is never reclaimed; the server eventually runs out of memory, regardless of the upper bounds set.

What should we do if we see these issues occurring?
Full GC loops and memory leaks require a heap dump to identify the root cause with high confidence. The heap dump needs to be collected while the server is in a bad state, and thread dumps should be collected at the same time. A case should be opened with Technical Support to investigate the available GC, stack trace, or heap dump files.
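For reference, both kinds of dumps can be captured with the standard utilities that ship with the JDK; the process ID and file names below are placeholders, and note that -dump:live itself triggers a Full GC:

jstack -l <tomcat_pid> > thread_dump.txt
jmap -dump:live,format=b,file=heap_dump.hprof <tomcat_pid>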
View full tip
ThingWorx Monitoring and Alerting, Part 2 Using Prometheus and Grafana By Tori Firewind, IoT EDC Building Dashboards     To add a panel which monitors some component of the ThingWorx application to a dashboard in Grafana, click to add a new panel. Under “Metrics” in the box at the bottom of the screen, select what ThingWorx metrics you wish to monitor (type “thingworx” in the search box to see them all). For example, select the Platform Subsystem memory in use:     Label filters aren’t necessary, though you may want to sort by instance if you are monitoring multiple ones with the same dashboard. You may also want to take some time to format the Y axis, which by default will show in bytes. Go to the formatting panel on the right side and scroll down to the section called “Standard options”. For the Unit dropdown, start typing “data” and then select “bytes (SI)”. This will automatically determine if the bytes you’ve provided should really print as MB or GB based on how large the numbers are.     Rename the panel, modify it in any other way desired, and then click Apply (last 5 minutes):     Once you add the panel, you can watch the memory usage as it is scraped by selecting the refresh option (10s or 30s, whatever makes sense based on your scrape interval).     The viewing window is stored in the URL, so that you can generate a report for a specific interval (like when a test was occurring), and then store that result or share it in a more compact way: http://localhost:3000/d/nleucPv4k/thingworx-monitoring?orgId=1&from=1668528038732&to=1668536503953  (absolute timestamps):     Dashboards are just collections of panels which report on all of the various metrics of performance and stability that exist for single components of a system. This is because there can be quite a few metrics worth watching for each individual component. Most of the third-party tools come with their own dashboards, but the ThingWorx component is one which for now, requires some thought and creativity.     Consider your use case carefully and look over the various subsystems contained within ThingWorx. Each part of an application is localized to specific subsystems, and some are more business critical than others. What will go at the top of your dashboard? Add rows, add panels per row, and see what the many choices are for watching your system.     Don’t forget that with Telegraf running, VM or machine usage metrics are also available for display on a dashboard. Things like overall CPU and Memory usage are critical to determining the health of a system, as we have demonstrated in our own reasoning in past benchmarks and scale tests. You can create a panel to monitor the mem_used versus mem_total, like so:     Another metric from Telegraf worth adding is the CPU usage, which should be given “percentage” for the units and which needs a label filter of cpu = cpu-total. If we do some resizing and drag-and-dropping, then we now have the first row of a dashboard:     See how the Platform usage climbs steadily and is purged in a cycle? That is the Java Garbage Collection mechanism, and it’s important to remember to leave room for spikes on top of those peaks. Data can also be calculated or processed in some way to make it more useful for determining system health and stability.     The data in the picture below uses the formula submitted = completed + number queued + number failed. It shows the current queue on the left Y-axis and the max queue on the right (since the two numbers usually are drastically different). 
It looks pretty, but it doesn't really tell us much about the system in this format, so let's do some math and find a representation that is a bit more helpful. Performing a "non-negative derivative" calculation over the submitted and the completed queue counts over time allows us to look at the status of the queue as a velocity. When the "completed" speed falls behind the "submitted" speed for too long at a time, that means the queue is filling up and will eventually result in data loss.

If we take this one step further and calculate the average of the submitted minus the completed over time, then we can actually predict approximately when the queue will fill up (a rough version of this estimate is sketched at the end of this section). This can then be displayed on a dashboard in Grafana, or used as the basis for an alert.

What to Monitor
In addition to monitoring the system on which ThingWorx runs, ThingWorx itself can easily be monitored down to the subsystem level by Prometheus thanks to the Metrics endpoint. Many applications have support built into the way they format the data for scraping, including the JVM (which exposes Prometheus-formatted metrics with the JMX Exporter) and the OS (which can use the Node Exporter or Telegraf for the same purpose). For these more generic components, there are popular community dashboards which can be downloaded and used in Grafana for data analysis and review.

For ThingWorx, there are different kinds of data to track: subsystem data and non-subsystem data, as well as queue-based versus non-queue data. These different metrics can collectively characterize the overall health of the application, depending on the use case.

For instance, if this is a system with very many connected devices, one metric which may be important to track is the number of total devices defined on the Foundation server vs. the number of devices which are currently connected. If there are relays involved, then many devices suddenly going offline can mean a relay has failed. Another example is if the system sizing depends on an assumption that there will only ever be a fraction of the total number of devices connected at a time. Use cases like these could be monitored easily by keeping track of the total vs. the number of connected devices.

Other common indicators of a healthy ThingWorx application might include the value stream and stream queues. These queues should fluctuate over time as the data is ingested and processed, but they should never be growing in size. If the stream queues are growing, that means data is being written to the queues faster than the queues can write to the database. Eventually, when the system runs out of resources to keep track of the queues, data will be lost. Having the stream information displayed in a chart can make it very easy to spot an upward trend in resource usage early on, which can catch a blockage or bottleneck that needs attention before it starts to affect the larger system in catastrophic ways later.

Memory usage information from the various subsystems might be something worth tracking, as well as the event queue. These can indicate that the business logic is functioning with room to handle spikes, and that the server has enough memory to service all three dimensions of an IoT application: the ingestion, the business logic and thing-based alerting, and the user experience and UI.
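As a rough version of the queue-fill estimate promised above (a back-of-the-envelope approximation, not an exact prediction): if the average submitted rate stays higher than the average completed rate, then

seconds until the queue is full ≈ (max queue size - current queue size) / (average submitted per second - average completed per second)

This only holds while the rates stay roughly constant and the denominator is positive, but it is enough to drive a dashboard panel or an alert threshold.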
If file transfers are a key part of the use case, then the number of concurrent transfers, their average speed, and the size of the files can all be tracked and charted in Grafana by making use of the ThingWorx metrics which automatically show up there once you import the Prometheus data source. A mature dashboard used for a production environment might look a little like this:

For further reading about subsystem monitoring, check out the Help Center.

How to Alert
The alerting mechanism built into Prometheus is incredibly easy to configure, so it might be tempting to generate tons of alert rules. However, remember that the more noise a system makes, the harder it is for those monitoring that system to know when action is really required. Playbooks which document how to respond to alerts, who to contact, how to act, and all the information necessary to handle an alert, should be created as an ongoing part of the DevOps process.

Alerts should fire with the right severity in the subject line, as well as all of the information about the issue that is currently known, presented in a concise way, so that whoever receives the alert starts thinking about the root cause sooner and recovers the system faster. Those who receive the alert should have the ability to facilitate its resolution, and know who is expected to react to any alerts which come in.

In the ThingWorx monitoring stack, Prometheus handles the alert rules and the generation of alerts, but alert filtering and delivery is managed in an external alerts manager.

Generally, you want your alerts to follow a curve. If the current queue size exceeds 50% of the maximum, perhaps that isn't a huge deal, if the application catches up quickly. How long are spikes in queue processing expected to last? Perhaps if the queue size is over half-full for 10 or 30 seconds, that means the queue is falling behind and not catching up; this might be a warning-level alert. When does this become an error? Let's say the queue exceeds 90% of the max queue size. That threshold might warrant an alert the moment it is hit, since farther along the curve it may not take as long before data gets lost.

As the severity of the situation increases, the threshold for alerting should increase as well. That way, when errors do alert, it is a sure thing that they require an immediate response. The alerts are then pushed into the "Alerts Manager" for delivery based on your management rules. The Alerts Manager may decide to withhold warnings altogether, or send them to a much smaller mailing list, whatever filtering helps to ensure the right people receive the right alerts, right when they need them.

In Conclusion, A Healthy Application...
Has stable memory usage that fluctuates predictably and doesn't grow over time. In a system experiencing mild issues, the memory starts to trend upward. If left unattended, systems like this may eventually experience outages. Finding the issue this early means there is even time to do some digging, debugging, taking of stack traces, and other such troubleshooting steps before the system must be restarted or recovered. That can really make the difference in identifying and resolving issues before there are real problems.

One metric which makes for good alerting is the total number of failed stream entries, which can indicate there is an issue writing to the database even before the queue has started to fill up.
Other alerts may include warnings and errors based around percentages of memory used or queues filled, which depend on how long the queues take to fill up and how long the state has been at its increased usage.     Prometheus has all of the tools necessary to make this possible across a variety of infrastructures and use cases. Set it up on a local machine and poke around at what ThingWorx metrics are available to meet your monitoring needs.
View full tip
The ThingWorx EMS and SDK based applications follow a three-step process when connecting to the Platform:

1. Establish the physical websocket: The client opens a websocket to the Platform using the host and port that it has been configured to use. The websocket URL exposed at the Platform is /Thingworx/WS. TLS will be negotiated at this time as well.

2. Authenticate: The client sends an AUTH message to the Platform, containing either an App Key (recommended) or username/password. The AUTH message is part of the ThingWorx AlwaysOn protocol. If the client attempts to send any other message before the AUTH, the server will disconnect it. The server will also disconnect the client if it does not receive an AUTH message within 15 seconds. This time is configurable in the WSCommunicationSubsystem Configuration tab and is named "Amount of time to wait for authentication message (secs)." Once authenticated, the SDK/EMS is able to interact with the Platform according to the permissions applied to its credentials. For the EMS, this means that any client making HTTP calls to its REST interface can access Platform functionality. For this reason, the EMS only listens for HTTP connections on localhost (this can be changed using the http_server.host setting in your config.json). At this point, the client can make requests to the Platform and interact with it, much like an HTTP client can interact with the Platform's REST interface. However, the Platform still cannot direct requests to the edge.

3. Bind: A BIND message is another message type in the ThingWorx AlwaysOn protocol. A client can send a BIND message to the Platform containing one or more Thing names or identifiers. When the Platform receives the BIND message, it will associate those Things with the websocket it received the BIND message over. This allows the Platform to send request messages to those Things over the websocket. It will also update the isConnected and lastConnection time properties for the newly bound Things. A client can also send an UNBIND request. This tells the Platform to remove the association between the Thing and the websocket. The Thing's isConnected property will then be updated to false.

For the EMS, edge applications can register using the /Thingworx/Things/LocalEms/Services/AddEdgeThing service (this is how the script resource registers Things). When a registration occurs, the EMS will send a BIND message to the Platform on behalf of that new resource. Edge applications can de-register (and have an UNBIND message sent) by calling /Thingworx/Things/LocalEms/RemoveEdgeThing.
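Once a Thing has been bound, the effect is visible from the Platform side. As a minimal server-side JavaScript sketch (the Thing name "MyRemoteThing" is a placeholder), the connection state updated by the BIND/UNBIND handling described above can be read like this:

// "MyRemoteThing" is a placeholder for the name of a remote Thing bound over the websocket
var edgeThing = Things["MyRemoteThing"];
// isConnected and lastConnection are the properties updated when BIND/UNBIND messages are processed
logger.info("Connected: " + edgeThing.isConnected + ", last connection: " + edgeThing.lastConnection);
var result = edgeThing.isConnected;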
View full tip
ThingWorx DevOps with Azure: The Comprehensive DevOps Guide
Written by Tori Firewind, IoT EDC

As promised in a previous post, attached here is a comprehensive guide to DevOps in ThingWorx, including tutorials and instructions for creating a continuous integration, continuous deployment (CI/CD) process for application development. There are also updated scripts and entities attached, including an entire sample application for importing, exporting, and testing an application in ThingWorx. From Docker and GitHub to Azure DevOps and Solution Central, this guide has it all. Learn how to perform your role in the DevOps process, whether you are an administrator or a developer, automate your deployments and testing, and create a more efficient process for publishing changes to production. A complete DevOps process like this really does facilitate faster and easier updates with fewer risks, fewer delays, and a better pathway to success.
View full tip
From the documentation, a SOLR node is only needed when using DataTables. If the SOLR configuration field is left blank, the extension will request an input. Are SOLR nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?
-- As far as ThingWorx functionality is concerned, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a SOLR node is mandatory to properly configure the extension. This will be fixed in the future.

When there are 2 entries for addresses, one for a Cassandra Cluster and one for a Solr Cluster, are they the same Cluster, or different Clusters?
-- They could be either. There can be one machine with SOLR enabled, using the same IP for both Cassandra and Solr. However, this is not recommended for production workloads; it would be perfectly fine for development or test environments. In a Cluster, in order to have Solr and Cassandra nodes, use of Datacenters is required. Even if a Datacenter isn't explicitly defined, a default install of DSE will create two data centers called "Cassandra" and "Solr", which is what would be seen in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create Datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e. replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}. The number (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), depending on the number of nodes in each data center.
View full tip
Key Functional Highlights
ThingWorx 8.1 covers the following areas of the product portfolio: ThingWorx Analytics, ThingWorx Utilities and ThingWorx Foundation, which includes Core, Connection Server and Edge capabilities. Highlights of the release include:

ThingWorx Foundation
Next Generation Composer: Embedded Mashup Builder enables codeless development of web visualization. New ability to manage and push configurations for KEPServerEX.
Notifications: Create SMS and Email notifications natively in Next Generation Composer. Support for localized and dynamic content with tokens.
Protocol Adapter Toolkit: Encrypted communication between edge devices and the Connector over HTTPS or WSS. Encrypted communication between the Connector and ThingWorx Core (WSS). Ability to define a mapping of outbound messages from ThingWorx Core using a Codec to an edge-bound message. Authentication of edge devices.
3rd-Party Platform Connectivity:
Azure IoT Connector v2.0: Model data from Azure IoT using the Thing Model; utilize data from Azure IoT as properties in the Thing Model; utilize services and events through Azure IoT; utilize Azure file storage.
ThingWorx for Predix v1.0: Synchronize data from Predix to ThingWorx; enable SSO between Predix and ThingWorx.
C SDK: Framework for custom functionality to be added to C SDK-based applications at runtime.
License Management: Simple, automated licensing system for collection, storage, reporting, management and auditing of licensing entitlements. Deprecated the SQUEAL functionality.

ThingWorx Analytics
Categorical and Ordinal Goals: Adds use of unordered text (categorical) and ordered text (ordinal) goals to predictive analytics. Create and score models with the new goal types.
Virtual Sensor: Adds support for time-series predictions when historical data is not available. Allows machine learning predictions to take the place of physical sensors and simplifies predictions like time to failure and probability of failure.
Tighter platform integration: Analytics Server is more tightly integrated with ThingWorx Core, providing native control and access to analytics programming interfaces.
New, simplified API: The new Analytics Server 2.0 API pattern is simpler, more modern, and easier to use.
Microservices-based Architecture: Conversion to microservices sets the stage for High Availability and improved distributed installations.
Native Linux installer: Docker is no longer required to run on Linux-based systems.
Analytics Manager: Several enhancements, including a simulation-driven data framework that allows external providers to send data as if they were a physical Thing, Time Series Data Inputs to improve the ability to share time series data with external providers, and Thing Connect / Disconnect to make it easier to connect specific Things with external providers.
Analytics Builder: Ease of use enhancements including new UI support for time series models, easier access to and use of Signals and Profiles, simplified models for Boolean goals, and easier installation (no longer requires UploadThing).

ThingWorx Utilities
Software Content Management (SCM): Define package dependencies where the deployment of a package requires the presence of one or more other packages.

ThingWorx Trial Edition
ThingWorx Trial Edition will be available to internal PTC resources at launch and will be made available externally on the Developer Portal shortly after launch.
Developer Enablement: Enhancements have been made to the Trial Edition installation tool, providing a native installation process of the ThingWorx platform including: ThingWorx Foundation ThingWorx Utilities ThingWorx Analytics ThingWorx Industrial Connectivity Documentation ThingWorx 8.1 Reference Documents ThingWorx Analytics 8.1 Reference Documents ThingWorx Core 8.1 Release Notes ThingWorx Core Help Center ThingWorx Edge SDKs and WebSocket-based Edge MicroServer Help Center ThingWorx Connection Services Help Center ThingWorx Industrial Connectivity Help Center ThingWorx Utilities Help Center ThingWorx Utilities Installation Guide ThingWorx Analytics Help Center ThingWorx Trial Edition User Guide Additional information ThingWorx eSupport Portal ThingWorx Developer Portal ThingWorx Marketplace Download The following items are available for download from the PTC Software Download site. ThingWorx Platform – Select Release 8.1 ThingWorx Utilities – Select Release 8.1 ThingWorx Analytics – Select Release 8.1
View full tip
Anomaly Detection (also known as Outlier Detection) is a set of techniques that identify unusual occurrences in data. The premise is that such occurrences may be early indicators of future negative events (e.g. failure of assets or production lines).  Data Science algorithms for Anomaly Detection include both Supervised and Unsupervised methods. In Unsupervised Anomaly Detection, the algorithms make the assumption that most of the data points are "normal" (e.g. normal operation of the asset) and are looking for data points that are most dissimilar to the remainder of the dataset. Supervised Anomaly Detection requires a labeled set of Anomalies, in which case predictive algorithms can be applied directly on this data.   Thingworx employs a number of algorithms in support of Anomaly Detection: Simple threshold alerts. These are easy to setup on Thing properties but require a domain expert to provide such thresholds. Then, the alert will automatically fire when the value of the monitored property goes outside the predefined range of values, often seen as the "bad" side of the threshold. Statistical Process Control (SPC). This can be implemented using Thingworx Analytics Property Transforms. Most companies use a subset of SPC charts and rules to monitor production processes. Examples include the X Bar and R charts, as well as the Western Electric rules (e.g. one point outside the average +/- three sigma range). Explainable and widely accepted, SPC can also provide an earlier warning system compared to simple threshold alerts, in that it captures more complex patterns. Clustering. Using Thingworx Analytics, one can build an optimal clustering for the available data points. Under the assumption that data is representative of mostly normal operation and that there is not a significant pre-defined pattern of anomalies that form their own cluster, one can identify outliers by looking at the distance between points and their corresponding cluster centers. Points that are very far from their corresponding cluster center can be labeled as anomalies. Semi-supervised Anomaly Alerting (formerly known as ThingWatcher). This functionality identifies single property time series behavior that is statistically different than what was seen in a finite window of “known normal operation”. As such, it does not identify a “bad” event, or even a precursor to a “bad” event.  Rather, it points the end user to further investigate a situation which may lead to a “bad” event. Anomaly Alerts can be easily setup like any other Alert on a Thing property. Multiple Anomaly Alerts can be setup on the same or different properties of a Thing. Behind the scenes, the platform builds a time series neural network model for the known normal operation data, which is then applied to incoming data, and, if the errors are significantly different than those on known normal operation over a period of time, then an Anomaly Alert is produced. The techniques mentioned above are either unsupervised or semi-supervised. If the dataset contains labeled anomalies (e.g. asset faults, or suspicious patterns) then supervised predictive techniques (such as regression, decision trees, neural nets, or ensemble methods) are available to model the relationships between such anomalies (dependent variables) and various variables of interest (independent variables). These models can then be employed to monitor assets or production lines for upcoming anomalies. 
In many real-world use cases, anomalies are relatively rare; care needs to be taken when building such predictive models. Techniques such as up-sampling can prove beneficial in such situations.

What constitutes an Anomaly depends on the observed data and the current context. If only a few data points are initially available, then it is possible that a lot of future data is predicted as an Anomaly, despite being normal operation. Also, in terms of context, if an anomaly detection model is trained on a connected product in the Winter, it is likely to say all Summer operation is anomalous. This can be tackled by having multiple anomaly detection alerts implemented, one for each different context of operation (e.g. season, recipe being manufactured, operation done by a robot).

Another consideration is lead time vs. explainability. For example, when a threshold alert fires, it is obvious why, but it may not be early enough to take action. As more advanced methods are employed, more complex patterns can be captured, hence more lead time, but typically at the expense of explainability. For example, semi-supervised Anomaly Alerting (formerly known as ThingWatcher) uses time windows, aggregations, and derivatives of up to the third order, resulting in significantly less explainability when an Anomaly is presented to the end user.

Choosing the appropriate Anomaly Detection technique is use case dependent, balancing the desired lead time and explainability. If historical data on failures/anomalies is not available, a good place to start is Statistical Process Control, as it provides a balanced approach between the two dimensions, in addition to being already in use across many manufacturing companies.
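To make the simplest SPC rule mentioned above concrete (flagging a reading outside the average +/- three sigma band), here is a minimal server-side JavaScript sketch. The infotable of recent readings and its "value" field are illustrative assumptions; in a real application the values would come from a property history query or a Property Transform.

// "values" is assumed to be an INFOTABLE service input holding recent numeric readings
// in a field called "value" (both names are placeholders for illustration)
var count = values.rows.length;

// compute the mean
var sum = 0.0;
for (var i = 0; i < count; i++) {
    sum += values.rows[i].value;
}
var mean = sum / count;

// compute the standard deviation (sigma)
var sqDiff = 0.0;
for (var i = 0; i < count; i++) {
    sqDiff += Math.pow(values.rows[i].value - mean, 2);
}
var sigma = Math.sqrt(sqDiff / count);

// flag the latest reading if it falls outside the average +/- three sigma band
var latest = values.rows[count - 1].value;
var result = (latest > mean + 3 * sigma) || (latest < mean - 3 * sigma);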
View full tip
The KEPServerEX ThingWorx Native Interface was originally released with KEPServerEX v5.21 to enable connectivity with v6.6 of the ThingWorx Platform. Compatibility with the ThingWorx NextGen Composer (v7.4) was implemented with the release of KEPServerEX v6.1, while still allowing support of older ThingWorx Composer versions through the Native Interface "Legacy Mode" setting.

When connecting with ThingWorx v7.4 and newer, an Industrial Gateway Thing Template is configured in the NextGen Composer. The instructions for connecting KEPServerEX v6.1 (and newer) with ThingWorx v7.4 (and newer) can be found in the ThingWorx Help Center here: KEPServerEX / ThingWorx Industrial Connectivity Connection Example (expand ThingWorx Model Definition and Composer > Industrial Connections > Industrial Connections Example).

When connecting with versions of ThingWorx that are older than v7.4 -- or if the older version of ThingWorx Composer will be used -- it will be necessary to download and import the KEPServerEX Extension from the ThingWorx Marketplace. The "RemoteKEPServerEXThing" Thing Template will then be used to allow connection with KEPServerEX. Here is a link to the extension download: ThingWorx IoT Marketplace

To enable Legacy Mode in KEPServerEX (v6.1 or newer, when connecting with versions of ThingWorx that are older than v7.4):
1. Open the KEPServerEX Configuration window
2. In the top-left Project view pane, right-click on "Project" and select "Properties"
3. Select the ThingWorx Property Group
4. Change "Enable" to "Yes"; and change "Legacy Mode" to "Enable"

Version compatibility summary:
KEPServerEX v5.21: ThingWorx pre-7.4, using RemoteKEPServerEXThing
KEPServerEX v6.0: ThingWorx pre-7.4, using RemoteKEPServerEXThing
KEPServerEX v6.1 or higher with Legacy Mode enabled: ThingWorx pre-7.4, using RemoteKEPServerEXThing
KEPServerEX v6.1 or higher with Legacy Mode disabled (default): ThingWorx v7.4 or higher, using Industrial Gateway
View full tip
You can control the Tracking Indicator that is used to mark the ThingMark position. The Tracking Indicator is a green hexagon (in the screenshot below, the red arrow points to it). You can control the display of this tracking indicator via the Display Tracking Indicator property of the ThingMark widget. But you can also get fancier. Here is an example that shows the tracking indicator for 3 seconds when the tracking has started and then hides it automatically. To achieve such a behavior you'll have to use a bit of JavaScript. We'll first create a function hideIn3Sec() in the JavaScript section of our view and then add it to the JavaScript handler of the Tracking Acquired event of the ThingMark widget.

Step 1: Here is the code for copy/paste convenience:

$scope.hideIn3Sec = function() {
  // The $timeout function has two arguments: the function to execute (defined inline here)
  // and the time in msec after which the function is invoked.
  $timeout( function hide() {
    // you may have to change 'thingMark-1' to the id of the ThingMark in your own experience
    $scope.app.view['Home'].wdg['thingMark-1']['trackingIndicator'] = false;
  },
  3000);
}

Step 2: Add hideIn3Sec() to the JavaScript handler of the Tracking Acquired event of the ThingMark widget.

That's it. Have fun!
View full tip
How to input Database User Credentials at Runtime. This blog assumes that you have already imported the Database Extension and configured the Thing Template. If you have not done this already, please see Steps to connecting to your Relational Database first.

Steps:
1. Create a Database Thing Template with the correct configuration. Example configuration for a MySQL database:
jDBCDriverClass: com.mysql.jdbc.Driver
jDBCConnectionURL: jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true
connectionValidationString: SELECT NOW()
maxConnections: 100
userName: <DataBaseUserNameHere>
password: <DataBasePasswordHere>

2. Create any Generic Thing and add a service to create a Thing based on the Thing Template created in Step 1. Example:

// NewDataBaseThingName is the String input naming the database Thing to be created.
// MySqlServerUpdatedConfiguration is the Thing Template with the correct configuration.
var params = {
    name: NewDataBaseThingName /* STRING */,
    description: NewDataBaseThingName /* STRING */,
    thingTemplateName: "MySqlServerUpdatedConfiguration" /* THINGTEMPLATENAME */,
    tags: undefined /* TAGS */
};
// no return
Resources["EntityServices"].CreateThing(params);

3. Add code to enable and then restart the above Thing using the EnableThing() and RestartThing() services. Example:

Things[NewDataBaseThingName].EnableThing();
Things[NewDataBaseThingName].RestartThing();

4. Test and confirm that the Database Thing services run as expected.

5. Now create a DataShape with the following fields:
jDBCDriverClass: STRING
jDBCConnectionURL: STRING
connectionValidationString: STRING
maxConnections: NUMBER
userName: STRING
password: PASSWORD

6. Now, in the Generic Thing created in Step 2, add code to update the configuration settings of the Database Thing. Make sure the JDBC Driver Class Name is never changed; if a different database connection is required, use a different Thing Template. Also, add code to restart the Database Thing using the RestartThing() service. Example:

var datashapeParams = {
    infoTableName : "InfoTable",
    dataShapeName : "DatabaseConfigurationDS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(DatabaseConfigurationDS)
var config = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(datashapeParams);

var passwordParams = {
    data: "DataBasePasswordHere" /* STRING */
};

// DatabaseConfigurationDS entry object
var newEntry = new Object();
newEntry.jDBCDriverClass = "com.mysql.jdbc.Driver"; // STRING
newEntry.jDBCConnectionURL = "jdbc:mysql://127.0.0.1:3306/<DatabaseNameHere>?allowMultiQueries=true"; // STRING
newEntry.connectionValidationString = "SELECT NOW()"; // STRING
newEntry.maxConnections = 100; // NUMBER
newEntry.userName = "DataBaseUserNameHere"; // STRING
newEntry.password = Resources["EncryptionServices"].EncryptPropertyValue(passwordParams); // PASSWORD
config.AddRow(newEntry);

var configurationTableParams = {
    configurationTable: config /* INFOTABLE */,
    persistent: true /* BOOLEAN */,
    tableName: "ConnectionInfo" /* STRING */
};
// ThingNameForConfigurationUpdate is the input string for the Thing name whose configuration needs to be updated.
// no return
Things[ThingNameForConfigurationUpdate].SetConfigurationTable(configurationTableParams);
Things[ThingNameForConfigurationUpdate].RestartThing();

7. Test and confirm that the Database Thing services run as expected.
View full tip
The .net-sdk can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .net-sdk uses the standard .NET System.Diagnostic infrastructure for Logging, as such, all configuration of the .net-sdk logger is done via the standard .NET Logging configuration system. By default, Logging is configured via the standard .NET “App.config” file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to output log messages to a file. The use of the ThingWorx provided FixedFieldTraceListener is recommended. The FixedFieldTraceListener when configured will automatically create a "logs" directory in the same location as (a sibling to) the running executable file (.exe). This "logs" directory will contain the log files. Every .NET Class can be configured as a specific “Trace Source” which emits log messages. It is recommended to add at least the following Trace Sources to your App.config file to receive the most useful amount of information: com.thingworx.communications.client.BaseClient com.thingworx.communications.client.ConnectedThingClient com.thingworx.communications.client.things.VirtualThing com.thingworx.communications.client.TwApiWrapper com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing The amount of information emitted can range from very low level Trace messages (the Verbose setting) to nothing at all (the Off setting). The “SourceLevels Enumeration” can be used to control how much information is written out to the log file. For reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is sample App.config file. 
<?xml version="1.0" encoding="utf-8"?> <configuration>     <system.diagnostics>       <sources>         <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>       </sources>       <switches>         <add name="SourceSwitch" value="Information" />       </switches>       <sharedListeners>         <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>       </sharedListeners>       <trace autoflush="true" indentsize="4" />   </system.diagnostics> </configuration>
View full tip
This might be a well-known topic for some, but I recently had a need that Event Routers fit into perfectly and wanted to share. If you have some neat applications for Event Routers on mashups, feel free to reply!   What? Event Routers are a function on Mashups that let you connect multiple inputs to a single output. For my use case, this was extremely helpful to let me have two different Service Outputs go to the same Widget. They are a really simple tool that can save a lot of headache.   How? Event Routers work by funneling the latest data through to a single output. This is particularly useful for  user-activated actions with the output tied to a widget or another service. The Event Router automatically activates when any one of the Inputs changes.   Example I have two services that generate HTML from different sources, but I want to display just the latest one that the user had activated in a single HTML Text Area widget. The two different services are activated with two different buttons. But how do I show these two outputs in a single widget? Create an Event Router with two HTML inputs!         Now I just tie each service output to the Inputs and tie the Output to the HTML Text Area Text (note: the icon for Input2 is incorrect—it should be HTML as well; this system is running 8.5.1, perhaps it's an issue in that release).         Now when the user clicks on either button, the correct service’s HTML is sent to the HTML Text Area. Ta-da!   P.S. I noticed in some older posts that Event Routers used to be a widget or extension that came and went. Now (8.5+) it is baked into the Functions on the far right side of Mashup Builder.
View full tip
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also provided an easy and powerful way that can be used to deliver SMS notifications right to your phone: Email to Text!

Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are now ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
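As a minimal sketch of the service call described above (the Thing name is a placeholder, and the SendMessage parameter names should be checked against the Mail extension version you have installed):

// "NotificationMailServer" is a placeholder for your Thing based on the MailServer Template
var params = {
    from: "alerts@mycompany.com",     // assumed sender address
    to: "5551234567@txt.att.net",     // the e-mail address of the SMS number
    subject: "ThingWorx alert",
    body: "Temperature threshold exceeded"
};
Things["NotificationMailServer"].SendMessage(params);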
View full tip
Original Post Date:     June 6, 2016 Description: This is a video tutorial on creating a Value Stream, adding the Value Stream to a Remote Thing, adding, binding and querying the (remote) properties.      
View full tip
The following code snippet will retrieve a month's worth of data from the system and return it as a CSV document suitable for import into your spreadsheet or reporting tool of choice.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.drm.sdk.Context
import com.axeda.common.sdk.id.Identifier
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def ac = new AuditCriteria()
ac.fromDate = Date.parse('yyyy-MM-dd', '2017-03-01')
ac.toDate   = Date.parse('yyyy-MM-dd', '2017-03-31')

def retString = ''
tcount = 0
while ( (results = auditBridge.find(ac)) != null && tcount < results.totalCount) {
  results.audits.each { res ->
    retString += "${res?.user?.id},${res?.asset?.serialNumber},${res?.category},${res.message},${res.date}\n"
    tcount++
  }
  ac.pageNumber = ac.pageNumber + 1
}

return retString
View full tip
  Use the Collection Widget to organize visual elements of your application.   GUIDE CONCEPT   This project will introduce the Collection Widget.   Following the steps in this guide, you will learn to display different values from a single dataset in real-time.   We will teach you how to utilize the Collection Widget to generate a series of repeated Mashups for every row of data in a table.     YOU'LL LEARN HOW TO   Create a Datashape to define columns of a table Create a Thing with an Infotable Property Create a base Mashup to display data Utilize a Collection Widget to display data from multiple rows of a table   NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 3 parts of this guide is 60 minutes.      Step 1: Create a Datashape    Create a Datashape   The Collection Widget uses a Service to dynamically define visual content.   The data must be in a tabular format (for example: Data Table, Info Table, or external Database-connection) in order for the Collection Widget to access it.   In this part of the lesson, we'll create a Data Shape to define the columns of the table.   In a subsequent step, we'll create an Info Table Property within a Thing in order to store the information.   In the ThingWorx Composer Browse tab, click Modeling > Data Shapes, + New              2.  In the Name field, enter cwht_datashape.                      3. If Project is not already set, search for and select PTCDefaultProject.         4. At the top, click Field Definitions          First Definition:          1. Click + Add to open the New Field Definition slide-out.          2. In the Name field, enter first_number.          3. Change the Base Type to NUMBER.          4. Check the Is Primary Key box. All Datashapes must have a single Primary Key, and the first Field is as acceptable as any other for our purposes here.                5. At the top-right, click the "Check with a +" button for Done and Add.   Second Definition:   In the Name field, enter second_number.       2. Change the Base Type to NUMBER.               3. At the top-right, click the "Check with a +" button for Done and Add.     Third Definition:   In the Name field, enter third_number.         2. Change the Base Type to NUMBER.               3. At the top-right, click the "Check" button for Done         4. At the top, click Save.                   Step 2: Create a Thing   Create a Thing   In the previous step, we created a Data Shape to define the Info Table Property columns.   Now, we will create a Thing, add an Info Table Property, format the Info Table Property with the Data Shape, and set some default values.   On the ThingWorx Composer Browse tab, click Modeling > Things, + New.           2. In the Name field, enter cwht_thing.         3. If Project is not already set, search for and select PTCDefaultProject.         4. Select GenericThing in the Base Thing Template field       Add InfoTable Property         1. At the top, click Properties and Alerts.        2. Click the + Add button to open the New Property slide-out.        3. In the Name field, enter infotable_property.        4. Change the Base Type to INFOTABLE.        5. In the Data Shape field, select cwht_datashape.        6. Check the Persistent checkbox.       First Default   Check the Has Default Value checkbox. A cwht_datashape icon will appear underneath.            2. Under Has Default Value, click the cwht_datashape button to open the pop-up menu which  sets the default values               3. Click the + Add icon.         4. 
Enter values in each number field, such as 1, 2, and 3.
5. At the bottom-right, click the green Add button.

Second Default
1. Click the + Add icon.
2. Enter values in each number field, such as 10, 20, and 30.
3. At the bottom-right, click the green Add button.

Third Default
1. Click the + Add icon.
2. Enter values in each number field, such as 100, 200, and 300.
3. At the bottom-right, click the green Add button.
4. At the bottom-right, click Save to close the pop-up menu.
5. At the top-right, click the "Check" button for Done.
6. At the top, click Save.

Click here to view Part 2 of this guide.
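As an optional aside, the same default rows can also be populated from a service instead of the Composer dialog. A minimal JavaScript sketch using the names created in this guide (assuming it runs as a service on cwht_thing, or on another entity with permission to update it):

// build an empty infotable with the cwht_datashape columns
var params = {
    infoTableName: "InfoTable",
    dataShapeName: "cwht_datashape"
};
var table = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// add one row of default values (repeat for the other default rows)
var row = new Object();
row.first_number = 1;
row.second_number = 2;
row.third_number = 3;
table.AddRow(row);

// write the table into the Thing's infotable property
Things["cwht_thing"].infotable_property = table;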
View full tip
About
This is part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need to utilize e.g. customer networks. Instead, the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to connect an ESP8266 module to the ThingBerry WIFI hotspot and send data from a DHT-11 sensor via the MQTT protocol. As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Install MQTT broker on the ThingBerry
To install mosquitto as an MQTT broker, log in to the ThingBerry and run

sudo apt-get install mosquitto

This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security like encrypted traffic shown in this example; it is however good practice to secure MQTT broker / client connections.

While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module.

For more information on MQTT, how to configure it for ThingWorx, or more security relevant information, also see
https://community.thingworx.com/message/5063#5063
https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent

Configure the ESP8266
There are too many instructions on the web already on how to initially set up the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google, which covers the topic more extensively than I ever could.

All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340g USB driver for the NodeMCU.

In the Arduino IDE under Tools, I have set my environment to
Board: NodeMCU 1.0 (ESP-12E Module)
CPU Frequency: 80 MHz
Flash Size: 4M (3M SPIFFS)
Upload Speed: 115200
Port: COM3

Under Sketch > Include Library > Manage Libraries add / install the following libraries:
DHT sensor library by Adafruit
Adafruit Unified Sensor by Adafruit
PubSubClient by Nick O'Leary

These bring the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as an MQTT client.

Wiring the DHT-11 sensor
The following image shows the PINs on the ESP8266. I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well.

Ensure that neither board nor sensor are plugged in, and the ESP8266 is powered off. To hook the sensor up to the ESP8266, join
( - ) to GND
( + ) to 3.3V
(out) to D3

After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arduino IDE configured.

Coding
In the Arduino IDE use the following code - adjust the WIFI settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt). Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as WIFI hotspot.
Copy and paste the code below into the Arduino IDE, verify it and upload it to the ESP8266.

While searching for a WIFI connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a static blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read, but the wiring is correct and the code is addressing the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds - longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior by the code and the MQTT broker.

Configure MQTT Thing in ThingWorx
Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:
temperature
humidity

Set both to persistent and logged and the Data Change Type to ALWAYS. Also configure a Value Stream to log a history of values.

In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.
name = temperature; topic = ESP_xx/temp
name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost). Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */
void setup() {
  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);
}

/*
 * the loop
 */
void loop() {
  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal
  boolean wifiReconnect = false;
  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {
    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();

    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {
    Serial.println("Disconnected from MQTT server... trying to connect");
    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }
    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {
    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");
  } else {
    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker, if client is connected
    if (client.connected()) {
      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];

      dtostrf(t, 0, 0, buffer);
      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print(" published /temp ");
      } else {
        Serial.print(" failed /temp ");
        isError = true;
      }

      dtostrf(h, 0, 0, buffer);
      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print(" published /hum ");
      } else {
        Serial.print(" failed /hum ");
        isError = true;
      }
      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout : a reconnect is forced for each update-cycle
  delay(10000);
}
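To verify that the ESP8266 is actually publishing before (or while) wiring up the ThingWorx subscriptions, you can subscribe to the topics directly on the ThingBerry; this assumes the mosquitto command line clients are installed alongside the broker (Debian package mosquitto-clients):

mosquitto_sub -h localhost -t "ESP_xx/#" -v

Each published temperature and humidity value will then be printed together with its topic.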
Create Your Application Guide UI Part 1

Overview

This project will introduce the ThingWorx Mashup Builder. Following the steps in this guide, you will learn how to use this tool to create a Graphical User Interface (GUI) for your IoT application. We will teach you how to rapidly create and update a Mashup, which is a custom visualization built to display data from devices according to your application's business and technical requirements.

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL 5 parts of this guide is 30 minutes.

Step 1: Create New Mashup

The Mashup Builder is a drag-and-drop environment with a What You See Is What You Get (WYSIWYG) interface. With the Mashup Builder you can quickly and easily create a visualization of your IoT data. In this step, we'll explain various options to customize your application GUI.

Click Browse > VISUALIZATION > Mashups.
Click + New. You are now on the New Mashup pop-up window.

Layout Options

When creating your UI, you must choose a Responsive, Static (Legacy), or Responsive (Legacy) Mashup, depending on the resolutions of the displays your users will be running your application on.

Responsive
Description: Expands to the resolution of the display in an even more dynamic manner than the Responsive (Legacy) option.
When to use: Previously labeled Responsive (Advanced), this is now the default option, with modifications to the Mashup Builder interface versus previous versions. For instance, divisions such as a header or footer are available with the Template options at the bottom of the pop-up.

Static (Legacy)
Description: Sized to the dimensions that you define.
When to use: Static Mashups are appropriate when your users have a standard device on which they'll be utilizing your application, such as a smartphone or tablet. When displayed at a lower resolution you will see scroll bars; at a higher resolution there will be unused space around the Mashup.

Responsive (Legacy)
Description: Expands to the resolution of the display without leaving any unused space around the Mashup.
When to use: Responsive Mashups should be utilized when you anticipate that users will interface with your application through various-sized screens. Responsive Mashups should always be tested at various resolutions to ensure optimum display of your IoT data.

Keep the default of Responsive (with NO Responsive Templates chosen), and click OK in the pop-up window.
In the Name field, enter appui_mashup.
If the Project is not already set, search for and select PTCDefaultProject.
At the top, click Save.
At the top, click Design. This is the Mashup Builder user interface we will use for the remainder of this guide.
On the left, click the "left arrow" Collapse icon to collapse the Composer navigation, revealing more room for the Mashup Builder interface.

Step 2: Mashup Builder Sections

Now that you've launched the Mashup Builder, you'll need an understanding of the layout and configuration options.

NOTE: Section 1 in the top-left has a drop-down arrow you can click to reveal additional sections if your screen resolution is too small to show all of Widgets, Layout, and Explorer. There is also the Mashups tab in Section 1, but it is not covered in this guide.

1a Widgets - Widgets provides a list of every graphical element within the platform. There is a wide variety of selections, from grids to graphs to text boxes to buttons. There is also a Filter Widgets field where you can enter the name of a particular Widget to sort the list.
1b Layout - Layout is used to divide your Mashup into logical sub-sections.
1c Explorer - Explorer allows you to reach any individual element of a Mashup. This can be helpful when multiple Widgets are overlaid on top of each other in the central Canvas window, making it difficult to click on them individually.
2 Canvas - The Canvas is the area into which you drag-and-drop Widgets to your desired location to build your application UI. You can also drag-and-drop data onto the Widgets in the Canvas.
3a Data - Data is fundamental to any Mashup that displays ThingWorx IoT data, as it allows you to bind backend data to particular Widgets.
3b Session - Session provides access to Session Parameters, which are similar to global variables that may be used across multiple Mashups, such as cookies or security information for a logged-in User.
3c User - User provides access to the User Extensions. User Extensions are the Properties of the logged-in User and can be used on the Client side and/or the Server side.
4a Properties - Properties displays the Properties for the selected Mashup or Widget. These Properties allow you to perform a variety of modifications to Widgets, such as setting their exact position.
4b Style Properties - Style Properties are a subset of Properties dealing with items such as changing color.
5a Bindings - As data connections are made to various Widgets, the Bindings window shows these links in an easy-to-understand 'arrow' format. For instance, if a button press causes a Thing's Service to be called, then you'll see a connection from the Button Widget's *Clicked* Event over to the Thing's Service.
5b Reminders - Reminders points out issues with your Mashup, such as unbound but required Properties, which often must be fixed before the Mashup can be saved.
6a Data Properties - Data Properties displays the properties of the selected data element. This can be helpful, for instance, when you want something to be triggered only when something else has completed. In that scenario, you'd look for the ServiceInvokeCompleted Event in this bottom-right area.
6b Functions - Functions provides the ability to write custom business logic within the Mashup itself.

Step 3: Introducing Widgets

The primary way in which you'll create the GUI for your IoT application is with Widgets. Widgets are self-contained graphical elements that you can drag-and-drop onto the central Canvas area. Once placed on the Canvas, you can:

Modify Widget layout
Resize Widgets
Bind Data to Widgets

Combining several Widgets together will allow you to quickly and easily create a functional GUI for your IoT application.

Button - The Button Widget triggers an Event when it is clicked. Typically, this Event will be used to trigger a Service to either push or pull data from the platform's backend.
Checkbox - The Checkbox Widget provides a simple box which can be either set or not set. You primarily use a Checkbox when you're setting or displaying Boolean data.
Gauge - The Gauge Widget provides a more "graphically interesting" way to display a numerical value. You can set the minimum and maximum values which a Gauge displays, or style a Gauge so that different areas are different colors.
Label - The Label Widget displays a non-user-interactable string within your Mashup. You can use this to create headings or descriptions for parts of your Mashup.
Text Field - The Text Field Widget is a small, one-line area into which you can either accept an input string from the User or display a string from the platform's backend.

Combining a few of the basic Widgets described above allows you to create a Mashup for your IoT application. However, without data being supplied to these Widgets from the platform's backend, they're just pretty graphics. In the next section, we will introduce a few of the Mashup Services that enable bidirectional data flow between your GUI and device data.

Click here to view Part 2 of this guide.
Remotely administer Windows Edge IoT devices without coding.

GUIDE CONCEPT

Learn how to download, install, and configure the Edge MicroServer (EMS) to create an AlwaysOn (TM) connection between Edge IoT devices and ThingWorx Foundation.

YOU'LL LEARN HOW TO

Install the Edge MicroServer (EMS)
Configure the EMS
Connect the EMS to ThingWorx Foundation

NOTE: The estimated time to complete all parts of this guide is 30 minutes.

Step 1: Description

The Web Socket Edge MicroServer (WSEMS, or just EMS for short) is a pre-compiled application based on the C SDK.

Typically, the EMS is used on devices "smart" enough to have their own operating system, such as a Raspberry Pi or a personal computer.

Rather than editing code and compiling it into a custom binary (as with the SDKs), the EMS lets you simply edit some configuration files to point the Edge IoT device at the appropriate ThingWorx Foundation instance.

In addition, the EMS uses the PTC-proprietary AlwaysOn protocol to "phone home", rather than having Foundation reach out to it. As such, the EMS typically does not require port forwarding or opening ports, and can easily communicate from a more secure Edge environment to the Foundation server.

Step 2: Install EMS

In this step, you'll download and extract the EMS onto your personal Windows computer.

Versions of the EMS are available for Linux running on both x86 and ARM processors. Those are outside the scope of this guide, but they require only minor modifications to the instructions presented here.

Download the EMS.
Navigate to the directory where you downloaded the .zip file.
Extract the .zip file and browse into the extracted folder.
Copy all contents of the \microserver directory.
Navigate to the C:\ root directory.
Create a C:\CDEMO_EMS folder. Note that this directory is not mandatory, but it will be used throughout the rest of this guide.
Paste the contents of the extracted \microserver directory into C:\CDEMO_EMS.

Create Additional Directories

New folders may be added to the \CDEMO_EMS directory for various purposes. Some of these will be utilized within this guide, while others may be utilized in future guides using the EMS. Note that these particular names are not mandatory; they are simply the names used within this guide.

Create a C:\CDEMO_EMS\other directory.
Create a C:\CDEMO_EMS\tw directory.
Create a C:\CDEMO_EMS\updates directory.

Create Test Files

It can also be helpful during testing to have some small files in these folders to further demonstrate connectivity. Because these files were custom-created for the guide, seeing them within ThingWorx Foundation ensures that the connection between Foundation and the EMS is real and current.

In the C:\CDEMO_EMS\tw directory, create a text file named tw_test_01.txt.
In the C:\CDEMO_EMS\other directory, create a text file named other_test_01.txt.

Click here to view Part 2 of this guide.
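A later part of this guide covers configuring the EMS by editing its configuration file (typically named config.json) so that the EMS can reach your Foundation server. Purely as orientation, here is a minimal sketch of what such a file can look like. Treat it as an assumption-laden illustration: the host, port, application key, and Thing name are placeholders, and the exact field names and required settings should be verified against the sample configuration files shipped in the \microserver directory and the EMS documentation for your version.

{
   // Foundation server the EMS should "phone home" to (placeholder host; assumes TLS on port 443)
   "ws_servers": [ { "host": "YourFoundationServer", "port": 443 } ],

   // Application Key created in ThingWorx Foundation (placeholder value)
   "appkey": "your-application-key-here",

   // encrypt the AlwaysOn websocket connection
   "ws_connection": { "encryption": "ssl" },

   // illustrative: automatically bind a Remote Thing by name once connected
   "auto_bind": [ { "name": "CDEMO_EMS_Thing", "gateway": false } ],

   "logger": { "level": "INFO" }
}

The comments above are only annotations for this article; if your EMS version's parser does not accept comments in config.json, remove them before use.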
I recently had a customer who wanted to run ThingWorx services from Power BI to retrieve existing operational data, and we were a bit stumped on how to pass the API key in the headers, so I did a bit of Googling and pieced together the solution. It's not quite intuitive on the Power BI side, so I thought it would be helpful to share. If you have any other experience with integrating ThingWorx with Power BI, feel free to add a comment.

Prepare ThingWorx

Create an Application Key that has Run Time execution access to the services you need.
Understand the inputs needed for the service you want to call. Below are examples with no inputs, a single input, multiple inputs, and an InfoTable.

Power BI

Follow these steps in Power BI:

1. In Power BI, create a new blank query.

2. On the left, right-click on Query1 and go to the Advanced Editor.

3. Replace all of the body content with the following, replacing your API key, endpoint, and base URL as needed (this example has NO input parameters; examples with other parameters follow):

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

4. Click "Done". You'll now see a warning about how to connect; click the "Edit Credentials" button.

5. Leave it on Anonymous and click "Connect".

6. You should now see the return data coming from ThingWorx.

Note that I had a little trouble with this authentication initially and it saved the wrong method. To clear that out, go to "Data source settings" in the ribbon, select the server, and clear it.

Other Examples

Here is an example for sending a single string parameter:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputParameter"": ""InputValue""}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here is an example of sending a string and an integer:

let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""InputString"": ""Hello, world!"", ""InputNumber"" : 42}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

Here is an example for sending an InfoTable. Note that you must supply the dataShape with fieldDefinitions. If you're using an existing Data Shape, you can get the JSON by calling the GetDataShapeMetadataAsJSON() service on the data shape.
let
    appKey = "your-application-key-here",
    endpoint = "Things/YourThingNameHere/Services/YourServiceNameHere",
    baseUrl = "https://YourServerNameHere/Thingworx/",
    url = Text.Combine({baseUrl, endpoint}),
    body = "{""propertyNames"": { ""rows"": [ { ""name"": ""FirstEntityName"", ""description"": ""The first entity"" }, { ""name"": ""SecondEntityName"", ""description"": ""The second entity"" }], ""dataShape"": { ""fieldDefinitions"": { ""name"": { ""name"": ""name"", ""aspects"": { ""isPrimaryKey"": true }, ""description"": ""Entity name"", ""baseType"": ""STRING"", ""ordinal"": 0 }, ""description"": { ""name"": ""description"", ""aspects"": {}, ""description"": ""Entity description"", ""baseType"": ""STRING"", ""ordinal"": 0 } } } }}",
    request = Web.Contents(
        url,
        [
            Headers = [
                appKey = appKey,
                #"Content-Type" = "application/json",
                Accept = "application/json"
            ],
            Content = Text.ToBinary(body)
        ]
    ),
    Source = Json.Document(request)
in
    Source

If I find any more interesting ways to use Power BI with ThingWorx services, I'll add them on here.
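One more note on the InfoTable example above: because the body has to be packed into a single M string with doubled quote characters, its structure is hard to read. For reference, here is the same example payload written out as plain JSON; the parameter name propertyNames and the field definitions are exactly the ones used in the M code, so nothing new is introduced. To use it in Power Query, collapse it back into one string and double every quote character.

{
  "propertyNames": {
    "rows": [
      { "name": "FirstEntityName", "description": "The first entity" },
      { "name": "SecondEntityName", "description": "The second entity" }
    ],
    "dataShape": {
      "fieldDefinitions": {
        "name": {
          "name": "name",
          "aspects": { "isPrimaryKey": true },
          "description": "Entity name",
          "baseType": "STRING",
          "ordinal": 0
        },
        "description": {
          "name": "description",
          "aspects": {},
          "description": "Entity description",
          "baseType": "STRING",
          "ordinal": 0
        }
      }
    }
  }
}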
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?

Neal: Yes, so Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.

By having Azure as our preferred cloud platform, we're able to focus our R&D efforts on the functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in an attempt to remain cloud-agnostic. By leveraging a single, already quite powerful cloud platform in Azure, we're able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?

Neal: There were two issues our users had. The first was that installs and setup procedures were often complicated and expensive because we were genericizing the whole process. For example, we were requiring customers to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, even though they had already chosen one of the cloud platforms on their own. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was confusion about the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And while ThingWorx also has ingest capabilities, Microsoft is especially powerful when it comes to ingest and security. We decided that the smartest solution was to leverage Microsoft's expertise in data ingestion to make ThingWorx even more powerful, so we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture in which Microsoft provides key features and ThingWorx runs on top of them, getting you to market faster when developing your application.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.

High-Availability Deployment of ThingWorx on Azure

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud services providers?

Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each other's respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft's business application strategies; you have Vuforia and Microsoft's mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?

Neal: The growth.
There have been a lot of changes over the five years I've been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter, and ultimately become a leader in, the IIoT market, according to reports by research firms like Gartner and Forrester. In short, the growing IIoT market and PTC's leadership in the industry.

Note to Readers: You've likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management, and overall use of your IIoT applications. If you haven't heard about the partnership, check out the press release here. If you're curious about more aspects of PTC's partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.