IoT Tips

Background: In very rare situations, the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible.

Recommendation: WatchDog is a little-known yet very helpful feature available with Firewall-Friendly Agents. This program monitors whether an Agent is running; if it is not, WatchDog can restart that Agent. If the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes. You can configure WatchDog to run as a service (for Windows) or daemon (for Linux), and you register it accordingly. The WatchDog configuration file specifies the process(es) to be monitored and what to do when one exits: either attempt to restart the process or restart the system.

Note: WatchDog detects only whether a watched process exits. It will not detect or report on processes that may be "hanging".

Need more information? For information about configuring and using WatchDog, see the Agent User's Guide for your Agent: either the Axeda® Platform Axeda® Gateway User's Guide (PDF) or the Axeda® Platform Axeda® Connector User's Guide (PDF).
View full tip
Having trouble remembering how to get into Flow? How about making /Flow the URL?

Since the Flow environment uses NGINX to front-end the various components that make up Flow, there is a very sophisticated set of rewrites and proxy_pass directives in the NGINX configuration. All you have to do is add another 'location' fragment to the vhost-flow.conf file that will push /Flow over to /Thingworx/Composer/apps/flow:

    location /Flow {
        rewrite ^/Flow$ $proxy_scheme://$server_host/Thingworx/Composer/apps/flow permanent;
    }

On Linux, the file should be at /etc/nginx/conf.d/vhost-flow.conf
On Windows, the file should be at c:\Program Files\nginx-[version]\conf\conf.d\vhost-flow.conf

Test the updated config file with (nginx may not exist in your normal path):
nginx -t

Restart the NGINX service:
Linux (one of these will work depending upon your Linux version):
systemctl restart nginx
service nginx restart
Windows:
net stop ThingWorxOrchestrationNginx
net start ThingWorxOrchestrationNginx
-or- use the Services app to restart the service

From this point forward https://yourserver/Flow will take you to ThingWorx Flow's home page.
View full tip
The IoT Enterprise Deployment Center's goal is to create and share knowledge around the best practices for architecting, designing, and deploying successful, enterprise-scale ThingWorx IoT solutions.

To accomplish this goal, the EDC team takes a "real world" approach, using simulated IoT assets and users to benchmark the capabilities of different ThingWorx deployment configurations. First, each implementation is pushed to its limit in an effort to establish real-world baselines: metrics which can be used to help customers determine which architecture choices will work for their custom needs. Then, each implementation is pushed beyond its limits, providing useful insight into where and why things fail, and illuminating potential implementation changes which could push the boundaries further.

Through the simulation testing to come, the EDC will be publishing the resulting benchmarks for all to see! These benchmarks will include details on implementation goals and performance metrics for different stages of deployment. Additionally, best-practice articles which illustrate how to deploy the different architectural components (those referenced within the benchmarks) will also be posted, highlighting the optimal approach to integrating everything into the ThingWorx platform.

Stay tuned to see more about just how versatile the ThingWorx Platform can be! We look forward to discussing these findings as they are published right here on the PTC Community.
View full tip
Previously:
- Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device
- Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform

Overview

Using the C SDK Edge client configuration we did earlier, we'll now create the setup illustrated above. In this C SDK client we'll push the data from the ThingWorx publisher (server name: TW802Neo) to the ThingWorx subscriber (server name: TW81). Notice that the SteamSensor2 on the publisher server is the one binding to the C SDK client, and the FederatedSteamThing on the subscriber only receives data from SteamSensor2. Let's crack on!

Content
- Configure ThingWorx to publish
- Configure ThingWorx to subscribe
- Publish entities from the publisher to the subscriber
- Create a Mashup to view data published to the subscriber

Pre-requisite
The minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher and subscriber at the same time.

Configure ThingWorx Publisher

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the highlighted sections. Essentially it's required to configure the Server Identification, i.e. My Server name (FQDN / IP) and My Server Description (optional), and the Federation subscribers this server publishes to, i.e. all the servers you want to publish to from this server. Refer to the Federation Subsystem doc in the Help Center for a detailed description of each configurable parameter.
2. Save the Federation Subsystem.

Configuring a Thing to be published

1. Navigate back to the Composer home page and select the entity which you'd like to publish.
2. In this case I'm using SteamSensor2, which was created to connect to the C SDK client.
3. To publish, edit the entity and click on the Publish checkbox.
4. Save the entity.

Configure ThingWorx Subscriber

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystem and configure the highlighted sections. Essentially it's required to configure the Server Identification, i.e. My Server name (FQDN / IP) and My Server Description (optional). Refer to the Help Center doc on the Federation Subsystem should you need more detail on the configurable parameters. If you only want to use this server as a subscriber of entities from the publishing ThingWorx server, you don't have to configure the section "Federation subscribers this server publishes to"; I've configured it here anyway to show that both servers can work as publishers and subscribers.
2. Save the Federation Subsystem.

Configuring a Thing to subscribe to a published Thing

1. Subscribing to an entity is fairly straightforward; I'll demonstrate by utilizing the C SDK client which is currently pushing values to my remote thing called SteamSensor2 on server https://tw802neo:443/Thingworx.
2. I have already published SteamSensor2 (see the section Configuring a Thing to be published above).
3. Create a Thing called FederatedSteamThing with RemoteThingWithTunnels as its ThingTemplate.
4. Browse for the Identifier and select the required entity to which binding must be done.
5. Navigate to the Properties section for the entity and click Manage Bindings to search for the remote properties and add them to this Thing.
6. Save the entity; we can then see the values that were pushed from the C SDK client.
7.
Finally, we can also quickly see the values pulled via a Mashup created on the subscriber ThingWorx; below is a simple mashup with a Grid widget pulling values using the QueryPropertyHistory service.
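As a minimal sketch of that last step, the same history can also be pulled in a service on the subscriber by calling the standard QueryPropertyHistory service on the federated Thing. The Thing name below assumes the FederatedSteamThing entity created in this example, and the maxItems value is arbitrary:

// Subscriber side: pull recent property history from the federated Thing
var params = {
    maxItems: 100 /* NUMBER - cap on the number of returned rows */
};

// result: INFOTABLE - one row per recorded property value
var history = Things["FederatedSteamThing"].QueryPropertyHistory(params);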
View full tip
Create a new Thing using the Scheduler Thing Template. The Scheduler Thing will fire a ScheduledEvent event when the configured schedule fires. The event is automatically present and does not need to be added manually.

Configuration

The Scheduler configuration is quite straightforward and allows for an exact setup of the schedule based on units of time, e.g. seconds, minutes, hours, days of week etc. It can be accessed via the Thing's Entity Configuration.

Configuration allows for:
- Changing the runAsUser context, in which the Events will be handled. The user will need visibility and permission on e.g. executing Services or dependent Things which are required to run the Service triggered by the Event.
- Changing the Schedule, i.e. at which times the Events will be fired (by default every minute). The schedule is displayed in CRON String notation and can be changed and viewed in detail by clicking on "More". The CRON String will be generated automatically based on the inputs. Schedules can be configured in Manual mode, allowing for full configuration of each and every time-based attribute, or for a specific time Type, allowing for configuration based only on seconds, minutes, hours, days, weeks, months or years.

The screenshots below show schedules running every minute and every Saturday / Sunday at 12:00 ("Every Weekend Day").

Services

Scheduler Things inherit two Services by default from the Thing Template:
- DisableScheduler
- EnableScheduler

These will activate / de-activate the Scheduler and allow / disallow firing Events once a scheduled time is reached. Whether a Scheduler is currently enabled or disabled can be seen in its properties.
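To actually do something when the schedule fires, add a Subscription on the Scheduler Thing listening to the ScheduledEvent event. A minimal sketch of such a subscription body is shown below; the logger call is just a placeholder, and the commented-out Thing and service names are hypothetical:

// Body of a Subscription to the ScheduledEvent event on the Scheduler Thing.
// This runs every time the configured CRON schedule fires, in the context of
// the configured runAsUser.
logger.info("ScheduledEvent fired at: " + new Date());

// Example: trigger your own logic on another entity each time the schedule fires.
// "MyReportThing" and "GenerateDailyReport" are hypothetical names.
// Things["MyReportThing"].GenerateDailyReport();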
View full tip
Original Post Date: June 6, 2016

Description: This is a video tutorial on creating an InfoTable through a service, creating and adding a Data Shape, adding rows to the InfoTable through a service, adding rows to an InfoTable property, and querying the InfoTable.
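For reference, a minimal sketch of the scripted part of those steps is shown below; the Data Shape name "MyDataShape" and its fields (assetName STRING, temperature NUMBER) are assumptions, so substitute your own:

// Create an empty InfoTable structured by a Data Shape
var myInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "MyDataShape"
});

// Add rows to the InfoTable
myInfoTable.AddRow({ assetName: "Pump01", temperature: 78.2 });
myInfoTable.AddRow({ assetName: "Pump02", temperature: 81.5 });

// Return it from the service (service output base type: INFOTABLE)
var result = myInfoTable;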
View full tip
ThingWorx's JDBC extensions - the Relational Database Management System (RDBMS) and JDBC extensions - allow ThingWorx to connect to a variety of different databases. With that comes a natural question: how, and what sort of, SQL statements can be executed via these extensions?

Note: Importing the JDBC extensions, i.e. the RDBMS and JDBC extensions, creates a Database Template for that particular database. If you are working with the RDBMS extension, a Template for the corresponding database is created with a similar name, e.g. importing the RDBMS extension for Oracle 12 will create a Template named OracleDBServer12. Importing a JDBC driver using the JDBC extension will create a Template named after the JDBC driver used, or a custom name can be given.

The following examples and SQL statements adhere to Oracle's SQL*Plus standard; however, they can easily be adapted to the type of RDBMS you intend to work with.

Topics
- How to create a SQL Service in a ThingWorx entity
- Types of SQL statements
- Examples of SQL Service usage and some extended use cases / examples

How to create a SQL Service in ThingWorx

1. Navigate to the Thing implementing the Database Template, e.g. OracleDBServer12.
2. Click on the Services section under Entity Information and click on Add My Service.
3. A new service creation section will come up; change the Service type from JavaScript (the default selection) to either SQL (Query) or SQL (Command), depending on the type of SQL you are creating under this particular service.
4. Here's a quick example of creating a SQL (Query) service which takes a name as input for a select * SQL statement, i.e. it returns the complete set of rows and columns from any given table on which the user has the access to perform a Select.

Note: The Base Type defaults to InfoTable when creating a SQL (Query) service, and the number of returned rows is restricted to 500. Therefore, if the table contains more than 500 rows, ensure you change the Max Rows parameter.

5. Example of creating a SQL (Command) service that deletes all the rows from a database table.

Note: The Base Type defaults to Number when using SQL (Command).

Additional information: When creating a SQL service, apart from changing the Service Info and Inputs/Outputs, a third section, Tables/Columns, allows users to explore the tables and their respective columns in that particular user's schema - meaning the objects on which the user has select rights in his schema in the database.

Types of SQL statements

This is not an exhaustive list, but it contains the most commonly used types of SQL statements:

1. Data Definition Language (DDL)
   a. Create, Alter and Drop schema objects
   b. Grant and Revoke privileges and roles
2. Data Manipulation Language (DML)
   a. Insert
   b. Delete
   c. Select

Examples of SQL Service usage and some extended use cases / examples

1. Data Definition Language (DDL)
   a. Create statement
   b. Alter statement
   c. Drop statement
   d. Flashback statement (Oracle specific)
   e. Grant statement
   f. Rename statement
2. Data Manipulation Language (DML)
   a. Insert statement
   b. Delete statement
   c. Select statements
Use cases - Case 1: Backing up a DataTable

DataTable objects in ThingWorx are for quick lookup of data, and they are most performant up to roughly 100K rows. Exceeding 100K rows in a DataTable makes it highly susceptible to performance issues when querying or writing to it. Unless there is sharding on the persistence provider, or multiple persistence providers are used, JDBC connectivity to external data stores like RDBMS systems can help keep up with a growing number of rows in DataTables. RDBMS tables are more than capable of storing very large numbers of rows without performance penalties. The JDBC extension can be used to do just that in a use case requiring backup of a DataTable - or any ThingWorx data storage object, for that matter.

Here's one quick example using the Insert SQL service shown above to back up an entire DataTable to an Oracle DB table. The following ThingWorx JavaScript service wraps the InsertIntoBULKDATAINSERTDT SQL service:

// result: INTEGER
// get the total row count of the DataTable
var totalCount = Things["BulkInsertDT"].GetDataTableEntryCount();

// result: INFOTABLE
// DataTable service to fetch all of its rows
var allData = Things["BulkInsertDT"].GetDataTableEntries({
    maxItems: totalCount /* NUMBER */
});

// loop over the rows fetched above and insert each one into the database table
for (var i = 0; i < totalCount; i++) {
    var row = allData.getRow(i);

    // map the row data to the SQL service inputs
    var params = {
        LongCol3: row.LongCol3 /* LONG */,
        numcol1: row.NumCol1 /* NUMBER */,
        StringCol2: row.StringCol2 /* STRING */,
        IntCol4: row.IntCol4 /* INTEGER */
    };

    try {
        // result: NUMBER
        // call the SQL service InsertIntoBULKDATAINSERTDT created under a DB Thing called OracleDBThingNew
        var result = Things["OracleDBThingNew"].InsertIntoBULKDATAINSERTDT(params);
    } catch (err) {
        logger.info("Failed to insert the values: " + err);
    }
}
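Reading the data back is just as simple. The sketch below assumes a SQL (Query) service named SelectFromTableByName (like the select * example from step 4 above) has been created on the same database Thing, with a single name input and StringCol2 as one of the returned columns - these names are assumptions:

// Invoke the SQL (Query) service defined on the database Thing
var params = {
    name: "SomeName" /* STRING - input parameter of the SQL (Query) service */
};

// result: INFOTABLE (capped by the Max Rows setting of the SQL service)
var rows = Things["OracleDBThingNew"].SelectFromTableByName(params);

// Iterate the result like any other InfoTable
for (var i = 0; i < rows.getRowCount(); i++) {
    var row = rows.getRow(i);
    logger.info("Row " + i + ": " + row.StringCol2);
}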
View full tip
Excited to announce ThingWorx 8.1 is officially available in our Support Portal. Please find the release notes below.

The following feature enhancements and bug fixes exist in ThingWorx 8.1.0:

Enhancements

Platform:
• Metrics Reporting is enabled by default, which allows usage, performance, and diagnostics data to be sent to a PTC server daily. For more information about this setting, see Platform Subsystem.
• You can add and configure Notifications in New Composer. For more information, see Adding Notifications.
• License files are now instance specific.
• Security for application keys has been enhanced. The default expiration date has been changed to 24 hours if it is not explicitly set.
• Additional capability has been added to New Composer.
• Improvements to anomaly detection accuracy have been added. As a result, data collection restart is no longer necessary after a long gap, and the H2 database that installs with the Training Microservice is stored in memory, not as a persisted file. For more information, see Anomaly Detection.
• You can now load configuration/project files from KEPServerEX instances.

Bug Fixes

Platform:
• Fixed an issue where Tomcat failed to start when using SAP HANA. TW-22191
• Fixed an issue that was preventing ThingWorx from starting after the File Transfer Subsystem was disabled. TW-22177
• Fixed an issue where the change history of a Mashup was automatically updated even if no changes were made. TW-22114
• Fixed an issue that was preventing the ServiceInvokeCompleted event from working after performing an in-place upgrade. TW-21784
• Fixed an issue where alert notifications were not being sent to recipients after removing a recipient. TW-21585
• Fixed an issue where the Add button in the Services page did not display after creating a Data Table. TW-21518
• Fixed an issue with alert notifications for entities containing periods in the name. TW-21347
• Fixed an issue that was causing connected assets to display as disconnected in ThingWorx Utilities. UTL-4698
• Fixed an issue where data bind was lost after changing Read-Only settings to Read/Write in Composer. TW-23506
• Fixed an issue that was causing a MetricsReportingTask error after enabling ThingWorx Performance Advisor. TW-21141
• Fixed an issue with the ThingWorx authentication window when specifying the site while using Firefox and Internet Explorer. TW-21271

Mashup Builder:
• Fixed an issue with the List widget that was causing incorrect tooltips to display. TW-24012 TW-23961 TW-23038
• Fixed an issue where Chrome was automatically retrying Remote Service calls when a timeout occurred. TW-23828
• Fixed an issue after restarting the ThingWorx web app where the Runtime or Composer's index.html were missing. TW-23984
• Fixed an issue where closing a modal dialogue did not remove the disabled state from an element. TW-11217
• Fixed an issue when creating a popup with the Navigation widget. The tab sequence of the popup was dependent on the original mashup. TW-11151
• Fixed an issue with localized values of data columns when using the Data Filter widget. TW-11059

Extensions:
• Fixed an issue where CSV parser extension import failed if the text file that was being imported did not include a new line character at the end of the last line of text. TW-21863
• Fixed an issue with the Advanced Grid widget where the Reset button was not localized. TW-21457
• Fixed an issue with the jQuery library used by the WebSocketTunnel_ExtensionPackage widget.
Note: If you are using the WebSocketTunnel_ExtensionPackage, you will need to upgrade to version 3.0.2 if you are upgrading to ThingWorx 8.1.0. To upgrade the extension, go to the Web Sockets Tunnel Widget and Library page of the ThingWorx Marketplace. TW-24465

End of Life Information

SQUEAL functionality has been discontinued in 8.1.

System requirements: http://support.ptc.com/WCMS/files/173583/en/ThingWorx_Core_8.1_System_Requirements_1.0.pdf
Installation guide: http://support.ptc.com/WCMS/files/173600/en/Installing_ThingWorx_8.1_1.0_.pdf
ThingWorx 8.1 Cross Platform Highlights: Security
ThingWorx 8.1 Cross Platform Highlights and Q&A: Licensing
View full tip
There are four types of Analytics:

Prescriptive analytics: What should I do about it?

Prescriptive analytics is about using data and analytics to improve decisions and therefore the effectiveness of actions. Prescriptive analytics is related to both descriptive and predictive analytics. While descriptive analytics aims to provide insight into what has happened and predictive analytics helps model and forecast what might happen, prescriptive analytics seeks to determine the best solution or outcome among various choices, given the known parameters: "Any combination of analytics, math, experiments, simulation, and/or artificial intelligence used to improve the effectiveness of decisions made by humans or by decision logic embedded in applications."

These analytics go beyond descriptive and predictive analytics by recommending one or more possible courses of action. Essentially they predict multiple futures and allow companies to assess a number of possible outcomes based upon their actions. Prescriptive analytics uses a combination of techniques and tools such as business rules, algorithms, machine learning and computational modelling procedures. Prescriptive analytics can also suggest decision options for how to take advantage of a future opportunity or mitigate a future risk, and illustrate the implications of each decision option. In practice, prescriptive analytics can continually and automatically process new data to improve the accuracy of predictions and provide better decision options.

Prescriptive analytics can be used in two ways:

Inform decision logic with analytics: Decision logic needs data as an input to make the decision. The veracity and timeliness of data will ensure that the decision logic operates as expected. It doesn't matter if the decision logic is that of a person or embedded in an application - in both cases, prescriptive analytics provides the input to the process. Prescriptive analytics can be as simple as aggregate analytics about how much a customer spent on products last month, or as sophisticated as a predictive model that predicts the next best offer to a customer. The decision logic may even include an optimization model to determine how much, if any, discount to offer to the customer.

Evolve decision logic: Decision logic must evolve to improve or maintain its effectiveness. In some cases, decision logic itself may be flawed or degrade over time. Measuring and analyzing the effectiveness or ineffectiveness of enterprise decisions allows developers to refine or redo decision logic to make it even better. It can be as simple as marketing managers reviewing email conversion rates and adjusting the decision logic to target an additional audience. Alternatively, it can be as sophisticated as embedding a machine learning model in the decision logic for an email marketing campaign to automatically adjust what content is sent to target audiences.

Different technologies of prescriptive analytics to create action:

Search and knowledge discovery: Information leads to insights, and insights lead to knowledge. That knowledge enables employees to become smarter about the decisions they make for the benefit of the enterprise. Developers can embed search technology in decision logic to find knowledge used to make decisions in large pools of unstructured big data.

Simulation: Simulation imitates a real-world process or system over time using a computer model.
Because digital simulation relies on a model of the real world, the usefulness and accuracy of simulation to improve decisions depends a lot on the fidelity of the model. Simulation has long been used in multiple industries to test new ideas or how modifications will affect an existing process or system.

Mathematical optimization: Mathematical optimization is the process of finding the optimal solution to a problem that has numerically expressed constraints.

Machine learning: "Learning" means that the algorithms analyze sets of data to look for patterns and/or correlations that result in insights. Those insights can become deeper and more accurate as the algorithms analyze new data sets. The models created and continuously updated by machine learning can be used as input to decision logic or to improve the decision logic automatically.

Pragmatic AI: Enterprises can use AI to program machines to continuously learn from new information, build knowledge, and then use that knowledge to make decisions and interact with people and/or other machines.

Use of prescriptive analytics in ThingWorx Analytics:

Thing Optimizer: Thing Optimizer functionality provides the prescriptive scoring and optimization capabilities of ThingWorx Analytics. While predictive scoring allows you to make predictions about future outcomes, prescriptive scoring allows you to see how certain changes might affect future outcomes. After you have generated a prediction model (also called training a model), you can modify the prescriptive attributes in your data (those attributes marked as levers) to alter the predictions. The prescriptive scoring process evaluates each lever attribute and returns an optimal value for that feature, depending on whether you want to minimize or maximize the goal variable. Prescriptive scoring results include both an original score (the score before any lever attributes are changed) and an optimized score (the score after optimal values are applied to the lever attributes). In addition, for each attribute identified in your data as a lever, original and optimal values are included in the prescriptive scoring results.

How to access Thing Optimizer functionality: ThingWorx Analytics prescriptive scoring can only be accessed via the REST API Service. Using a REST client, you can access the Scoring service, which includes a series of API endpoints to submit scoring requests, retrieve results, list jobs, and more. Requires installation of the ThingWorx Analytics Server.

How to avoid mistakes - below are some common mistakes made while doing prescriptive analytics:
- Starting digital analytics without a clear goal
- Ignoring core metrics
- Choosing overkill analytics tools
- Creating beautiful reports with little business value
- Failing to detect tracking errors

Image source: Wikipedia. Content: go.forrester.com (partially).
View full tip
Check out this new KCS article which links to all known best practice documents available for ThingWorx. This article will get larger in time as more articles are published related to the Dos and Don'ts of building an IoT application! Do you know when to use timers, and where to implement their subscriptions? How about ensuring info tables are used at the proper time, and data tables at others? Pesky performance issues wherein ThingWorx runs slow for apparently no reason? All of these questions and more are addressed here!
View full tip
New Generation Composer is available from ThingWorx 7.4 and later. Each subsequent release of ThingWorx will contain additional New Composer features and functionality. This video is focused on the layout changes and new features implemented in ThingWorx 7.4.

How to enable the New Composer?

1. In the top right-hand corner, click on the User Menu and select the Preferences option (Figure 1).
2. Click the check box for Turn on New Composer Features.
3. Click Done. A New Composer link is added to the menu bar at the top of the Composer window (Figure 2).
4. Click the New Composer link to open a new tab for the New Composer view (Figure 3).

What's the layout change in New Composer?

Three-area layout (Figure 4):

Menu bar on the left (Area 1)
- Set Project Context to set a default project name for new entities.
- Two views: Recent and Browse. The Recent view will quickly locate recently accessed entities; Browse is almost the same as the old menu navigation bar.
- The menu can be resized or hidden; the main idea here is to increase screen real estate to allow a bigger view/edit area.

Main area for listing and editing entities in the middle (Area 2)
- It provides a wider area to edit entities (author services, build mashups, etc.).

An extra area on the right for preview, properties/events editing, etc. (Area 3)
- It gives you an easier and handier way to get a glance at an entity's basic information.

Layout change in an entity's editing page

When you create a new Thing, you will find all the facets (general information, properties, services, events, subscriptions) of the Thing are listed in a dropdown list (Figure 5). By doing so it saves more area for editing.

Properties and Alerts
- Properties and Alerts are now listed separately, in two different tabs (Figures 6 and 7).
- Properties and Alerts are now edited in the right area of the page (see Figure 4, Area 3).

Services and Subscriptions
- A bigger editor area.

Events
- Events are edited in the right area of the page (see Figure 4, Area 3).

What's the main function/feature change in New Composer?

Industrial Connections
- The Industrial Connections entity allows you to connect with and configure industrial things in ThingWorx. With the "Discover" feature, you can easily bind an industrial device's (e.g., Kepware) tags to a ThingWorx entity (Figure 8).

Alerts
- From ThingWorx 8.0.0, the Anomaly alert type is supported. An anomaly alert is only useful if you have configured Anomaly Detection to monitor a stream of data. Anomaly metrics settings can be configured in the Alert edit page (Figure 9).

Subscriptions
- You can now manually remove a subscription permanently from the system in New Composer, which is impossible in the old Composer.

Services
- New Composer provides assistive scripting tools like static code analysis, string search and replace, etc. (Figure 10).

How can I switch back to the old Composer when editing an entity?

It is easy! As long as you have opened a ThingWorx entity, you will notice there is an "Edit in Composer" button; it will lead you back to the old Composer, and all edits that have been saved will be visible in the old Composer.

A video demonstration of the New Composer is also available now. Feel free to review it here: New Composer Video
View full tip
There are four types of Analytics:

Descriptive: What happened?

Descriptive analytics is a preliminary stage of data processing that creates a summary of historical data to yield useful information and possibly prepare the data for further analysis. These are analytics which use data aggregation and data mining to provide insight into the past and answer: "What has happened?"

Descriptive analysis, or statistics, does exactly what the name implies: it "describes", or summarizes, raw data and makes it something that is interpretable by humans. These are analytics that describe the past. The past refers to any point of time that an event has occurred, whether it is one minute ago or one year ago. Descriptive analytics are useful because they allow us to learn from past behaviors and understand how they might influence future outcomes.

The vast majority of the statistics we use fall into this category. (Think basic arithmetic like sums, averages, percent changes.) Usually, the underlying data is a count, or an aggregate of a filtered column of data, to which basic math is applied. For all practical purposes, there are an infinite number of these statistics. Descriptive statistics are useful to show things like total stock in inventory, average dollars spent per customer and year-over-year change in sales. Common examples of descriptive analytics are reports that provide historical insights regarding the company's production, financials, operations, sales, finance, inventory and customers.

Note: Use descriptive analytics when you need to understand at an aggregate level what is going on in your company, and when you want to summarize and describe different aspects of your business.

Different techniques of descriptive analytics:
- Sampling
- Mean
- Mode
- Median
- Standard Deviation
- Range and Variance
- Stem and Leaf Diagram
- Histogram
- Quartiles
- Frequency Distributions

(A small worked example of a few of these appears at the end of this article.)

Use of descriptive analytics in ThingWorx Analytics:

Signal Detection: When analyzing volumes of data, it is helpful to know which data is actually useful and which data is just noise. Signals are based on a correlation algorithm that examines historical data to identify the strength of a given input in predicting future outcomes. Signals can identify meaningful correlations within the data. Signals are useful during initial analysis to determine which features you want to curate in a given dataset for predictive model generation. For example, knowing the month of the year is more important to accurately predicting tomorrow's weather than knowing the day of the week; the month has a much stronger signal than the day of the week for this prediction. ThingWorx Analytics reports signal strength as a mutual information (MI) score that represents the probability of predicting the goal variable when a given feature is provided. It can effectively capture non-linear relationships. ThingWorx Analytics evaluates each feature, or combination of features, to identify the top signals.

Cluster Analysis: Cluster analysis categorizes data into groups based on similarities relative to a goal variable. Like a clique, objects in a cluster minimize intra-distances (distances within the cluster) while maximizing inter-distances (distances between clusters). Clusters are mutually exclusive, meaning that each record can belong to only one cluster. However, ThingWorx Analytics supports a user-defined cluster hierarchy that can include sub-clusters inside other clusters.
The higher the number of clusters in the data, the smaller each cluster's population will be, but the stronger the potential insights can be.

How to access descriptive analysis functionality via ThingWorx Analytics:

REST API Service - Using a REST client, you can access the Signals Service and the Clusters Service. Each service includes a series of API endpoints to submit analysis requests, retrieve results, list jobs, and more. Requires installation of the ThingWorx Analytics Server.

Analytics Builder - As part of the ThingWorx Analytics Extension, Analytics Builder provides a user interface for interacting with your data. In addition to generating and scoring predictive models in Analytics Builder, you can also run procedures to generate signals.

How to avoid mistakes - useful tips for the different techniques of descriptive analytics:
- Crystallize the research problem and its operability.
- Read the literature on data analysis techniques.
- Evaluate the various techniques that can do similar things with respect to the research problem.
- Know what a technique does and what it doesn't.
- Consult people, especially your supervisor.
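As promised above, here is a small, purely illustrative example of a few of the listed techniques (mean, median and sample standard deviation) computed over a plain numeric array in a ThingWorx JavaScript service; the input values are made up:

// Purely illustrative descriptive statistics over a hard-coded array
var values = [12.1, 14.8, 13.5, 15.2, 12.9, 14.1];

// Mean
var sum = 0;
for (var i = 0; i < values.length; i++) {
    sum += values[i];
}
var mean = sum / values.length;

// Median: middle value of the sorted list, or average of the two middle values
var sorted = values.slice().sort(function (a, b) { return a - b; });
var mid = Math.floor(sorted.length / 2);
var median = (sorted.length % 2 === 0) ? (sorted[mid - 1] + sorted[mid]) / 2 : sorted[mid];

// Sample standard deviation
var sqDiffSum = 0;
for (var j = 0; j < values.length; j++) {
    sqDiffSum += Math.pow(values[j] - mean, 2);
}
var stdDev = Math.sqrt(sqDiffSum / (values.length - 1));

logger.info("mean=" + mean + ", median=" + median + ", stdDev=" + stdDev);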
View full tip
Sampling Strategy

This blog post will cover the four sampling strategies that are available in ThingWorx Analytics. It will tell you how each sampling strategy runs behind the scenes, when you may want to use it, and the pros and cons of each strategy.

SAMPLE_WITH_REPLACEMENT

This strategy is not often used by professionals but may still be useful in certain circumstances. When you sample with replacement, the value that you randomly selected is then returned to the sample pool, so there is a chance that the same record can appear multiple times in your sample.

Example: Let's say you have a hat that contains 3 cards with different people's names on them: John, Sarah, Tom. Let's say you make 2 random selections. For the first selection you pull out the name Tom. When you sample with replacement, you put the name Tom back into the hat and then randomly select a card again. For your second selection, it is possible to get another name, like Sarah, or the same one you selected, Tom.

Pros: May find improved models in smaller datasets with low row counts.
Cons: The accuracy of the model may be artificially inflated due to duplicates in the sample.

SAMPLE_WITHOUT_REPLACEMENT

This is the default setting in ThingWorx Analytics and the sampling strategy most commonly used by professionals. The way this strategy works is that after a value is randomly selected from the sample pool, it is not returned. This ensures that all the values selected for the sample are unique.

Example: Let's say you have a hat that contains 3 cards with different people's names on them: John, Sarah, Tom. Let's say you make 2 random selections. For the first selection you pull out the name Tom. When you sample without replacement, you randomly select a card from the hat again without putting the Tom card back. For your second selection, you could only get the Sarah or John card.

Pros: This is the sampling strategy that is most commonly used, and it will deliver the best results in most cases.
Cons: May not be the best choice if the desired goal is underrepresented in the dataset.

UPSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is useful when the desired goal is underrepresented in the dataset. The records that represent the desired outcome of the goal are copied multiple times so they represent a larger share of the total dataset.

Example: Let's say you are trying to discover whether a patient is at risk for developing a rare condition, like chronic kidney failure, that affects around 0.5% of the US population. In this case, the most accurate model that would be generated would say that no one will get this condition, and according to the numbers, it would be right 99.5% of the time. But in reality, this is not helpful at all to the use case, since you want to know if the patient is at risk of developing the condition. To avoid this, copies are made of the records where the patient did develop the condition so they represent a larger share of the dataset. Doing this gives ThingWorx Analytics more examples to help it generate a more accurate model.

Pros: Patterns from the original dataset remain intact.
Cons: Longer training time.

DOWNSAMPLE_AND_SAMPLE_WITHOUT_REPLACEMENT

This is also useful when the desired goal is underrepresented in the dataset. In downsample and sample without replacement, some records that do not represent the desired goal outcome are removed. This is done to increase the desired records' percentage of the dataset.

Example: Let's continue using the medical example from above.
Instead of creating copies of the desired records, undesired records are removed from the dataset. This causes the records where patients did develop the condition to occupy a larger percentage of the dataset.

Pros: Shorter training time.
Cons: Patterns from the original dataset may be lost.
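To make the difference between the two basic strategies concrete, here is a purely illustrative sketch in plain JavaScript. This is not the internal ThingWorx Analytics implementation, just the hat-and-cards example from above in code form:

// The hat of cards from the examples above
var names = ["John", "Sarah", "Tom"];

// Sample WITH replacement: the same record can be picked more than once
function sampleWithReplacement(pool, n) {
    var picks = [];
    for (var i = 0; i < n; i++) {
        picks.push(pool[Math.floor(Math.random() * pool.length)]);
    }
    return picks;
}

// Sample WITHOUT replacement: each pick is removed from the pool first
function sampleWithoutReplacement(pool, n) {
    var copy = pool.slice();
    var picks = [];
    for (var i = 0; i < n && copy.length > 0; i++) {
        var idx = Math.floor(Math.random() * copy.length);
        picks.push(copy.splice(idx, 1)[0]);
    }
    return picks;
}

logger.info("With replacement: " + sampleWithReplacement(names, 2));
logger.info("Without replacement: " + sampleWithoutReplacement(names, 2));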
View full tip
In this particular scenario, the server is experiencing a severe performance drop. The first step is to check the overall state of the server: CPU consumption, memory, disk I/O. Not seeing anything unusual there, the second step is to check the ThingWorx condition through the status tool available with the Tomcat manager. Per the observation:
- Despite 12 GB of memory being allocated, only 1 GB is in use.
- A large number of threads currently running on the server are experiencing long run times (up to 35 minutes).

Checking the Tomcat configuration didn't show any errors or potential causes of the problem, thus moving on to the second bullet: the threads need to be analyzed. One thread had been running for 200,936 milliseconds - more than 3 minutes for an operation that should take less than a second. It was also noted that there were 93 busy threads running concurrently.

Causes: Concurrency in writing session variable values to the server. The threads are kept alive and block the system. Tracing the issue back to a piece of code in a service recently included in the application, the problem was solved by adding an IF condition so that the session variable values are updated only when needed. As a result, the update only happens once a shift.

Conclusion: Using Tomcat to view mashup-generated threads helped identify the service involved in the root cause. The modification required to resolve it was a small code change to reduce the frequency of the session variable update.
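The actual service code from this case is not shown, but the guard that resolved it looked conceptually like the sketch below; the Thing name "ShiftInfoThing", its properties, and the "UpdateShiftSessionValues" helper service are all hypothetical placeholders for the real implementation:

// Only rewrite the session variable values when the shift has actually changed,
// instead of rewriting them on every mashup service call.
var activeShift = Things["ShiftInfoThing"].currentShift;
var lastAppliedShift = Things["ShiftInfoThing"].lastAppliedShift;

if (activeShift !== lastAppliedShift) {
    // Hypothetical helper service that performs the expensive session update
    Things["ShiftInfoThing"].UpdateShiftSessionValues({ shift: activeShift /* STRING */ });
    Things["ShiftInfoThing"].lastAppliedShift = activeShift;
}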
View full tip
Overview The Axeda Platform is a secure and scalable foundation to build and deploy enterprise-grade applications for connected products, both wired and wireless. This article provides you with a detailed feature overview and helpful links to more in-depth articles and tutorials for a deeper dive into key areas. Types of Connected Product Applications M2M applications can span many vertical markets and industries. Virtually every aspect of business and personal life is being transformed by ubiquitous connectivity. Some examples of M2M applications include: Vehicle Telematics and Fleet Management - Monitor and track the location, movements, status, and behavior of a vehicle or fleet of vehicles Home Energy Monitoring - Smart energy sensors and plugs providing homeowners with remote control and cost-saving suggestions Smart Television and Entertainment Delivery - Integrated set-top box providing in-view interaction with other devices – review your voicemails while watching a movie, chat with your Facebook friends, etc. Family Location Awareness - Set geofences on teenagers, apply curfews, locate family members in real time, with vehicle speed and condition Supply Chain Optimization - Combine status at key inspection points with logistics and present distribution managers with an interactive, real-time control panel Telemedicine - Self-monitoring/testing, telecardiology, teleradiology Why Use a Platform? Have you ever built a Web application? If so, you probably didn't create the Web server as part of your project. Web servers or Web application servers like Apache HTTPd or JBoss manage TCP sockets, threads, the HTTP protocol, the compilation of scripts, and include thousands of other base services. By leveraging the work done by the dedicated individuals who produced those core services, you were able to rapidly build an application that provided value to your users.  The Axeda Platform is analogous to a Web server or a Web application server, but provides services and a design paradigm that makes connected product development streamlined and efficient. Anatomy of an Axeda Connected Product Connected Products can really be anything that your product and imagination can offer, but it is helpful to take pause for some common considerations that apply to most, if not all of these types of solutions. Getting Connected - Bring your product or equipment to the network so that it can provide information to the solution, and react to commands and configuration changes orchestrated by the application logic. Manage and Orchestrate - Script your business logic in the cloud to tie together remote information with information from other business systems or cloud-based services, react to real-time conditions, and facilitate batch operations to synchronize, analyze, and integrate. Present and Report - Build your user experiences, enabling people to interact with your connected product, manage workflows around business processes, or facilitate data analysis. Let's take a look at the Axeda Platform services that help with each of these solution considerations. Getting Connected Wired & Wireless Getting connected can assume all sorts of shapes, depending on the environment of your product and the economics of your solution. The Axeda Platform makes no assumption about connectivity, but instead provides features and functionality to help you connect.For wireless applications, especially those which may use cellular or satellite communications, the speed and cost of communication will be an important factor.  
For this reason, Axeda has created the Adaptive Machine Messaging Protocol (AMMP), a cross-platform, efficient HTTP-based protocol that you can use to communicate bi-directionally with the platform. The protocol is expressive, robust, secure, and most importantly, able to be implemented on a wide range of hardware and programming environments. When you are faced with connecting legacy products that may be communicating with a proprietary messaging protocol, the Axeda Platform can be extended with Codecs to "learn" your protocol by translating your device's communication format into a form that the platform can understand. This is a great option for retrofitting existing, deployed products to get connectivity and value today, while designing your next generation of products with AMMP support built-in.

Manage and Orchestrate

The Data Model defines the information and its behavior in the Axeda Platform.

Rules

Rules form the heart of a dynamic connected application. There are three types of rules that can be leveraged for your orchestration layer:

Expression rules run in the cloud, and are configured and associated with your assets through the Axeda Admin UI or SDK. These rules have an If-Then-Else structure that's easy to create and understand. They're like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature, and you define a rule along the lines of IF temperature > 80 THEN CreateAlarm("High Temp", 100). This rule compares the temperature to 80 every time a reading is received; when the reading exceeds 80, the rule creates an alarm with name "High Temp" and severity 100. Learn more about Expression Rules.

State Machines help organize Expression Rules into manageable groups that apply to assets when the assets are in a certain state. For example, if your asset were a refrigerated truck, and you were interested in receiving an alert when the temperature within the cargo area rose above a preset threshold, you would not want this rule to be applied when your truck asset is empty and parked in the distribution center lot. In this case, you might organize your rules into a state machine called "TruckStatus". The TruckStatus state machine would then consist of a set of states that you define, and the rules that execute when the truck is in a particular state. For example:

State "Parked": IF doorOpen THEN …
State "In Transit": IF temperature > 40 THEN …
State "Maintenance": <no rules>

You can learn more about state machines in an upcoming technical article.

Scripting

Using Axeda Custom Objects, you can harness the power of the Axeda SDK to gain access to the complete set of platform data and functionality, all within a script that you customize. Custom Object scripts can be invoked in an Expression Rule to provide customized and flexible business logic for your application. Custom Object scripts are written in a powerful scripting language called Groovy, which is 100% Java syntax compatible. Groovy also offers very modern, concise syntax options that make your code simple and easy to understand. Groovy can also implement the body of a web service method. We call this Scripto. Scripto allows you to write code and call that code by name through a REST web service. This allows a client application to call a set of customized web services that return exactly the information and format needed by the application. Here is a beginning tutorial on Scripto. This site includes many Scripto examples called by different Rich Internet Applications (RIAs).
Location-Based Services Knowing where something is, and where it has been, opens a world of possible application features. The Axeda Platform can keep track of an asset’s current and historical location, which allows your applications to plot the current position of your assets on a map, or show a breadcrumb trail of where a particular asset has been. Geofences are virtual perimeters around geographic locations. You can define a geofence around a physical location by describing a center point and radius, or by “drawing” a polygon around an arbitrary shape. For instance, you may have a geofence around the Boston metro that is defined as the center of town with a 10-mile radius. You may then compare an asset’s location to this geofence in a rule and trigger events to occur. IF InNamedGeofence(“Boston”) THEN CreateAlarm(…) You can learn more about geofences and other location-oriented rule features in an upcoming tutorial. Integration Queue In today’s software landscape, almost no complete solution is an island unto itself. Business systems need to interoperate by sharing data and events, so that specialized systems can do their job without tight coupling. Messaging is a robust and capable pattern for bridging the gap between systems, especially those that are hosted in the cloud. The Axeda Platform provides a message queue that can be subscribed to by external systems to trigger processes and workflows in those systems, based on events that are being tracked within the platform. A simple ExpressionRule can react to a condition by placing a message in the integration queue as follows: IF Alarm.severity > 100 THEN PublishObject() A message is then placed in the queue describing the platform event, and another business system may subscribe to these messages and react accordingly. Web Services Web Services are at the heart of a cloud-based API stack, and the Axeda Platform provides full comprehensiveness or flexibility. The platform exposes Web Service operations for all platform data and configuration meta data. As a result, you can determine the status of an asset, query historical data for assets, search for assets based on their current location, or even configure expression rules and other configuration settings all through a modern Web Service API, using standard SOAP and REST communication protocols. Scripto Web Service APIs simplify system integration in a loosely-coupled, secure way, and we have a commitment to offering a comprehensive collection of standard APIs into the Axeda Platform. But we can't have an API that suits every need exactly. You may want data in a particular format, such as CSV, JSON, or XML. Or some logic should be applied and its inefficient to query lots of data to look for the bit you want. Wouldn’t you rather make the service on the other side do exactly what you want, and give it to you in exactly the format you need? That is Scripto – the bridge between the power and efficiency of the Axeda Custom Object scripting engine, and a Web Service client. Using Scripto, you can code a script in the Groovy language, using the Axeda SDK and potentially mashing up results from other systems, all within the platform, and expose your script to an external consumer via a single, REST-based Web Service operation. You create your own set of Web Services that do exactly what you want. This powerful combination let’s you simplify your Web Service client code, and give you easy access and maintainability to the scripted logic. 
Present

Rich Internet Applications are a great way to build engaging, information-rich user experiences. By exposing platform data and functions via Web Services and Scripto, you can use your tool of choice for developing your front-end. In fact, if you choose a technology that doesn't require a server-side rendering engine, such as HTML + AJAX, Adobe Flash, or Microsoft Silverlight, then you can upload your application UI files to the Axeda Platform and let the platform serve your URL!

Additional references for using RIAs on the Axeda Platform:
- Axeda Sample Application: Populating A Web Page with Data Items
- Extending the Axeda Platform UI - Custom Tabs and Modules

Far-Front-Ends and Other Systems

If a client-only RIA user interface is not an option for you, you can still use Web Services to integrate platform information into other server-side presentation technologies, such as a Microsoft SharePoint portal or a Java EE Web Application. You can also get lightning-fast updates for your users with Axeda Machine Streams, which use ActiveMQ JMS technology to stream data in real time from the Axeda Platform to your custom data warehouses. While your users are viewing a drill-down on a set of assets, they can receive asynchronous notifications about real-time data changes, without the need to constantly poll Web Services.

Summary

Building Connected Products and applications for them is an incredibly rewarding job. Axeda provides a set of core services and a framework for bringing your application to market.
View full tip
Distributed Timer and Scheduler Execution in a ThingWorx High Availability (HA) Cluster Written by Desheng Xu and edited by Mike Jasperson    Overview Starting with the 9.0 release, ThingWorx supports an “active-active” high availability (or HA) configuration, with multiple nodes providing redundancy in the event of hardware failures as well as horizontal scalability for workloads that can be distributed across the cluster.   In this architecture, one of the ThingWorx nodes is elected as the “singleton” (or lead) node of the cluster.  This node is responsible for managing the execution of all events triggered by timers or schedulers – they are not distributed across the cluster.   This design has proved challenging for some implementations as it presents a potential for a ThingWorx application to generate imbalanced workload if complex timers and schedulers are needed.   However, your ThingWorx applications can overcome this limitation, and still use timers and schedulers to trigger workloads that will distribute across the cluster.  This article will demonstrate both how to reproduce this imbalanced workload scenario, and the approach you can take to overcome it.   Demonstration Setup   For purposes of this demonstration, a two-node ThingWorx cluster was used, similar to the deployment diagram below:   Demonstrating Event Workload on the Singleton Node   Imagine this simple scenario: You have a list of vendors, and you need to process some logic for one of them at random every few seconds.   First, we will create a timer in ThingWorx to trigger an event – in this example, every 5 seconds.     Next, we will create a helper utility that has a task that will randomly select one of the vendors and process some logic for it – in this case, we will simply log the selected vendor in the ThingWorx ScriptLog.     Finally, we will subscribe to the timer event, and call the helper utility:     Now with that code in place, let's check where these services are being executed in the ScriptLog.     Look at the PlatformID column in the log… notice that that the Timer and the helper utility are always running on the same node – in this case Platform2, which is the current singleton node in the cluster.   As the complexity of your helper utility increases, you can imagine how workload will become unbalanced, with the singleton node handling the bulk of this timer-driven workload in addition to the other workloads being spread across the cluster.   This workload can be distributed across multiple cluster nodes, but a little more effort is needed to make it happen.   Timers that Distribute Tasks Across Multiple ThingWorx HA Cluster Nodes   This time let’s update our subscription code – using the PostJSON service from the ContentLoader entity to send the service requests to the cluster entry point instead of running them locally.       
const headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "appKey": "INSERT-YOUR-APPKEY-HERE"
};

const url = "https://testcluster.edc.ptc.io/Thingworx/Things/DistributeTaskDemo_HelperThing/services/TimerBackend_Service";

let result = Resources["ContentLoaderFunctions"].PostJSON({
    proxyScheme: undefined /* STRING */,
    headers: headers /* JSON */,
    ignoreSSLErrors: undefined /* BOOLEAN */,
    useNTLM: undefined /* BOOLEAN */,
    workstation: undefined /* STRING */,
    useProxy: undefined /* BOOLEAN */,
    withCookies: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: url /* STRING */,
    content: {} /* JSON */,
    timeout: undefined /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: undefined /* STRING */,
    domain: undefined /* STRING */,
    username: undefined /* STRING */
});

Note that the URL used in this example - https://testcluster.edc.ptc.io/Thingworx - is the entry point of the ThingWorx cluster. Replace this value to match your cluster's entry point if you want to duplicate this in your own cluster.

Now, let's check the result again.

Notice that the helper utility TimerBackend_Service is now running on both cluster nodes, Platform1 and Platform2.

Is this Magic? No! What is Happening Here?

The timer or scheduler itself is still being executed on the singleton node, but now instead of triggering the helper utility locally, the PostJSON service call from the subscription is being routed back to the cluster entry point - the load balancer. As a result, the request is routed (usually round-robin) to any available cluster nodes that are behind the load balancer and reporting as healthy.

Usually, the load balancer will be configured to have a cookie-based affinity: the load balancer will route the request to the node that has the same cookie value as the request. Since this PostJSON service call is a RESTful call, any cookie value associated with the response will not be attached to the next request. As a result, the cookie-based affinity will not impact the round-robin routing in this case.

Considerations to Use this Approach

Authentication: As illustrated in the demo, make sure to use an Application Key with an appropriate user assigned in the header. You could alternatively use username/password or a token to authenticate the request, but this could be less ideal from a security perspective.

App Deployment: The hostname in the URL must match the hostname of the cluster entry point. As the URL of your implementation is now part of your code, if you deploy this code from one ThingWorx instance to another, you will need to modify the hostname/port/protocol in the URL. Consider creating a variable in the helper utility which holds the hostname/port/protocol value, making it easier to modify during deployment.

Firewall Rules: If your load balancer has firewall rules which limit the traffic to specific known IP addresses, you will need to determine which IP addresses will be used when a service is invoked from each of the ThingWorx cluster nodes, and then configure the load balancer to allow the traffic from each of these public IP addresses. Alternatively, you could configure an internal IP address endpoint for the load balancer and use the local /etc/hosts name resolution of each ThingWorx node to point to the internal load balancer IP, or register this internal IP in an internal DNS as the cluster entry point.
View full tip
Integrating LDAP authentication into Thingworx is fairly simple. Since release 5.0 and later, the out-of-the-box (OOTB) Thingworx authenticators already include the necessary code to validate a user's credentials against an LDAP server. These authenticators check whether an LDAP server is connected every time a user attempts a login, and then further check whether this user exists in the LDAP server. If the username does exist in LDAP, then Thingworx will check if the password entered matches the password stored within LDAP. If the password entered does not match the password stored in LDAP, then Thingworx will next check if the password matches the one stored in Thingworx for that user. So in order for a user to log in to Thingworx, they must have a user Thing created for them within Thingworx Composer (this can be done programmatically, see below), and a valid password which matches either an LDAP account password or the password as it is set for that user on the Thing in Thingworx Composer.

The first thing a developer needs to do to integrate LDAP is configure their Thingworx instance so that it can find the LDAP server and access its contents. This is done by importing an XML file which will allow the developer to see a Thing that comes with the Thingworx platform (see attached file "directoryServices.xml"). The Thing that needs configuring is called ApacheDS3 and it is a DirectoryServices Thing.

The largest task for a developer integrating LDAP into Thingworx involves importing their LDAP users into Thingworx. Getting the LDAP usernames out of the LDAP server will vary depending on which distribution of LDAP is in use. However, once the developer acquires this information, using it to create users in Thingworx is simple. The developer will need to create a Thing Service which creates a dummy password and assigns the LDAP username in the parameters. Then they can pass the parameters into the CreateUser service of the "EntityServices" resource:

var params = {
    password: "SOMETHING_COMPLICATED", // dummy password does not matter, but you don't want an accidental match, so make it something very complicated, and standard for your company's LDAP users
    name: ldap_username, // retrieve from LDAP
    description: "This user was created as part of LDAP import", // can be whatever you'd like
    tags: undefined
};
Resources["EntityServices"].CreateUser(params); // no return

Any users created in this way will be redirected to Squeal if there is no home mashup assigned, so you will have to add an additional bit of code which assigns the home mashups to users, looping through something like this:

var params = {
    name: "dashboard" // replace this with the String name of a mashup (must exist)
};
Users[username].SetHomeMashup(params);

For full steps on integrating LDAP and Thingworx, including instructions on how to set up an ApacheDS test LDAP server, see the Thingworx support article titled “Integrate LDAP Authentication and Import LDAP User Directory into Thingworx” (reference document – CS221840).
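Putting the two snippets above together, a minimal sketch of an import service might look like the following. The username list and mashup name are hypothetical placeholders; in practice the usernames would come from your LDAP export.

// Hypothetical list of usernames exported from the LDAP server
var ldapUsernames = ["jsmith", "mmueller", "achen"];

for (var i = 0; i < ldapUsernames.length; i++) {
    // Create the user Thing with a dummy password; real authentication happens against LDAP
    Resources["EntityServices"].CreateUser({
        name: ldapUsernames[i],
        password: "SOMETHING_COMPLICATED",
        description: "This user was created as part of LDAP import",
        tags: undefined
    });
    // Assign a home mashup so the new user is not redirected to Squeal at first login
    Users[ldapUsernames[i]].SetHomeMashup({ name: "dashboard" }); // mashup must already exist
}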
View full tip
Those who have been working with ThingWorx for many years will have noticed the work done around ingress stress testing and performance optimization.  Adding InfluxDB as a time-series data persistence provider really helped level up these capabilities while simultaneously decreasing the overall resources required by the infrastructure.  However, with this ease comes a hidden challenge: the query and data processing performance needed to work that data into something useful.

Often It's Too Much Data

In general, most customers that I work with want to collect far too much data -- without knowing what it will be used for, or what processing will be required in order to make it usable and useful.  This is a common trap in how people envision IoT projects: they are told by infrastructure providers that cloud storage and compute resources are abundant and cheap, so they should collect as much data as possible.  This buildup of data means that more effort needs to be spent working it into something useful (data engineering/feature extraction) and addressing common data issues (quality, gaps, precision, etc.).  This might be fine for mature companies with large data analytics teams; however, this is a makeup that I've only seen in the largest of our customers.  Some advice - figure out what you need and how you'll use it, and then collect that.  Work on extracting value today rather than hoping that extra data collected now will provide some insights years from now.

Example - Problem Statement

You got your Thing Model designed, and edge devices connected.  Now you've got data flowing in and being stored every 5 seconds in InfluxDB.  Great progress!  Now on to building the applications which cover the various use cases.  The raw data is most likely going to need to be processed, and potentially even significantly transformed into other information, in order to make it useful.  Turning a "powered on and running" BOOLEAN into an "hour meter" INTEGER is a simple example.  Then you may need to provide a report showing equipment run time hours by day over a month.  The maintenance team may also have asked to look for usage patterns which lead to breakdowns, requiring you to extract other data points from the initial one, like the number of daily starts, average daily run time, and average time between restarts.

The problem here is that unless you have prepared these new data points and stored them as well (say in a Stream), you are going to have to build these data sets on the fly, and that can be time and resource intensive and not give you the response time expected.  As you can imagine, repeatedly querying and processing large volumes of unchanging raw data is going to have resource and time implications - so this is why data collection and data use need to be thought about separately.

Data Engineering

In the above examples, the key is actually creating new data points which are calculated progressively throughout normal operation.  This not only makes the information that you want available when you need it - in the right format - but it also significantly reduces resource requirements by avoiding the constant reprocessing of raw data.  It also helps with managing data purging, because as you create and store usable insights, you can eventually just archive away your old raw data streams.

Direct Database Queries vs. Thingworx Data Services

Despite the above being a rule of thumb, sometimes a simple, well-structured database query can get you exactly what you need and do so quite quickly.
This is especially true for InfluxDB when working with extremely large time-series datasets.  The challenge here is that ThingWorx persistence providers abstract away the complexity of writing one's own database queries, so we can't easily get at the database's raw power and are forced to query back more data than needed and work it into a usable format in memory (which is not fast).

Leveraging the InfluxDB API using the ContentLoader Technique

As InfluxDB's API is 100% REST, we can access it using the built-in ThingWorx ContentLoader services.  Check out this demonstration and explanation video where I talk about how to interact directly with InfluxDB in order to crush massive time-series data and get back much more usable and manageable data sets.  It is important to note that you should use a read-only database user for this, as you should never modify the ThingWorx databases directly; doing so creates untested scenarios which may lead to data corruption.

Optimizing ThingWorx query performance with the InfluxDB REST API - YouTube
InfluxToolBox ThingWorx demo project (by T. Wobben)
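As a rough illustration of this technique, the sketch below queries InfluxDB directly from a ThingWorx service using the built-in ContentLoader GetJSON service, letting the database do the daily aggregation instead of pulling every raw row back into memory.  The host, database, measurement, field, and credentials are all hypothetical placeholders; the actual names and query will depend on your persistence provider configuration and data model.

// Hypothetical InfluxDB endpoint and read-only credentials
var influxBase = "https://influxdb.example.local:8086";
// InfluxQL: count the 5-second "running = true" samples per day over the last 30 days
var query = 'SELECT count("running") AS samples FROM "thingworx"."autogen"."asset_stream" ' +
            'WHERE "running" = true AND time > now() - 30d GROUP BY time(1d)';

var result = Resources["ContentLoaderFunctions"].GetJSON({
    url: influxBase + "/query?db=thingworx&q=" + encodeURIComponent(query),
    username: "readonly_user",    // use a read-only database user, as noted above
    password: "readonly_password",
    timeout: 30
});
// Each returned point is one day's count of 5-second samples;
// multiplying by 5 / 3600 converts it into run-time hours per day.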
View full tip
Recently I have been working with an integration partner and end customer on an issue with ThingWorx resource exhaustion.  Early on it seemed like this was an issue with the ThingWorx Azure IoT Hub Connector, as it would freeze up and become unresponsive.  Following a root cause analysis it became clear that it was actually caused by the absence of a number of standard cloud design patterns which, if used, would have automatically adapted the operation of the overall solution to be far more resilient as well as resource-optimized.

The way that the logic was structured, it prioritized job execution on entities with the oldest last success time and would continue to retry these executions (IoT Direct Methods) every few seconds until successful.  There were a number of problems here, but I'll unpack a few in order to tie the problem to the solution via design patterns.

1) No exception handling
When the direct method execution failed, timed out, or the system reported being unable to execute the remote service, this response was not used to adapt the solution's behavior.

2) No backoff retry mechanism
As exceptions were not caught, an adaptive retry mechanism with incremental or exponential backoff could not be leveraged to limit the impact of the build-up of failing retries.

3) No exception tracking
Tracking that exceptions were occurring and counting them would allow powering an exponential backoff retry algorithm (with jitter) or a Cancel or Circuit Breaker pattern (stop doing something which is just broken), as well as provide alerting to address the specific areas of the distributed solution experiencing issues.

4) Conflicting priorities
It was interesting to see the manifestation of the conflicting interests of wanting to ensure checks and balances (having all needed data) and system resiliency.  Retries and resource usage built up exponentially due to the transient error instead of being backed off.  Trying so hard to get the needed data from failing sensors meant that operational sensors were deprioritized and their data was not received either - spreading the localized issue to the whole system.

Around the time that I shared my recommendations and some examples of how to make the solution more resilient, one of my technical colleagues at Microsoft shared some extremely interesting and relevant design patterns documented by Microsoft as part of the "Microsoft Azure Well-Architected Framework".  This framework, with its included Design Patterns for specific cloud application goals, allows you to apply well-known, industry-standard approaches to the challenges of large-scale distributed enterprise systems (reliability, performance, cost optimization).

She later shared this blog post describing exactly the exponential backoff retry with jitter pattern which we had together recommended to the systems integrator.

What's interesting for us ThingWorx people is that this framework from Microsoft is about well-architected cloud solutions and does not specifically reference the Azure stack, and as such many of these approaches and design practices can be employed in your ThingWorx applications.  What are you waiting for?  Go check them out!
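To make the pattern a little more concrete, here is a minimal sketch of an exponential backoff retry with jitter written as a ThingWorx service script.  The Thing and service names (MyAzureConnectedThing, InvokeDirectMethod) are hypothetical placeholders for whatever remote execution your solution performs, and a production implementation would typically persist the failure count and schedule the retry rather than block with pause().

// Retry a remote invocation with exponential backoff and "full jitter"
var maxRetries = 5;
var baseDelayMs = 1000;    // first retry after roughly one second
var maxDelayMs = 60000;    // cap the wait at one minute
var success = false;

for (var attempt = 0; attempt < maxRetries && !success; attempt++) {
    try {
        // Hypothetical remote execution that can fail or time out
        Things["MyAzureConnectedThing"].InvokeDirectMethod({ payload: {} });
        success = true;
    } catch (err) {
        // Track the exception so alerting and circuit-breaker logic can act on it
        logger.warn("Remote invocation failed (attempt " + (attempt + 1) + "): " + err);
        var expDelay = Math.min(maxDelayMs, baseDelayMs * Math.pow(2, attempt));
        pause(Math.floor(Math.random() * expDelay));   // jitter spreads retries out over time
    }
}

if (!success) {
    // Cancel / Circuit Breaker: stop retrying something that is clearly broken and raise the alarm
    logger.error("Giving up on MyAzureConnectedThing after " + maxRetries + " attempts");
}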
View full tip
We will host a live Expert Session: "Top 5 Thingworx environment monitoring best practices" on March 25th, 10h00 EST.

Please find below the description of the expert session and the registration link.

Expert Session: Top 5 Thingworx environment monitoring best practices
Date and Time: March 25th, 10h00 EST
Duration: 1 hour
Host: Tori Firewind, Tim Atwood and Dave Bernbeck from the Enterprise Deployment Center
Registration Here: https://www.ptc.com/en/resources/iiot/webcast/top-5-thingworx-monitoring-best-practices

In this session, we will be reviewing the main monitoring practices to keep a healthy environment and discuss the main issues from the audience. Bring your questions!

Existing recorded sessions can be found on the support portal using the keyword ‘Expert Sessions’. You can also suggest topics for upcoming sessions using this small form.

Here are some recorded sessions that might be of interest. You can find recordings for the full library of webinars using the keyword ‘Expert Sessions’ in the PTC support portal search.

Thingworx Active Active Clustering
This session will cover the main aspects of the High Availability Clustering feature launched with the ThingWorx 9.0 release. Recording Link

Upgrade to Thingworx 9 – How to Plan / Evaluate Impacts
This session highlights the key points you should evaluate to properly plan your upgrade to Thingworx 9. Recording Link

Top 5 items to check for Thingworx Performance Troubleshooting
How to troubleshoot performance issues in a Thingworx environment? Here we cover the top 5 investigation steps that will help you understand the source of your environment issues and allow better communication with PTC Technical Support. Recording Link
View full tip
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled to get answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.

People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!

End to end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45 minute deep-dive)

ThingWorx entities for import (ThingWorx 9.0)

This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.

Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.

Cheers, Greg
View full tip