
IoT & Connectivity Tips

ThingWorx Analytics appliance images can be built on multiple operating systems. In this post, we will discuss common issues that other users have encountered during the build process.

Permissions Denied – Read/Write Access to Third-Party Components

This is encountered when executing the shell script that begins the image-creation process. On macOS and Linux you may encounter a "Permission Denied" error on the two components required for the build: packer-post-processor-vhd and packer.

Error Message

The failure produces a terminal message reading "Process Completed, No Artifacts Created". This indicates that the Packer script failed to complete the task and the desired appliance images were not created. To correct this issue, change the permissions of the packer-post-processor-vhd and packer components so that they are readable and executable by the user account attempting to create the appliance.

Solution

Run the following commands in the virtual machine terminal (you may need to run them with sudo or as root):

chmod +x packer-post-processor-vhd
chmod +x packer

After running the commands above, run the shell script for the desired VM appliance output. This should resolve the "Permission Denied" error while executing the build scripts.

Error Starting Appliance in VirtualBox

Users have experienced this issue on the first run of the appliance, right after it has been assembled. This issue is unique to VirtualBox versions 5.0 and above.

Error Message – Dialog Box

If you encounter this error, check the settings of the imported OVA. The issue is the result of invalid settings in the appliance configuration. Navigate to the Settings menu for the appliance and look for the "Invalid settings detected" warning; it indicates that some configuration settings were not applied correctly by the creation-tool scripts when the product was assembled.

Solution

Hover your mouse over the warning and it will point you to the cause; in this case it is the remote-display setup. Change the settings in Display (Remote Display tab) by unchecking the Enable Server option. Press OK after unchecking "Enable Server", and start the appliance.
View full tip
Operationalize an Analytics Model Guide Part 1

Overview

This project will introduce ThingWorx Analytics Manager. Following the steps in this guide, you will learn how to deploy the model you created in the earlier Builder guide. We will teach you how to use this deployed model to investigate whether or not live data indicates a potential engine failure. NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete both parts of this guide is 60 minutes.

Step 1: Analytics Architecture

You can leverage product-based analysis Models developed using PTC and third-party tools while building solutions on the ThingWorx platform:

Use simulation as a historical basis for predictive Models
Create a virtual sensor from simulation
Design-time versus operational-time intelligence

It is important to understand how Analytics Manager interacts with the ThingWorx platform.

Build Model

In an IoT implementation, multiple remote Edge devices feed information into the ThingWorx Foundation platform. That information is stored, organized, and operated upon in accordance with the application's Data Model. Through Foundation, you will upload your dataset to Analytics Builder. Builder will then create an Analytics Model.

Operationalize Model

Analytics Manager tests new data through the use of a Provider, which applies the Model to the data to make a prediction. The Provider generates a predictive result, which is passed back through Manager to ThingWorx Foundation. Once Foundation has the result, you can perform a variety of actions based on the outcome. For instance, you could set up Events/Subscriptions to take immediate and automatic action, as well as alerting stakeholders of an important situation.

Step 2: Simulate Data Source

For any ThingWorx IoT implementation, you must first connect remote devices via one of the supported connectivity options, including Edge MicroServer (EMS), REST, or Kepware Server. Edge connectivity is outside the scope of this guide, so we'll use a data simulator instead. This simulator will act like an engine with a vibration sensor, as described in Build a Predictive Analytics Model. The data is subdivided into five frequency bands, s1_fb1 through s1_fb5. From this data, we will attempt to predict (through the engine's vibrations) when a low grease emergency condition is occurring.

Import Entities

Import the engine simulator into your Analytics Trial Edition server.

1. Download and unzip the attached amqs_entities.zip file.
2. At the bottom-left of ThingWorx Composer, click Import/Export > Import.
3. Keep the default options of From File and Entity, click Browse, and select the amqs_entities.twx file you just downloaded.
4. Click Import, wait for the Import Successful message, and click Close.
5. From Browse > All, select AMQS_Thing from the list.
6. At the top, click Properties and Alerts to see the core functionality of the simulator.

NOTE: The InfoTable Property is used to store data corresponding to the s1_fb1 through s1_fb5 frequency bands of the vibration sensor on our engine. The values in this Property change every ten seconds through a Subscription to the separate AMQS_Timer Thing. The first set of values is good, in that it does NOT correspond to a low grease condition. The second set of values is bad, in that it DOES correspond to a low grease condition. These values will change whenever the ten-second timer fires.
View Mashup

We have created a sample Mashup to make it easier to visualize the data, since analyzing data values in the Thing Properties is cumbersome. Follow these steps to access the Mashup.

1. On the ThingWorx Composer Browse > All tab, click AMQS_Mashup.
2. At the top, click View Mashup.
3. Observe the Mashup for at least ten seconds. You'll see the values in the Grid Advanced Widget change from one set to another at each ten-second interval.

NOTE: These values correspond to data entries from the vibration dataset we utilized in the prerequisite Analytics Builder guide. Specifically, the good entry is number 20,400, while the bad entry is number 20,600. You can see in the dataset that 20,400 corresponds to a no low grease condition, while 20,600 corresponds to a yes, low grease condition.

Step 3: Configure Provider

In ThingWorx terminology, an Analysis Provider is a mathematical analysis engine. Analytics Manager can use a variety of Providers, such as Excel or Mathcad. In this quickstart, we use the built-in AnalyticsServerConnector, an Analysis Provider that has been specifically created to work seamlessly in Analytics Manager and to use Builder Models.

1. From the ThingWorx Composer Analytics tab, click ANALYTICS MANAGER > Analysis Providers, New....
2. In the Provider Name field, type Vibration_Provider.
3. In the Connector field, search for and select TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
4. Leave the rest of the options at default and click Save.

Step 4: Publish Analysis Model

Once you have configured an Analysis Provider, you can publish Models from Analytics Builder to Analytics Manager.

1. On the ThingWorx Composer Analytics tab, click ANALYTICS BUILDER > Models.
2. Select vibration_model_s1_only and click Publish.
3. On the Publish Model pop-up, click Yes. A new browser tab will open with the Analytics Manager's Analysis Models menu.
4. Close that new browser tab, and instead click Analytics Manager > Analysis Models in the ThingWorx Composer Analytics navigation. This is the same interface as the auto-opened tab which you closed.

False Test

It is recommended to test the published Model with manually-entered sample data prior to enabling it for automatic analysis. For this example, we will use entry 20,400 from the vibration dataset. If the Model is working correctly, it will return a no low grease condition.

1. In Analysis Models, select the model you just published and click View.
2. Click Test.
3. In the causalTechnique field, type FULL_RANGE.
4. In the goalField field, type low_grease.
5. For _s1_fb1 through _s1_fb5, enter the following values:

Data Row Name    Data Row Value
_s1_fb1          161
_s1_fb2          180
_s1_fb3          190
_s1_fb4          176
_s1_fb5          193

6. Click Add Row.
7. Select the newly-added row in the top-right, then click Set Parent Row.
8. Click Submit Job.
9. Select the top entry in the bottom-left Results Data Shape field.
10. Click Refresh Job. Note that _low_grease is false and _low_grease_mo is well below 0.5 (the threshold for a true prediction).

You have now successfully submitted your first Analytics Manager job and received a result from ThingPredictor. ThingPredictor took the published Model, used the no low grease data as input, and provided a correct analysis of false to our prediction.

True Test

Now, let's test a true condition where our engine grease IS LOW, and confirm that Analytics Manager returns a correct prediction.

1. In the top-right, select the false data row we've already entered and click Delete Row.
2. For _s1_fb1 through _s1_fb5, change to the following values:

Data Row Name    Data Row Value
_s1_fb1          182
_s1_fb2          140
_s1_fb3          177
_s1_fb4          154
_s1_fb5          176

3. Click Submit Job.
4. Select the top entry in the bottom-left Results Data Shape field.
5. Click Refresh Job. Note that _low_grease is true and _low_grease_mo is above 0.5.

You've now manually submitted another job and received a predictive score. Just as with dataset entry 20,600, Analytics Manager predicts that the second set of s1_fb1 through s1_fb5 vibration frequencies corresponds to a low grease condition which needs to be addressed before the engine suffers catastrophic failure.

Enable Model

Since both the false and true predictions made by the Model match our dataset, let's now enable the Model so that it can be used for automatic predictions in the future.

1. In the top-left, expand the Actions... drop-down box.
2. Select Enable.

Click here to view Part 2 of this guide.
View full tip
This example provides the ability to generate a simple entity structure and some historical data for each entity. The historical data is run through a ThingWorx service to generate histogram data for display in a bar chart. The attached ThingWorx entities and PDF document contain the example as well as its documentation.
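As a rough illustration of the approach, a histogram service like the one in this example typically buckets the logged values into a fixed number of bins and returns an InfoTable that the bar chart can consume. The sketch below is a hypothetical, minimal version; the DataShape name HistogramShape and the values input InfoTable are assumptions, not the attached entities:

// Input: values (INFOTABLE with a NUMBER field "value", assumed non-empty)
// Output: INFOTABLE(HistogramShape) with assumed fields "bucket" (STRING) and "count" (NUMBER)
var binCount = 10;
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "HistogramShape"
});
// Find the range of the data
var min = Number.MAX_VALUE, max = -Number.MAX_VALUE;
for (var i = 0; i < values.rows.length; i++) {
    var v = values.rows[i].value;
    if (v < min) min = v;
    if (v > max) max = v;
}
var width = (max - min) / binCount || 1; // avoid divide-by-zero for constant data
// Count values per bin, clamping the top edge into the last bin
var counts = [];
for (var b = 0; b < binCount; b++) counts[b] = 0;
for (var j = 0; j < values.rows.length; j++) {
    var idx = Math.min(Math.floor((values.rows[j].value - min) / width), binCount - 1);
    counts[idx]++;
}
// Emit one row per bin for the bar chart
for (var k = 0; k < binCount; k++) {
    result.AddRow({
        bucket: (min + k * width).toFixed(1) + " - " + (min + (k + 1) * width).toFixed(1),
        count: counts[k]
    });
}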
View full tip
Hi everyone,

In anticipation of ThingWorx 9.0's biggest feature, active-active clustering, we'd like to provide an architectural overview of a sample active-active configuration and its underlying components. If you haven't already seen it, we invite you to read our previous Community tech tip, where we introduce the concept of active-active clustering for ThingWorx Foundation, which enables you to:

significantly reduce unplanned downtime for your mission-critical services and apps
support horizontal scalability of the ThingWorx Server, where you can scale your services up and down based on your requirements
easily run, package, deploy, and operate advanced apps and services with the help of intuitive browser-based navigation, interactive monitoring and debugging tools, and more
deploy anywhere - public cloud, private datacenter, on-premise, hybrid, or even locally on your laptop, with deep optimizations for Azure

Now, before we go too deep, we'd like to let you know that you can continue to seamlessly upgrade from previous versions of ThingWorx releases to upcoming ThingWorx 9.0 releases. Previously, you could deploy ThingWorx Foundation in a "single server" mode and, for high availability, in "active-passive cluster" mode (see here for details). From the ThingWorx 9.0 release onwards, you'll be able to continue to deploy ThingWorx Foundation in "single server" mode, and for high-availability scenarios via our new "active-active cluster" mode. Please note that the active-passive clustering configuration will no longer be supported in ThingWorx 9.0 and onwards.

Let's start with a quick recap of how a ThingWorx Foundation 9.0 release looks in a single-server deployment.

ThingWorx 9.0 Deployed in a "Single-Server" Architecture

Below is a high-level diagram depicting the main architectural layers and components of ThingWorx Foundation deployed in single-server mode.

Deployment Components

Below is a brief summary of all major architectural components and their purpose in the deployment architecture:

The Client Layer

This layer is comprised of everything that connects with, sends data to, and receives content from the ThingWorx platform. It can be broken down into two groups:

Devices/Things: Things, devices, agents, and other assets.
Users/Clients: People and the respective products (primarily web browsers) they use to access ThingWorx.

The Application Layer

This layer is where ThingWorx Foundation and other applications deployed with ThingWorx Foundation reside, such as ThingWorx Analytics, ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and others. This layer provides connectivity to the client layer, performs authentication and authorization checks, ingests/processes/analyzes content, and reacts to conditions by sending alerts. For a specific ThingWorx Foundation deployment that needs basic device data ingestion, processing, and storage, you can set up only the ThingWorx Foundation server. In some cases, with a large number of device connections, you may want to set up a ThingWorx Connection Server with ThingWorx Foundation in a single server for further scalable connectivity.

ThingWorx Foundation: ThingWorx Foundation is a Java-based application that serves as a rapid, model-based application development platform.
Shared File Storage: Shared disk space to contain ThingWorx Storage repositories and to store and archive log files accessed by all ThingWorx Foundation servers.
A NAS file storage, AWS Elastic File System, Azure Files, or others could be used for this purpose.

The Data Layer

ThingWorx Foundation includes several persistence provider implementations that enable you to choose a database option that best fits your use case. A persistence provider enables the connection to a data store and the ability to perform CRUD operations on that data. See here for more information. Currently, there are two basic variations of persistence providers:

Model Provider – Responsible for ThingWorx model metadata and system data.
Data Provider – Responsible for runtime data ingested against the model elements, including streams, value streams, data tables, etc.

ThingWorx supports H2 (in-memory database), PostgreSQL, MS SQL Server, and Azure SQL as both model and data providers, and InfluxDB as a data provider only. Please see here for model and data best practices.

ThingWorx 9.0 Deployed in an Active-Active Clustering Reference Architecture

Below is a reference architecture diagram for ThingWorx 9.0 with multiple ThingWorx Foundation servers configured in an active-active cluster deployment. Please note that this is only one reference example of how ThingWorx 9.0 can be deployed in an active-active clustered environment. There could be other architectural configurations depending upon the needs of the specific deployment.

Deployment Components

Once you have developed an understanding of the basic architectural components in single-server mode, below are the additional components required to run ThingWorx in active-active cluster mode.

The Client Layer

This is similar to what was described in the single-server configuration above.

The Application Layer

In this layer, if you're familiar with the ThingWorx active-passive cluster configuration, then you may be aware of most of the components used below—with the exception of a new component, Apache Ignite, which provides distributed caching for the horizontally scalable ThingWorx Foundation servers.

Load Balancer: A third-party device that receives network traffic and distributes requests among available servers. In an active-active cluster configuration, the load balancer is used to direct WebSocket-based traffic to the ThingWorx Connection Servers, while user request (http/https) traffic is distributed directly to the ThingWorx Foundation servers. Users can continue to use a load balancer with ThingWorx 9.0 that they might already be using for their existing active-passive or single-server deployments with ThingWorx 8.X or previous releases. Some example load balancers include, but are not limited to: HAProxy, Azure Application Gateway, and AWS Application Load Balancer.
ThingWorx Connection Services: These services handle all message routing to and from devices, providing scalable connectivity to the ThingWorx Foundation server. With the ThingWorx 9.0 release, ThingWorx Connection Services have been upgraded with many additional features to support active-active clustering of the ThingWorx Foundation servers; they now route all WebSocket traffic in round-robin fashion to the connected ThingWorx Foundation servers. Depending upon the various use cases, one could use the multiple ThingWorx Connection Services available, such as ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and ThingWorx Protocol Adapter Toolkit. Please see here for further details.
Please note that for ThingWorx 9.0 releases, the ThingWorx Connection Server is required in an active-active configuration to support all WebSocket-based traffic routing, including egress of files and device messages from multiple ThingWorx Foundation servers back to the devices; it also serves WebSocket communication from ThingWorx Mashup-based applications to ThingWorx Composer.
ThingWorx Foundation: With ThingWorx 9.0 and onwards, you can set up ThingWorx Foundation servers in an N-active-active cluster model to provide higher availability to your applications and horizontally scale the Foundation server nodes up and down based on your scalability needs.
Apache ZooKeeper: Apache ZooKeeper is a centralized service for maintaining configuration information and naming, as well as providing distributed synchronization and group services. It is a coordination service for distributed applications that enables synchronization across a cluster. Specific to ThingWorx, ZooKeeper is used for distributed locking, selecting a singleton server during server initialization, and service discovery for Apache Ignite, allowing it to find instances of ThingWorx Foundation servers.
Apache Ignite: This offers a distributed cache for the active-active cluster setup. It is used by ThingWorx Foundation servers to share state. It may be embedded with each ThingWorx instance or run as a standalone cluster for larger scale. In this configuration, Ignite is set up as a standalone cluster, but it can also run embedded within ThingWorx Foundation. Running Ignite as a standalone cluster is more suitable for larger scale, as it supports higher vertical scaling of memory in the deployment setup.

The Data Layer

ThingWorx is largely database agnostic. You can continue to use the officially supported persistence providers that you may already be using in your existing deployments based on ThingWorx 8.X or previous releases. Please look out for an upcoming ThingWorx update, as well as enhanced installation documents, to help with your upgrade and migration questions upon the general availability of ThingWorx 9.0. Please note that this diagram does not make the distinction between model and data providers; depending on your data ingestion needs, separate model and data providers can be used. As a reminder, all databases should be deployed in a high-availability configuration to help eliminate any single point of failure.

In closing, we can't wait to launch active-active clustering in 9.0 to help you:

dramatically reduce unplanned application downtime
scale your deployments and more efficiently manage your apps, regardless of where they're deployed

If you have any questions about active-active clustering or its architecture, please do not hesitate to reach out!

Stay connected!
Kaya
View full tip
This is a basic troubleshooting guide for ThingWorx. It covers the importance, types, and levels of logs, and how to get started troubleshooting the Composer, Mashups, and remote connectivity.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
Sunshine, beach chairs & ThingWorx 9.2. What more could you need for your summer essentials?

Targeted for June 2021, our next release features intelligent one-click deploy with Solution Central, new web components, and an enhanced IAM integration! Let's dive deeper into each.

Deploy an entire solution in one click with Solution Central's intelligent one-click deploy. Good news: you followed a modular design pattern and broke up your application into smaller libraries and components. You can now enjoy easier maintenance and re-use of your app. Bad news: your app now has 10 different dependencies, with differing versions, each with a required order to import into ThingWorx. Now, try to share these modules with colleagues, or use them on environments where code may already exist. Not exactly a day at the beach, right? Fear not, one-click deploy has you covered. You click the button; we spin through and find the right dependencies, the right versions, and the right order, and load them all into the target platform upon a deploy request. Solution Central one-click deploy means more sun and sand for you! Check out this post to learn more about what's available in Solution Central 3.0!

Intelligent One-Click Deploy with Solution Central

Enhance your solutions with our latest web components! Imagine this: you're a systems developer at a large parts manufacturer, and your boss has asked for a detailed analysis of downtime over the last six months. Not to worry! ThingWorx 9.2 features a new waterfall chart that can be leveraged to understand dynamics in defect counts, loss reasons, time bottlenecks, and other conditions. Be sure to try it out! And, while you're at it, try out our new web components that are available now as a preview: a toolbar to add key actions like filtering at the top of your screen or to data-intensive widgets (e.g., grids), a more flexible grid, and a fancy new paradigm for interface developers. These three preview widgets are fully functional and tested in 9.2. Preview widgets will graduate in a future release when we add all planned functionality or address any perceived usability feedback. Don't be afraid—it just means more good things are coming. Surf's up: you can use these widgets safely now!

New Waterfall Widget Coming in ThingWorx 9.2

Leverage new integrations with Azure Active Directory for more seamless user management. In prior releases we offered integration with Azure Active Directory and SSO through Central Authorization Service-type products or through custom authenticator extensions to ThingWorx. With our new Azure AD integration, you can cut the custom extensions and additional software out of the picture. We now accept SAML assertions from Azure AD directly into the ThingWorx platform, which makes it that much easier to deploy your app in your organization's SSO flow. It's as smooth as that frosty tropical drink when the sun goes down.

Like what you see? Want to try it out for yourself? ThingWorx 9.2 is targeted for June 2021, so be sure to keep a lookout on the horizon. Bump, set, spike!

Stay cool & connected,
Kaya
View full tip
Parquet Data Format used in ThingWorx Analytics

Starting with ThingWorx Analytics version 8.1, data storage no longer requires the installation of a PostgreSQL database. Instead, uploaded CSV data is converted to the optimized Apache Parquet format and stored directly in the file system. This blog explains some of the features of Apache Parquet that justify this transition in ThingWorx Analytics data storage.

What is Apache Parquet?

Apache Parquet is a column-oriented data store of the Apache Hadoop ecosystem. It is compatible with most of the data processing frameworks in the Hadoop environment. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Below is an illustration of the columnar storage model.

Apache Parquet Features and Benefits

Apache Parquet is implemented using the record shredding and assembly algorithm, taking into account the complex data structures that can be used to store the data. Apache Parquet stores data such that the values in each column are physically stored in contiguous memory locations. Due to this columnar storage, Apache Parquet provides the following benefits:

Column-wise compression is efficient and saves storage space
Compression techniques specific to a type can be applied, as the column values tend to be of the same type
Queries that fetch specific column values need not read the entire row data, thus improving performance
Different encoding techniques can be applied to different columns

Some advantages of using Parquet for ThingWorx Analytics

Apart from the above benefits of using Parquet, which amount to higher efficiency and increased performance, below are some advantages that apply specifically to ThingWorx Analytics:

The change in ThingWorx Analytics from using a database to using Parquet removes the limitations on the number of data columns the system can handle. It also allows for streamlining the dataset creation process.
Since the data is converted to the Parquet format, there is no need to separately optimize the dataset. Even when new data is appended to an existing dataset, a new partition is added, and re-optimization is optional but not required.
Data can be appended easily, so there is no longer a need to re-load the full dataset when new data values are added.

The illustration below shows the transition from the row-based data storage model to the columnar storage of Parquet.
View full tip
Raise your hand if you're ready for seamless, rapid deployment of your ThingWorx applications with visibility into your various environments! It's time to say goodbye to error-prone deployments with manual dependency tracking and hello to Solution Central!

Releasing this fall as part of 8.5, Solution Central is a brand-new cloud service coming to the ThingWorx platform to enable you to efficiently manage your ThingWorx applications across the enterprise.

With Solution Central, you'll no longer be caught chasing missing dependencies (like ThingShapes, Mashups, templates or library extensions). Solution Central automatically identifies and packages up the dependencies required for your application. No more manual dependency madness!

Whether you're managing many apps deployed to a few environments or a single app deployed to hundreds of environments, Solution Central allows you to accelerate your deployment through an intuitive UI or powerful APIs for automation.

Here's how it works:

1. Begin by creating your application in Composer with a project.
2. Let Solution Central automatically package up all the artifacts and dependencies required for your application.
3. Allow Solution Central to publish your solution package to the cloud.
4. Deploy your application to your various environments (local servers, data centers, cloud systems) directly from Solution Central. It's like your company has its own private app store.

Here's a sneak peek of the Solution Central UI!

Keep an eye out for the release of ThingWorx 8.5 at the end of September 2019 and begin accelerating your app deployment! Check out the presentation and demo my fellow PM Chris Baldwin and I delivered at LiveWorx19—and be sure to attend LiveWorx20! To navigate to our session recording, search for "Introducing Solution Central: Your Gateway to Accelerated IIoT Value Across the Enterprise" here.

Sound interesting? Message me directly to discover how you can become part of the Solution Central Private Preview Program!

-Kaya
View full tip
Let's assume I collect timeseries data from two temperature sensors, located next to each other. This is done for redundancy and to ensure the quality of measures. Each of the sensors is logged into its own Property in ThingWorx, and I can create a timeseries for the individual sensors. However, I would like to create a combined InfoTable that holds information for both sensors but averages out their values.

Instead of reading values from a stream, I just create some custom data for both InfoTables. After this I use the UNION function to combine the two tables and sort them. Once they are sorted, the INTERPOLATE function allows grouping the InfoTable by timestamp.

With this, I have combined the two sensor results into one result set. Taking the average of the numbers will give results closer to the real value (as both sensors might not be 100% accurate). In case one sensor does not have data for a given point in time, it will still be considered in the final output.

InfoTable1:

2018-12-18 00:00:00.000    2
2018-12-19 00:00:00.000    3
2018-12-20 00:00:00.000    5
2018-12-21 00:00:00.000    7

InfoTable2:

2018-12-18 00:00:00.000    1
2018-12-19 12:00:00.000    2
2018-12-20 00:00:00.000    3
2018-12-21 00:00:00.000    4

Combined result:

2018-12-18 00:00:00.000    1.5
2018-12-19 00:00:00.000    3
2018-12-19 12:00:00.000    2
2018-12-20 00:00:00.000    4
2018-12-21 00:00:00.000    5.5

This can be done with the following code:

// Required DataShape "myInfoTableShape": "timestamp" = DATETIME, "value" = NUMBER
// The Service Output is an InfoTable based on the same DataShape
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "myInfoTableShape"
};

// Create two InfoTables, representing the data of each sensor
var infoTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
var infoTable2 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

var newEntry = new Object();

// Create custom data for InfoTable1
newEntry.timestamp = 1545091200000; newEntry.value = 2; infoTable1.AddRow(newEntry);
newEntry.timestamp = 1545177600000; newEntry.value = 3; infoTable1.AddRow(newEntry);
newEntry.timestamp = 1545264000000; newEntry.value = 5; infoTable1.AddRow(newEntry);
newEntry.timestamp = 1545350400000; newEntry.value = 7; infoTable1.AddRow(newEntry);

// Create custom data for InfoTable2
newEntry.timestamp = 1545091200000; newEntry.value = 1; infoTable2.AddRow(newEntry);
newEntry.timestamp = 1545220800000; newEntry.value = 2; infoTable2.AddRow(newEntry);
newEntry.timestamp = 1545264000000; newEntry.value = 3; infoTable2.AddRow(newEntry);
newEntry.timestamp = 1545350400000; newEntry.value = 4; infoTable2.AddRow(newEntry);

// Combine the two InfoTables via the UNION function
var unionTable = Resources["InfoTableFunctions"].Union({
    t1: infoTable1,
    t2: infoTable2
});

// Optional: Sort the table by timestamp
var sortedTable = Resources["InfoTableFunctions"].Sort({
    sortColumn: "timestamp",
    t: unionTable,
    ascending: true
});

// Interpolate the (sorted) table by interval, take average values and build the result
var result = Resources["InfoTableFunctions"].Interpolate({
    mode: "INTERVAL",
    timeColumn: "timestamp",
    t: sortedTable,
    ignoreMissingData: undefined,
    stats: "AVG",
    endDate: 1545609600000,
    columns: "value",
    count: undefined,
    startDate: 1545004800000
});
View full tip
This document provides API information for all 51.0 releases of ThingWorx Machine Learning.
View full tip
Hi everyone,

As everyone knows already, the main way to define Properties inside the EMS Java SDK is to use annotations at the beginning of the VirtualThing class implementation. There are some use-cases where we need to define those properties dynamically, at runtime, for example when we use a VirtualThing to push a sensor's data from a Device Cloud to the ThingWorx server, for multiple customers. In this case, the number of properties differs per customer and, due to the large number of variations, we need to be able to define the Properties themselves programmatically. The following code will do just that:

// device_Properties is assumed to be a DOM NodeList discovered beforehand,
// with int_PropertiesLength entries. NumberUtils comes from Apache Commons Lang.
for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    String str_NodeValue = nNode.getTextContent();
    PropertyDefinition pd;
    AspectCollection aspects = new AspectCollection();
    if (NumberUtils.isNumber(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.NUMBER);
    } else if ("true".equals(str_NodeValue) || "false".equals(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.BOOLEAN);
    } else {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.STRING);
    }
    //Add the dataChangeType aspect
    aspects.put(Aspects.ASPECT_DATACHANGETYPE, new StringPrimitive(DataChangeType.VALUE.name()));
    //Add the dataChangeThreshold aspect
    aspects.put(Aspects.ASPECT_DATACHANGETHRESHOLD, new NumberPrimitive(0.0));
    //Add the cacheTime aspect
    aspects.put(Aspects.ASPECT_CACHETIME, new IntegerPrimitive(0));
    //Add the isPersistent aspect
    aspects.put(Aspects.ASPECT_ISPERSISTENT, new BooleanPrimitive(false));
    //Add the isReadOnly aspect
    aspects.put(Aspects.ASPECT_ISREADONLY, new BooleanPrimitive(true));
    //Add the pushType aspect
    aspects.put("pushType", new StringPrimitive(DataChangeType.ALWAYS.name()));
    //Add the isLogged aspect
    aspects.put(Aspects.ASPECT_ISLOGGED, new BooleanPrimitive(true));
    //Add the defaultValue aspect if needed...
    //aspects.put(Aspects.ASPECT_DEFAULTVALUE, new BooleanPrimitive(true));
    pd.setAspects(aspects);
    super.defineProperty(pd);
}
//You need to comment out initializeFromAnnotations() and use initialize() instead for this to work.
//super.initializeFromAnnotations();
super.initialize();

Please put this code in the constructor of your VirtualThing implementation; it needs to run exactly once, at instance creation. The code relies on the manual discovery of the sensor properties that you perform beforehand. Depending on the implementation, you can either do the discovery of the properties inside this method (too slow) or pass the result as a parameter to the constructor (better). Hope it helps!
View full tip
The Asset Simulator can simulate actual device behavior without having to connect to a physical asset. It does this by replaying data sequences derived from mathematical distributions or actual asset data imported as CSV files. Virtual assets can be configured to reference these data sequences and expose them as asset behavior.

The Asset Simulator communicates with KepServerEX in the same way that a real device does. The simulated asset behavior is controlled through an administration console. If you would like to test with the Asset Simulator 8.2.0, please find attached a guide and the necessary files.

Notes:
- The attached Asset Simulator applies to both Manufacturing and Service Apps
- If using ThingWorx Manufacturing Apps, import the Manufacturing Apps demo data
- If using ThingWorx Service Apps, import the Service Apps demo data
View full tip
Here is a demo that uses the Repeater widget and the Smart Grid widget to display a nested table. Note: this demo is for testing use.

1. Create a DataShape for the nested datatable, named DataTableStructure.
2. Create a DataShape named InfoStructure for Obj.
3. Create the nested datatable, named DataTableTest, as below.
4. Create a service named AddObjInput, as below, in this datatable.
5. Create a Mashup named RepeaterMsh.
6. Add widgets like textbox, label, button, and Smart Grid to the mashup. There are some parameters we added before; please bind them to the widgets that we just added to the mashup.
7. Add the DataTableTest service AddOrUpdateDataTableEntry to the mashup. Bind the textbox and the Smart Grid edited table to the AddOrUpdateDataTableEntry parameters, and bind the button's Clicked event to trigger the AddOrUpdateDataTableEntry service.

The Smart Grid widget needs a special attribute named TableDefinition. For this demo sample it is:

{"columns":[{"name":"PrimarySchool","type":"text","display":"PrimarySchool"},{"name":"SecondarySchool","type":"text","display":"SecondarySchool"},{"name":"HighSchool","type":"text","display":"HighSchool"}]}

8. Create the mashup that contains the Repeater widget. The first part is aimed at adding a new record; the second part (the Repeater widget) is aimed at displaying table data and also updating it.
In the first part, there are textbox, label, Smart Grid, and button widgets. They are bound to the service named AddDataTableEntry, aimed at adding a new record to the table. (The Smart Grid is used to add the nested table data.)
There is a button named AddRecords; bind its Clicked event to the AddDataTableEntry service, and bind the AddDataTableEntry service's ServiceInvokeCompleted event to GetDataTableEntries.
The Smart Grid widget here is an empty grid, but it should have a structure at the beginning. That's why we created the AddObjInput service earlier: it returns an empty table, but with the DataShape.
The second part is the Repeater widget. The Repeater widget has an attribute named Mashup; bind the mashup RepeaterMsh to it. Bind GetDataTableEntries' all data to the Repeater widget. There are some parameters that need to be bound, like gender, InfoTable, Sname...
9. Test the demo.

Note: the Smart Grid widget is not ThingWorx OOTB functionality; it can be edited flexibly. Please download it via this link: https://marketplace.thingworx.com/Items/Smart%20Grid%20Widget
There is a thread teaching how to use the Smart Grid: https://community.thingworx.com/message/53379#53379
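For reference, here is a minimal sketch of what the AddObjInput service described above could look like: it simply returns an empty InfoTable carrying the nested DataShape, so the grid has a structure to render. The DataShape name InfoStructure comes from step 2; the actual fields are whatever your demo defines.

// Output: INFOTABLE(InfoStructure) - empty, but carrying the DataShape definition
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "InfoStructure"
});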
View full tip
There are many choices in life, and ThingWorx offers some persistence provider options as well. As of ThingWorx release 8.2, five database options are provided:

1. PostgreSQL 9.4.5 minimum
2. DataStax Enterprise Edition 4.6.3, 5
3. SAP HANA SPS 11, 12
4. Microsoft SQL Server 2014 and later
5. H2 (version info is not available, maybe because it's embedded?)

H2 is for small scale, mainly for testing purposes; PostgreSQL and Microsoft SQL Server are for middle scale; and finally DataStax Enterprise Edition is for big scale. I don't have enough information about SAP HANA, so I would like to leave it untouched in my comment... I don't have a number as to how many customers are using which database, but my gut feeling tells me that PostgreSQL is a popular option, especially cost-wise. PostgreSQL offers powerful tools, such as logging and utilities, to troubleshoot issues.

In this post I would like to cover some useful information you can retrieve by using pgstattuple and pgstatindex from the contrib module. By default, PostgreSQL takes good care of fragmentation and reindexing by itself. But in some cases, you may want to review the status of the database to narrow down the cause of an issue you are troubleshooting. There are many ways to achieve this, but the contrib module is provided to review stats of tables and indexes. As explained in this article, it is recommended to keep the number of records in value_stream and stream less than 100,000. That means you'll insert and delete many records when running ThingWorx. What happens then?

- If you delete (or update) a record in a table, PostgreSQL keeps the previous record in a page but marks it as deleted (and inserts a new record in the case of an update).
- If the number of those logically deleted records increases, PostgreSQL needs to access many pages of the table to obtain records which meet the criteria, and users might experience slow performance because of this.
- Those logically deleted records will ultimately be removed from pages when vacuum is run.

If you have installed the contrib module and enabled it, you can review the stats of tables with the commands below:

select * from pgstattuple('stream');                  -- returns the stats of the stream table
select * from pgstatindex('stream_id_time_index');    -- returns the stats of an index on the stream table

pgstattuple returns the information below (I modified the format to make it more readable in this post), and the meaning of each item is explained in the documentation.

table_len  tuple_count  tuple_len  tuple_percent  dead_tuple_count  dead_tuple_len  dead_tuple_percent  free_space  free_percent
8192       1            33         0.4            3                 97              1.18                8004        97.71

Before obtaining the stats, I inserted 4 records and deleted 3, and therefore it shows tuple_count = 1 (one active record), dead_tuple_count = 3 (three logically deleted records), and dead_tuple_percent = 1.18. If dead_tuple_percent is high, that means the table has not been vacuumed, or many DML statements were executed after the last vacuum operation, and this could be the cause of slow system performance.

* IMPORTANT: pgstattuple and pgstatindex consume resources, so it's recommended to run them during a maintenance window.

Takaaki
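P.S. One prerequisite worth mentioning: on a stock PostgreSQL installation, the functions above are not available until the extension has been created in the database. A minimal example, assuming you are connected to the ThingWorx database with sufficient privileges:

-- Run once per database, as a superuser or a role with the CREATE privilege
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- After that, the stats functions shown above become available
select * from pgstattuple('value_stream');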
View full tip
This simple example creates an infotable of hierarchical data from an existing datashape that can be used in a tree or D3 Tree widget. The attachments provide a PDF document as well as the ThingWorx entities for the example. If you are just interested in the service code, it is listed below:

var params = {
    infoTableName : "InfoTable",
    dataShapeName : "TreeDataShape"
};

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(TreeDataShape)
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

var NewRow = {};
NewRow.Parent = "No Parent"; NewRow.Child = "Enterprise"; NewRow.ChildDescription = "Enterprise"; result.AddRow(NewRow);
NewRow.Parent = "Enterprise"; NewRow.Child = "Site1"; NewRow.ChildDescription = "Wilmington Plant"; result.AddRow(NewRow);
NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line1"; NewRow.ChildDescription = "Blow Molding"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset1"; NewRow.ChildDescription = "Preform Staging"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset2"; NewRow.ChildDescription = "Blow Molder"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line1"; NewRow.Child = "Site1.Line1.Asset3"; NewRow.ChildDescription = "Bottle Unscrambler"; result.AddRow(NewRow);
NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line2"; NewRow.ChildDescription = "Filling"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset1"; NewRow.ChildDescription = "Rinser"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset2"; NewRow.ChildDescription = "Filler"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line2"; NewRow.Child = "Site1.Line2.Asset3"; NewRow.ChildDescription = "Capper"; result.AddRow(NewRow);
NewRow.Parent = "Site1"; NewRow.Child = "Site1.Line3"; NewRow.ChildDescription = "Packaging"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset1"; NewRow.ChildDescription = "Printer Labeler"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset2"; NewRow.ChildDescription = "Packer"; result.AddRow(NewRow);
NewRow.Parent = "Site1.Line3"; NewRow.Child = "Site1.Line3.Asset3"; NewRow.ChildDescription = "Palletizing"; result.AddRow(NewRow);
NewRow.Parent = "Enterprise"; NewRow.Child = "Site2"; NewRow.ChildDescription = "Mobile Plant"; result.AddRow(NewRow);
NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line1"; NewRow.ChildDescription = "Blow Molding"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset1"; NewRow.ChildDescription = "Preform Staging"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset2"; NewRow.ChildDescription = "Blow Molder"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line1"; NewRow.Child = "Site2.Line1.Asset3"; NewRow.ChildDescription = "Bottle Unscrambler"; result.AddRow(NewRow);
NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line2"; NewRow.ChildDescription = "Filling"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset1"; NewRow.ChildDescription = "Rinser"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset2"; NewRow.ChildDescription = "Filler"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line2"; NewRow.Child = "Site2.Line2.Asset3"; NewRow.ChildDescription = "Capper"; result.AddRow(NewRow);
NewRow.Parent = "Site2"; NewRow.Child = "Site2.Line3"; NewRow.ChildDescription = "Packaging"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset1"; NewRow.ChildDescription = "Printer Labeler"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset2"; NewRow.ChildDescription = "Packer"; result.AddRow(NewRow);
NewRow.Parent = "Site2.Line3"; NewRow.Child = "Site2.Line3.Asset3"; NewRow.ChildDescription = "Palletizing"; result.AddRow(NewRow);
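If the hierarchy grows, the repeated property assignments above can be collapsed into a data-driven loop. A minimal sketch of the same idea (the node list is truncated for brevity; result and the DataShape are the same as above):

// Each entry: [Parent, Child, ChildDescription]
var nodes = [
    ["No Parent", "Enterprise", "Enterprise"],
    ["Enterprise", "Site1", "Wilmington Plant"],
    ["Site1", "Site1.Line1", "Blow Molding"],
    ["Site1.Line1", "Site1.Line1.Asset1", "Preform Staging"]
    // ...remaining nodes follow the same pattern
];
for (var i = 0; i < nodes.length; i++) {
    result.AddRow({
        Parent: nodes[i][0],
        Child: nodes[i][1],
        ChildDescription: nodes[i][2]
    });
}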
View full tip
Remember that when you are calling an external URL to fetch data via an API call to another system, you must encode special characters explicitly. For example, the URL that you may type into a browser to test may look like this:

https://someserver.somwhere.com:443/apicall?parameter1=test string&parameter2=test^number

but when scripting that into a string variable you'll need to replace the space and the caret with the properly encoded values (%20 and %5E):

var params = {
    username : "me",
    password : "password",
    url : "https://someserver.somwhere.com:443/apicall?parameter1=test%20string&parameter2=test%5Enumber",
    ignoreSSLErrors : false,
    timeout : 60,
    headers : headers
};
var result = Resources['ContentLoaderFunctions'].LoadXML(params);

Also note that in this instance we're making a secure connection, therefore port 443 (typically the default) was explicitly specified.
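Rather than hand-encoding each character, you can also let the built-in encodeURIComponent function do the escaping for the individual parameter values. A small sketch building the same URL that way (variable names are illustrative):

// encodeURIComponent escapes spaces as %20, carets as %5E, etc.
var p1 = encodeURIComponent("test string");   // "test%20string"
var p2 = encodeURIComponent("test^number");   // "test%5Enumber"
var url = "https://someserver.somwhere.com:443/apicall?parameter1=" + p1 + "&parameter2=" + p2;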
View full tip
Licensing summary, installing 8.1:

- An instance-specific license is now used, which prevents license sharing and further protects our intellectual property
- There are 2 paths for getting up and running with a license:
  - Connected: platform-settings configuration only
  - Disconnected: text file generated, self-serve creation from the PTC support portal, license_capability_response.bin generated to be placed in the platform folder

Connected / Disconnected mode

LicensingSubsystem::GetLicenseState returns:
- UNINITIALIZED in disconnected mode
- LICENSE_EXISTS in connected mode

User Journey

Refer to Licensing ThingWorx 8.1 and Later for specific instructions for generating a license for your instance.

NOTE: Each instance needs a valid license file to be running.

Troubleshooting:

- If any misconfiguration or connection failures occur, error messages will be thrown to the Application log. Examples: "unable to fetch license file with device id"; "Unable to connect to the PTC license server. Please make sure the LicensingConnectionSettings settings..."
- A connection attempt will occur upon ThingWorx startup. If the connection can't be made, the instance will follow the disconnected scenario.
- Make sure pop-up blockers are turned off.

Platform Settings - Licensing

License Connection Settings (in platform-settings.json - plain text sample) - a new section for licensing connection strings:
- username - used to connect to the PTC support portal
- password - encrypted (recommended) or plain text password used to connect to the PTC support portal

"LicensingConnectionSettings": {
    "username":"<PTC_support_portal_username>",
    "password":"<PTC_support_portal_password>"
}

New Services on the Licensing Subsystem

- GetInstanceID - returns the instance/device ID, which is created at startup
- WriteLicenseUsageData - writes encrypted license usage (same as the system data table) to the ThingworxStorage\reports\LicenseUsageReport folder

Instance ID

The license file is now bound to the platform Instance ID, aka Device ID (unlike most other PTC products, where the license is bound to the hardware: CPUID, MAC, ...). This Instance ID is generated during first startup and stored in the database. The Instance ID is accessible in Composer with LicensingSubsystem::GetInstanceID.

Q: What happens when license files are bad or missing?
A: With 8.0, a valid license.bin needs to be in the folder before starting up, otherwise Tomcat will crash. In 8.1, in the connected scenario (platform-settings.json), there is no need for a license file and no extra step; the connection is dynamic. In the disconnected scenario, ThingWorx will run for 30 days in limited mode, with the monitoring mashup accessible to check logs.

Q: Where is the Instance ID stored?
A: It's stored in the database.

Q: Will the Instance ID change during updates (minor, 8.1.0 to 8.1.1)?
A: Device IDs don't change.

Q: What will happen in the disconnected scenario if there is no valid license after 30 days? Will the application not start anymore, or will users not be able to log in?
A: It shuts down, so there is no more "limited" mode. However, a user can come along on day 55 (for example), drop in a valid license, and start the web app to get it fully running.

Q: What happens if the customer has to reinstall the platform after the license has been fetched? Is it possible to "return" the license?
A: Reinstalling the platform results in the generation of a new Device ID, therefore a new license file will need to be generated.
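As a quick sanity check, the two subsystem services mentioned above can be called together from a Composer script or service. A minimal sketch, assuming the executing user has permission on the Licensing subsystem:

// Read the instance/device ID that the license file is bound to
var instanceId = Subsystems["LicensingSubsystem"].GetInstanceID();

// Check whether a license was acquired (LICENSE_EXISTS vs. UNINITIALIZED)
var state = Subsystems["LicensingSubsystem"].GetLicenseState();

logger.info("Instance ID: " + instanceId + ", license state: " + state);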
View full tip
There have been a number of questions from customers and partners about when they should use different tools for calculating descriptive analytics within ThingWorx applications. The platform includes two different approaches for implementing many common statistical calculations on data for a property: descriptive services and property transforms. Both of these tools are easy to implement and orchestrate as part of a ThingWorx application. However, these tools are targeted at different scenarios and also differ in their utilization of compute resources. When choosing between these two approaches, it is important to consider the specific use case being implemented along with how the implemented approach will fit into the overall design and architecture of the ThingWorx environment. This article provides some guidance on scenarios in which to use each approach in ThingWorx applications and things to consider with each.

Let's look at the two different approaches and some guidelines for when they should be used.

Descriptive services (click for more details) provide a set of ThingWorx services to analyze a data set and perform many common data transformations. These services are targeted at performing calculations and transformations on recent operating history of a single property. Descriptive services are called on demand to perform batch calculations.

Scenarios to use descriptive services:

- On-demand calculations performed within a mashup, a service call, or an event to determine action, where calculation results are not (always) stored
- Regularly occurring calculations on logged property values or generated datasets (batch calculations)
- Calculations done regularly in minutes, hours, or days on a discrete set of data. Examples: average value in the last hour, median value in the last day, or max value in the last half hour. The time between data creation and analysis is minutes or hours, and some latency in the calculation result is acceptable for the use case.
- Input data sets with 10s to 100s to 1000s of values. Keep the size of the input data at 10,800 values or less. If larger data sizes are required, break them into micro-batches if possible or use other tools to handle the processing.
- Multiple calculations need to be done from the same set of input data. Example: the average, max, and standard deviation of values in the last hour are all required.

Things to consider when using descriptive services:

- Requires the input dataset to be in the specific DataShape format used by descriptive services. If property values are logged in a value stream, there is a service to query the values and prepare the dataset for processing. For scenarios where the data is not from a logged property, another service or SQL query can be used to prepare the dataset for processing.
- Requires JavaScript development work to implement. This includes creating a service to execute the descriptive services and using subscriptions and events to orchestrate calculations. An example of the JavaScript to execute descriptive services is available in the help center (here).
- Typically, retrieval of the input data from the value stream (QueryTimedValuesForProperty) is the slowest part of the process.
- The input data is sent to an out-of-process platform analytics service for all calculations.
- A broader set of calculation services is available (see the table at the end of this article).
- Remember that these services are not meant to be used for big-data calculations or big-data preparation. Look for other approaches if the input data sets grow larger than 10,800 values.

Property transforms (click for more details) provide a set of transformation services for streaming data as it enters ThingWorx. Property transforms are targeted at performing continuous calculations on recent values in the stream of a single property and delivering results in (near) real time. Since property transforms are continuous calculations, they are always running and using compute resources. Before implementing property transforms, review the information in the property transform sizing guide to better understand the factors that impact the scaling of property transforms.

Scenarios to use property transforms:

- Continuous calculations on a stream for a single property as new data comes into ThingWorx
- New values enter the stream faster than one value per minute (as a general guideline)
- Calculations required to be done in seconds or minutes. Examples: average electrical current in the last 10 seconds, median pressure in the last 10 readings, or max torque in the last minute
- The time between data creation and analysis is small (in seconds). The result of the property transform is required for rapid decisions and action, so reducing latency is critical
- Data sets used for calculation are small, containing 10s to 100s of values
- Calculated results are stored in a new property in the Thing model

Things to consider when using property transforms:

- Codeless process to create new property transforms on a single property in the Thing model
- Does not require input property values to be logged, as calculations are performed on streaming data as it enters ThingWorx
- Unlike descriptive services, which only execute when called, each property transform creates a continuously running job that will always be using compute resources. Resource allocations for property transforms must be included in the overall system architecture. Before selecting the property transform approach, refer to the Property Transform Sizing Guide for more information about how different parameters affect the performance of property transforms, as well as the results of performance load-test scenarios.

Let's apply these guidelines to a few different use cases to determine which approach to select.

1. A mashup application that allows users to calculate and view the median temperature over a selected time window.
In this scenario, the calculation will be executed on demand with a user-defined time window. Descriptive services are the only option here, since there is no pre-defined schedule and the user selects which data to use for the calculation.

2. Calculate the max torque (readings arriving once per second) on a press over each minute, without storing all of the individual readings.
In this scenario, the calculation is executed without storing the individual readings coming from the machine. The transformation is made to the data on its way into ThingWorx, continuously calculating on new values. Property transforms are the only option here, since the individual values are not being stored.

3. Calculation of the average pressure value (readings arriving once per second) over a five-minute window to monitor conditions and raise an alert when the value is more than two standard deviations from expected.
In this scenario, both descriptive services and property transforms can perform the required calculation. The calculation will occur every 5 minutes and each data set will have about 300 values. The selection of batch (descriptive services) or streaming (property transforms) will likely be determined by the usage of the result. In this case, the calculation result will be used to raise an alert for a specific five-minute window, which will likely require immediate action. Since the alert needs to be raised as soon as possible, property transforms are the best option (although descriptive services would also handle this case, with lower compute resource requirements).

4. Calculation of the median temperature (readings every 20 seconds) over a 48-hour period, to use as input for predicting error conditions in the next week.
In this scenario, the calculation is performed relatively infrequently (once every 48 hours) over a larger data set (about 8,640 values). Descriptive services are the best option in this case due to the data size and calculation frequency. If property transforms were used, compute resources would be tied up holding all of the incoming values in memory for an extended period before performing a calculation. With descriptive services, the calculation only consumes resources when needed, or once every 48 hours.

Hopefully the information above provides some insight and guidelines to help you choose between property transforms and descriptive services. The table below provides some additional comparisons between the two approaches.

Purpose
- Descriptive Services: provide a set of ThingWorx services to analyze a data set and perform many common data transformations
- Property Transforms: provide a set of prescribed transformation services for streaming data as it enters ThingWorx

Processing Mode
- Descriptive Services: batch
- Property Transforms: streaming / continuous

Delivery
- Descriptive Services: API / service
- Property Transforms: Composer interface, API / service

Input Data
- Descriptive Services: discrete data set; must be logged; single property; configurable by time or lookback
- Property Transforms: rolling data set on property X; persistence is optional; single property; configurable by time or lookback

Output Data
- Descriptive Services: return object handled programmatically; single output for a discrete data set
- Property Transforms: new property f_X in the input model; continuous output at a configurable frequency; output time-aligned with input data

Available Services
- Descriptive Services: statistics (min, max, mean, median, mode, std deviation); SPC calculations (# of continuous data points: above threshold, in/out of range, increasing/decreasing, alternating); data distribution: count by bins (histogram); five numbers (min, lower quartile, median, upper quartile, max); confidence interval; sampling frequency; frequency transform (FFT)
- Property Transforms: statistics (min, max, mean, median, mode, std deviation); SPC calculations (# of continuous data points: above threshold, in/out of range, increasing/decreasing, alternating)
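To make the batch (descriptive services) pattern more concrete, below is a rough sketch of a service that could run on a timer subscription for use case 3. Treat it as pseudocode against the real API: QueryTimedValuesForProperty is the data-preparation service named in this article, but the parameter names and the calculation service name (CalculateMeanValue) are assumptions to verify against the help-center example linked above for your version.

// Sketch only: prepare the last five minutes of logged values for one property,
// then hand the prepared dataset to a descriptive calculation service.
var end = new Date();
var start = dateAddMinutes(end, -5);

// Data preparation (service name from this article; parameter names assumed)
var timedValues = me.QueryTimedValuesForProperty({
    propertyName: "pressure",
    startDate: start,
    endDate: end
});

// Calculation (hypothetical service name; check the exact signature for your version)
var mean = me.CalculateMeanValue({
    timedValues: timedValues
});

// Act on the result, e.g. raise an alert or log when out of range
if (mean > 100) {
    logger.warn("Average pressure over the last 5 minutes is above threshold: " + mean);
}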
View full tip
In this session, we pick up where we left off with the mashup we built in Part 1 of our Advanced Mashup Expert Session series. Specifically, we explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.     For full-sized viewing, click the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time, as well as additional information about ThingWorx training and services.
View full tip
One of the interesting features of ThingWorx Analytics Manager is its ability to run distributed models created in Excel (and more, of course). Most people who have been tasked with understanding data have built models in Excel, and some have built quite complex models (or even applications) with it.   The ability to tie these models to real data coming from various systems connected through ThingWorx, and to operationalise their execution, is a really simple way for people to leverage their existing work and IP on a connected analytics journey.   To demonstrate this power and ease of implementation, I created a sample data set with historical data, a traffic profile, and a simple anomaly detection model to execute with Analytics Manager. (Files are attached.)   The online help center was quite helpful in explaining the process of Creating the Excel Workbook; however, I got stuck at the XML mapping stage. The Analytics and Excel documentation both neglect to mention one important detail: you must be using the Windows version of Excel to get the XML Source functionality (and I use a Mac). Once on Windows, it was easy to do. Here is a video of the XML mapping part of the process (for the inputs and results).
View full tip
In our interactions with PTC customers, we often learn they have previously performed analytics modeling in Python, Matlab, or R, or have built home-grown analyses in languages such as Java or C++. As expected, when adopting an industrial innovation platform such as ThingWorx, which has its own ThingWorx Analytics module, customers do not want to reimplement everything from scratch; they would rather integrate their previous work into the smart applications built in ThingWorx, leveraging a combination of their existing toolset and ThingWorx Analytics modeling. That is certainly possible, and there are multiple ways to do it. In this article we focus on several general approaches, but it is important to keep in mind that language-specific approaches are also possible, and we are happy to discuss those in the specific context of the customer.

Here are five different ways to bring existing analytics into ThingWorx:

1. If the task is to reuse an existing predictive model developed in a language such as Python, R, or Matlab, you can typically export that model in PMML (Predictive Model Markup Language), an XML format, and import it into ThingWorx Analytics using the AnalyticsServer_ResultsThing -> UploadModel service. Libraries such as sklearn2pmml and r2pmml can be utilized toward that goal (a minimal export sketch follows this list). The imported model can then be used in the same fashion as a model developed in ThingWorx Analytics to power smart applications built in ThingWorx. If the analysis involves more complex tasks than predictive modeling, such as custom data normalizations, non-standard machine learning models, or home-grown algorithms, use one of the options below.

2. Call the REST web API exposed by ThingWorx from Python/Matlab/R/Java/JavaScript. Every ThingWorx service can be called that way, and the API can also be used to push analysis results into ThingWorx for further consumption, perhaps together with other sources of data such as sensor readings, in the smart applications built there. The documentation for the ThingWorx REST API can be found here.

3. Expose the existing analytics through a thin layer of REST web services. In Python, for example, this can be done using Flask with a few lines of code (a minimal sketch follows this list). The orchestration can then happen from ThingWorx by calling the exposed web service and weaving the results back into smart applications.

4. Often our customers' current architecture involves a relational database (e.g., SQL Server, Oracle) that powers the existing analytics and stores the end results (predictions, correlations, etc.). In this scenario, we can connect ThingWorx directly to that database to read those results.

5. Finally, in the case of complex analytics where a tighter integration with ThingWorx is desired, existing analytics and algorithms can be wrapped into a ThingWorx Extension or an Analytics Provider using the corresponding PTC SDKs.

When choosing an integration option, customers need to carefully balance the complexity of the integration, the constraints of their architecture, the complexity of the analytics modeling, and end-user consumption requirements.
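For option 1, here is a minimal sketch of exporting a scikit-learn model to PMML with sklearn2pmml. The dataset and estimator are illustrative only; the resulting model.pmml file is what would be uploaded through the AnalyticsServer_ResultsThing -> UploadModel service.

```python
# A minimal PMML export sketch using sklearn2pmml (pip install sklearn2pmml).
# Note: the converter is built on the JPMML toolchain and needs a Java runtime.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)  # illustrative dataset only

# Wrap the estimator in a PMMLPipeline so the converter can serialize it.
pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)

# Write the PMML file, ready to be imported into ThingWorx Analytics.
sklearn2pmml(pipeline, "model.pmml")
```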
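For option 3, the sketch below shows how thin the Flask layer can be. The predict() function is a placeholder standing in for whatever existing analysis you already have; ThingWorx would call the /predict endpoint and consume the JSON result.

```python
# A minimal Flask wrapper around an existing analysis (pip install flask).
# predict() is a placeholder; replace it with your real model or algorithm.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in for the existing analysis: returns the mean of the inputs.
    return {"score": sum(features) / len(features)}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    body = request.get_json(force=True)  # e.g. {"features": [1.0, 2.0, 3.0]}
    return jsonify(predict(body["features"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A ThingWorx service can then invoke this endpoint over HTTP and weave the returned score into a smart application alongside other platform data.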
View full tip