IoT & Connectivity Tips

I created this recently for another group - it might be useful: Video Link to Create a Database Connection to your PostgreSQL ThingWorx Database
View full tip
NOTE: Although I have only tried this on the ODataConnector and SwaggerConnector, the steps below should work for all of the ThingWorx Integration Connectors: GenericConnector, HTTPConnector, ODataConnector, SAPODataConnector, SwaggerConnector, and WindchillSwaggerConnector.

This document guides you through adding a custom header to any ThingWorx Integration Connector.

Step 1: Create a Data Shape, say "CustomHeadersDataShape", and add a String field whose name matches the header you want to add. In this case I want to add a header called "Prefer", so the Data Shape gets a String field named "Prefer".

Step 2: Go to the Integration Connector to which you want to add this custom header. Navigate to "Services" and, under the "Inherited Services", override the "GetCustomHeaderParameters" service by clicking on the edit (pencil) icon.

Step 3: In the JavaScript code section, add the following snippet:

var params = {
    infoTableName: "InfoTable",
    dataShapeName: "CustomHeadersDataShape"
};
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);
var preferValue = "odata.maxpagesize=50";
var newRow = { "Prefer": preferValue };
result.AddRow(newRow);

Step 4: Save the service and execute "GetCustomHeaderParameters". You should see a single-row result containing your header value.

Now your custom header "Prefer: odata.maxpagesize=50" is set. Further executions of your connector services will send this header until it is reset.
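If you need to send more than one custom header, the same pattern extends naturally. Below is a minimal sketch, assuming you add a second String field to the Data Shape (the "X-Custom-Trace" header here is purely hypothetical, for illustration):

var params = {
    infoTableName: "InfoTable",
    dataShapeName: "CustomHeadersDataShape"
};
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// One row supplies a value for every header field defined in the Data Shape
result.AddRow({
    "Prefer": "odata.maxpagesize=50",
    "X-Custom-Trace": "my-trace-id" // hypothetical second header
});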
View full tip
Persistent vs. Logged Properties
By Mike Jasperson, VP of IoT EDC

Executive Summary
ThingWorx provides several different “aspects” (or storage options) for how property values are saved. These options each have different implications for performance and scalability. Understanding those implications is important for designing a scalable IoT solution.

Persistent Properties are best used for non-telemetry data which will change infrequently (for example, only a few times in a day) and where historical values are not required. When overused, persistent properties can put significant pressure on the database layer of your ThingWorx implementation, leading to poor performance of your IoT application. As the number of Things in your IoT application scales up, the quantity and frequency of persistent properties per Thing need to be carefully considered.

Logged Properties are best used for telemetry data where historical values need to be retained, but also for any other value that is expected to change frequently. Logged properties can create some additional requirements: a process for handling null/default values after restarts, more disk space, and a data retention policy. There are benefits as well, though, like more flexibility and scalability for the ingestion of larger volumes of data.

Persistent + Logged Properties perform the database operations of both aspects. Combined use should be very limited: only properties that update infrequently (a few times a day) and that must be in memory in the event of a ThingWorx restart.

In-Memory Only Properties are neither persistent nor logged; they are not stored to the database. These properties can greatly improve scale for values that need to be available for the application to drive UIs or compute other derived values that will be stored. However, high-frequency updates of in-memory properties can create scale challenges in HA (high availability) ThingWorx configurations, where memory state needs to be constantly shared between multiple ThingWorx nodes.

Find a complete summary as well as example cases in the document attached.
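To see how these aspects are applied in practice, here is a minimal sketch of adding a telemetry-style property (logged, but not persistent) to a Thing from a platform service. The Thing and property names are hypothetical, and the exact parameter names of AddPropertyDefinition should be verified against your ThingWorx version:

// Add a "logged but not persistent" NUMBER property for telemetry data
Things["MyPumpThing"].AddPropertyDefinition({
    name: "outletPressure",
    type: "NUMBER",
    logged: true,      // keep history in the value stream
    persistent: false  // do not write the latest value to the database
});

// Restart the Thing so the new property definition takes effect
Things["MyPumpThing"].RestartThing();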
View full tip
Javascript, everyone knows it, at least a little bit. What if I told you that you could do serious data acquisition with just a little bit of Javascript, and you may already have the tools to do it right now on your "off the shelf" device? Node.js is a command line implementation of Javascript that can be run on common, credit-card-sized devices like the Raspberry Pi or the Intel Edison. I suspect that if you already know about Node.js, you may have encountered its non-blocking, asynchronous "callback" style of programming, which can be a little different than most other languages, which block or wait for commands to complete. While this can be a benefit for increasing performance, it can also be a barrier to entry for new users. This is the problem that Node Red really solves.

Node Red is a web based Integrated Development Environment (IDE) that turns the "callback" style Javascript programming of Node.js into a series of interconnected Nodes, each of which represents a Javascript function connected by a callback to another node/function. A simple hello world program in Node Red would look something like this (with annotations in red). You can re-create this program using the Node Red IDE yourself. Here is a brief video (with no sound) which should familiarize you with how to create your own hello world flow. Video Link : 1333

How can you install Node Red on your own system to try it out? The good news is, if you have a Raspberry Pi 2 with NOOBS installed on it, Node.js and Node Red come pre-installed. If you do not already have it installed, or want to install it on your own system, it is still pretty simple. Here are the steps:

1. Download and install Node.js (https://nodejs.org/en/download/)
2. Run the command: sudo npm install -g --unsafe-perm node-red
   Omit the sudo on Windows (see http://nodered.org/docs/getting-started/installation.html for more info)
3. You now have Node Red. To run it, just type node-red on your command line.
4. Using your web browser, go to http://localhost:1880 and the Node Red IDE will appear in your browser.

How about a real hardware integration example? Node Red comes with many built-in Nodes, and many more nodes you can add to connect to specific peripherals you may have on your device. Rather than provide a complete tutorial on Node Red, I will focus on using this IDE to re-create a hardware integration that I created in the past using the Java SDK, the Raspberry Pi, and the AM2302 weather station (see Weather Applications with Raspberry Pi | ThingWorx). That example contains detailed specifics on attaching the AM2302 Temperature/Humidity sensor to your Raspberry Pi. I am going to assume you have the hardware already attached to your Raspberry Pi as described in this tutorial (https://learn.adafruit.com/dht-humidity-sensing-on-raspberry-pi-with-gdocs-logging/overview). I am also assuming that you have installed the python-based sample program described in that tutorial, and that you now have a python script called "AdafruitDHT.py" installed on your Pi that produces the following output when it is run:

pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $ sudo ./AdafruitDHT.py 2302 4
Temp=22.3*  Humidity=30.6%
pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $

If you don't have any of this hardware installed, you can still proceed with this example and just create your own temperature and humidity values manually.
We are going to connect the output of this python script directly to ThingWorx and sample its output value every 5 seconds. I will start by assuming you do not have the AM2302 hardware and create simulated values. I will then replace them with the actual output of the python script as a final step.

Polling versus Interrupt-Driven Data Collection
In the Java SDK version of this example, we poll for changes in data: every so many seconds our device wakes up and takes a reading. How do we recreate the same effect in Node Red without having to push an inject button every 5 seconds? We need an input node that activates on its own every 5 seconds. The Inject node will do this. Drag out an inject node and configure it as shown below. This is an input node, so it will start a new flow. It will fire every 5 seconds from the moment this sheet is deployed.

Simulate Data Collection
Let's generate random humidity and temperature values before getting the actual data. For this node we will use a Function node. Drag one out and configure it as shown below. Here is the Javascript for this node so you can cut and paste it into the dialog:

var tempF = Math.random() * 40 + 60;
var tempC = (tempF - 32) / 1.8;
var humidity = Math.random() * 80 + 20;
msg.payload = {
    "tempF": tempF,
    "tempC": tempC,
    "humidity": humidity
};
return msg;

Remember that the returned message is the message that the next node will receive. The payload property is the standard or default property of a message that most nodes use to pass data between each other. Here, our payload is an object with all of our simulated data in it.

Let's Test It Out
Connect the two nodes together, add a debug output node, and deploy your sheet. The completed flow will look like this. As soon as you deploy, you should see the following output in your debug tab, and every five seconds another data sample will be generated.

So how does this data get to ThingWorx? We need to take this data and deliver it to ThingWorx in the form of a REST web service call. This is easier to do than it sounds. First off, let's create a Thing on your ThingWorx server that looks like this. Now give it these properties. Next, create an Application Key in the Application Keys section of the composer and assign it to the "Administrator" user. Your keyId will of course be different. This key will be the credential you need to post your data.

Installing the ThingRest Node Red Node
To simplify the process of posting the data to ThingWorx, I have created my own custom node to post data. To install a custom node into your Node Red installation, you have to find the directory Node Red is using to store your sheets. By default this is a directory called ".node-red" in your home directory; on a Raspberry Pi this directory would be /home/pi/.node-red. If you are running Node Red now, quit it by hitting control-c, and cd into the .node-red directory. Below is the sequence of commands you would issue on your Pi to install the ThingRest node:

cd ~/.node-red
npm install git+https://git@github.com/obiwan314/node-red-node-thingrest.git
node-red

The node package manager (npm) will install this new node automatically into your .node-red directory. Now that Node Red is running again, go back to your browser and refresh your Node Red IDE. You should now have a "REST Thing" node.

Adding a REST Thing node to your flow
Drag a REST Thing output node into your flow and configure it as shown below.
Remember, your Application Key will be different than the one shown here. Also, your ThingWorx server URL may be different if your server is not on the same machine you are working on. Now connect it as shown below. When you deploy this sheet, you will be posting data to ThingWorx. Go back to your WeatherStation1 Thing in ThingWorx and use the Refresh button shown below to see your data changing.

Wait, that's it? That's the whole data collection program? Yes. The flow above is the equivalent of the Java SDK code from the Java weather station example.

Now for Some Real Data
As promised, we will now replace the simulated data in the Generate Data node with real data obtained from the "~/projects/Adafruit_Python_DHT/examples/AdafruitDHT.py 2302 4" python command on your Raspberry Pi, using an Exec node. The exec node can be found at the very bottom of your node palette. It executes a command and returns the results as msg.payload to the next node in the flow. You may have noticed it has three outputs instead of one. In order, these outputs are standard output, standard error, and the integer return code of the process. Use the first output to get the results of this command. Now connect this in place of the Generate Data node as shown below.

At this point, we can't connect the collected data to the WeatherStation1 Thing because it is in the wrong format: it is console output, and we need it in the form of a Javascript object. We are going to need a function to parse the console output into a Javascript object. Add the function node shown below. Here is the Javascript for cut-and-paste convenience:

var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*","").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%","").replace("\n","").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC
};
return msg;

Now msg.payload contains a Javascript object identical to the one we were generating at random, but now it is using real data. Connect up your nodes so they appear as shown below, but when you deploy, don't expect it to work yet; there is still one problem to get around: this python script expects to be run as the root user.

How to Run Node Red as Root
You can start Node Red as root with the following command:

sudo node-red -u /home/pi/.node-red

Note that the -u argument is required to make sure you keep using the pi user's .node-red directory. If you lose your REST Thing node, you are not using the pi user's .node-red directory, but root's instead. If you see any error messages in your debug window, try re-attaching the debug node to the Collect Data node and see what is being produced by the exec node. Don't forget to verify that your tempC, tempF, and humidity properties are updating in ThingWorx.

Let's Add a GPS Location
You may have noticed that there is a stationLocation property on the WeatherStation1 Thing. Let's set it to a fixed location, 40.0568764,-75.6720953,18, to complete this example. Below is the modified Javascript for the Parse Data node that adds this location.
var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*","").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%","").replace("\n","").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC,
    "stationLocation": "40.0568764,-75.6720953,18"
};
return msg;

What's Next?
Node Red has many more nodes that you can add to your project through the use of the npm command. There is a GPIO node library you can install at https://github.com/monteslu/node-red-contrib-gpio which will give you input and output nodes for the GPIO pins on your Pi. This library also supports accessing Arduinos attached to the Pi over a USB cable, which expands the possibilities for data collection and peripheral control. Hopefully this article has exposed you to the many other possibilities for connecting devices to your ThingWorx server. The REST Thing node uses the HTTP REST protocol to talk to ThingWorx. In the near future, with the introduction of the ThingWorx Javascript SDK, a Node Red library could be created that uses the ThingWorx AlwaysOn WebSocket protocol to communicate with your ThingWorx server, offering even more capabilities and better performance.
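For the curious, the REST Thing node is essentially wrapping a ThingWorx REST property update. Here is a minimal sketch of a comparable call in plain Node.js; the Thing name comes from this example, but the host, port, key, and exact endpoint form are placeholders to verify against your ThingWorx version's REST API documentation:

// Push parsed weather readings to ThingWorx over HTTP REST
var http = require('http');

var payload = JSON.stringify({
    tempF: 72.5,
    tempC: 22.5,
    humidity: 30.6
});

var options = {
    host: 'localhost',     // your ThingWorx server
    port: 8080,            // your ThingWorx port
    // Updates multiple properties on WeatherStation1 in one call
    path: '/Thingworx/Things/WeatherStation1/Properties/*',
    method: 'PUT',
    headers: {
        'Content-Type': 'application/json',
        'appKey': 'your-application-key-here', // the key created in the composer
        'Content-Length': Buffer.byteLength(payload)
    }
};

var req = http.request(options, function (res) {
    console.log('ThingWorx responded with status ' + res.statusCode);
});
req.on('error', function (err) { console.error(err); });
req.write(payload);
req.end();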
View full tip
ThingWorx Performance Monitoring with Grafana
authored by EDC team member Desheng Xu ( @xudesheng )

Monitoring ThingWorx performance is crucially important, both during the load testing of a newly completed application and after the deployment of new code in an existing application. Monitoring performance ensures that everything works as expected at the Enterprise level. This tutorial steps you through configuring and installing a tool which runs on the same network as the ThingWorx instance. This tool collects data from the Platform and translates it into something visual and easy to understand via Grafana.

tsample is small and customizable, and it plays a similar role to telegraf. Its focus is on gathering ThingWorx performance metrics. Historically, this tool also supported collecting OS-level performance metrics, but this is no longer supported. It is highly recommended to collect OS-level performance metrics by using telegraf, a tool designed specifically for that purpose (and not discussed here). This is not the only way to go about monitoring ThingWorx performance, but this tool uses a very good approach that has been proven effective both at customer sites and internally by PTC to monitor scale tests.

Find the most recent release here.

Recommended Deployment Architecture
tsample can be deployed on the same box where ThingWorx Tomcat is running, but it's recommended to deploy it on a separate box to minimize any performance impact caused by the collector. tsample supports export to InfluxDB and/or a local file. In this document, it is assumed that InfluxDB will be used for monitoring purposes. Please note that this is not the same instance of InfluxDB being used by ThingWorx (if configured). This article will not cover setting up InfluxDB or NGINX (if necessary), so please configure these before beginning this tutorial.

Supported Platforms
tsample has been tested on Windows 2016, MacOS 10.15, Ubuntu 16.04, and Redhat 7.x. It's anticipated to work on more recent Ubuntu/Redhat/Mac/Windows releases as well. Please leave a comment or contact the author, @xudesheng, if Raspberry Pi support is needed.

Configuration File
Where to Store the Configuration File
tsample will pick up the configuration file in the following sequence:

from the command line...
./tsample -c <path to configuration file>

from the environment...
Linux:
export TSAMPLE_CONFIG=<path to configuration file>
./tsample

Windows:
set TSAMPLE_CONFIG=<path to configuration file>
tsample.exe

from a default location...
tsample will try to find a file with the name "config.toml" in the same folder in which it starts.

How to Craft a Configuration File
You can use the following command to generate a sample file:

./tsample -c config.toml -e

or:

./tsample -c config.toml --export

A file with the name "config.toml" will be generated with a sample configuration. You can then adjust its content in accordance with the following.

Configuration File Content
Format
The configuration file must be in TOML format.

title and owner sections
Both sections are optional. The intention of these two sections is to support a documentation tool in the future.

TestMachine section
This section is required, and it defines where this tool will run.

thingworx_servers section
This section is where you define the targeted ThingWorx applications. Multiple ThingWorx servers can be defined with the same or different metrics to be collected.
thingworx_servers.metrics sections
Underneath each thingworx_servers section there are several metrics. In the default example, the following metrics are included:

ValueStreamProcessingSubsystem
DataTableProcessingSubsystem
EventProcessingSubsystem
PlatformSubsystem
StreamProcessingSubsystem
WSCommunicationsSubsystem
WSExecutionProcessingSubsystem
TunnelSubsystem
AlertProcessingSubsystem
FederationSubsystem

You can add your own customized metrics, as long as the result follows the same Data Shape. The default Data Shape has three columns; if the output Data Shape exceeds this limit, the tool will likely not work properly.

result_export_to_db section
This section defines the target InfluxDB as a sink for the collected performance metrics.

result_export_to_file section
This section defines the target file storage for the collected performance metrics.

Grafana Configuration Example
Monitor Value Stream

Step 1. Connect Grafana to InfluxDB

Step 2. Create a New Dashboard

Step 3. Create a New Query
Depending on which metrics you defined to collect in the tsample configuration file, you will see a different choice of measurement in Grafana. Here, we will use ValueStreamProcessingSubsystem as an example.

Step 4. Choose the Right Platform and Storage Provider
Some metrics depend on the database storage provider, like Value Stream and Stream.

Step 5. Choose the Metrics Figures
Select "remove" to get rid of the default 'mean' calculation. Select "non_negative_difference" from Transformations. Using this transformation, Grafana can show us the speed of writes. Then, remove the default GROUP BY "time" clause. Assign a meaningful alias to this query.

Step 6. Add Another Query
You can add another query, such as 'Value Stream Queued Speed', by following the same steps.

Step 7. Assign a Panel Title

Step 8. Review the Result
Let's go back to the dashboard page and select "last 15 minutes" or "last 5 minutes" from the top right corner. It should show a result similar to the chart below.

Step 9. Save the Dashboard
Don't forget to save your dashboard before we add more panels.

Step 10. Refine the Panel
It's difficult to figure out the high-level write speed from the above panel, so let's enhance it. Add a new query with the following configuration. In the above query, there are two additional figures: 20s and 1m. How do you choose? 20s should be the same as sampling_cycle_inseconds in your tsample configuration file; if you choose a different value, you could end up with misleading results. Larger values such as "1m" may give you a smoother result, but they could also hide system instability. Going larger than 1m is not recommended in most cases. With this new query, it's much easier to figure out what the average write speed is in the current test.

Tip: if your sampling_cycle_inseconds is 30s, then you may not need this additional query. The first image below is a sample at the 30s interval time; you would not need an additional average query to get a smooth write speed. The next example is a sample at the 10s interval time; without additional queries, you may not be able to get a meaningful understanding of the write speed. From the above three examples, it's recommended to configure the sampling interval time at 30s, or anything larger than 20s. You can then choose whether you need additional queries based on the visualization result.

Step 11. Further Refinement
The above charts illustrate the queuing and writing speed.
However, it is possible that the Value Stream may perform at a reasonable speed, but the Value Stream queue may be growing and could exceed its capacity. Let's add another query to monitor this: However, it is difficult to read this chart, since it has a different value range on the y-axis: Let's move this query to a second y-axis on the right: This will make the view much easier to see: The current queue size or remaining queue size will always move up and down; it is healthy as long as it does not continue to grow to a high level.   What Else Can Be Monitored? The following metrics would be monitored by most customers: Value Stream write and queue speed Value Stream queue size Stream write and queue speed Stream queue size Event performed speed (completedTaskCount) Event submitted speed (submittedTaskCount) Event queue size Websocket communication Websocket connection   ThingWorx Memory Usage Monitoring Create a new panel and add a new query: In a running system, memory usage will always move up and down - at times sharply or quickly - when the system is busy. The system is healthy as long as memory doesn't go up continuously or stay at a maximum for a long period of time.   Conclusion Setting up monitoring is absolutely crucial to managing the performance of an enterprise ThingWorx application. Using Grafana makes tracking and visualizing the performance much easier. Stay tuned to the EDC tag for more monitoring tips to come!
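Under the hood, the per-subsystem metrics that tsample collects correspond to performance-metric services exposed by the ThingWorx subsystems themselves. As a rough illustration (not the tool's actual code), a platform service could read the same figures directly, assuming the GetPerformanceMetrics service available on subsystems such as ValueStreamProcessingSubsystem:

// Read the Value Stream subsystem's performance metrics from a platform service
var metrics = Subsystems["ValueStreamProcessingSubsystem"].GetPerformanceMetrics();

// Log each metric name/value pair; tsample ships similar rows to InfluxDB
for (var i = 0; i < metrics.rows.length; i++) {
    var row = metrics.rows[i];
    logger.info(row.name + " = " + row.value);
}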
View full tip
Remote Monitoring of Assets Benchmark

As @ttielebein introduced previously, one of the missions of the IOT Enterprise Deployment Center (EDC) is to publish benchmarks that showcase the ThingWorx Platform deployed to solve real-world IOT business problems.

Our goal is that these benchmarks can be used as a reference or baseline for architects working on their own implementations... showing not only a successful at-scale implementation, but also what happens when that same implementation is pushed to... or even past... its limits.

Please find the first installment attached: a reference benchmark demonstrating ThingWorx deployed to monitor 15,000 assets with a high volume of data properties per asset. Over 250 hours of simulations were conducted as part of producing this benchmark.

The IOT EDC team will be monitoring this post (as well as our other posts in the IoT Tech Tips forum) to answer any questions we can about the approaches taken in designing, deploying and simulating this implementation.

As the team publishes more benchmarks like this in the future, we also greatly value any feedback you have that can help us to improve the content for future documents.
View full tip
Since it's somewhat unclear how to set up the reset password feature through the login form, these steps might be a little more helpful. Assuming the mail extension has already been imported into the ThingWorx platform and properly configured - say, PassReset - (test with the SendMessage service to verify), let's go ahead and create a new user - Blank - and a new organization that will have that user assigned as a member - Test. Let's open the configuration tab for the organization, assign the PassReset mail thing as the mail server, assign the login image, style, and prompt (optional), and check Allow Password Reset; the rest looks like this:

Onto the Email content part: it is not possible to save the organization as-is at the moment. Clicking on the question mark for the Email content will provide the following requirements. Now this is where it might not be too clear: the tokens [[:user:]], [[:organization:]], and [[:url:]] can be used in the email body, and at runtime they will be replaced with the actual username, organization, and reset password URL. Of those fields, only the [[:url:]] token is required. So it is sufficient to place only [[:url:]] in the body and save the organization.

Then, when going to the FormLogin at <your thingworx host:port>/Thingworx/FormLogin/<organization name>, a password reset button is available. After filling out the user information in the reset field, the email gets sent to the user address specified and the proper message appears. Since in this example only the [[:url:]] token has been used in the email content, the email received will look like this:

To troubleshoot any errors that might be seen in the process of retrieving the password reset link, it's helpful to check your browser developer tools and the ThingWorx Application Log for details.
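For reference, a fuller email body using all three tokens might look like the following (remember, only [[:url:]] is required):

Hello [[:user:]],

A password reset was requested for your account in the [[:organization:]] organization.
Click the link below to choose a new password:

[[:url:]]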
View full tip
Question: What should I know about using ThingWorx with InfluxDB to store my time series data?

Hi, ThingWorx users! It’s here! Thanks for waiting patiently since my previous post announcing ThingWorx’ new support of InfluxDB as a time series persistence provider. As of our 8.4 release, you can now use InfluxDB to store your ThingWorx time series data with incredible power and ease. Want to learn more? Check out the following FAQs:

1. What is InfluxDB? Who is InfluxData?
InfluxDB is a time series database designed to handle high write and query loads. It is meant to be used as a backing store for any use case involving large amounts of timestamped data, like monitoring, application metrics, IoT sensor data, and the real-time analytics that you’d find in ThingWorx. InfluxDB is created by InfluxData, an awesome company that we are proud to call a PTC partner.

2. When would I want to use InfluxDB for IIoT?
While the ThingWorx IIoT platform supports multiple databases to persist IIoT data and is agnostic when it comes to the storage layer, InfluxDB is the ideal choice for time series. When the number of connected devices increases, along with the amount of streaming data, the need for a high-scale telemetry database is obvious. For very high-scale data ingestion, InfluxDB should be used as a persistence provider with the ThingWorx platform for multiple reasons. Its flexibility and ease of use provide native support for standard time series functions, including sampling, interpolation, time bucketing, aggregation, selector, transformation, and predictor. It does all of this while supporting high compression of data (~45x), with the ability to handle thousands of writes per second and read thousands of rows in milliseconds. Check out this article by our Enterprise Deployment Center (EDC) explaining why InfluxDB is great for small ThingWorx applications.

3. What are the three different flavors of InfluxDB?
InfluxDB Open Source (TICK Stack), InfluxDB Enterprise, and InfluxDB Cloud. Here’s more info on each:

InfluxDB Open Source (TICK Stack): This is the open-source version of the product, available to download via the InfluxData website. Also included here are the other projects that comprise the TICK Stack:
[T] Telegraf; open source collection agent
[I] InfluxDB; open source time series database
[C] Chronograf; open source visualization application
[K] Kapacitor; open source stream processing engine; sidecar to InfluxDB

InfluxDB Enterprise: This is the commercial software version of InfluxDB for high-availability clustering and the recommended time series database to be used for production with ThingWorx 8.4 and later. InfluxDB Enterprise works with the rest of the TICK stack interchangeably (Telegraf, Chronograf, Kapacitor).

InfluxDB Cloud: This is the commercial service version of InfluxDB, hosted on AWS, managed by InfluxData, and delivered as a service to customers. InfluxDB Cloud works with the rest of the TICK stack interchangeably (Telegraf, Chronograf, Kapacitor).

To learn more about the different modules of InfluxDB (Telegraf, Chronograf, Kapacitor), check out InfluxData Introduction for documentation or InfluxData Products for product info.

4. What is the difference between InfluxDB open source and enterprise?
InfluxDB Open Source is available only in a single (1) data node configuration, albeit with “n” vCPUs or “cores” provisioned on that single node.
InfluxDB Enterprise is available in multiple (2 or more) data node configurations, also with “n” vCPUs or “cores” provisioned to each node. The Enterprise edition is generally preferred for production deployments that require high availability, replication, and redundancy. Provisioned along with the data nodes are three (3) meta nodes and a load balancer to distribute the data workload across the multiple nodes. Typical configurations are in even increments of data nodes (i.e. 2, 4, 6, 8, etc.).

5. Where can I find the pricing overview for buying enterprise licenses for InfluxDB?
The PTC product and go-to-market team have defined commercial pricing for InfluxDB Enterprise. For help with pricing, reach out to Chris Wensley (cwensley@ptc.com) and Anders Hinrichsen (anders@influxdata.com).

6. How do I configure InfluxDB with ThingWorx?
We’ve outlined the steps for you in the ThingWorx Help Center and created a quick video to instruct you on how to install InfluxDB with ThingWorx. To see the current version of InfluxDB that we support, read our ThingWorx 9.0 System Requirements guide.

7. How do I configure InfluxDB and ThingWorx in a high availability scenario?
With the ability to leverage multiple data stores, we work to provide the flexibility to best meet the needs of your IT preferences and investments. InfluxDB helps us do that. To configure ThingWorx for high availability, please refer to this section of the ThingWorx Platform 9 Help Center. To configure InfluxDB for high availability at the database level, please refer to InfluxData’s documentation on how to install and deploy InfluxDB Enterprise clusters.

8. Where can I learn more about how to monitor and manage InfluxDB?
Monitoring info for InfluxDB can be found here: Monitoring Tools for TICK Stack.

9. How can I tune and optimize InfluxDB with ThingWorx?
The best approach for running InfluxDB with PTC ThingWorx 8.4 (or later) is to treat the workload and configuration just as you would in a stand-alone deployment. We suggest sticking to the recommendations in the InfluxDB and TICK stack documentation.

10. How do I perform backup and recovery of ThingWorx with InfluxDB?
Please see the ThingWorx Platform Backup and Recovery Planning Technical Brief to plan for backup and recovery. You can also find more details on taking backups and restoring data from InfluxDB in the Backing up and restoring in InfluxDB Enterprise overview.

11. Where can I learn more about sampling, interpolation, time bucketing, aggregation, pivot, and other key features of InfluxDB?
Features of InfluxDB can be found here: InfluxData Time Series Platform. Implementation of InfluxDB features can be found here: Getting Started with InfluxDB.

12. What are all the different persistence providers supported with ThingWorx? When should I use InfluxDB?
ThingWorx supports the following model and data provider storage options: H2, PostgreSQL, MS SQL Server, and AzureSQL. ThingWorx supports the following data-provider-only storage option: InfluxDB. Please refer to the model and data best practices section of the ThingWorx 9 Help Center for further information on options for storing your model and data with ThingWorx. We have also updated the ThingWorx Platform 9.0 Sizing Guide to provide relevant information to estimate the amount of processing and memory that ThingWorx may need to meet your requirements. It also provides guidance on when to use InfluxDB for your scale needs.

13. When should I use InfluxDB over DataStax Enterprise (DSE)?
Here is a good blog post that benchmarks time series data performance of InfluxDB vs. Cassandra, which is the core of DataStax Enterprise (DSE). In specific use cases, InfluxDB Enterprise may be more cost effective when compared to similar telemetry use cases with DSE.

14. How can I migrate my data from PostgreSQL to InfluxDB?
Migration from PostgreSQL or MSSQL is supported by the built-in ThingWorx data tools, which can export entities and data from PostgreSQL or MSSQL and then import them into InfluxDB. Details on how to upgrade to ThingWorx 9.0 can be found in the Upgrading ThingWorx section of the ThingWorx 9 Help Center.

15. Should I use InfluxDB as a time series store rather than OSI PI, IP21, or others?
For ThingWorx 8.4 and later, InfluxDB is the recommended time series store. This can be implemented at the edge with ThingWorx (i.e. the “front end”) using the open source edition, and can also be implemented at the hub (i.e. the “back end”) using either of the commercial editions designed for HA production workloads. As always, ThingWorx can connect to most industrial software, including OSI PI, IP21, etc., with our integration toolset.

That’s a wrap—almost! We’ve added two extra questions for you.

16. What’s on the roadmap for ThingWorx with InfluxDB?
Key development work to fully leverage built-in InfluxDB querying capabilities and support InfluxDB 2.0 in future ThingWorx releases
Leveraging query operations capabilities from InfluxDB to further improve query performance
Supporting additional native InfluxDB features (e.g. continuous queries)

17. What should I do if I need technical support with InfluxDB?
If you select InfluxDB as your persistence provider, then all support requests related to configuring InfluxDB can be logged through PTC Technical Support at https://support.ptc.com or by calling 1-800-477-6435. You may also want to use the PTC Community to learn and collaborate with the growing PTC developer community. For all other requests related to database management, troubleshooting, monitoring, and administration, we encourage you to reach out to InfluxData directly based on your enterprise purchase contract with InfluxData. PTC customers using InfluxDB can also email ptc-support@influxdata.com for support requests related to InfluxData.

If you’re as excited as I am about the ability to store your time series data with InfluxDB, let me know in the comments below!

Until next time, if you have any questions, just ask Kaya!
View full tip
Overview
The Axeda Platform is a secure and scalable foundation on which to build and deploy enterprise-grade applications for connected products, both wired and wireless. This article provides you with a detailed feature overview and helpful links to more in-depth articles and tutorials for a deeper dive into key areas.

Types of Connected Product Applications
M2M applications can span many vertical markets and industries. Virtually every aspect of business and personal life is being transformed by ubiquitous connectivity. Some examples of M2M applications include:

Vehicle Telematics and Fleet Management - Monitor and track the location, movements, status, and behavior of a vehicle or fleet of vehicles
Home Energy Monitoring - Smart energy sensors and plugs providing homeowners with remote control and cost-saving suggestions
Smart Television and Entertainment Delivery - Integrated set-top box providing in-view interaction with other devices – review your voicemails while watching a movie, chat with your Facebook friends, etc.
Family Location Awareness - Set geofences on teenagers, apply curfews, locate family members in real time, with vehicle speed and condition
Supply Chain Optimization - Combine status at key inspection points with logistics and present distribution managers with an interactive, real-time control panel
Telemedicine - Self-monitoring/testing, telecardiology, teleradiology

Why Use a Platform?
Have you ever built a Web application? If so, you probably didn't create the Web server as part of your project. Web servers or Web application servers like Apache HTTPd or JBoss manage TCP sockets, threads, the HTTP protocol, and the compilation of scripts, and include thousands of other base services. By leveraging the work done by the dedicated individuals who produced those core services, you were able to rapidly build an application that provided value to your users. The Axeda Platform is analogous to a Web server or a Web application server, but provides services and a design paradigm that make connected product development streamlined and efficient.

Anatomy of an Axeda Connected Product
Connected Products can really be anything that your product and imagination can offer, but it is helpful to pause for some common considerations that apply to most, if not all, of these types of solutions.

Getting Connected - Bring your product or equipment to the network so that it can provide information to the solution, and react to commands and configuration changes orchestrated by the application logic.
Manage and Orchestrate - Script your business logic in the cloud to tie together remote information with information from other business systems or cloud-based services, react to real-time conditions, and facilitate batch operations to synchronize, analyze, and integrate.
Present and Report - Build your user experiences, enabling people to interact with your connected product, manage workflows around business processes, or facilitate data analysis.

Let's take a look at the Axeda Platform services that help with each of these solution considerations.

Getting Connected
Wired & Wireless
Getting connected can take many shapes, depending on the environment of your product and the economics of your solution. The Axeda Platform makes no assumptions about connectivity, but instead provides features and functionality to help you connect. For wireless applications, especially those which may use cellular or satellite communications, the speed and cost of communication will be an important factor.
For this reason, Axeda has created the Adaptive Machine Messaging Protocol (AMMP), a cross-platform, efficient, HTTP-based protocol that you can use to communicate bi-directionally with the platform. The protocol is expressive, robust, secure, and, most importantly, able to be implemented on a wide range of hardware and programming environments. When you are faced with connecting legacy products that may be communicating with a proprietary messaging protocol, the Axeda Platform can be extended with Codecs to "learn" your protocol by translating your device's communication format into a form that the platform can understand. This is a great option for retrofitting existing, deployed products to get connectivity and value today, while designing your next generation of product with AMMP support built in.

Manage and Orchestrate
The Data Model defines the information and its behavior in the Axeda Platform.

Rules
Rules form the heart of a dynamic connected application. There are three types of rules that can be leveraged for your orchestration layer:

Expression rules run in the cloud, and are configured and associated with your assets through the Axeda Admin UI or SDK. These rules have an If-Then-Else structure that's easy to create and understand. They're like a formula in a spreadsheet. For example, say your asset has a dataitem reading for temperature, and you define a rule such as:

IF temperature > 80 THEN CreateAlarm("High Temp", 100)

This rule compares the temperature to 80 every time a reading is received. When the condition holds, the rule creates an alarm with name "High Temp" and severity 100. Learn more about Expression Rules.

State Machines help organize Expression Rules into manageable groups that apply to assets when the assets are in a certain state. For example, if your asset were a refrigerated truck, and you were interested in receiving an alert when the temperature within the cargo area rose above a preset threshold, you would not want this rule to be applied when your truck asset is empty and parked in the distribution center lot. In this case, you might organize your rules into a state machine called “TruckStatus”. The TruckStatus state machine would then consist of a set of states that you define, and the rules that execute when the truck is in a particular state. For example:

State “Parked”: IF doorOpen THEN …
State “In Transit”: IF temperature > 40 THEN …
State “Maintenance”: <no rules>

You can learn more about state machines in an upcoming technical article.

Scripting
Using Axeda Custom Objects, you can harness the power of the Axeda SDK to gain access to the complete set of platform data and functionality, all within a script that you customize. Custom Object scripts can be invoked in an Expression Rule to provide customized and flexible business logic for your application. Custom Object scripts are written in a powerful scripting language called Groovy, which is 100% Java-syntax compatible. Groovy also offers very modern, concise syntax options that make your code simple and easy to understand. Groovy can also implement the body of a web service method. We call this Scripto. Scripto allows you to write code and call that code by name through a REST web service. This allows a client application to call a set of customized web services that return exactly the information and format needed by the application. Here is a beginning tutorial on Scripto. This site includes many Scripto examples called by different Rich Internet Applications (RIAs).
Location-Based Services
Knowing where something is, and where it has been, opens a world of possible application features. The Axeda Platform can keep track of an asset’s current and historical location, which allows your applications to plot the current position of your assets on a map, or show a breadcrumb trail of where a particular asset has been.

Geofences are virtual perimeters around geographic locations. You can define a geofence around a physical location by describing a center point and radius, or by “drawing” a polygon around an arbitrary shape. For instance, you may have a geofence around the Boston metro that is defined as the center of town with a 10-mile radius. You may then compare an asset’s location to this geofence in a rule and trigger events to occur:

IF InNamedGeofence(“Boston”) THEN CreateAlarm(…)

You can learn more about geofences and other location-oriented rule features in an upcoming tutorial.

Integration Queue
In today’s software landscape, almost no complete solution is an island unto itself. Business systems need to interoperate by sharing data and events, so that specialized systems can do their job without tight coupling. Messaging is a robust and capable pattern for bridging the gap between systems, especially those that are hosted in the cloud. The Axeda Platform provides a message queue that can be subscribed to by external systems to trigger processes and workflows in those systems, based on events that are being tracked within the platform. A simple Expression Rule can react to a condition by placing a message in the integration queue as follows:

IF Alarm.severity > 100 THEN PublishObject()

A message is then placed in the queue describing the platform event, and another business system may subscribe to these messages and react accordingly.

Web Services
Web Services are at the heart of a cloud-based API stack, and the Axeda Platform provides both comprehensiveness and flexibility. The platform exposes Web Service operations for all platform data and configuration metadata. As a result, you can determine the status of an asset, query historical data for assets, search for assets based on their current location, or even configure expression rules and other configuration settings, all through a modern Web Service API, using standard SOAP and REST communication protocols.

Scripto
Web Service APIs simplify system integration in a loosely-coupled, secure way, and we have a commitment to offering a comprehensive collection of standard APIs into the Axeda Platform. But we can't have an API that suits every need exactly. You may want data in a particular format, such as CSV, JSON, or XML. Or some logic should be applied, and it's inefficient to query lots of data just to look for the bit you want. Wouldn’t you rather make the service on the other side do exactly what you want, and give it to you in exactly the format you need? That is Scripto – the bridge between the power and efficiency of the Axeda Custom Object scripting engine and a Web Service client. Using Scripto, you can code a script in the Groovy language, using the Axeda SDK and potentially mashing up results from other systems, all within the platform, and expose your script to an external consumer via a single, REST-based Web Service operation. You create your own set of Web Services that do exactly what you want. This powerful combination lets you simplify your Web Service client code, and gives you easy access to, and maintainability of, the scripted logic.
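To make the Scripto idea concrete, here is a minimal sketch of calling a Scripto-exposed custom object from a browser client. The endpoint pattern, script name, host, and parameter passing shown here are assumptions based on the typical Scripto REST convention, and should be verified against your platform's documentation:

// Call a hypothetical custom object named "getAssetSummary" via the Scripto REST endpoint
var xhr = new XMLHttpRequest();
var url = "https://my-axeda-host/services/v1/rest/Scripto/execute/getAssetSummary" +
          "?assetId=1234"; // query parameters become script parameters

xhr.open("GET", url, true);
// Authenticate with a session ID or credentials per your platform's configuration
xhr.setRequestHeader("Accept", "application/json");
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        // The script controls the response format; here we expect JSON
        console.log(JSON.parse(xhr.responseText));
    }
};
xhr.send();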
Present
Rich Internet Applications are a great way to build engaging, information-rich user experiences. By exposing platform data and functions via Web Services and Scripto, you can use your tool of choice for developing your front-end. In fact, if you choose a technology that doesn’t require a server-side rendering engine, such as HTML + AJAX, Adobe Flash, or Microsoft Silverlight, then you can upload your application UI files to the Axeda Platform and let the platform serve your URL!

Additional references for using RIAs on the Axeda Platform:
Axeda Sample Application: Populating A Web Page with Data Items
Extending the Axeda Platform UI - Custom Tabs and Modules

Far-Front-Ends and Other Systems
If a client-only RIA user interface is not an option for you, you can still use Web Services to integrate platform information into other server-side presentation technologies, such as a Microsoft SharePoint portal or a Java EE Web Application. You can also get lightning-fast updates for your users with Axeda Machine Streams, which use ActiveMQ JMS technology to stream data in real time from the Axeda Platform to your custom data warehouses. While your users are viewing a drill-down on a set of assets, they can receive asynchronous notifications about real-time data changes, without the need to constantly poll Web Services.

Summary
Building Connected Products and applications for them is an incredibly rewarding job. Axeda provides a set of core services and a framework for bringing your application to market.
View full tip
I have created a mashup which allows you to easily use and test the Prescriptions functionality in ThingWorx Analytics (TWA). This is where you choose one or more fields for optimization, and TWA tells you how to adjust those fields to get an optimal outcome. The functionality is based on a public sample dataset for concrete mixtures; full details are included in the attached documentation.
View full tip
Not as simple a question as it sounds. There are more options than some might think, and choosing the right one can be the difference between a well-performing application and one that struggles as it scales up in size. There are options both internal and external to the ThingWorx platform that can be used. Each has its own use cases and cost considerations.

Internal to ThingWorx there are three options for the storage provider: PostgreSQL, Microsoft SQL Server (Azure SQL for PTC-hosted systems), and InfluxDB. PostgreSQL can be used for storing the ThingWorx model structure and data, and is an open source technology, meaning no additional cost. SQL Server allows the same model and data storage but has licensing costs associated. Both perform well up to an estimated 500 GB of data storage (this is a rough estimate, dependent on use case). For very high-volume data, InfluxDB is the choice; it performs well for large data sets.

External to ThingWorx, you can use virtually any data storage technology that provides a JDBC connector, or even one that has a driver that can be used to create a ThingWorx Extension via our SDK or edge SDKs. The platform knows how to use JDBC drivers, so this can easily be used to connect to relational data storage like Oracle.

The first real question to ask when making the choice of where to store data is: what does my data look like? Many systems are adapted or migrated from legacy systems, which may include relational data; others simply have this structure by necessity. If the data will need complex SQL to retrieve (joins, LIKE clauses, cursors, temp tables, etc.), then store the data in a true relational database. If it is simple historical data, time series data, or data that does not require compounding or recursive calculation to be useful, then keep it in platform data storage.

The second question to ask is: how much data will I be storing? This adds a bit of complexity to where data is best stored. There is no limit to the number of records in any data structure; however, ThingWorx Platform storage is optimized to store and retrieve time series data, using the ValueStream and Stream types built into the Platform. This is the most common IoT data structure, and in this case you can refer back to the previous information when choosing the correct backend storage.

DataTables can be used for small data sets (around 100,000 records or less); you can use Platform storage for these, as they are intended for largely static data structures. Retrieving data from DataTables that grow larger than this will quickly begin to slow performance. This is because ThingWorx currently does a full scan of the data in this specific type of structure when querying, since all of the logic for the query or filter is done on the platform, not in the database (this will likely change in a future version). So small amounts of data can be quickly loaded and parsed in memory. NOTE (Neo4j specific): In DataTables, if you add an index to a column, these indexes are used when calling "FindDataTableEntries" but not when using "QueryDataTableEntries".

Streams and ValueStreams, however, are optimized for time series data. In these structures, ThingWorx has built-in datetime filters that allow for very fast retrieval of data based on a date range. When the number of records returned after the date range is applied is still very large (100,000 - 200,000), you may see a drop in query performance at that point.
Just as before, all records remaining after the date filter is applied are returned to the Platform, and further querying and filtering are done in memory.

The querying/retrieval of data is commonly where the greatest performance issues are seen. Using a JDBC connector to send the query to the database (even if it is PostgreSQL, SQL Server, or InfluxDB) can help; or, if the historical data is not queried regularly, you can move this data to a separate ThingWorx data store (another DataTable or Stream).

That would leave only large data sets of non-time series data as the outlier. This scenario could perform equally well (or poorly) depending primarily on how the data will be retrieved. If there are loose relationships between the data that need to be used, then a relational system that allows these to be executed on the database server is preferred. Sequential data that does not need this type of processing could be stored in InfluxDB.

This is a basic outline of considerations when designing data storage for your application. Most use cases are unique and may have additional considerations around process and cost.
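As an illustration of the date filtering that Platform storage is optimized for, a service can pull a bounded window of logged data before doing any in-memory work. Below is a minimal sketch, assuming a Thing with a logged property; the Thing name is hypothetical, and the exact parameter names of QueryPropertyHistory should be checked against your ThingWorx version:

// Query only the last hour of history; the date range is applied at the storage layer
var end = new Date();
var start = new Date(end.getTime() - 60 * 60 * 1000);

var history = Things["MyAssetThing"].QueryPropertyHistory({
    startDate: start,
    endDate: end,
    maxItems: 10000,
    oldestFirst: false
});

// Any filtering beyond the date range happens in memory on the Platform
logger.info("Rows returned: " + history.rows.length);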
View full tip
Here is a spreadsheet that I created which helps to estimate data transfer volumes, for the purpose of estimating egress costs when transferring data out of region. You will find a number of input parameters, like numbers of assets, properties, file sizes, and compression ratio, as well as a page with the cost elements, which can be updated from the Interweb.
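As a back-of-the-envelope illustration of the kind of arithmetic such an estimate involves (all numbers below are hypothetical placeholders, not values from the spreadsheet):

// Rough monthly egress estimate: assets * properties * samples * bytes, compressed
var assets = 10000;                  // hypothetical fleet size
var propertiesPerAsset = 20;         // hypothetical properties reported per asset
var bytesPerValue = 50;              // hypothetical payload bytes per property value
var samplesPerDay = 24 * 60;         // hypothetical one sample per minute
var compressionRatio = 0.5;          // hypothetical 2:1 compression on the wire

var bytesPerMonth = assets * propertiesPerAsset * bytesPerValue *
                    samplesPerDay * 30 * compressionRatio;
var gbPerMonth = bytesPerMonth / Math.pow(1024, 3);

// Multiply by your cloud provider's per-GB egress rate for a cost estimate
console.log("Estimated egress: " + gbPerMonth.toFixed(1) + " GB/month");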
View full tip
Hello ThingWorx'ers!   First off, we’re pleased to see the excitement around ThingWorx 10.0! Your feedback on features like the Debugger, Caching, and the Windchill Navigate View Work Instructions app has been fantastic. I’m especially amazed by the rapid adoption of IoT Streams over the past two months. I'm glad it’s delivering value!   Join us for an exclusive ThingWorx 10.0 Launch Webinar on September 9, 2025, at 11:00 AM EDT. We’ll dive into the latest features, including real-time IoT Streams, enhanced scalability, stronger security, and exciting AI updates that will power your Industrial IoT journey. Register now: ThingWorx 10.0 Webinar   After the webinar, we’d love to hear your thoughts! Share what you’re loving about 10.0 and any feedback you may have.     Keep pushing the boundaries of innovation!   Cheers, Ayush Director Product Management, ThingWorx        
View full tip
Setting up the ThingWorx Server RemoteThing, ApplicationKey, and TunnelSubsystem
Tunneling from the ThingWorx platform to an Edge Device can be easily done with a few preparation steps on the platform side:

Create an ApplicationKey entity on the ThingWorx server so that the EMS or SDK you are using can authenticate with the platform.
Create a RemoteThingWithTunnels or RemoteThingWithTunnelsAndFileTransfer Thing for the remote device to bind to. Either ThingTemplate will work; the only difference is whether you want to use the native file transfer capabilities provided by ThingWorx.
In the newly created Thing, on the General Information page, click on the drop-down menu next to Enable Tunneling and select Override - Enabled.
Go to the Configuration section under Entity Information on the right and click on the Add My Tunnel button.
  The Tunnel Name is used to identify which tunnel to use in the RemoteAccessWidget you will bind to the tunnel.
  The Host will remain 127.0.0.1 because this is from the perspective of where the VNC server is relative to the remote device; in my example they are on the same device.
  The Port value should be the port that the server is listening on. This is typically 5900, but my VNC server is running on port 5901 for this example.
  The App URI can be cleared out because we do not need to reference that file. Here is a link to a further explanation of what the App URI is for: ThingWorx Tunneling App URI's
  The # of Connections and Protocol can remain their default values unless you have a reason to change them.
Navigate back to Home and look for the TunnelSubsystem under the Subsystems page. Click on the TunnelSubsystem, then click on the Configuration option on the left. Modify the "Public host name used for tunnels" field and the "Public port used for tunnels" field to the host and port of your ThingWorx server. Save and close the TunnelSubsystem.

Configuring the Edge Device
For this example I'm going to keep it simple and set up an EMS (Edge MicroServer) instead of an SDK. This EMS will be on a totally separate device (an Ubuntu machine), while my ThingWorx server is on my local machine.

Download the latest EMS onto a separate machine.
Configure the config.json file settings to match the server's host, port, and application key. The tunnel block will need to be added as well; a sketch of a config.json is shown after this section.
Configure the config.lua file to match the name of the RemoteThingWithTunnels we created earlier; in this instance the name of my RemoteThing is EdgeThing.
Run the EMS and LSR (Lua Script Resource). The LSR EdgeThing will bind automatically to the RemoteThingWithTunnels we created earlier.
To verify there is a successful connection between the platform and the EMS, go to the EdgeThing's Properties page and check whether the isConnected property is currently set to true. If it's not, please refer to this Help Center section for further troubleshooting; there is a list of error codes there.

Installing a VNC Viewer and Server
The next series of steps covers configuring a VNC server on the EMS machine and a VNC client on the computer you are using to connect to the server. For this example I will be using the packages tightvncserver, xfce4, xfce4-goodies, and vnc4server on my Ubuntu machine that hosts the EMS, and I will be using the tightvnc viewer available for download here.
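For reference, here is a minimal sketch of the config.json described above. The ws_servers, appKey, and certificates groups follow the standard EMS layout, but the exact keys accepted inside the tunnel block vary by EMS release, so the ones shown here are assumptions to be checked against the EMS documentation for your version:

{
    "ws_servers": [ { "host": "localhost", "port": 8080 } ],
    "appKey": "your-application-key-here",
    "certificates": { "validate": false },
    "tunnel": {
        "buffer_size": 8192,
        "idle_timeout": 30000
    }
}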
Installing a VNC Viewer and Server

The next series of steps covers configuring a VNC server on the EMS machine and a VNC client on the computer you are using to connect to the server. For this example I will be using the packages tightvncserver, xfce4, xfce4-goodies, and vnc4server on the Ubuntu machine that hosts the EMS, and the TightVNC viewer, available for download here.

The following steps describe how to configure the Ubuntu machine so that it is ready to accept VNC requests. Note that I am specifically using a 64-bit Ubuntu 14.04 LTS OS.

1. Run the following commands:
   sudo apt-get update
   sudo apt-get install xfce4 xfce4-goodies tightvncserver
2. Run vncserver and you will be prompted to set up a password. I used "password" to keep it simple, but you will want to use something relatively secure.
3. Kill this instance right away so we can proceed with further configuration:
   vncserver -kill :1
4. Make a backup of the xstartup file in case things go awry:
   mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
5. Create a new xstartup file to proceed with the setup:
   nano ~/.vnc/xstartup
6. Insert the following commands into the file; they will be executed every time the server starts or is restarted:
   #!/bin/bash
   xrdb $HOME/.Xresources
   startxfce4 &
   The first command tells the VNC GUI framework to reference the .Xresources file, which is where a user can change VNC settings. The second command launches XFCE, the graphical software.
7. Ensure that the xstartup file has executable privileges:
   sudo chmod +x ~/.vnc/xstartup
8. Start the server back up with vncserver.

For the machine that will be used to view the Mashup, install the TightVNC viewer from the link mentioned above. Double-click the tightvnc-jviewer.jar file now to run the viewer application so it is up and ready for the Establishing a Tunnel section.

Creating the RemoteAccess Mashup

This next portion of the tutorial covers creating the Mashup that will be used by any user who wants to remote into the edge device.

1. Go to Composer Home and open the Mashup menu option on the left side of the screen.
2. Add a new Static or Dynamic Mashup.
3. Drag-and-drop a RemoteAccessWidget onto the Mashup.
4. Click on the RemoteAccessWidget and modify the RemoteThingName, TunnelName, and AcceptSelfSignedCertificates properties for the connection.
   - The RemoteThingName is the name of the edge Thing the remote device is bound to.
   - The TunnelName is the name of the tunnel we added to the edge Thing in the Configuration screen.
   - The AcceptSelfSignedCertificates property is only used when using an SSL connection with self-signed certificates.
5. View the Mashup; the RemoteAccess Widget should have a green plus sign on it if the connection from the EMS to the platform is up.

Establishing a Tunnel

The following section is the last part of the process, where we actually establish a tunnel between the client, platform, and remote device.

1. Open the Mashup with the RemoteAccess Widget if you closed it.
2. Click on the RemoteAccess Widget to begin the wsadapter.jnlp download.
3. Once that has completed, click on the wsadapter.jnlp file to run it. Keep in mind that there is a default 90-second timeout defined in the TunnelSubsystem; if the connection is not established within that timeframe the wsadapter.jnlp file becomes unusable and you will have to download a new one.
4. If you receive an error message saying the thingworx-tunnel-launcher.jar was unable to be found at that address, you may need to revisit the TunnelSubsystem configuration options for your server.
5. If you then receive a security error message, you will need to modify your security settings in your Java options.
This is done by opening Configure Java, navigating to the Security tab, and then adding your ThingWorx server's IP and port to the site list via the Edit Site List... button.

Upon successfully finding the thingworx-tunnel-launcher.jar file you will receive a Security Warning message; click the Run button and check the "I accept the risk and want to run this application" checkbox.

A pop-up will then appear letting you know the tunnel is now open for TightVNC to connect through. Do not click OK; instead, proceed to the next step. Clicking OK will close the tunnel if you have not yet connected to the EMS via the VNC viewer.

Open the tightvnc-jviewer.jar and type in the corresponding host and port that a VNC connection should be established to: localhost and port 16345 are used because we have already established a connection to the EMS and it is listening for a VNC connection on port 16345 -- per the ThingWorx pop-up we just saw.

Click Connect and a new window should appear showing the GUI environment of your Ubuntu server.
View full tip
This is just a quick reference on how to install pgAdmin 3 if the automatic install with yum (sudo yum install pgadmin3) fails. Two routes are available:

1. Try running:
   yum list pgadmin*
   If the package shows up in the output, it is available in the listed repository; you just need to add that repository:
   rpm -Uvh http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-redhat94-9.4-3.noarch.rpm
   After that, try:
   sudo yum install pgadmin3_94
   (insert your actual version)

2. If you would like a different version (or the latest one) of pgAdmin, you can grab the .tar.gz file from https://www.postgresql.org/ftp/pgadmin3/release/ and install it manually. For example, for version 1.22.2 (the version is just for demoing purposes -- I grabbed the top one available in the list):
   mv pgadmin3-1.22.2.tar.gz /usr/local/src
   cd /usr/local/src
   tar -zxvf pgadmin3-1.22.2.tar.gz
   cd pgadmin3-1.22.2
   ./configure
   make
   make install

Then you need to configure your server to allow remote user access of the database using pgAdmin. Two config files need to be modified:

First, open the postgresql.conf configuration file and search for the phrase 'listen_addresses'. In order to open access up to all IP addresses, change the value to '*'. The default is 'localhost', which does not allow connections from remote computers.

Next, open the pg_hba.conf configuration file and scroll down to the section marked "# IPv4 local connections". Add the following code on its own line, just underneath the 127.0.0.1/32 line necessary for 'localhost':

host    all    all    youripaddress/32    trust

This will allow local access of the database server to the computer with the IP address you specified. To add additional remote computers, simply add a new line with their appropriate IP address.

Now that your configuration is complete, restart the PostgreSQL database server for the changes to take effect:
su -
su postgres
pg_ctl restart -D /usr/local/pgsql/data
View full tip
Step 11: Build Extension

You can use either Gradle or Ant to build your ThingWorx Extension project. Ant is the preferred method.

Build Extension with Gradle

1. Right-click on your project -> Gradle (STS) -> Tasks Quick Launcher.
   NOTE: This opens up a new window.
2. Set Project from the drop-down menu to your project name, enter build in the Tasks field, and press Enter.
   NOTE: This will build the project and any errors will be indicated in the console window. The console window will display BUILD SUCCESSFUL, which means that your extension was created and stored as a zip file in the your_project -> build -> distributions folder.

Build Extension with Ant

1. Go to the Package Explorer -> your_project. Right-click on build-extension.xml -> Run As -> Ant Build.
2. Your console output will indicate BUILD SUCCESSFUL.
   NOTE: This will build your project and create the extension zip in the your_project -> build -> distributions folder of your project.

Step 12: Import Extension

If you have valuable data on your ThingWorx server, save the current state before importing an untested extension by duplicating and renaming the ThingworxStorage directory. This will save all current entities, and a new, empty ThingworxStorage directory will be generated when Tomcat is restarted. To restore your saved state, rename the duplicate directory back to ThingworxStorage. Alternatively, if you do not back up your storage, make sure that any entities you want to save are exported into XML format. This way you will be able to restore your ThingWorx server to its initial state by deleting the storage directory before importing the saved entities.

Import Extension

1. In the lower left corner, click Import/Export, then select Import.
   NOTE: The build produces a zip file in the ProjectName -> build -> distributions folder. This zip file is required for importing the extension.
2. For the Import Option, select Extension.
3. Click Browse and choose the zip file in the distributions folder (located in the Eclipse project's build directory).
4. Click Import.

Create a Thing

1. Create a Thing using the ThingWorx Composer with the Thing Template set to the WeatherThingTemplate.
2. Open the ConfigurationTable tab and add the appid from the OpenWeatherMap.org site (a sketch for reading this value programmatically follows below).
3. Open the WeatherAppMashup Mashup by searching for WeatherAppMashup in the Search bar.
4. Click View Mashup in the WeatherAppMashup Mashup window. Type the name of a city (e.g., Boston) and click go.

NOTE: You can now see the current temperature reading and weather description of your city in the Mashup.
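If you prefer to verify the configuration from a service rather than the Mashup, the value stored in the Thing's configuration table can be read with GetConfigurationTable. A minimal sketch -- the Thing name "MyWeatherThing" and the table name "ConfigTableName" are hypothetical placeholders; use the names defined by your extension:

    // Sketch: read a configuration value from a Thing's configuration table.
    // "MyWeatherThing" and "ConfigTableName" are placeholders - check your
    // extension's metadata.xml for the actual configuration table name.
    var configTable = Things["MyWeatherThing"].GetConfigurationTable({ tableName: "ConfigTableName" });
    var appid = configTable.rows[0].appid;
    logger.info("Configured OpenWeatherMap appid: " + appid);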
Troubleshooting

If your import did not get through with the two green checks, you may need to modify your metadata.xml or Java code, depending on the error shown in the logs.

Issue: JAR conflict arises between two similar JARs
Solution: JAR conflicts arise when a similar JAR is already present in the Composer database. Try removing the respective JAR resources from the metadata.xml and add these JARs explicitly to the twx-lib folder inside the project folder in the workspace directory. Then build the project and import the extension into ThingWorx Composer once again.

Issue: JAR is missing
Solution: Add the respective JAR resource to metadata.xml using ThingWorx -> New Jar Resource. Then build the project and import the extension into ThingWorx Composer once again.

Issue: Minimum ThingWorx Version [7.2.1] requirements are not met because current version is: 7.1.3
Solution: The version of the SDK you used to build your extension is higher than the version of the ThingWorx Composer you are testing against. You can manually edit the configfiles -> metadata.xml file to change the minimum ThingWorx version to your ThingWorx Composer version.

Step 13: Next Steps

Congratulations! You've successfully completed the Create an Extension tutorial, and learned how to:

- Install the Eclipse Plugin and Extension SDK
- Create and configure an Extension project
- Create Services, Events and Subscriptions
- Add Composer entities
- Build and import an Extension

Learn More

We recommend the following resources to continue your learning experience:

Capability: Build
Guide: Application Development Tips & Tricks

Additional Resources

If you have questions, issues, or need additional information, refer to:

Community: Developer Community Forum
Support: Extension Development Guide
View full tip
Connect a Raspberry Pi to ThingWorx using the Edge MicroServer (EMS).

Guide Concept

This project will introduce you to the Edge MicroServer (EMS) and how to connect your Raspberry Pi device to the ThingWorx server.

Following the steps in this guide, you will be able to connect to the ThingWorx platform with your Raspberry Pi. The coding will be simple and the steps very straightforward.

We will teach you how to utilize the EMS for your edge device needs. The EMS comes with the Lua Script Resource, which serves as an optional process manager, enabling you to create Properties, Services, Events, and Subscriptions for a remote device on the ThingWorx platform.

You'll learn how to:

- Set up a Raspberry Pi
- Install, configure and launch the EMS
- Connect a remote device to ThingWorx

NOTE: This guide's content aligns with ThingWorx 9.3. The estimated time to complete ALL parts of this guide is 30 minutes.

Step 1: Setup Raspberry Pi

1. Follow the setup instructions to get your Raspberry Pi up and running with the Raspberry Pi OS operating system.
2. Ensure that your Pi has a valid Ethernet or WiFi connection.
3. If your Pi is connected to a monitor/keyboard, run ifconfig from the Command Line Interface (CLI) to determine the IP address. If you are connecting remotely, probe your local network to find your Pi using one of these methods to determine the IP address.
4. Log into your Raspberry Pi using the userid/password combination pi/raspberry.

Step 2: Install the EMS

1. Download the MED-61060-CD-054_SP10_Microserver-Linux-arm-hwfpu-openssl-5-4-10-1509.zip attached here directly to the Raspberry Pi, or transfer it using an SFTP application such as WinSCP.
2. After downloading the EMS zip file, unzip the archive in a suitable location on the Pi using the command below (use the Tab key to automatically complete file names):
   unzip MED-61060-CD-054_SP10_Microserver-Linux-arm-hwfpu-openssl-5-4-10-1509.zip
3. After unzipping the distribution, a sub-directory named /microserver will be created inside the parent directory. Verify that the microserver directory was created with the command:
   ls -l
4. Switch into the microserver directory with the command:
   cd microserver

The microserver directory includes the following files:

File Name: wsems
Description: An executable file that runs the Edge MicroServer

File Name: luaScriptResource
Description: The Lua utility that is used to run Lua scripts, configure remote things, and integrate with the host system

Step 3: Create Application Key

In this step, you will use the ThingWorx Composer to generate an Application Key. The Application Key will be used to identify the Edge Agent. The Application Key is tied to a User and has the same entitlements on the server.

Using the Application Key for the default User (Administrator) is not recommended. If administrative access is absolutely necessary, create a User and place the user as a member of the SecurityAdministrators and Administrators User Groups.

1. Create the User the Application Key will be assigned to.
2. On the Home screen of Composer, click + New.
3. In the dropdown list, click Application Key.
4. Give your Application Key a name (e.g., MyAppKey).
5. Set the User Name Reference to the User you created.
6. Update the Expiration Date field; otherwise it will default to 1 day.
7. Click Save.

Step 4: Configure the EMS

The EMS consists of two distinct components that do slightly different things and communicate with each other. The first is the EMS itself, which creates an AlwaysOn™ connection to the ThingWorx server.
It binds Things to the platform and automatically provides features like file transfer and tunneling.

The second component is the Lua Script Resource (LSR). It provides a scripting environment so that you can add Properties, Services, and Events to the Things that you create in the EMS. The LSR communicates with your sensors or devices. The LSR can be installed on the same device as the EMS or on a separate device; for example, one LSR can act as a gateway and send data from several different things to a single EMS.

1. Open a terminal emulator for the Raspberry Pi.
2. Change directory to microserver/etc:
   cd microserver/etc
3. Create a config.json file. The EMS comes with two sample config files that can be used as a reference for creating your config.json file: config.json.minimal provides the minimum, basic options for getting started, while config.json.complete provides all of the configuration options. Create the config.json file in the etc folder:
   sudo nano config.json
4. Edit the config.json file:
   - ws_servers - host and port address of the server hosting the ThingWorx platform. If you are using a Developer Portal hosted server, your server hostname is listed on the dashboard.
     {"host":"<TwX Server IP>", "port":443}
   - http_server - host and port address of the machine running the LSR. In this case it will be your localhost running on the Raspberry Pi.
     {"host":"127.0.0.1","port":8080, "use_default_certificate": true,"ssl": false, "authenticate": false}
   - appKey - the Application Key generated from the ThingWorx server. Use the keyId generated in the previous step, "Create Application Key".
     "appKey":"<insert keyId>"
   - logger - sets the logging level; set it to DEBUG for more verbose output when troubleshooting.
     {"level":"INFO"}
   - certificates - for establishing a secure WebSocket connection between the ThingWorx server and the EMS. A valid certificate should be used in a production environment, but for debugging purposes you can turn off validation and allow self-signed certificates.
     {"validate":false, "disable_hostname_validation": true}
   NOTE: To ensure a secure connection, use valid certificates, encryption, and the HTTPS protocol (port 443) for establishing a WebSocket connection between the EMS and the ThingWorx platform.
5. Exit and save:
   Ctrl+X

Sample config.json File

Replace host and appKey with values from your hosted server.

{
    "ws_servers": [{
        "host": "pp-2007011431nt.devportal.ptc.io",
        "port": 443
    }],
    "appkey": "2d4e9440-3e51-452f-a057-b55d45289264",
    "http_server": {
        "host": "127.0.0.1",
        "port": 8080,
        "use_default_certificate": true,
        "ssl": false,
        "authenticate": false
    },
    "logger": {
        "level": "INFO"
    },
    "certificates": {
        "validate": false,
        "disable_hostname_validation": true
    }
}

Click here to view Part 2 of this guide.
View full tip
Want to do a REST call from ThingWorx? Want to use REST to send a request to an external system? Want to get data from another system using REST? Here is how you can do this.

ThingWorx has the ContentLoaderFunctions API, which provides services to load or post content to and from other web applications. You can issue an HTTP request using any of the allowed actions (GET, POST, PUT, DELETE).

List of available ContentLoaderFunctions:

Delete
GetCookies
GetJSON
GetText
GetXML
LoadBinary
LoadImage
LoadJSON
LoadMediaEntity
LoadText
LoadXML
PostBinary
PostImage
PostJSON
PostMultipart
PostText
PostXML
PutBinary
PutJSON
PutText
PutXML

Example: Using the LoadXML snippet in a custom service to retrieve an XML document from a specific URL

Insert the LoadXML snippet into a custom service:

var params = {
    proxyScheme: undefined /* STRING */,
    headers: "{ 'header1':'value1','header2':'value2'}" /* JSON */,
    ignoreSSLErrors: false /* BOOLEAN */,
    useProxy: undefined /* BOOLEAN */,
    proxyHost: undefined /* STRING */,
    url: "http://some_url/sampleXMLDocument.xml" /* STRING */,
    timeout: 30000 /* NUMBER */,
    proxyPort: undefined /* INTEGER */,
    password: "fakePassword" /* STRING */,
    username: "Administrator" /* STRING */
};
var result = Resources["ContentLoaderFunctions"].LoadXML(params);

- The snippet above shows how to format any headers (as JSON) that need to be passed in, the URL that points directly to some XML document, a password, a username, a timeout, and ignoreSSLErrors set to false.
- When LoadXML is executed it will retrieve the XML document, which can then be parsed or handled however is necessary.
- To see the XML document that is returned from this service, the service can be called from a third-party client, such as Postman.
- Note: If a proxy or a username and password are required to connect to the URL, those parameters MUST be specified.

Example: Using the PostXML snippet in a custom service to send content to another URL -- in this example, another service in Composer

Insert the PostXML snippet into a custom service:

var content = "<xml><tag1>NAME</tag1><tag2>AGE</tag2></xml>";
var params = {
    url: "http://localhost/Thingworx/Things/thingName/Services/serviceName?postParameter=parameterName" /* STRING */,
    content: content /* STRING */,
    password: "admin" /* STRING */,
    username: "Administrator" /* STRING */
};
var result = Resources["ContentLoaderFunctions"].PostXML(params);

- When posting XML content to another ThingWorx service, the postParameter query parameter must be included in the url parameter for the PostXML snippet.
- The postParameter query parameter in the url is set equal to the name of the input parameter of the service we are POSTing to. Change the parameterName value in the url to the name of the input parameter defined for that service.
- The content parameter is set to the XML content that will be passed into the function or manually specified.
- Note: When declaring namespace URLs in an element, make sure that there is whitespace between each declaration, e.g.: <root xmlns:h="http://www.w3.org/TR/html4/" xmlns:f="http://www.w3schools.com/furniture">
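The JSON-based services follow the same pattern. As a quick illustration, here is a minimal sketch using LoadJSON; the endpoint URL and the "status" field read at the end are hypothetical placeholders:

var params = {
    url: "http://some_url/api/status" /* STRING - hypothetical endpoint */,
    timeout: 30000 /* NUMBER */,
    username: "Administrator" /* STRING */,
    password: "fakePassword" /* STRING */
};
// result holds the parsed JSON response; "status" is a placeholder field name
var result = Resources["ContentLoaderFunctions"].LoadJSON(params);
var status = result.status;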
View full tip
In this blog I will be testing the SAPODataConnector using the SAP Gateway - Demo Consumption System.

Overview

The SAPODataConnector enables the connection to the SAP NetWeaver Gateway through the OData specification. It is a specialized implementation of the ODataConnector. See Integration Connectors for documentation.

It relies on three components:

- Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses WebSocket to communicate with the ThingWorx platform (similar to the EMS).
- Integration Subsystem: available by default since 7.4 (no extension needed).
- Integration Connector: SAPODataConnector, available by default in 8.0 (no extension needed).

ThingWorx can use OAuth to access SAP, but in this blog I will just use basic authentication.

SAP NetWeaver Gateway Demo system registration

1. Create an account on the Gateway Demo system (the credentials to be used on the connector are sent by email).
2. Verify that the account has access to the basic OData sample service: https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/

Integration Runtime microservice setup

1. Follow WindchillSwaggerConnector hands-on (7.4) - Integration Runtime microservice setup.
   Note: Only one Integration Runtime instance is required for all your Integration Connectors (multiple instances are supported for high availability and scale).

SAPODataConnector setup

Use the New Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy Composer).

1. Create a DataShape that is used to map the attributes being retrieved from SAP.
   SAPObjectDS: Id (STRING), Name (STRING), Price (NUMBER)
2. Create a Thing named TestSAPConnector that uses SAPODataConnector as its Thing Template.
3. Set up the SAP NetWeaver Gateway connection under TestSAPConnector > Configuration:
   - Generic Connector Connection Settings: Authentication Type = fixed
   - HTTP Connector Connection Settings:
     Username = <SAP Gateway user>
     Password = <SAP Gateway pwd>
     Base URL: https://sapes4.sapdevcenter.com/sap
     Relative URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/
     Connection URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/$metadata
4. Create the API maps and service under TestSAPConnector > API Maps (New Composer only):
   - Mapping ID: sap
   - EndPoint: getProductSet
   - Select DataShape: SAPObjectDS (created at step 1) and map the following attributes:
     Name <- Name
     Id <- ProductID
     Price <- Price
   - Pick "Create a Service from this mapping"

Testing our Connector

Test the TestSAPConnector::getProductSet service (keep all the input parameters blank).
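Beyond the Composer test dialog, the generated service can also be invoked from any other ThingWorx service. A minimal sketch, assuming the TestSAPConnector Thing and the "sap" API mapping created above (the result is an InfoTable shaped by SAPObjectDS):

    // Sketch: call the service generated from the API mapping and log each product.
    var products = Things["TestSAPConnector"].getProductSet({}); // input parameters left blank

    for (var i = 0; i < products.rows.length; i++) {
        var row = products.rows[i];
        logger.info(row.Id + " - " + row.Name + " (price: " + row.Price + ")");
    }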
View full tip
Below I will discuss a simple implementation of constructing a POST request in Java. I have embedded the entire source at the bottom of this post for easy copy and paste.

To start, you will want to define the URL you are trying to POST to:

String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";

Breaking down this URL string:

- http:// -- a non-SSL connection is being used in this example
- 127.0.0.1:80 -- the address and port that ThingWorx is hosted on
- /Thingworx -- this bit is necessary because we are talking to ThingWorx
- /Things -- Things is used as an example here because the service I am posting to is on a Thing. Some alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems.
- /Thing_Name -- substitute in the name of your Thing where the service is located
- /Services -- we are calling a service on the Thing, so this is how you drill down to it
- /Service_to_Post_to -- substitute in the name of the service you are trying to invoke

Create a URL object:

URL obj = new URL(url);

Class URL, included via the java.net.URL import, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Adding the port is optional; if it is omitted, port 80 will be used by default.

Define an HttpURLConnection object to later open a single connection to the URL specified:

HttpURLConnection con = (HttpURLConnection) obj.openConnection();

Class HttpURLConnection, included via the java.net.HttpURLConnection import, provides a single instance to connect to the URL specified. The openConnection method is called to create a new instance of a connection; there is no connection actually made at this point.

Set the type of request and the header values to pass:

con.setRequestMethod("POST");
con.setRequestProperty("Accept", "application/json");
con.setRequestProperty("Content-Type", "application/json");
con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

You can see that we are performing a POST request, passing in an Accept header, a Content-Type header, and a ThingWorx-specific appKey header.

Pass true into the setDoOutput method because we are performing a POST request; when sending a PUT request we would pass in true as well. When there is no request body being sent, false can be passed in to denote there is no "output", as with a GET request.

con.setDoOutput(true);

Create a DataOutputStream object that wraps the con object's output stream. We call the flush method on the DataOutputStream object to push the REST request from the stream to the URL defined for POSTing, then immediately close the DataOutputStream object because we are done making the request.

DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.flush();
wr.close();

The DataOutputStream class lets the Java SDK write primitive Java data types to the con object's output stream.

The next line returns the HTTP status code from the request. This will be something like 200 for success or 401 for unauthorized.

int responseCode = con.getResponseCode();

The final block of this code uses a BufferedReader that wraps an InputStreamReader that wraps the con object's input stream (the byte response from the server). This BufferedReader object is then used to iterate through each line in the response and append it to a StringBuilder object. Once that has completed we close the BufferedReader object and print the response we just retrieved.
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuilder response = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
    response.append(inputLine);
}
in.close();
System.out.println(response.toString());

- The InputStreamReader decodes bytes to character streams using a specified charset.
- The BufferedReader provides a more efficient way to read characters from an InputStreamReader object.
- The StringBuilder object is an unsynchronized way of creating a String representation of the content residing in the BufferedReader object. StringBuffer can be used instead in cases where multi-threaded synchronization is necessary.

Below is the block of code in its entirety from the discussion above, together with the imports it requires:

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public void sendPost() throws Exception {
    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

    // add request headers
    con.setRequestMethod("POST");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

    // send the POST request (no request body in this example)
    con.setDoOutput(true);
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();

    int responseCode = con.getResponseCode();
    System.out.println("\nSending 'POST' request to URL : " + url);
    System.out.println("Response Code : " + responseCode);

    // read the response body line by line
    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();

    // print result
    System.out.println(response.toString());
}
View full tip
Hello everyone,

Following a recent experience, I felt it was important to share my insights with you. The core of this article is to demonstrate how you can format a Flux request in ThingWorx and post it to InfluxDB, with the aim of offloading performance-heavy calculations to InfluxDB. The context is renewable energy. This article is not about Kepware, nor about connecting to InfluxDB. As a prerequisite, you may like to read this article: Using Influx to store Value Stream properties from... - PTC Community

Introduction

The following InfluxDB usage was developed for an electricity energy provider.

Technical context:
- Kepware is used as the source of data. A simulation of wind assets based on an Excel file is configured, delivering data in real time.
- A SQL database also gathers the same data as the Kepware simulation. It is used to load historical data into InfluxDB, addressing cases of temporary data loss: once back online, SQL helps record the lost data in InfluxDB and compute the KPIs.
- InfluxDB is used to store data over time as well as the calculated KPIs.
- A third-party invoicing system is simulated to provide the electricity price according to the time of day.

Orchestration of InfluxDB operations with ThingWorx (ThingWorx v9.4.4):
- Set the numeric properties to be logged
- Maintain control over the execution logic
- Format the Flux request with dynamic inputs and send it to InfluxDB

InfluxDB Cloud v2:
- Stores the logged properties
- Enables quick data reads
- Executes the calculations

Note: the free InfluxDB tier is slower for writes and reads, and offers a maximum of 30 days of data retention.

ThingWorx model and services

ThingWorx context: because the relevant numeric properties are logged over time, new KPIs are calculated based on the logged data. In the following example, each wind asset triggers a calculation every minute to get the monetary gain based on the current power produced and the current electricity price. The request is formatted in ThingWorx, then pushed to and executed in InfluxDB. Thus, ThingWorx server memory is not used for this calculation.

Services breakdown:

CalculateMonetaryKPIs
- Entry point service to calculate monetary KPIs. It uses the two following services: it triggers the FormatFlux service, then injects the result into the Post service.
- Inputs: none
- Output: NOTHING

FormatFlux_CalculateMonetaryKPI
- Formats the request for the monetary KPI calculation, respecting the Flux syntax used by InfluxDB.
- Inputs: bucketName (STRING), thingName (STRING)
- Output: TEXT

PostTextToInflux
- Generic service to post a request to InfluxDB, whatever the request is.
- Inputs: FluxQuery (TEXT), influxToken (STRING), influxUrl (STRING), influxOrgName (STRING), influxBucket (STRING), thingName (STRING)
- Output: INFOTABLE

Highlights - CalculateMonetaryKPIs

Find the full script in the attachment "CalculateMonetaryKPIs script.docx". The URL, token, organization, and bucket are configured in the Persistence Provider used by the ValueStream. We dynamically get them from the ValueStream attached to this Thing. From there, via "MyConfig", we can reuse them to set the inputs of the two other services.

Highlights - FormatFlux_CalculateMonetaryKPI

Find the full script in the attachment "FormatFlux_CalculateMonetaryKPI script.docx". The major part of this script is text, in Flux syntax, into which we inject dynamic values. The service gets the last values of ElectricityPrice, Power, and Capacity to calculate ImmediateMonetaryGain, PotentialMaxMonetaryGain, and PotentialMonetaryLoss.
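To make the injection idea concrete before the breakdown below, here is a minimal sketch (not the attached script) of how a Flux query string can be assembled in a ThingWorx service, using the bucketName and thingName inputs:

    // Minimal sketch: build a Flux query string with dynamic values injected.
    // bucketName and thingName are the service inputs; the measurement and field
    // names mirror the example tables below. This is illustrative only.
    var fluxQuery =
        'priceData = from(bucket: "' + bucketName + '")' +
        ' |> range(start: -1h)' +
        ' |> filter(fn: (r) => r._measurement == "' + thingName + '" and r._field == "ElectricityPrice")' +
        ' |> last()\n' +
        'priceData |> yield()';

    var result = fluxQuery; // the service's TEXT output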
Flux logic might not be easy for beginners, so let's break down the intermediate variables created on the fly in the Flux request. Let's take the example of the existing data in the bucket (with only two points in time):

_time | _measurement | _field | _value
2024-07-03T14:00:00Z | WindAsset1 | ElectricityPrice | 0.12
2024-07-03T14:00:00Z | WindAsset1 | Power | 100
2024-07-03T14:00:00Z | WindAsset1 | Capacity | 150
2024-07-03T15:00:00Z | WindAsset1 | ElectricityPrice | 0.15
2024-07-03T15:00:00Z | WindAsset1 | Power | 120
2024-07-03T15:00:00Z | WindAsset1 | Capacity | 160

The request articulates around the following steps:

1. Get the source values:
   - Get the last price and store it in priceData:
     _time | ElectricityPrice
     2024-07-03T15:00:00Z | 0.15
   - Get the last power and store it in powerData:
     _time | Power
     2024-07-03T15:00:00Z | 120
   - Get the last capacity and store it in capacityData:
     _time | Capacity
     2024-07-03T15:00:00Z | 160
2. Join the three *Data tables on the same time. Note that the last values of price, power and capacity may not be set at the same time, so the final joinedData may be empty:
   _time | ElectricityPrice | Power | Capacity
   2024-07-03T15:00:00Z | 0.15 | 120 | 160
3. Perform the calculations:
   - gainData stores the result of ElectricityPrice * Power:
     _time | _measurement | _field | _value
     2024-07-03T15:00:00Z | WindAsset1 | ImmediateMonetaryGain | 18
   - maxGainData stores the result of ElectricityPrice * Capacity
   - lossData stores the result of ElectricityPrice * (Capacity - Power)
4. Add the results to the original bucket.

Highlights - PostTextToInflux

Find the full script in the attachment "PostTextToInflux script.docx". This is a pretty straightforward script; the idea is to have a generic service to post a request, whatever the request is. Two details are worth noting: the header uses the unusual vnd.flux content type, and the URL needs to be formatted according to the InfluxDB API (a simplified sketch follows at the end of this article).

Well done!

Thanks to these steps, calculated values are stored in InfluxDB. Other services can be created to retrieve relevant InfluxDB data and visualize it in a mashup.

Last comment

It was the first time I was in touch with Flux script, so I wasn't comfortable, and I am still far from proficient. After spending more than a week browsing through the InfluxDB documentation and running multiple tests, I achieved limited success but nothing substantial for a final outcome. As a last resort, I turned to ChatGPT. Through a few interactions, I quickly obtained convincing results. Within a day, I had a satisfactory outcome, which I fine-tuned for relevant use.

Here are two examples of two consecutive ChatGPT prompts and answers. The output might need to be fine-tuned after the first answer.

Right after, I asked it to convert the result to a ThingWorx script format.

In this last example, the script won't work: the fluxQuery is not well formatted for ThingWorx. Please refer to the provided script "FormatFlux_CalculateMonetaryKPI script.docx" to see how to format the Flux query and insert variables inside it. Despite mistakes, ChatGPT still mainly provides relevant code structure for beginners in Flux and is an undeniable boost for writing code.
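As promised above, here is a minimal sketch of the generic post using ContentLoaderFunctions.PostText against the InfluxDB v2 query endpoint (/api/v2/query). The endpoint path and headers follow the public InfluxDB v2 API, and the variable names match the service inputs listed earlier, but this is a simplified illustration, not a copy of the attached script:

    // Minimal sketch of PostTextToInflux: post a Flux query to InfluxDB Cloud v2.
    // FluxQuery, influxToken, influxUrl and influxOrgName are the service inputs.
    var params = {
        url: influxUrl + "/api/v2/query?org=" + encodeURIComponent(influxOrgName) /* STRING */,
        content: FluxQuery /* TEXT - the formatted Flux request */,
        headers: {
            "Authorization": "Token " + influxToken,
            "Content-Type": "application/vnd.flux",
            "Accept": "application/csv"
        } /* JSON */,
        timeout: 30000 /* NUMBER */
    };

    // PostText returns the raw (CSV) response from InfluxDB; the attached script
    // additionally converts it into the service's INFOTABLE output.
    var result = Resources["ContentLoaderFunctions"].PostText(params);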
View full tip