
IoT & Connectivity Tips

ThingWorx Performance Monitoring with Grafana, authored by EDC team member Desheng Xu ( @xudesheng )

Monitoring ThingWorx performance is critically important, both during the load testing of a newly completed application and after the deployment of new code in an existing application. Monitoring performance ensures that everything works as expected at the Enterprise level. This tutorial steps you through configuring and installing a tool which runs on the same network as the ThingWorx instance. This tool collects data from the Platform and translates it into something visual and easy to understand via Grafana.

tsample is small and customizable, and it plays a similar role to telegraf. Its focus is on gathering ThingWorx performance metrics. Historically, this tool also supported collecting OS-level performance metrics, but this is no longer supported. It is highly recommended to collect OS-level performance metrics by using telegraf, a tool designed specifically for that purpose (and not discussed here). This is not the only way to go about monitoring ThingWorx performance, but this tool uses a very good approach that has been proven effective both at customer sites and internally by PTC to monitor scale tests. Find the most recent release here.

Recommended Deployment Architecture
tsample can be deployed on the same box where ThingWorx Tomcat is running, but it's recommended to deploy it on a separate box to minimize any performance impact caused by the collector. tsample supports export to InfluxDB and/or a local file. In this document, it is assumed that InfluxDB will be used for monitoring purposes. Please note that this is not the same instance of InfluxDB being used by ThingWorx (if configured). This article will not cover setting up InfluxDB or NGINX (if necessary), so please configure these before beginning this tutorial.

Supported Platforms
tsample has been tested on Windows 2016, MacOS 10.15, Ubuntu 16.04, and Redhat 7.x. It's anticipated to work on more general Ubuntu/Redhat/Mac/Windows releases as well. Please leave a comment or contact the author, @xudesheng , if Raspberry Pi support is needed.

Configuration File
Where to Store the Configuration File
tsample will pick up the configuration file in the following sequence:
from the command line...
./tsample -c <path to configuration file>
from the environment...
Linux:
export TSAMPLE_CONFIG=<path to configuration file>
./tsample
Windows:
set TSAMPLE_CONFIG=<path to configuration file>
tsample.exe
from a default location...
tsample will try to find a file with the name "config.toml" in the same folder in which it starts.

How to Craft a Configuration File
You can use the following command to generate a sample file:
./tsample -c config.toml -e
or:
./tsample -c config.toml --export
A file with the name "config.toml" will be generated with a sample configuration. You can then adjust its content in accordance with the following.

Configuration File Content Format
The configuration file must be in TOML format. A minimal sketch of the overall file layout is shown below; the individual sections are described next.
title and owner sections: Both sections are optional. The intention of these two sections is to support a documentation tool in the future.
TestMachine section: This section is required, and it defines where this tool will run.
thingworx_servers section: This section is where you define the targeted ThingWorx applications. Multiple ThingWorx servers can be defined with the same or different metrics to be collected.
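For orientation, here is a hedged sketch of the overall config.toml layout. The section names are the ones described in this article and sampling_cycle_inseconds is referenced later in the Grafana steps, but every other key name and value below is an illustrative placeholder; always start from the real sample generated with -e / --export.

# Illustrative skeleton only - generate the real sample with: ./tsample -c config.toml -e
title = "ThingWorx performance sampling"           # optional
[owner]                                            # optional
name = "EDC"
[TestMachine]                                      # required; defines where tsample itself runs
sampling_cycle_inseconds = 30                      # exact placement of this key may differ
[[thingworx_servers]]                              # one entry per targeted ThingWorx server
name = "platform-1"                                # placeholder
host = "twx.example.com"                           # placeholder
[[thingworx_servers.metrics]]                      # one entry per metric to collect
name = "ValueStreamProcessingSubsystem"
[result_export_to_db]                              # target InfluxDB sink
server_name = "http://influxdb.example.com:8086"   # placeholder
database = "thingworx_metrics"                     # placeholder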
thingworx_servers.metrics sections
Underneath each thingworx_servers section, there are several metrics. In the default example, the following metrics are included:
ValueStreamProcessingSubsystem
DataTableProcessingSubsystem
EventProcessingSubsystem
PlatformSubsystem
StreamProcessingSubsystem
WSCommunicationsSubsystem
WSExecutionProcessingSubsystem
TunnelSubsystem
AlertProcessingSubsystem
FederationSubsystem
You can add your own customized metrics, as long as the result follows the same Data Shape. The default Data Shape has 3 columns; if the output Data Shape exceeds this limit, the tool will likely not work properly.

result_export_to_db section
This section defines the target InfluxDB as a sink of the collected performance metrics.

result_export_to_file section
This section defines the target file storage for the collected performance metrics.

Grafana Configuration Example
Monitor Value Stream
Step 1. Connect Grafana to InfluxDB
Step 2. Create a New Dashboard
Step 3. Create a New Query
Depending on which metrics you defined to collect in the tsample configuration file, you will see a different choice of measurement in Grafana. Here, we will use ValueStreamProcessingSubsystem as an example.
Step 4. Choose the Right Platform and Storage Provider
Some metrics depend on the database storage provider, like Value Stream and Stream.
Step 5. Choose the Metrics Figures
Select "remove" to get rid of the default 'mean' calculation. Select "non_negative_difference" from Transformations. Using this transformation, Grafana can show us the speed of writes. Then, remove the default GROUP BY "time" clause. Assign a meaningful alias to this query.
Step 6. Add Another Query
You can add another query as 'Value Stream Queued Speed' by following the same steps.
Step 7. Assign a Panel Title
Step 8. Review the Result
Let's go back to the dashboard page and select "last 15 minutes" or "last 5 minutes" from the top right corner. It should show a result similar to the chart below.
Step 9. Save the Dashboard
Don't forget to save your dashboard before we add more panels.
Step 10. Refine the Panel
It's difficult to figure out the high-level write speed from the above panel, so let's enhance it. Add a new query with the following configuration. In the above query, there are two additional figures: 20s and 1m. How do you choose? 20s should be the same as sampling_cycle_inseconds in your tsample configuration file. If you choose a different value, then you could end up with misleading results. Larger values such as "1m" may give you a smoother result, but they could also hide system instability. Going larger than 1m is not recommended in most cases. With this new query, it's much easier to see the average write speed in the current test.
Tip: if your sampling_cycle_inseconds is 30s, then you may not need this additional query. The first image below is a sample at a 30s interval; you would not need an additional average query to get a smooth write speed. The next example is a sample at a 10s interval; without additional queries, you may not be able to get a meaningful understanding of the write speed. From the examples above, it's recommended to configure the sampling interval at 30s, or at least larger than 20s. You can then choose whether you need additional queries based on the visualization result.
Step 11. Further Refinement
The above charts illustrate the queuing and writing speed.
However, it is possible that the Value Stream may perform at a reasonable speed while the Value Stream queue keeps growing and could eventually exceed its capacity. Let's add another query to monitor this. It is difficult to read the resulting chart, since the new query has a different value range on the y-axis, so let's move this query to a second y-axis on the right; this makes the view much easier to read. The current queue size or remaining queue size will always move up and down; it is healthy as long as it does not continue to grow to a high level.

What Else Can Be Monitored?
The following metrics are the ones most customers monitor:
Value Stream write and queue speed
Value Stream queue size
Stream write and queue speed
Stream queue size
Event performed speed (completedTaskCount)
Event submitted speed (submittedTaskCount)
Event queue size
Websocket communication
Websocket connection

ThingWorx Memory Usage Monitoring
Create a new panel and add a new query. In a running system, memory usage will always move up and down - at times sharply or quickly - when the system is busy. The system is healthy as long as memory doesn't go up continuously or stay at a maximum for a long period of time.

Conclusion
Setting up monitoring is absolutely crucial to managing the performance of an enterprise ThingWorx application. Using Grafana makes tracking and visualizing the performance much easier. Stay tuned to the EDC tag for more monitoring tips to come!
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage.

This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine stream message is logged to stdout. Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project
The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites
To download, build, and compile the machine-streams-data-relay project, you will need the following:
Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide" available from PTC Support (http://support.ptc.com/).)
Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).
Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON) is available in the “Axeda v2 API/Services Developers Reference Guide.” Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the “Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints” Reference Guide (available from PTC Support (http://support.ptc.com/)). Building the Project This page provides instructions for building the Data Relay project for Linux and for Windows environments. 1. Download and uncompress the project for your environment Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz # tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz # cd machine-streams-data-relay-1.0.3 Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3 2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed. sample Config.properties files for the MachineStreamsDataRelay component For ActiveMQ broker endpoints - configAMQ.properties # The ActiveMQ broker URL. brokerURL=tcp://localhost:62000 # The ActiveMQ queue name to process messages from. # It can be a single queue: MachineStream.stream01 # Or a wildcard queue: MachineStream.> queueName=MachineStream.> # The username used to connect to the ActiveMQ queue username=axedaadmin # The password used to connect to the ActiveMQ queue password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48 # The number of ActiveMQ broker connections. numConnections=10 # The number of sessions per connection. Note that each session will create a separate thread. numSessionsPerConnection=5 # The number of concurrent threads used for processing machine streams messages. numProcessingThreads=100 # The type of message listener container. # default = single queue name per connection. # multiDestination = supports multiple queue names per connection messageListenerContainerType=default For Azure Service Bus broker endpoints - configASB.properties # The ASB broker URL. brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net # The ASB queues to process messages from. # It can be a single queue: MachineStream.stream01 # Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02 # Or a queue range defined by the following syntax: MachineStream.stream[01-20] queueName=MachineStream.stream[01-50] # The username used to connect to the ASB queue(s) username=your-azure-service-bus-username # The password used to connect to the ASB queue(s) password=the-password-for-your-azure-service-bus-username # The max number of ASB broker connections. numConnections=10 # The number of concurrent threads used for processing machine streams messages. numProcessingThreads=100 # The type of message listener container. # default = single queue name per connection. # multiDestination = supports multiple queue names per connection messageListenerContainerType=multiDestination Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names. 
The configuration details are as follows:
brokerURL: Location of the ActiveMQ or Azure Service Bus server (broker).
queueName: Name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages. It can be a single queue (MachineStream.stream01), multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02), a queue range defined by the syntax MachineStream.stream[01-50], or, for ActiveMQ, a wildcard queue name such as MachineStream.>. (If you have multiple queues and you want to use ASB, then you have to use multiDestination and use the range syntax.)
username: Username used to connect to the ActiveMQ or Azure Service Bus queue(s). For ActiveMQ: username=axedaadmin. For ASB: username=your-azure-service-bus-username.
password: Password used to connect to the ActiveMQ or Azure Service Bus queue(s).
numConnections: Number of ActiveMQ or Azure Service Bus broker connections. Default is 10 broker connections.
numSessionsPerConnection: Number of sessions per connection; note that each session will create a separate thread. APPLICABLE TO ACTIVEMQ ONLY (this key is used infrequently). Default is 5 sessions per connection.
numProcessingThreads: Number of concurrent threads used for processing machine streams messages. Default is 100 concurrent threads.
messageListenerContainerType: Type of message listener container. default = single queue name per connection; multiDestination = supports multiple queue names per connection.
3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.
For Linux: # mvn package -DskipTests
For Windows: c:\> mvn package -DskipTests
4. Enter the target directory, uncompress the *bin.tar.gz (or *bin.zip) archive, and enter the resulting directory.
For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3
For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3
5. Start the application.
For Linux: # ./machineStreamsDataRelay.sh <config properties file>, e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties
For Windows: c:\> machineStreamDataRelay.bat <config properties file>, e.g. machineStreamDataRelay.bat configOfYourChoice.properties
See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).
6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin 2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5 If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner 2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers 2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers 2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers 2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue 
consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers 2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis 2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO 
[MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers 2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis 7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.) 
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog 2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog 2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog 2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog 2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog 2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog Configuring a CustomMessageProcessor By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either an XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface: MessageProcessor.java package com.axeda.tools.streams.processor; import com.axeda.tools.streams.model.StreamedMessage; /** * This class defines the message processor method callback that will called for message processing. * Note that this methods will be called by multiple threads concurrently. */ public interface MessageProcessor { /** * Process a machine stream message. Note that this method will be called by multiple threads concurrently.  * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads(). * If you add code here that significantly slows down message processing, then there is the potential that * MessageListenerService threads will also block.  When the MessageListenerService threads block, this means that * messages will start to backup in the ActiveMQ or ASB message queues. If you are processing a large number of messages, * then you may need to adjust your configuration parameters or optimize your processMessage() code. * @param message machine streams message to process */public void processMessage(StreamedMessage message);} An additional class named CustomMessageProcessor.java has been provided so that you can provide your own custom message processing logic: CustomMessageProcessor.java package com.axeda.tools.streams.processor; import org.springframework.stereotype.Component; import com.axeda.tools.streams.model.StreamedAlarm; import com.axeda.tools.streams.model.StreamedDataItemMessage; import com.axeda.tools.streams.model.StreamedMessage; import com.axeda.tools.streams.model.StreamedMobileLocation; import com.axeda.tools.streams.model.StreamedRegistrationMessage; /** * This class was provided for customers to implement their own message processing business * logic. 
To use this class, change the @Autowired messageProcessor qualifier in * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor") */ @Component("customMessageProcessor") public class CustomMessageProcessor implements MessageProcessor { /** * (non-Javadoc) * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage) * * Process a machine stream message. Note that this method will be called by multiple threads * concurrently. The number of concurrent processing threads is defined in * MachineStreamsConfig.getNumProcessingThreads(). * If you add code here that significantly slows down message processing, then there is the * potential that MessageListenerService threads will also block. When the MessageListenerService * threads block, this means that messages will start to back up in the ActiveMQ or Azure Service Bus message queues. If you * are processing a large number of messages, then you may need to adjust your configuration parameters * or optimize your processMessage() code. */ @SuppressWarnings("unused") @Override public void processMessage(StreamedMessage message) { if (message instanceof StreamedDataItemMessage) { StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message; // add your business logic here } else if (message instanceof StreamedAlarm) { StreamedAlarm alarm = (StreamedAlarm) message; // add your business logic here } else if (message instanceof StreamedMobileLocation) { StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message; // add your business logic here } else if (message instanceof StreamedRegistrationMessage) { StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message; // add your business logic here } } } The Axeda Platform Machine Streams feature currently supports 4 different message types: StreamedDataItemMessage StreamedAlarm StreamedMobileLocation StreamedRegistrationMessage For each of these message types, you should add your message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file. Once you have completed your changes to the CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean. Uncomment this line: // @Qualifier("customMessageProcessor") Comment this line: @Qualifier("logMessageProcessor") The following code snippet shows what your changes should look like when you are finished: MessageProcessingServiceImpl.java @Component("messageProcessingService") public class MessageProcessingServiceImpl implements MessageProcessingService { private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class); @Autowired private MessageDecoder messageDecoder; @Autowired // If you want to use the CustomMessageProcessor instead of the default LogMessageProcessor then change this Qualifier to @Qualifier("customMessageProcessor") //@Qualifier("logMessageProcessor") private MessageProcessor messageProcessor; private ExecutorService executorService;
Complete information about installing and using the Axeda Machine Streams Data Relay Project is available here. This page provides the files for Axeda Machine Streams Data Relay Project: For Linux users ready to run (bin) files: download the machine-streams-data-relay-1.0.3-bin.tar_.gz archive full Maven project files: download the machine-streams-data-relay-1.0.3-project.tar_.gz archive For Windows users ready to run (bin) files: download the machine-streams-data-relay-1.0.3-bin.zip archive full Maven project files: download the machine-streams-data-relay-1.0.3-project.zip archive From the links below, select the project you want to use.
  Hello, everyone! Discover how we embed security throughout the entire lifecycle of the ThingWorx platform in our latest “ThingWorx on Air” episode!   Hear Walter walk through how the ThingWorx platform is secured from end to end. Walter breaks it down into three simple parts: secure design, secure coding practices and continuous security improvements via our maintenance releases.   Listen to Episode 07 to hear the steps we’re taking in each of these areas and how security is at the forefront of what we do.   Finally, Walter mentions the Secure Deployment Hub, our brand-new set of resources to help you securely deploy your ThingWorx apps. Check out my last tech tip to learn more.   As always, stay connected, Kaya
Hi,

If you need to change the hostname that was used when installing ThingWorx Flow, a few manual changes can be made without re-installing Flow. In short, the Flow hostname needs to be changed in the nginx configuration and in the Flow module configurations; wherever you see the hostname that was used at Flow installation, replace it with the new hostname.

Change the following configurations after renaming the ThingWorx Flow server in Windows OS:
1. Stop the Flow, Nginx and ThingWorx Tomcat services.
2. Update C:\Program Files\<nginx>\conf\conf.d\vhost-flow.conf: in server_name, replace the hostname with the new one.
3. Update C:\Program Files (x86)\<ThingWorxFlow>\modules\lookup\deploymentConfig.json: in ENDPOINT, replace the hostname with the new one.
4. Update <ThingWorxFlow>\modules\oauth\deploymentConfig.json: in UI_ENDPOINT and ENDPOINT, replace the hostname with the new one.
5. Update <ThingWorxFlow>\modules\trigger\deploymentConfig.json: in DOMAIN and TRIGGER_HOST, replace the hostname with the new one.
6. Update <ThingWorxFlow>\modules\ux\deploymentConfig.json: in api_endpoint, view > oauth_server, and service_api_endpoint, replace the hostname with the new one. If the ThingWorx Platform is installed on the Flow server, also update enterprise > built > host + prefix_url with the new hostname.
7. If the ThingWorx Platform is not installed on the Flow server: stop the ThingWorx Tomcat service and update <ThingworxPlatform>\platform-settings.json, PlatformSettingsConfig > OrchestrationSettings > QueueHost, replacing the Flow hostname with the new one.
8. Restart the ThingWorx, Flow and Nginx services.

After these steps, Flow should be accessible with the new hostname: https://new_hostname:port/Thingworx/Composer/apps/flow/

Regards, Raluca Edu
Whether your ThingWorx instances are deployed on premises, in the cloud or a hybrid of the two, I'd like you to imagine: You have a super cool app. You want to deploy it securely. You're not a security expert. What do you do? How do you know how to securely deploy your app? Where do you go for security best practices? Introducing the new ThingWorx Secure Deployment Hub!   The ThingWorx Secure Deployment Hub is a new section of our support site that will introduce you to the ThingWorx security landscape and direct you to security resources pertaining to the Edge, the platform and beyond.   From permissions and provisioning, to subsystems and SSO, the hub is packed with our recommendations and best practices for you to deploy your app in a secure fashion.   Happy deploying! Kaya
This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required for proper installation and configuration of an Apache ActiveMQ server to use with the Axeda Machine Streams service (which is supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide available from the Axeda Support site, http://help.axeda.com. This patch overlay needs to be applied to the v5.8.0 ActiveMQ server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference, Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.
Recently I needed to be able to parse and handle XML data natively inside of a ThingWorx script, and this XML file happened to have a SOAP namespace as well. I learned a few things along the way that I couldn’t find a lot of documentation on, so am sharing here.   Lessons Learned The biggest lesson I learned is that ThingWorx uses “E4X” XML handling. This is a language that Mozilla created as a way for JavaScript to handle XML (the full name is “ECMAscript for XML”). While Mozilla deprecated the language in 2014, Rhino, the JavaScript engine that ThingWorx uses on the server, still supports it, so ThingWorx does too. Here’s a tutorial on E4X - https://developer.mozilla.org/en-US/docs/Archive/Web/E4X_tutorial The built-in linter in ThingWorx will complain about E4X syntax, but it still works. I learned how to get to the data I wanted and loop through to create an InfoTable. Hopefully this is what you want to do as well.   Selecting an Element and Iterating My data came inside of a SOAP envelope, which was meaningless information to me. I wanted to get down a few layers. Here’s a sample of my data that has made-up information in place of the customer's original data:                <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" headers="">     <SOAP-ENV:Body>         <get_part_schResponse xmlns="urn:schemas-iwaysoftware-com:iwse">             <get_part_schResult>                 <get_part_schRow>                     <PART_NO>123456</PART_NO>                     <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>                     <MFG_DIV_CD>E</MFG_DIV_CD>                     <SCHED_DT>2020-01-01</SCHED_DT>                 </get_part_schRow>                 <get_part_schRow>                     <PART_NO>789456</PART_NO>                     <ORD_PROC_DIV_CD>E</ORD_PROC_DIV_CD>                     <MFG_DIV_CD>E</MFG_DIV_CD>                     <SCHED_DT>2020-01-01</SCHED_DT>                 </get_part_schRow>             </get_part_schResult>         </get_part_schResponse>     </SOAP-ENV:Body> </SOAP-ENV:Envelope> To get to the schRow data, I need to get past SOAP and into a few layers of XML. To do that, I make a new variable and use the E4X selections to get there: var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*; Note a few things: resultXML is a variable in the service that contains the XML data. I skipped the Envelope tag since that’s the root. The .* syntax does not mean “all the following”, it means “all namespaces”. You can define and specify the namespaces instead of using .*, but I didn’t find value in that. I found some sample code that theoretically should work on a VMware forum: https://communities.vmware.com/thread/592000. This gives me schRow as an XML List that I can iterate through. You can see what you have at this point by converting the data to a String and outputting it: var result = String(data); Now that I am to the schRow data, I can use a for loop to add to an InfoTable: for each (var row in data) {      result.AddRow({         PartNumber: row.*::PART_NO,         OrderProcessingDivCD: row.*::ORD_PROC_DIV_CD,         ManufacturingDivCD: row.*::MFG_DIV_CD,         ScheduledDate: row.*::SCHED_DT     }); } Shoo! That’s it! Data into an InfoTable! Next time, I'll ask for a JSON API. 😊
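Putting the pieces above together, here is a hedged sketch of what the complete service body might look like. The Data Shape name (PartScheduleShape) and the assumption that all four fields are STRING are placeholders for whatever your own Data Shape defines; resultXML is the service's XML input parameter, and the E4X selection and loop are the same ones shown above.

// Minimal sketch of the full service, assuming a Data Shape "PartScheduleShape"
// with STRING fields PartNumber, OrderProcessingDivCD, ManufacturingDivCD, ScheduledDate.
var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "PartSchedule",
    dataShapeName: "PartScheduleShape"
});

// Skip the SOAP envelope and drill down to the repeating schRow elements
var data = resultXML.*::Body.*::get_part_schResponse.*::get_part_schResult.*;

// E4X iteration: one InfoTable row per schRow element
for each (var row in data) {
    result.AddRow({
        PartNumber: String(row.*::PART_NO),
        OrderProcessingDivCD: String(row.*::ORD_PROC_DIV_CD),
        ManufacturingDivCD: String(row.*::MFG_DIV_CD),
        ScheduledDate: String(row.*::SCHED_DT)
    });
}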
This might be a well-known topic for some, but I recently had a need that Event Routers fit into perfectly and wanted to share. If you have some neat applications for Event Routers on mashups, feel free to reply!   What? Event Routers are a function on Mashups that let you connect multiple inputs to a single output. For my use case, this was extremely helpful to let me have two different Service Outputs go to the same Widget. They are a really simple tool that can save a lot of headache.   How? Event Routers work by funneling the latest data through to a single output. This is particularly useful for  user-activated actions with the output tied to a widget or another service. The Event Router automatically activates when any one of the Inputs changes.   Example I have two services that generate HTML from different sources, but I want to display just the latest one that the user had activated in a single HTML Text Area widget. The two different services are activated with two different buttons. But how do I show these two outputs in a single widget? Create an Event Router with two HTML inputs!         Now I just tie each service output to the Inputs and tie the Output to the HTML Text Area Text (note: the icon for Input2 is incorrect—it should be HTML as well; this system is running 8.5.1, perhaps it's an issue in that release).         Now when the user clicks on either button, the correct service’s HTML is sent to the HTML Text Area. Ta-da!   P.S. I noticed in some older posts that Event Routers used to be a widget or extension that came and went. Now (8.5+) it is baked into the Functions on the far right side of Mashup Builder.
The Property Set Approach

This article details an approach developed by Prachi Rath and Roy Clarke, refined by the EDC team in the December 2019 Remote Monitoring of Assets Reference Benchmark, and used to handle multi-property business rules in an Enterprise ThingWorx application.

Introduction
If there are logic rules which depend upon multiple properties, and each property receives its updates one at a time, then each property will need to have an identical subscription, because there is no way for any one subscription to know the most up-to-date values for the other properties. This inefficient approach would create redundancy and sizing constraints, reducing the capacity of the application to scale up to the Enterprise level. The Property Set Approach resolves this issue by sending in all property updates as one Info Table or JSON property (called the “property set”), which can then have a single subscription. The property set is assembled on the Edge when an update needs to be sent, and then the Platform dissects, processes, and stores the data within this property set as required by the business logic.

This approach also involves caching the last property value into a runtime variable so that it can be referenced within the business logic subscription without having to be retrieved from the database. This can significantly improve the runtime of the subscription, reducing the number of resources required to sustain the business logic and ensuring that any alerts or events resulting from the business logic occur as soon as possible. It also reduces the load on the database, ensuring that data ingestion can complete unhindered.

So, while there are many benefits to this approach, it is also more complicated. It tightly couples the development of the Edge and Platform code and increases the application complexity, making the application slightly harder to maintain in the long run. The property set also requires a little more bandwidth and a more stable internet connection between Edge devices and the Platform, since there is more metadata in an Info Table property, and therefore every update is slightly larger than it would be otherwise. So this approach is only recommended when multi-property rules are a requirement of the application and a stable internet connection exists between the Edge and Platform.

Platform Implementation
I. Create an Info Table (or JSON) Property
This tutorial uses the out-of-the-box Data Shape called NamedVTQ for the Info Table property, which is defined on a Thing Template as a remote property. It is important that this is not marked as persistent or logged, as the purpose is to reduce the amount of database writes and reads required by the Platform. The Info Table property has the following property definition:

<PropertyDefinition aspect.dataChangeType="ALWAYS" aspect.dataShape="NamedVTQ" aspect.isPersistent="false" baseType="INFOTABLE" isLocalOnly="false" name="numberPropertySetAsInfotable"/>

II.
Create a Data Change Event Subscription for the Info Table Property The subscription has three parts: Cache the last value for the property in a runtime variable Start off the business rules processing, sending in the whole Info Table Send the Info Table to be logged as individual local properties in the database     // First step caches the last Value, refer to the next step… // Second step sets off the business rules processing with the Info Table me.ScaleTestBusinessRuleForPropertySetAsInfotable({ PropertySetAsInfotable: eventData.newValue.value }); // Third step sends the Info Table as one property into a service which parses it into the // individual properties, updating both the runtime properties on the remote thing and the database me.UpdatePropertyValues({ values: eventData.newValue.value /* INFOTABLE */ });       III. Set-Up Caching Each property which needs to be cached should be created on the Thing Template level and named in a similar way, say by placing the word “Last” at the end, such as “Property1” => “Property1Last”, “Property2” => “Property2Last”, etc. This property should NOT be logged or persistent, as the point of this is to store the most recent value in memory, removing any superfluous dependency on database queries in the process. Note that while storing the property in runtime memory makes it much more accessible, it also means that the property needs to be rewritten manually upon Platform restart. Additional code (not provided here) must be written to populate these properties from the database upon application start-up.   The following code should be placed in the data change event subscription (option 1 in the case where only a few properties need caching, or option 2 if every property value needs to be cached):   Option 1: Some but Not All Properties Need Caching     // Names of properties for which you want to cache the last value var propertyNames = ['number1', 'number2']; // Loop through the properties and cache their time if they are found in the property set propertyNames.map(assignLast); // This function can be split into two functions for Age and Last separately if need be function assignLast(propertyName) { logger.debug("Looping for property -> "+ propertyName); var searchprop = new Object(); searchprop.name = propertyName; property = eventData.newValue.value.Find(searchprop); if(property){ logger.debug("Found Row. Name= " + property.name); var lastPropertyName = propertyName+"Last"; if(property.value) { // Set the cache property on me, this entity, to the current property value me[lastPropertyName] = me[propertyName]; } } else { logger.debug("Property Not Found in property set -> " + propertyName); } }       Option 2: All Properties Need Caching     var rowCount = eventData.newValue.value.getRowCount(); for(var i=0; i<rowCount; i++){ logger.warn("property name->" + eventData.newValue.value[i].name + "----- property new value->" + eventData.newValue.value[i].value.value); var propertyName = eventData.newValue.value[i].name; var lastPropertyName = propertyName+"Last"; me[lastPropertyName] = me[propertyName]; logger.warn("done last subscription, last property value for lastPropertyName" + me[lastPropertyName]); }         Useful Platform Code Snippets I. 
Age Calculation     var date1 = new Date(); var date2 = me.GetPropertyTime({ propertyName: propertyName /* STRING */ }); var result = millisToMinutesAndSeconds (dateDifference(date1, date2) ); // This function converts from an unintelligibly large number in milliseconds to something formatted in minutes and seconds function millisToMinutesAndSeconds(millis) { var minutes = Math.floor(millis / 60000); var seconds = ((millis % 60000) / 1000).toFixed(0); return (seconds == 60 ? (minutes+1) + ":00" : minutes + ":" + (seconds < 10 ? "0" : "") + seconds); }       II. Sort the Info Table by Time     var params = { sortColumn: "time" /* STRING */, t: me.propertySet/* INFOTABLE */, ascending: ascending /* BOOLEAN */ }; var result = Resources["InfoTableFunctions"].Sort(params);       III. Search the Info Table for a Property     var searchprop = new Object(); searchprop.name = propertyName; property = PropertySetAsInfotable.Find(searchprop); if(property === null){ logger.info("Property Not Found -> " + propertyNumber1); } else { logger.info("Found Row. Name= [" + property.name + "], value= " + property.value.value); }         Edge Implementation This example implementation uses the .NET Edge SDK to build a property set Info Table at the Edge.   I. Define the Data Shape A standard Data Shape is used (NamedVTQ), but because this Data Shape is not exposed in the Edge SDK code, it has to be created manually.     // Data Shape definition for NamedVTQ FieldDefinitionCollection namedVTQFields = new FieldDefinitionCollection(); namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_NAME, BaseTypes.STRING)); namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_VALUE, BaseTypes.VARIANT)); namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_TIME, BaseTypes.DATETIME)); namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_QUALITY, BaseTypes.STRING)); base.defineDataShapeDefinition("NamedVTQ", namedVTQFields);     II. Define the Info Table Property The property defined should NOT be logged or persistent, and it can be read-only, since data is always pushed from the Edge and read from the server cache when accessed on the Platform. Note that the push type of the info table property MUST be set to "ALWAYS" (if set to "VALUE", the data change event will only fire if the number of rows changes).   // Property Set Definitions [ThingworxPropertyDefinition( name = "DevicePropertySet", description = "Alternative representation of properties as an Info Table for rules processing", baseType = "INFOTABLE", category = "Status", aspects = new string[] { "isReadOnly:true", "isPersistent:false", "isLogged:false", "dataShape:NamedVTQ", "cacheTime:0", "pushType:ALWAYS" } ) ]     III. Define a Property to Store the GOOD Quality Status   private static String QUALITY_STATUS_GOOD = QualityStatus.GOOD.name();     IV. Define Functions to Populate the Value Collections  An Info Table is really just made up of many Value Collections, where each Value Collection is considered a row. These services take in the name and value of a property and return a Value Collection object which can be added to the property set Info Table.   
public ValueCollection createNumberValueCollection(String name, double value) { ValueCollection vc = new ValueCollection(); // Add quality and time entries to the Value Collection vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD); vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow)); vc.SetStringValue(CommonPropertyNames.PROP_NAME, name); vc.SetNumberValue(CommonPropertyNames.PROP_VALUE, value); return vc; } public ValueCollection createBooleanValueCollection(String name, Boolean value) { ValueCollection vc = new ValueCollection(); // Add quality and time entries to the Value Collection vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD); vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow)); vc.SetStringValue(CommonPropertyNames.PROP_NAME, name); vc.SetBooleanValue(CommonPropertyNames.PROP_VALUE, value); return vc; }     V. Build the Property Set Call this code from the processScanRequest method to build the property set.   // Create an instance of a new Info Table using the standard "NamedVTQ" Data Shape InfoTable propertySet = new InfoTable(getDataShapeDefinition("NamedVTQ")); // Set name/value for Temperature using convenience function propertySet.addRow(createNumberValueCollection("Temperature", temperature)); // Set name/value for Pressure using convenience function propertySet.addRow(createNumberValueCollection("Pressure", pressure)); // Set name/value for TotalFlow using convenience function propertySet.addRow(createNumberValueCollection("TotalFlow", this._totalFlow)); // Set name/value for InletValve using convenience function propertySet.addRow(createBooleanValueCollection("InletValve", inletValveStatus)); // Set name/value for FaultStatus using convenience function propertySet.addRow(createBooleanValueCollection("FaultStatus", faultStatus)); // Set the property set Info Table property base.setProperty("DevicePropertySet", propertySet);     VI. Update the subscribed properties These two lines of code update the properties and events, actually sending the property set (containing all property updates) to the Platform.   base.updateSubscribedProperties(15000); base.updateSubscribedEvents(60000);     Conclusion Following these steps will enable the Edge to build a property set before sending any property updates to the Platform. The Platform can then rely on caching to process the business logic with no database dependency, which is faster and more efficient than any other approach. Finally the updates are still written to the database, so in the end, there is no functional difference between using a property set and binding each property individually. Please don't hesitate to comment here with any questions about this approach.
Joint Study by Cognizant and the Economist Intelligence Unit (EIU) - Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this rapidly accelerating trend.
With the advent of faster and cheaper hardware, and owing to vast improvements in connectivity such as Gigabit networks and 5G, data is collected and processed at an ever-increasing pace. As such, there is a related need for the Analytical Models built on top of such data to evolve over time. As part of this evolution, modelling experiments with more data, new variables, new techniques, and different training parameters will be performed. The resulting models need to be tracked and monitored, and appropriate versions need to be used for scoring in various scenarios. This creates the need for version control of Analytical Models.

ThingWorx Analytics makes it easy to implement an "Analytics Model Version Control" system via two mechanisms:

A job ID and timestamp based identification system, so that if a model with the same jobName (model name) is retrained, the old model still exists and can be easily retrieved
A tagging mechanism for the above jobs

Before using these techniques to version control Analytics Models, it is critical to decide what represents "the same model" to be versioned. Unlike in the world of Software Engineering, where the concept to be versioned is a file or folder that evolves over time, in the world of Analytics Models there could be several, potentially customer-specific definitions. For example, one definition could be "same training parameters, but trained on the latest, most comprehensive data available". Yet another, more relaxed definition could be "any training parameters, but same training dataset and goal", or even "any training parameters, on any historical version of the training dataset".

Once this concept is agreed upon within the customer's organization, and if training is done in a ThingWorx service by calling the APIs, then for a given jobName (model name) one can simply query the tags for a LatestVersion-type tag, increment it, and create the new model with the same jobName and the incremented tag, as sketched below. Any model version with the same jobName, and its corresponding performance metrics, can then be accessed using the tag. Additional tags (such as techniques used, dataset version, etc.) can be added if desired to make retrieval of context-dependent models more efficient.
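As a rough sketch of how that tag-based versioning could look inside a ThingWorx service: the Thing name "analyticsTrainingThing" and the services "GetLatestVersionTag" and "SubmitTrainingJob" below are hypothetical placeholders for whatever training wrapper your application already uses; only the jobName and LatestVersion tag convention comes from the approach described above.

// Hypothetical sketch: version-controlling a retrained model via tags.
// "analyticsTrainingThing", "GetLatestVersionTag" and "SubmitTrainingJob"
// are placeholders for your own training wrapper services.
var modelName = "PumpFailureModel"; // jobName shared by every version of this model

// Look up the highest LatestVersion:<n> tag currently applied to jobs with this jobName (0 if none)
var latestVersion = Things["analyticsTrainingThing"].GetLatestVersionTag({
    jobName: modelName /* STRING */
});

var newVersion = latestVersion + 1;

// Submit the retraining job under the same jobName, tagged with the incremented version.
// Extra tags (dataset version, technique, ...) can be appended to the same array for easier retrieval later.
Things["analyticsTrainingThing"].SubmitTrainingJob({
    jobName: modelName /* STRING */,
    tags: ["LatestVersion:" + newVersion, "Technique:neural_net"] /* JSON */
});

var result = modelName + " retrained as version " + newVersion;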
A student in the Axeda Groovy course had some good questions. Instead of answering via email, I thought I'd answer here, so we could share the knowledge.

Domain Objects - How can I customize or extend Axeda-provided domain objects? (e.g., I need to store whether a user is an external or internal user.) How can I achieve this?

This is a perfect use case for the Axeda Extended Data API. Documentation on the API can be found in the Axeda Platform v1 API Developer's Reference Guide. A good "Getting Started" topic is available on Axeda Mentor - Getting Started With the Axeda Extended Data API.

To customize/extend/decorate Axeda-provided domain objects, use Extended Objects. Extended Objects can be thought of in the abstract as a collection of custom database table rows. The analogy isn't perfect, but it helps with understanding. The use case in the question is very common. Extended Objects can stand on their own, or they can be associated with an Axeda domain object via the setInternalObjectId() method. In the following sample code, we "decorate" an Axeda User object with two new fields, isExternalUser and companyID:

private void updateAdditionalFieldsForUser(User user, boolean isExternalUser, String companyID)
{
  // GET OBJECT TYPE - This example assumes the ExtendedObjectType has already been created
  ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")

  // GET PROPERTY TYPES
  PropertyType isExternalUserPropertyType = findOrCreatePropertyType(extObjectType, "IS_EXTERNAL_USER")
  PropertyType companyIdPropertyType = findOrCreatePropertyType(extObjectType, "COMPANY_ID")

  /* GET THE EXTENDED OBJECT
   * Note - the example provides a findExtendedObject() method that searches by INTERNAL ID. This linkage via the internal ID
   * associates a domain object to an ExtendedObject, allowing us to "decorate" it with custom attributes
   */
  ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
  if (extObject == null)
  {
    extObject = new ExtendedObject()
    extObject.setExtendedObjectType(extObjectType)
    extObject.setInternalObjectId(user.id.value)
    extObject.addProperty(createExtendedProperty(isExternalUserPropertyType, isExternalUser.toString()))
    extObject.addProperty(createExtendedProperty(companyIdPropertyType, companyID))
    extendedObjectService.createExtendedObject(extObject)
  }
  else
  {
    addOrUpdateExtendedProperty(extObject, isExternalUserPropertyType, "IS_EXTERNAL_USER", isExternalUser.toString())
    addOrUpdateExtendedProperty(extObject, companyIdPropertyType, "COMPANY_ID", companyID)
    extendedObjectService.updateExtendedObject(extObject)
  }
}

/**
 * Adds or updates an extended property.
 *
 * @param extendedObject the extended object to associate with the property
 * @param propertyType the type of the property
 * @param propertyName the name of the property
 * @param propertyValue the value of the property
 */
private void addOrUpdateExtendedProperty(ExtendedObject extendedObject, PropertyType propertyType, String propertyName, String propertyValue)
{
  Property extendedProperty = extendedObject.getPropertyByName(propertyName)
  if (extendedProperty == null)
  {
    extendedProperty = createExtendedProperty(propertyType, propertyValue)
    extendedObject.addProperty(extendedProperty)
  }
  else
  {
    extendedProperty.setValue(propertyValue)
  }
}

/**
 * Retrieves and returns the extended object with the given type and internal id.
 *
 * @param extObjectType the type of the desired extended object
 * @param internalObjectId the internal id of the desired extended object
 * @return the extended object matching the given details or <code>null</code> in case there's no match
 */
private ExtendedObject findExtendedObject(ExtendedObjectType extObjectType, Long internalObjectId)
{
  ExtendedObjectSearchCriteria searchCriteria = new ExtendedObjectSearchCriteria()
  searchCriteria.setExtendedObjectTypeId(extObjectType.getId())
  searchCriteria.setInternalObjectId(internalObjectId)

  List<ExtendedObject> extObjects = extendedObjectService.findExtendedObjects(searchCriteria, -1, 0, "name")

  if (!extObjects?.isEmpty())
  {
    return extObjects[0]
  }
  return null
}

/**
 * Gets a property type given its name and the associated extended object type.
 * If it cannot find the property type it will create it.
 *
 * @param extendedObjectType the associated extended object type for the property type
 * @param propertyTypeName the property type name
 * @return the property type
 */
private PropertyType findOrCreatePropertyType(ExtendedObjectType extendedObjectType, String propertyTypeName)
{
  PropertyType propertyType = extendedObjectType.getPropertyTypeByName(propertyTypeName)
  if (propertyType == null)
  {
    propertyType = new PropertyType()
    propertyType.setDataType(PropertyDataType.String)
    propertyType.setName(propertyTypeName)
    propertyType.setExtendedObjectType(extendedObjectType)
    extendedObjectService.createPropertyType(propertyType)
    extendedObjectType.addPropertyType(propertyType)
  }
  return propertyType
}

By using Extended Objects, and associating them with Axeda domain objects via an internal ID, we can decorate/extend/customize those domain objects with use-case-specific attributes.
While I was writing my previous post about Purging ValueStream entries from Deleted Things, it occurred to me that a similar issue would affect those using InfluxDB as the persistence provider for their ValueStreams.

I wanted to take the opportunity, while the logic was still fresh in my mind, to extend the utility to do the same thing with InfluxDB. I also used the opportunity to create a dynamic InfoTable, as it had been a while since I last did that. In addition, it shows how to interact directly with InfluxDB through its API; a sketch of the kind of API call involved is shown below.

The modification is based on a new thing called influxDBConnector, which has the following services:

dropMeasurements: Deletes all the DB entries related to a Thing
getAllEntries: Queries all entries related to a Thing. It has a string output
getAllEntriesTable: Queries all entries related to a Thing. Infotable output
getMissingThings: Queries the ThingWorx Things table and the InfluxDB measurements, and filters the result to contain only the measurements that are not in the Things table
showMeasurements: Gets all the measurements in InfluxDB

It also has the following properties:

database: Influx database name used to persist the value stream data
InfluxLink: hostname:port of the InfluxDB server

Like the previous example, I created a sample mashup (purgeInfluxDBStreamsMashup) to help execute the services.

Obviously this utility expects that there's an existing Influx persistence provider.

Once more: these services act directly on the DB and need to be used carefully, and only by Administrators. Make sure you fully test them before using them in production.

The attached XML contains the complete utility for PostgreSQL and InfluxDB.

Hope it helps,
Ewerton
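For reference, a minimal sketch of what a call like showMeasurements could look like from a ThingWorx service, assuming an InfluxDB 1.x /query endpoint, the connector's InfluxLink ("host:port") and database properties described above, and no authentication (add username/password parameters to GetJSON if your instance requires them):

// Build the InfluxDB 1.x query URL from the connector's properties (assumed names: InfluxLink, database)
var url = "http://" + me.InfluxLink + "/query?db=" + encodeURIComponent(me.database) +
    "&q=" + encodeURIComponent("SHOW MEASUREMENTS");

// Call the InfluxDB HTTP API directly
var response = Resources["ContentLoaderFunctions"].GetJSON({
    url: url /* STRING */
});

// InfluxDB returns the measurement names in results[0].series[0].values, one name per row
var measurements = [];
if (response.results && response.results[0].series) {
    var values = response.results[0].series[0].values;
    for (var i = 0; i < values.length; i++) {
        measurements.push(values[i][0]);
    }
}

var result = measurements.join(","); // string output listing the measurements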
Hi everyone,

If you've been curious about Solution Central, our new cloud tool for application management and deployment, and wondered how it can help your existing apps, then get ready because we have a feature for you! Our initial release of Solution Central focused on deploying apps built on 8.5 or later. Today, whether big or small, round or square, red or blue, built by PTC or by you, all apps can be deployed using Solution Central. Using Operator Advisor 8.4? We got you covered. Using an old extension? No worries. Using an app built on 7.2? Not a problem. Solution Central can deploy all apps.

This ability to deploy any app, regardless of which version it was created on, is just one of the many improvements we're releasing to help you accelerate your deployment. You can take advantage of this latest capability in ThingWorx 8.5.3 and Solution Central 1.0.3. Try it out today!

We're not stopping there, though.

We have a lot that we're working on and wanted to share a quick glimpse of what you can expect in the very near future:

ability to manage solution lifecycle through deletion of solutions and their versions
ability to secure and manage connected environments through improved activation and deactivation services
advanced administrative capability to withdraw and re-initiate deployment requests

The above functionality, along with additional enhancements, is targeted for TW8.5.5 and SC1.1.0 in early April. These improvements will help to maximize the time savings and overall value you derive from Solution Central's unique deployment services.

As always, stay connected,
Kaya
ThingWorx 8.5 Sizing Guide

Sizing is a very important part of the application design process, answering such questions as: How much hardware is required? What specifications does this hardware need in order to handle the expected load? And, therefore, what is the overall cost of setting up and maintaining the ThingWorx environment? Properly sizing the environment before development begins ensures that there are no unexpected costs or limitations to application functionality later on down the road.

"Hardware sizing is driven by many factors - some more easily calculated than others", as stated in the new ThingWorx 8.5 Sizing Guide. "Measures like data streaming frequency (the data ingestion component) and HTTP request volume (the data visualization component) are easily calculated... However, sizing considerations for the data processing component of the application can depend largely on business use cases and application design." Enterprise-ready applications have the capacity to handle all aspects of an IoT application, from data ingestion and processing to data visualization, as detailed in the friendly infographic above, which many will recognize from LiveWorx. Inside the ThingWorx 8.5 Sizing Guide there are formulas designed to help size the more analytical aspects of the application, as well as descriptions of other factors and how they (conceptually) play a role in sizing. A simple back-of-the-envelope illustration of the ingestion measure is sketched below.

There are also two application design examples which step the reader through the calculations, the comparisons, and the selections of hardware for each use case. New in this version, these examples have been simulated in the real world to prove that the theory behind these calculations is sound, and to demonstrate the full process of designing, sizing, and testing an application. One of the examples (shown here) sizes a Connected Product solution, which has many remote things in the field, each writing to the Platform at a slower rate, for consumption by a large number of general users who neither access the same mashups many times nor refresh their view very often. The second example is much more complex, modeling an industrial use case, where there are many different kinds of users each accessing the mashups many times, fewer things, and more variations in the types of properties each thing possesses. These examples are designed to help anyone with any use case step through the sizing of their application properly.

Please check out the new ThingWorx 8.5 Sizing Guide, especially because each version of ThingWorx is different and must be sized accordingly. Comments and questions about the guide are very welcome right here on this thread!
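As a simple illustration of the "easily calculated" ingestion measure, here is a hedged example; the numbers are assumptions chosen for illustration only and are not taken from the Sizing Guide itself.

// Back-of-the-envelope ingestion estimate for a hypothetical fleet (illustrative only)
var thingCount = 10000;        // connected things in the field
var propertiesPerPush = 5;     // properties written per update
var pushIntervalSeconds = 60;  // how often each thing reports

// Average value stream write rate the Platform must sustain
var writesPerSecond = (thingCount * propertiesPerPush) / pushIntervalSeconds;

var result = writesPerSecond;  // roughly 833 writes per second under these assumptions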
In the ptc-windchill-extension-1.0.0-14.zip archive there is an extension called infotableselector_ExtensionPackage.zip. This extension enables the use of the Infotable Selector widget, which can be used to clear the selection in a grid. To see how to use this widget, take a look at the picture:
When using Value Streams to log historical data, there's a service to purge the ValueStream entries from the Thing itself. But what do you do when a Thing that once logged values into a ValueStream has been deleted? Currently, there's no OOTB way to delete these entries if they're not being used anymore. I was asked this question recently and wanted to share the answer with the entire community. I created a utility application that queries the TWX DB directly for Things that are present in the ValueStream but no longer exist, and allows a user to purge them.

These services assume PostgreSQL as the persistence provider for the ValueStreams. The services can be modified if you're using SQL Server. They do not apply to InfluxDB persistence providers.

The twxDBConnector thing is based on the PostgreSqlServer template, which is present in the Relational Databases extension. It has 4 main services:

getEntriesToPurge: Queries the TWX DB for all the entries related to a Thing. It does not consider the ValueStream id, so it returns the entries across all value streams. Requires a Thing name as an input
getMissingThings: Queries Things that are present in the ValueStream DB table but not in the Things table, meaning that they were deleted
purgeThingEntries: Purges the entries related to a Thing. It does not consider the ValueStream id, so it will purge all the entries across all value streams
purgeAllEntries: Purges all the entries related to Things that were deleted (see the sketch below for how this can be orchestrated)

The queries can be modified to allow the selection of the value stream to be cleaned.

I also added a sample mashup that leverages the services.

The twxDBConnector has a configuration table that requires the DB connection string, user and password.

You can also purge it all directly from the DB using pgAdmin:

DELETE FROM value_stream
WHERE value_stream.entry_id IN
  -- Queries all entries in the value stream table that belong to an inexistent thing
  (SELECT entry_id
   FROM value_stream
   LEFT JOIN thing_model ON value_stream.source_id = thing_model.name
   WHERE thing_model.name IS NULL)

Attention: These services change the TWX DB directly, so use them carefully.

To use it:
Import the PostgreSqlServer extension (you might need to change the JAR in the extension depending on the TWX version you're using)
Import the entities from purgeVSEntries.xml

Thanks @dsantos for the help on optimizing the queries.

Hope it helps,
Ewerton
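As an illustration of how purgeAllEntries can tie the other services together, here is a hedged sketch of a ThingWorx service on the twxDBConnector thing; the "name" field assumed in the getMissingThings result and the "thingName" input of purgeThingEntries are assumptions, so adjust them to match your implementation:

// Find Things that still have value stream rows but no longer exist in ThingWorx
var missing = me.getMissingThings();

var purged = 0;
for (var i = 0; i < missing.rows.length; i++) {
    var thingName = missing.rows[i].name; // assumed column name in the getMissingThings result

    // Delete every value_stream row that belongs to the deleted Thing
    me.purgeThingEntries({ thingName: thingName /* STRING */ });
    purged++;
}

var result = "Purged value stream entries for " + purged + " deleted Things";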
Happy New Year, everyone! New year, same mission. As we continue to improve ThingWorx, we remain committed to taking into account what you, our users, are saying. What are you using ThingWorx for? What do you want it to do? What additional tools are you looking for?

To hear from you directly, our PM team has created a quick survey to understand your identity management and SaaS strategies a little better.

Complete the survey below for a chance to win a free Solution Central t-shirt!

Thanks in advance! At the end of the survey, all respondents will receive a link to check out the latest in our ThingWorx 8.5 release. One lucky winner will receive the Solution Central t-shirt.

Thanks for sharing your info! We'll be sure to study it as we continue to develop our robust IoT solutions platform.

Stay connected in the New Year!
Kaya
Hi all,

ThingWorx contains lots of useful functionality for your services (the last count is 339 snippets in ThingWorx 8.5.2). These snippets are an important part of the platform's application-building capabilities, and most of them are simple enough to understand based on their name and the description that appears when hovering over them.

I have noticed, however, that in some cases platform users are not aware of their full capabilities. With this in mind, I started creating a Snippet Guide for my personal use some time ago, and I'm now sharing it with the community. It contains additional explanations, documentation links and sample source code tested by me.

Please bear in mind that it was written for an earlier ThingWorx version and I did not have enough time to update it for 8.5.x, but it should work the same here as well.

This enhanced documentation is not supported by PTC, so please do not open a Tech Support ticket based on the content of this document; instead, comment on this thread if there are things I can improve in it.

Happy New Year!
Good evening and Happy New Year!

I launched version 2.3.0 of the GitBackup Extension. The release is available here: https://github.com/vrosu/thingworx-gitbackup-extension/releases/tag/V2.3.0

It adds the capability to push commits using a user-specific committer name and email. This is an optional feature, which allows multiple developers to push commits under their own credentials. The documentation was updated with more details about this.

This release certifies the extension to work with ThingWorx 8.5.x. (Previous ThingWorx versions have not been tested, and the documentation was updated accordingly.)

As always, this extension is not a PTC-supported product, so if you have issues with it, please do not open a PTC Tech Support ticket; instead, use GitHub's issue system or the PTC Community.

Please feel free to fork and improve it.