IoT & Connectivity Tips

It’s here! It’s happening! ThingWorx 9.0 released today!

Unleash the Power of Now and leverage exciting new features like:
- Improved performance and higher availability via active-active clustering
- Faster, more streamlined development with:
  - New widgets like line, bar & schedule charts
  - Undo & redo functionality in Mashup Builder (look out for an upcoming Ask Kaya tech tip)
  - New Mashup mobile settings
- Simpler management of entities and applications:
  - Improved auditing scale & usability
  - Ability to group entities via Thing Groups
  - Ability to deploy apps built prior to 8.5 with Solution Central
- And more, like:
  - Confidence model training
  - Delegated authorization for Flow
  - [Navigate] Component-based app development for custom production use apps (upgrade-safe)

And even more—the list goes on and on. Check out the full list of new functionality in the 9.0 release notes and discover our What’s New page.

Access 9.0 under “ThingWorx Foundation” on the PTC Downloads Page and unleash its power today! As you get started, check out the ThingWorx Help Center for guidance.

Enjoy 9.0 and let us know what you think below!

Kaya
Hello everyone!

We’re back with Episode 08 of ThingWorx on Air! In this episode, I sit down with Ryan Servais, one of our High Availability (HA) experts on the ThingWorx product management team. We continue our HA discussion from previous Ask Kaya tech tips and cover some frequently asked questions like: What are the benefits of active-active clustering? How does active-active clustering enable horizontal scale? How can I get started?

Brand-new to active-active clustering? Check out these tech tips to start:
- 9.0 Sneak Peek: Active-Active Clustering for ThingWorx
- 9.0 Sneak Peek: ThingWorx Architecture for Active-Active Clustering
- 9.0 Sneak Peek: Flexible Deployments of Active-Active Clustering for ThingWorx

Click here to listen to how active-active clustering can help you in a variety of scenarios:
- If you have a request overflow in production and your servers are slowing down, try out active-active clustering!
- If your IT admin keeps delaying replacing the network card on your HPE rack server and you keep losing connections, check out the power of active-active clustering!
- If your team is challenged to provision 1000s of additional assets into your system and you’re worried one server can’t handle it, use active-active clustering for horizontal scale!

Finally, if you haven’t already, check out Ryan’s LiveWorx session with Senior IoT Product Manager Ayush Tiwari, where they break down availability into its core components and explain how you can leverage active-active clustering to achieve key benefits like reduced downtime, increased cost savings, and more.

Enjoy!

Stay connected,
Kaya
Hiya,

I recently prepared a short demo which shows how to onboard and use Azure IoT devices in ThingWorx, and added some usability tips and tricks to help others who might struggle with some of the things that I did.

The good news... I recorded and posted it to YouTube here.
- Connect Azure IoT Hub with ThingWorx (to be updated soon for 9.0 release)
- Using the Azure IoT Dev Kit with ThingWorx
- Getting the Azure IoT Hub Connector Up and Running (V3/8.5)

Enjoy, and don't hesitate to comment with your own tips and feedback.

Cheers,

Greg
Hi Community,

Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I’d change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components Required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure Cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector would pick up the incoming messages and pass them on to the cloud ThingWorx, which would decode the MQTT payload and map it to Thing property updates (a rough sketch of that decoding step follows at the end of this post).

The only part that I presently don’t like about this approach is that you’ll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required because ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done.

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK.

Regards,

Greg Eva
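As a rough illustration of the decoding step described above, here is a minimal sketch of what a ThingWorx service (JavaScript) on the cloud platform might look like, assuming the payload arrives as a JSON string and maps one-to-one onto Thing properties. The input parameter name (payload), the property names, and the payload shape are all assumptions for illustration; the real codec would mirror whatever encoding the Protocol Adapter Toolkit applies.

// Sketch of a ThingWorx service that decodes an assumed JSON payload (input parameter: payload, STRING)
// and maps its fields onto properties of this Thing.
var data = JSON.parse(payload);

// Property names we expect to map (assumed; adjust to match your Thing's model)
var mappedProperties = ["temperature", "pressure"];

mappedProperties.forEach(function (name) {
    if (data[name] !== undefined) {
        // Assign the decoded value to the matching Thing property
        me[name] = data[name];
    } else {
        logger.warn("Payload did not contain expected field: " + name);
    }
});

A corresponding encoding service would do the reverse for any C2D (Cloud-to-Device) messages, building the payload back up before publishing it toward the IoT Hub.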
  Ever dreamed of participating in the design of the latest ThingWorx features? Now’s your chance! Join the UX Lab in usability and research sessions to help inform the design of your favorite ThingWorx features from the Edge to Kepware to remote monitoring to Solution Central and more!   For those of you newer to PTC, the UX Lab is an opportunity to see early views of product mockups, wireframes, etc. to provide your direct feedback; the UX Lab is part of LiveWorx, our definitive event for digital transformation, and this year both are virtual!   Click here to influence the design of Edge, Kepware, remote monitoring, Solution Central, and so much more!   Reach out with any questions!   Stay connected, Kaya
A while back, when I was learning about the ThingWorx platform, I could not find any good description of how InfoTables worked, so I reverse-engineered everything I could about them from a JavaScript perspective and wrote a short paper that I think is still of use today, so I posted a copy here. It discusses the role InfoTables play as a ThingWorx primitive for passing data to and from ThingWorx, and some of the lesser-known capabilities that the data structure itself has built in. Here is a link to it: Getting to Know InfoTables.pdf.
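To give a quick taste of the structure before opening the paper, here is a minimal sketch of working with an InfoTable inside a ThingWorx service (JavaScript). It uses the out-of-the-box InfoTableFunctions resource and the GenericStringList Data Shape; the row values are made up for illustration, and the paper itself remains the deeper reference.

// Create an empty InfoTable from an existing Data Shape
var myTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName: "InfoTable",
    dataShapeName: "GenericStringList"
});

// Each row is a plain JavaScript object whose keys match the Data Shape fields
myTable.AddRow({ item: "hello" });
myTable.AddRow({ item: "world" });

// Rows can be read back like an array
for (var i = 0; i < myTable.getRowCount(); i++) {
    logger.info("Row " + i + ": " + myTable.rows[i].item);
}

// Assigning the table to 'result' returns it from the service
var result = myTable;

The paper goes considerably deeper than this, particularly into the lesser-known behaviors mentioned above.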
Hi, everyone!

In previous tech tips, we’ve introduced 9.0’s biggest feature, active-active clustering, and covered some of the main architectural components. Today, I’ll cover some other architectural configurations and the flexibility that active-active clustering provides to meet your IoT deployment needs.

Deployment Flexibility
With active-active clustering, ThingWorx allows you to achieve the highest availability of your IIoT system while still retaining the high degree of deployment flexibility that ThingWorx is known for.

It’s important to emphasize flexibility not only in where you deploy ThingWorx, but also in how you deploy it. Flexibility is embedded in the ThingWorx architecture to suit your deployment needs. Let’s look at two interesting options.

Cost-Efficient Deployment
The first example is deploying ThingWorx in a cost-efficient architecture for smaller or cost-sensitive deployments. Instead of each component running on its own separate VM or box, the ThingWorx Connection Server, ThingWorx Foundation Server, ZooKeeper, and Ignite are all installed on the same box.

With three boxes in parallel, the system still achieves high availability while reducing deployment costs by minimizing the number of servers utilized. A minimum of three servers is needed to achieve the odd number of instances required by ZooKeeper while still providing redundancy.

In order to optimize for cost while still providing high availability, there are a few factors to consider with this deployment. In this configuration, Ignite runs within the same JVM as ThingWorx Foundation, rather than on a different server. While this does reduce the overall footprint, Ignite will consume additional memory, sharing resources with the ThingWorx Foundation platform. And since the ThingWorx Connection Server runs on the same VM as the ThingWorx Foundation Server, socket limitations are introduced, which can restrict the maximum number of connected devices.

In short, the example above illustrates how, while being in an active-active clustered environment, you still retain the deployment flexibility to optimize for your deployment needs. In this scenario, the architecture is optimized for a leaner, more cost-effective deployment at the expense of the number of connections and memory.

Now, let’s walk through some common use cases where ThingWorx is deployed on premise on VMs or bare metal to support use cases that require a balance of availability and scalability at an affordable cost. A few examples include:
- On-premise deployment of ThingWorx Foundation to support a dedicated factory production line on a plant floor to improve its connected manufacturing operations.
- VM-based on-site deployment of ThingWorx Foundation to support IoT applications for a smart building scenario.
- Smart ship deployments managing key ship operations, where ThingWorx is deployed on a ship and sends data intermittently to private data centers running ThingWorx on shore through a satellite network, leveraging ThingWorx federation technology.
- A failover-proof redundant setup required to capture key service metrics for medical equipment for predictive failures and maintenance, where ThingWorx is deployed on hospital premises.

OEM Deployment with Common Components
The second example covers how active-active clustering architectural components can be shared across multiple clusters to reduce the cost and complexity of deployment. This can include scenarios where a large deployment spans multiple sites, geographies, or end customers.

In this deployment, there are two ThingWorx clusters that share a common ZooKeeper cluster and database layer. Within each cluster, both the Connection Server and ThingWorx Foundation (with Ignite running as embedded) are deployed on a VM.

For the shared components, proper separation measures must be taken to ensure security. For ZooKeeper, separate namespaces and authentication from the clients are required. For the shared database layer, each cluster must have a unique schema, a separate shared file system, and separate user credentials with database-level isolation. Please note that with a large database shared across tenants, rather than a dedicated database for each tenant’s HA cluster, there is a risk that issues caused by one tenant will impact the performance and availability of the others.

This architecture is optimized for a larger deployment comprised of multiple ThingWorx clusters requiring separation between the clusters, while still allowing the provider to share a common infrastructure for cost reduction.

Again, let’s walk through a few common use cases, including:
- A ThingWorx OEM hosting multiple ThingWorx Foundation instances in their managed data center or public cloud to offer applications with further customization abilities to their end users. In this case, an OEM vendor can offer multiple ThingWorx clusters in active-active cluster mode with shared pieces of infrastructure across multiple tenants.
- A ThingWorx enterprise customer looking to build and host different ThingWorx-based applications in their IT data center, where each application is powered by an individual active-active cluster satisfying specific digital transformation use cases across the enterprise value chain, including those related to engineering, product, sales, marketing, supply chain, partners, service, and support.
- A ThingWorx factory customer looking to host multiple ThingWorx applications from their main regional factory IT data center to cater to the IoT needs of different factories located in surrounding geographic regions. In this case, a customer could dedicate one HA cluster to each factory to enable that factory/site to power multiple IoT applications efficiently.

These two configurations are some of the lesser known, yet equally powerful, deployments of this new architecture. Again, the flexibility of our deployment architecture is one of the key superpowers of the ThingWorx platform. And the active-active fun doesn’t stop there! Be on the lookout for an upcoming LiveWorx session titled Active-Active Clustering with ThingWorx 9.0 (Session ID: IP1117B), future tech tips like this one, and, of course, the GA release of ThingWorx 9.0 (targeted for June 2020), where you can take advantage of this new functionality!

Let us know what you think in the comments!

Stay connected,
Kaya
Here is a spreadsheet that I created which helps to estimate data transfer volumes, for the purpose of estimating egress costs when transferring data out of region. You will find a number of input parameters, like number of assets, properties, file sizes, and compression ratio, as well as a page with the cost elements, which can be updated from the Interweb.
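As a rough indication of the kind of arithmetic such an estimate involves, here is a small sketch (ThingWorx-style JavaScript) that derives a monthly egress volume from asset count, property count, update rate, payload size, and compression ratio, then applies a per-GB rate. All of the figures and the simplified formula are assumptions for illustration only; the spreadsheet’s own inputs and cost elements remain the authoritative source.

// Rough, illustrative estimate of monthly egress volume and cost (all figures are assumptions)
var assets = 1000;             // number of connected assets
var propertiesPerAsset = 20;   // properties reported per asset
var updatesPerHour = 60;       // property updates per hour, per property
var bytesPerUpdate = 200;      // average payload size per property update, in bytes
var compressionRatio = 0.5;    // 0.5 = payload compresses to half its size
var costPerGB = 0.09;          // hypothetical egress rate in USD per GB

var hoursPerMonth = 24 * 30;
var bytesPerMonth = assets * propertiesPerAsset * updatesPerHour * hoursPerMonth * bytesPerUpdate * compressionRatio;
var gbPerMonth = bytesPerMonth / (1024 * 1024 * 1024);

logger.info("Estimated egress: " + gbPerMonth.toFixed(1) + " GB/month, ~$" + (gbPerMonth * costPerGB).toFixed(2) + "/month");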
Hi everyone,

In anticipation of ThingWorx 9.0’s biggest feature, active-active clustering, we’d like to provide an architectural overview of a sample active-active configuration and its underlying components. If you haven’t already seen it, we invite you to read our previous Community tech tip, where we introduce the concept of active-active clustering for ThingWorx Foundation, which enables you to:
- significantly reduce unplanned downtime for your mission-critical services and apps
- support horizontal scalability of the ThingWorx Server, where you can scale your services up and down based on your requirements
- easily run, package, deploy, and operate advanced apps and services with the help of intuitive browser-based navigation, interactive monitoring and debugging tools, and more
- deploy anywhere (public cloud, private datacenter, on-premise, hybrid, or even locally on your laptop) with deep optimizations for Azure

Now, before we go too deep, we’d like to let you know that you can continue to seamlessly upgrade from previous ThingWorx releases to upcoming ThingWorx 9.0 releases. Previously, you could deploy ThingWorx Foundation in “single server” mode and, for high availability, in “active-passive cluster” mode (see here for details). From the ThingWorx 9.0 release onwards, you’ll be able to continue to deploy ThingWorx Foundation in “single server” mode, and, for high availability scenarios, via our new “active-active cluster” mode. Please note that the active-passive clustering configuration will no longer be supported in ThingWorx 9.0 or onwards.

Let’s start with a quick recap of how a ThingWorx Foundation 9.0 release would look in a single server deployment.

ThingWorx 9.0 Deployed in a "Single-Server" Architecture
Below is a high-level diagram depicting the main architectural layers and components of ThingWorx Foundation deployed in single server mode.

Deployment Components
Below is a brief summary of all major architectural components and their purpose in the deployment architecture:

The Client Layer
This layer is comprised of everything that connects with, sends data to, and receives content from the ThingWorx platform. It can be broken down into two groups:
- Devices/Things: Things, devices, agents, and other assets.
- Users/Clients: People and the respective products (primarily web browsers) they use to access ThingWorx.

The Application Layer
This layer is where ThingWorx Foundation and other applications deployed with ThingWorx Foundation reside, such as ThingWorx Analytics, ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and others. This layer provides connectivity to the client layer, performs authentication and authorization checks, ingests/processes/analyzes content, and reacts to conditions by sending alerts. For a specific ThingWorx Foundation deployment that needs basic device data ingestion, processing, and storage, you can set up only the ThingWorx Foundation server. In some cases, with a large number of device connections, you may want to set up ThingWorx Connection Server with ThingWorx Foundation in a single server for further scalable connectivity.
- ThingWorx Foundation: ThingWorx Foundation is a Java-based application that serves as a rapid, model-based application development platform.
- Shared File Storage: Shared disk space to contain ThingWorx Storage repositories and to store and archive log files accessed by all ThingWorx Foundation servers. A NAS file storage, AWS Elastic File System, Azure Files, or others could be used for this purpose.

The Data Layer
ThingWorx Foundation includes several persistence provider implementations that enable you to choose a database option that best fits your use case. A persistence provider enables the connection to a data store and the ability to perform CRUD operations on that data. See here for more information. Currently, there are two basic variations of persistence providers:
- Model Provider: Responsible for ThingWorx model metadata and system data.
- Data Provider: Responsible for runtime data ingested against the model elements, including streams, value streams, data tables, etc.
ThingWorx supports H2 (in-memory database), PostgreSQL, MS SQL Server, and Azure SQL as both model and data providers, and InfluxDB as a data provider only. Please see here for model and data best practices.

ThingWorx 9.0 Deployed in an Active-Active Clustering Reference Architecture
Below is a reference architecture diagram for ThingWorx 9.0 with multiple ThingWorx Foundation servers configured in an active-active cluster deployment. Please note that this is only one reference example of how ThingWorx 9.0 can be deployed in an active-active clustered environment. There could be other architectural configurations depending upon the needs of the specific deployment.

Deployment Components
Once you have developed an understanding of the basic architectural components in single server mode, below are the additional components required to run ThingWorx in active-active cluster mode.

The Client Layer
This is similar to what has been described in the single server configuration above.

The Application Layer
In this layer, if you’re familiar with the ThingWorx active-passive cluster configuration, then you may be aware of most of the components used below, with the exception of a new component, Apache Ignite, which provides distributed caching for the horizontally scalable ThingWorx Foundation servers.
- Load Balancer: A third-party device that receives network traffic and distributes requests among available servers. In an active-active cluster configuration, the load balancer is used to direct WebSocket-based traffic to the ThingWorx Connection Servers, while user request (http/https) traffic is distributed directly to the ThingWorx Foundation servers. Users can continue to use a load balancer with ThingWorx 9.0 that they might already be using for their existing active-passive or single server deployments with ThingWorx 8.X or previous releases. Some example load balancers include, but are not limited to: HAProxy, Azure Application Gateway, and AWS Application Load Balancer.
- ThingWorx Connection Services: These services handle all message routing to and from devices, providing scalable connectivity to the ThingWorx Foundation server. With the ThingWorx 9.0 release, ThingWorx Connection Services have been upgraded with many additional features to support active-active clustering of the ThingWorx Foundation servers, where they now route all WebSocket traffic in a round-robin fashion to the connected ThingWorx Foundation servers. Depending upon the various use cases, one could use the multiple ThingWorx Connection Services available, such as ThingWorx Connection Server, ThingWorx Azure IoT Hub Connector, and ThingWorx Protocol Adapter Toolkit. Please see here for further details. Please note that for ThingWorx 9.0 releases, ThingWorx Connection Server is required in an active-active configuration to support all WebSocket-based traffic routing, including egress of files and device messages from multiple ThingWorx Foundation servers back to the devices; it also serves the WebSocket communication from ThingWorx Mashup-based applications to ThingWorx Composer.
- ThingWorx Foundation: With ThingWorx 9.0 and onwards, you can set up ThingWorx Foundation servers in an N-active-active cluster model to provide higher availability to your applications and horizontally scale the Foundation server nodes up and down based on your scalability needs.
- Apache ZooKeeper: Apache ZooKeeper is a centralized service for maintaining configuration information and naming, as well as providing distributed synchronization and group services. It is a coordination service for distributed applications that enables synchronization across a cluster. Specific to ThingWorx, ZooKeeper is used for distributed locking, selecting a singleton server during server initialization, and service discovery for Apache Ignite, allowing it to find instances of ThingWorx Foundation servers.
- Apache Ignite: This offers a distributed cache for the active-active cluster setup. It is used by ThingWorx Foundation servers to share state. It may be embedded with each ThingWorx instance or run as a standalone cluster for larger scale. In this configuration, Ignite is set up in a standalone cluster but can be run embedded within ThingWorx Foundation. Running Ignite in a standalone cluster is more suitable for larger scale, as it supports higher vertical scale of memory in the deployment setup.

The Data Layer
ThingWorx is largely database agnostic. You can continue to use the officially supported persistence providers that you may already be using in your existing deployments based on ThingWorx 8.X or previous releases. Please look out for an upcoming ThingWorx update, as well as enhanced installation documents, to help with your upgrade and migration questions with the general availability of ThingWorx 9.0. Please note that this diagram does not make the distinction between model and data providers; depending on your data ingestion needs, separate model and data providers can be used. As a reminder, all databases should be deployed in a high-availability configuration to help eliminate any single point of failure.

In closing, we can’t wait to launch active-active clustering in 9.0 to help you:
- dramatically reduce application downtime even further
- scale your deployments and more efficiently manage your apps, regardless of where they’re deployed

If you have any questions about active-active clustering or its architecture, please do not hesitate to reach out!

Stay connected!
Kaya
Still not sure what the reference benchmarks have to offer? Check out this short video abstract reviewing the purpose of the reference benchmarks, some notes on how to read the guide, and information about what these guides offer. Then, check out the reference benchmark for remote monitoring here!
Developing with Axeda Artisan (for Axeda Platform, v6.8 and later)

Axeda Artisan is a development tool based on the Apache Maven build system. Artisan includes authoring, management, and deployment mechanisms that define and manage the development of Axeda Platform extension points, allowing you to flexibly install, update, and uninstall many types of objects on the Axeda Platform.

The Components of Artisan
The Artisan framework is comprised of several components:
- The Axeda Platform: Each instance of the Axeda Platform contains libraries and sample Artisan Projects (archetypes) that are downloaded by developers to author and deploy their own Artisan Projects.
- The Artisan Project: An Artisan Project is a collection of content and configuration files that are used to define a set of objects that will be installed on the Axeda Platform.
- The Artisan Installer: The Artisan Installer is a flexible, configurable application that uses the Apache Maven build life-cycle to upload the contents of an Artisan Project to the Axeda Platform.
- Axeda Platform APIs: The Artisan Installer interacts with the Axeda Platform and uses Axeda APIs to install, upgrade, or uninstall Artisan Projects.

How You Work With Artisan
The goal of using Artisan is to develop an Artisan Project that will become an object or application that runs on the Axeda Platform. There are several phases involved in creating an Artisan Project:
- Preparing Your Development Environment: When first working with Artisan, you need to prepare your development environment by installing or configuring Java, Apache Maven, and any other required development tools.
- Creating an Artisan Project: Artisan uses archetypes as the basis for Artisan Projects. Archetypes are used to generate an Artisan Project that contains the correct type of development framework, which you then modify to create your own Artisan Project:
  - The Hello World archetype provides a sample project you can use to create common objects on the Axeda Platform.
  - The Machine Streams archetype provides a sample project you can use to configure machine streams for your Axeda Platform.
- Installing an Artisan Project on the Axeda Platform: The Artisan Installer packages the contents of the Artisan Project and deploys them on the Axeda Platform.
- Testing the Artisan Project: Once installed, you can test your Artisan Project to verify its functionality and identify any issues that need to be addressed. It is common for an Artisan Project to be installed on the Axeda Platform several times as issues are identified and corrected during development.
- Creating a Stand-alone Installer: Once ready for deployment to production, the Artisan Installer is used to create a stand-alone installer for distributing the project contents to other instances of the Axeda Platform.

Where to Go from Here
To begin working with the Artisan framework to create your own projects, refer to the following:
- Artisan Landing Page: A page hosted on the Axeda Platform that introduces Artisan and serves as a starting point for creating an Artisan Project. The Artisan Landing Page can be found at the following location: http://instance_name:port/artisan, where instance_name:port is the IP address and port of an instance of the Axeda Platform.
- Axeda® Artisan Developer’s Guide: A reference that introduces Artisan, describes how to prepare your development environment, and provides detailed technical information concerning creating an Artisan Project and configuring the Artisan Installer. The Axeda® Artisan Developer’s Guide is available with all Axeda product documentation from the PTC Support site, http://support.ptc.com/.
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage. This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine streams message is logged to stdout.

Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project
The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites
To download, build, and compile the machine-streams-data-relay project, you will need the following:
- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide", available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring the Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" Reference Guide (available from PTC Support (http://support.ptc.com/)).

Building the Project
This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.

Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz

# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3

Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip

Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config.properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints - configAMQ.properties

# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties

# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

brokerURL
- Description: Location of the ActiveMQ or Azure Service Bus server (broker).

queueName
- Description: Name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages.
- Value: To define a single queue: MachineStream.<single queue name>, e.g. MachineStream.stream01. To define a wildcard queue name for multiple queues (ActiveMQ): MachineStream.> Multiple queues can also be listed separated by a comma (MachineStream.stream01,MachineStream.stream02) or as a range, e.g. queueName=MachineStream.stream[01-50]. (If you have multiple queues and want to use ASB, you must use the multiDestination listener type and the range syntax.)

username
- Description: Username used to connect to the ActiveMQ or Azure Service Bus queue(s).
- Value: For ActiveMQ: username=axedaadmin. For ASB: username=your-azure-service-bus-username.

password
- Description: Password used to connect to the ActiveMQ or Azure Service Bus queue(s).

numConnections
- Description: Number of ActiveMQ or Azure Service Bus broker connections.
- Value: Default is 10 broker connections.

numSessionsPerConnection
- Description: The number of sessions per connection. Note that each session will create a separate thread. (This key is used infrequently.) APPLICABLE TO ACTIVEMQ ONLY.
- Value: Default is 5 sessions per connection.

numProcessingThreads
- Description: The number of concurrent threads used for processing machine streams messages.
- Value: Default is 100 concurrent threads.

messageListenerContainerType
- Description: The type of message listener container.
- Value: default = single queue name per connection; multiDestination = supports multiple queue names per connection.

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.

For Linux:
# mvn package -DskipTests

For Windows:
c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin.tar.gz (or *bin.zip) archive, and enter the resulting directory.

For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3

For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application.

For Linux:
# ./machineStreamsDataRelay.sh <config properties file>
e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties

For Windows:
c:\> machineStreamDataRelay.bat <config properties file>
e.g. machineStreamDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin 2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5 If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner 2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers 2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers 2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers 2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue 
consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers 2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis 2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO 
[MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers 2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis 7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.) 
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor
By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This class defines the message processor method callback that will be called for message processing.
 * Note that these methods will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
    /**
     * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
     * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the potential that
     * MessageListenerService threads will also block. When the MessageListenerService threads block, this means that
     * messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large number of
     * messages, then you may need to adjust your configuration parameters or optimize your processMessage() code.
     * @param message machine streams message to process
     */
    public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can provide your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business
 * logic. To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor")
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

    /**
     * (non-Javadoc)
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     *
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the
     * potential that MessageListenerService threads will also block. When the MessageListenerService
     * threads block, this means that messages will start to back up in the ActiveMQ or Azure Service Bus
     * message queues. If you are processing a large number of messages, then you may need to adjust your
     * configuration parameters or optimize your processMessage() code.
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports 4 different message types:
- StreamedDataItemMessage
- StreamedAlarm
- StreamedMobileLocation
- StreamedRegistrationMessage

For each of the different message types, you should add your message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file. Once you have completed your changes to the CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:

Uncomment this line: // @Qualifier("customMessageProcessor")
Comment this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // If you want to use the CustomMessageProcessor instead of the default LogMessageProcessor
    // then change this Qualifier to @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
Complete information about installing and using the Axeda Machine Streams Data Relay Project is available here. This page provides the files for the Axeda Machine Streams Data Relay Project:

For Linux users:
- ready to run (bin) files: download the machine-streams-data-relay-1.0.3-bin.tar_.gz archive
- full Maven project files: download the machine-streams-data-relay-1.0.3-project.tar_.gz archive

For Windows users:
- ready to run (bin) files: download the machine-streams-data-relay-1.0.3-bin.zip archive
- full Maven project files: download the machine-streams-data-relay-1.0.3-project.zip archive

From the links below, select the project you want to use.
Whether your ThingWorx instances are deployed on premise, in the cloud, or a hybrid of the two, I’d like you to imagine: You have a super cool app. You want to deploy it securely. You’re not a security expert. What do you do? How do you know how to securely deploy your app? Where do you go for security best practices? Introducing the new ThingWorx Secure Deployment Hub!

The ThingWorx Secure Deployment Hub is a new section of our support site that will introduce you to the ThingWorx security landscape and direct you to security resources pertaining to the Edge, the platform and beyond.

From permissions and provisioning, to subsystems and SSO, the hub is packed with our recommendations and best practices for you to deploy your app in a secure fashion.

Happy deploying!
Kaya
The Property Set Approach This article details an approach developed by Prachi Rath and Roy Clarke, refined by the EDC team in the December 2019's Remote Monitoring of Assets Reference Benchmark , and used to handle multi-property business rules in an Enterprise ThingWorx application.   Introduction If there are logic rules which depend upon multiple properties, and each property receives its updates one at a time, then each property will need to have an identical subscription, because there is no way for any one subscription to know the most up-to-date values for the other properties. This inefficient approach would create redundancy and sizing constraints, reducing the capacity of the application to scale up to the Enterprise level. The Property Set Approach resolves this issue by sending in all property updates as one Info Table or JSON property (called the “property set”), which can then have a single subscription. The property set is assembled on the Edge when an update needs to be sent, and then the Platform dissects, processes, and stores the data within this property set as required by the business logic.   This approach also involves caching the last property value into a runtime variable so that it can be referenced within the business logic subscription without having to be retrieved from the database. This can significantly improve the runtime of the subscription, reducing the number of resources required to sustain the business logic and ensuring that any alerts or events resulting from the business logic occur as soon as possible. It also reduces the load on the database, ensuring that data ingestion can complete unhindered.   So, while there are many benefits to this approach, it is also more complicated. It tightly couples the development of the Edge and Platform code and increases the application complexity, making it slightly less easy to maintain the application in the long run. The property set also requires a little more bandwidth and a more stable internet connection between Edge devices and the Platform since there is more metadata in an Info Table property, and therefore every update is slightly larger than it would be otherwise. So this approach is only recommended when multi-property rules are a requirement of the application and a stable internet connection exists between the Edge and Platform.   Platform Implementation I. Create an Info Table (or JSON) Property This tutorial uses the out-of-the-box Data Shape called NamedVTQ for the Info Table property, which is defined on a Thing Template as a remote property. It is important that this is not marked as persistent or logged, as the purpose is to reduce the amount of database writes and reads required by the Platform. The Info Table property has the following property definition:     <PropertyDefinition aspect.dataChangeType="ALWAYS" aspect.dataShape="NamedVTQ" aspect.isPersistent="false" baseType="INFOTABLE" isLocalOnly="false" name="numberPropertySetAsInfotable"/>       II. 
II. Create a Data Change Event Subscription for the Info Table Property
The subscription has three parts:
1. Cache the last value for the property in a runtime variable
2. Start off the business rules processing, sending in the whole Info Table
3. Send the Info Table to be logged as individual local properties in the database

// First step caches the last value; refer to the next section (Set-Up Caching)
// Second step sets off the business rules processing with the Info Table
me.ScaleTestBusinessRuleForPropertySetAsInfotable({ PropertySetAsInfotable: eventData.newValue.value });
// Third step sends the Info Table as one property into a service which parses it into the
// individual properties, updating both the runtime properties on the remote thing and the database
me.UpdatePropertyValues({ values: eventData.newValue.value /* INFOTABLE */ });

III. Set-Up Caching
Each property which needs to be cached should be created on the Thing Template level and named in a similar way, say by placing the word “Last” at the end, such as “Property1” => “Property1Last”, “Property2” => “Property2Last”, etc. These properties should NOT be logged or persistent, as the point is to store the most recent value in memory, removing any superfluous dependency on database queries in the process. Note that while storing the property in runtime memory makes it much more accessible, it also means that the property needs to be repopulated manually upon Platform restart. Additional code (not provided here) must be written to populate these properties from the database upon application start-up.

The following code should be placed in the data change event subscription (option 1 in the case where only a few properties need caching, or option 2 if every property value needs to be cached):

Option 1: Some but Not All Properties Need Caching

// Names of properties for which you want to cache the last value
var propertyNames = ['number1', 'number2'];
// Loop through the properties and cache their last values if they are found in the property set
propertyNames.map(assignLast);

// This function can be split into two functions for Age and Last separately if need be
function assignLast(propertyName) {
    logger.debug("Looping for property -> " + propertyName);
    var searchprop = new Object();
    searchprop.name = propertyName;
    property = eventData.newValue.value.Find(searchprop);
    if (property) {
        logger.debug("Found Row. Name= " + property.name);
        var lastPropertyName = propertyName + "Last";
        if (property.value) {
            // Set the cache property on me, this entity, to the current property value
            me[lastPropertyName] = me[propertyName];
        }
    } else {
        logger.debug("Property Not Found in property set -> " + propertyName);
    }
}

Option 2: All Properties Need Caching

var rowCount = eventData.newValue.value.getRowCount();
for (var i = 0; i < rowCount; i++) {
    logger.warn("property name -> " + eventData.newValue.value[i].name +
                " ----- property new value -> " + eventData.newValue.value[i].value.value);
    var propertyName = eventData.newValue.value[i].name;
    var lastPropertyName = propertyName + "Last";
    me[lastPropertyName] = me[propertyName];
    logger.warn("done last subscription, last property value for " + lastPropertyName + " = " + me[lastPropertyName]);
}

Useful Platform Code Snippets
I. Age Calculation

var date1 = new Date();
var date2 = me.GetPropertyTime({ propertyName: propertyName /* STRING */ });
var result = millisToMinutesAndSeconds(dateDifference(date1, date2));

// This function converts a large number of milliseconds into a value formatted in minutes and seconds
function millisToMinutesAndSeconds(millis) {
    var minutes = Math.floor(millis / 60000);
    var seconds = ((millis % 60000) / 1000).toFixed(0);
    return (seconds == 60 ? (minutes + 1) + ":00" : minutes + ":" + (seconds < 10 ? "0" : "") + seconds);
}

II. Sort the Info Table by Time

var params = {
    sortColumn: "time" /* STRING */,
    t: me.propertySet /* INFOTABLE */,
    ascending: ascending /* BOOLEAN */
};
var result = Resources["InfoTableFunctions"].Sort(params);

III. Search the Info Table for a Property

var searchprop = new Object();
searchprop.name = propertyName;
property = PropertySetAsInfotable.Find(searchprop);
if (property === null) {
    logger.info("Property Not Found -> " + propertyName);
} else {
    logger.info("Found Row. Name= [" + property.name + "], value= " + property.value.value);
}

Edge Implementation
This example implementation uses the .NET Edge SDK to build a property set Info Table at the Edge.

I. Define the Data Shape
A standard Data Shape is used (NamedVTQ), but because this Data Shape is not exposed in the Edge SDK code, it has to be created manually.

// Data Shape definition for NamedVTQ
FieldDefinitionCollection namedVTQFields = new FieldDefinitionCollection();
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_NAME, BaseTypes.STRING));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_VALUE, BaseTypes.VARIANT));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_TIME, BaseTypes.DATETIME));
namedVTQFields.addFieldDefinition(new FieldDefinition(CommonPropertyNames.PROP_QUALITY, BaseTypes.STRING));
base.defineDataShapeDefinition("NamedVTQ", namedVTQFields);

II. Define the Info Table Property
The property defined should NOT be logged or persistent, and it can be read-only, since data is always pushed from the Edge and read from the server cache when accessed on the Platform. Note that the push type of the Info Table property MUST be set to "ALWAYS" (if set to "VALUE", the data change event will only fire if the number of rows changes).

// Property Set Definition
[ThingworxPropertyDefinition(
    name = "DevicePropertySet",
    description = "Alternative representation of properties as an Info Table for rules processing",
    baseType = "INFOTABLE",
    category = "Status",
    aspects = new string[] {
        "isReadOnly:true",
        "isPersistent:false",
        "isLogged:false",
        "dataShape:NamedVTQ",
        "cacheTime:0",
        "pushType:ALWAYS"
    }
)]

III. Define a Property to Store the GOOD Quality Status

private static String QUALITY_STATUS_GOOD = QualityStatus.GOOD.name();

IV. Define Functions to Populate the Value Collections
An Info Table is really just made up of many Value Collections, where each Value Collection is considered a row. These functions take in the name and value of a property and return a Value Collection object which can be added to the property set Info Table.
public ValueCollection createNumberValueCollection(String name, double value)
{
    ValueCollection vc = new ValueCollection();
    // Add quality and time entries to the Value Collection
    vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD);
    vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow));
    vc.SetStringValue(CommonPropertyNames.PROP_NAME, name);
    vc.SetNumberValue(CommonPropertyNames.PROP_VALUE, value);
    return vc;
}

public ValueCollection createBooleanValueCollection(String name, Boolean value)
{
    ValueCollection vc = new ValueCollection();
    // Add quality and time entries to the Value Collection
    vc.SetStringValue(CommonPropertyNames.PROP_QUALITY, QUALITY_STATUS_GOOD);
    vc.SetDateTimeValue(CommonPropertyNames.PROP_TIME, new DatetimePrimitive(DateTime.UtcNow));
    vc.SetStringValue(CommonPropertyNames.PROP_NAME, name);
    vc.SetBooleanValue(CommonPropertyNames.PROP_VALUE, value);
    return vc;
}

V. Build the Property Set
Call this code from the processScanRequest method to build the property set.

// Create an instance of a new Info Table using the standard "NamedVTQ" Data Shape
InfoTable propertySet = new InfoTable(getDataShapeDefinition("NamedVTQ"));
// Set name/value for Temperature using the convenience function
propertySet.addRow(createNumberValueCollection("Temperature", temperature));
// Set name/value for Pressure using the convenience function
propertySet.addRow(createNumberValueCollection("Pressure", pressure));
// Set name/value for TotalFlow using the convenience function
propertySet.addRow(createNumberValueCollection("TotalFlow", this._totalFlow));
// Set name/value for InletValve using the convenience function
propertySet.addRow(createBooleanValueCollection("InletValve", inletValveStatus));
// Set name/value for FaultStatus using the convenience function
propertySet.addRow(createBooleanValueCollection("FaultStatus", faultStatus));
// Set the property set Info Table property
base.setProperty("DevicePropertySet", propertySet);

VI. Update the Subscribed Properties
These two lines of code update the properties and events, actually sending the property set (containing all property updates) to the Platform.

base.updateSubscribedProperties(15000);
base.updateSubscribedEvents(60000);

Conclusion
Following these steps will enable the Edge to build a property set before sending any property updates to the Platform. The Platform can then rely on caching to process the business logic without any database dependency, which is faster and more efficient than having each rule query the database for the other property values. The updates are still written to the database afterwards, so in the end there is no functional difference between using a property set and binding each property individually. Please don't hesitate to comment here with any questions about this approach.
View full tip
Check out the video below, which explores the Creo As A Service (CAAS) feature of the Product Insight extension. It allows Creo analysis computations to be retrieved inside ThingWorx through the Analytics Manager framework.
View full tip
Joint Study by Cognizant and the Economist Intelligence Unit (EIU) - Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this quickly intensifying and accelerating trend.
View full tip
Time series prediction uses a model to predict future values based on previously observed values. Time series data differs somewhat from non-time series data in both the formatting of the data and the training of predictive models. This article highlights several important considerations when dealing with time series data.

Preparing Time Series Data
The data must contain exactly one field with Op Type “TEMPORAL” and one field with Op Type “ENTITY_ID”, which defines the identifier for an entity, such as a machine serial number. The ENTITY_ID field should remain the same as long as there are no missing timestamps and it is within the same asset, but should be different for different assets or asset runs in order to accurately assign history during model training and scoring.

The TEMPORAL field is a numeric field indicating the order of the data rows for a specific entity. One should also ensure that the data is formatted such that the timestamps are equally spaced (for example, one data point every minute) and that no gaps exist in the sequence of numbers.

If there are gaps in the time series data, it is recommended to restart the series after the gap as a new entity. Alternatively, if the gap is small enough (a few data points), linear interpolation based on the gap endpoint values within the same entity is generally acceptable (see the sketch at the end of this article).

Model Creation in Time Series
When creating a time series model in Analytics Builder, you will be asked to specify a lookback size and a lookahead parameter. The lookback size determines how many historical data points (including the current row) will be used in the model. The lookahead indicates how many time steps ahead to predict. If the value of the goal variable is not known at the time of scoring, unchecking Use Goal History will use the goal column during training but not its history during scoring.

Time series models can also be created in services using the Training Thing. The lookback size and lookahead parameter are specified in the CreateJob service. The virtualSensor field is used to indicate whether the model should be trained to predict values for a field that will not be available during scoring. For example, one can train a time series model to predict Volume using evolving Temperature and Pressure, based on sensor data for these three variables over a period of time. However, the Volume sensor may be removed from further assets in order to reduce costs, and the predictive model can be used instead.

Two important considerations:
1. ThingWorx Analytics will expand historical data in the time series into new columns. This process creates new features using the values of the previous time steps. Additionally, low-order derivatives, together with average and standard deviation features, are computed over small contiguous subgroups of the historical data. The expansion process can make the dataset exceptionally wide, so time series training is generally significantly slower than training with no history on the same dataset. This is exacerbated when lookback size = 0 (auto-windowing, a process where the system tries to find the optimal lookback). If there are columns that do not change or change infrequently (such as a device serial number or the zip code of the device’s location), these should be marked as Static when importing the data. Any columns labeled Static will not be expanded to create new features. Care also needs to be taken to exclude any features that are known to be irrelevant to the prediction.
2. Using a large lookback can reduce the number of examples/entities the model has available for training. For example, if a lookback of 8 is used, then any entities that have fewer than 8 examples will not be used in training. For the same reason, scoring for time series produces fewer results than the number of rows provided as input: if 10 rows are provided and the lookback is 6, then only 5 predictions will be produced.
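As mentioned above, small gaps can be filled by linear interpolation between the gap endpoints before the data is uploaded. Below is a minimal sketch of such a pre-processing step, written as plain JavaScript over an illustrative array of {time, value} rows; the field names, the one-minute step, and the three-point gap threshold are assumptions for the example and are not part of the Analytics API.

// Hypothetical pre-processing sketch: fill small gaps in one entity's series by
// linear interpolation between the gap endpoints; larger gaps should instead start a new entity
function fillSmallGaps(rows, step, maxMissing) {
    var result = [];
    for (var i = 0; i < rows.length; i++) {
        result.push(rows[i]);
        if (i < rows.length - 1) {
            // Number of missing points between this row and the next one
            var missing = Math.round((rows[i + 1].time - rows[i].time) / step) - 1;
            if (missing > 0 && missing <= maxMissing) {
                var slope = (rows[i + 1].value - rows[i].value) / (missing + 1);
                for (var j = 1; j <= missing; j++) {
                    result.push({ time: rows[i].time + j * step, value: rows[i].value + j * slope });
                }
            }
        }
    }
    return result;
}

// Example: one-minute spacing with two points missing between t=3 and t=6
var filled = fillSmallGaps([{ time: 1, value: 10 }, { time: 2, value: 12 }, { time: 3, value: 14 }, { time: 6, value: 20 }], 1, 3);
// filled now also contains interpolated rows at t=4 (value 16) and t=5 (value 18)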
View full tip
With the advent of faster and cheaper hardware, and owing to vast improvements in connectivity such as Gigabit networks and 5G, data is collected and processed at an ever increasing pace. As such, there is a related need for the Analytical Models built on top of such data to evolve over time. As part of this evolution, modelling experiments with more data, new variables, new techniques, and different training parameters will be performed. The resulting models need to be tracked, monitored, and appropriate versions need to be used for scoring in various scenarios. This creates the need for version control of Analytical Models.

ThingWorx Analytics makes it easy to implement an “Analytics Model Version Control” system via two mechanisms:
- A job Id and timestamp based identification system, so if a model with the same jobName (model name) is retrained, the old model will still exist and can be easily retrieved
- A tagging mechanism for the above jobs

Before using the above techniques to version control Analytics Models, it is critical to decide what represents “the same model” to be versioned. Unlike in the world of Software Engineering, where the concept to be versioned is a file or folder that evolves over time, in the world of Analytics Models there could be several, potentially customer specific definitions. For example, one definition could be “same training parameters, but trained on the latest, most comprehensive data available”. Yet another, more relaxed definition could be “any training parameters, but same training dataset and goal” or even “any training parameters, on any historical version of the training dataset”.

Once this concept is agreed upon within the customer's organization, and if training is done in a ThingWorx service by calling the APIs, then for a given jobName (model name) one can simply query the tags for a LatestVersion type tag, increment it, and create the new model with the same jobName and the incremented tag. Any model version with the same jobName and its corresponding performance metrics can then be accessed using the tag. Additional tags (such as techniques used, dataset version, etc.) can be added if desired to make retrieval of context dependent models more efficient.
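To make the tag-based workflow concrete, here is a minimal sketch of the increment-and-retrain step, assuming the relaxed definition of “the same model” above. QueryModelTags and SubmitTrainingJob are hypothetical helper services wrapping the Analytics training APIs (they are not built-in services); the point of the example is only the tag-increment logic.

// Minimal sketch of tag-based model versioning; QueryModelTags and SubmitTrainingJob
// are hypothetical wrappers around the Analytics training APIs
var jobName = "PumpFailureModel";

// 1. Find the highest existing version tag for this jobName, e.g. "LatestVersion:3"
var tags = me.QueryModelTags({ jobName: jobName }); // hypothetical helper, returns one tag per row
var latest = 0;
for (var i = 0; i < tags.rows.length; i++) {
    var match = tags.rows[i].tag.match(/^LatestVersion:(\d+)$/);
    if (match) {
        latest = Math.max(latest, parseInt(match[1], 10));
    }
}

// 2. Train a new model under the same jobName with an incremented version tag,
//    plus optional context tags that make later retrieval easier
me.SubmitTrainingJob({
    jobName: jobName,
    tags: ["LatestVersion:" + (latest + 1), "dataset:v12", "technique:gradientBoost"]
});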
View full tip
While I was writing my previous post about Purging ValueStream entries from Deleted Things, it occurred to me that a similar issue would affect those using InfluxDB as the persistence provider for their ValueStreams.

I wanted to take the opportunity, while the logic is still fresh in my mind, to extend the utility to do the same thing with InfluxDB. I also used the opportunity to create a dynamic InfoTable, as it's been a while since I did that. It also shows how to interact directly with InfluxDB through its API (see the sketch at the end of this post).

The modification is based on a new Thing called influxDBConnector, which has the following services:
- dropMeasurements: Deletes all the DB entries related to a Thing
- getAllEntries: Queries all entries related to a Thing; string output
- getAllEntriesTable: Queries all entries related to a Thing; InfoTable output
- getMissingThings: Queries the ThingWorx Things tables and the InfluxDB measurements, and filters the result to only the measurements that are not in the Things table
- showMeasurements: Gets all the measurements in InfluxDB

It also has the following properties:
- database: Influx database name used to persist the value stream data
- InfluxLink: hostname:port of the InfluxDB server

Like the previous example, I created a sample mashup (purgeInfluxDBStreamsMashup) to help execute the services.

Obviously this utility expects that there's an existing Influx persistence provider.

Once more: these services affect the DB directly and need to be used carefully, and only by Administrators. Make sure you fully test it before using it in production.

The attached XML contains the complete utility for PostgreSQL and InfluxDB.

Hope it helps,
Ewerton
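PS: for anyone wondering what calling the InfluxDB API directly looks like from a Platform service, here is a minimal sketch of how dropMeasurements might issue its query, assuming an unsecured InfluxDB 1.x /query endpoint, that each Thing maps to a measurement of the same name, and that thingName is a service input; authentication and error handling are omitted.

// Minimal sketch: drop all InfluxDB entries for one Thing by sending a DROP MEASUREMENT
// query to the InfluxDB 1.x HTTP API, using the connector's InfluxLink and database properties
var query = 'DROP MEASUREMENT "' + thingName + '"';

var response = Resources["ContentLoaderFunctions"].PostText({
    url: "http://" + me.InfluxLink + "/query?db=" + encodeURIComponent(me.database) +
         "&q=" + encodeURIComponent(query),
    content: ""
});

// Return the raw InfluxDB response so the caller can check for errors
result = response;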
View full tip
Hi everyone,

If you’ve been curious about Solution Central, our new cloud tool for application management and deployment, and wondered how it can help your existing apps, then get ready because we have a feature for you! Our initial release of Solution Central focused on deploying apps built on 8.5 or later. Today, whether big or small, round or square, red or blue, built by PTC or by you, all apps can be deployed using Solution Central. Using Operator Advisor 8.4? We got you covered. Using an old extension? No worries. Using an app built on 7.2? Not a problem. Solution Central can deploy all apps.

This ability to deploy any app, regardless of which version it was created on, is just one of the many improvements we’re releasing to help you accelerate your deployment. You can take advantage of this latest capability in ThingWorx 8.5.3 and Solution Central 1.0.3. Try it out today!

We’re not stopping there, though.

We have a lot that we’re working on and wanted to share a quick glimpse of what you can expect in the very near future:
- ability to manage solution lifecycle through deletion of solutions and their versions
- ability to secure and manage connected environments through improved activation and deactivation services
- advanced administrative capability to withdraw and re-initiate deployment requests

The above functionality, along with additional enhancements, is targeted for TW8.5.5 and SC1.1.0 in early April.

These improvements will help to maximize the time savings and overall value you derive from Solution Central’s unique deployment services.

As always, stay connected,
Kaya
View full tip