
IoT Tips

ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS, to your own C application built on the C SDK, to SDKs for many popular languages. But what can you do if the device you want to collect data from is so small that it needs a very lightweight delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield to stream real-time data directly to ThingWorx, once the MQTT Marketplace Extension is installed on your server. To learn more about how this kind of solution works, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf Hopefully, it can help others out who want to create this kind of solution as well.
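To give a feel for how small the device-side footprint can be, here is a minimal, hypothetical Arduino sketch using the popular PubSubClient MQTT library; the broker address, port, topic, and client ID are placeholders, and the topic-to-property mapping is configured on the server side in the MQTT extension:

  #include <SPI.h>
  #include <Ethernet.h>
  #include <PubSubClient.h>   // widely used Arduino MQTT client library

  byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
  EthernetClient ethClient;
  PubSubClient client(ethClient);

  void setup() {
    Ethernet.begin(mac);                           // obtain an address via DHCP
    client.setServer("broker.example.com", 1883);  // placeholder MQTT broker
  }

  void loop() {
    if (!client.connected()) {
      client.connect("arduinoSensor01");           // placeholder client ID
    }
    char payload[16];
    itoa(analogRead(A0), payload, 10);             // raw sensor reading as text
    client.publish("demo/temperature", payload);   // placeholder topic
    client.loop();                                 // service the MQTT connection
    delay(5000);                                   // publish every 5 seconds
  }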
Hiya,   I recently prepared a short demo which shows how to onboard and use Azure IoT devices in ThingWorx and added some usability tips and tricks to help others who might struggle with some of the things that I did. The good news... I recorded and posted it to YouTube here.
• Connect Azure IoT Hub with ThingWorx (to be updated soon for 9.0 release)
• Using the Azure IoT Dev Kit with ThingWorx
• Getting the Azure IoT Hub Connector Up and Running (V3/8.5)
Enjoy, and don't hesitate to comment with your own tips and feedback.
Cheers,
Greg
Hi Community,   Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I’d change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.
Components required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure Cloud-hosted ThingWorx Platform
The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector would pick up the incoming messages and pass them on to the cloud ThingWorx, which would decode the MQTT payload and map it to Thing property updates.
The only part that I presently don’t like about this approach is that you’ll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done.
* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK.
Regards,
Greg Eva
Hi Community,   I've recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach was. After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.
PTC currently supports three methods for integrating with MQTT for IoT projects:
1. ThingWorx Azure IoT Hub Connector
2. ThingWorx MQTT Extension
3. ThingWorx Kepware Server
Choice is nice, but it adds complexity and sometimes confusion. The intent of this article is to clarify and provide direction on the subject to help others choose the path best suited for their situation.
ThingWorx MQTT Extension
The ThingWorx MQTT extension has been available on the marketplace as an unsupported “PTC Labs” extension for a number of years. Recently its status has been upgraded to “PTC Supported” and it has received some attention from R&D, getting some bug fixes and security enhancements. Most people who have used MQTT with ThingWorx are familiar with this extension. As with anything, it has advantages and disadvantages. You can easily import the extension without having administrative access to the machine, it’s easy to move around and store with projects, and it can be up and running quite quickly. However, it is also quite limited when it comes to the flexibility required when building a production application, is tied directly to the core platform, and does not get feature/functionality updates.
The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes as it provides MQTT integration relatively quickly and easily. As an extension which runs with the core platform, it is not a good choice as part of a client/enterprise application where MQTT communication reliability is critical.
ThingWorx Azure IoT Hub Connector
Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge. This makes it an interesting option: MQTT devices publish to Azure IoT and are integrated to ThingWorx using the Azure IoT Hub Connector, without actually requiring an MQTT broker to run and be maintained. The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.
When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site, with the ability to handle Internet interruptions without losing data.
This approach also offers far greater integrated security capabilities, by leveraging certificates and tying into Azure Key Vault, as well as easily scaling up the resources receiving the MQTT messages (IoT Hub and the Azure IoT Hub Connector). Considering that this approach is built on the Connection Server core, it also follows our deployment guidance for processing communications outside of the core platform (unlike the extension approach).
ThingWorx Kepware Server
As some will note, Kepware has some pretty awesome MQTT capabilities, as both north- and southbound interfaces. The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT, with auto-tag creation (from the MQTT payload). Coupled with the native ThingWorx AlwaysOn connection, you can easily connect Kepware to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.
The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your Kepware-connected devices to an MQTT broker or endpoint. The MQTT agent can also receive tag updates on a different topic and write back to the controllers. We’ve used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.
ThingWorx Product Segment Direction
A key factor in deciding how to design your solution should be alignment with our product development direction. The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon. I mention this to make it clear that not all supported approaches carry the same weight. Although we do support the MQTT extension, it is not in active development, due to the fact that out-of-platform, microservices-based communication interfaces are our direction forward.
The Azure IoT Hub Connector, being built on the Connection Server, is currently the way forward for MQTT communications to ThingWorx Foundation.
Regards,
Greg Eva
I created this recently for another group - might be useful: Video Link to Create a Database Connection to Your Postgres ThingWorx Database
When using the Auto-bind section of an EMS configuration it is very important to note the difference between "gateway":true and "gateway":false. Either gateway value, when used with a valid "name" field, will result in the EMS attempting to bind the Thing with the ThingWorx platform, and will allow the EMS to respond to file transfer and tunnel services related to the auto-bound things, but this is roughly where the similarities end.
Non-Gateway: This type of auto-bound thing can be thought of as a placeholder, because the EMS will still require a LuaScriptResource to be bound in order to respond to property/service/event related messages. There must be a corresponding Thing based on the RemoteThing template (or any RemoteThing-derived template, e.g. RemoteThingWithFileTransfer) on the ThingWorx server in order for the bind to succeed. There are many reasons to use this type of auto-bound thing, but the most common is to bind a simple thing that can facilitate file transfer and tunnel services but does not need any custom services, properties, or events that would be provided by custom Lua scripts within the LuaScriptResource.
Gateway: An auto-bound gateway can be bound to the ThingWorx platform ephemerally if there is no Thing present to bind with on the platform. To clarify, if no Thing exists with the matching Thing name on the platform, and the EMS is attempting to bind a Gateway, a Thing will be automatically created on the platform to bind with the auto-bound gateway. This newly created ephemeral thing will only be accessible through the ThingWorx REST API, and once the EMS unbinds the gateway the ephemeral thing will be deleted. If there is a pre-existing Thing on the ThingWorx server, then it must be based off of the EMSGateway template in order for the bind to succeed. The EMSGateway template, used both normally and ephemerally, will provide some gateway-specific services that would otherwise be inaccessible to a normal remote thing. See EMSGateway Class Documentation for more details.
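For reference, here is a minimal sketch of what the auto_bind section of an EMS config.json could look like; the Thing names are placeholders, and other optional fields (host, port, etc.) are omitted:

  {
    "auto_bind": [
      { "name": "PlaceholderThing", "gateway": false },
      { "name": "EdgeGateway", "gateway": true }
    ]
  }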
The App URI in the ThingWorx Remote Thing Tunnel configuration specifies the endpoint of the specified tunnel. The default value (/Thingworx/tunnel/vnc.jsp) will point to the built-in ThingWorx VNC client that can be downloaded through the Remote Access Widget in a Mashup to provide VNC remote desktop access. Leaving the App URI blank will result in the tunnel being connected to the listen port on the user's machine as specified in the Remote Access Widget. In this case the user must supply the application client (e.g. an SSH client) in order to connect to the tunnel endpoint.
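For example, if the widget is configured with a local listen port of 9999 for an SSH tunnel, the user would then point any standard SSH client at that port (the port and username here are hypothetical):

  ssh remoteuser@127.0.0.1 -p 9999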
Developing with Axeda Artisan (for Axeda Platform, v6.8 and later)
Axeda Artisan is a development tool based on the Apache Maven build system. Artisan includes authoring, management, and deployment mechanisms that define and manage the development of Axeda Platform extension points, allowing you to flexibly install, update, and uninstall many types of objects on the Axeda Platform.
The Components of Artisan
The Artisan framework is comprised of several components:
The Axeda Platform - Each instance of the Axeda Platform contains libraries and sample Artisan Projects (archetypes) that are downloaded by developers to author and deploy their own Artisan Projects.
The Artisan Project - An Artisan Project is a collection of content and configuration files that are used to define a set of objects that will be installed on the Axeda Platform.
The Artisan Installer - The Artisan Installer is a flexible, configurable application that uses the Apache Maven build life-cycle to upload the contents of an Artisan Project to the Axeda Platform.
Axeda Platform APIs - The Artisan Installer interacts with the Axeda Platform and uses Axeda APIs to install, upgrade, or uninstall Artisan Projects.
How You Work With Artisan
The goal of using Artisan is to develop an Artisan Project that will become an object or application that runs on the Axeda Platform. There are several phases involved in creating an Artisan Project:
Preparing Your Development Environment - When first working with Artisan, you need to prepare your development environment by installing or configuring Java, Apache Maven, and any other required development tools.
Creating an Artisan Project - Artisan uses archetypes as the basis for Artisan Projects. Archetypes are used to generate an Artisan Project that contains the correct type of development framework, which you then modify to create your own Artisan Project. The Hello World archetype provides a sample project you can use to create common objects on the Axeda Platform. The Machine Streams archetype provides a sample project you can use to configure machine streams for your Axeda Platform.
Installing an Artisan Project on the Axeda Platform - The Artisan Installer packages the contents of the Artisan Project and deploys them on the Axeda Platform.
Testing the Artisan Project - Once installed, you can test your Artisan Project to verify its functionality and identify any issues that need to be addressed. It is common for an Artisan Project to be installed on the Axeda Platform several times as issues are identified and corrected during development.
Creating a Stand-alone Installer - Once ready for deployment to production, the Artisan Installer is used to create a stand-alone installer for distributing the project contents to other instances of the Axeda Platform.
Where to Go from Here
To begin working with the Artisan framework to create your own projects, refer to the following:
Artisan Landing Page - A page hosted on the Axeda Platform that introduces Artisan and serves as a starting point for creating an Artisan Project. The Artisan Landing Page can be found at the following location:
http://instance_name:port/artisan
where instance_name:port is the hostname (or IP address) and port of an instance of the Axeda Platform.
Axeda® Artisan Developer’s Guide - A reference that introduces Artisan, describes how to prepare your development environment, and provides detailed technical information concerning creating an Artisan Project and configuring the Artisan Installer.
The Axeda® Artisan Developer’s Guide is available with all Axeda product documentation from the PTC Support site, http://support.ptc.com/
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage. This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine streams message is logged to stdout. Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).
Downloading and Installing the Project
The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.
Prerequisites
To download, build, and compile the machine-streams-data-relay project, you will need the following:
- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide" available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).
Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" reference guide (available from PTC Support (http://support.ptc.com/)).
Building the Project
This page provides instructions for building the Data Relay project for Linux and for Windows environments.
1. Download and uncompress the project for your environment.
Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz
  # tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
  # cd machine-streams-data-relay-1.0.3
Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip
  Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3
2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.
Sample config.properties files for the MachineStreamsDataRelay component:
For ActiveMQ broker endpoints - configAMQ.properties:
  # The ActiveMQ broker URL.
  brokerURL=tcp://localhost:62000
  # The ActiveMQ queue name to process messages from.
  # It can be a single queue: MachineStream.stream01
  # Or a wildcard queue: MachineStream.>
  queueName=MachineStream.>
  # The username used to connect to the ActiveMQ queue
  username=axedaadmin
  # The password used to connect to the ActiveMQ queue
  password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
  # The number of ActiveMQ broker connections.
  numConnections=10
  # The number of sessions per connection. Note that each session will create a separate thread.
  numSessionsPerConnection=5
  # The number of concurrent threads used for processing machine streams messages.
  numProcessingThreads=100
  # The type of message listener container.
  # default = single queue name per connection.
  # multiDestination = supports multiple queue names per connection
  messageListenerContainerType=default
For Azure Service Bus broker endpoints - configASB.properties:
  # The ASB broker URL.
  brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
  # The ASB queues to process messages from.
  # It can be a single queue: MachineStream.stream01
  # Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
  # Or a queue range defined by the following syntax: MachineStream.stream[01-20]
  queueName=MachineStream.stream[01-50]
  # The username used to connect to the ASB queue(s)
  username=your-azure-service-bus-username
  # The password used to connect to the ASB queue(s)
  password=the-password-for-your-azure-service-bus-username
  # The max number of ASB broker connections.
  numConnections=10
  # The number of concurrent threads used for processing machine streams messages.
  numProcessingThreads=100
  # The type of message listener container.
  # default = single queue name per connection.
  # multiDestination = supports multiple queue names per connection
  messageListenerContainerType=multiDestination
Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:
- brokerURL: the location (URL) of the ActiveMQ or Azure Service Bus server (broker).
- queueName: the name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages. It can be a single queue (MachineStream.stream01); for ActiveMQ, a wildcard queue covering multiple queues (MachineStream.>); or, for Azure Service Bus, multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02) or a queue range defined by the syntax MachineStream.stream[01-20], e.g. queueName=MachineStream.stream[01-50]. (If you have multiple queues and you want to use ASB, then you have to use the multiDestination listener container type and use a range.)
- username: the username used to connect to the ActiveMQ or Azure Service Bus queue(s). For ActiveMQ: axedaadmin. For ASB: your Azure Service Bus username.
- password: the password used to connect to the ActiveMQ or ASB queue(s).
- numConnections: the number of ActiveMQ or Azure Service Bus broker connections. Default is 10 broker connections.
- numSessionsPerConnection: the number of sessions per connection; note that each session will create a separate thread. (This key is used infrequently.) APPLICABLE TO ACTIVEMQ ONLY. Default is 5 sessions per connection.
- numProcessingThreads: the number of concurrent threads used for processing machine streams messages. Default is 100 concurrent threads.
- messageListenerContainerType: the type of message listener container. default = single queue name per connection; multiDestination = supports multiple queue names per connection.
3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.
For Linux:
  # mvn package -DskipTests
For Windows:
  c:\> mvn package -DskipTests
4. Enter the target directory, uncompress the *bin.tar.gz or *bin.zip archive, and enter the resulting directory.
For Linux:
  # cd target
  # tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
  # cd machine-streams-data-relay-1.0.3
For Windows:
  c:\> cd target
  c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
  c:\> cd machine-streams-data-relay-1.0.3
5. Start the application.
For Linux:
  # ./machineStreamsDataRelay.sh <config properties file>
  for example: ./machineStreamsDataRelay.sh configOfYourChoice.properties
For Windows:
  c:\> machineStreamsDataRelay.bat <config properties file>
  for example: machineStreamsDataRelay.bat configOfYourChoice.properties
See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).
6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

  2014-03-26 10:27:06.179 [main] INFO [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin
  2014-03-26 10:27:06.346 [main] INFO [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.351 [main] INFO [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.356 [main] INFO [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.365 [main] INFO [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.369 [main] INFO [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.381 [main] INFO [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.388 [main] INFO [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.402 [main] INFO [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.411 [main] INFO [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5
  2014-03-26 10:27:06.416 [main] INFO [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5

If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

  2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner
  2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers
  2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers
  2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers
  2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers
  2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers
  2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
  2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers
  2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers
  2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers
  2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers
  2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers
  2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue consumers
  2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers
  2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers
  2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
  2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
  2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers
  2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
  2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers
  2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers
  2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis
  2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis
  2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers
  2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers
  2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers
  2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
  2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
  2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
  2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
  2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers
  2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
  2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
  2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
  2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
  2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
  2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
  2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
  2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
  2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
  2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
  2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
  2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
  2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
  2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
  2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
  2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
  2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers
  2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers
  2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers
  2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis
  2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis

7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.)
  2014-03-26 10:45:16.309 [pool-1-thread-1] INFO [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
  2014-03-26 10:45:21.137 [pool-1-thread-2] INFO [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
  2014-03-26 10:45:26.134 [pool-1-thread-3] INFO [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
  2014-03-26 10:45:31.135 [pool-1-thread-4] INFO [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
  2014-03-26 10:45:36.142 [pool-1-thread-5] INFO [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
  2014-03-26 10:45:41.146 [pool-1-thread-6] INFO [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor
By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either an XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

  package com.axeda.tools.streams.processor;

  import com.axeda.tools.streams.model.StreamedMessage;

  /**
   * This class defines the message processor method callback that will be called for message processing.
   * Note that these methods will be called by multiple threads concurrently.
   */
  public interface MessageProcessor {

      /**
       * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
       * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
       * If you add code here that significantly slows down message processing, then there is the potential that
       * MessageListenerService threads will also block. When the MessageListenerService threads block, this means
       * that messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large
       * number of messages, then you may need to adjust your configuration parameters or optimize your
       * processMessage() code.
       * @param message machine streams message to process
       */
      public void processMessage(StreamedMessage message);
  }

An additional class named CustomMessageProcessor.java has been provided so that you can implement your own custom message processing logic:

CustomMessageProcessor.java

  package com.axeda.tools.streams.processor;

  import org.springframework.stereotype.Component;

  import com.axeda.tools.streams.model.StreamedAlarm;
  import com.axeda.tools.streams.model.StreamedDataItemMessage;
  import com.axeda.tools.streams.model.StreamedMessage;
  import com.axeda.tools.streams.model.StreamedMobileLocation;
  import com.axeda.tools.streams.model.StreamedRegistrationMessage;

  /**
   * This class was provided for customers to implement their own message processing business logic.
   * To use this class, change the @Autowired messageProcessor qualifier in
   * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor")
   */
  @Component("customMessageProcessor")
  public class CustomMessageProcessor implements MessageProcessor {

      /**
       * (non-Javadoc)
       * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
       *
       * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
       * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
       * If you add code here that significantly slows down message processing, then there is the potential that
       * MessageListenerService threads will also block. When the MessageListenerService threads block, this means
       * that messages will start to back up in the ActiveMQ or Azure Service Bus message queues. If you are
       * processing a large number of messages, then you may need to adjust your configuration parameters or
       * optimize your processMessage() code.
       */
      @SuppressWarnings("unused")
      @Override
      public void processMessage(StreamedMessage message) {
          if (message instanceof StreamedDataItemMessage) {
              StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
              // add your business logic here
          } else if (message instanceof StreamedAlarm) {
              StreamedAlarm alarm = (StreamedAlarm) message;
              // add your business logic here
          } else if (message instanceof StreamedMobileLocation) {
              StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
              // add your business logic here
          } else if (message instanceof StreamedRegistrationMessage) {
              StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
              // add your business logic here
          }
      }
  }

The Axeda Platform Machine Streams feature currently supports four different message types: StreamedDataItemMessage, StreamedAlarm, StreamedMobileLocation, and StreamedRegistrationMessage. For each of the different message types, you should add your message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file.

Once you have completed your changes to the CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean: uncomment the line // @Qualifier("customMessageProcessor") and comment out the line @Qualifier("logMessageProcessor"). The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

  @Component("messageProcessingService")
  public class MessageProcessingServiceImpl implements MessageProcessingService {

      private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

      @Autowired
      private MessageDecoder messageDecoder;

      @Autowired
      // Switched from the default LogMessageProcessor to the CustomMessageProcessor:
      // @Qualifier("logMessageProcessor")
      @Qualifier("customMessageProcessor")
      private MessageProcessor messageProcessor;

      private ExecutorService executorService;
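As a concrete (hypothetical) illustration of the "add your business logic here" placeholders in CustomMessageProcessor, the data-item branch could append each message to a flat file. This sketch deliberately relies only on toString(), since the exact getters on the model classes are not shown here:

  if (message instanceof StreamedDataItemMessage) {
      StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
      // Hypothetical example: append the streamed data item to a flat file.
      // NOTE: opening a FileWriter per message is simple but slow; batch or pool
      // writers in production, and remember this runs on many threads concurrently.
      try (java.io.FileWriter out = new java.io.FileWriter("/tmp/machine-streams.log", true)) {
          out.write(dataItem.toString() + System.lineSeparator());
      } catch (java.io.IOException e) {
          // replace with your logging framework of choice
          e.printStackTrace();
      }
  }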
Complete information about installing and using the Axeda Machine Streams Data Relay project is available here. This page provides the files for the Axeda Machine Streams Data Relay project. From the links below, select the files you want to use:
For Linux users:
- ready-to-run (bin) files: download the machine-streams-data-relay-1.0.3-bin.tar.gz archive
- full Maven project files: download the machine-streams-data-relay-1.0.3-project.tar.gz archive
For Windows users:
- ready-to-run (bin) files: download the machine-streams-data-relay-1.0.3-bin.zip archive
- full Maven project files: download the machine-streams-data-relay-1.0.3-project.zip archive
This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required for proper installation and configuration of an Apache ActiveMQ server to use with the Axeda Machine Streams service (which is supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide available from the Axeda Support site, http://help.axeda.com. This patch overlay needs to be applied to the v5.8.0 ActiveMQ server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference, Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.
A student in the Axeda Groovy course had some good questions. Instead of answering via email, I thought I'd answer here, so we could share the knowledge.
Domain Objects - How can I customize or extend Axeda-provided domain objects? (E.g., I need to store whether a user is an external or internal user.) How can I achieve this?
This is a perfect use case for the Axeda Extended Data API. Documentation on the API can be found in the Axeda Platform v1 API Developer's Reference Guide. A good "Getting Started" topic is available on Axeda Mentor - Getting Started With the Axeda Extended Data API.
To customize/extend/decorate Axeda-provided domain objects, use Extended Objects. Extended Objects can be thought of in the abstract as a collection of custom database table rows. The analogy isn't perfect, but it helps with understanding. The use case in the question is very common. Extended Objects can stand on their own, or they can be associated with an Axeda domain object with the setInternalID() method. In the following sample code, we "decorate" an Axeda User object with two new fields, isExternalUser and companyID:

  private void updateAdditionalFieldsForUser(User user, boolean isExternalUser, String companyID) {
    // GET OBJECT TYPE - This example assumes the ExtendedObjectType has already been created
    ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")

    // GET PROPERTY TYPES
    PropertyType isExternalUserPropertyType = findOrCreatePropertyType(extObjectType, "IS_EXTERNAL_USER")
    PropertyType companyIdPropertyType = findOrCreatePropertyType(extObjectType, "COMPANY_ID")

    /* GET THE EXTENDED OBJECT
     * Note - the example provides a findExtendedObject() method that searches by INTERNAL ID. This linkage via
     * the internal ID associates a domain object to an ExtendedObject, allowing us to "decorate" it with
     * custom attributes.
     */
    ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
    if (extObject == null)
    {
      extObject = new ExtendedObject()
      extObject.setExtendedObjectType(extObjectType)
      extObject.setInternalObjectId(user.id.value)
      extObject.addProperty(createExtendedProperty(isExternalUserPropertyType, isExternalUser.toString()))
      extObject.addProperty(createExtendedProperty(companyIdPropertyType, companyID))
      extendedObjectService.createExtendedObject(extObject)
    }
    else
    {
      addOrUpdateExtendedProperty(extObject, isExternalUserPropertyType, "IS_EXTERNAL_USER", isExternalUser.toString())
      addOrUpdateExtendedProperty(extObject, companyIdPropertyType, "COMPANY_ID", companyID)
      extendedObjectService.updateExtendedObject(extObject)
    }
  }

  /**
   * Adds or updates an extended property.
   *
   * @param extendedObject the extended object to associate with the property
   * @param propertyType the type of the property
   * @param propertyName the name of the property
   * @param propertyValue the value of the property
   */
  private void addOrUpdateExtendedProperty(ExtendedObject extendedObject, PropertyType propertyType, String propertyName, String propertyValue) {
    Property extendedProperty = extendedObject.getPropertyByName(propertyName)
    if (extendedProperty == null)
    {
      extendedProperty = createExtendedProperty(propertyType, propertyValue)
      extendedObject.addProperty(extendedProperty)
    }
    else
    {
      extendedProperty.setValue(propertyValue)
    }
  }

  /**
   * Retrieves and returns the extended object with the given type and internal id.
   *
   * @param extObjectType the type of the desired external object
   * @param internalObjectId the internal id of the desired extended object
   * @return the extended object matching the given details or <code>null</code> in case there's no match
   */
  private ExtendedObject findExtendedObject(ExtendedObjectType extObjectType, Long internalObjectId) {
    ExtendedObjectSearchCriteria searchCriteria = new ExtendedObjectSearchCriteria()
    searchCriteria.setExtendedObjectTypeId(extObjectType.getId())
    searchCriteria.setInternalObjectId(internalObjectId)

    List<ExtendedObject> extObjects = extendedObjectService.findExtendedObjects(searchCriteria, -1, 0, "name")

    if (!extObjects?.isEmpty())
    {
      return extObjects[0]
    }
    return null
  }

  /**
   * Gets a property type given its name and the associated extended object type.
   * If it cannot find the property type it will create it.
   *
   * @param extendedObjectType the associated extended object type for the property type
   * @param propertyTypeName the property type name
   * @return the property type
   */
  private PropertyType findOrCreatePropertyType(ExtendedObjectType extendedObjectType, String propertyTypeName)
  {
    PropertyType propertyType = extendedObjectType.getPropertyTypeByName(propertyTypeName)
    if (propertyType == null)
    {
      propertyType = new PropertyType()
      propertyType.setDataType(PropertyDataType.String)
      propertyType.setName(propertyTypeName)
      propertyType.setExtendedObjectType(extendedObjectType)
      extendedObjectService.createPropertyType(propertyType)
      extendedObjectType.addPropertyType(propertyType)
    }
    return propertyType
  }

By using Extended Objects, and associating them with Axeda domain objects via an internal ID, we can decorate/extend/customize those domain objects with use-case-specific attributes.
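To round out the example, here is a hypothetical read-back sketch in Groovy. It reuses findExtendedObject() from the code above; Property.getValue() is an assumed accessor (the example above only shows setValue()):

  // Hypothetical read-back of the decoration added above: look up the extended
  // object for a User and read the IS_EXTERNAL_USER flag.
  ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")
  ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
  boolean isExternal = false
  if (extObject != null) {
    Property prop = extObject.getPropertyByName("IS_EXTERNAL_USER")
    isExternal = Boolean.parseBoolean(prop?.getValue())   // getValue() is assumed
  }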
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform, there are two classes of API, Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of the APIs are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some subsets of data, this can be inadequate. In comes the v2 API, which introduces pagination. One of the first things a new user does when exploring the v2 API is something like the following:

  HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
  criteria.assetId = '9701'
  criteria.startDate = '2014-07-23T12:33:00Z'
  criteria.endDate = '2014-07-23T12:44:00Z'

  DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
  FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

  HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
  criteria.assetId = '9701'
  criteria.startDate = '2014-07-23T12:33:00Z'
  criteria.endDate = '2014-07-23T12:44:00Z'
  criteria.pageNumber = 1
  criteria.pageSize = 100 // Default.

  DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
  FindDataItemValueResult results = null
  int tcount = 0
  while ((results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
    results.dataItems.each { res ->
      tcount++
    }
    criteria.pageNumber = criteria.pageNumber + 1
  }

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you're then going to call a find. Currently both the count*() and find functions compute total results, which doubles the execution time of just those two calls. The total count is only computed when running the first find() operation, so the code pattern above is so far the most efficient way I've seen to run these operations on the platform. So having covered how to do this in code (custom objects), let's turn our attention to the REST APIs - the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.
So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues
HEADERS:
  Content-Type: application/xml
  Accept: application/xml
BODY:
  <?xml version="1.0" encoding="UTF-8"?>
  <HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
    <assetId>9701</assetId>
    <startDate>2014-07-23T12:33:00Z</startDate>
    <endDate>2014-07-23T12:35:02Z</endDate>
  </HistoricalDataItemValueCriteria>

RESULTS (the totalCount attribute on the result element is the part to note; this particular sample happens to come from a findAsset call):
  <v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <v2:criteria pageSize="100" pageNumber="1">
      <v2:name>*</v2:name>
      <v2:propertyNames/>
    </v2:criteria>
    <v2:assets>
    </v2:assets>
  </v2:FindAssetResult>

Or JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues
HEADERS:
  Content-Type: application/json
  Accept: application/json
BODY:
  {
    "assetId": 9701,
    "startDate": "2014-07-23T12:33:00Z",
    "endDate": "2014-07-23T12:35:02Z",
    "pageNumber": 1,
    "pageSize": 2
  }

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation will still apply. Creative use of the query criteria will allow you to work around these limitations as we continue to improve the v2 API.
Regards,
-Chris
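For quick experimentation, the same JSON call can be issued from the command line. Here is a sketch with curl, assuming your instance accepts HTTP Basic authentication (some deployments require a session token instead; the credentials and host are placeholders):

  curl -u myuser:mypassword \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -X POST "https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues" \
    -d '{"assetId": 9701, "startDate": "2014-07-23T12:33:00Z", "endDate": "2014-07-23T12:35:02Z", "pageNumber": 1, "pageSize": 100}'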
We live in a connected world where we can (want!) to receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also provided an easy and powerful way that can be used to deliver SMS notifications right to your phone: Email to Text! Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are now ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
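As a hedged sketch of what that invocation might look like in a ThingWorx service or subscription (the Thing name and addresses are placeholders, and the exact SendMessage parameter names may vary by version, so check your MailServer Thing's service definition):

  // Hypothetical: fire a text message through an email-to-SMS gateway.
  // "NotificationMailServer" is a placeholder Thing based on the MailServer template.
  Things["NotificationMailServer"].SendMessage({
      from: "alerts@example.com",            // placeholder sender address
      to: "5551234567@txt.att.net",          // the email address of the SMS number
      subject: "ThingWorx Alert",
      body: "Pump 7 pressure exceeded threshold"
  });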
Connecting to other databases was a popular request in our training feedback, so we've just added a video tutorial in the Wiki for connecting ThingWorx to SQL Server (or SQL Server Express). See topic 7.04 or go to the Video Appendix. In essence, connecting to other databases like MySQL or Oracle will work the same way, except you will have to change the Database URL and JDBC reference. Also, let me take this opportunity to wish everyone a blessed holiday season!
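For reference, here are hedged examples of the two settings that change for MySQL and Oracle; these follow the standard MySQL Connector/J and Oracle thin-driver conventions, with server, port, database name, and SID as placeholders:

  # MySQL
  jDBCDriverClass - com.mysql.jdbc.Driver
  jDBCConnectionURL - jdbc:mysql://servername:3306/databasename

  # Oracle
  jDBCDriverClass - oracle.jdbc.OracleDriver
  jDBCConnectionURL - jdbc:oracle:thin:@servername:1521:SID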
One commonly asked question is what the correct settings are for the Configuration Tables tab when creating/setting up a Database Thing to connect to a SQL Server (2005 or later) database. There are a couple of ways to do this, but the tried and true settings are listed below.
connectionValidationString - SELECT GetDate()
jDBCConnectionURL - jdbc:sqlserver://servername;databaseName=databasename
jDBCDriverClass - com.microsoft.sqlserver.jdbc.SQLServerDriver
Max number of connections in the pool - 5 (this can be modified based on the number of concurrent connections required)
Database User Name - databaseusername
Database Password - databaseuserpassword
The JDBC driver file sqljdbc4.jar is by default installed with the ThingWorx server. It is located in TomcatDir\webapps\Thingworx\WEB-INF\lib\
Since the marketplace extension is no longer supported and the drivers may be outdated, you may build your own JDBC package/extension:
1. Download the Extension Metadata file here.
2. Download the appropriate JDBC driver.
3. Build the extension structure by creating the directory lib/common.
4. Place the JAR file in this directory location: lib/common/<JDBC driver jar file>
5. Modify the name attribute of the ExtensionPackage entity in the metadata.xml file as needed.
6. Point the file attribute of the FileResource entity to the name of the JDBC JAR file.
7. The metadata also contains a ThingTemplate; the name is set to MySqlServer, but it can be modified as needed.
8. Select the lib folder and metadata.xml file and send them to a zip archive. Tip: the name of the zip archive should match the name given in the name attribute of the ExtensionPackage entity in the metadata.xml file.
9. Import the newly created extension as usual.
To use the JDBC extension, simply create a new Thing and assign it the new ThingTemplate that was imported with the JDBC extension. A sketch of how the metadata.xml pieces referenced above fit together is shown after the configuration notes below.
Configuration field explanation:
- JDBC Driver Class Name: depends on the driver being used; refer to the driver's documentation.
- JDBC Connection String: defines the information needed to establish a connection with the database. Connection string examples can be found in the ThingWorx Help Center.
- connectionValidationString: a simple query that will work regardless of table names, executed to verify return values from the database.
Alternatively, you may download the JDBC Connector Creator from the marketplace here: https://marketplace.thingworx.com/Items/jdbc-connector-extension. Then you may just view the mashup and use it to package your JDBC JAR into an extension (which can be later imported into ThingWorx).
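For orientation, here is a hedged sketch of how such a metadata.xml might be laid out. The ExtensionPackage name and FileResource file attributes correspond to the steps above; the exact element nesting and attribute set can differ by ThingWorx version, so treat this as illustrative rather than authoritative:

  <?xml version="1.0" encoding="UTF-8"?>
  <Entities>
    <ExtensionPackages>
      <!-- name should match the zip archive name -->
      <ExtensionPackage name="MyJdbcExtension" packageVersion="1.0" vendor="me">
        <JarResources>
          <!-- file points at the driver JAR placed under lib/common -->
          <FileResource type="JAR" file="mysql-connector-java.jar"/>
        </JarResources>
      </ExtensionPackage>
    </ExtensionPackages>
    <!-- the downloaded metadata also ships a ThingTemplate (named MySqlServer by default) -->
  </Entities>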
Objective
Use Influx as a database to store data coming from the Kepware ThingWorx Industrial Connectivity server.
Prerequisite
Configure the ThingWorx connection to Kepware's KEPServerEX and bind tags that exist in KEPServerEX to things in the ThingWorx model, as referenced in the Industrial Connections Example.
Configuration Steps
1. Create a database in Influx for ThingWorx. Connect and create it:
  influx -precision rfc3339
  > SHOW DATABASES
  > CREATE DATABASE thingworx
2. Create an Influx Persistence Provider and configure it.
3. In the Industrial Thing where the Remote Properties are bound, define a Value Stream, and make sure it has the Persistence Provider set to Influx and is set to Active.
4. In the Value Stream's Properties and Alerts, define the mappings using Manage Bindings to specify what properties are to be stored in this value stream.
5. Save it and test it to make sure properties are stored in Influx:
  > use thingworx
  > show measurements
  name
  ----
  Channel1.Device1
  > show field keys on thingworx from "Channel1.Device1"
  name: Channel1.Device1
  fieldKey fieldType
  -------- ---------
  Channel1_Device1_Tag10 integer
  Channel1_Device1_Tag11 integer
  Channel1_Device1_Tag12 integer
  Channel1_Device1_Tag13 integer
  Channel1_Device1_Tag14 integer
  Channel1_Device1_Tag15 integer
  Channel1_Device1_Tag16 integer
  Channel1_Device1_Tag17 integer
  Channel1_Device1_Tag18 integer
  Channel1_Device1_Tag19 integer
  Channel1_Device1_Tag2 integer
  Channel1_Device1_Tag20 integer
  Channel1_Device1_Tag21 integer
  Channel1_Device1_Tag3 integer
  Channel1_Device1_Tag4 integer
  Channel1_Device1_Tag5 integer
  Channel1_Device1_Tag6 integer
  Channel1_Device1_Tag7 integer
  Channel1_Device1_Tag8 integer
  Channel1_Device1_Tag9 integer
The following shows data stored in Channel1_Device1_Tag2:
  > select Channel1_Device1_Tag2 from "Channel1.Device1"
  2019-02-20T16:26:13.699Z 8043
  2019-02-20T16:26:14.715Z 8044
  2019-02-20T16:26:15.728Z 8045
  2019-02-20T16:26:16.728Z 8046
  2019-02-20T16:26:17.727Z 8047
  2019-02-20T16:26:18.725Z 8048
  2019-02-20T16:26:19.724Z 8049
  2019-02-20T16:26:20.722Z 8050
  2019-02-20T16:26:21.723Z 8051
  2019-02-20T16:26:22.722Z 8052
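Once data is flowing, standard InfluxQL queries can be run against the stored value stream data; for example (field and measurement names as above, window sizes arbitrary):

  > SELECT MEAN(Channel1_Device1_Tag2) FROM "Channel1.Device1" WHERE time > now() - 1h GROUP BY time(10m)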
View full tip
Meet Chris. Chris leads the ThingWorx Flow portfolio, which will be released in 8.4. Chris has “a long history with workflow and a lot of interest in it.” He was originally hired as a product manager by Jim Heppelmann [PTC’s CEO] about 20 years ago to build a workflow engine for Windchill, our PLM solution. As a matter of fact, this feature is still a part of Windchill today. Pretty neat, right? We were able to leverage Chris’ workflow expertise and experience to create ThingWorx Flow. Check out more below.

Kaya: Why did we create ThingWorx Flow? What challenge does it solve?
Chris: In order for customers to realize the full business value of connected solutions, it typically isn’t enough to just surface an alert or show a dashboard of health. Customers want to automatically do things like create service tickets or order consumables as they run low. To do that—to realize that automated value—you need a couple of things. First, you need to be able to easily connect to enterprise systems (like SAP, Salesforce or Microsoft Dynamics) to create service cases. Second, you need to be able to define and execute your business process logic.

Kaya: Can you give me an example?
Chris: Definitely. The business process may involve the need to replace a filter based on a contamination alert. After I receive the alert, I need to look in the bill of materials to find out which filter is applicable to that particular device or product; if it’s not in stock onsite, I need to order it. When it arrives, I then need to schedule the service, knowing that it’s going to be there at a particular time, which entails working with another service system like Salesforce.

So, you have a number of these different enterprise systems that you need to connect to, and you need to support that orchestrated business process to realize the value of a fully automated flow. That’s what has really led us to do it—because, again, the compelling business value in these connected device solutions is often around an automated approach to service or maintenance issues. The value is compelling because automated processes minimize delays, support business growth and scale without hiring as many additional people, and eliminate the human errors and variability, leading to improved quality at lower costs or stronger margins.

Sample ThingWorx Orchestrate Workflow: Send RFQ to Supplier and Add Part to List If State Changes

Kaya: So, who ultimately benefits from Flow?
Chris: The benefits are realized by an organization as a whole in terms of increased responsiveness, flexibility, reduced costs and higher availability. The ThingWorx solution is unique in that it allows an organization to quickly realize these benefits by optimizing the participation of several key roles. As a result, no role becomes a bottleneck to addressing an urgent business need.

In addition to developers, the many business analysts across the organization are now empowered to define and modify their business processes themselves through Flow. They do this by using the system connections (or subflows) created by their system experts and the ThingWorx models that were defined by developers. All of this work is achieved using our visual modeling tool without the need for extensive training.

The system experts can define portions of flows, or subflows, that get, create or edit information in the systems they understand most within the Flow editor.
This enables each role to focus on using their most valuable skills and ultimately eliminates the bottlenecks that reduce business responsiveness when everything must be done by a few highly specialized experts. Now, developers can leverage the outputs of flows to drive behavior in the application and visualize key KPIs and overall health.

Kaya: So, I see that we’re not only connecting business systems, but we’re also connecting people. We enable them to leverage each other’s different perspectives and areas of expertise. I know we gave developers a sneak peek of ThingWorx Flow in one of our latest posts. Do you have a more detailed demo that we could share this week?
Chris: Sure thing. Check this out.

Kaya: Wow! I can definitely see how Flow saves immense time in workflow creation. Now, final question: what is your favorite aspect about working at PTC?
Chris: The most interesting thing to me is that I get to work with so many different customers in so many different verticals that have such a diverse set of challenging and interesting problems. It’s fun to work with these customers to understand their problems and their unique environments, and then to build solutions with them using a combination of our products and their existing systems and tooling—that’s what I think is most fascinating.

I learn something new about the way things are made, manufactured, built and serviced every day, and it just makes my life much more interesting. I don’t feel like I’m doing the same thing every day. Working with customers to understand and address their challenges and help them realize their business goals is really rewarding and, again, the diversity of those requests is what makes the job particularly interesting and fascinating.
View full tip
Meet Anthony. Anthony came to PTC from a large industrial company as a user of Axeda, a leading device connectivity company that PTC acquired in 2014. With a background in aerospace engineering and experience in a variety of industries including life safety, healthcare, nuclear power, and oil and gas, Anthony has been working to create new value around innovation for customers transitioning to ThingWorx. When he’s not working on IIoT, he’s playing music on Cape Cod, photographing Hawaiian landscapes or bringing awesome inflatable chairs into the office.

You may recall from last week's post that Thing Presence is one of the top three features in 8.4 that you might not know about. This week, I spoke with Anthony for a deeper dive on Thing Presence. Check out how it went.

Kaya: What was the challenge users were facing that led us to create Thing Presence?
Anthony: Axeda customers transitioning to ThingWorx were struggling with the connectivity use case and kept asking, ‘why is my asset always offline?’ When we evaluated it, we discovered that it was tied to the difference between the AlwaysOn and Axeda eMessage protocol architectures. IsConnected will always report false for polling and duty-cycle devices.

The use case of Thing Presence is to know that the asset is reporting into the network and is ready to provide information (push) or be accessed to retrieve information or do remote desktop support (pull). This use case is relevant for any asset in ThingWorx that uses duty cycle.

In ThingWorx 8.4 (coming in early 2019), the new IsReporting state will inform the user when a polling device is communicating on a regular basis. If it is, then IsReporting will be true. The IsReporting state resolves the discrepancy wherein devices that are on duty cycle appear disconnected due to the IsConnected state reporting false.

New "IsReporting" state improves visibility of an asset's communication state

Kaya: How exactly does Thing Presence work?
Anthony: You can think of it in terms of having teenagers. You tell them they need to check in with you on a regular basis through text message. If a text is missed, all of a sudden you take action.

Now, imagine the teenager is a device. If a device was supposed to check in every five minutes and it misses one poll, I want to flag that as a problem. The challenge with that from a service perspective is that sometimes your service personnel will go out and work on a device and may need to take it offline for a bit of time; we need to factor that in. We certainly don’t want to deploy someone or try to fix something when a service technician is already there.

You might decide, ‘my average service visit is an hour, so, if I miss a couple of pings, I’m okay; but, if I’m offline for more than an hour, then I’d like to know about it because I’d like to take action.’ Thing Presence allows you to define that window.

Kaya: You’ve mentioned ‘duty cycle’ and ‘polling cycle.’ Can you explain these terms?
Anthony: Duty cycle and polling cycle are the same thing. It means that a device has a window of time within which it is expected to check in, and, provided that it checks in within that timeframe, all is good with the connection.

Connected services rely on a connection. As soon as the connection is broken, I no longer have the ability to service the asset.

Kaya: Given everything we’ve discussed, where do you see Thing Presence headed?
Anthony: The next piece of the equation for us is to provide information on the health of the connection.
When you look at servicing a remote asset, you need to a) know that it is communicating, and b) know that the connection is healthy before you try anything. I wouldn’t want to try a software update if I am losing connection with my asset on a regular basis.

What do we mean by health? We mean: is the device checking in when it should be? If it’s not, is there a pattern to that connection, and are those patterns tied to applications? For example, is it only on during working hours? Does it turn off during holidays? If the device is in a school, does it turn off during summer maintenance work? This allows us to garner insights on how and when the equipment is being used, not just operating status. At the end of the day, does this mean I can apply analytics and AI tools to it? Absolutely. Is it the first place I would apply it? Probably not.

Kaya: In your mind, what is the next big thing coming in ThingWorx that you’re particularly excited about?
Anthony: Mashup 2.0 and Asset Advisor 8.4. (Double mic drop.)

Kaya: That’s awesome. My last question is related to you. Can you tell me what your favorite aspect is about working at PTC?
Anthony: The chaos. Very often it’s chaos that breeds innovation. What I mean by that is that, if you try to create something because you sit down and say, ‘I am going to innovate,’ very often that is a failure, because the majority of the time it’s the influence of a deadline, a customer need or an application at hand that makes the environment trying and sometimes hectic. But it is in these challenging environments that you can be the most creative and innovative as an engineer.
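For developers who want to act on the new state inside an application, here is a minimal sketch of a subscription body. It assumes a Thing exposing the IsReporting boolean property (available starting in ThingWorx 8.4) with a DataChange subscription bound to it; the eventData access pattern follows the standard DataChange subscription shape, and the logging calls are stand-ins for whatever action (service ticket, notification) your own application would take:

// DataChange subscription on a Thing's IsReporting property.
// Fires when the platform determines the device missed its expected
// duty-cycle window (plus any grace period configured for the Thing).
if (eventData.newValue.value === false) {
    // The asset stopped reporting; flag it before attempting remote service.
    logger.warn(me.name + " is no longer reporting");
} else {
    logger.info(me.name + " is reporting again");
}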
View full tip
Meet Neal. When Neal joined PTC five years ago, he immediately hit the ground running on IoT initiatives, working in multiple areas ranging from pre-sales to partner relations. Today, he is a Worldwide ThingWorx Center of Excellence Principal Lead at PTC, and his biggest focus is supporting the go-to-market for the Microsoft partnership.

I sat down with Neal recently to hear the details on exactly how Azure and ThingWorx can be used to develop world-class IIoT applications.

Kaya: Can you explain how Azure and ThingWorx work together?
Neal: Yes, so Azure provides the cloud infrastructure that our customers need in order to deploy ThingWorx.

By having Azure as our preferred cloud platform, we’re able to specialize our R&D efforts into utilizing functionality that is available in Azure, rather than having to reinvent the wheel ourselves for each cloud platform in an attempt to remain cloud-agnostic. By leveraging a single, already quite powerful, cloud platform through Azure, we’re able to maximize our development efforts.

Kaya: What was the major problem that led to us investigating cloud options?
Neal: There were two issues that our users had. The first was that we often had complicated installs and setup procedures because we were genericizing the whole process, so the initial setup and run was complicated and expensive. For example, we were requiring them to set up additional VMs and components, and we were giving them generic instructions because we were being very agnostic, when they had already chosen, outside of us, to use one of the cloud platforms. We knew our customers wanted to move fast, so we had to make it easier and faster for them to see results.

The other issue we ran into with customers was the confusion in the offerings. For ThingWorx, ingest is just one aspect of IoT. ThingWorx is particularly strong in areas like enabling mixed reality and augmented reality as well as application enablement. And, while we also have ingest capabilities, Microsoft is especially powerful when it comes to ingest and security. We decided that the smartest solution was to leverage Microsoft’s expertise in data ingestion to make ThingWorx even more powerful; so, we made the Azure IoT Hub Connector. By partnering with Microsoft, we have a joint architecture where you can see how Microsoft provides key features and ThingWorx runs on top of those features to get you to market faster to develop the application.

Below is an example of a high-availability deployment of ThingWorx on Azure that utilizes ThingWorx Azure IoT Connectors to access an Azure IoT Hub.

High-Availability Deployment of ThingWorx on Azure

Kaya: Why did we partner with Azure? What specific benefits does Azure offer over other cloud service providers?
Neal: When we started to look at what our customers were using for cloud services, we noticed that a lot were using Microsoft. When we join forces with Microsoft, we have a much more holistic offering around digital transformation. With the partnership, PTC and Microsoft are able to leverage each partner’s respective strengths to provide even more powerful IIoT solutions. You have Windchill and Microsoft’s business application strategies; you have Vuforia and Microsoft’s mixed reality and augmented reality strategies; and you have ThingWorx on the Microsoft Azure cloud. Overall, you have a much more complete and powerful offering together.

Kaya: What is your favorite aspect about working at PTC?
Neal: The growth.
There have been a lot of changes over the five years that I’ve been here. PTC has been able to leverage its strengths and long-time experience in the CAD and PLM markets to enter and ultimately become a leader in the IIoT market, according to reports by research firms like Gartner and Forrester. In short, it’s the growing IIoT market and PTC’s leadership in the industry.

Note to Readers: You’ve likely heard about our major strategic partnership with Microsoft to leverage our respective IIoT and cloud technologies to optimize the creation, deployment, management and overall use of your IIoT applications. If you haven’t heard about the partnership, check out the press release here. If you’re curious about more aspects of PTC’s partnership with Microsoft, check out this site devoted to sharing how ThingWorx and Azure are better together.
View full tip