IoT Tips

Developing with Axeda Artisan (for Axeda Platform, v6.8 and later)

Axeda Artisan is a development tool based on the Apache Maven build system. Artisan includes authoring, management, and deployment mechanisms that define and manage the development of Axeda Platform extension points, allowing you to flexibly install, update, and uninstall many types of objects on the Axeda Platform.

The Components of Artisan

The Artisan framework comprises several components:

The Axeda Platform – Each instance of the Axeda Platform contains libraries and sample Artisan Projects (archetypes) that developers download to author and deploy their own Artisan Projects.
The Artisan Project – An Artisan Project is a collection of content and configuration files that define a set of objects to be installed on the Axeda Platform.
The Artisan Installer – The Artisan Installer is a flexible, configurable application that uses the Apache Maven build life-cycle to upload the contents of an Artisan Project to the Axeda Platform.
Axeda Platform APIs – The Artisan Installer interacts with the Axeda Platform and uses Axeda APIs to install, upgrade, or uninstall Artisan Projects.

How You Work With Artisan

The goal of using Artisan is to develop an Artisan Project that becomes an object or application running on the Axeda Platform. There are several phases involved in creating an Artisan Project:

Preparing Your Development Environment – When first working with Artisan, you need to prepare your development environment by installing or configuring Java, Apache Maven, and any other required development tools.
Creating an Artisan Project – Artisan uses archetypes as the basis for Artisan Projects. Archetypes generate an Artisan Project that contains the correct type of development framework, which you then modify to create your own project. The Hello World archetype provides a sample project you can use to create common objects on the Axeda Platform. The Machine Streams archetype provides a sample project you can use to configure machine streams for your Axeda Platform.
Installing an Artisan Project on the Axeda Platform – The Artisan Installer packages the contents of the Artisan Project and deploys them on the Axeda Platform.
Testing the Artisan Project – Once installed, you can test your Artisan Project to verify its functionality and identify any issues that need to be addressed. It is common for an Artisan Project to be installed on the Axeda Platform several times as issues are identified and corrected during development.
Creating a Stand-alone Installer – Once the project is ready for deployment to production, the Artisan Installer is used to create a stand-alone installer for distributing the project contents to other instances of the Axeda Platform.

Where to Go from Here

To begin working with the Artisan framework to create your own projects, refer to the following:

Artisan Landing Page – A page hosted on the Axeda Platform that introduces Artisan and serves as a starting point for creating an Artisan Project. The Artisan Landing Page can be found at http://instance_name:port/artisan, where instance_name:port is the host name (or IP address) and port of an instance of the Axeda Platform.
Axeda® Artisan Developer's Guide – A reference that introduces Artisan, describes how to prepare your development environment, and provides detailed technical information about creating an Artisan Project and configuring the Artisan Installer. The Axeda® Artisan Developer's Guide is available with all Axeda product documentation from the PTC Support site, http://support.ptc.com/
Axeda Machine Streams enables external Platform integrators to access current, raw data from connected assets. The Platform can stream data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage.

This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine streams message is logged to stdout.

Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project

The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites

To download, build, and compile the machine-streams-data-relay project, you will need the following:

- Access to an Axeda Platform instance configured to stream asset data. (For an ActiveMQ endpoint, this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here.)
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide", available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring the Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide". Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" reference guide (available from PTC Support (http://support.ptc.com/)).

Building the Project

This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.

Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz

# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3

Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip

Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints – configAMQ.properties

# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints – configASB.properties

# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
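To make the queue range syntax shown above concrete (e.g. MachineStream.stream[01-50]), here is a minimal Groovy sketch of how a consumer might expand such a range into explicit queue names. The expandQueueRange helper is purely illustrative and is not part of the shipped project:

// Hypothetical helper: expands "MachineStream.stream[01-50]" into
// ["MachineStream.stream01", "MachineStream.stream02", ..., "MachineStream.stream50"].
// The zero-padding width is taken from the range bounds.
def expandQueueRange(String expr) {
    def m = expr =~ /^(.*)\[(\d+)-(\d+)\]$/
    if (!m.matches()) return [expr]          // not a range: a single queue name
    def (prefix, lo, hi) = [m.group(1), m.group(2), m.group(3)]
    def width = lo.length()                  // keep the zero-padding width
    (lo.toInteger()..hi.toInteger()).collect { i ->
        prefix + i.toString().padLeft(width, '0')
    }
}

assert expandQueueRange('MachineStream.stream[01-03]') ==
       ['MachineStream.stream01', 'MachineStream.stream02', 'MachineStream.stream03']
assert expandQueueRange('MachineStream.stream01') == ['MachineStream.stream01']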
The configuration details are as follows:

brokerURL – Location of the ActiveMQ or Azure Service Bus server (broker).
queueName – Name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages. It can be a single queue (MachineStream.stream01), multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02), a queue range defined by the syntax MachineStream.stream[01-20] (for example, queueName=MachineStream.stream[01-50]), or, for ActiveMQ only, a wildcard queue name (MachineStream.>). If you have multiple queues and you are using ASB, you must use messageListenerContainerType=multiDestination and define the queues as a comma-separated list or a range.
username – Username used to connect to the ActiveMQ or Azure Service Bus queue(s). For ActiveMQ: username=axedaadmin. For ASB: username=your-azure-service-bus-username.
password – Password used to connect to the ActiveMQ or ASB queue(s).
numConnections – Number of ActiveMQ or Azure Service Bus broker connections. Default is 10 broker connections.
numSessionsPerConnection – Number of sessions per connection; note that each session creates a separate thread. (This key is used infrequently.) Default is 5 sessions per connection. APPLICABLE TO ACTIVEMQ ONLY.
numProcessingThreads – Number of concurrent threads used for processing machine streams messages. Default is 100 concurrent threads.
messageListenerContainerType – Type of message listener container: default (single queue name per connection) or multiDestination (supports multiple queue names per connection).

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This builds all source code and produces a bin archive in the target directory.

For Linux:
# mvn package -DskipTests

For Windows:
c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the bin archive, and enter the resulting directory.

For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3

For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application, passing the configuration file of your choice.

For Linux:
# ./machineStreamsDataRelay.sh <config properties file>
e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties

For Windows:
c:\> machineStreamDataRelay.bat <config properties file>
e.g. machineStreamDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin
2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5

If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner
2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers
2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers
2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers
2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers
2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis
2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers
2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis

7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.)
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This interface defines the message processor method callback that will be called for message processing.
 * Note that this method will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
	/**
	 * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
	 * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
	 * If you add code here that significantly slows down message processing, then there is the potential that
	 * MessageListenerService threads will also block. When the MessageListenerService threads block, this means that
	 * messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large number of messages,
	 * then you may need to adjust your configuration parameters or optimize your processMessage() code.
	 * @param message machine streams message to process
	 */
	public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can supply your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business logic.
 * To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor")
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

	/**
	 * (non-Javadoc)
	 * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
	 *
	 * Process a machine stream message. Note that this method will be called by multiple threads
	 * concurrently. The number of concurrent processing threads is defined in
	 * MachineStreamsConfig.getNumProcessingThreads().
	 * If you add code here that significantly slows down message processing, then there is the
	 * potential that MessageListenerService threads will also block. When the MessageListenerService
	 * threads block, this means that messages will start to back up in the ActiveMQ or Azure Service
	 * Bus message queues. If you are processing a large number of messages, then you may need to
	 * adjust your configuration parameters or optimize your processMessage() code.
	 */
	@SuppressWarnings("unused")
	@Override
	public void processMessage(StreamedMessage message) {
		if (message instanceof StreamedDataItemMessage) {
			StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
			// add your business logic here
		} else if (message instanceof StreamedAlarm) {
			StreamedAlarm alarm = (StreamedAlarm) message;
			// add your business logic here
		} else if (message instanceof StreamedMobileLocation) {
			StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
			// add your business logic here
		} else if (message instanceof StreamedRegistrationMessage) {
			StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
			// add your business logic here
		}
	}
}

The Axeda Platform Machine Streams feature currently supports four different message types:

- StreamedDataItemMessage
- StreamedAlarm
- StreamedMobileLocation
- StreamedRegistrationMessage

For each of the different message types, add your own message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file.

Once you have completed your changes to CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:

Uncomment this line: // @Qualifier("customMessageProcessor")
Comment this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

	private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

	@Autowired
	private MessageDecoder messageDecoder;

	@Autowired
	// If you want to use the default LogMessageProcessor instead of the CustomMessageProcessor,
	// change this Qualifier back to @Qualifier("logMessageProcessor")
	// @Qualifier("logMessageProcessor")
	@Qualifier("customMessageProcessor")
	private MessageProcessor messageProcessor;

	private ExecutorService executorService;
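As a simple illustration of the kind of business logic you might drop into CustomMessageProcessor, the sketch below appends every streamed message to a flat file. It is an assumption, not part of the shipped project: the output path is made up, and toString() is used deliberately so no assumptions are made about the accessor methods on the individual message classes. It is shown in Groovy for brevity; the logic translates directly into the Java class above.

// Minimal sketch (assumption): a processMessage() body that appends each
// streamed message to a flat file. The path is hypothetical.
void processMessage(StreamedMessage message) {
    def out = new File('/var/log/machine-streams/messages.log')   // hypothetical path
    out.parentFile.mkdirs()
    // processMessage() is called by many threads concurrently, so serialize file writes
    synchronized (CustomMessageProcessor) {
        out.append("${new Date()} ${message.getClass().simpleName} ${message}\n")
    }
}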
Complete information about installing and using the Axeda Machine Streams Data Relay Project is available here. This page provides the files for the Axeda Machine Streams Data Relay Project. From the links below, select the project you want to use.

For Linux users:
- ready-to-run (bin) files: download the machine-streams-data-relay-1.0.3-bin.tar.gz archive
- full Maven project files: download the machine-streams-data-relay-1.0.3-project.tar.gz archive

For Windows users:
- ready-to-run (bin) files: download the machine-streams-data-relay-1.0.3-bin.zip archive
- full Maven project files: download the machine-streams-data-relay-1.0.3-project.zip archive
This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required for proper installation and configuration of an Apache ActiveMQ server to use with the Axeda Machine Streams service (which is supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide available from the Axeda Support site, http://help.axeda.com. This patch overlay needs to be applied to the v5.8.0 ActiveMQ server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference, Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.
Hello everyone,

This post is meant to fill the gap left by "Basic Rules of ThingWorx Development". You can follow these rules even before starting the development process, and keep them in mind to have an organized and easy-to-maintain application. I will update this post in the future with more best practices and advice.

Best Practices and suggestions:

In order to make clean and quick progress in any project, the approach should be modular. If the modular approach is implemented, the development process should also be thought of in a modular way. This will give much-needed independence to each individual developer, especially if the team concurrently works on the same instance. Some rules need to be in place in order for the project to be as smooth as possible:

1. Every developer must have their own user. This is more important when developing on the same ThingWorx instance, but it's a good practice when developing on individual instances as well.
2. Every developer will be responsible for complete modules, from the respective screens of the GUI to the functionality services and business logic.
3. If concurrent work on the same Entity needs to happen, then communication between the developers and time-sharing on that Entity is needed, without developers overwriting each other's code. Don't go into edit mode if someone else is already editing; that will get you to a dead end.
4. For point no. 3 to work, after editing an Entity each user must press the Cancel Edit button and leave that Entity in View mode.
5. When searching for services or properties, developers should avoid pressing on the name of the Entity, which is a link that directly opens the Entity in Edit mode. They should instead use the button with the magnifying glass to the left of the name, which opens the Entity in View mode.
6. As a result of the modular approach, each module will have its own Utility Thing that will contain services, properties, events, and subscriptions that help develop the functionality for that module.
7. Each module will have its own tags, and the format could be: <Client_Name><GUI/Business><Module_Name>
8. The integration of the modules will be done in the Master either by a single person in charge of that Master, or by each developer, one at a time.
9. Depending on the case, the Data Model could be treated as a module in its own right, or it can be integrated into each module if the project permits.

How to manage multiple users working on the same code in Composer (thanks to Pai Chung):

Currently ThingWorx within the development environment allows you to heavily document all your work, including 'Save with Comment'. We encourage the use of the Documentation field and the 'Save with Comment' option. However, development is generally not isolated to one environment. ThingWorx provides several ways to back up the information:

- Backup – a true database backup that creates an additional database in ThingworxBackupStorage, and can basically be used as a restore by copying it back into ThingworxStorage.
- Export to ThingworxStorage – a full model export (with or without data) that can be triggered at any time. It can use date filters to export according to the Modified date. This is server-side.
- Export to File – allows you to export a single entity or a group of entities/data according to a variety of filters. This is client-side.
- Export to Source Controlled Entities – allows you to export to a file folder structure or a Zip that can easily be checked into a source control system.

How to approach Source Control:

1. After some initial modeling, Export to Source Controlled Entities and check this into your source control system.
2. From this point forward, all developers have to follow a check-in/check-out process.
3. Every time an Entity Group security setting is made, Export to ThingworxStorage and also check that into source control, overwriting the previous export.
4. All in-use Extensions should be in one zip that also resides in source control.

To do a restore or deploy:

1. Install the Platform.
2. Install extensions.
3. Import from ThingworxStorage the last export checked in.
4. Import each single Entity file, in the proper order.
5. Import each single Data file.
6. Clean up dead entities (if there is a reference list).

Additional steps to take to help safeguard the development:

1. Make sure the automatic backup is running.
2. Export each edited Entity to a subfolder named for the date of the edit.
3. Run a full Export to ThingworxStorage every day after development starts – this can be scripted and triggered by a timer or scheduler subscription (<Server>/Thingworx/ExportDatabase/?WithData=true), as sketched after this post. In this way you have a backup of everything that was there before you started working each day, so you can roll back if an error occurs.

CONTINUED 7 Sep 2015

How to organize wiring needs when developing the GUI:

Starting from the idea that we can divide the GUI elements into Display Elements and Action Elements, I have created a common form to be filled with the information necessary for the wiring of each element:

- UI Element Type: Display Element / User Action Element
- Thing Name: name of the Thing where the data/service is found
- Service Name: service inside the Thing that returns the data / is the subject of the action
- Property(ies) Name: Thing property / column name (when the service returns an infotable) for Display Elements; input parameters for the service to be run for User Action Elements
- Additional Logic: additional information regarding the way the information sources change when preconditions are met. Usually this means new services or mashup logic is needed.

I suggest that an additional companion document to the GUI description document be created. This document will contain the previous form (table) for each screen/slide, so that the work on a specific screen/slide can be done independently.

To be continued...
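As a minimal sketch of the scripted daily export mentioned in step 3 above – run from cron or any external scheduler, assuming an administrator account with access to the /Thingworx/ExportDatabase endpoint quoted in the post. The server name and credentials are placeholders:

// Minimal sketch: trigger the full ThingWorx export over HTTP.
// Server and credentials are assumptions - substitute your own.
def server = 'https://twx.example.com'                   // hypothetical server
def url    = new URL("${server}/Thingworx/ExportDatabase/?WithData=true")
def conn   = url.openConnection()
def auth   = 'admin:changeme'.bytes.encodeBase64()       // hypothetical credentials
conn.setRequestProperty('Authorization', "Basic ${auth}")
conn.connect()
println "Export triggered, HTTP ${conn.responseCode}"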
Joint Study by Cognizant and the Economist Intelligence Unit (EIU) – Making products smart can deliver game-changing innovation, enriched customer experiences and new, across-the-board levels of efficiency. From R&D and manufacturing, through distribution and after-sales support, product data is changing how products are built, sold and cared for. Our latest research reveals practical steps business leaders can take to benefit from this quickly intensifying and accelerating trend.
A student in the Axeda Groovy course had some good questions. Instead of answering via email, I thought I'd answer here, so we could share the knowledge.

Domain Objects – How can I customize or extend Axeda-provided domain objects? (E.g., I need to store whether a user is an external or internal user.) How can I achieve this?

This is a perfect use case for the Axeda Extended Data API. Documentation on the API can be found in the Axeda Platform v1 API Developer's Reference Guide. A good "Getting Started" topic is available on Axeda Mentor – Getting Started With the Axeda Extended Data API.

To customize/extend/decorate Axeda-provided domain objects, use Extended Objects. Extended Objects can be thought of in the abstract as a collection of custom database table rows. The analogy isn't perfect, but it helps with understanding. The use case in the question is very common. Extended Objects can stand on their own, or they can be associated with an Axeda domain object via the setInternalObjectId() method. In the following sample code, we "decorate" an Axeda User object with two new fields, isExternalUser and companyID:

private void updateAdditionalFieldsForUser(User user, boolean isExternalUser, String companyID)
{
  // GET OBJECT TYPE - This example assumes the ExtendedObjectType has already been created
  ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")

  // GET PROPERTY TYPES
  PropertyType isExternalUserPropertyType = findOrCreatePropertyType(extObjectType, "IS_EXTERNAL_USER")
  PropertyType companyIdPropertyType = findOrCreatePropertyType(extObjectType, "COMPANY_ID")

  /* GET THE EXTENDED OBJECT
   * Note - the example provides a findExtendedObject() method that searches by INTERNAL ID. This linkage via the internal ID
   * associates a domain object to an ExtendedObject, allowing us to "decorate" it with custom attributes
   */
  ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
  if (extObject == null)
  {
    extObject = new ExtendedObject()
    extObject.setExtendedObjectType(extObjectType)
    extObject.setInternalObjectId(user.id.value)
    extObject.addProperty(createExtendedProperty(isExternalUserPropertyType, isExternalUser.toString()))
    extObject.addProperty(createExtendedProperty(companyIdPropertyType, companyID))
    extendedObjectService.createExtendedObject(extObject)
  }
  else
  {
    addOrUpdateExtendedProperty(extObject, isExternalUserPropertyType, "IS_EXTERNAL_USER", isExternalUser.toString())
    addOrUpdateExtendedProperty(extObject, companyIdPropertyType, "COMPANY_ID", companyID)
    extendedObjectService.updateExtendedObject(extObject)
  }
}

/**
 * Adds or updates an extended property.
 *
 * @param extendedObject the extended object to associate with the property
 * @param propertyType the type of the property
 * @param propertyName the name of the property
 * @param propertyValue the value of the property
 */
private void addOrUpdateExtendedProperty(ExtendedObject extendedObject, PropertyType propertyType, String propertyName, String propertyValue)
{
  Property extendedProperty = extendedObject.getPropertyByName(propertyName)
  if (extendedProperty == null)
  {
    extendedProperty = createExtendedProperty(propertyType, propertyValue)
    extendedObject.addProperty(extendedProperty)
  }
  else
  {
    extendedProperty.setValue(propertyValue)
  }
}

/**
 * Retrieves and returns the extended object with the given type and internal id.
 *
 * @param extObjectType the type of the desired extended object
 * @param internalObjectId the internal id of the desired extended object
 * @return the extended object matching the given details or <code>null</code> in case there's no match
 */
private ExtendedObject findExtendedObject(ExtendedObjectType extObjectType, Long internalObjectId)
{
  ExtendedObjectSearchCriteria searchCriteria = new ExtendedObjectSearchCriteria()
  searchCriteria.setExtendedObjectTypeId(extObjectType.getId())
  searchCriteria.setInternalObjectId(internalObjectId)

  List<ExtendedObject> extObjects = extendedObjectService.findExtendedObjects(searchCriteria, -1, 0, "name")

  if (!extObjects?.isEmpty())
  {
    return extObjects[0]
  }
  return null
}

/**
 * Gets a property type given its name and the associated extended object type.
 * If it cannot find the property type, it will create it.
 *
 * @param extendedObjectType the associated extended object type for the property type
 * @param propertyTypeName the property type name
 * @return the property type
 */
private PropertyType findOrCreatePropertyType(ExtendedObjectType extendedObjectType, String propertyTypeName)
{
  PropertyType propertyType = extendedObjectType.getPropertyTypeByName(propertyTypeName)
  if (propertyType == null)
  {
    propertyType = new PropertyType()
    propertyType.setDataType(PropertyDataType.String)
    propertyType.setName(propertyTypeName)
    propertyType.setExtendedObjectType(extendedObjectType)
    extendedObjectService.createPropertyType(propertyType)
    extendedObjectType.addPropertyType(propertyType)
  }
  return propertyType
}

By using Extended Objects, and associating them with Axeda domain objects via an internal ID, we can decorate/extend/customize those domain objects with use-case-specific attributes.
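Reading the decoration back follows the same pattern. Here is a minimal sketch that reuses the findExtendedObject() helper above; the returned map shape and the default values are assumptions for illustration:

// Minimal sketch: read back the custom attributes written by the code above.
// Returns assumed defaults when the user has never been decorated.
private Map readAdditionalFieldsForUser(User user)
{
  ExtendedObjectType extObjectType =
      extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")
  ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
  if (extObject == null) {
    return [isExternalUser: false, companyID: null]   // assumed defaults
  }
  [
    isExternalUser: extObject.getPropertyByName("IS_EXTERNAL_USER")?.value?.toBoolean(),
    companyID     : extObject.getPropertyByName("COMPANY_ID")?.value
  ]
}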
In the ptc-windchill-extension-1.0.0-14.zip archive there is an extension called infotableselector_ExtensionPackage.zip. This extension enables the use of the widget called Infotable Selector, which can be used to clear the selection in a grid. For how to use this widget, take a look at the picture below:
Today we're going to learn how to use the Axeda Platform SDK v2 APIs to upload a file to the platform and create a software package. This document is a work in progress, but we're going to show you everything you need to get started. In my case I am using the very useful and easy-to-use Postman REST client app, available from the Chrome Store. I'll be using some terms below (API object names) that can be found in the documents listed in the bibliography at the end of this article.

Assumptions (replace these with your own versions):
username: joe, password: password1!
platform instance: axedaplatform.example.com

First things first, we need to authenticate to the platform and get a session id (header x_axeda_wss_sessionid). (Note: Postman does not automatically URL-encode query parameters – this can be especially important for the password.)

GET: https://axedaplatform.example.com/services/v1/rest/Auth/login?principal.username=joe&password=password1!

You'll receive a response like the following:

<ns1:WSSessionInfo xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ns1:WSSessionInfo">
    <ns1:created>2015-06-02T15:16:49 +0000</ns1:created>
    <ns1:expired>false</ns1:expired>
    <ns1:sessionId>1a5XXXXX-d9aa-47f2-ac4f-28765ce5dbc5</ns1:sessionId>
    <ns1:sessionTimeout>1800</ns1:sessionTimeout>
</ns1:WSSessionInfo>

Excellent, now we have a session id!

For the rest of the API calls (unless otherwise indicated), all of the following headers are set:
x_axeda_wss_sessionid: 1a5XXXXX-d9aa-47f2-ac4f-28765ce5dbc5
Content-Type: application/xml
Accept: application/xml

The next step is to get our ModelReference:

POST: https://axedaplatform.example.com/services/v2/rest/model/findOne

<?xml version="1.0" encoding="UTF-8"?>
<ModelCriteria xmlns="http://www.axeda.com/services/v2">
  <modelNumber>MyModelName</modelNumber>
</ModelCriteria>

Which will return output like:

<v2:Model xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        id="MyModelName" systemId="6141" label="managed" detail="MyModelName"
        restUrl="https://sandbox.axeda.com/services/v2/rest/model/id/6141">
    <v2:name>MyModelName</v2:name>
    <v2:modelNumber>MyModelName</v2:modelNumber>
    <v2:autoRegisterAssets>false</v2:autoRegisterAssets>
    <v2:type>MANAGED</v2:type>
...
</v2:Model>

The key piece of information we need from that request is the systemId.

A little bit about our file (lorem-ipsum.txt):

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer nec odio. Praesent libero. Sed cursus ante dapibus diam. Sed nisi. Nulla quis sem at nibh elementum imperdiet. Duis sagittis ipsum. Praesent mauris. Fusce nec tellus sed augue semper porta. Mauris massa. Vestibulum lacinia arcu eget nulla.

File size: 307
MD5 sum: 22b229c7ecc49cfa11255beb06c7f4fe

The next step is to create a FileUploadSession and upload our file. This will create for us the FileInfoReference we need to create our SoftwarePackage.

PUT: https://axedaplatform.example.com/services/v2/rest/file/session

BODY:
<?xml version="1.0"?>
<FileUploadSession xmlns='http://www.axeda.com/services/v2'>
  <files>
    <file xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:type='FileInfo'>
      <filename>lorem-ipsum.txt</filename>
      <md5>22b229c7ecc49cfa11255beb06c7f4fe</md5>
      <filesize>307</filesize>
      <contentType>application/text</contentType>
    </file>
  </files>
  <expirationDate/>
  <status/>
  <updatedDate/>
  <username/>
  <version/>
</FileUploadSession>

And our response, if all goes OK (HTTP 200), looks like the following:

<v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">
    <v2:succeeded>
        <v2:success xsi:type="v2:FileUploadSessionSuccessfulOperation">
            <v2:ref>16265</v2:ref>
            <v2:id>16265</v2:id>
            <v2:uploadUri>sftp://DISABLED</v2:uploadUri>
            <v2:session systemId="16265" label="16265" detail="16265"
                restUrl="https://sandbox.axeda.com/services/v2/rest/file/id/16265">
                <v2:files>
                    <v2:file xsi:type="v2:FileInfo" id="1068731" systemId="1068731"
                        label="lorem-ipsum.txt" detail="1068731">
...
        </v2:success>
    </v2:succeeded>
</v2:ExecutionResult>

In this case, we just need the value of <v2:file systemId>, which is 1068731.

TIME TO UPLOAD THE FILE CONTENTS!!!

PUT: https://axedaplatform.example.com/services/v2/rest/file/1068731/content/

Extra headers:
X-File-Name: lorem-ipsum.txt
X-File-Size: 307
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW

BODY: There needs to be a mime-part called 'file-content' that contains the contents of lorem-ipsum.txt:

----WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="file-content"; filename="cfk-lorem-ipsum.txt"
Content-Type: text/plain

----WebKitFormBoundary7MA4YWxkTrZu0gW

Note: If using Postman, SoapUI or another automated tool, this will be handled automatically for you – do not specify a Content-Type header in this case.

And our response, assuming an HTTP 200:

<v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">
    <v2:succeeded>
        <v2:success>
            <v2:ref>1068731</v2:ref>
            <v2:id>1068731</v2:id>
        </v2:success>
    </v2:succeeded>
    <v2:failures />
</v2:ExecutionResult>

This is just confirming our success! Excellent. Now we come to the SoftwarePackage.

We need two key pieces of information, the ModelReference (6141) and the FileInfoReference (1068731):

POST: https://axedaplatform.example.com/services/v2/rest/softwarePackage

HEADERS: our defaults, Content-Type and x_axeda_wss_sessionid

BODY:
<?xml version="1.0" encoding="UTF-8"?>
<SoftwarePackage xmlns="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <name>TEST-REST-PACKAGE</name>
  <model systemId="6141" />
  <version>1.0.0.1</version>
  <primaryAgentsOnly>true</primaryAgentsOnly>
  <retriesEnabled>true</retriesEnabled>
  <instructions>
    <instruction xsi:type="DownloadFileInstruction">
      <file xsi:type="FileInfo" systemId="1068731"/>
      <destinationDirectory>C:\temp</destinationDirectory>
      <compressed>false</compressed>
      <executable>false</executable>
      <pathRelative>false</pathRelative>
      <overwriteExistingEnabled>true</overwriteExistingEnabled>
    </instruction>
  </instructions>
</SoftwarePackage>

And our results:

<v2:ExecutionResult xmlns:v2="http://www.axeda.com/services/v2"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" successful="true" totalCount="1">
    <v2:succeeded>
        <v2:success>
            <v2:ref>TEST-REST-PACKAGE||1.0.0.1</v2:ref>
            <v2:id>45863</v2:id>
        </v2:success>
    </v2:succeeded>
    <v2:failures />
</v2:ExecutionResult>

And PROOF!

I hope this helps you in your projects, and helps demystify the Axeda Platform REST API a little for you. If you prefer to script this flow rather than use Postman, a short sketch of the login step follows the change history below.

Regards,
-Chris

Bibliography (documents available from the Support Portal):
Axeda v2 API/Services Developer's Reference Guide_6.8
Axeda Platform Web Services Developer Reference v2 REST_6.8

Change history:
2015-09-24: Changed HTTP methods of session create and content send from POST to PUT.
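As promised above, here is a minimal Groovy sketch of the authentication step, using only java.net from the standard library. The host and credentials are the article's example values, and the regex extraction of the session id is an illustrative shortcut rather than proper XML parsing:

// Minimal sketch: fetch x_axeda_wss_sessionid via the v1 REST login call shown above.
def host = 'https://axedaplatform.example.com'
def user = URLEncoder.encode('joe', 'UTF-8')
def pass = URLEncoder.encode('password1!', 'UTF-8')     // remember: URL-encode the password
def conn = new URL("${host}/services/v1/rest/Auth/login?principal.username=${user}&password=${pass}").openConnection()
def body = conn.inputStream.text
def sessionId = (body =~ /<ns1:sessionId>([^<]+)<\/ns1:sessionId>/)[0][1]
println "Session id: ${sessionId}"
// Use sessionId as the x_axeda_wss_sessionid header on subsequent v2 calls.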
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform, there are two classes of API: Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of the v1 APIs are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some subsets of data this can be inadequate. Enter the v2 API, which introduces pagination.

One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default.

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results
def tcount = 0
while ( (results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
  results.dataItems.each { res ->
    tcount++
  }
  criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you're then going to call a find. Currently both the count*() and find functions compute total results, and calling both doubles the execution time of those two calls. Total count is only computed when running the first find() operation, so the code pattern above is so far the most efficient way I've seen to run these operations on the platform.
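If you use this pattern in many custom objects, it can be worth wrapping it once. Here is a small sketch of a reusable helper, built only from the criteria and bridge calls already shown above; the helper itself (name, signature, return value) is an illustrative assumption:

import static com.axeda.sdk.v2.dsl.Bridges.*

// Sketch: apply a closure to every historical value for an asset, page by page,
// hiding the pageNumber/totalCount bookkeeping shown above.
def eachHistoricalValue(String assetId, String startDate, String endDate, Closure handle) {
    def criteria = new HistoricalDataItemValueCriteria()
    criteria.assetId    = assetId
    criteria.startDate  = startDate
    criteria.endDate    = endDate
    criteria.pageNumber = 1
    criteria.pageSize   = 100
    def seen = 0
    def results
    while ((results = dataItemBridge.findHistoricalValues(criteria)) != null && seen < results.totalCount) {
        results.dataItems.each { v -> seen++; handle(v) }
        criteria.pageNumber = criteria.pageNumber + 1
    }
    return seen
}

// Usage: process a window of values without worrying about page boundaries.
def total = eachHistoricalValue('9701', '2014-07-23T12:33:00Z', '2014-07-23T12:44:00Z') { v ->
    // handle each value here
}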
So having covered how to do this in code (custom objects), let's turn our attention to the REST APIs – the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.

So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/xml
Accept: application/xml

BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
  <assetId>9701</assetId>
  <startDate>2014-07-23T12:33:00Z</startDate>
  <endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS (note the totalCount attribute; this sample result happens to come from an asset find, but the shape is the same):

<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
   <v2:criteria pageSize="100" pageNumber="1">
      <v2:name>*</v2:name>
      <v2:propertyNames/>
   </v2:criteria>
   <v2:assets>
   </v2:assets>
</v2:FindAssetResult>

Or JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/json
Accept: application/json

BODY:
{
  "id": 9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation will still apply. Creative use of the query criteria will allow you to work around these limitations as we continue to improve the v2 API.

Regards,
-Chris
One of the signature features of the Axeda Platform is our alarm notification, signalling and auditing capabilities. Our dashboard offers a simplified view into assets that are in an alarm state, and provides interaction between devices and operators. For some customers the dashboard may be too extensive for their application needs. The Axeda Platform from version 6.6 onward provides a number of ways of interacting with Alarms that allow you to present this data to remote clients (Android, iOS, etc.) or to build extended business logic around alarm processing.

If one were to create a remote management application for Android, for example, the REST APIs are available to interact with Assets and Alarms. For aggregate operations where network traffic and round-trip time can be a concern, we also have our Scripto API, which allows you to use the Custom Object functionality to deliver information on many different aggregating criteria, and allows developers to get the data needed to build the applications that solve their business requirements.

Shown below is a REST API call you might make to retrieve all alarms between a certain time and date.

POST: https://INSTANCENAME/services/v2/rest/alarm/find

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:date xsi:type="v2:BetweenDateQuery">
    <v2:start>2015-01-01T00:00:00.000Z</v2:start>
    <v2:end>2015-01-31T23:59:59.000Z</v2:end>
  </v2:date>
  <v2:states/>
</v2:AlarmCriteria>

In a custom object, this would look like the following:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.BetweenDateQuery()
q.start = new Date()
q.end = new Date()
ac = new AlarmCriteria(date:q)
aresults = alarmBridge.find(ac)

Using the same API endpoint, here's how you would retrieve data by severity:

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:severity xsi:type="v2:GreaterThanEqualToNumericQuery">
    <v2:value>900</v2:value>
  </v2:severity>
  <v2:states/>
</v2:AlarmCriteria>

Or in a custom object:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.GreaterThanEqualToNumericQuery()
q.value = 900
ac = new AlarmCriteria(severity:q)
aresults = alarmBridge.find(ac)

Currently the query types do not map properly in JSON objects – use XML to perform these types of queries via the REST APIs. (A sketch combining both criteria in one query follows the references below.)

References:
Axeda v2 API/Services Developer's Reference Guide 6.6
Axeda Platform Web Services Developer Reference v2 REST 6.6
Axeda v2 API/Services Developer's Reference Guide 6.8
Axeda Platform Web Services Developer Reference v2 REST 6.8
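Criteria fields can also be combined. The sketch below queries for alarms in a date window at or above a given severity, using only the query classes and AlarmCriteria fields shown in the examples above; the name of the alarms collection on the result object is an assumption:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*

// Sketch: combine the two queries shown above in a single AlarmCriteria -
// alarms raised in January 2015 with severity >= 900.
def dateQ = new BetweenDateQuery()
dateQ.start = Date.parse('yyyy-MM-dd', '2015-01-01')
dateQ.end   = Date.parse('yyyy-MM-dd', '2015-01-31')

def sevQ = new GreaterThanEqualToNumericQuery()
sevQ.value = 900

def ac = new AlarmCriteria(date: dateQ, severity: sevQ)
def aresults = alarmBridge.find(ac)
aresults.alarms.each { alarm ->        // 'alarms' collection name is an assumption
    // process each alarm here
}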
Background:
Firewall-Friendly Agents can be configured for server certificate authentication in the Axeda Builder project or via the Axeda Deployment Utility. When server certificate authentication is configured, the Agent compares the certificate chain sent by the Platform to a local copy of the CA certificate chain stored in the SSLCACert.pem file in the Agent's home directory.

The certificate validation checks three things:

Does the name of the Platform certificate match the name in the request?
Does the CA certificate match the CA certificate that signed the Platform certificate?
Are the Platform and CA certificates both unexpired?

If the answer to any of these questions is "no", the connection is refused and the Agent does not communicate further with the Platform. To determine whether certificate trouble is the issue, check the Agent log: EKernel.log or xGate.log.

Recommendation:
For Agent-Platform communications, we recommend always using SSL/HTTPS. If the Agent is not configured to validate the server certificate (via the trusted CA certificate), the system is vulnerable to a number of security attacks, including "man in the middle" attacks. This is critically important from a security perspective.

Note: For on-premise customers, if the Platform certificate needs to change, always update the SSLCACert.pem file on all Agents before updating the Platform certificate. (If the certificate is changed on the Platform before it is changed on the Agents, communications from the Agents will stop.)

Note: Axeda ODC automatically notifies on-demand customers about any certificate updates and renewals. At this point, though, Axeda ODC certificate updates are not scheduled for several years.

Finally, we recommend that your Axeda Builder project always specify "Validate Server Certificate" and set the encryption level to the strongest level supported by the Web server. Axeda recommends 168-bit encryption, which will use one of the following encryption ciphers: AES256-SHA or DES-CBC3-SHA.

Need more information?
For information about configuring and managing Agent certificate authentication, see the Using SSL with Axeda® Platform Guide.
Background:
If a Gateway/Connector Agent is offline or unable to connect to the Axeda Cloud Server, it uses an internal message queue to store information until the connection is restored. The message queue size is configured in the Axeda Builder project. By default, the queue is 200KB in size. Depending on how frequently your Agent sends data, or how much data it is collecting and trying to send, 200KB may be too small. If the queue is too small, the data will "overflow" the queue. The queue is kept in memory only; data is not stored to disk, and when the queue overflows the oldest data is removed first (First-In-First-Out, FIFO). If you see queue overflow error messages in the Agent log (either EKernel.log or xGate.log), it may be time to change the size of the outbound message queue.

The correct size setting for the Agent outbound message queue takes three variables into consideration:

How much information are you sending?
What is the maximum expected duration of a loss of connection to the Internet (Cloud Server)?
How much memory is available to your process?

The more information the Agent is trying to send, the larger the queue size setting should be. Consider also that if your Agents are offline (disconnected) for a long period of time, they will likely accumulate a lot of data, which may overflow the outbound message queue. If this is the case, you'll need to increase the queue or risk losing data.

Recommendation:
Consider how the Agent operates (offline/online data collection) and how much data may be queued. When selecting the size of the queue, it's important to maintain a balance between protecting against data loss and not occupying too much memory. If you do determine that you need to increase the outbound message queue size, note that Axeda recommends a maximum outbound message queue size of about 2MB. A rough sizing calculation is sketched at the end of this tip.

Need more information?
For information about specifying the Agent outbound message queue size, see the online help in Axeda® Builder (Enterprise Server Settings). For information about how the Agent delivers data to the Platform (via EEnterpriseProxy/xgEnterpriseProxy), see the Agent user's guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF). Axeda Support Site links: Axeda® Gateway User's Guide, Axeda® Connector User's Guide.
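As promised, a back-of-the-envelope sizing sketch in Groovy. The 200KB default and ~2MB ceiling come from the text above; the message size, send rate, and offline window are hypothetical and should be replaced with figures measured from your own Agent:

// Hypothetical workload figures; substitute your own measurements
def bytesPerMessage = 512     // average serialized size of one queued message
def messagesPerMinute = 6     // e.g., one data post every 10 seconds
def maxOfflineMinutes = 240   // worst case: 4 hours without connectivity

def requiredBytes = bytesPerMessage * messagesPerMinute * maxOfflineMinutes
println "Required queue size: ${requiredBytes / 1024} KB"  // ~720 KB for these figures

// Stay within the ~2MB recommended ceiling noted above
assert requiredBytes <= 2 * 1024 * 1024 : "Workload would exceed the ~2MB queue recommendation"

Anything above the 200KB default means the queue size in the Builder project needs to be raised.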
Background:
Axeda Agents can be configured with standard drivers to collect event-driven data, which is then sent to the Platform. Axeda provides many standard event-driven data (EDD) drivers for use with the Axeda Agent (as explained in the Axeda® Agents EDD Toolkit Reference (PDF)). All EDD drivers are configured by an XML file and enabled in Axeda Builder, through the Agent Data Items configuration.

You can configure an EDD driver to send important information from your process to the Agent, including data items, events, and alarms. The manner in which you configure your drivers affects how efficiently your project operates.

Recommendation:
Use drivers to reduce the amount of data sent to the Platform. Instead of sending data items to the Platform, which then generates an event or alarm, it is possible to use the drivers to scan for specific data points or conditions and send an event or alarm directly. Before you can configure your Agents, you first need to determine how often your Agent will need to send data to the Enterprise server. Two example workflows and recommendations:

If you want to monitor a data item every second or two, configure the Agent to do the monitoring.
If you want to trend information once per day, perform that logic at the Enterprise Server.

These examples may address your actual use case, or your needs may fall somewhere in between. Ultimately, the time scale (how often you want to monitor or trend data) and the resulting data volume should drive how your system handles data. More data is available at the Agent, and at a higher frequency, than is needed at the Platform. Processing at the Agent ensures that only the important results are communicated to the Platform, leading to a "cleaner" experience for the Platform.

Using this guidance as a best practice will help reduce network traffic for your customers and ensure the best experience for Enterprise users consuming server data in their dashboards, reports, and custom applications.

Need more information?
For information about the standard EDD drivers, see the Axeda® Agents: EDD Drivers Reference (PDF).
Background:
The frequency with which an Agent checks its connection to the Axeda Cloud Server is called the Agent "ping rate" (also known as the heartbeat). (For Axeda IDM Agents, the ping rate is referred to as the "poll rate"; the meaning is the same.)

Pings are a very important aspect of Firewall-Friendly communication. All communication between the Agent and the Cloud Server is initiated by the Agent. In addition to indicating that the Agent is still active, the ping also gives the Cloud Server an opportunity to send commands back to the Agent on the ping acknowledgement.

The ping rate effectively defines how long users must wait before they can deliver a command or request to an Agent. Typical commands include setting a data item, starting an Access Session, or running a script. The place where the ping rate is most noticeable to system users is when requesting a remote session. When a session request has been submitted by the user, the Cloud Server waits for the next Agent ping in order to send down the command to begin the session process. A longer ping interval means the remote session takes longer to get started. (The same is true of any command initiated from the Axeda Cloud Server.)

Ping traffic comprises the majority of inbound traffic to the Cloud Server. The more frequently Agents ping, the more resources are consumed on the Server and the greater the network bandwidth requirements for the customer. Unnecessarily frequent pings will result in an increase in network traffic on your customer's network.

By default, the ping rate for Firewall-Friendly Agents is 60 seconds, or every 1 minute. The Agent ping rate is set using Axeda Builder when configuring the project. The ping rate can also be set via an action from the Axeda Cloud Server. When set via an action, the new ping rate is in effect until the next Agent restart (at which time the Agent reverts to the default ping rate set in the project).

The Axeda Cloud Server also uses the Agent ping rate to determine when assets are missing. One of the model settings defines how many missed pings (or missed pings and time) will cause a device to be marked as missing. The default setting for new models is to mark assets as missing after they've missed 3 consecutive pings.

Recommendations:
Make sure that your Agents' ping rates are set to reasonable frequencies. The ping rate should be set based on use case and not necessarily volume. The recommended practice is to never set the ping interval below 60 seconds; where possible, set it to 2 minutes or higher. In the end, it is often user expectations around starting Access sessions that drive the ping rate value; the arithmetic behind these trade-offs is sketched at the end of this tip.

If only occasional user access is required, one recommendation is to dynamically adjust the ping rate when conditions require expedited communication with the Cloud Server. One use case is to expedite a remote session when a device is in an alarm condition or when an end user needs assistance; in this case you would temporarily shorten the ping interval. This can be done using an action from the Cloud Server, by downloading a software package ping rate update, or by Agent extension using the SDK. (For information about using the Agent SDK, see the Axeda® Platform Extending Axeda® Agents PDF.)

You can configure alerts to indicate if an asset is missing. Axeda recommends that you configure the alert to a reasonable time given your resources and the expense of tracking every missing asset. A reasonable missing-asset alert for your organization may be 1-2 days, meaning the Server generates the missing-asset alert only after the asset has been missing for one or two days, based on its ping rate; an asset should be marked as missing only after 15 missed pings or 30 minutes (whichever is less). The most common cause of a missing asset is not an issue with the device but rather a loss of Internet connectivity.

Note: Any communication from an Agent also serves the function of a ping. For example, if the ping rate is set to 30 minutes and the device is sending a data value every 5 minutes, the effective ping rate is 5 minutes.

Need more information?
For information about specifying the Agent ping rate, see the online help in Axeda® Builder (Enterprise Server Settings). If setting the ping rate from Platform actions or verifying Agent ping rates, see the online help of the Axeda® Connected Management Applications.
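As promised, a short Groovy sketch of the latency and server-load arithmetic. The 2-minute interval and the 3-missed-pings default come from the text above; the fleet size is hypothetical:

// Hypothetical fleet size; 120 seconds is the recommended minimum practice
def agents = 10000
def pingIntervalSec = 120

// Worst-case wait before a server-initiated command (e.g., a remote session) begins
println "Maximum command latency: ${pingIntervalSec} seconds"

// Aggregate inbound ping load on the Cloud Server
println "Inbound pings per second across the fleet: ${agents / pingIntervalSec}"  // ~83/sec

// Missing-asset detection with the default model setting of 3 consecutive missed pings
println "Asset marked missing after: ${3 * pingIntervalSec / 60} minutes"  // 6 minutes

Halving the interval to 60 seconds doubles the inbound ping load; raising it to 5 minutes cuts the load by more than half but makes users wait up to 5 minutes for a remote session to start.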
Background:
In very rare situations, the Firewall-Friendly Agent process may stop running. If the Agent is not running, no machine monitoring or communication with the Cloud Server is possible.

Recommendation:
WatchDog is a little-known yet very helpful feature available with Firewall-Friendly Agents. This program lets you monitor whether an Agent is running; if the Agent process fails, WatchDog can bring it back up! WatchDog can also be configured to watch other processes.

You can configure WatchDog to run as a service (for Windows) or a daemon (for Linux). The WatchDog configuration file specifies the process(es) to be monitored and what to do when one exits: either attempt to restart the process or restart the system.

Note: WatchDog detects only whether a watched process exits. It will not detect or report on processes that may be "hanging".

Need more information?
For information about configuring and using WatchDog, see the Agent User's Guide for your Agent: either Axeda® Platform Axeda® Gateway User's Guide (PDF) or Axeda® Platform Axeda® Connector User's Guide (PDF).
Background:
Customer machines create files containing data, configuration, log data, and more. These files may contain critical information for service technicians about the asset and its data. How can you make sure you get this important information to the Cloud as soon as it's available? Use File Watcher, a standard tool available with Firewall-Friendly Agents.

You can configure File Watcher to monitor a directory or file specification and automatically upload the file(s) to the Axeda Cloud. By monitoring selected directories and files on the device, the File Watcher determines which files have changed or appeared and then uploads those files to the Axeda Cloud Server. Sometimes the file being written or copied into the file watch target is large enough that the write takes time to complete. Without the right precautions, the file upload may start before the file creation process has completed.

Recommendations:
To ensure that large files don't cause problems, we recommend using a "move" or "rename" operation, instead of a file copy operation, to place your file into a watched directory. Unlike a "copy" operation, atomic operations like move and rename ensure that the Agent doesn't detect a file in the midst of growing and prematurely begin to upload it (a sketch of this pattern appears at the end of this tip).

In situations when it isn't possible to use the atomic "move" or "rename" operations, you can implement a delay. Setting a delay in the File Watcher configuration will prevent the Agent from sending files to the Axeda Cloud Server before those files have transferred completely to the watched directory.

Finally, use file compression wisely. You need to balance the benefits of compressing files before sending against the potential adverse impact that compression has on the Agent. Compressing very large files BEFORE sending to the Platform will affect the Agent; compressing smaller files, however, can be beneficial. If the Agent's computer is of lower power, the CPU may have to work overtime to compress huge files or to send huge uncompressed files. As a general rule, compressing files before sending is recommended. However, depending on your Agent and network setup, it may prove more beneficial to stream uncompressed files rather than compress first and then send.

Need more information?
For information about configuring and using file watchers, see the Axeda® Builder User's Guide (PDF) or the online help in Axeda Builder.
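As promised, a minimal Groovy sketch of the write-then-move pattern. The paths are hypothetical; the key point is that the final move into the watched directory is atomic when the staging directory is on the same filesystem:

import java.nio.file.*

// Hypothetical paths: stage the file outside the watched directory first
def staging = Paths.get("/tmp/staging/report.log")
def watched = Paths.get("/opt/axeda/watched/report.log")

Files.createDirectories(staging.parent)
Files.createDirectories(watched.parent)

// Write (possibly slowly) into the staging area, not the watched directory
staging.toFile().withWriter { w ->
    w.writeLine("large payload written incrementally...")
}

// The atomic move guarantees the File Watcher never sees a half-written file
Files.move(staging, watched, StandardCopyOption.ATOMIC_MOVE)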
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also provided an easy and powerful way to deliver SMS notifications right to your phone: email to text!

Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are now ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find here: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
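A minimal sketch of the service invocation, assuming a Mail Thing named 'NotificationMailServer' and a Verizon number; the parameter names here are typical for the MailServer SendMessage service, but check the service definition on your own Thing:

// Hypothetical Thing name and addresses; vtext.com is Verizon's email-to-SMS gateway
Things["NotificationMailServer"].SendMessage({
    from: "alerts@mycompany.com",
    to: "5551234567@vtext.com",  // the email address of the SMS number
    subject: "Asset alert",
    body: "Tank pressure exceeded threshold at " + new Date()
});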
Connecting to other databases was a popular request in our training feedback, so we've just added a video tutorial in the Wiki for connecting ThingWorx to SQL Server (or SQL Server Express). See topic 7.04 or go to the Video Appendix. In essence, connecting to other databases like MySQL or Oracle works the same way, except that you will have to change the Database URL and JDBC driver class; typical values for both are sketched below.
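For reference, here are the standard JDBC URL and driver-class pairs for those two databases (confirm against the driver version you actually install; servername, databasename, and SID are placeholders):

MySQL:
jDBCConnectionURL - jdbc:mysql://servername:3306/databasename
jDBCDriverClass - com.mysql.jdbc.Driver

Oracle:
jDBCConnectionURL - jdbc:oracle:thin:@servername:1521:SID
jDBCDriverClass - oracle.jdbc.OracleDriver

Also let me take this opportunity to wish everyone a blessed holiday season!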
One commonly asked question is what the correct settings are for the Configuration Tables tab when creating/setting up a Database Thing to connect to a SQL Server (2005 or later) database. There are a couple of ways to do this, but the tried and true settings are listed below.

connectionValidationString - SELECT GetDate()
jDBCConnectionURL - jdbc:sqlserver://servername;databaseName=databasename
jDBCDriverClass - com.microsoft.sqlserver.jdbc.SQLServerDriver
Max number of connections in the pool - 5 (this can be modified based on the number of concurrent connections required)
Database User Name - databaseusername
Database Password - databaseuserpassword

The JDBC driver file sqljdbc4.jar is installed with the ThingWorx server by default. It is located in TomcatDir\webapps\Thingworx\WEB-INF\lib\
Remember that when you are calling an external URL to fetch data via an API call to another system, you must encode special characters explicitly. For example, the URL that you might type into a browser to test may look like this:

https://someserver.somwhere.com:443/apicall?parameter1=test string&parameter2=test^number

but when scripting that into a string variable you'll need to replace the space and the caret with the proper encoded values (%20 and %5E):

var headers = {};  // add any custom request headers here

var params = {
    username : "me",
    password : "password",
    url : "https://someserver.somwhere.com:443/apicall?parameter1=test%20string&parameter2=test%5Enumber",
    ignoreSSLErrors : false,
    timeout : 60,
    headers : headers
};

var result = Resources['ContentLoaderFunctions'].LoadXML(params);

Also note that in this instance we're making a secure connection, so port 443 (typically the default) was explicitly specified.
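Rather than hand-encoding each character, you can let JavaScript's built-in encodeURIComponent do the escaping for individual parameter values. A minimal sketch using the same example URL:

// encodeURIComponent escapes spaces, carets, ampersands, etc. in each value
var base = "https://someserver.somwhere.com:443/apicall";
var url = base
    + "?parameter1=" + encodeURIComponent("test string")   // -> test%20string
    + "&parameter2=" + encodeURIComponent("test^number");  // -> test%5Enumber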