IoT Tips

The Axeda Platform provides several mechanisms for placing user-defined pages or UI modules into dashboards, and for letting end users host AJAX-based applications on the same instance their data is retrieved from. This simple application illustrates the use of jQuery to call Scripto and return a JSON-formatted array of current data for an Axeda asset.

Prerequisites:
- First steps taken with Axeda Artisan
- Basic understanding of HTML, JavaScript and jQuery
- Axeda Platform v6.5 or greater (Axeda Customers and Partners)
- Artisan project attached to this article

Features:
- Authentication from a web app
- Use of the CurrentDataFinder API
- Scripto from jQuery

Files of note (locations are from the root of the Artisan project):
- index.html – main HTML index page: ..\artisan-starter-html\src\main\webapp\index.html
- app.js – JavaScript code to build the application and call Scripto: ..\artisan-starter-html\src\main\webapp\scripts\app.js
- axeda.js – Axeda web services JavaScript code: ..\artisan-starter-html\src\main\webapp\scripts\axeda.js
- DataItemsWithScripto.groovy – custom object on the Axeda Platform: ..\artisan-starter-scripts\src\main\groovy\DataItemsWithScripto.groovy

Further reading:
- Developing with Axeda Artisan
- Extending the Axeda Platform UI - Custom Tabs and Modules
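As a rough illustration of the pattern the attached app.js follows, a jQuery call to Scripto might look like the sketch below. The instance URL, the credentials, and the assetId parameter name are assumptions here; the actual parameters are defined by the DataItemsWithScripto.groovy custom object in the attached project, so treat this as a sketch rather than the project's exact code.

// Minimal sketch: call an Axeda Scripto custom object from jQuery and log the JSON result.
// Assumes the Scripto execute endpoint form /services/v1/rest/Scripto/execute/<scriptName>
// and an "assetId" parameter consumed by the custom object (both assumptions, not taken from the project).
$.ajax({
    url: "https://yourinstance.axeda.com/services/v1/rest/Scripto/execute/DataItemsWithScripto",
    type: "POST",
    dataType: "json",
    data: {
        username: "yourUser",      // or pass a sessionid obtained from a login call
        password: "yourPassword",
        assetId: "12345"           // hypothetical parameter name
    },
    success: function (currentData) {
        // currentData is the JSON array of current data items returned by the custom object
        console.log(currentData);
    },
    error: function (xhr, status, err) {
        console.error("Scripto call failed: " + status);
    }
});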
View full tip
Sometimes you need the values from different ThingTemplate members in ONE grid. It would therefore be handy to join two "GetImplementedThingsWithData" results into a single infotable. Below is a script that works in general, as long as you don't mix datatypes on columns with the same name. I'm very interested if someone can find a much easier solution. The Union function was the only one I found suited to the task, but it requires preparing the infotables up front.

Input: Table1: INFOTABLE, Table2: INFOTABLE
Output: result: INFOTABLE

Here is the snippet:

// Define params for an infotable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};

// Define column 1
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';

// Two single-column infotables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);

// Define the cell to add to the infotables
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";

// Loop through Table1's DataShape and collect its field names
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}

// Loop through Table2's DataShape and collect its field names
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}

// Use the inner-join (Intersect) functionality to keep only the column names that exist in both tables
var params = {
    columns1: "field" /* STRING */,
    columns2: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);

// Loop over the result to build a comma-separated column list
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}

// Reduce Table1 to only the common columns
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);

// Reduce Table2 to only the common columns
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);

// At the END, JOIN the tables together (does not work if columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
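For context, here is a hedged sketch of how the two input tables might be produced and passed into the snippet above. The template names and the wrapper service name are purely hypothetical; this assumes the snippet has been saved as a service with Table1 and Table2 infotable inputs on some utility Thing.

// Hypothetical wrapper: gather data from two ThingTemplates and join it with the snippet above,
// assuming the snippet is saved as a service named "JoinImplementedThingsData" on "TableUtilityThing".
var Table1 = ThingTemplates["PumpTemplate"].GetImplementedThingsWithData();
var Table2 = ThingTemplates["ValveTemplate"].GetImplementedThingsWithData();

var result = Things["TableUtilityThing"].JoinImplementedThingsData({
    Table1: Table1 /* INFOTABLE */,
    Table2: Table2 /* INFOTABLE */
});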
View full tip
Javascript, everyone knows it, at least a little bit. What if I told you that you could do serious data acquisition with just a little bit of Javascript, and that you may already have the tools to do it right now on your "off the shelf" device? Node.js is a command-line implementation of Javascript that can be run on common, credit-card-sized devices like the Raspberry Pi or the Intel Edison. If you already know about Node.js, you may have encountered its non-blocking, asynchronous, "callback" style of programming, which can be a little different from most other languages, which block or wait for commands to complete. While this can be a benefit for increasing performance, it can also be a barrier to entry for new users. This is the problem that Node Red really solves.

Node Red is a web-based Integrated Development Environment (IDE) that turns the callback-style Javascript programming of Node.js into a series of interconnected nodes, each of which represents a Javascript function connected by a callback to another node/function. A simple hello world program in Node Red would look something like the flow in the screenshot (with annotations in red). You can re-create this program using the Node Red IDE yourself. Here is a brief video (with no sound) which should familiarize you with how to create your own hello world flow. Video Link : 1333

How can you install Node Red on your own system to try it out? The good news is, if you have a Raspberry Pi 2 with NOOBS installed on it, Node.js and Node Red come pre-installed. If you do not already have it installed, or want to install it on your own system, it is still pretty simple. Here are the steps:

1. Download and install Node.js (https://nodejs.org/en/download/)
2. Run the command: sudo npm install -g --unsafe-perm node-red
   (Omit the sudo on Windows; see http://nodered.org/docs/getting-started/installation.html for more info.)
3. You now have Node Red. To run it, just type node-red on your command line.
4. Using your web browser, go to http://localhost:1880 and the Node Red IDE will appear in your browser.

How about a real hardware integration example? Node Red comes with many built-in nodes, and many more nodes you can add to connect to specific peripherals you may have on your device. Rather than provide a complete tutorial on Node Red, I will focus on using this IDE to re-create a hardware integration that I created in the past using the Java SDK, the Raspberry Pi and the AM2302 weather station (see Weather Applications with Raspberry Pi | ThingWorx). That example contains detailed specifics on attaching the AM2302 temperature/humidity sensor to your Raspberry Pi. I am going to assume you have the hardware already attached to your Raspberry Pi as described in this tutorial (https://learn.adafruit.com/dht-humidity-sensing-on-raspberry-pi-with-gdocs-logging/overview). I am also assuming that you have installed the Python-based sample program described in that tutorial, so you now have a Python script called "AdafruitDHT.py" on your Pi that produces the following output when it is run:

pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $ sudo ./AdafruitDHT.py 2302 4
Temp=22.3*  Humidity=30.6%
pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $

If you don't have any of this hardware installed, you can still proceed with this example and just create your own temperature and humidity values manually.
We are going to connect the output of this Python script directly to ThingWorx and sample its output value every 5 seconds. I will start by assuming you do not have the AM2302 hardware and will create simulated values. I will then replace them with the actual output of the Python script as a final step.

Polling versus Interrupt-Driven Data Collection

In the Java SDK version of this example, we poll for changes in data: every so many seconds our device wakes up and takes a reading. How do we recreate the same effect in Node Red without having to push an inject button every 5 seconds? We need an input node that activates on its own every 5 seconds. The Inject node will do this. Drag out an Inject node and configure it as shown below. This is an input node, so it will be starting a new flow, and it will fire off every 5 seconds from the minute this sheet is deployed.

Simulate Data Collection

Let's generate random humidity and temperature values before getting the actual data. For this node we will use a Function node. Drag one out and configure it as shown below. Here is the Javascript for this node so you can cut and paste it into the dialog:

var tempF = Math.random() * 40 + 60;
var tempC = (tempF - 32) / 1.8;
var humidity = Math.random() * 80 + 20;
msg.payload = {
    "tempF": tempF,
    "tempC": tempC,
    "humidity": humidity
};
return msg;

Remember that the returned message is the message that the next node will receive. The payload property is the standard (default) property of a message that most nodes use to pass data between each other. Here, our payload is an object with all of our simulated data in it.

Let's Test It Out

Connect the two nodes together, add a Debug output node, and deploy your sheet. The completed flow will look like the screenshot below. As soon as you deploy, you should see output in your debug tab, and every five seconds another data sample will be generated.

So how does this data get to ThingWorx? We need to take this data and deliver it to ThingWorx in the form of a REST web service call. This is easier to do than it sounds. First off, let's create a Thing named WeatherStation1 on your ThingWorx server as shown in the screenshot, and give it the properties shown (tempC, tempF and humidity, plus the stationLocation property used later in this example). Next, create an Application Key in the Application Keys section of Composer and assign it to the "Administrator" user. Your keyId will of course be different. This key will be the credential you need to post your data.

Installing the ThingRest Node Red Node

To simplify the process of posting the data to ThingWorx, I have created my own custom node to post data. To install a custom node into your Node Red installation, you have to find the directory Node Red is using to store your sheets. By default this is a directory called ".node-red" in your home directory; on a Raspberry Pi this would be /home/pi/.node-red. If you are running Node Red now, quit it by hitting Ctrl-C and cd into the .node-red directory. Below is the sequence of commands you would issue on your Pi to install the ThingRest node:

cd ~/.node-red
npm install git+https://git@github.com/obiwan314/node-red-node-thingrest.git
node-red

The node package manager (npm) will install this new node automatically into your .node-red directory. Now re-run node-red, go back to your browser, and refresh your Node Red IDE. You should now have a "REST Thing" node.

Adding a REST Thing Node to Your Flow

Drag a REST Thing output node into your flow and configure it as shown below.
Remember, your Application Key will be different from the one shown here. Also, your ThingWorx server URL may be different if your server is not on the same machine you are working on. Now connect it as shown below. When you deploy this sheet, you will be posting data to ThingWorx. Go back to your WeatherStation1 Thing in ThingWorx and use the Refresh button to see your data changing.

Wait, that's it? That's the whole data collection program? Yes. The flow above is the equivalent of the Java SDK code from the Java weather station example.

Now for Some Real Data

As promised, we will now replace the simulated data in the Generate Data node with real data obtained from the "~/projects/Adafruit_Python_DHT/examples/AdafruitDHT.py 2302 4" Python command on your Raspberry Pi, using an Exec node. The Exec node can be found at the very bottom of your node palette. It executes a command and returns the results as msg.payload to the next node in the flow. You may have noticed it has three outputs instead of one; in order, these are the standard output, the standard error and the integer return code of the process. Use the first output to get the results of this command. Now connect this in place of the Generate Data node as shown below.

At this point, we can't connect the collected data to the WeatherStation1 Thing because it is in the wrong format: it is console output, and we need it in the form of a Javascript object. We are going to need a function to parse the console output into a Javascript object. Add the Function node shown below. Here is the Javascript for cut-and-paste convenience:

var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*", "").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%", "").replace("\n", "").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC
};
return msg;

Now msg.payload contains a Javascript object identical to the one we were generating at random, but now it is using real data. Connect up your nodes so they appear as shown below. When you deploy, however, don't expect it to work yet, because there is still one problem you will have to get around: this Python script expects to be run as the root user.

How to Run Node Red as Root

You can start Node Red as root with the following command:

sudo node-red -u /home/pi/.node-red

Note that the -u argument is required to make sure you keep using the pi user's .node-red directory. If you lose your REST Thing node, you are not using the pi user's .node-red directory, but root's instead. If you see any error messages in your debug window, try re-attaching the Debug node to the Collect Data node and see what is being produced by the Exec node. Don't forget to verify that your tempC, tempF and humidity properties are updating in ThingWorx.

Let's Add a GPS Location

You may have noticed that there is a stationLocation property on the WeatherStation1 Thing. To complete this example, let's set that to a fixed location of 40.0568764,-75.6720953,18. Below is the modified Javascript to update in the Parse Data node to add this location.
var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*", "").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%", "").replace("\n", "").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC,
    "stationLocation": "40.0568764,-75.6720953,18"
};
return msg;

What's Next?

Node Red has many more nodes that you can add to your project through the use of the npm command. There is a GPIO node library you can install at https://github.com/monteslu/node-red-contrib-gpio which will give you input and output nodes for the GPIO pins on your Pi as well. This library also supports accessing Arduinos attached to the Pi over a USB cable, which expands the possibilities for data collection and peripheral control. Hopefully this article has exposed you to the many other possibilities for connecting devices to your ThingWorx server. The REST Thing node uses the HTTP REST protocol to talk to ThingWorx. In the near future, with the introduction of the ThingWorx Javascript SDK, a Node Red library can be created that uses the ThingWorx AlwaysOn WebSockets protocol to communicate with your ThingWorx server, which will offer even more capabilities and better performance.
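For readers curious what the REST Thing node is roughly doing under the hood, the sketch below shows an equivalent HTTP call made directly from Node.js, posting the parsed values to the WeatherStation1 Thing. The exact REST path and appKey header usage are assumptions based on common ThingWorx REST conventions rather than the node's actual source; check your server's REST API documentation and substitute your own host and Application Key.

// Hedged sketch: update ThingWorx Thing properties over REST from plain Node.js (no custom node).
// Assumes a PUT to /Thingworx/Things/<thing>/Properties/* with a JSON body of property values
// and an appKey header; host and key values are placeholders.
var https = require('https');

var payload = JSON.stringify({
    tempC: 22.3,
    tempF: 72.1,
    humidity: 30.6,
    stationLocation: "40.0568764,-75.6720953,18"
});

var req = https.request({
    host: 'your-thingworx-server',
    path: '/Thingworx/Things/WeatherStation1/Properties/*',
    method: 'PUT',
    headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(payload),
        'appKey': 'your-application-key-here'   // the Application Key created earlier
    }
}, function (res) {
    console.log('ThingWorx responded with status ' + res.statusCode);
});

req.on('error', function (err) {
    console.error('Property update failed: ' + err.message);
});

req.write(payload);
req.end();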
View full tip
In recent times, one of the frequent questions regarding PostgreSQL is which tools work well with it. With the growing functionality of PostgreSQL, more vendors are willing to produce tools for it. There are lots of tools for management, development and data visualization, and the list is growing. Here, I'm listing a few tools that might be of interest to ThingWorx users.

psql terminal:
The psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can directly interface with the PostgreSQL server. The psql client comes by default with the PostgreSQL database.
Key features:
- Issue queries either through commands or from a file.
- Provides shell-like features to automate tasks.
For more information, refer to http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III:
pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It covers the needs of both admins and normal users, from writing simple SQL queries to developing complex databases.
Key features:
- Open source and cross-platform support.
- No additional drivers are required.
- Supports more than 30 different languages.
Note: pgAdmin III comes by default with the PostgreSQL 9.4 installer.
For more information, refer to http://www.pgadmin.org/download/

phpPgAdmin:
phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create tables, alter tables and query the data using SQL.
Key features:
- Open source and supports PostgreSQL 9.x.
- Requires a web server.
- Administers multiple servers.
- Supports the Slony master-slave replication engine.
phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL:
TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere in the web browser.
Key features:
- Open source and cross-platform support.
- Supports SSH for both the web interface and the database connections.
- GUI with tabbed SQL editors.
TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger:
pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is built in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer.
Key features:
- Open source community project.
- Autodetects PostgreSQL log file formats (stderr, syslog or csvlog).
- Provides reports and statistics on SQL queries.
- Can also be limited to reporting only errors.
- Generates pie charts and time-based charts.
For more information, refer to http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats:
Postgrestats is software with automated scripts to easily view statistics such as commits, rollbacks, and user inserts, updates and deletes over time-based intervals. Postgrestats is installed and executed on the database server and customizes the main conf file. Postgrestats also provides an enterprise application for replication mode and high availability.
Key features:
- Open source and easy-to-set-up installation.
- Takes snapshot reports based on time intervals.
- Optional email-on-update.
- Text-file data storage.
- Also available as an enterprise application, PostgreStats Enterprise.
For more information, refer to http://www.postgrestats.com/subs/docs.html

Slemma:
Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license, priced at $29 per user per month.
Key features:
- Create charts and interactive dashboards by selecting tables.
- Non-developers can easily create visualizations (with no coding).
- Email dashboards automatically to clients or your entire team.
For more information, refer to https://slemma.com/

Ubiq:
Ubiq is a web-based business intelligence and reporting tool for PostgreSQL server. Ubiq creates reports and online dashboards, and can export them in multiple formats. Ubiq is distributed with a commercial license.
Key features:
- Drag-and-drop interface to create interactive charts, dashboards and reports.
- Apply powerful filters and functions to the data.
- Share your work and schedule email reports.
For more information, refer to http://ubiq.co/tour
View full tip
This is a slide deck I created while learning how to post data from an Arduino to ThingWorx using the MQTT protocol.
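For a feel of the publishing side of such a setup, here is a hedged sketch of an MQTT publish written in Node.js with the mqtt npm package. The broker URL and topic name are placeholders, and how ThingWorx consumes the topic (for example via an MQTT extension that maps topics to Thing properties) is covered in the attached deck rather than assumed here.

// Hedged sketch: publish a simulated temperature reading to an MQTT broker every 5 seconds.
// Broker address and topic are hypothetical; adjust to match your own broker and ThingWorx mapping.
var mqtt = require('mqtt');
var client = mqtt.connect('mqtt://your-broker-host:1883');

client.on('connect', function () {
    setInterval(function () {
        var reading = (Math.random() * 40 + 60).toFixed(1);
        // Topic structure is a placeholder; your ThingWorx MQTT configuration defines the real one.
        client.publish('sensors/arduino1/temperature', reading);
    }, 5000);
});

client.on('error', function (err) {
    console.error('MQTT error: ' + err.message);
});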
View full tip
ThingWorx is great for storing large amounts of data coming from your devices, but it can also be used like a traditional, row-based database for information you would like to integrate with your thing data. Attached to this blog entry is a short example of creating an address book database using a DataTable and a DataShape. It does not focus on creating mashups, but sticks with discussing the modeling and service calls you would use to create a simple database.
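To give a flavor of the service calls the attached example walks through, here is a hedged sketch of inserting and querying a row on such a DataTable from a ThingWorx service. The DataTable name (AddressBook) and the DataShape fields (firstName, lastName, phone) are hypothetical; substitute the names from the attached example or your own model, and note that the exact service parameters can vary slightly between ThingWorx versions.

// Hedged sketch: add a row to a DataTable and query it back.
// Assumes a DataTable Thing "AddressBook" whose DataShape has firstName, lastName and phone fields.
var values = Things["AddressBook"].CreateValues();
values.AddRow({
    firstName: "Ada",
    lastName: "Lovelace",
    phone: "555-0100"
});

Things["AddressBook"].AddOrUpdateDataTableEntry({
    values: values /* INFOTABLE */,
    sourceType: "Service",
    source: "AddressBookExample"
});

// Query entries whose lastName matches
var query = {
    filters: {
        type: "EQ",
        fieldName: "lastName",
        value: "Lovelace"
    }
};
var result = Things["AddressBook"].QueryDataTableEntries({
    maxItems: 100,
    query: query
});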
View full tip
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage.

This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each Machine Streams message is logged to stdout.

Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project

The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites

To download, build, and compile the machine-streams-data-relay project, you will need the following:
- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint, this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide", available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring the Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" Reference Guide (available from PTC Support (http://support.ptc.com/)).

Building the Project

This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.

Linux: Click here for machine-streams-data-relay-1.0.3-project.tar.gz

# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3

Windows: Click here for machine-streams-data-relay-1.0.3-project.zip

Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config.properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints - configAMQ.properties

# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties

# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

- brokerURL: Location of the ActiveMQ or Azure Service Bus server (broker).
- queueName: Name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages. It can be a single queue (MachineStream.stream01), a wildcard queue for multiple queues (MachineStream.>), multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02), or a queue range defined by the syntax MachineStream.stream[01-50]. If you have multiple queues and you want to use ASB, you must use the multiDestination listener container type and specify the queues as a comma-separated list or a range.
- username: Username used to connect to the ActiveMQ or Azure Service Bus queue(s). For ActiveMQ: username=axedaadmin. For ASB: username=your-azure-service-bus-username.
- password: Password used to connect to the ActiveMQ or ASB queue(s).
- numConnections: Number of ActiveMQ or Azure Service Bus broker connections. Default is 10 broker connections.
- numSessionsPerConnection: The number of sessions per connection; each session creates a separate thread. APPLICABLE TO ACTIVEMQ ONLY. Default is 5 sessions per connection. (This key is used infrequently.)
- numProcessingThreads: The number of concurrent threads used for processing machine streams messages. Default is 100 concurrent threads.
- messageListenerContainerType: The type of message listener container. default = single queue name per connection; multiDestination = supports multiple queue names per connection.

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.

For Linux:
# mvn package -DskipTests

For Windows:
c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin.tar.gz (or .zip) archive, and enter the resulting directory.

For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3

For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application.

For Linux:
# ./machineStreamsDataRelay.sh <config properties file>
For example: ./machineStreamsDataRelay.sh configOfYourChoice.properties

For Windows:
c:\> machineStreamDataRelay.bat <config properties file>
For example: machineStreamDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin
2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5

If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner
2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers
2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers
2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers
2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers
2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis
2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers
2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis

7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.)
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This class defines the message processor method callback that will be called for message processing.
 * Note that these methods will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
    /**
     * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
     * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the potential that
     * MessageListenerService threads will also block. When the MessageListenerService threads block, this means that
     * messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large number of messages,
     * then you may need to adjust your configuration parameters or optimize your processMessage() code.
     * @param message machine streams message to process
     */
    public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can implement your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business
 * logic. To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor")
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

    /**
     * (non-Javadoc)
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     *
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the
     * potential that MessageListenerService threads will also block. When the MessageListenerService
     * threads block, this means that messages will start to back up in the ActiveMQ or Azure Service Bus
     * message queues. If you are processing a large number of messages, then you may need to adjust your
     * configuration parameters or optimize your processMessage() code.
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports 4 different message types:
- StreamedDataItemMessage
- StreamedAlarm
- StreamedMobileLocation
- StreamedRegistrationMessage

For each of the different message types, you should add your message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file. Once you have completed your changes to the CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:

Uncomment this line: // @Qualifier("customMessageProcessor")
Comment this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // If you want to use the CustomMessageProcessor instead of the default LogMessageProcessor
    // then change this Qualifier to @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
View full tip
One of the signature features of the Axeda Platform is our alarm notification, signalling and auditing capabilities. Our dashboard offers a simplified view into assets that are in an alarm state, and provides interaction between devices and operators. For some customers the dashboard may be too extensive for their application needs. The Axeda Platform, from version 6.6 onward, provides a number of ways of interacting with alarms so that you can present this data to remote clients (Android, iOS, etc.) or build extended business logic around alarm processing.

If one were to create a remote management application for Android, for example, the REST APIs are available to interact with Assets and Alarms. For aggregate operations where network traffic and round-trip time are a concern, our Scripto API is also available; it lets you use the Custom Object functionality to deliver information on many different aggregating criteria, and allows developers to get the data needed to build applications that solve their business requirements.

Shown below is a REST API call you might make to retrieve all alarms between a certain time and date.

POST: https://INSTANCENAME/services/v2/rest/alarm/find

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:date xsi:type="v2:BetweenDateQuery">
    <v2:start>2015-01-01T00:00:00.000Z</v2:start>
    <v2:end>2015-01-31T23:59:59.000Z</v2:end>
  </v2:date>
  <v2:states/>
</v2:AlarmCriteria>

In a custom object, this would look like the following:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.BetweenDateQuery()
q.start = new Date()
q.end = new Date()
ac = new AlarmCriteria(date: q)
aresults = alarmBridge.find(ac)

Using the same API endpoint, here's how you would retrieve data by severity:

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:severity xsi:type="v2:GreaterThanEqualToNumericQuery">
    <v2:value>900</v2:value>
  </v2:severity>
  <v2:states/>
</v2:AlarmCriteria>

Or in a custom object:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.GreaterThanEqualToNumericQuery()
q.value = 900
ac = new AlarmCriteria(severity: q)
aresults = alarmBridge.find(ac)

Currently the query types do not map properly in JSON objects, so use XML to perform these types of queries via the REST APIs.

References:
Axeda v2 API/Services Developer's Reference Guide 6.6
Axeda Platform Web Services Developer Reference v2 REST 6.6
Axeda v2 API/Services Developer's Reference Guide 6.8
Axeda Platform Web Services Developer Reference v2 REST 6.8
View full tip