IoT Tips

The following script takes three parameters (a model name, a device serial number, and a data item name), finds the asset's current location, and uses that longitude to determine the asset's time zone. It then converts the data item timestamp from that time zone to an Eastern Standard Time timestamp.

import groovy.xml.MarkupBuilder
import com.axeda.drm.sdk.Context
import java.util.TimeZone
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import com.axeda.common.sdk.jdbc.*;
import net.sf.json.JSONObject
import net.sf.json.JSONArray
import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder
import com.axeda.drm.sdk.mobilelocation.MobileLocation
import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder

def response
try {
    Context ctx = Context.getUserContext()
    ModelFinder mfinder = new ModelFinder(ctx)
    mfinder.setName(parameters.model_name)
    Model m = mfinder.find()
    DeviceFinder dfinder = new DeviceFinder(ctx)
    dfinder.setModel(m);
    dfinder.setSerialNumber(parameters.device)
    Device d = dfinder.find()
    CurrentMobileLocationFinder cmlFinder = new CurrentMobileLocationFinder(ctx);
    cmlFinder.setDeviceId(d.id.getValue());
    MobileLocation ml = cmlFinder.find();
    def lng = -72.158203125
    if (ml?.lng){
        lng = ml?.lng
    }
    // set boundaries for timezones - longitudes
    def est = setUSTimeZone(-157.95415000000003)
    def tz = setUSTimeZone(lng)
    CurrentDataFinder cdfinder = new CurrentDataFinder(ctx, d)
    DataValue dvalue = cdfinder.find(parameters.data_item_name)
    def adjtime = convertToNewTimeZone(dvalue.getTimestamp(), tz, est)
    def results = JSONObject.fromObject(lat: ml?.lat, lng: ml?.lng, current: [name: dvalue.dataItem.name, time: adjtime.format("MM/dd/yyyy HH:mm"), value: dvalue.asString()]).toString(2)
    response = results
} catch (Exception e) {
    response = [
            message: "Error: " + e.message
    ]
    response = JSONObject.fromObject(response).toString(2)
}
return ['Content-Type': 'application/json', 'Cache-Control': 'no-cache', 'Content': response]

def setUSTimeZone(lng){
    TimeZone tz
    // set boundaries for US timezones by longitude
    if (lng <= -67.1484375 && lng > -85.517578125){
        tz = TimeZone.getTimeZone("EST");
    }
    else if (lng <= -85.517578125 && lng > -96.591796875){
        tz = TimeZone.getTimeZone("CST");
    }
    else if (lng <= -96.591796875 && lng > -113.90625){
        tz = TimeZone.getTimeZone("MST");
    }
    else if (lng <= -113.90625){
        tz = TimeZone.getTimeZone("PST");
    }
    logger.info(tz)
    return tz
}

public Date convertToNewTimeZone(Date date, TimeZone oldTimeZone, TimeZone newTimeZone){
    // oldTimeZone.rawOffset returns the difference (in milliseconds) between that time zone and GMT
    // date.time returns the milliseconds of the date
    long oldDateinMilliSeconds = date.time - oldTimeZone.rawOffset
    Date dateInGMT = new Date(oldDateinMilliSeconds)
    long convertedDateInMilliSeconds = dateInGMT.time + newTimeZone.rawOffset
    Date convertedDate = new Date(convertedDateInMilliSeconds)
    return convertedDate
}
This script illustrates how to call a Groovy script as an external web service.  This example also applies to calling any external web service that relies on a username and password. Parameters: external_username external_password script_name import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.data.CurrentDataFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.data.HistoricalDataFinder import com.axeda.drm.sdk.device.DataItem import net.sf.json.JSONObject import com.axeda.drm.sdk.device.ModelFinder import groovyx.net.http.* import static groovyx.net.http.ContentType.* import static groovyx.net.http.Method.* /** * CallScriptoAsExternalWebService.groovy * * This script illustrates how to call a Groovy script as an external web service. * * @param external_username       -   (REQ):Str Username for the external web service. * @param external_password       -   (REQ):Str Password for the external web service. * @param script_name             -   (REQ):Str Script Name to call. * * */ def result try { validateParameters(actual: parameters, expected: ["external_username", "external_password", "script_name"]) // authentication tokens (username + password) def auth_tokens = [username: parameters.external_username, password: parameters.external_password] http = new HTTPBuilder( 'http://platform.axeda.com/services/v1/rest/Scripto/execute/'+parameters.script_name ) // pass in dummy parameters to the script for illustration def parammap = [key1: "val1", key2: "val2"] // Call the script     http.request (GET, JSON) {       uri.query = auth_tokens + parammap       response.success = { resp, json ->         // traverse the wrapped json response     result = json.wsScriptoExecuteResponse.content.$          }       response.failure = { resp ->         result = response.failure       }      } } catch (Throwable any) {     logger.error any.localizedMessage } return ['Content-Type': 'application/json', 'Content': result] static def validateParameters(Map args) {     if (!args.containsKey("actual")) {         throw new Exception("validateParameters(args) requires 'actual' key.")     }     if (!args.containsKey("expected")) {         throw new Exception("validateParameters(args) requires 'expected' key.")     }     def config = [             require_username: false     ]     Map actualParameters = args.actual.clone() as Map     List expectedParameters = args.expected     config.each { key, value ->         if (args.options?.containsKey(key)) {             config[key] = args.options[key]         }     }     if (!config.require_username) { actualParameters.remove("username") }     expectedParameters.each { paramName ->         if (!actualParameters.containsKey(paramName) || !actualParameters[paramName]) {             throw new IllegalArgumentException(                     "Parameter '${paramName}' was not found in the query; '${paramName}' is a reqd. parameter.")         }     } }
Calling external services from M2M applications is a critical aspect of building end-to-end solutions. Knowing how to apply network timeouts when connecting to external servers can prevent unexpected and problematic network hang-ups. Let's investigate how to create a safe networking flow using HttpClient, HttpBuilder, and Apache's FTPClient class.

Background

Custom Objects called from Expression Rules have a configurable maximum execution time, set by the com.axeda.drm.rules.statistics.rule-time-threshold property. Without this safeguard in place, long-running or misbehaving Custom Objects can fill the internal processing queues and the server will suffer a performance degradation.

In Java (and Groovy) all network calls internally use InputStream.read() to establish the socket connection and to read data from the socket. It is possible for faulty external servers (such as an FTP server) to hang and not respond properly, in which case InputStream.read() will wait indefinitely for data that never arrives. According to the Java spec, InputStream.read() may be uninterruptible while it is waiting for data. This means that even if a Custom Object has exceeded the com.axeda.drm.rules.statistics.rule-time-threshold, the Rule Sniper will still not be able to interrupt the Custom Object's execution while it is blocked in InputStream.read(). Because the Custom Object cannot be stopped, the internal processing queues will eventually fill.

Even though InputStream.read() is uninterruptible, it is still possible to set timeouts so that it can give up on a connection. Beyond that, we want to make sure that the connection is completely disconnected.

Types of Timeouts

There are typically two types of timeouts that should be set when making calls over the web: the Connection Timeout and the Socket Timeout. The Connection Timeout is the maximum amount of time allowed for establishing the bi-directional socket connection between the client and the server. Behind the scenes, establishing a socket connection involves resolving the server's domain name to an IP address and then having the server open a port to connect with the client's port. The Socket Timeout limits the amount of time each socket operation is allowed to take; in particular, it limits how long InputStream.read() will listen for a server's response. If a server is faulty or overloaded it may take a long time (or forever) to respond to a request, and this timeout caps how long the client will wait.

When making any calls from a Custom Object to an external server (either WebService calls or FTP transfers), you should always set the Connection Timeout and the Socket Timeout, and keep them as small as reasonably possible. Failure to do so could unexpectedly impact your Axeda server. Consider a Custom Object that takes an average of 10 seconds to run and is called once a minute to make an external WebService call. This will not cause any issues and the system will be stable. If the external server suddenly suffers a performance degradation and the external WebService call now takes over a minute to run, the execution queue will eventually fill, causing performance degradation to the Axeda system. To protect against this scenario, set the timeouts to limit the call to one minute, and log whenever the time limit is exceeded.
Examples Provided below are examples of properly set timeouts and thorough connection management use HttpClient, HttpBuilder, and FTPClient.  All of these examples assume they are being executed from Custom Objects. By default, set the Connection Timeout to 10 seconds.  In normal circumstances, connections should not take more then 10 seconds.  If they are exceeding this time there is a good chance of networking issues between the client and server. The Socket Timeout can vary per use-case.  The examples provided set the Socket Timeout to 30 seconds, which should be sufficient for typical WebService calls and small to medium sized FTP file transfers.  Depending exactly on what is being done, the timout may have to be increased.  If you expect the call to go over 5 minutes please contact Axeda Support to investigate increasing  com.axeda.drm.rules.statistics.rule-time-threshold property (which defaults to 5 minutes). ​HttpClient​ //HttpClient import org.apache.http.client.HttpClient import org.apache.http.impl.client.DefaultHttpClient import org.apache.http.client.methods.HttpGet import org.apache.http.HttpResponse import org.apache.http.params.BasicHttpParams import org.apache.http.params.HttpParams import org.apache.http.params.HttpConnectionParams int TENSECONDS  = 10*1000 int THIRTYSECONDS = 30*1000 final HttpParams httpParams = new BasicHttpParams() //Establishing the connection should take <10 seconds in most circumstances HttpConnectionParams.setConnectionTimeout(httpParams, TENSECONDS) //The data transfer/call should take <30 seconds.  Adjust as necessary if receiving large data sets. HttpConnectionParams.setSoTimeout(httpParams, THIRTYSECONDS) HttpClient hc = new DefaultHttpClient(httpParams) try {   //Simply get the contents of http://www.axeda.com and log it to the Custom Object Log   HttpGet get = new HttpGet("http://www.axeda.com")   HttpResponse response = hc.execute(get)   BufferedReader br = new BufferedReader( new InputStreamReader( response.getEntity().getContent()))   br.readLines().each {     logger.info it   } } finally {   //Make sure to shutdown the connectionManager   hc.getConnectionManager().shutdown() } return true https://gist.github.com/axeda/5189092/raw/2f7b93c5f96ed8f445df4364b885486bc6fa1feb/HttpClientTimeouts.groovy HttpBuilder import groovyx.net.http.HTTPBuilder import static groovyx.net.http.ContentType.* import static groovyx.net.http.Method.* int TENSECONDS  = 10*1000; int THIRTYSECONDS = 30*1000; HTTPBuilder builder = new HTTPBuilder('http://www.axeda.com') //HTTPBuilder has no direct methods to add timeouts.  We have to add them to the HttpParams of the underlying HttpClient builder.getClient().getParams().setParameter("http.connection.timeout", new Integer(TENSECONDS)) builder.getClient().getParams().setParameter("http.socket.timeout", new Integer(THIRTYSECONDS)) try {   //Simply get the contents of http://www.axeda.com and log it to the Custom Object Log   builder.request(GET, TEXT){     response.success = { resp, res ->       res.readLines().each {         logger.info it       }       }   } } finally {   //Make sure to always shut down the HTTPBuilder when you’re done with it   builder.shutdown() } return true https://gist.github.com/axeda/5189102/raw/66bb3a4f4f096681847de1d2d38971e6293c4c6b/HttpBuilderTimeouts.groovy FtpClient Apache’s FTPClient has a third type of timeout, the Default Timeout.  The Default Timeout is a timeout that further ensures that socket timeouts are always used.  
Note: Default Timeout does not set a timeout for the .connect() method. import org.apache.commons.net.ftp.* import java.io.InputStream import java.io.ByteArrayInputStream String ftphost = "127.0.0.1" String ftpuser = "test" String ftppwd = "test" int ftpport = 21 String ftpDir = "tmp/FTP" int TENSECONDS  = 10*1000 int THIRTYSECONDS = 30*1000 //Declare FTP client FTPClient ftp = new FTPClient() try {   ftp.setConnectTimeout(TENSECONDS)   ftp.setDefaultTimeout(TENSECONDS)   ftp.connect(ftphost, ftpport)   //30 seconds to log on.  Also 30 seconds to change to working directory.   ftp.setSoTimeout(THIRTYSECONDS)   def reply = ftp.getReplyCode()   if (!FTPReply.isPositiveCompletion(reply))   {     throw new Exception("Unable to connect to FTP server")   }   if (!ftp.login(ftpuser, ftppwd))   {     throw new Exception("Unable to login to FTP server")   }   if (!ftp.changeWorkingDirectory(ftpDir))   {     throw new Exception("Unable to change working directory on FTP server")   }   //Change the timeout here for a large file transfer that will take over 30 seconds   //ftp.setSoTimeout(THIRTYSECONDS);   ftp.setFileType(FTPClient.ASCII_FILE_TYPE)   ftp.enterLocalPassiveMode()   String filetxt = "Some String file content"   InputStream is = new ByteArrayInputStream(filetxt.getBytes('US-ASCII'))   try   {     if (!ftp.storeFile("myFile.txt", is))     {       throw new Exception("Unable to write file to FTP server")     }   } finally   {     //Make sure to always close the inputStream     is.close()   } } catch(Exception e) {   //handle exceptions here by logging or auditing } finally {   //if the IO is timed out or force disconnected, exceptions may be thrown when trying to logout/disconnect   try   {     //10 seconds to log off.  Also 10 seconds to disconnect.     ftp.setSoTimeout(TENSECONDS);     ftp.logout();     //depending on the state of the server the .logout() may throw an exception,     //we want to ensure complete disconnect.   }   catch(Exception innerException)   {       //You potentially just want to log that there was a logout exception.     }   finally   {     //Make sure to always disconnect.  If not, there is a chance you will leave hanging sockects     ftp.disconnect();   } } return true https://gist.github.com/axeda/5189120/raw/83545305a38d03b6a73a80fbf4999be3d6b3e74e/FtpClientConnectionTimeouts.groovy
This script creates a csv file from the audit log filtered by the User Access category, so dates of when users logged in or logged out. *** see update below *** Note:  The csv file has the same name as the Groovy script and does NOT have the .csv extension . To get the .csv extension, the Groovy script has to be renamed to AuditEntryToCSV.csv.groovy .  Suggestions on how to improve this are welcome. *** Update ***: The download works without the renamed groovy script by returning text instead of an input stream.  The script has been modified to illustrate this. Parameters: days - the number of days past to fetch audit logs model_name - the model name of the asset serial_number - the serial number of the asset import com.axeda.drm.sdk.device.ModelFinder import com.axeda.drm.sdk.Context import com.axeda.common.sdk.id.Identifier import com.axeda.drm.sdk.device.Model import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.audit.AuditCategoryList import com.axeda.drm.sdk.audit.AuditCategory import com.axeda.drm.sdk.audit.AuditEntryFinder import com.axeda.drm.sdk.audit.SortType import com.axeda.drm.sdk.audit.AuditEntry import groovy.xml.MarkupBuilder import com.axeda.platform.sdk.v1.services.ServiceFactory /* * AuditEntryToCSV.groovy * * Creates a csv file from the audit log filtered by the User Access category, so dates of when users logged in or logged out. * * @param days        -   (REQ):Str number of days to search. * @param model_name        -   (REQ):Str name of the model. * @param serial_number        -   (REQ):Str serial number of the device. * * @note - the csv file has the same name as the Groovy script and does NOT have the .csv extension . To get * the .csv extension, the Groovy script has to be renamed to AuditEntryToCSV.csv.groovy . * * @author Sara Streeter <sstreeter@axeda.com> */ def writer = new StringWriter() def xml = new MarkupBuilder(writer) try {    def ctx = Context.getUserContext()    ModelFinder modelFinder = new ModelFinder(ctx)    modelFinder.setName(parameters.model_name)    Model model = modelFinder.find()    DeviceFinder deviceFinder = new DeviceFinder(ctx)    deviceFinder.setSerialNumber(parameters.serial_number)    Device device = deviceFinder.find()    AuditCategoryList acl = new AuditCategoryList()    acl.add(AuditCategory.USER_ACCESS)    long now = System.currentTimeMillis()    Date today = new Date(now)    def paramdays = parameters.days ? 
parameters.days: 5    long days = 1000 * 60 * 60 * 24 * Integer.valueOf(paramdays)    AuditEntryFinder aef = new AuditEntryFinder(ctx)    aef.setCategories(acl)    aef.setToDate(today)    aef.setFromDate(new Date(now - (days)))    aef.setSortType(SortType.DATE)    aef.sortDescending()    List<AuditEntry> audits = aef.findAll() // use a Data Accumulator to store the information def dataStoreIdentifier = "FILE-CSV-audit_log" def daSvc = new ServiceFactory().dataAccumulatorService if (daSvc.doesAccumulationExist(dataStoreIdentifier, device.id.value)) {     daSvc.deleteAccumulation(dataStoreIdentifier, device.id.value) } // assemble the response    audits.each { AuditEntry audit ->            def row = [                audit?.id.value,                audit?.user?.username,                audit?.date,                audit?.category?.bundleKey,                audit?.message            ]         row = row.join(',')         row += '\n'         daSvc.writeChunk(dataStoreIdentifier, device.id.value, row);        } // stream the data accumulator to create the file    InputStream is = daSvc.streamAccumulation(dataStoreIdentifier, device.id.value) return ['Content-Type': 'text/csv', 'Content-Disposition':'attachment; filename=AuditEntryCSVFile.csv', 'Content': is.text] } catch (def ex) {    xml.Response() {        Fault {            Code('Groovy Exception')            Message(ex.getMessage())            StringWriter sw = new StringWriter();            PrintWriter pw = new PrintWriter(sw);            ex.printStackTrace(pw);            Detail(sw.toString())        }    } logger.info(writer.toString()) }
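Once the custom object above is saved (the script header names it AuditEntryToCSV.groovy), it can be invoked through the same Scripto execute endpoint used elsewhere in these tips. The sketch below is illustrative only: the host name, credentials, and parameter values are placeholders that you would replace with your own, and it reuses the HTTPBuilder pattern shown in the other tips.

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.TEXT
import static groovyx.net.http.Method.GET

// Placeholder host and credentials - replace with your own instance and user
def http = new HTTPBuilder('https://yourinstance.axeda.com/services/v1/rest/Scripto/execute/AuditEntryToCSV')

http.request(GET, TEXT) {
    // authentication plus the script's own parameters are passed as query parameters
    uri.query = [username     : 'your_username',
                 password     : 'your_password',
                 days         : '7',
                 model_name   : 'DemoModel',
                 serial_number: 'device01']
    response.success = { resp, reader ->
        // the script returns the CSV content directly; save it locally
        new File('AuditEntryCSVFile.csv').text = reader.text
    }
    response.failure = { resp ->
        println "Request failed with status ${resp.status}"
    }
}

Because the updated script returns text rather than an input stream, no renaming of the Groovy script is required for this kind of download to work.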
This script finds all the data items both current and historical on all the assets of a model and outputs them as XML. Parameters: model_name from_time to_time import com.axeda.drm.sdk.Context import com.axeda.drm.sdk.device.ModelFinder import com.axeda.drm.sdk.device.Model import com.axeda.drm.sdk.device.DeviceFinder import com.axeda.drm.sdk.data.CurrentDataFinder import com.axeda.drm.sdk.device.Device import com.axeda.drm.sdk.data.HistoricalDataFinder import groovy.xml.MarkupBuilder /* * AllDataItems2XML.groovy * * Find all the historical and current data items for all assets in a given model. * * @param model_name        -   (REQ):Str name of the model. * @param from_time         -   (REQ):Long millisecond timestamp to begin query from. * @param to_time           -   (REQ):Long millisecond timestamp to end query at. * * @note from_time and to_time should be provided because it limits the query size. * * @author Sara Streeter <sstreeter@axeda.com> */ def response = [:] def writer = new StringWriter() def xml = new MarkupBuilder(writer) // measure the script run time def timeProfiles = [:] def scriptStartTime = new Date() try { // getUserContext is supported as of release 6.1.5 and higher     final def CONTEXT = Context.getUserContext() // confirm that required parameters have been provided     validateParameters(actual: parameters, expected: ["model_name", "from_time", "to_time"]) // find the model     def modelFinder = new ModelFinder(CONTEXT)     modelFinder.setName(parameters.model_name)     Model model = modelFinder.findOne() // throw exception if no model found     if (!model) {         throw new Exception("No model found for ${parameters.model_name}.")     } // find all assets of that model     def assetFinder = new DeviceFinder(CONTEXT)     assetFinder.setModel(model)     def assets = assetFinder.findAll() // find the current and historical data values for each asset //note: since device will be set on the datafinders going forward, a dummy device is set on instantiation which is not actually stored     def currentDataFinder = new CurrentDataFinder(CONTEXT, new Device(CONTEXT, "placeholder", model))     def historicalDataFinder = new HistoricalDataFinder(CONTEXT, new Device(CONTEXT, "placeholder", model))     historicalDataFinder.startDate = new Date(parameters.from_time as Long)     historicalDataFinder.endDate = new Date(parameters.to_time as Long) // assemble the response     xml.Response(){         assets.each { Device asset ->             currentDataFinder.device = asset             def currentValueList = currentDataFinder.find()             historicalDataFinder.device = asset             def valueList = historicalDataFinder.find()             Asset(){                     id(asset.id.value)                     name( asset.name)                     serial_number(asset.serialNumber)                     model_id( asset.model.id.value)                     model_name(asset.model.name)                     current_data(){                         currentValueList.each{ data ->                         timestamp( data?.getTimestamp()?.format("yyyyMMdd HH:mm"))                          name(data?.dataItem?.name)                          value( data?.asString())                     }}                     historical_data(){                         valueList.each { data ->                         timestamp( data?.getTimestamp()?.format("yyyyMMdd HH:mm"))                          name(data?.dataItem?.name)                          value( data?.asString())                     }}             
}         }     } } catch (def ex) {       xml.Response() {     Fault {           Code('Groovy Exception')           Message(ex.getMessage())           StringWriter sw = new StringWriter();           PrintWriter pw = new PrintWriter(sw);           ex.printStackTrace(pw);           Detail(sw.toString())         }       } } return ['Content-Type': 'text/xml', 'Content': writer.toString()] private Map createTimeProfile(String label, Date startTime, Date endTime) {     [             (label): [                     startTime: [timestamp: startTime.time, readable: startTime.toString()],                     endTime: [timestamp: endTime.time, readable: endTime.toString()],                     profile: [                             elapsed_millis: endTime.time - startTime.time,                             elapsed_secs: (endTime.time - startTime.time) / 1000                     ]             ]     ] } private validateParameters(Map args) {     if (!args.containsKey("actual")) {         throw new Exception("validateParameters(args) requires 'actual' key.")     }     if (!args.containsKey("expected")) {         throw new Exception("validateParameters(args) requires 'expected' key.")     }     def config = [             require_username: false     ]     Map actualParameters = args.actual.clone() as Map     List expectedParameters = args.expected     config.each { key, value ->         if (args.options?.containsKey(key)) {             config[key] = args.options[key]         }     }     if (!config.require_username) { actualParameters.remove("username") }     expectedParameters.each { paramName ->         if (!actualParameters.containsKey(paramName) || !actualParameters[paramName]) {             throw new IllegalArgumentException(                     "Parameter '${paramName}' was not found in the query; '${paramName}' is a reqd. 
parameter.")         }     } } Sample Output: <Response>   <Asset>   <id>2864</id>   <name>keg24</name>   <serial_number>keg24</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:44</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp>20111103 14:38</timestamp>   <name>currTempF</name>   <value>43.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2861</id>   <name>keg28</name>   <serial_number>keg28</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp />   <name>currKegPercentage</name>   <value>?</value>   <timestamp>20111103 14:21</timestamp>   <name>currTempF</name>   <value>43.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2863</id>   <name>keg21</name>   <serial_number>keg21</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp />   <name>currKegPercentage</name>   <value>?</value>   <timestamp>20111103 14:39</timestamp>   <name>currTempF</name>   <value>42.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2862</id>   <name>keg25</name>   <serial_number>keg25</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:36</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp />   <name>currTempF</name>   <value>?</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2867</id>   <name>keg29</name>   <serial_number>keg29</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:48</timestamp>   <name>currKegPercentage</name>   <value>35.0</value>   <timestamp />   <name>currTempF</name>   <value>?</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2865</id>   <name>keg27</name>   <serial_number>keg27</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:39</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp>20111103 14:44</timestamp>   <name>currTempF</name>   <value>42.0</value>   </current_data>   <historical_data />   </Asset>   <Asset>   <id>2866</id>   <name>keg23</name>   <serial_number>keg23</serial_number>   <model_id>1081</model_id>   <model_name>Kegerator</model_name>   <current_data>   <timestamp>20111103 14:46</timestamp>   <name>currKegPercentage</name>   <value>34.0</value>   <timestamp />   <name>currTempF</name>   <value>?</value>   </current_data>   <historical_data />   </Asset> </Response>
This is an example of an advanced apc-metadata.xml file for use with Axeda Artisan that includes examples of how to create different data structures on the platform, including Expression Rules and System Timers. Step 1 - In the upload.xml make sure the artisan-installer is 1.2     <dependencies>         <dependency>             <groupId>com.axeda.community</groupId>             <artifactId>artisan-installer</artifactId>             <version>1.2</version>         </dependency>     </dependencies> Step 2 - <?xml version="1.0" encoding="UTF-8"?> <!--         apc-metadata.xml --> <apcmetadata>     <models>         <model>             <name>DemoModel</name>             <!--standalone or gateway-->             <type>gateway</type>             <DataItems>                 <DataItem>                     <name>PendingMessage</name>                     <!--string or digital or analog-->                     <type>string</type>                     <visible>false</visible>                     <stored>true</stored>                     <!--0 = no history,1 = Stored,2 = no storage,3 = on change-->                     <storageOption>2</storageOption>                     <readOnly>false</readOnly>                 </DataItem>                 <DataItem>                     <name>ConsumableData</name>                     <!--string or digital or analog-->                     <type>string</type>                     <visible>false</visible>                     <stored>true</stored>                     <!--0 = no history,1 = Stored,2 = no storage,3 = on change-->                     <storageOption>2</storageOption>                     <readOnly>false</readOnly>                 </DataItem>             </DataItems>         </model>     </models>     <ruleTimers>           <ruletimer>               <name>SFTP Retry</name>               <description></description>               <!--midnight gmt-->               <schedule>0 0 0 * * ?</schedule>               <rules>                   <rule>SFTP Retry</rule>               </rules>           </ruletimer>     </ruleTimers>     <expressionRules>                 <rule>             <name>SFTP Retry</name>             <description></description>             <enabled>true</enabled>             <applyToAll>true</applyToAll>             <type>SystemTimer</type>             <ifExpression><![CDATA[true]]></ifExpression>             <thenExpression>                 <![CDATA[ExecuteCustomObject("SFTPRetry")]]></thenExpression>             <elseExpression></elseExpression>             <consecutive>true</consecutive>             <models>                 <model>DemoMOdel</model>             </models>         </rule>     </expressionRules>     <customobjects>         <customobject>             <name>GetChartData</name>             <type>Action</type>             <sourcefile>GetChartData.groovy</sourcefile>             <params>                 <param name="username" description="(REQUIRED) The name of the calling user"/>             </params>         </customobject>         <customobject>             <name>GetChartData_rss</name>             <type>Action</type>             <sourcefile>GetChartData_rss.groovy</sourcefile>             <params>                 <param name="username" description="(REQUIRED) The name of the calling user"/>             </params>         </customobject>         <customobject>             <name>GetAddress</name>             <type>Action</type>             <sourcefile>GetAddress.groovy</sourcefile>             <params>                 <!--<param name="username" 
description="(REQUIRED) The name of the calling user"/>-->             </params>         </customobject>     </customobjects>     <applications>         <application>             <description>Chart Example</description>             <applicationId>chartexample</applicationId>             <indexFile>index.html</indexFile>             <!--<zipFile></zipFile>-->             <sourcePath>artisan-starter-html/src/main/webapp</sourcePath>         </application>     </applications> </apcmetadata>
Shown below is example code that when deployed in the appropriate container, will allow an end-user to talk to the Axeda Platform Integration Queue. A customer should supply their unique values for the following properties: queueName user password url import java.util.Properties; import javax.jms.*; import javax.naming.*; public class SampleConsumer {     private String queueName = "com.axeda.integration.ACME.queue";     private String user = "system";     private String password = "manager"; //private String url = "ssl://hostname:61616";   private String url = "tcp://hostname:61616";     private boolean transacted;     private boolean isRunning = false;     public static void main(String[] args) throws NamingException, JMSException     {         SampleConsumer consumer = new SampleConsumer();         consumer.run();     }     public SampleConsumer()     {         /** For SSL connections only, add the following: **/ //        System.setProperty("javax.net.ssl.keyStore", "path/to/client.ks"); //        System.setProperty("javax.net.ssl.keyStorePassword", "password"); //        System.setProperty("javax.net.ssl.trustStore", "path/to/client.ts");     }     public void run() throws NamingException, JMSException     {           isRunning = true;            //JNDI properties         Properties props = new Properties();         props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.apache.activemq.jndi.ActiveMQInitialContextFactory");         props.setProperty(Context.PROVIDER_URL, url);            //specify queue propertyname as queue.jndiname         props.setProperty("queue.slQueue", queueName);            javax.naming.Context ctx = new InitialContext(props);         ConnectionFactory connectionFactory = (ConnectionFactory)ctx.lookup("ConnectionFactory");         Connection connection = connectionFactory.createConnection(user, password);         connection.start();            Session session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);            Destination destination = (Destination)ctx.lookup("slQueue");         //Using Message selector ObjectClass = ‘AlarmImpl’         MessageConsumer consumer = session.createConsumer(destination, "ObjectClass= 'LinkedList'");            while (isRunning)         {             System.out.println("Waiting for message...");             Message message = consumer.receive(1000);             if (message != null && message instanceof TextMessage) {                 TextMessage txtMsg = (TextMessage)message;                 System.out.println("Received: " + txtMsg.getText());             }         }         System.out.println("Closing connection");         consumer.close();         session.close();         connection.close();     } }
Since the advent of 6.1.6 we've been able to access the body of a post in a Groovy script.  This frees us from the tyranny of those pesky predefined parameters and opens up all sorts of Javascript object-passing possibilities. To keep this example as simple as possible, there are only two files: postbody.html TestPostBody.groovy postbody.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"             "http://www.w3.org/TR/html4/loose.dtd"> <html> <head>     <title>Scripto Post Body Demo</title>     <meta http-equiv="content-type" content="text/html; charset=utf-8">     <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"/>     <meta name="apple-mobile-web-app-capable" content="yes"/>     <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">     <meta http-equiv="refresh" content="3600"> </head> <body> <div id="wrapper"> <div id="header"> <h1>Scripto Post Body Demo</h1> </div> <p> Username:<input type="text" id="username" /><br /> Password:<input type="password" id="password"  /><br /><br /> Enter some valid JSON (validate it <a href="http://jsonlint.com/" alt="jsonlint">here</a> if you're not sure): <br /><textarea id="jsoninput" rows=10 cols=20></textarea><br /> Enter arbitrary Text: <br /><textarea id="textinput" rows=10 cols=20></textarea> <input type="submit" value="Go" id="submitbtn"  onclick="poststuff();"/> </p> <div id="response"></div> </div>         <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>         <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>   <script type="text/javascript">         var poststuff= function(){             var data = {}             var temp             if ($("#jsoninput").val() != ""){                 try {                     temp = $.parseJSON($("#jsoninput").val())                 }                 catch (e){                     temp = ""                 }                 if (temp && temp != ""){                     data = JSON.stringify(temp)                 }             }             else if ($("#textinput").val() != ""){                 data.text = $("#textinput").val()                 data = JSON.stringify(data)             }             else data = {"testing":"hello"}             if ($("#username").val() != "" && $("#password").val() != ""){                 // you need contentType in order for the POST to succeed                 var promise = $.ajax({                     type:"POST",                     url: "http://dev6.axeda.com/services/v1/rest/Scripto/execute/TestPostBody?username=" + $("#username").val() + "&password=" + $("#password").val(),                     data: data,                     contentType:"application/json; charset=utf-8",                     dataType:"text"                 })                 $.when(promise).then(function(json){                     $("#response").html("<p>" + JSON.stringify(json) + "</p><br />Check your console for the object.<br />")                     console.log($.parseJSON(json))                     $("#jsoninput").val("")                     $("#textinput").val("")                 })             }         } </script> </body> </html> TestPostBody.groovy import net.sf.json.JSONObject import groovy.json.JsonSlurper import static com.axeda.drm.sdk.scripto.Request.*; try {     // just get the string body content     response = [body: body]     response.element = []     // parse the text into a JSON Object or JSONArray suitable for traversing in 
Groovy     // this assumes the body is stringified JSON     def slurper = new JsonSlurper()     def result = slurper.parseText(body)     result.each{ response.element << it } } catch (Exception e) {     response = [                 faultcode: 'Groovy Exception',                 faultstring: e.message             ]; } return ["Content-Type": "application/json","Content":JSONObject.fromObject(response).toString(2)] The "body" variable is passed in as a standalone implicit object of type String.  The key here is that to process the string as a Json object in Groovy, we send stringed JSON from the Javascript, rather than the straight JSON object. FYI: If you happen to be using Scripto Editor, you might like to know that importing the Request class disables the sidebar input of parameters.  You can enter the parameters in the sidebar, but if this import is included the parameters will not be visible to the script. To access the POST body through the Request Object, you can also refer to: Using Axeda Scripto
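If you want to exercise the same endpoint without a browser, the POST can also be made from any HTTP client. The following Groovy sketch sends stringified JSON to the Scripto execute URL, mirroring what the jQuery call above does; the host, credentials, and script name are placeholders for illustration.

import java.net.HttpURLConnection
import java.net.URL

// Placeholder values - substitute your own instance, user, and script name
def url = new URL('https://yourinstance.axeda.com/services/v1/rest/Scripto/execute/TestPostBody?username=your_username&password=your_password')
def body = '{"testing":"hello","text":"sent from a server-side client"}'

HttpURLConnection conn = (HttpURLConnection) url.openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
// the Content-Type header is required for the POST body to be accepted
conn.setRequestProperty('Content-Type', 'application/json; charset=utf-8')
conn.outputStream.withWriter('UTF-8') { it << body }

println "HTTP ${conn.responseCode}"
println conn.inputStream.getText('UTF-8')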
The Axeda Platform provides a few mechanisms for putting user-defined pages or UI modules into the dashboards, or allowing end users to host AJAX-based applications from the same instance their data is retrieved from. This simple application illustrates the use of jQuery to call Scripto and return a JSON-formatted array of current data for an Axeda asset.

Prerequisites:
First steps taken with Axeda Artisan
Basic understanding of HTML, JavaScript and jQuery
Axeda Platform v6.5 or greater (Axeda Customers and Partners)
Artisan project attached to this article

Features:
Authentication from a Web app
Use of CurrentDataFinder API
Scripto from jQuery

Files of Note (locations are from the root of the Artisan project):
index.html - main HTML index page
..\artisan-starter-html\src\main\webapp\index.html
app.js - JavaScript code to build the application and call Scripto
..\artisan-starter-html\src\main\webapp\scripts\app.js
axeda.js - Axeda web services JavaScript code
..\artisan-starter-html\src\main\webapp\scripts\axeda.js
DataItemsWithScripto.groovy - custom object on the Axeda Platform
..\artisan-starter-scripts\src\main\groovy\DataItemsWithScripto.groovy

Further Reading
Developing with Axeda Artisan
Extending the Axeda Platform UI - Custom Tabs and Modules
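The actual DataItemsWithScripto.groovy ships with the attached Artisan project. As a rough idea of the shape of such a custom object, here is a minimal sketch that returns the current data items of one asset as JSON, reusing the CurrentDataFinder pattern shown in the other tips; the parameter names (model_name, serial_number) are assumptions for illustration and may differ from the project's actual script.

import com.axeda.drm.sdk.Context
import com.axeda.drm.sdk.device.ModelFinder
import com.axeda.drm.sdk.device.DeviceFinder
import com.axeda.drm.sdk.data.CurrentDataFinder
import net.sf.json.JSONObject

def response
try {
    Context ctx = Context.getUserContext()

    // look up the asset from the supplied parameters
    ModelFinder modelFinder = new ModelFinder(ctx)
    modelFinder.setName(parameters.model_name)
    def model = modelFinder.find()

    DeviceFinder deviceFinder = new DeviceFinder(ctx)
    deviceFinder.setModel(model)
    deviceFinder.setSerialNumber(parameters.serial_number)
    def device = deviceFinder.find()

    // collect the asset's current data item values
    def currentDataFinder = new CurrentDataFinder(ctx, device)
    def values = currentDataFinder.find().collect { dv ->
        [name: dv?.dataItem?.name, value: dv?.asString(), timestamp: dv?.getTimestamp()?.format("yyyyMMdd HH:mm")]
    }
    response = [asset: device.serialNumber, data: values]
} catch (Exception e) {
    response = [faultcode: 'Groovy Exception', faultstring: e.message]
}
return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(response).toString(2)]

The jQuery side then only needs to call the Scripto execute URL for this custom object and render the returned JSON, as app.js in the attached project does.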
Back in 2018 an interesting capability was added to ThingWorx Foundation allowing you to enable statistical calculation of service and subscription execution.

We typically advise customers to approach this with caution for production systems, as the additional overhead can be more than you want to add to the work the platform needs to handle. That said, used consciously these statistics can be extremely helpful during development, testing, and troubleshooting to ascertain which entities are executing which services and where potential system bottlenecks or areas deserving performance optimization may lie.

Although I've used the Utilization Subsystem services for statistics for some time now, I've always found that the Composer table view is not sufficient for a deeper multi-dimensional analysis. Today I took a first step in remedying this by getting these metrics into Excel, and I wanted to share it with the community as it can give developers and architects another view into their ThingWorx applications and let them take and compare benchmarks to ensure that operation and scaling are happening as expected when the application was put into production.

Utilization Subsystem Statistics

You can enable and configure statistics calculation from the Subsystem Configuration tab. The help documentation does a good job of explaining this so I won't repeat it here. Base guidance is not to use Persisted statistics or percentile calculation, as both have significant performance impacts. Aggregate statistics are less resource intensive as there are fewer counters, so they are more appropriate for a production environment. Specific entity statistics require greater resources, and this scales up with the number of provisioned entities that you have (i.e. 1,000 machines versus 10,000 machines), whereas aggregate statistics remain more constant as you scale up your deployment and its load.

Utilization Subsystem Services

In the subsystem Services tab, you can select "UtilizationSubsystem" from the filter drop-down and you will see all of the relevant services to retrieve and reset the statistics. Here I'm using the GetEntityStatistics service to get entity statistics for Services and Subscriptions, giving us something like this.

Using Postman to Save the Results to File

I have used Postman to make the same REST API call, format the results as HTML, and save these results to file so that they can be imported into Excel. You need to call '/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics' as a POST request with the Content-Type and Accept headers set to 'application/xml'. Of course you also need to add an appropriately permissioned and secured AppKey to the headers in order to authenticate and get service execution authorization. You'll note the Export Results > Save to a file menu over on the right to get your results saved.

Importing the HTML Results into Excel

As simple as I would like to hope that getting a standard web-formatted file into Excel should be, it didn't turn out to be as easy as I had hoped, so I had to switch over to Windows to take advantage of Power Query. From the Data ribbon, select Get Data > From File > From XML. Then find and select the HTML file saved in the previous step. Once it has loaded the file and done some preparation, you'll need to select the GetEntityStatistics table in the results on the left. This should display all of the statistics in a preview table on the right. Once the query completes, you should have a table showing your statistical data ready for... well... slicing and dicing.

The good news is that I did the hard part for you, so you can just download the attached spreadsheet and update the dataset with your fresh data to have everything parsed out into separate columns for you. Now you can use the column filters to search for entity or service patterns or to select specific entities or attributes that you want to analyze. You'll need to clear the column filters later to get your whole dataset back.

Updating the Spreadsheet with Fresh Data

In order to make this data and its analysis more relevant, I went back, reset all of the statistics, and took a new sample that was exactly one hour long. This way I would get correct recent min/max execution time values as well as a better understanding of just how many executions/triggers happen in a one-hour period for my benchmark.

Once I had the new HTML file saved, I went into Excel's Data ribbon, selected a cell in the data table area, and clicked "Queries & Connections", which brought up the pane on the right showing my original query. Hovering over this query, I chose "Edit". Then I clicked on the tiny little gear to the right of "Source" in the pane on the right side. Finally I was able to select the new file and Power Query opened it up for me. I just needed to click "Close & Load" to save and refresh the query providing data to the table.

The only catch at this point was that I didn't have my nice little sparklines, as my regional decimal character is not a period, so I selected the time columns and did a "Replace All" from '.' to ',' to turn them into numbers instead of text.

Et voila! There you have it, ready to sort, filter, search, and review to help you better understand which parts of your application may be overly resource hungry, or even to spot faulty equipment that may be communicating and triggering workflows far more often than it should.

Specific vs General

Depending on the type of analysis that you're doing, you might find that the aggregate statistics are a better option. As there are far, far fewer of them than the entity-specific statistics, they do a better job of giving you a holistic view of what is happening in your ThingWorx application's execution. The entity-specific data set that I'm showing here is a better choice for troubleshooting and diagnostics, to understand why certain customers/assets/machines are behaving strangely, as we can drill into these stats specifically. Keep in mind, however, that you should then compare these findings with the general baseline to see how a particular asset is behaving compared to the whole fleet.

As a size guideline: I did an entity-specific version of this file for a customer with 1,000 machines and the Excel spreadsheet was 7 MB compared to the 30 KB of the one attached here, and just opening and saving it was tough for Excel (likely due to all of my nested formulas). Keep this in mind as you use this feature, as there is memory overhead, which also means garbage collection and associated CPU usage.
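Finally, if you would rather script the export step than use Postman, the same call can be made from any REST client. Below is a minimal Groovy sketch; the server URL and application key are placeholders, and it assumes the AppKey you supply has execute permission on the UtilizationSubsystem.

import java.net.HttpURLConnection
import java.net.URL

// Placeholder server and application key - replace with your own
def url = new URL('https://yourserver/Thingworx/Subsystems/UtilizationSubsystem/Services/GetEntityStatistics')

HttpURLConnection conn = (HttpURLConnection) url.openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.setRequestProperty('appKey', 'your-application-key')   // appropriately permissioned and secured AppKey
conn.setRequestProperty('Content-Type', 'application/xml')  // the service takes no inputs, but the header is still required
conn.setRequestProperty('Accept', 'application/xml')        // ask for XML so Power Query can parse it
conn.outputStream.withWriter('UTF-8') { it << '' }          // empty body for a service with no inputs

// save the result so it can be imported into Excel via Power Query
def statistics = conn.inputStream.getText('UTF-8')
new File('GetEntityStatistics.xml').text = statistics
println "Saved statistics, HTTP status ${conn.responseCode}"

The saved file can then be pulled into the attached spreadsheet exactly as described above.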
Analytics projects typically involve using the Analytics API rather than Analytics Builder to accomplish different tasks. The attached documentation provides examples of code snippets that can be used to automate the most common analytics tasks on a project, such as:
Creating a dataset
Training a model
Real-time scoring (predictive and prescriptive)
Retrieving the validation metrics for a model
Appending additional data to a dataset
Retraining the model
The documentation also provides examples that are specific to time series datasets. The attached .zip file contains both the document and the entities that you need to import into ThingWorx to access the services used in the examples.
One of the frequent recent questions regarding PostgreSQL is which tools work well with it. As PostgreSQL's functionality grows, more vendors are willing to produce tools for it. There are a lot of tools for management, development, and data visualization, and the list is growing. Here, I'm listing a few tools that might be of interest to ThingWorx users.

psql terminal
The psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can directly interface with the PostgreSQL server. The psql client comes by default with the PostgreSQL database.
Key features:
Issue queries either through commands or from a file.
Provides shell-like features to automate tasks.
For more information, refer http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III
pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It serves the needs of both administrators and regular users, from writing simple SQL queries to developing complex databases.
Key features:
Open source and cross-platform support.
No additional drivers are required.
Supports more than 30 different languages.
Note: pgAdmin III comes by default with the PostgreSQL 9.4 installer.
For more information, refer http://www.pgadmin.org/download/

phpPgAdmin
phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create tables, alter tables, and query the data using SQL.
Key features:
Open source and supports PostgreSQL 9.x.
Requires a web server.
Administers multiple servers.
Supports the Slony master-slave replication engine.
For the phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL
TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere in the web browser.
Key features:
Open source and cross-platform support.
Supports SSH for both the web interface and the database connections.
GUI with tabbed SQL editors.
For the TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger
pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is built in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer.
Key features:
Open source community project.
Autodetects PostgreSQL log file formats (stderr, syslog or csvlog).
Provides SQL query reports and statistics.
Can also be limited to report only errors.
Generates pie charts and time-based charts.
For more information, refer http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats
Postgrestats is software with automated scripts to easily view statistics such as commits, rollbacks, user inserts, updates, and deletes at time-based intervals. Postgrestats is installed and executed on the database server and customizes the main conf file. Postgrestats also provides an enterprise application for replication mode and high availability.
Key features:
Open source and easy-to-set-up installation.
Takes snapshot reports based on time intervals.
Optional email-on-update.
Text file data storage.
Also provides an enterprise application, PostgreStats Enterprise.
For more information, refer http://www.postgrestats.com/subs/docs.html

Slemma
Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license priced at $29 per user per month.
Key features:
Create charts and interactive dashboards by selecting tables.
Non-developers can easily create visualizations (with no coding).
Email dashboards automatically to clients or your entire team.
For more information, refer https://slemma.com/

Ubiq
Ubiq is a web-based business intelligence and reporting tool for the PostgreSQL server. Ubiq creates reports and online dashboards and can export them in multiple formats. Ubiq is distributed with a commercial license.
Key features:
Drag-and-drop interface to create interactive charts, dashboards, and reports.
Apply powerful filters and functions to the data.
Share your work and schedule email reports.
For more information, refer http://ubiq.co/tour
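Beyond the dedicated tools above, a quick health check can also be scripted directly against the database. The sketch below uses Groovy's groovy.sql.Sql with the PostgreSQL JDBC driver (assumed to be on the classpath); the connection details are placeholders, and the two queries only read standard PostgreSQL statistics views.

import groovy.sql.Sql

// Placeholder connection details - point these at your own PostgreSQL instance
def sql = Sql.newInstance(
        'jdbc:postgresql://localhost:5432/thingworx',
        'twadmin', 'password', 'org.postgresql.Driver')

try {
    // database size, reported in a human-readable form
    def row = sql.firstRow("SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size")
    println "Database size: ${row.db_size}"

    // current connections grouped by state (active, idle, ...)
    sql.eachRow("SELECT state, count(*) AS sessions FROM pg_stat_activity GROUP BY state") { r ->
        println "${r.state ?: 'unknown'}: ${r.sessions} session(s)"
    }
} finally {
    sql.close()
}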
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled to get answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.

People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!

End-to-end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube 45-minute deep dive)
ThingWorx entities for import (ThingWorx 9.0)

This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.

Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.

Cheers, Greg
For a recent project, I was needing to find all of the children in a Network Hierarchy of a particular template type... so I put together a little script that I thought I'd share. Maybe this will be useful to others as well.

In my situation, this script lived in the Location template. This was useful so that I could find all the Sensor Things under any particular node, no matter how deep they are.

For example, given a network like this:

Location 1
    Sensor 1
    Location 1A
        Sensor 2
        Sensor 3
        Location 1AA
            Sensor 4
    Location 1B
        Sensor 5

If you run this service in Location 1, you'll get an InfoTable with these Things: Sensor 1, Sensor 2, Sensor 3, Sensor 4, Sensor 5.
From Location 1A: Sensor 2, Sensor 3, Sensor 4.
From Location 1AA: Sensor 4.
From Location 1B: Sensor 5.

For this service, these are the inputs/outputs:
Inputs: none
Output: InfoTable of type NetworkConnection

// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(AlertSummary)
let result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({
    infoTableName : "InfoTable",
    dataShapeName : "NetworkConnection"
});

// since the hierarchy could contain locations or sensors, need to recursively loop down to get all the sensors
function findChildrenSensors(thingName) {
    let childrenThings = Networks["Hierarchy_NW"].GetChildConnections({
        name: thingName /* STRING */
    });
    for each (var row in childrenThings.rows) {
        // row.to has the name of the child Thing
        if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Location_TT"})) {
            findChildrenSensors(row.to);
        } else if (Things[row.to].IsDerivedFromTemplate({thingTemplateName: "Sensor_TT"})) {
            result.AddRow(row);
        }
    }
}

findChildrenSensors(me.name);
Applicable Releases: ThingWorx Platform 7.0 to 8.5

Description:

Covers how to apply patch upgrades to a ThingWorx installation, with the following agenda:
How to read the ThingWorx version
Upgrading to a major/minor version of the platform
Focus on upgrading to a patch version of the platform
Upgrading extensions

Always check the patch release notes for additional information and specific steps.
Hiya,

I recently prepared a short demo which shows how to onboard and use Azure IoT devices in ThingWorx, and added some usability tips and tricks to help others who might struggle with some of the things that I did.

The good news... I recorded and posted it to YouTube here.

• Connect Azure IoT Hub with ThingWorx (to be updated soon for 9.0 release)
• Using the Azure IoT Dev Kit with ThingWorx
• Getting the Azure IoT Hub Connector Up and Running (V3/8.5)

Enjoy, and don't hesitate to comment with your own tips and feedback.

Cheers,

Greg
Here is a spreadsheet that I created which helps to estimate data transfer volumes for the purpose of estimating egress costs when transferring data out of region. It takes a number of input parameters, such as the number of assets, properties, file sizes, and compression ratio, and includes a page with the cost elements, which can be updated from the Interweb.
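To give an idea of the kind of arithmetic such a spreadsheet performs, here is a small Groovy sketch that rolls a few of those input parameters up into a monthly egress volume and cost. All of the figures and the per-GB rate below are made-up placeholders; the spreadsheet itself is the place to maintain the real numbers.

// Illustrative inputs only - replace with figures for your own fleet
def assets             = 1000L       // connected assets
def propertiesPerAsset = 25          // logged properties per asset
def writesPerHour      = 60          // property writes per hour per property
def bytesPerValue      = 120         // average payload size per value, in bytes
def filesPerAssetMonth = 2           // file transfers per asset per month
def avgFileSizeMB      = 5           // average file size in MB
def compressionRatio   = 0.6         // 0.6 = transferred volume is 60% of raw volume
def egressCostPerGB    = 0.08        // placeholder per-GB egress rate

def hoursPerMonth = 24 * 30

// property traffic per month, in bytes
def propertyBytes = assets * propertiesPerAsset * writesPerHour * hoursPerMonth * bytesPerValue
// file traffic per month, in bytes
def fileBytes = assets * filesPerAssetMonth * avgFileSizeMB * 1024 * 1024

def totalGB = (propertyBytes + fileBytes) * compressionRatio / (1024 * 1024 * 1024)
println String.format("Estimated egress: %.1f GB/month, about %.2f per month at a rate of %.2f per GB",
        totalGB, totalGB * egressCostPerGB, egressCostPerGB)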
View full tip
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage. This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine streams message is logged to stdout. Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).

Downloading and Installing the Project

The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites

To download, build, and compile the machine-streams-data-relay project, you will need the following:
- Access to an Axeda Platform instance configured to stream asset data (for ActiveMQ endpoints this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the “Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the “Axeda v2 API/Services Developers Reference Guide” available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the “Axeda v2 API/Services Developers Reference Guide.” Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the “Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints” Reference Guide (available from PTC Support (http://support.ptc.com/)).

Building the Project

This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.

Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz
# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3

Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip
Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config.properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints - configAMQ.properties
# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties
# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

brokerURL: Location of the ActiveMQ or Azure Service Bus server (broker).
queueName: Name(s) of the ActiveMQ or Azure Service Bus queue(s) to process messages from. This can be a single queue (MachineStream.stream01), multiple queues separated by commas (MachineStream.stream01,MachineStream.stream02), or a queue range defined by the syntax MachineStream.stream[01-50]. For ActiveMQ only, a wildcard queue name (MachineStream.>) may also be used. If you have multiple queues on Azure Service Bus, use a comma-separated list or a range together with messageListenerContainerType=multiDestination.
username: Username used to connect to the queue(s). For ActiveMQ: axedaadmin. For ASB: your-azure-service-bus-username.
password: Password used to connect to the ActiveMQ or ASB queue(s).
numConnections: Number of ActiveMQ or Azure Service Bus broker connections. Default is 10.
numSessionsPerConnection: Number of sessions per connection; each session creates a separate thread. Applicable to ActiveMQ only; this key is used infrequently. Default is 5.
numProcessingThreads: Number of concurrent threads used for processing machine streams messages. Default is 100.
messageListenerContainerType: Type of message listener container. default = single queue name per connection; multiDestination = supports multiple queue names per connection.

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.

For Linux:
# mvn package -DskipTests
For Windows:
c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin.tar.gz (or .zip) archive, and enter the resulting directory.

For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3
For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application.

For Linux:
# ./machineStreamsDataRelay.sh <config properties file>
for example: ./machineStreamsDataRelay.sh configOfYourChoice.properties
For Windows:
c:\> machineStreamDataRelay.bat <config properties file>
for example: machineStreamDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin 2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5 2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5 If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown: 2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner 2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers 2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers 2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers 2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers 2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers 2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue 
consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers 2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers 2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers 2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis 2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis 2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers 2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO 
[MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers 2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers 2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers 2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis 2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis 7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.) 
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either an XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes the message into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java
package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This class defines the message processor method callback that will be called for message processing.
 * Note that this method will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
    /**
     * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
     * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the potential that
     * MessageListenerService threads will also block. When the MessageListenerService threads block, this means that
     * messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large number of messages,
     * then you may need to adjust your configuration parameters or optimize your processMessage() code.
     * @param message machine streams message to process
     */
    public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can provide your own custom message processing logic:

CustomMessageProcessor.java
package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business logic.
 * To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor")
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

    /**
     * (non-Javadoc)
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     *
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, then there is the
     * potential that MessageListenerService threads will also block. When the MessageListenerService
     * threads block, this means that messages will start to back up in the ActiveMQ or Azure Service Bus
     * message queues. If you are processing a large number of messages, then you may need to adjust your
     * configuration parameters or optimize your processMessage() code.
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports four different message types:
- StreamedDataItemMessage
- StreamedAlarm
- StreamedMobileLocation
- StreamedRegistrationMessage

For each of the different message types, you should add your message processing business logic. You may want to write each message to your favorite NoSQL database or to a flat file (a rough flat-file sketch follows the snippet at the end of this section).

Once you have completed your changes to CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:
- Uncomment this line: // @Qualifier("customMessageProcessor")
- Comment out this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java
@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // If you want to use the CustomMessageProcessor instead of the default LogMessageProcessor then change this Qualifier to @Qualifier("customMessageProcessor")
    @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
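As a rough illustration of the flat-file idea mentioned above, here is a minimal sketch of a processor that appends every streamed message to a local file. It is written in Groovy purely for brevity (the relay project itself is Java, but the logic translates directly); the file path and bean name are arbitrary assumptions, and in practice you would cast to the concrete message types and write out the specific fields you care about, just as CustomMessageProcessor does.

import org.springframework.stereotype.Component

import com.axeda.tools.streams.model.StreamedMessage
import com.axeda.tools.streams.processor.MessageProcessor

@Component("flatFileMessageProcessor")
class FlatFileMessageProcessor implements MessageProcessor {

    // Assumption: a writable local path. Keep the work done here cheap, since slow
    // processMessage() implementations back up the broker queues.
    private final File out = new File("machine-streams.log")

    void processMessage(StreamedMessage message) {
        // toString() is used only because it is always available; real code would
        // pull typed fields from StreamedDataItemMessage, StreamedAlarm, and so on.
        synchronized (out) {
            out.append("${new Date()} ${message.getClass().simpleName} ${message}\n")
        }
    }
}

To wire it in, you would point the @Qualifier in MessageProcessingServiceImpl.java at this bean name, exactly as described for customMessageProcessor above.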
View full tip
While I was writing my previous post about Purging ValueStream entries from Deleted Things, it occurred to me that a similar issue would affect those using InfluxDB as the persistence provider for their ValueStreams.

I wanted to take the opportunity, while the logic was still fresh in my mind, to extend the utility to do the same thing with InfluxDB. I also used the opportunity to create a dynamic InfoTable, as it's been a while since I did that. It also shows how to interact directly with InfluxDB through its API (a small sketch of the underlying API call follows at the end of this post).

The modification is based on a new thing called influxDBConnector, which has the following services:
- dropMeasurements: deletes all the DB entries related to a Thing
- getAllEntries: queries all entries related to a Thing (string output)
- getAllEntriesTable: queries all entries related to a Thing (InfoTable output)
- getMissingThings: queries the ThingWorx Things tables and the InfluxDB measurements, and filters the result to only the measurements that are not in the Things table
- showMeasurements: gets all the measurements in InfluxDB

And the following properties:
- database: Influx database name used to persist the value stream data
- InfluxLink: hostname:port of the InfluxDB server

Like the previous example, I created a sample mashup (purgeInfluxDBStreamsMashup) to help execute the services:

Obviously this utility expects an existing Influx persistence provider.

Once more: these services affect the DB directly and need to be used carefully and only by Administrators. Make sure you fully test it before using it in production.

The attached XML contains the complete utility for PostgreSQL and InfluxDB.

Hope it helps,
Ewerton
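As promised above, here is a minimal Groovy sketch of the kind of InfluxDB 1.x API call that removes a deleted Thing's data. It assumes InfluxDB is reachable on localhost:8086, that the value stream database is called thingworx, and, as the services above imply, that each Thing's data lives in a measurement named after the Thing; all of these are assumptions to adapt, and the statement deletes data irreversibly, so test carefully.

// Assumptions: InfluxDB 1.x on localhost:8086, database "thingworx", one measurement per Thing.
def influxHost  = "http://localhost:8086"
def database    = "thingworx"
def measurement = "MyDeletedThing"

def q   = URLEncoder.encode('DROP MEASUREMENT "' + measurement + '"', "UTF-8")
def db  = URLEncoder.encode(database, "UTF-8")
def url = new URL(influxHost + "/query?db=" + db + "&q=" + q)

def conn = url.openConnection()
conn.requestMethod = "POST"       // statements that modify data must be POSTed to /query
conn.doOutput = true
conn.outputStream.close()         // empty body; the statement travels in the query string
println "InfluxDB responded with HTTP " + conn.responseCode
// If authentication is enabled on InfluxDB, add u=<user> and p=<password> parameters to the URL.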
View full tip
In this post I show how to use Federation in ThingWorx to execute services on a different ThingWorx platform instance. In the use case below I set up one ThingWorx instance in the Factory and another instance in the Cloud, where the latter executes a service that actually runs on the former.

Please find the document attached.

HTH,
Alessio Marchetti
View full tip
One of the signature features of the Axeda Platform is our alarm notification, signalling, and auditing capabilities. Our dashboard offers a simplified view into assets that are in an alarm state, and provides interaction between devices and operators. For some customers the dashboard may be too extensive for their application needs. The Axeda Platform, from version 6.6 onward, provides a number of ways of interacting with Alarms so that you can present this data to remote clients (Android, iOS, etc.) or build extended business logic around alarm processing.

If you were to create a remote management application for Android, for example, the REST APIs are available to interact with Assets and Alarms. For aggregate operations where network traffic and round-trip time are a concern, our Scripto API is also available; it lets you use the Custom Object functionality to deliver information on many different aggregating criteria, so developers can get the data needed to build the applications that solve their business requirements (a rough aggregate sketch follows the references at the end of this post).

Shown below is a REST API call you might make to retrieve all alarms between a certain start and end date.

POST: https://INSTANCENAME/services/v2/rest/alarm/find

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:date xsi:type="v2:BetweenDateQuery">
    <v2:start>2015-01-01T00:00:00.000Z</v2:start>
    <v2:end>2015-01-31T23:59:59.000Z</v2:end>
  </v2:date>
  <v2:states/>
</v2:AlarmCriteria>

In a custom object, this would look like the following:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.BetweenDateQuery()
q.start = new Date()
q.end = new Date()
ac = new AlarmCriteria(date:q)
aresults = alarmBridge.find(ac)

Using the same API endpoint, here's how you would retrieve data by severity:

<v2:AlarmCriteria xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:severity xsi:type="v2:GreaterThanEqualToNumericQuery">
    <v2:value>900</v2:value>
  </v2:severity>
  <v2:states/>
</v2:AlarmCriteria>

Or in a custom object:

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import com.axeda.sdk.v2.exception.*

def q = new com.axeda.services.v2.GreaterThanEqualToNumericQuery()
q.value = 900
ac = new AlarmCriteria(severity:q)
aresults = alarmBridge.find(ac)

Currently the Query Types do not map properly in JSON objects - use XML to perform these types of queries via the REST APIs.

References:
Axeda v2 API/Services Developer's Reference Guide 6.6
Axeda Platform Web Services Developer Reference v2 REST 6.6
Axeda v2 API/Services Developer's Reference Guide 6.8
Axeda Platform Web Services Developer Reference v2 REST 6.8
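As a footnote on the aggregate/Scripto point above, here is a rough Groovy sketch of a custom object that counts the alarms from the last 30 days by severity. It assumes, by analogy with the other v2 bridges, that the find result exposes the matched alarms as result.alarms with a numeric severity property; that detail is not confirmed by the snippets above, so treat this as a starting point rather than a verified implementation.

import static com.axeda.sdk.v2.dsl.Bridges.*
import com.axeda.services.v2.*
import net.sf.json.JSONObject

def q = new BetweenDateQuery()
q.start = new Date() - 30               // assumption: look back 30 days
q.end   = new Date()

def criteria = new AlarmCriteria(date: q)
def result = alarmBridge.find(criteria) // assumption: result.alarms holds the matched Alarm objects

def countsBySeverity = [:]
result.alarms.each { alarm ->
    def sev = alarm.severity.toString() // assumption: numeric severity field on Alarm
    countsBySeverity[sev] = (countsBySeverity[sev] ?: 0) + 1
}

return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(countsBySeverity).toString(2)]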
View full tip