IoT Tips

Hello, Since there have been discussions regarding SNMP capabilities when using the ThingWorx platform, I have put together a guide on how you can manage SNMP content with the help of an EMS. Library used: SNMP4J - http://www.snmp4j.org/ Purpose: trapping SNMP events from an edge network through a Java SDK EdgeMicroServer implementation, then forwarding the trap information to the ThingWorx server. Background: There are devices, like network elements (routers, switches), that raise SNMP traps in external networks. Usually third-party systems collect this information and you can then query them, but if you want to catch the traps directly, you can use this starter-kit implementation. Attached to this blog post you can find an archive containing the source code files and the entities that you will need for running the example, and a document with information on how to set up and run the project, along with the thought process behind it. Regards, Andrei Valeanu
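For readers who want a feel for the trap-receiving side before opening the archive, below is a minimal SNMP4J (2.x) listener sketch. This is not the attached project's code; the listen address and handler body are illustrative assumptions, and the handler is where an EMS implementation would push values up to ThingWorx.

    import org.snmp4j.CommandResponder;
    import org.snmp4j.CommandResponderEvent;
    import org.snmp4j.Snmp;
    import org.snmp4j.smi.UdpAddress;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class TrapReceiver {
        public static void main(String[] args) throws Exception {
            // Listen on the standard SNMP trap port (162); use a high port if running unprivileged
            DefaultUdpTransportMapping transport =
                    new DefaultUdpTransportMapping(new UdpAddress("0.0.0.0/162"));
            Snmp snmp = new Snmp(transport);
            snmp.addCommandResponder(new CommandResponder() {
                @Override
                public void processPdu(CommandResponderEvent event) {
                    // Each variable binding carries an OID and a value;
                    // this is where an EMS would forward trap data to the platform
                    for (VariableBinding vb : event.getPDU().getVariableBindings()) {
                        System.out.println(vb.getOid() + " = " + vb.getVariable());
                    }
                }
            });
            transport.listen();
            Thread.currentThread().join(); // keep the receiver alive
        }
    }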
View full tip
This is an example of setting up remote desktop (VNC) and file transfer for an asset in ThingWorx Utilities using the Java Edge SDK.

Step 1. EMS Configuration

    ClientConfigurator config = new ClientConfigurator();
    // application key RemoteAccessThingKey
    String appKey = "s2ad46d04-5907-4182-88c2-0aad284f902c";
    config.setAppKey(appKey);
    // ThingWorx server URI
    config.setUri("wss://10.128.49.63:8445/Thingworx/WS");
    config.ignoreSSLErrors(true);
    config.setReconnectInterval(15);
    SecurityClaims claims = SecurityClaims.fromAppKey(appKey);
    config.setSecurityClaims(claims);
    // enable tunnels for the EMS
    config.tunnelsEnabled(true);
    // initialize a virtual thing with identifier PTCDemoRemoteAccessThing
    VirtualThing myThing = new VirtualThing(ThingName, "PTCDemoRemoteAccessThing", "PTCDemoRemoteAccessThing", client);
    // for the file transfer functionality, use a FileTransferVirtualThing instead:
    FileTransferVirtualThing myThing = new FileTransferVirtualThing(ThingName, "PTCDemoRemoteAccessThing", "PTCDemoRemoteAccessThing", client);
    myThing.addVirtualDirectory("AssetRepo", "E:/AssetRepo");

Step 2. Install TightVNC on the asset (http://www.tightvnc.com/download.php)

Step 3. Go to ThingWorx Composer and search for the thing PTCDemoRemoteAccessThing:
a. pair PTCDemoRemoteAccessThing with identifier PTCDemoRemoteAccessThing
b. go to PTCDemoRemoteAccessThing Configuration and add a tunnel: name vnc, host the asset IP, port 5900 (the default port for VNC servers; it can be changed)
c. go to PTCDemoRemoteAccessThing Properties and set the vncPassword

Troubleshooting: If the VNC server is on the same machine as the ThingWorx server, check "Allow loopback connections" on the Access Control tab in the TightVNC Server Configuration.
View full tip
When we connect to JDBC databases, we want to leverage the database server's capabilities as much as possible to do the querying, aggregating, and everything else for us. If we need filtering or math, add it to the query statement so it is executed server side, by the database server rather than the ThingWorx runtime server. Much more can be done beyond that, since databases are powerful (all the good stuff like joins, unions, distinct, generated and calculated fields, and so on). The more we push to the database server, the better for ThingWorx runtime performance. Now, everyone hopefully knows the [[ ]] parameter substitution: we can easily build an SQL service with several input parameters, and it will look like: Select * from table where item1 = [[par1]] AND item2 = [[par2]], etc. But we can take this up a notch with the super powerful yet super dangerous << >>. We can write a service whose body is just <<sqlQuery>> and use another service to build something like: select * from table where item1 in ('val1','val2','val3'), etc. If you can avoid it, use only [[ ]], but when needed there is always << >>. You must then make sure you properly secure that service with the system user and validate the input in the service that invokes it, since << >> is vulnerable to SQL injection.
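The distinction maps directly onto plain JDBC, which may make the risk easier to see: [[ ]] behaves like a bound PreparedStatement parameter, while << >> behaves like splicing raw text into the statement. A minimal Java sketch, with a hypothetical connection URL, table, and columns:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SubstitutionStyles {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details and table, for illustration only
            try (Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "pass")) {

                // Equivalent of [[ ]]: the driver binds and escapes each value, so it is injection-safe
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT * FROM mytable WHERE item1 = ? AND item2 = ?")) {
                    ps.setString(1, "val1");
                    ps.setString(2, "val2");
                    try (ResultSet rs = ps.executeQuery()) { /* consume rows */ }
                }

                // Equivalent of << >>: raw text is spliced into the statement, so it is injection-prone
                String rawFragment = "'val1','val2','val3'"; // imagine this came from another service
                String sql = "SELECT * FROM mytable WHERE item1 IN (" + rawFragment + ")";
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(sql)) { /* consume rows */ }
            }
        }
    }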
View full tip
The ThingWorx Android SDK was designed to load without modification into Android Studio, but due to recent changes in Android Studio, the ThingWorx AlwaysOn Android SDK 1.0 needs minor modifications to its build files before it will load or build. These changes will be made in the 1.1 release in July 2016, but until then they have to be applied after the user downloads the SDK. Android Studio changed the minimum required Android build tools version for gradle, which in turn requires a new version of gradle. The following changes must be made to each set of SDK project build files.

The tw-android-sdk/gradle/wrapper/gradle-wrapper.properties file (there is only one in the entire SDK) must have this line changed from:

    distributionUrl=https\://services.gradle.org/distributions/gradle-2.4-all.zip

to:

    distributionUrl=https\://services.gradle.org/distributions/gradle-2.10-all.zip

In all build.gradle files, change:

    buildToolsVersion "19.1.0"

to:

    buildToolsVersion "21.0.0"

Also in all build.gradle files, change:

    dependencies { classpath 'com.android.tools.build:gradle:1.3.0' }

to:

    dependencies { classpath 'com.android.tools.build:gradle:2.1.0' }

Now you should be able to import and build all examples again.
View full tip
Attached to this article is my slide deck from LiveWorx 2016 titled "Getting Mobile with ThingWorx and Android". It is an overview of the ThingWorx Android SDK, which had its 1.0 release this past April. Also attached is the Light Blue Bean Bluetooth integration example I demoed at the show.
View full tip
Objective
Learn how the Scripto Web Service helps you present platform information in your HTML with JavaScript and dynamic page updating. After this tutorial, you will know how to create Axeda Custom Objects that return formatted results to JavaScript using XMLHttpRequest, and how a very simple page can incorporate platform data into your browser-based user interface.

Part 1 - Simple Scripto
In Using Scripto, you learned how Scripto can be called from very simple clients, even the most basic HTTP tools. This tutorial builds on the examples in that tutorial. The following HelloWorld script accepts a parameter named "foo". This means that the caller of the script may supply a value for this parameter, and the script simply returns a message that includes the value supplied.

    import static com.axeda.sdk.v2.dsl.Bridges.*
    import com.axeda.services.v2.*
    import com.axeda.sdk.v2.exception.*
    return "Hello world, ${parameters.foo}"

In the first part of this tutorial, we'll create an HTML page with some JavaScript that simply calls the HelloWorld script and puts the result on the page.

Create an HTML File
Open up your favorite text editor and create a blank document. Paste in this simple scaffold, which includes a very simple FORM with fields for your developer platform email and password, and the "foo" parameter.

    <html>
    <head>
    <title>Axeda Developer Connection Simple Ajax HelloWorld Example</title>
    </head>
    <body>
    <form name="f1">
        Platform email (login): <input name="username" type="text"><br/>
        Password: <input name="password" type="password"><br/>
        foo: <input name="foo" type="text"><br/>
        <input value="Go" type="button" onclick='JavaScript: callScripto()'/>
        <div id="result"></div>
    </form>
    </body>
    </html>

Pretty basic HTML that you've seen lots of times. Notice the button's onclick refers to a JavaScript function. We'll be adding that next.

Add the JavaScript
Directly under the <title> tag, add the following:

    <script language="Javascript">
    var scriptoURL = "http://dev6.axeda.com/services/v1/rest/Scripto/execute/";
    var scriptName = "HelloWorld2";
    </script>

This defines our JavaScript block, and a couple of constants to tell our script where the server's Scripto REST endpoint is and the name of the script we will be running. Let's add in our callScripto() function. Paste the following directly under the scriptName variable declaration:

    function callScripto() {
        try {
            netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
        } catch (e) {
            // must be IE
        }
        var xmlHttpReq = false;
        var self = this;
        // Mozilla/Safari
        if (window.XMLHttpRequest) {
            self.xmlHttpReq = new XMLHttpRequest();
        }
        // IE
        else if (window.ActiveXObject) {
            self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
        }
        var form = document.forms['f1'];
        var username = form.username.value;
        var password = form.password.value;
        var url = scriptoURL + scriptName + "?username=" + username + "&password=" + password;
        self.xmlHttpReq.open('POST', url, true);
        self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        self.xmlHttpReq.onreadystatechange = function() {
            if (self.xmlHttpReq.readyState == 4) {
                updatepage(self.xmlHttpReq.responseText);
            }
        };
        var foo = form.foo.value;
        var qstr = 'foo=' + escape(foo);
        self.xmlHttpReq.send(qstr);
    }

That was a lot to process in one chunk, so let's examine each piece.
This piece just tells the browser that we'll be wanting to make some Ajax calls to a remote server. We'll be running the example right off a local file system (at first), so this is necessary to ask for permission.

    try {
        netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
    } catch (e) {
        // must be IE
    }

This part creates an XMLHttpRequest object, which is a standard object available in browsers via JavaScript. Because of slight browser differences, this code creates the correct object based on the browser type.

    var xmlHttpReq = false;
    var self = this;
    // Mozilla/Safari
    if (window.XMLHttpRequest) {
        self.xmlHttpReq = new XMLHttpRequest();
    }
    // IE
    else if (window.ActiveXObject) {
        self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
    }

Next we create the URL that will be used to make the HTTP call. This simply combines our scriptoURL, scriptName, and platform credentials.

    var form = document.forms['f1'];
    var username = form.username.value;
    var password = form.password.value;
    var url = scriptoURL + scriptName + "?username=" + username + "&password=" + password;

Now let's tell the xmlHttpReq object what we want from it. We'll also reference the name of another JavaScript function, which will be invoked when the operation completes.

    self.xmlHttpReq.open('POST', url, true);
    self.xmlHttpReq.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    self.xmlHttpReq.onreadystatechange = function() {
        if (self.xmlHttpReq.readyState == 4) {
            updatepage(self.xmlHttpReq.responseText);
        }
    };

Finally, for this function, we'll grab the "foo" parameter from the form and tell the prepped xmlHttpReq object to post it.

    var foo = form.foo.value;
    var qstr = 'foo=' + escape(foo);
    self.xmlHttpReq.send(qstr);

Almost done. We just need to supply the updatepage function that we referenced above. Add this code directly before the </script> close tag:

    function updatepage(str) {
        document.getElementById("result").innerHTML = str;
    }

Try it out
Save your file as helloworld.html and open it in a browser by starting your browser and choosing "Open File". You can also download a zip with the file prepared for you at the end of this page. If you are using Internet Explorer, IE will pop up a bar asking you if it is OK for the script inside this page to execute; choose "Allow Blocked Content". Type in your platform email address (the address you registered for the developer connection with) and your password. Enter any text that you like for "foo". When you click "Go", the area below the button will display the result of the Scripto call. Note that if you are using Mozilla Firefox, you will be warned about the script wanting to access a remote server; click "Allow". Congratulations! You have learned how to call a Custom Object-backed Scripto service to display dynamic platform content inside a very simple HTML page.

Next Steps
Be sure to check out the tutorial on Hosting Custom Applications to learn how you can make this page get served directly from your platform account, with its very own URL. Also explore code samples that show more sophisticated HTML+AJAX examples using Google Charts and other presentation tools.
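If you'd rather exercise the script outside a browser first, the same request can be made from any HTTP client. A rough Java equivalent of the page's POST, using only the JDK (the host and script name are the tutorial's values; the credentials and "foo" value are placeholders):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class ScriptoClient {
        public static void main(String[] args) throws Exception {
            // Same URL shape the JavaScript builds: endpoint + script name + credentials
            URL url = new URL("http://dev6.axeda.com/services/v1/rest/Scripto/execute/HelloWorld2"
                    + "?username=" + URLEncoder.encode("you@example.com", "UTF-8")
                    + "&password=" + URLEncoder.encode("yourPassword", "UTF-8"));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            // The body carries the script parameter, just like xmlHttpReq.send(qstr)
            try (OutputStream out = conn.getOutputStream()) {
                out.write(("foo=" + URLEncoder.encode("world", "UTF-8")).getBytes("UTF-8"));
            }
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // expect: Hello world, world
                }
            }
        }
    }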
View full tip
Adaptive Machine Messaging Protocol
The Adaptive Machine Messaging Protocol (AMMP) is a simple, byte-efficient, lightweight messaging protocol used to facilitate Internet of Things (IoT) communications and to build IoT connectivity into your product. Using a RESTful API, AMMP provides a semantic structure for IoT information exchange and leverages HTTPS as the means for sending and receiving messages between an edge device and the Axeda® Machine Cloud®. AMMP uses JavaScript Object Notation (JSON), allowing any device that is capable of making an HTTP transmission to interact with the Axeda Platform. By utilizing a common network transport that is friendly to local network proxies and firewalls, while using JSON for a compact, human-readable, language-independent, and easily constructed data representation, AMMP simplifies device communication and reduces the work needed to connect to the Axeda Machine Cloud. For complete information about the Adaptive Machine Messaging Protocol, refer to the Adaptive Machine Messaging Protocol (AMMP) Technical Reference.

AMMP Toolkits
The AMMP Toolkits are libraries that allow you to connect your devices to the Axeda Platform using AMMP. The AMMP Toolkits support transmission of data, alarms, events, and locations; error handling and reporting; as well as exchanging files with the Axeda Platform.

AMMP Android-Based Toolkit (conforms to AMMP Protocol Version 1.1): AMMP Android Toolkit, AMMP Android Toolkit Developers Reference, AMMP Protocol v1.1 Technical Reference
AMMP Java-Based Toolkit (conforms to AMMP Protocol Version 1.1): AMMP Java Toolkit, AMMP Java Toolkit Developers Reference, AMMP Protocol v1.1 Technical Reference
AMMP C-Based Toolkit (conforms to AMMP Protocol Version 1.1): AMMP C Toolkit, AMMP C Toolkit Developers Reference, AMMP Protocol v1.1 Technical Reference

The above resources may be found at the PTC Support Portal.
View full tip
Scripto provides a RESTful endpoint for Groovy Custom Objects on the Axeda Platform. Custom Objects exposed via Scripto can be accessed via a GET or a POST, and the script will have access to request parameters or body contents. Any Custom Object of the "Action" type will automatically be exposed via Scripto. The URL for a Scripto service is currently defined by the name of the Custom Object:

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<customObjectName>

Scripto enables the creation of "Domain Specific Services". This allows implementers to take the Axeda Domain Objects (Assets, Models, DataItems, Alarms) and expose them via a service that models the real-world domain directly (trucks, ATMs, MRI machines, sensor readings). This is especially useful when creating a domain-specific UI, or when integrating with another application that will push or pull data.

Authentication
There are several ways to test your Scripto scripts, as well as several different authentication methods. The following authentication methods can be used:
- Request Parameter credentials: ?username=<yourUserName>&password=<yourPassword>
- Request Parameter sessionId (retrieved from the Auth service): ?sessionid=<sessionId>
- Basic Authentication (challenge): From a browser or CURL, simply browse to the URL to receive an HTTP Basic challenge.

Request Parameters
You can access the parameters to the Groovy script via two objects, Call and Request. Request is actually just a subclass of Call, so the values will always be the same regardless of which object you use. Although parameters may be accessed off of either object, Call is preferable when Chaining Custom Objects (TODO LINK) together. Call also includes a reference to the logger, which can be used to log debug messages.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>&serial_number=mySerialNumber

Accessing Parameters through the Request Object

    import com.axeda.drm.sdk.scripto.Request
    // Request.parameters is a map of strings
    def serial_number = Request.parameters.serial_number
    assert serial_number == "mySerialNumber"

Accessing Parameters through the Call Object

    import com.axeda.drm.sdk.customobject.Call
    // Call.parameters is a map of strings
    def serial_number = Call.parameters.serial_number
    assert serial_number == "mySerialNumber"

Accessing the POST Body through the Request Object
The content from a POST request to Scripto is accessible as a string via the body field in the Request object. Use Slurpers for XML or JSON to parse it into an object.

POST: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>
Body: { "serial_number":"mySerialNumber" }

    import com.axeda.drm.sdk.scripto.Request
    import groovy.json.JsonSlurper
    def body = Request.body
    def slurper = new JsonSlurper()
    def result = slurper.parseText(body)
    assert result.serial_number == "mySerialNumber"

Returning Plain Text
Groovy custom objects must return some content. The format of that content is flexible and can be returned as plain text, JSON, XML, or even binary files. The following example simply returns plain text.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

    // Outputs: hello
    return ["Content-Type":"text/plain","Content":"hello"]

Returning JSON
We use the JSONObject class to format our Map-based content into a JSON structure.
This eliminates the need for any concern around formatting: you just build up Maps of Maps, and the fromObject() utility method formats them properly.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>

    import net.sf.json.JSONObject
    root = [
        items: [
            num_1: "one",
            num_2: "two"
        ]
    ]
    /** Outputs
    {
      "items": {
        "num_1": "one",
        "num_2": "two"
      }
    }
    **/
    return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]

Link to JSONObject documentation

Returning XML
To return XML, we use the MarkupBuilder to build the XML response. This allows us to create code that follows the format of the XML that is being generated.

GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>

    import groovy.xml.MarkupBuilder
    def writer = new StringWriter()
    def xml = new MarkupBuilder(writer)
    xml.root() {
        items() {
            num_1("one")
            num_2("two")
        }
    }
    /** Outputs
    <root>
      <items>
        <num_1>one</num_1>
        <num_2>two</num_2>
      </items>
    </root>
    **/
    return ['Content-Type': 'text/xml', 'Content': writer.toString()]

Link to Groovy MarkupBuilder documentation

Returning Binary Content
To return binary content, you typically will use the fileStore API to upload a file that you can then download using Scripto. See the fileInfo section to learn more. In this example we connect the InputStream associated with the getFileData() method directly to the output of the Scripto script. This causes the bytes available in the stream to be forwarded directly to the client as the body of the response.

GET: http://{{Your Host Name}}/services/v1/rest/Scripto/execute/{{Your Script Name}}?sessionid={{Session Id}}&fileId=123

    import static com.axeda.sdk.v2.dsl.Bridges.*
    import com.axeda.services.v2.*
    import com.axeda.sdk.v2.exception.*
    def contentType = parameters.type ?: 'image/jpg'
    return ['Content': fileInfoBridge.getFileData(parameters.fileId), 'Content-Type': contentType]

The Auth Service - Authentication via AJAX
Groovy scripts are accessible to AJAX-powered HTML apps with Axeda instance credentials. To obtain a session from an Axeda server, make a GET call to the Authentication service, located at the following example URL:

https://{{YourHostName}}/services/v1/rest/Auth/login

This service accepts a valid username/password combination in the incoming request parameters and returns a session ID. The parameter names it expects are as follows:
- principal.username: The username for the valid Axeda credential.
- password: The password for the supplied credential.
A sample request to the Auth Service:

GET: https://{{YourHostName}}/services/v1/rest/Auth/login?principal.username=YOURUSER&password=YOURPASS

would yield this response (at this time the response is always in XML):

    <ns1:WSSessionInfo xsi:type="ns1:WSSessionInfo" xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <ns1:created>2013-08-12T13:19:37 +0000</ns1:created>
      <ns1:expired>false</ns1:expired>
      <ns1:sessionId>19c33190-dded-4655-b2c0-921528f7b873</ns1:sessionId>
      <ns1:sessionTimeout>1800</ns1:sessionTimeout>
    </ns1:WSSessionInfo>

The response fields are as follows:
- created: The timestamp for the date the session was created
- expired: A boolean indicating whether or not this session is expired (should be false)
- sessionId: The ID of the session, which you will use in subsequent requests
- sessionTimeout: The time (in seconds) that this session will remain active

The Auth Service is frequently invoked from JavaScript as part of Custom Applications. The following code demonstrates this style of invocation.

    function authenticate(host, username, password) {
        try {
            netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
        } catch (e) {
            // must be IE
        }
        var xmlHttpReq = false;
        var self = this;
        // Mozilla/Safari
        if (window.XMLHttpRequest) {
            self.xmlHttpReq = new XMLHttpRequest();
        }
        // IE
        else if (window.ActiveXObject) {
            self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
        }
        var SERVICES_PATH = "/services/v1/rest/";
        var url = host + SERVICES_PATH + "Auth/login?principal.username=" + username + "&password=" + password;
        self.xmlHttpReq.open('GET', url, true);
        self.xmlHttpReq.onreadystatechange = function() {
            if (self.xmlHttpReq.readyState == 4) {
                getSessionId(self.xmlHttpReq.responseXML);
            }
        };
        self.xmlHttpReq.send();
    }

    function getSessionId(xml) {
        var value;
        if (window.ActiveXObject) {
            // xml traversing with IE
            var objXML = new ActiveXObject("MSXML2.DOMDocument.6.0");
            objXML.async = false;
            var xmldoc = objXML.loadXML(xml);
            objXML.setProperty("SelectionNamespaces", "xmlns:ns1='http://type.v1.webservices.sl.axeda.com'");
            objXML.setProperty("SelectionLanguage", "XPath");
            value = objXML.selectSingleNode("//ns1:sessionId").childNodes[0].nodeValue;
        } else {
            // xml traversing in non-IE browsers
            var node = xml.getElementsByTagNameNS("*", "sessionId");
            value = node[0].textContent;
        }
        return value;
    }

    authenticate("http://mydomain.axeda.com", "myUsername", "myPassword")

Calling Scripto via AJAX
Once you have obtained a session ID through authentication via AJAX, you can use that session ID in Scripto calls. The following is a utility function frequently used to wrap Scripto invocations from a UI.
    function callScripto(host, scriptName, sessionId, parameter, div) {
        try {
            netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
        } catch (e) {
            // must be IE
        }
        var xmlHttpReq = false;
        var self = this;
        // Mozilla/Safari
        if (window.XMLHttpRequest) {
            self.xmlHttpReq = new XMLHttpRequest();
        }
        // IE
        else if (window.ActiveXObject) {
            self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");
        }
        var SERVICES_PATH = "/services/v1/rest/";
        var url = host + SERVICES_PATH + "Scripto/execute/" + scriptName + "?sessionid=" + sessionId;
        self.xmlHttpReq.open('GET', url, true);
        self.xmlHttpReq.onreadystatechange = function() {
            if (self.xmlHttpReq.readyState == 4) {
                updatepage(div, self.xmlHttpReq.responseText);
            }
        };
        self.xmlHttpReq.send(parameter);
    }

    function updatepage(div, str) {
        document.getElementById(div).innerHTML = str;
    }

    callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo", "result")

A more modern jQuery-based example might look like the following:

    function callScripto(host, scriptName, sessionId, parameter, div) {
        var url = host + '/services/v1/rest/Scripto/execute/' + scriptName + '?sessionid=' + sessionId;
        if (parameter != null) url += '&' + parameter;
        $.ajax({
            url: url,
            success: function(response) { updatepage(div, response); }
        });
    }

    function updatepage(div, str) {
        $("#" + div).html(str);
    }

    callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo", "result")

In Conclusion
As shown above, Scripto offers a number of ways to interact with the platform. On each version of the Axeda Platform, all supported v1 and v2 APIs are available for Scripto to interact with the Axeda domain objects and implement business logic to solve real-world customer problems.

Bibliography (PTC.net account required)
Axeda v2 API/Services Developer's Reference, Version 6.8.3, August 2015
Axeda v1 API Developer's Reference Guide, Version 6.8, August 2014
Documentation Map for Axeda 6.8.2, January 2015
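The same two-step flow (authenticate, then pass the session ID to Scripto) works from non-browser clients too. As a hedged illustration, this JDK-only Java sketch performs the login call shown above and pulls the sessionId out of the WSSessionInfo response (the host and credentials are placeholders):

    import java.net.URL;
    import java.net.URLEncoder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    public class AuthClient {
        public static void main(String[] args) throws Exception {
            String host = "https://mydomain.axeda.com"; // placeholder
            URL url = new URL(host + "/services/v1/rest/Auth/login?principal.username="
                    + URLEncoder.encode("myUsername", "UTF-8")
                    + "&password=" + URLEncoder.encode("myPassword", "UTF-8"));
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder().parse(url.openStream());
            // Same traversal as the non-IE branch of getSessionId(): find <ns1:sessionId> by local name
            NodeList nodes = doc.getElementsByTagNameNS("*", "sessionId");
            String sessionId = nodes.item(0).getTextContent();
            System.out.println("sessionId=" + sessionId);
            // sessionId can now be appended to Scripto calls as ?sessionid=<sessionId>
        }
    }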
View full tip
From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension will ask for an input.

Are Solr nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?
-- For ThingWorx functionality, a Solr node is not required. However, the extension does try to validate the configuration, so at this point a Solr node is mandatory to properly configure the extension. This will be fixed in the future.

When there are two entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster, or different clusters?
-- They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, that is not recommended for production workloads; it would be perfectly fine for development or test environments.

In a cluster, in order to have both Solr and Cassandra nodes, use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two datacenters called "Cassandra" and "Solr", which is what you would see in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e. replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1} (for example, on a hypothetical keyspace: ALTER KEYSPACE twdata WITH replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1};). The number in front of each datacenter (1 being the default) represents its replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), which you choose based on the number of nodes in that datacenter.
View full tip
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled getting answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour.   People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!   End to end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube, 45-minute deep-dive)   ThingWorx entities for import (ThingWorx 9.0)   This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of IoT Gateway for many servers and tags can be quite costly.   Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and shared utilities quite helpful.   Cheers, Greg
View full tip
This document provides API information for all 51.0 releases of ThingWorx Machine Learning.
View full tip
Every edge component that connects to the ThingWorx platform requires an Application Key. This 'AppKey' provides both authentication and authorization control. When an edge component connects, it steps through a connection process. The second step of that process is to send the AppKey to the platform. The platform will inspect the key and ensure that it is valid. It also creates a session for that edge connection and associates the AppKey with the session. Any future requests sent over that AlwaysOn connection will execute under the security context configured for the user associated with the AppKey.

In order for edge applications to interact with the platform, they require a certain set of permissions. It is a best practice not to associate the Administrator user with an Application Key. Doing so would allow an edge application to invoke any and all services on the platform, and to modify the property values of any thing. The permissions applied to an edge component's AppKey should be the minimum set required for your application to function.

The AppKey associated with an edge component is typically associated with a single Thing, or a collection of Things, usually of the same ThingTemplate. Identify the Thing(s) or ThingTemplate(s) that your application will interact with. There are four types of interactions for edge components: property reads, property writes, service invocations, and event executions (edge components do not subscribe to events). These four types of interactions match the runtime permissions that can be configured on a Thing, or the 'run time instance' permissions for a ThingTemplate.

If an edge application will be reading or writing all properties of a particular Thing, then applying the 'read property' and 'write property' permissions is appropriate. If only a select set of a Thing's properties will be read or written, then read and/or write permission should be disabled, and only the select properties should be enabled using overrides. Since every Thing has a number of generic services, the 'service execute' permission should be disabled, and overrides should be configured for the selected services that the edge needs access to. In addition, overrides should be configured for the 'UpdateSubscribedPropertyValues' service and for the 'ProcessRemoteEvents' service. Edge components often use these services to update a collection of properties or to fire a set of events. Finally, if your edge application triggers events on a Thing, overrides should be used to provide execute permission for those events.

In summary, the safest path to configuring edge permissions is to create a new user and AppKey with no permissions applied, and to then selectively apply permissions for that user only on the Thing or Things that your edge components will interact with.
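On the edge side, the AppKey enters through the SDK's client configuration, as in the Java Edge SDK example earlier in this feed. A minimal sketch, assuming a hypothetical host and a key bound to a least-privilege user (package names are per the Java Edge SDK; verify against your SDK version):

    import com.thingworx.communications.client.ClientConfigurator;
    import com.thingworx.communications.client.ConnectedThingClient;
    import com.thingworx.communications.client.SecurityClaims;

    public class LeastPrivilegeEdgeClient {
        public static void main(String[] args) throws Exception {
            ClientConfigurator config = new ClientConfigurator();
            // Placeholder key: bind it to a dedicated user whose runtime permissions
            // cover only the Things this client touches -- never an Administrator key
            String appKey = "00000000-0000-0000-0000-000000000000";
            config.setUri("wss://thingworx.example.com:443/Thingworx/WS"); // hypothetical host
            config.setSecurityClaims(SecurityClaims.fromAppKey(appKey));
            ConnectedThingClient client = new ConnectedThingClient(config);
            client.start(); // all requests over this AlwaysOn session run as the AppKey's user
        }
    }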
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.5   Description:   Covers the main topics that need to be considered when evaluating or designing a scalability strategy for the environment and applications. The agenda contains the following topics: Introduction, Connection Server, Federation, High Availability.   Some of the databases mentioned in the presentation are no longer supported; please refer to the compatibility matrix to check supported databases. Related Articles: How to configure a Federated architecture; Is high availability supported in ThingWorx?
View full tip
Applicable Releases: ThingWorx Platform 7.0 to 8.5   Description:   Introduction to Edge connectivity in ThingWorx Foundation: Edge concept and definition; Available Edge products; Why use Edge products; What are the Edge MicroServer (EMS) and Lua Script Resource; What are the SDKs; What are Connection Servers; AlwaysOn and HTTP protocols; ThingTemplates to connect remote devices.     The session was recorded in an old ThingWorx version, but all the concepts are still applicable
View full tip
Hiya,   I recently prepared a short demo which shows how to onboard and use Azure IoT devices in ThingWorx and added some usability tips and tricks to help others who might struggle with some of the things that I did.     The good news... I recorded and posted it to YouTube here.
• Connect Azure IoT Hub with ThingWorx (to be updated soon for 9.0 release)
• Using the Azure IoT Dev Kit with ThingWorx
• Getting the Azure IoT Hub Connector Up and Running (V3/8.5)
Enjoy, and don't hesitate to comment with your own tips and feedback.   Cheers,   Greg
View full tip
Hi Community,   Although we have reference architectures and integration paths for connecting devices to ThingWorx through Azure IoT, no one has ever written anything about doing the same from one ThingWorx to another. I thought I'd change that and put some ideas out there around how one might go about doing this. Although this is not officially supported or recommended by PTC, I have consulted with a number of leading SMEs on the subject, who have participated in forming the basis of my thinking outlined here.

Components Required (in order of communication path):
1. On-premise ThingWorx Platform
2. Protocol Adapter Toolkit* (CXS) - MQTT
3. Azure IoT Edge
4. Azure IoT Hub
5. ThingWorx Azure IoT Hub Connector (CXS)
6. Azure Cloud-hosted ThingWorx Platform

The PAT (2), with a codec to encode MQTT messages, publishes to the on-premise IoT Edge MQTT endpoint, which handles store-and-forward of messages to IoT Hub. An Azure IoT device would exist for each Thing you wish to represent on the ThingWorx servers. The Azure IoT Hub Connector picks up the incoming messages and passes them on to the cloud ThingWorx, which decodes the MQTT payload and maps it to Thing property updates.

The only part that I presently don't like about this approach is that you'll need to decode the MQTT messages on the ThingWorx platform in the cloud when they are received from the IoT Hub, and this mechanism will also need to handle encoding and publishing back to the IoT Hub if C2D (Cloud-to-Device) messages are to be implemented (aka bi-directional). This is required as ThingWorx only supports AlwaysOn as an application-level protocol, so some form of mapping needs to be done.

* Another approach would be to replace the PAT with a custom agent which implements both the ThingWorx Edge SDK and the Azure IoT device SDK

Regards,   Greg Eva
View full tip
Developing with Axeda Artisan (for Axeda Platform, v6.8 and later)

Axeda Artisan is a development tool based on the Apache Maven build system. Artisan includes authoring, management, and deployment mechanisms that define and manage the development of Axeda Platform extension points, allowing you to flexibly install, update, and uninstall many types of objects on the Axeda Platform.

The Components of Artisan
The Artisan framework is comprised of several components:
- The Axeda Platform: Each instance of the Axeda Platform contains libraries and sample Artisan Projects (archetypes) that are downloaded by developers to author and deploy their own Artisan Projects.
- The Artisan Project: An Artisan Project is a collection of content and configuration files that are used to define a set of objects that will be installed on the Axeda Platform.
- The Artisan Installer: The Artisan Installer is a flexible, configurable application that uses the Apache Maven build life-cycle to upload the contents of an Artisan Project to the Axeda Platform.
- Axeda Platform APIs: The Artisan Installer interacts with the Axeda Platform and uses Axeda APIs to install, upgrade, or uninstall Artisan Projects.

How You Work With Artisan
The goal of using Artisan is to develop an Artisan Project that will become an object or application that runs on the Axeda Platform. There are several phases involved in creating an Artisan Project:
1. Preparing Your Development Environment – When first working with Artisan, you need to prepare your development environment by installing or configuring Java, Apache Maven, and any other required development tools.
2. Creating an Artisan Project – Artisan uses archetypes as the basis for Artisan Projects. Archetypes are used to generate an Artisan Project that contains the correct type of development framework, which you then modify to create your own Artisan Project. The Hello World archetype provides a sample project you can use to create common objects on the Axeda Platform; the Machine Streams archetype provides a sample project you can use to configure machine streams for your Axeda Platform.
3. Installing an Artisan Project on the Axeda Platform – The Artisan Installer packages the contents of the Artisan Project and deploys them on the Axeda Platform.
4. Testing the Artisan Project – Once installed, you can test your Artisan Project to verify its functionality and identify any issues that need to be addressed. It is common for an Artisan Project to be installed on the Axeda Platform several times as issues are identified and corrected during development.
5. Creating a Stand-alone Installer – Once ready for deployment to production, the Artisan Installer is used to create a stand-alone installer for distributing the project contents to other instances of the Axeda Platform.

Where to Go from Here
To begin working with the Artisan framework to create your own projects, refer to the following:
- Artisan Landing Page – A page hosted on the Axeda Platform that introduces Artisan and serves as a starting point for creating an Artisan Project. The Artisan Landing Page can be found at http://instance_name:port/artisan, where instance_name:port is the host name (or IP address) and port of an instance of the Axeda Platform.
- Axeda® Artisan Developer's Guide – A reference that introduces Artisan, describes how to prepare your development environment, and provides detailed technical information concerning creating an Artisan Project and configuring the Artisan Installer.
The Axeda® Artisan Developer’s Guide is available with all Axeda product documentation from the PTC Support site, http://support.ptc.com/
View full tip
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage.

This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine streams message is logged to stdout. (Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support (http://support.ptc.com/).)

Downloading and Installing the Project
The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites
To download, build, and compile the machine-streams-data-relay project, you will need the following:
- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints", available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide", available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.
Complete information about configuring the Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" Reference Guide (available from PTC Support (http://support.ptc.com/)).

Building the Project
This page provides instructions for building the Data Relay project for Linux and for Windows environments.

1. Download and uncompress the project for your environment.

Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz

    # tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
    # cd machine-streams-data-relay-1.0.3

Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip
Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config.properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints - configAMQ.properties

    # The ActiveMQ broker URL.
    brokerURL=tcp://localhost:62000
    # The ActiveMQ queue name to process messages from.
    # It can be a single queue: MachineStream.stream01
    # Or a wildcard queue: MachineStream.>
    queueName=MachineStream.>
    # The username used to connect to the ActiveMQ queue
    username=axedaadmin
    # The password used to connect to the ActiveMQ queue
    password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
    # The number of ActiveMQ broker connections.
    numConnections=10
    # The number of sessions per connection. Note that each session will create a separate thread.
    numSessionsPerConnection=5
    # The number of concurrent threads used for processing machine streams messages.
    numProcessingThreads=100
    # The type of message listener container.
    # default = single queue name per connection.
    # multiDestination = supports multiple queue names per connection
    messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties

    # The ASB broker URL.
    brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
    # The ASB queues to process messages from.
    # It can be a single queue: MachineStream.stream01
    # Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
    # Or a queue range defined by the following syntax: MachineStream.stream[01-20]
    queueName=MachineStream.stream[01-50]
    # The username used to connect to the ASB queue(s)
    username=your-azure-service-bus-username
    # The password used to connect to the ASB queue(s)
    password=the-password-for-your-azure-service-bus-username
    # The max number of ASB broker connections.
    numConnections=10
    # The number of concurrent threads used for processing machine streams messages.
    numProcessingThreads=100
    # The type of message listener container.
    # default = single queue name per connection.
    # multiDestination = supports multiple queue names per connection
    messageListenerContainerType=multiDestination

Note: messageListenerContainerType=multiDestination is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

- brokerURL: Location of the ActiveMQ or Azure Service Bus server (broker).
- queueName: Name of the ActiveMQ or Azure Service Bus queue(s) from which to process messages. It can be a single queue (MachineStream.stream01), multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02), a queue range (MachineStream.stream[01-50]), or, for ActiveMQ, a wildcard queue (MachineStream.>). If you have multiple queues and you want to use ASB, you have to use multiDestination and a comma list or range.
- username: Username used to connect to the queue(s). For ActiveMQ: username=axedaadmin; for ASB: username=your-azure-service-bus-username.
- password: Password used to connect to the ActiveMQ or ASB queue(s).
- numConnections: Number of ActiveMQ or Azure Service Bus broker connections. Default is 10.
- numSessionsPerConnection: Number of sessions per connection; note that each session creates a separate thread. Default is 5. APPLICABLE TO ACTIVEMQ ONLY.
- numProcessingThreads: Number of concurrent threads used for processing machine streams messages. Default is 100.
- messageListenerContainerType: Type of message listener container. default = single queue name per connection; multiDestination = supports multiple queue names per connection.

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.

For Linux:

    # mvn package -DskipTests

For Windows:

    c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin.tar.gz archive, and enter the resulting directory.

For Linux:

    # cd target
    # tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
    # cd machine-streams-data-relay-1.0.3

For Windows:

    c:\> cd target
    c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
    c:\> cd machine-streams-data-relay-1.0.3

5. Start the application, passing the config properties file of your choice.

For Linux:

    # ./machineStreamsDataRelay.sh <config properties file>
    e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties

For Windows:

    c:\> machineStreamDataRelay.bat <config properties file>
    e.g. machineStreamDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

    2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin
    2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5
    2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5

If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

    2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner
    2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers
    2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers
    2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers
    2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers
    2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers
    2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
    2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers
    2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers
    2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers
    2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers
    2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers
    2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue consumers
    2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers
    2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers
    2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
    2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
    2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers
    2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
    2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers
    2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers
    2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis
    2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis
    2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers
    2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers
    2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers
    2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
    2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
    2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
    2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
    2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers
    2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
    2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
    2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
    2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
    2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
    2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
    2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
    2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
    2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
    2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
    2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
    2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
    2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
    2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
    2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
    2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
    2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers
    2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers
    2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers
    2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis
    2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis

7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.)
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes it into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This interface defines the message processor callback that will be called for message
 * processing. Note that this method will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
    /**
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads(). If you add code here that significantly
     * slows down message processing, then there is the potential that MessageListenerService
     * threads will also block. When the MessageListenerService threads block, messages will start
     * to back up in the ActiveMQ or ASB message queues. If you are processing a large number of
     * messages, then you may need to adjust your configuration parameters or optimize your
     * processMessage() code.
     * @param message machine stream message to process
     */
    public void processMessage(StreamedMessage message);
}

An additional class named CustomMessageProcessor.java has been provided so that you can supply your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class is provided so that customers can implement their own message processing business
 * logic.
 * To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor").
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

    /**
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads(). If you add code here that significantly
     * slows down message processing, then there is the potential that MessageListenerService
     * threads will also block. When the MessageListenerService threads block, messages will start
     * to back up in the ActiveMQ or Azure Service Bus message queues. If you are processing a
     * large number of messages, then you may need to adjust your configuration parameters or
     * optimize your processMessage() code.
     *
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports four message types:

StreamedDataItemMessage
StreamedAlarm
StreamedMobileLocation
StreamedRegistrationMessage

Add your message processing business logic for each message type you care about. You may want to write each message to your favorite NoSQL database or to a flat file; a sketch of the flat-file variant appears at the end of this post.

Once you have completed your changes to CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:

Uncomment this line: @Qualifier("customMessageProcessor")
Comment out this line: @Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // Use the CustomMessageProcessor instead of the default LogMessageProcessor
    @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
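For reference, here is a minimal sketch (not part of the shipped project) of the flat-file variant mentioned above: a MessageProcessor that appends each streamed data item to a file. Because this post does not show the individual getters on StreamedDataItemMessage, the sketch falls back on toString(); replace that call with the real accessors from the model class.

FlatFileMessageProcessor.java (illustrative sketch)

package com.axeda.tools.streams.processor;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;

@Component("flatFileMessageProcessor")
public class FlatFileMessageProcessor implements MessageProcessor {

    // Single shared output file; each write opens, appends, and closes the file.
    // Keep writes small and fast here, since processMessage() runs on the
    // processing thread pool and slow writes can back up the message queues.
    private static final Path OUTPUT = Paths.get("machine-streams-dataitems.log");

    @Override
    public void processMessage(StreamedMessage message) {
        if (!(message instanceof StreamedDataItemMessage)) {
            return; // this sketch only persists data items
        }
        StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
        // toString() is a placeholder: substitute the model class's real getters
        // (asset, data item name, value, timestamp) to control the output columns.
        String line = dataItem.toString() + System.lineSeparator();
        try {
            Files.write(OUTPUT, line.getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            // Log and drop rather than propagate, so listener threads stay alive.
            e.printStackTrace();
        }
    }
}

To wire it in, you would change the @Qualifier in MessageProcessingServiceImpl.java to @Qualifier("flatFileMessageProcessor"), exactly as described for customMessageProcessor above.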
View full tip
Complete information about installing and using the Axeda Machine Streams Data Relay Project is available here. This page provides the files for the Axeda Machine Streams Data Relay Project:

For Linux users
ready to run (bin) files: download the machine-streams-data-relay-1.0.3-bin.tar_.gz archive
full Maven project files: download the machine-streams-data-relay-1.0.3-project.tar_.gz archive

For Windows users
ready to run (bin) files: download the machine-streams-data-relay-1.0.3-bin.zip archive
full Maven project files: download the machine-streams-data-relay-1.0.3-project.zip archive

From the links above, select the project you want to use.
View full tip
Since the marketplace extension is no longer supported and the drivers may be outdated, you may build your own JDBC package/extension:

1. Download the Extension Metadata file here.
2. Download the appropriate JDBC driver.
3. Build the extension structure by creating the directory lib/common.
4. Place the driver JAR file in that directory: lib/common/<JDBC driver jar file>.
5. Modify the name attribute of the ExtensionPackage entity in the metadata.xml file as needed (an illustrative skeleton of this file appears at the end of this post).
6. Point the file attribute of the FileResource entity to the name of the JDBC JAR file.
7. The metadata also contains a ThingTemplate; its name is set to MySqlServer, but it can be modified as needed.
8. Select the lib folder and the metadata.xml file and send them to a zip archive. Tip: the name of the zip archive should match the name given in the name attribute of the ExtensionPackage entity in metadata.xml.
9. Import the newly created extension as usual.

To use the JDBC extension, simply create a new Thing and assign it the new ThingTemplate that was imported with the extension.

Configuration field explanation:

JDBC Driver Class Name: depends on the driver being used; refer to the driver's documentation.
JDBC Connection String: defines the information needed to establish a connection with the database. Connection string examples can be found in the ThingWorx Help Center.
ConnectionValidationString: a simple query that will work regardless of table names, executed to verify return values from the database.

Alternatively, you may download the JDBC Connector Creator from the Marketplace here: https://marketplace.thingworx.com/Items/jdbc-connector-extension. Then you may just view the mashup and use it to package your JDBC JAR into an extension (which can later be imported into ThingWorx).
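For orientation, an illustrative skeleton of the metadata.xml is shown below. Treat it as a sketch only: the name, vendor, version, and JAR file values are placeholders, and the exact element nesting and required attributes can differ between ThingWorx versions, so always start from the metadata file downloaded in step 1 rather than typing this in.

metadata.xml (illustrative skeleton)

<Entities>
  <ExtensionPackages>
    <!-- The name attribute should match the name of the zip archive you build in step 8 -->
    <ExtensionPackage name="MyJdbcExtension"
                      description="JDBC extension wrapping a database driver"
                      vendor="MyCompany"
                      packageVersion="1.0.0">
      <JarResources>
        <!-- The file attribute must name the driver JAR placed under lib/common (step 4) -->
        <FileResource type="JAR" file="my-jdbc-driver.jar"/>
      </JarResources>
    </ExtensionPackage>
  </ExtensionPackages>
  <!-- The downloaded metadata also declares a ThingTemplate (named MySqlServer by default) -->
</Entities>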
View full tip
Objective

Use Influx as a database to store data coming from the Kepware ThingWorx Industrial Connectivity server.

Prerequisite

Configure the ThingWorx connection to Kepware's KEPServerEX and bind tags that exist in KEPServerEX to Things in the ThingWorx model, as referenced in the Industrial Connections Example.

Configuration Steps

1. Create a database in Influx for ThingWorx. Connect:

influx -precision rfc3339
> SHOW DATABASES
> CREATE DATABASE thingworx

2. Create an Influx Persistence Provider and configure it.

3. In the Industrial Thing where the remote properties are bound, define a Value Stream, and make sure its Persistence Provider is set to the Influx provider and is set to Active.

4. In the Value Stream's Properties and Alerts, define the mappings using Manage Bindings to specify which properties are to be stored in this value stream.

5. Save it and test it to make sure properties are stored in Influx:

> use thingworx
> show measurements
name
----
Channel1.Device1

> show field keys on thingworx from "Channel1.Device1"
name: Channel1.Device1
fieldKey fieldType
-------- ---------
Channel1_Device1_Tag10 integer
Channel1_Device1_Tag11 integer
Channel1_Device1_Tag12 integer
Channel1_Device1_Tag13 integer
Channel1_Device1_Tag14 integer
Channel1_Device1_Tag15 integer
Channel1_Device1_Tag16 integer
Channel1_Device1_Tag17 integer
Channel1_Device1_Tag18 integer
Channel1_Device1_Tag19 integer
Channel1_Device1_Tag2 integer
Channel1_Device1_Tag20 integer
Channel1_Device1_Tag21 integer
Channel1_Device1_Tag3 integer
Channel1_Device1_Tag4 integer
Channel1_Device1_Tag5 integer
Channel1_Device1_Tag6 integer
Channel1_Device1_Tag7 integer
Channel1_Device1_Tag8 integer
Channel1_Device1_Tag9 integer

The following shows the data stored in Channel1_Device1_Tag2:

> select Channel1_Device1_Tag2 from "Channel1.Device1"
2019-02-20T16:26:13.699Z 8043
2019-02-20T16:26:14.715Z 8044
2019-02-20T16:26:15.728Z 8045
2019-02-20T16:26:16.728Z 8046
2019-02-20T16:26:17.727Z 8047
2019-02-20T16:26:18.725Z 8048
2019-02-20T16:26:19.724Z 8049
2019-02-20T16:26:20.722Z 8050
2019-02-20T16:26:21.723Z 8051
2019-02-20T16:26:22.722Z 8052
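Once data is flowing, InfluxQL aggregate functions let you inspect trends without pulling every raw row. As a small illustrative query (measurement and field names taken from the example above; adjust them to your own channel, device, and tag names), this computes a one-minute mean of the tag over the last hour:

> SELECT MEAN("Channel1_Device1_Tag2") FROM "Channel1.Device1" WHERE time > now() - 1h GROUP BY time(1m)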
View full tip