Below is a discussion of a simple implementation of a POST request in Java. The entire source is embedded at the bottom of this post for easy copy and paste.

To start, define the URL you are trying to POST to:

String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";

Breaking down this URL string:

http:// - a non-SSL connection is used in this example
127.0.0.1:80 - the address and port that ThingWorx is hosted on
/Thingworx - required because we are talking to ThingWorx
/Things - used here because the service being posted to is on a Thing; alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems
/Thing_Name - substitute in the name of your Thing where the service is located
/Services - we are calling a service on the Thing, so this is how you drill down to it
/Service_to_Post_to - substitute in the name of the service you are trying to invoke

Create a URL object:

URL obj = new URL(url);

Class URL, from the java.net package, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Including the port is optional; if it is omitted, port 80 is used by default.

Define an HttpURLConnection object to later open a single connection to the URL specified:

HttpURLConnection con = (HttpURLConnection) obj.openConnection();

Class HttpURLConnection, also from the java.net package, provides a single instance to connect to the specified URL. The openConnection method creates a new connection instance, but no network connection is actually made at this point.

Set the type of request and the header values to pass:

con.setRequestMethod("POST");
con.setRequestProperty("Accept", "application/json");
con.setRequestProperty("Content-Type", "application/json");
con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

You can see that we are performing a POST request, passing an Accept header, a Content-Type header, and the ThingWorx-specific appKey header.

Pass true into the setDoOutput method because we are performing a POST request; a PUT request would also pass true. When no request body is being sent, false can be passed to denote there is no "output", as in a GET request:

con.setDoOutput(true);

Create a DataOutputStream object that wraps the con object's output stream. Calling the flush method on the DataOutputStream pushes the REST request from the stream to the URL defined for POSTing. We immediately close the DataOutputStream because we are done making the request:

DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.flush();
wr.close();

The DataOutputStream class lets a Java application write primitive Java data types to the con object's output stream.

The next line returns the HTTP status code from the request. This will be something like 200 for success or 401 for unauthorized:

int responseCode = con.getResponseCode();

The final block of this code uses a BufferedReader that wraps an InputStreamReader that wraps the con object's input stream (the byte response from the server). The BufferedReader is then used to iterate through each line in the response and append it to a StringBuilder. Once that has completed, we close the BufferedReader and print the response we just retrieved.
BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuilder response = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
    response.append(inputLine);
}
in.close();
System.out.println(response.toString());

The InputStreamReader decodes bytes into character streams using a specified charset. The BufferedReader provides a more efficient way to read characters from an InputStreamReader. The StringBuilder is an unsynchronized way to build a String representation of the content read from the BufferedReader; StringBuffer can be used instead where multi-threaded synchronization is necessary.

Below is the block of code in its entirety from the discussion above:

public void sendPost() throws Exception {
    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

    // add request headers
    con.setRequestMethod("POST");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

    // send POST request
    con.setDoOutput(true);
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();

    int responseCode = con.getResponseCode();
    System.out.println("\nSending 'POST' request to URL : " + url);
    System.out.println("Response Code : " + responseCode);

    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();

    // print result
    System.out.println(response.toString());
}
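Note that the example above never writes to the output stream, so the POST is sent with an empty body; for a service that takes no inputs this is fine. If your service does take inputs, write a JSON body to the output stream before flushing. A minimal sketch, assuming a hypothetical input parameter named "message":

// Hypothetical service input parameter "message"; adjust to match your service's inputs.
String body = "{\"message\": \"hello from Java\"}";
DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.write(body.getBytes("UTF-8"));
wr.flush();
wr.close();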
Modbus is a commonly used communications protocol that allows data transfer between computers and PLCs. This is intended to be a simple guide to setting up and using a Modbus PLC Simulator with ThingWorx.

ThingWorx provides Modbus packages for Windows, Linux and Linux ARM. The Modbus package contains libraries and Lua files intended to be used along with the Edge Microserver. Note: The Modbus package is not intended as an out-of-the-box solution.

Requirements: ThingWorx Platform, Edge Microserver, Modbus Package, Modbus PLC Simulator. In this guide, a free Modbus PLC Simulator is used. Here is the direct download link for their v8.20 binary release.

Configuring the EMS: The first step is to configure the EMS as a gateway. This is done by adding an auto_bind section to the config.json:

"auto_bind": [ {
    "name": "ModbusGateway",
    "gateway": true
}]

This creates an ephemeral Thing that only exists while the EMS is running. The next step is to modify the config.lua to include the Modbus configuration. Copy the contents of the etc folder of the Modbus package over to the etc folder of the EMS. A sample config_modbus.lua is provided in the Modbus package as a reference. The following code defines a Thing called MyPLC (which is a Remote Thing created on the platform):

scripts.MyPLC = {
    file = "thing.lua",
    template = "modbusExample",
    identifier = "plc",
    updateRate = 2000
}
scripts.Thingworx = {
    file = "thingworx.lua"
}
scripts.modbus_handler = {
    file = "modbus_handler.lua",
    name = "modbus_handler",
    host = "localhost"
}

Setting template = "modbusExample" above makes the EMS use the template of the same name located at /etc/custom/templates/. 'modbusExample' is a reference point for creating a script to add the registers of the PLC, and the given template has examples for different basetypes. The different types of available registers are noted and referenced in the modbus.lua file available under /etc/thingworx/lua/.

Setting up the PLC Simulator: Extract mod_RSsim to a folder and run the executable. Since we are 'simulating' a PLC connection, set the protocol to Modbus TCP/IP. Change the I/O to Holding Registers (or any other relevant option), with the Address set to Dec. In the Simulation menu, select 'No animation' if you want to enter values manually, or use 'Increment BYTES' to automatically generate values. This PLC Simulator will run on port 502.

The Connection: With the EMS and luaScriptResource running, the PLC Simulator should show a connection to the platform, with activity in the received/sent section. Now if you open the Remote Thing 'MyPLC' in the platform, the isConnected property (under the Properties section) should be true. (If not, go back to General Information, click Browse in the Identifier section, and select 'plc'.) Go back to the Properties section and click Manage Bindings. Click the Remote tab and the list of defined properties should appear. For example, the following line from modbusExample.lua:

properties.Int16HoldRegExample = {key="holding_register/1/40001?format=Int16", handler="modbus_handler", basetype="NUMBER"}

denotes a property named Int16HoldRegExample at register 40001. The value at address 40001 in the PLC Simulator should correspond with the value on the platform once this property is added and the Thing saved.

If you are running into any errors when connecting with a Raspberry Pi, please take a look at Duan Gauche's follow-up document/guide: Using your Raspberry Pi with the Edge Microserver and Modbus
Every edge component that connects to the ThingWorx platform requires an Application Key. This 'AppKey' provides both authentication and authorization control. When an edge component connects, it steps through a connection process. The second step of that process is to send the AppKey to the platform. The platform inspects the key and ensures that it is valid. It also creates a session for that edge connection and associates the AppKey with the session. Any future requests sent over that AlwaysOn connection will execute under the security context configured for the user associated with the AppKey.

In order for edge applications to interact with the platform they require a certain set of permissions. It is a best practice not to associate the Administrator user with an Application Key. Doing so would allow an edge application to invoke any and all services on the platform, and to modify the property values of any Thing. The permissions applied to an edge component's AppKey should be the minimum set required for your application to function.

The AppKey associated with an edge component is typically associated with a single Thing, or a collection of Things, usually of the same ThingTemplate. Identify the Thing(s) or ThingTemplate(s) that your application will interact with. There are four types of interactions for edge components: property reads, property writes, service invocations, and event executions (edge components do not subscribe to events). These four types of interactions match the runtime permissions that can be configured on a Thing, or the 'run time instance' permissions for a ThingTemplate.

If an edge application will be reading or writing all properties of a particular Thing, then applying the 'read property' and 'write property' permissions is appropriate. If only a select set of a Thing's properties will be read or written, then read and/or write permission should be disabled, and only the select properties should be enabled using overrides.

Since every Thing has a number of generic services, the 'service execute' permission should be disabled, and overrides should be configured for the selected services that the edge needs access to. In addition, overrides should be configured for the 'UpdateSubscribedPropertyValues' service and for the 'ProcessRemoteEvents' service. Edge components often use these services to update a collection of properties or to fire a set of events. Finally, if your edge application triggers events on a Thing, overrides should be used to provide execute permission for those events.

In summary, the safest path to configuring edge permissions is to create a new user and AppKey with no permissions applied, and to then selectively apply permissions for that user only on the Thing or Things that your edge components will interact with.
Prerequisite: Download the .NET SDK from the PTC Support Portal and set up the SteamSensor example according to the directions found in the ThingWorx Help Center SDK Steam Sensor Example.

In ThingWorx: Create a Remote Thing using the RemoteThingWithFileTransfer template (SteamSensor1 in this example). Create a file repository and execute its CreateFolder service to create a folder in the repository folder in ThingworxStorage (MyRepository in this example).

In SteamThing.cs: At the top of the file, import the file transfer class:

using com.thingworx.communications.client.things.filetransfer;

Create a virtual thing that extends FileTransferVirtualThing, e.g. using the steam sensor Thing:

public class SteamThing : FileTransferVirtualThing
{
    public SteamThing(string name, string description, string identifier, ConnectedThingClient client, Dictionary<string, string> virtualDirectories)
        : base(name, description, client, virtualDirectories)
    {
    }
}

In Client.cs: Create a new Dictionary above the Steam Things. Select any name you wish as the virtual directory name and set the directory path. In this example, it is named EdgeDirectory and set to the root of the C drive.

Dictionary<string, string> virtualDirectories = new Dictionary<string, string>()
{
    {"EdgeDirectory", "C:\\"}
};

Modify the SteamThing calls to include your newly created virtual directories in the SteamThing parameters:

// Create two Virtual Things
SteamThing sensor1 = new SteamThing("SteamSensor1", "1st Floor Steam Sensor", "SN0001", client, virtualDirectories);
SteamThing sensor2 = new SteamThing("SteamSensor2", "2nd Floor Steam Sensor", "SN0002", client, virtualDirectories);

To send or receive a file from the server, it is recommended that the built-in GetFile and SendFile methods are used. Create a remote service in the SDK containing either GetFile or SendFile.

GetFile: gets a file from the server. This method takes the following parameters:
sourceRepo: the entityName to get the file from.
sourcePath: the path to the file to get.
sourceFile: name of the file to get.
targetPath: the local VIRTUAL path of the resulting file (not including the file name).
targetFile: name of the resulting file in the target directory.
timeout: timeout, in seconds, for the transfer. A zero will use the system's default timeout.
async: if true, return immediately and call a callback function when the transfer is complete; if false, block until the transfer is complete. Note that the file callback function will be called in either case.

E.g. GetFile("MyRepository", "/", "test.txt", "EdgeDirectory", "movedFile.txt", 10000, true);

SendFile: sends a file to the server. This method takes the following parameters:
sourcePath: the VIRTUAL path to the file to send (not including the file name).
sourceFile: name of the file to send.
targetRepo: target repository of the file.
targetPath: path of the resulting file in the target repo (not including the file name).
targetFile: name of the resulting file in the target directory.
timeout: timeout, in seconds, for the transfer. A zero will use the system's default timeout.
async: if true, return immediately and call a callback function when the transfer is complete; if false, block until the transfer is complete. Note that the file callback function will be called in either case.

E.g. SendFile("/EdgeDirectory", "test.txt", "MyRepository", "/", "movedFile.txt", 10000, true);

From Composer, bring in the remote service on the SteamSensor Thing and execute it. Files can now be transferred to or from the .NET SDK.
Hello Developer Community, We are pleased to announce pre-release availability of the ThingWorx Edge SDK for Android! The Android SDK beta is built off the Java SDK code base, but replaces the Netty websocket client with Autobahn for compatibility with Android OS. Those familiar with the Java SDK API will feel very much at home in the Android SDK. We recommend beginning with the included sample application. Please watch this thread for upcoming beta releases. Beta 4 (b4) adds file transfers between the ThingWorx platform and Android devices, along with an example application. We welcome your questions and comments in the thread below! Happy coding! Regards, ThingWorx Edge Products Team
Official name: DataStax Enterprise, sometimes referred to as Cassandra. Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can be obtained through Technical Support.

ThingWorx 6.0 introduces DSE as a backend database scaling to a much greater byte count, as Neo4j hits performance limitations at around 50 GB. Some of the main reasons to consider DSE are:

1. Elastic scalability -- allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture -- contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance -- enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on).
4. Flexible data storage -- easily accommodates the full range of data formats - structured, semi-structured and unstructured - that run through today's modern applications.
5. Easy data distribution -- read and write to any node, with all changes automatically synchronized across the cluster, giving maximum flexibility to distribute data by replicating across multiple datacenters, cloud, and even mixed cloud/on-premise environments.

Note: Windows+DSE is currently not fully supported.

Connecting ThingWorx (prerequisite: a fully configured DSE database):
1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as the Persistence Provider Package.
5. In the Configuration tab:
     - For Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally
     - Enter a new or existing Cassandra Keyspace name
     - Enter the Solr Cluster URL
     - Other fields can be left at default (*)
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream, or Data Table, set Persistence Provider to the one created in the previous steps.

Currently all reads and writes are done through ThingWorx and all ThingWorx data is encoded in DSE. OpsCenter still allows you to see connected Streams, DataTables, and ValueStreams.

*SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments because it is much easier to expand to multiple data centers when required by future expansion.

Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality a node is not at all limited by the size of the data, only by the rate of operations. A node might have only 80 GB of data on it, but if it is continuously hit with random reads and doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it is rarely read from, or a small portion of the data is hot (so it can be effectively cached), it will do just fine. If the replication factor is above 1 and reads are not done at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from a client perspective.
The ThingWorx EMS and SDK based applications follow a three step process when connecting to the Platform:

Establish the physical websocket: The client opens a websocket to the Platform using the host and port that it has been configured to use. The websocket URL exposed at the Platform is /Thingworx/WS. TLS will be negotiated at this time as well.

Authenticate: The client sends an AUTH message to the Platform, containing either an App Key (recommended) or username/password. The AUTH message is part of the ThingWorx AlwaysOn protocol. If the client attempts to send any other message before the AUTH, the server will disconnect it. The server will also disconnect the client if it does not receive an AUTH message within 15 seconds. This time is configurable in the WSCommunicationSubsystem Configuration tab and is named "Amount of time to wait for authentication message (secs)." Once authenticated, the SDK/EMS is able to interact with the Platform according to the permissions applied to its credentials. For the EMS, this means that any client making HTTP calls to its REST interface can access Platform functionality. For this reason, the EMS only listens for HTTP connections on localhost (this can be changed using the http_server.host setting in your config.json). At this point, the client can make requests to the Platform and interact with it, much like an HTTP client can interact with the Platform's REST interface. However, the Platform still cannot direct requests to the edge.

Bind: A BIND message is another message type in the ThingWorx AlwaysOn protocol. A client can send a BIND message to the Platform containing one or more Thing names or identifiers. When the Platform receives the BIND message, it will associate those Things with the websocket it received the BIND message over. This allows the Platform to send request messages to those Things over the websocket. It also updates the isConnected and lastConnection time properties for the newly bound Things. A client can also send an UNBIND request. This tells the Platform to remove the association between the Thing and the websocket. The Thing's isConnected property will then be updated to false.

For the EMS, edge applications can register using the /Thingworx/Things/LocalEms/Services/AddEdgeThing service (this is how the script resource registers Things). When a registration occurs, the EMS sends a BIND message to the Platform on behalf of that new resource. Edge applications can de-register (and have an UNBIND message sent) by calling /Thingworx/Things/LocalEms/RemoveEdgeThing.
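For a concrete sense of these steps, here is a minimal sketch using the Java SDK. The class and method names follow the Java Edge SDK's published examples, but check your SDK version's API (older versions configure the App Key through SecurityClaims rather than setAppKey); the host, key, and Thing name are placeholders.

import com.thingworx.communications.client.ClientConfigurator;
import com.thingworx.communications.client.ConnectedThingClient;
import com.thingworx.communications.client.things.VirtualThing;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        ClientConfigurator config = new ClientConfigurator();
        // Step 1: the websocket endpoint exposed by the Platform.
        config.setUri("wss://myserver:443/Thingworx/WS");   // hypothetical host
        // Step 2: the credentials sent in the AUTH message.
        config.setAppKey("your-app-key-here");              // hypothetical key
        ConnectedThingClient client = new ConnectedThingClient(config);

        // Step 3: register a Thing so the SDK sends a BIND for it once connected.
        // "MyRemoteThing" must exist as a RemoteThing on the Platform.
        VirtualThing thing = new VirtualThing("MyRemoteThing", "Example thing", client);
        client.bindThing(thing);

        // start() opens the websocket and performs the AUTH and BIND steps.
        client.start();
    }
}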
The .net-sdk can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .net-sdk uses the standard .NET System.Diagnostics infrastructure for logging; as such, all configuration of the .net-sdk logger is done via the standard .NET logging configuration system. By default, logging is configured via the standard .NET "App.config" file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to output log messages to a file, and its use is recommended. The FixedFieldTraceListener, when configured, will automatically create a "logs" directory in the same location as (a sibling to) the running executable file (.exe). This "logs" directory will contain the log files.

Every .NET class can be configured as a specific "Trace Source" which emits log messages. It is recommended to add at least the following Trace Sources to your App.config file to receive the most useful amount of information:

com.thingworx.communications.client.BaseClient
com.thingworx.communications.client.ConnectedThingClient
com.thingworx.communications.client.things.VirtualThing
com.thingworx.communications.client.TwApiWrapper
com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing
com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing

The amount of information emitted can range from very low-level Trace messages (the Verbose setting) to nothing at all (the Off setting). The "SourceLevels Enumeration" can be used to control how much information is written out to the log file. For reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is a sample App.config file.
<?xml version="1.0" encoding="utf-8"?> <configuration>     <system.diagnostics>       <sources>         <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>       </sources>       <switches>         <add name="SourceSwitch" value="Information" />       </switches>       <sharedListeners>         <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>       </sharedListeners>       <trace autoflush="true" indentsize="4" />   </system.diagnostics> </configuration>
Recently a customer from the ThingWorx Academic Program sent in a sample program they were having problems with. They were trying to post data from a Raspberry Pi using Python to their ThingWorx server. It turns out that their program did work just fine and is also a great example of posting data from a Pi using REST. Here is how to set up this example.

1. Import the attached "Things_TempAndHumidityThing.xml" entity file.
2. From the Pi, run 'sudo pip install requests'. (The logging and http.client/httplib modules used below are part of the Python standard library, so nothing else needs to be installed.)
3. Create a python file called test.py that contains this example code:

#!/usr/bin/python
import requests
import json
import logging
import sys

# These two lines enable debugging at httplib level (requests->urllib3->http.client)
# You will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA.
# The only thing missing will be the response.body which is not logged.
try:
    import http.client as http_client
except ImportError:
    # Python 2
    import httplib as http_client
http_client.HTTPConnection.debuglevel = 1

# You must initialize logging, otherwise you'll not see debug output.
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

# ThingWorx server URL and application key, passed on the command line
NYP_Webhost = sys.argv[1]
App_Key = sys.argv[2]

ThingName = 'TempAndHumidityThing'

headers = {
    'Content-Type': 'application/json',
    'appKey': App_Key
}
payload = {
    'Prop_Temperature': 45,
    'Prop_Humidity': 33
}

response = requests.put(NYP_Webhost + '/Thingworx/Things/' + ThingName + '/Properties/*',
                        headers=headers, json=payload, verify=False)

4. From the command line run './test.py http://twhome:8080 e9274d87-58aa-4d60-b27f-e67962f3e5c4', substituting your own server and app key.
5. A successful response should look like:

INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): twhome
send: 'PUT /Thingworx/Things/TempAndHumidityThing/Properties/* HTTP/1.1\r\nHost: twhome:8080\r\nappKey: e9274d87-58aa-4d60-b27f-e67962f3e5c4\r\nContent-Length: 45\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.8.1\r\nConnection: keep-alive\r\nContent-Type: application/json\r\n\r\n{"Prop_Temperature": 45, "Prop_Humidity": 33}'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: Apache-Coyote/1.1
header: Set-Cookie: JSESSIONID=E7436D2E6AE81C84EC197D406E7E365A; Path=/Thingworx/; HttpOnly
header: Expires: 0
header: Cache-Control: no-store, no-cache
header: Cache-Control: post-check=0, pre-check=0
header: Pragma: no-cache
header: Content-Type: text/html;charset=UTF-8
header: Transfer-Encoding: chunked
header: Date: Mon, 09 Nov 2015 12:39:24 GMT
DEBUG:requests.packages.urllib3.connectionpool:"PUT /Thingworx/Things/TempAndHumidityThing/Properties/* HTTP/1.1" 200 None

My thanks to the customer who sent in the simple example.
ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS, your own C application that uses the C SDK, or SDKs for many popular languages, but what can you do if the device you want to collect data on is so small that it needs a very lightweight data delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield to stream real-time data directly to ThingWorx by installing the MQTT Marketplace Extension on your server. To learn more about how this kind of solution works, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf Hopefully it can help others out who want to create this kind of solution as well.
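To give a feel for how lightweight MQTT publishing is, here is a minimal sketch using the Eclipse Paho Java client (on the Arduino itself you would use an Arduino MQTT library instead). The broker URL and topic are placeholders; the topic your ThingWorx MQTT extension subscribes to depends on how you configure it on the server.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class PublishExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker host/port and topic; match these to the
        // MQTT extension's configuration on your ThingWorx server.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                                           MqttClient.generateClientId());
        client.connect();

        // Publish a single sensor reading as the message payload.
        MqttMessage message = new MqttMessage("23.5".getBytes());
        message.setQos(1); // at-least-once delivery
        client.publish("sensors/temperature", message);

        client.disconnect();
    }
}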
This is a slide deck I created while learning how to post data from an Arduino to ThingWorx using the MQTT protocol.
ThingWorx is great for storing large amounts of data coming from your devices, but it can also be used like a traditional, row-based database for information you would like to integrate with your Thing data. Attached to this blog entry is a short example of creating an address book database using a DataTable and a DataShape. It does not focus on creating mashups but sticks with discussing the modeling and service calls you would use to create a simple database.
A while back, when I was learning about the ThingWorx platform, I could not find any good description of how InfoTables worked, so I reverse engineered everything I could about them from a JavaScript perspective and wrote a short paper that I think is still of use today, so I posted a copy here. It discusses the role InfoTables play as a ThingWorx primitive for passing data to and from ThingWorx, and some of the lesser known capabilities that the data structure itself has built in. Here is a link to it: Getting to Know InfoTables.pdf.
When using the Auto-bind section of an EMS configuration, it is very important to note the difference between "gateway":true and "gateway":false. Either gateway value, when used with a valid "name" field, will result in the EMS attempting to bind the Thing with the ThingWorx platform, and will allow the EMS to respond to file transfer and tunnel services related to the auto-bound Things, but this is roughly where the similarities end.

Non-Gateway: This type of auto-bound Thing can be thought of as a placeholder, because the EMS will still require a LuaScriptResource to be bound in order to respond to property/service/event related messages. There must be a corresponding Thing based on the RemoteThing template (or any RemoteThing-derived template, e.g. RemoteThingWithFileTransfer) on the ThingWorx server in order for the bind to succeed. There are many reasons to use this type of auto-bound Thing, but the most common is to bind a simple Thing that can facilitate file transfer and tunnel services but does not need any custom services, properties, or events that would be provided by custom Lua scripts within the LuaScriptResource.

Gateway: An auto-bound gateway can be bound to the ThingWorx platform ephemerally if there is no Thing present to bind with on the platform. To clarify: if no Thing exists with the matching Thing name on the platform, and the EMS is attempting to bind a gateway, a Thing will be automatically created on the platform to bind with the auto-bound gateway. This newly created ephemeral Thing will only be accessible through the ThingWorx REST API, and once the EMS unbinds the gateway the ephemeral Thing will be deleted. If there is a pre-existing Thing on the ThingWorx server, then it must be based on the EMSGateway template in order for the bind to succeed. The EMSGateway template, used both normally and ephemerally, provides some gateway-specific services that would otherwise be inaccessible to a normal remote Thing. See the EMSGateway class documentation for more details.
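For reference, an auto_bind section declaring one of each kind might look like the following sketch. The Thing names are placeholders, and the snippet follows the same config.json syntax shown in the Modbus guide above:

"auto_bind": [ {
    "name": "MySimpleRemoteThing",
    "gateway": false
}, {
    "name": "MyEMSGateway",
    "gateway": true
}]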
Everywhere in the ThingWorx Platform (even the edge and extensions) you see the data structure called InfoTables. What are they? They are used to return data from services, map values in mashups, and move information around the platform. What they are is very simple; how they are set up and used is also simple, but there are a lot of ways to manipulate them. Simply put, InfoTables are JSON data, that is all. However, they use a standard structure that the platform can recognize and use.

There are two pieces to an InfoTable: the dataShape definition and the rows array. The dataShape is the definition of each row value in the rows array. This is not accessible directly in service code, but there are functions and structures to manipulate it in services if needed.

Example InfoTable definition and values:

{
    dataShape: {
        fieldDefinitions: {
            ColOneName: { name: "ColOneName", baseType: "STRING" },
            ColTwoName: { name: "ColTwoName", baseType: "NUMBER" }
        }
    },
    rows: [
        { ColOneName: "FirstValue", ColTwoName: 13 },
        { ColOneName: "SecondValue", ColTwoName: 14 }
    ]
}

So you can see that the dataShape value is made up of a group of JSON objects under the fieldDefinitions element. Each field has a "name" element, which of course defines the field name, and a "baseType" element, which is the ThingWorx primitive type of the named field. Typically this structure is automatically created by using a DataShape object that is defined in the platform. This is also the reason DataShapes need to be defined: so that fields can be defined not only for InfoTables, but also for DataTables and Streams. This is how mashups know what the structure of the data is when creating bindings to widgets, and how other parts of the platform can display data in a structured format.

The other part is the "rows" element, which contains an array of JSON objects holding the actual data in the InfoTable.

Accessing the values in the rows is as simple as using standard JavaScript syntax for JSON. To access the number in the first row of the InfoTable referenced above (if the name of the InfoTable variable is "MyInfoTable"), use MyInfoTable.rows[0].ColTwoName. This would return a value of 13. As you can note, the JSON array index starts at zero.

Looping through an InfoTable in service script is also very simple. You can use the index in a standard "for loop" structure, but a little cleaner way is to use a "for each loop" like this:

for each (row in MyInfoTable.rows) {
    var colOneVal = row.ColOneName;
    ...
}

It is important to note that many base services in the platform return an InfoTable, and most of these have system-defined DataShapes built into the platform (such as QueryDataTableEntries, GetImplementingThings, QueryNumberPropertyHistory and many, many more). All service results from query services accessing external databases are also returned in the structure of an InfoTable.

Manipulating an InfoTable in script is easy using various functions built into the platform. Many of these can be found in the "Snippets" tab of the service editor in Composer, in both the InfoTableFunctions Resource and InfoTable Code Snippets. Some of my favorites and most commonly used:
Create a blank InfoTable:

var params = {
    infoTableName: "MyTable"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTable(params);

Add a new field to any InfoTable:

MyInfoTable.AddField({name: "ColNameThree", baseType: "BOOLEAN"});

Delete a field:

MyInfoTable.RemoveField("ColNameThree");

Add a data row:

MyInfoTable.AddRow({ColOneName: "NewRowValue", ColTwoName: 15});

Delete one or more data rows matching the values defined (note that you can define multiple fields in this statement):

// delete all rows that have a value of 13 in ColOneName
MyInfoTable.Delete({ColOneName: 13});

Create an InfoTable using a predefined DataShape:

var params = {
    infoTableName: "MyInfoTable",
    dataShapeName: "dataShapeName"
};
var MyInfoTable = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

There are many more functions built into the platform, including ones to filter, sort and query rows. These can be extremely useful when trying to return limited or more strictly structured InfoTable data. Hopefully this gives you a better understanding and use of this critical part of the ThingWorx Platform.
The App URI in the ThingWorx Remote Thing Tunnel configuration specifies the endpoint of the specified tunnel. The default value (/Thingworx/tunnel/vnc.jsp) points to the built-in ThingWorx VNC client that can be downloaded through the Remote Access Widget in a mashup to provide VNC remote desktop access. Leaving the App URI blank will result in the tunnel being connected to the listen port on the user's machine as specified in the Remote Access Widget. In this case the user must supply the application client (e.g. an SSH client) in order to connect to the tunnel endpoint.
Developing with Axeda Artisan (for Axeda Platform, v6.8 and later)

Axeda Artisan is a development tool based on the Apache Maven build system. Artisan includes authoring, management, and deployment mechanisms that define and manage the development of Axeda Platform extension points, allowing you to flexibly install, update, and uninstall many types of objects on the Axeda Platform.

The Components of Artisan

The Artisan framework is comprised of several components:

The Axeda Platform: Each instance of the Axeda Platform contains libraries and sample Artisan Projects (archetypes) that are downloaded by developers to author and deploy their own Artisan Projects.
The Artisan Project: An Artisan Project is a collection of content and configuration files that are used to define a set of objects that will be installed on the Axeda Platform.
The Artisan Installer: The Artisan Installer is a flexible, configurable application that uses the Apache Maven build life-cycle to upload the contents of an Artisan Project to the Axeda Platform.
Axeda Platform APIs: The Artisan Installer interacts with the Axeda Platform and uses Axeda APIs to install, upgrade, or uninstall Artisan Projects.

How You Work With Artisan

The goal of using Artisan is to develop an Artisan Project that will become an object or application that runs on the Axeda Platform. There are several phases involved in creating an Artisan Project:

Preparing Your Development Environment: When first working with Artisan, you need to prepare your development environment by installing or configuring Java, Apache Maven, and any other required development tools.
Creating an Artisan Project: Artisan uses archetypes as the basis for Artisan Projects. Archetypes are used to generate an Artisan Project that contains the correct type of development framework, which you then modify to create your own Artisan Project. The Hello World archetype provides a sample project you can use to create common objects on the Axeda Platform; the Machine Streams archetype provides a sample project you can use to configure machine streams for your Axeda Platform.
Installing an Artisan Project on the Axeda Platform: The Artisan Installer packages the contents of the Artisan Project and deploys them on the Axeda Platform.
Testing the Artisan Project: Once installed, you can test your Artisan Project to verify its functionality and identify any issues that need to be addressed. It is common for an Artisan Project to be installed on the Axeda Platform several times as issues are identified and corrected during development.
Creating a Stand-alone Installer: Once the project is ready for deployment to production, the Artisan Installer is used to create a stand-alone installer for distributing the project contents to other instances of the Axeda Platform.

Where to Go from Here

To begin working with the Artisan framework to create your own projects, refer to the following:

Artisan Landing Page: A page hosted on the Axeda Platform that introduces Artisan and serves as a starting point for creating an Artisan Project. The Artisan Landing Page can be found at http://instance_name:port/artisan, where instance_name:port is the address and port of an instance of the Axeda Platform.
Axeda® Artisan Developer's Guide: A reference that introduces Artisan, describes how to prepare your development environment, and provides detailed technical information concerning creating an Artisan Project and configuring the Artisan Installer.
The Axeda® Artisan Developer’s Guide is available with all Axeda product documentation from the PTC Support site, http://support.ptc.com/
Axeda Machine Streams enables external Platform integrators to access the current, raw data from connected assets. The Platform can stream the data item, alarm, mobile location, and registration messages from connected assets to an ActiveMQ server or Azure Service Bus endpoint. Streamed data can be used for data analytics or reporting, or simply for storage.

This article explains the Machine Streams Data Relay project that Axeda provides. This sample project illustrates how stream consumers can create their own projects to relay Machine Stream messages from ActiveMQ or Azure Service Bus into their environments. The Machine Streams Data Relay project was created using Apache Maven. The project operates by dispatching messages to a log message processor; each machine streams message is logged to stdout. (Note: The "Axeda Features Guide" provides a high-level introduction to the Axeda Machine Streams feature. That PDF is available from PTC Support, http://support.ptc.com/.)

Downloading and Installing the Project

The machine-streams-data-relay project is provided as a tar.gz archive for Linux users and a .zip archive for Windows users. Each archive includes a Maven project with all source code. This page provides downloads and full source for the machine-streams-data-relay Maven project. The Data Relay project files are available from here.

Prerequisites

To download, build, and compile the machine-streams-data-relay project, you will need the following:

- Access to an Axeda Platform instance configured to stream asset data (for an ActiveMQ endpoint this includes the Axeda-provided ActiveMQ machine-streams plugin/overlay, axeda-jms-plugin-r<SVN_REVISION>-machine-streams.zip, which is provided here).
- An ActiveMQ or Azure Service Bus server configured for Machine Streams. Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" guide, available with all documentation from PTC Support (http://support.ptc.com/).
- At least one machine stream (Axeda Artisan Machine Streams Archetype) configured to stream data to the ActiveMQ or Azure Service Bus server for your assets. (Complete information about creating machine streams and adding machine stream support to the Axeda Platform is provided in the "Axeda v2 API/Services Developers Reference Guide" available from PTC Support (http://support.ptc.com/).)
- Access to the ActiveMQ or Azure Service Bus server configured as the endpoint for streamed Machine Streams content.
- Oracle Java JDK 1.7 or greater, with java and javac installed and available in your PATH (if you need instructions for this, see http://www.oracle.com/technetwork/java/javase/downloads/index.html).
- Maven 3.0.4 or greater, with mvn installed and available in your PATH (if you need instructions for this, see http://maven.apache.org/download.cgi).

Note: For the Machine Streams Data Relay project to work successfully, the Axeda Platform instance and the ActiveMQ or Azure Service Bus server instance must be configured with support for Axeda Machine Streams, and at least one machine stream must be configured to stream data.

Complete information about configuring Axeda Platform for Axeda Machine Streams, including the data format for the resulting streams (XML or JSON), is available in the "Axeda v2 API/Services Developers Reference Guide." Instructions for configuring an ActiveMQ or Azure Service Bus server for Machine Streams are provided in the "Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints" Reference Guide (available from PTC Support, http://support.ptc.com/).

Building the Project

This page provides instructions for building the Data Relay project for Linux and for Windows environments.
1. Download and uncompress the project for your environment.

For Linux: Click here for the machine-streams-data-relay-1.0.3-project.tar.gz
# tar -zxvf machine-streams-data-relay-1.0.3-project.tar.gz
# cd machine-streams-data-relay-1.0.3

For Windows: Click here for the machine-streams-data-relay-1.0.3-project.zip
Unzip the project to the following directory: C:\machine-streams-data-relay-1.0.3

2. Edit the ActiveMQ or Azure Service Bus configuration file (configAMQ.properties or configASB.properties) in src\main\scripts\ as needed.

Sample config.properties files for the MachineStreamsDataRelay component:

For ActiveMQ broker endpoints - configAMQ.properties:

# The ActiveMQ broker URL.
brokerURL=tcp://localhost:62000
# The ActiveMQ queue name to process messages from.
# It can be a single queue: MachineStream.stream01
# Or a wildcard queue: MachineStream.>
queueName=MachineStream.>
# The username used to connect to the ActiveMQ queue
username=axedaadmin
# The password used to connect to the ActiveMQ queue
password=zQXuLzhQgcyRZ25JCDXYEPBCT2kx48
# The number of ActiveMQ broker connections.
numConnections=10
# The number of sessions per connection. Note that each session will create a separate thread.
numSessionsPerConnection=5
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=default

For Azure Service Bus broker endpoints - configASB.properties:

# The ASB broker URL.
brokerURL=amqps://your-azure-service-bus-namespace.servicebus.windows.net
# The ASB queues to process messages from.
# It can be a single queue: MachineStream.stream01
# Or multiple queues separated by a comma: MachineStream.stream01,MachineStream.stream02
# Or a queue range defined by the following syntax: MachineStream.stream[01-20]
queueName=MachineStream.stream[01-50]
# The username used to connect to the ASB queue(s)
username=your-azure-service-bus-username
# The password used to connect to the ASB queue(s)
password=the-password-for-your-azure-service-bus-username
# The max number of ASB broker connections.
numConnections=10
# The number of concurrent threads used for processing machine streams messages.
numProcessingThreads=100
# The type of message listener container.
# default = single queue name per connection.
# multiDestination = supports multiple queue names per connection
messageListenerContainerType=multiDestination

Note: messageListenerContainerType is provided because Azure Service Bus does not support wildcard queue names.
The configuration details are as follows:

brokerURL: the location of the ActiveMQ or Azure Service Bus server (broker).
queueName: the name of the ActiveMQ or Azure Service Bus queue(s) from which you want to process messages. It can be a single queue (MachineStream.stream01), multiple queues separated by a comma (MachineStream.stream01,MachineStream.stream02), or a queue range defined by the syntax MachineStream.stream[01-50]. For ActiveMQ you can also use a wildcard queue name (MachineStream.>). If you have multiple queues and you want to use ASB, you must use the range syntax and set messageListenerContainerType to multiDestination.
username: the username used to connect to the ActiveMQ or Azure Service Bus queue(s). For ActiveMQ: axedaadmin; for ASB: your Azure Service Bus username.
password: the password used to connect to the ActiveMQ or ASB queue(s).
numConnections: the number of ActiveMQ or Azure Service Bus broker connections. Default is 10.
numSessionsPerConnection: APPLICABLE TO ACTIVEMQ ONLY. The number of sessions per connection; note that each session will create a separate thread. Default is 5. (This key is used infrequently.)
numProcessingThreads: the number of concurrent threads used for processing machine streams messages. Default is 100.
messageListenerContainerType: the type of message listener container. default = a single queue name per connection; multiDestination = supports multiple queue names per connection.

3. Build the code using Maven. Use the -DskipTests option if you want to skip tests. This will build all source code and produce a bin archive in the target directory.

For Linux:
# mvn package -DskipTests

For Windows:
c:\> mvn package -DskipTests

4. Enter the target directory, uncompress the *bin.tar.gz (or *bin.zip) archive, and enter the resulting directory.

For Linux:
# cd target
# tar -zxvf machine-streams-data-relay-1.0.3-bin.tar.gz
# cd machine-streams-data-relay-1.0.3

For Windows:
c:\> cd target
c:\> unzip machine-streams-data-relay-1.0.3.bin.zip
c:\> cd machine-streams-data-relay-1.0.3

5. Start the application.

For Linux:
# ./machineStreamsDataRelay.sh <config properties file>
e.g. ./machineStreamsDataRelay.sh configOfYourChoice.properties

For Windows:
c:\> machineStreamsDataRelay.bat <config properties file>
e.g. machineStreamsDataRelay.bat configOfYourChoice.properties

See the two example config files included within the project: configASB.properties (for Azure Service Bus) and configAMQ.properties (for ActiveMQ).

6. Scan the output.
If your ActiveMQ configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-03-26 10:27:06.179 [main] INFO  [MessageListenerServiceImpl]: Initializing connections to tcp://localhost:62000 username=axedaadmin
2014-03-26 10:27:06.346 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 1: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.351 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 2: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.356 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 3: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.365 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 4: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.369 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 5: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.381 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 6: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.388 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 7: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.402 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 8: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.411 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 9: queue=MachineStream.> numSessions=5
2014-03-26 10:27:06.416 [main] INFO  [MessageListenerServiceImpl]: Initialized connection 10: queue=MachineStream.> numSessions=5

If your Azure Service Bus configuration is correct, output similar to the following should appear, and no ERRORS should be shown:

2014-10-01 16:51:30.114 [main] INFO [MessageListenerServiceImpl]: Initializing Connections to amqps://acme.servicebus.windows.net username=owner
2014-10-01 16:51:31.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 0/9 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 0/10 queue consumers
2014-10-01 16:51:31.614 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 0/10 queue consumers
2014-10-01 16:51:31.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 0/10 queue consumers
2014-10-01 16:51:31.621 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 0/10 queue consumers
2014-10-01 16:51:31.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 0/10 queue consumers
2014-10-01 16:51:32.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 9/10 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 7/9 queue consumers
2014-10-01 16:51:32.614 [ConnectionRecovery-thread-2] INFO [MultiDestinationMessageListenerContainer]: Connection 2 created 10/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 7/10 queue consumers
2014-10-01 16:51:32.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:32.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 9/10 queue consumers
2014-10-01 16:51:32.756 [ConnectionRecovery-thread-1] INFO [MultiDestinationMessageListenerContainer]: Connection 1 created 10/10 queue consumers
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 1: numQueues=10 initTimeMillis=2631 millis
2014-10-01 16:51:32.833 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 2: numQueues=10 initTimeMillis=2488 millis
2014-10-01 16:51:33.613 [ConnectionRecovery-thread-6] INFO [MultiDestinationMessageListenerContainer]: Connection 6 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-8] INFO [MultiDestinationMessageListenerContainer]: Connection 8 created 10/10 queue consumers
2014-10-01 16:51:33.614 [ConnectionRecovery-thread-10] INFO [MultiDestinationMessageListenerContainer]: Connection 10 created 9/9 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:33.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:33.623 [ConnectionRecovery-thread-7] INFO [MultiDestinationMessageListenerContainer]: Connection 7 created 10/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 0/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:34.615 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:35.615 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:35.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:36.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 9/10 queue consumers
2014-10-01 16:51:37.616 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 8/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-3] INFO [MultiDestinationMessageListenerContainer]: Connection 3 created 10/10 queue consumers
2014-10-01 16:51:38.617 [ConnectionRecovery-thread-9] INFO [MultiDestinationMessageListenerContainer]: Connection 9 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-4] INFO [MultiDestinationMessageListenerContainer]: Connection 4 created 10/10 queue consumers
2014-10-01 16:51:38.616 [ConnectionRecovery-thread-5] INFO [MultiDestinationMessageListenerContainer]: Connection 5 created 10/10 queue consumers
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 3: numQueues=10 initTimeMillis=8491 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 4: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 5: numQueues=10 initTimeMillis=8490 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 6: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 7: numQueues=10 initTimeMillis=3495 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 8: numQueues=10 initTimeMillis=3485 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 9: numQueues=10 initTimeMillis=8488 millis
2014-10-01 16:51:38.643 [main] INFO [MessageListenerServiceImpl]: Initialized Connection 10: numQueues=9 initTimeMillis=3485 millis

7. To verify that messages are being streamed properly from the Axeda Platform, send DataItems from your connected Assets. You should see messages similar to the following. (Remember that each Asset you are testing must have an associated Machine Stream.)
2014-03-26 10:45:16.309 [pool-1-thread-1] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000021d,false,Wed Mar 26 14:45:16 EDT 2014,temp,43,analog
2014-03-26 10:45:21.137 [pool-1-thread-2] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000225,false,Wed Mar 26 14:45:21 EDT 2014,temp,43,analog
2014-03-26 10:45:26.134 [pool-1-thread-3] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-00000000022b,false,Wed Mar 26 14:45:26 EDT 2014,temp,44,analog
2014-03-26 10:45:31.135 [pool-1-thread-4] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-000000000231,false,Wed Mar 26 14:45:31 EDT 2014,temp,44,analog
2014-03-26 10:45:36.142 [pool-1-thread-5] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset1,799021d6-70a3-7c32-0000-000000000237,false,Wed Mar 26 14:45:36 EDT 2014,temp,45,analog
2014-03-26 10:45:41.146 [pool-1-thread-6] INFO  [LogMessageProcessor]: StreamedDataItem: Model,Asset2,799021d6-70a3-7c32-0000-00000000023d,false,Wed Mar 26 14:45:41 EDT 2014,temp,45,analog

Configuring a CustomMessageProcessor

By default, the project is configured to use a LogMessageProcessor that logs each streamed message it receives to standard out. The project takes a StreamedMessage in either XML or JSON format (as configured in the MachineStream SDKv2 object) and decodes it into a StreamedMessage Java object. LogMessageProcessor.java implements the MessageProcessor interface. Here is the MessageProcessor.java interface:

MessageProcessor.java

package com.axeda.tools.streams.processor;

import com.axeda.tools.streams.model.StreamedMessage;

/**
 * This interface defines the message processor callback that will be called for message processing.
 * Note that this method will be called by multiple threads concurrently.
 */
public interface MessageProcessor {
    /**
     * Process a machine stream message. Note that this method will be called by multiple threads concurrently.
     * The number of concurrent processing threads is defined in MachineStreamsConfig.getNumProcessingThreads().
     * If you add code here that significantly slows down message processing, there is the potential that
     * MessageListenerService threads will also block. When the MessageListenerService threads block,
     * messages will start to back up in the ActiveMQ or ASB message queues. If you are processing a large
     * number of messages, you may need to adjust your configuration parameters or optimize your
     * processMessage() code.
     * @param message machine streams message to process
     */
    public void processMessage(StreamedMessage message);
}
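For orientation before looking at the provided stub, here is a minimal sketch of what an implementation of this interface can look like. The class CountingMessageProcessor and its counter are illustrative assumptions, not part of the project; the sketch relies only on the MessageProcessor interface and StreamedMessage type shown above. Because processMessage() is invoked by multiple threads concurrently, any shared state must be thread-safe, hence the AtomicLong.

package com.axeda.tools.streams.processor;

import java.util.concurrent.atomic.AtomicLong;

import com.axeda.tools.streams.model.StreamedMessage;

// Hypothetical example processor: counts the messages it sees.
public class CountingMessageProcessor implements MessageProcessor {

    // AtomicLong because processMessage() is called by multiple threads;
    // a plain long field would lose updates under concurrent increments.
    private final AtomicLong messageCount = new AtomicLong();

    @Override
    public void processMessage(StreamedMessage message) {
        long count = messageCount.incrementAndGet();
        // Keep per-message work fast; slow code here backs up the broker queues.
        if (count % 1000 == 0) {
            System.out.println("Processed " + count + " streamed messages");
        }
    }
}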
An additional class named CustomMessageProcessor.java has been provided so that you can supply your own custom message processing logic:

CustomMessageProcessor.java

package com.axeda.tools.streams.processor;

import org.springframework.stereotype.Component;

import com.axeda.tools.streams.model.StreamedAlarm;
import com.axeda.tools.streams.model.StreamedDataItemMessage;
import com.axeda.tools.streams.model.StreamedMessage;
import com.axeda.tools.streams.model.StreamedMobileLocation;
import com.axeda.tools.streams.model.StreamedRegistrationMessage;

/**
 * This class was provided for customers to implement their own message processing business logic.
 * To use this class, change the @Autowired messageProcessor qualifier in
 * MessageProcessingServiceImpl.java to @Qualifier("customMessageProcessor").
 */
@Component("customMessageProcessor")
public class CustomMessageProcessor implements MessageProcessor {

    /**
     * (non-Javadoc)
     * @see com.axeda.tools.streams.processor.MessageProcessor#processMessage(com.axeda.tools.streams.model.StreamedMessage)
     *
     * Process a machine stream message. Note that this method will be called by multiple threads
     * concurrently. The number of concurrent processing threads is defined in
     * MachineStreamsConfig.getNumProcessingThreads(). If you add code here that significantly slows
     * down message processing, there is the potential that MessageListenerService threads will also
     * block. When the MessageListenerService threads block, messages will start to back up in the
     * ActiveMQ or Azure Service Bus message queues. If you are processing a large number of messages,
     * you may need to adjust your configuration parameters or optimize your processMessage() code.
     */
    @SuppressWarnings("unused")
    @Override
    public void processMessage(StreamedMessage message) {
        if (message instanceof StreamedDataItemMessage) {
            StreamedDataItemMessage dataItem = (StreamedDataItemMessage) message;
            // add your business logic here
        } else if (message instanceof StreamedAlarm) {
            StreamedAlarm alarm = (StreamedAlarm) message;
            // add your business logic here
        } else if (message instanceof StreamedMobileLocation) {
            StreamedMobileLocation mobileLocation = (StreamedMobileLocation) message;
            // add your business logic here
        } else if (message instanceof StreamedRegistrationMessage) {
            StreamedRegistrationMessage registration = (StreamedRegistrationMessage) message;
            // add your business logic here
        }
    }
}

The Axeda Platform Machine Streams feature currently supports four different message types:

StreamedDataItemMessage
StreamedAlarm
StreamedMobileLocation
StreamedRegistrationMessage

Add your message processing business logic for each of these message types. You may want to write each message to your favorite NoSQL database or to a flat file (a flat-file sketch follows at the end of this section).

Once you have completed your changes to CustomMessageProcessor, you must make one change in the MessageProcessingServiceImpl.java class to use this Spring bean:

Uncomment this line:
// @Qualifier("customMessageProcessor")

Comment out this line:
@Qualifier("logMessageProcessor")

The following code snippet shows what your changes should look like when you are finished:

MessageProcessingServiceImpl.java

@Component("messageProcessingService")
public class MessageProcessingServiceImpl implements MessageProcessingService {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageProcessingServiceImpl.class);

    @Autowired
    private MessageDecoder messageDecoder;

    @Autowired
    // If you want to use the CustomMessageProcessor instead of the default
    // LogMessageProcessor, then change this Qualifier to:
    @Qualifier("customMessageProcessor")
    //@Qualifier("logMessageProcessor")
    private MessageProcessor messageProcessor;

    private ExecutorService executorService;
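As a sketch of the flat-file option mentioned above: the helper below appends each streamed data item to a log file. The class name FlatFileWriter and the output path are illustrative assumptions, not part of the project, and dataItem.toString() is used deliberately so that nothing is assumed about the StreamedDataItemMessage accessors. Because processMessage() runs on multiple threads, file access is serialized with a static lock.

package com.axeda.tools.streams.processor;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

import com.axeda.tools.streams.model.StreamedDataItemMessage;

// Hypothetical helper: appends streamed data items to a flat file.
public final class FlatFileWriter {

    // Assumed output path; adjust for your environment.
    private static final String OUT_FILE = "machine-streams-messages.log";

    // Static lock so all processing threads serialize on the same file.
    private static final Object LOCK = new Object();

    private FlatFileWriter() {
        // utility class; no instances
    }

    public static void append(StreamedDataItemMessage dataItem) {
        synchronized (LOCK) {
            try (BufferedWriter out = new BufferedWriter(new FileWriter(OUT_FILE, true))) {
                // toString() avoids assuming accessor names on StreamedDataItemMessage.
                out.write(dataItem.toString());
                out.newLine();
            } catch (IOException e) {
                // In production, log and decide whether to rethrow; silently
                // swallowing the exception would hide data loss.
                throw new RuntimeException("Failed to append streamed data item", e);
            }
        }
    }
}

With a helper like this, the StreamedDataItemMessage branch of CustomMessageProcessor.processMessage() would simply call FlatFileWriter.append(dataItem). Note that opening the file on every message keeps the sketch simple but is slow; a long-lived writer would be more appropriate under heavy load, at the cost of managing its lifecycle.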
Complete information about installing and using the Axeda Machine Streams Data Relay Project is available here. This page provides the files for the Axeda Machine Streams Data Relay Project. From the links below, select the files you want to use:

For Linux users
ready-to-run (bin) files: download the machine-streams-data-relay-1.0.3-bin.tar_.gz archive
full Maven project files: download the machine-streams-data-relay-1.0.3-project.tar_.gz archive

For Windows users
ready-to-run (bin) files: download the machine-streams-data-relay-1.0.3-bin.zip archive
full Maven project files: download the machine-streams-data-relay-1.0.3-project.zip archive
This Zip file contains the Axeda patch (axeda-jms-plugin-<version>-machine-streams) required to install and configure an Apache ActiveMQ server for use with the Axeda Machine Streams service (supported for Axeda Platform v6.8 and later). Note: Information about the Axeda Machine Streams feature is provided in the Axeda Features Guide, available from the Axeda Support site, http://help.axeda.com. This patch overlay must be applied to the ActiveMQ v5.8.0 server installed as the Axeda Machine Streams endpoint broker, so that Axeda Platform can send streamed content to that server endpoint. Complete instructions for installing and configuring an Apache ActiveMQ server for Axeda Machine Streams are provided in the reference Axeda® Machine Streams: A Guide to Setting Up Broker Endpoints. This guide is available with all Axeda product documentation from the PTC Support site.