IoT Tips

One of the killer features of the Axeda Platform is the Axeda Console, a browser-based online portal where developers and business users alike can browse information in an out-of-the-box graphical user interface. The Axeda Console is functional, re-brandable and extensible, and can easily form the foundation for a customized connected product experience. Let's take a tour of the Axeda Console and explore what it means to have a full-featured connected app right at the start of your development.

What this tutorial covers
This tutorial discusses the landscape of the online, browser-based suite of tools accessible to Axeda customers. It does not do a deep dive into each of the available applications, but rather serves as an introduction to the user interface. Sections of the Axeda Applications Console that are discussed:
Landing Page (Home)
User Preferences
Asset Dashboard
Axeda Help
Note: This article features screenshots from Axeda 6.5, which is the current release as of July 1, 2013. In prior versions the Axeda Applications Console has also been referred to as ServiceLink. Stay tuned for Axeda 6.6!

What can I do from here?
From the landing page of the Axeda Console, you can access recent assets in the right sidebar or search for assets in the left sidebar. Each of the links in the main Welcome text corresponds to a main tab.
Troubleshoot, Monitor, and Service Assets (Service tab) - an overview of the status of assets, filterable by a search on fields such as serial number, model, organization, etc.
Access and Control Remote Assets (Access tab) - if you are familiar with Windows Remote Desktop, this will look familiar. It allows you to log into and control an asset as if you were typing on a keyboard attached directly to it, without having to be on the same network or in the same location. This is particularly useful when the asset is behind a firewall or other controlled network.
Install and Deploy Software Updates (Software tab) - this tool provides the ability to create, view, configure, delete and deploy software packages (such as a file that contains an update) to assets.
View Usage Data and Asset Charts (Usage tab) - you can use the Axeda Usage application to track and analyze asset usage.
Add New Assets, Organizations and Models (Configuration tab) - find tools here for creating, updating and deleting domain objects.
Administer Users, Groups and Assets (Administration tab) - manage users, groups, auditing, and system-setup tasks.
The remaining tabs that are not linked from the Home page are either custom tabs or less frequently used tabs (depending on use case). The custom tabs are examples of custom applications that are not distributed out of the box with an Axeda instance.
Wireless (custom tab) - an integration with the Jasper API that allows the user to monitor SIMs activated in their assets
Maintenance - track information about the operation of machines against service cycles
Case - manage the resolution of asset issues
Report (requires an additional license) - provides a suite of standard reports; custom reports may also be added
Dashboard - allows you to create a landing page that displays the information that interests you
Simulator (custom tab) - an app that allows you to set data items, alarms, mobile locations, and geofences on an asset
For more details on Custom Tabs and the Extended UI, please take a look at [Extending the UI - Custom Tabs and Modules] (coming soon).

User Preferences
Each user in an Axeda instance has a certain set of privileges and visibility, which determine what actions she can take and what information she can see. A user also has control over certain aspects of her own use of the Axeda Console, which are configurable from the yellow Preferences link located in the top right corner of the page. This opens the User Preferences page. The User Preferences link allows you to set defaults for your user only. From here you can change the following settings:
User Attributes (email and password)
Locale - change the locale, which also sets the display language
Time Zone - change the time zone as displayed in the Applications Console (note that this does NOT affect an individual asset's time zone; asset time zone is reported by the agent)
Notification Styles - specify which contact methods are appropriate for you and for what severity of triggered alert
Default Application - set which tab should open when you log into the Axeda Console
Items Per Page (Long Table) - for longer listings of items, how many rows should be displayed
Items Per Page (Short Table) - for shorter listings of items, how many rows should be displayed

Asset Dashboard
As the asset is the center of the Axeda universe, the Asset Dashboard could be considered the central feature of the Axeda Console. You can open the dashboard for any particular asset by clicking it in the Service tab or in the Recent Assets shortcuts. You can also add modules within the Asset Dashboard that are either a custom application or the output of an Extended UI Module type custom object. From the Asset Dashboard you have an at-a-glance view of the asset's current data, alarms, uploaded files, and location, to name a few. The Asset Dashboard is built for viewing information about the asset. To perform create/read/update/delete functions on the asset, you will need to search using the Configuration tool instead. To view a list of models or any domain object available for configuration, click the drop-down arrow next to the View sub-tab and select the object name. Once you have the list of models displayed, click the Preferences link on the model to access a Model Preferences Dashboard that allows you to configure the model image, the modules displayed, and other features of the Asset Dashboard.

Axeda Help
As part of learning more about the Axeda Console, make use of the documentation available to you by clicking the Help link in the top right corner of the page. This opens a pop-up which contains information about the page you have open. It allows you to do a deep dive into any aspect of the Axeda Console, and includes search and a browsable index of Axeda topics. Make sure to research topics in the Help section while troubleshooting your assets and applications.
View full tip
Setting up the ThingWorx Server: RemoteThing, ApplicationKey, and TunnelSubsystem
Tunneling from the ThingWorx platform to an Edge device can be set up with a few preparation steps on the platform side:
1. Create an ApplicationKey entity on the ThingWorx server so that the EMS or SDK you are using can authenticate with the platform.
2. Create a RemoteThingWithTunnels or RemoteThingWithTunnelsAndFileTransfer Thing for the remote device to bind to. Either ThingTemplate will work; the only difference is whether you want to use the native file transfer capabilities provided by ThingWorx.
3. In the newly created Thing, on the General Information page, click the drop-down menu next to Enable Tunneling and select Override - Enabled.
4. Go to the Configuration section under Entity Information on the right and click the Add My Tunnel button.
The Tunnel Name is used to identify which tunnel to use in the RemoteAccessWidget you will bind to the tunnel.
The Host will remain 127.0.0.1 because this is from the perspective of where the VNC server is relative to the remote device; in my example they are on the same device.
The Port value should be the port that the VNC server is listening on. This is typically 5900, but my VNC server is running on port 5901 for this example.
The App URI can be cleared out because we do not need to reference that file. Here is a link to a further explanation of what the App URI is for: ThingWorx Tunneling App URI's.
The # of Connections and Protocol can remain at their default values unless you have a reason to change them.
5. Navigate back to Home and look for the TunnelSubsystem on the Subsystems page.
6. Click on the TunnelSubsystem, then click the Configuration option on the left.
7. Modify the "Public host name used for tunnels" and "Public port used for tunnels" fields to match the host and port of your ThingWorx server.
8. Save and close the TunnelSubsystem.

Configuring the Edge Device
For this example I'm going to keep it simple and set up an EMS (Edge MicroServer) instead of an SDK. This EMS will be on a totally separate device (an Ubuntu machine), while my ThingWorx server is on my local machine.
1. Download the latest EMS onto the separate machine.
2. Configure the config.json file settings to match the server's host, port, and application key. The tunnel block will need to be added as well; a rough sketch of such a config.json appears just before the Ubuntu VNC setup steps below.
3. Configure the config.lua file to match the name of the RemoteThingWithTunnels we created earlier; in this instance the name of my RemoteThing is EdgeThing.
4. Run the EMS and LSR (Lua Script Resource). The LSR EdgeThing will bind automatically to the RemoteThingWithTunnels we created earlier.
5. To verify there is a successful connection between the platform and the EMS, go to the EdgeThing's Properties page and check whether the isConnected property is currently set to true. If it is not, please refer to this Help Center section for further troubleshooting; there is a list of error codes there.

Installing a VNC Viewer and Server
The next series of steps covers configuring a VNC server on the EMS machine and a VNC client on the computer you are using to connect to the server. For this example I will be using the packages tightvncserver, xfce4, xfce4-goodies, and vnc4server on my Ubuntu machine that hosts the EMS, and I will be using the TightVNC viewer available for download here.
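Before moving on to the VNC setup, here is a rough sketch of the kind of config.json referenced in the edge configuration steps above. Only the requirements stated above come from this article (point the EMS at your server's host, port and application key, and include a tunnel block); every field name, the block layout and the tunnel sub-settings shown here are assumptions that vary between EMS releases, so verify them against the Edge MicroServer documentation for your version.

{
    "ws_servers": [ { "host": "MyThingWorxServer.example.com", "port": 443 } ],
    "appKey": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "logger": { "level": "INFO" },
    "certificates": { "validate": false, "allow_self_signed": true },
    "ws_connection": { "encryption": "ssl" },
    "tunnel": { "buffer_size": 8192, "max_concurrent": 4, "idle_timeout": 300000 }
}

The host, port, application key and all numeric values are placeholders; the certificates block mirrors the self-signed-certificate shortcut used later in this example and should not be carried into production.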
The following steps describe how to configure the Ubuntu machine so that it will be ready to accept VNC requests (note that I am specifically using a 64-bit Ubuntu 14.04 LTS OS):
1. Run the following commands:
sudo apt-get update
sudo apt-get install xfce4 xfce4-goodies tightvncserver
2. Run vncserver and you will be prompted to set up a password. I used "password" to keep it simple, but you will want to use something relatively secure.
3. We will want to kill this instance right away so we can proceed with further configuration:
vncserver -kill :1
4. Make a backup of the xstartup file in case things go awry:
mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
5. Create a new xstartup file to proceed with the setup:
nano ~/.vnc/xstartup
6. Insert the following commands into the file; they will be executed every time the server starts or is restarted:
#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
The first command in the file tells the VNC GUI framework to reference the .Xresources file, which is where a user can change VNC settings. The second command launches XFCE, the graphical software.
7. Ensure that the xstartup file has executable privileges:
sudo chmod +x ~/.vnc/xstartup
8. Start the server back up with vncserver.
For the machine that is being used to view the Mashup, install the TightVNC viewer from the link mentioned above. You should double-click the tightvnc-jviewer.jar file to run the viewer application now so it is up and ready for the Establishing a Tunnel section.

Creating the RemoteAccess Mashup
This next portion of the tutorial covers creating the Mashup that will be used by any user who wants to remote into the Edge device.
1. Go to Composer Home and open the Mashup menu option on the left side of the screen.
2. Add a new Static or Dynamic Mashup.
3. Drag-and-drop a RemoteAccessWidget onto the Mashup.
4. Click on the RemoteAccessWidget and modify the RemoteThingName, TunnelName, and AcceptSelfSignedCertificates properties for the connection. The RemoteThingName is the name of the Edge Thing the remote device is bound to. The TunnelName is the name of the tunnel we added to the Edge Thing in the Configuration screen. The AcceptSelfSignedCertificates option is only used when using an SSL connection with self-signed certificates.
5. View the Mashup; the RemoteAccess Widget should have a green plus sign on it if the connection from the EMS to the platform is up and connected.

Establishing a Tunnel
The following section is the last part of the process, where we actually establish a tunnel between the client, platform, and remote device.
1. Open the Mashup with the RemoteAccess Widget if you closed it.
2. Click on the RemoteAccess Widget to begin the wsadapter.jnlp download.
3. Once that has completed, click on the wsadapter.jnlp file to run it. Keep in mind that there is a default 90-second timeout defined in the TunnelSubsystem that will render the wsadapter.jnlp file useless; you will have to download a new one if the connection is not established within that timeframe.
4. If you receive an error message saying that thingworx-tunnel-launcher.jar could not be found at that address, you may need to reconfigure the TunnelSubsystem configuration options for your server.
5. If you instead receive a Java security error message, you will need to modify the security settings in your Java options.
This is done by opening Configure Java, navigating to the Security tab, and then adding your ThingWorx server's IP and port to the site list via the Edit Site List... button. Upon successfully finding the thingworx-tunnel-launcher.jar file, you should receive a Security Warning message; click the Run button and check "I accept the risk and want to run this application". A pop-up will then be seen letting you know the tunnel is now open for TightVNC to connect through. Do not click OK; instead, proceed to the next step. Clicking OK will close the tunnel if you have not yet connected to the EMS via the VNC viewer. Open tightvnc-jviewer.jar and type in the host and port that the VNC connection should be established to: localhost and port 16345 are used because we have already established a connection to the EMS and it is listening for a VNC connection on port 16345, per the ThingWorx pop-up we just saw. Click Connect and a new window should appear showing the GUI environment of your Ubuntu server.
View full tip
I had just finished writing an integration test that needed to update a Thing on a ThingWorx server using only classes in the Java JDK, with as few dependencies as possible, and before I moved on I thought I would blog about this example since it makes a great starting point for posting data to ThingWorx. ThingWorx has a Java SDK, available from the ThingWorx IoT Marketplace, which uses the HTTP WebSocket protocol and offers great performance and far more capabilities than this example. If you are looking, however, for the simplest, minimum-dependency example of delivering data to ThingWorx, this is it. This example uses the REST interface to your ThingWorx server. It requires only classes already found in your JDK (JDK 7) and optionally includes the JSON Simple jar. References to this jar can be removed if you want to create your property update JSON object yourself. Below is the Java class.

package com.thingworx.rest;

import org.json.simple.JSONObject;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

/**
 * Author: bill.reichardt@thingworx.com
 * Date: 4/22/16
 */
public class SimpleThingworxRestPropertyUpdater {

    static {
        // Disable all SSL security checking (not for production!)
        try {
            disableSSLCertificateChecking();
        } catch (Exception e) {
            e.printStackTrace();
        }
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) { return true; }
        });
    }

    public static void main(String[] args) {
        // like http://localhost:8080 or https://localhost:443
        String serverUrl = args[0];
        // Generate one of these from the Composer under Application Keys
        String appKey = args[1];
        String thingName = args[2];

        // You don't have to use the Simple JSON class, just pass a JSON string to restUpdateProperties()
        // This Thing has three properties, a (NUMBER), b (STRING) and c (BOOLEAN)
        JSONObject properties = new JSONObject();
        properties.put("a", new Integer(100));
        properties.put("b", "My New String Value");
        properties.put("c", true);
        String payload = properties.toJSONString();

        try {
            int response = restUpdateProperties(serverUrl, appKey, thingName, payload);
            System.out.println("Response Status=" + response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static int restUpdateProperties(String serverUrl, String appKey, String thingName, String payload) throws IOException {
        String httpUrlString = serverUrl + "/Thingworx/Things/" + thingName + "/Properties/*";
        System.out.println("Performing HTTP PUT request to " + httpUrlString);
        System.out.println("Payload is " + payload);
        URL url = new URL(httpUrlString);
        HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
        httpURLConnection.setUseCaches(false);
        httpURLConnection.setDoOutput(true);
        httpURLConnection.setRequestMethod("PUT");
        httpURLConnection.setRequestProperty("Content-Type", "application/json");
        httpURLConnection.setRequestProperty("appKey", appKey);
        OutputStreamWriter out = new OutputStreamWriter(httpURLConnection.getOutputStream());
        out.write(payload);
        out.close();
        httpURLConnection.getInputStream();
        return httpURLConnection.getResponseCode();
    }

    /**
     * Disables the SSL certificate checking for new instances of {@link HttpsURLConnection}. This has been created to
     * aid testing on a local box, not for use on production.
     */
    private static void disableSSLCertificateChecking() throws KeyManagementException, NoSuchAlgorithmException {
        TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() {
                return null;
            }
            public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
            public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
        } };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAllCerts, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
    }
}

When run, it prints out the request target, the request body in JSON, and the response status:

Performing HTTP PUT request to https://localhost:443/Thingworx/Things/SimpleThing/Properties/*
Payload is {"a":100,"b":"My New String Value","c":true}
Response Status=200

I have attached the full Gradle project that builds and runs this example class as a zip file to this article. When you download it, if you have Java JDK 7 already installed and on your path, you can run the example with the command:
On Linux or OSX: ./gradlew simplerest
On Windows: gradlew.bat simplerest
Don't forget to edit the build.gradle file to use your server's URL and application key. You will also find the Thing used in this example in the entities folder of this project; you can import it on your server to test it out. It is a Thing that is based on GenericThing and has three properties, a (NUMBER), b (STRING) and c (BOOLEAN).
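If you would rather exercise the same REST endpoint without compiling any Java, a rough equivalent can be sketched in a few lines of Node.js. The URL pattern, the appKey header and the JSON payload below mirror the Java example above; the server host, application key and Thing name are placeholders you must replace, and this sketch skips the certificate-disabling shortcut, so it assumes a trusted certificate on the server.

// Minimal Node.js sketch of the same property update (assumes a valid TLS certificate on the server).
const https = require('https');

const host = 'localhost';                  // your ThingWorx host (placeholder)
const appKey = 'YOUR-APPLICATION-KEY';     // generated in Composer under Application Keys (placeholder)
const thingName = 'SimpleThing';           // Thing with properties a, b and c

const payload = JSON.stringify({ a: 100, b: 'My New String Value', c: true });

const req = https.request({
  host: host,
  port: 443,
  path: '/Thingworx/Things/' + encodeURIComponent(thingName) + '/Properties/*',
  method: 'PUT',
  // add rejectUnauthorized: false here only for local testing with self-signed certificates
  headers: {
    'Content-Type': 'application/json',
    'appKey': appKey,
    'Content-Length': Buffer.byteLength(payload)
  }
}, function (res) {
  console.log('Response Status=' + res.statusCode); // 200 indicates the properties were updated
});

req.on('error', function (e) { console.error(e); });
req.write(payload);
req.end();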
View full tip
Integrating LDAP authentication into ThingWorx is fairly simple. Since release 5.0 and later, the out-of-the-box (OOTB) ThingWorx authenticators already include the necessary code to validate a user's credentials against an LDAP server. These authenticators look to see if an LDAP server is connected every time a user attempts a login, and then further check to see if this user exists in the LDAP server. If the username does exist in LDAP, then ThingWorx will check if the password entered matches the password stored within LDAP. If the password entered does not match the password stored in LDAP, then ThingWorx will next check if the password matches the one stored in ThingWorx for that user. So in order for a user to log in to ThingWorx, they must have a user Thing created for them within ThingWorx Composer (this can be done programmatically, see below), and a valid password which matches either an LDAP account password or the password as it is set for that user on the Thing in ThingWorx Composer.

The first thing a developer needs to do to integrate LDAP is configure their ThingWorx instance so that it can find the LDAP server and access its contents. This is done by importing an XML file which will allow the developer to see a Thing that comes with the ThingWorx platform (see the attached file "directoryServices.xml"). The Thing that needs configuring is called ApacheDS3 and it is a DirectoryServices Thing.

The largest task for a developer integrating LDAP into ThingWorx is importing their LDAP users into ThingWorx. Getting the LDAP usernames out of the LDAP server will vary depending on which distribution of LDAP is in use. However, once the developer acquires this information, using it to create users in ThingWorx is simple. The developer will need to create a Thing Service which creates a dummy password and assigns the LDAP username in the parameters. Then they can pass the parameters into the CreateUser service of the "EntityServices" resource:

var params = {
    password: "SOMETHING_COMPLICATED", // dummy password does not matter, but you don't want an accidental match, so make it something very complicated and standard for your company's LDAP users
    name: ldap_username, // retrieve from LDAP
    description: "This user was created as part of LDAP import", // can be whatever you'd like
    tags: undefined
};
Resources["EntityServices"].CreateUser(params); // no return

Any users created in this way will be redirected to Squeal if there is no home mashup assigned, so you will have to add an additional bit of code which assigns the home mashup to each user, looping through something like this:

var params = {
    name: "dashboard" // replace this with the String name of the dashboard (must exist)
};
Users[username].SetHomeMashup(params);

For full steps on integrating LDAP and ThingWorx, including instructions on how to set up an ApacheDS test LDAP server, see the ThingWorx support article titled "Integrate LDAP Authentication and Import LDAP User Directory into ThingWorx" (reference document CS221840).
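Putting the two snippets above together, a service that walks a list of usernames pulled from LDAP might look roughly like the sketch below. The ldapUsernames array and the "dashboard" mashup name are placeholders; how you actually extract the usernames from your LDAP server is outside the scope of this example.

// Sketch of a bulk import: create each LDAP user with a dummy password, then assign a home mashup.
var ldapUsernames = ["jsmith", "mjones"]; // placeholder - populate from your LDAP export

for (var i = 0; i < ldapUsernames.length; i++) {
    var username = ldapUsernames[i];

    // Create the user; the dummy password should never match a real login
    Resources["EntityServices"].CreateUser({
        name: username,
        password: "SOMETHING_COMPLICATED",
        description: "This user was created as part of LDAP import",
        tags: undefined
    });

    // Assign a home mashup so the new user is not redirected to Squeal
    Users[username].SetHomeMashup({
        name: "dashboard" // must be an existing mashup
    });
}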
View full tip
Sometimes you need the values from different ThingTemplate members in ONE grid. Therefore it would be great if you could join two "GetImplementedThingsWithData" results into a common one. Here is a script that works generally, as long as you don't mess with data types on same-named columns. I'm very interested if someone can find a much easier solution. The Union function was the only one I found suited for the task, but it needs preparation of the InfoTables upfront.

Input:
Table1: Infotable
Table2: Infotable
Output: Infotable

Here is the snippet:

// Define params for an Infotable to hold column names
var params = {
    infoTableName: "field" /* STRING */
};

// Define column 1
var newField = new Object();
newField.name = "field";
newField.baseType = 'STRING';

// Two 1-column Infotables to store the field definitions
var field1 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field1.AddField(newField);
var field2 = Resources["InfoTableFunctions"].CreateInfoTable(params);
field2.AddField(newField);

// Define the cell to add to the Infotables
var myField = new Object();
myField.field = "";
myField.baseType = "STRING";

// Loop through Table1
var dataShapeFields = Table1.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field1 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field1.AddRow(myField);
}

// Loop through Table2
var dataShapeFields = Table2.dataShape.fields;
for (var fieldName in dataShapeFields) {
    logger.debug('field2 name is ' + dataShapeFields[fieldName].name);
    myField.field = dataShapeFields[fieldName].name;
    field2.AddRow(myField);
}

// Using inner join functionality to keep only the column names that exist in both
var params = {
    columns1: "field" /* STRING */,
    columns2: "field" /* STRING */,
    joinType: "INNER" /* STRING */,
    t1: field1 /* INFOTABLE */,
    t2: field2 /* INFOTABLE */,
    joinColumns1: "field" /* STRING */,
    joinColumns2: "field" /* STRING */
};
var commonFields = Resources["InfoTableFunctions"].Intersect(params);

// Loop over the result to build a column list string
var commonColumns = "";
var tableLength = commonFields.rows.length;
for (var x = 0; x < tableLength; x++) {
    var row = commonFields.rows[x];
    commonColumns = commonColumns + row.field + ",";
}

// Reduce Table1 to match only the common columns
var params = {
    t: Table1 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result1 = Resources["InfoTableFunctions"].Distinct(params);

// Reduce Table2 to match only the common columns
var params = {
    t: Table2 /* INFOTABLE */,
    columns: commonColumns /* STRING */
};
var result2 = Resources["InfoTableFunctions"].Distinct(params);

// At the END, JOIN the tables together (does not work if the columns are different)
var params = {
    t1: result1 /* INFOTABLE */,
    t2: result2 /* INFOTABLE */
};
var result = Resources["InfoTableFunctions"].Union(params);
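As a usage sketch, the two input tables can come straight from the templates whose Things you want to combine; "TemplateA" and "TemplateB" below are placeholder names. The resulting InfoTable from the snippet above can then be bound to a single Grid widget.

// Collect the data for the join: one InfoTable per ThingTemplate (template names are placeholders).
var Table1 = ThingTemplates["TemplateA"].GetImplementedThingsWithData();
var Table2 = ThingTemplates["TemplateB"].GetImplementedThingsWithData();

// ...run the snippet above with Table1 and Table2 as its inputs...
// "result" then holds one table containing only the columns common to both templates.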
View full tip
Remote Timeouts

Some notes on the format used below:
Units: unit of measure for the timeout or limit (seconds, milliseconds, cycles, etc.)
Description: describes the timeout
Outcome: describes the default behavior if the timeout or limit is reached
Related timeouts: lists other timeouts that are closely related to the timeout in question, meaning they should be configured together because one timeout will affect another

Notes:
This guide is heavily focused on the C SDK; certain timeouts may have different names in other SDKs or agents.
There are no descriptions of any imposed delays or timeouts related to thread pools on the ThingWorx Platform.
Local timeouts (not related to remote requests) were intentionally not included.
There are far too many applications to provide detail about every situation introduced by every timeout, but this should provide a good starting point for custom timeout configuration.

Edge

socket_read_timeout
Units: milliseconds
Description: used to free the socket mutex, allowing another service to read on the socket. Increasing this value is beneficial on low-resource systems, but could lead to slower performance.
Outcome: socket read retry
Related timeouts: ssl_read_timeout

ssl_read_timeout
Units: milliseconds
Description: if a partial record is read but not saved, it is possible to remove part of an SSL record that would have otherwise been essential in decrypting the entire record. This timeout is used to prevent this situation; it allows a function to re-acquire the socket mutex in the event that a partial SSL record was captured but the socket_read_timeout was reached.
Outcome: websocket read retry
Related timeouts: socket_read_timeout

frame_read_timeout
Units: milliseconds
Description: essentially an idle socket timeout. If an edge device requests a message from the ThingWorx Platform, and nothing is read after the request for the time value specified in this property (not even request headers/SSL header), then the websocket is assumed to be experiencing an error and the connection is closed.
Outcome: websocket disconnect
Related timeouts: message_timeout

message_timeout
Units: milliseconds
Description: the maximum overall time that the edge will wait for a full response during a particular request to the ThingWorx Platform. This timeout can be overridden by the frame read timeout if there is no activity on the socket for a given expected response period.
Outcome: websocket disconnect
Related timeouts: frame_read_timeout, pingpong_timeout, Message Response Timeout (WSCommunication subsystem)

pingpong_timeout
Units: milliseconds
Description: the ping and pong messages are the heartbeat of the AlwaysOn protocol. If a pong is not received within <pingpong_timeout> ms after the ping is sent, the websocket will disconnect even if there are successful messages during the ping/pong period. If a pong is received DURING the read loop of another service, the pongs will be routed to the pong manager and recorded to prevent a pong timeout.
Outcome: websocket disconnect
Related timeouts: message_timeout

connect_timeout
Units: milliseconds
Description: when attempting to connect to the ThingWorx Platform, the connection and authentication will wait on an idle socket for the specified time before closing the connection and retrying.
Outcome: close socket and attempt to reconnect
Related timeouts: connect_retries, Auth Message Timeout (WSCommunication subsystem)

connect_retries
Units: integer (number of tries, not actually a time measurement)
Description: not actually coupled to an explicit time value; sets the maximum number of reconnect timeouts that the edge will tolerate before giving up. In certain SDKs, -1 corresponds to an infinite number of retries.
Outcome: stop attempting to reconnect
Related timeouts: connect_timeout, Auth Message Timeout (WSCommunication subsystem)

file_xfer_timeout
Units: milliseconds
Description: if a file transfer to an edge device becomes idle for too long, this timeout will trigger an error and free the memory associated with the file in the program (this does not delete files on disk, in case they are to be resumed later).
Outcome: the file transfer is stopped, an error is reported, and associated file transfer memory objects are freed
Related timeouts: File Transfer Idle Timeout, Copy (Service) Timeout

ThingWorx Platform Related Services
Units: milliseconds
Description: some timeouts are passed in as a parameter to a service on an edge device (SendFile, twApi_InvokeService, etc.). These timeouts act similarly to the WSCommunication message timeout, but they are driven from an edge device instead of the ThingWorx Platform.
Outcome: timeout error reported
Related timeouts: message_timeout, frame_read_timeout

Remote Thing (ThingWorx Platform)

Service Timeout
Units: seconds
Description: these timeouts are set explicitly in Composer when editing remote services. These values will override the default message timeout that is set in the WSCommunication subsystem.
Outcome: service execution error
Related timeouts: Property Timeout

Property Timeout
Units: seconds
Description: these timeouts are set explicitly in Composer when editing remote properties. These values will override the default message timeout that is set in the WSCommunication subsystem.
Outcome: property get/set error
Related timeouts: Service Timeout

ThingWorx Platform Subsystems

WSCommunicationSubsystem

Idle Connection Timeout
Units: seconds
Description: if a particular websocket connection has not received or sent a message in the specified time, the connection is assumed to be invalid. The ThingWorx Platform will unbind any related Things, then disconnect the websocket. This should be set higher than the pingpong_timeout value.
Outcome: websocket disconnect
Related timeouts: pingpong_timeout

Auth Message Timeout
Units: seconds
Description: when a websocket first connects (before binding), the connection will be allowed to stay open for the specified time interval without authenticating. Increasing this value will accommodate high-latency devices, but the ThingWorx Platform will be more vulnerable to saturating its own connections with unauthorized websockets.
Outcome: websocket disconnect
Related timeouts: connect_timeout

Message Response Timeout
Units: seconds
Description: the maximum amount of time that is allowed during an edge request before claiming the service result as a failure.
Outcome: property get/set or service execution error
Related timeouts: message_timeout (edge)

TunnelSubsystem

Startup Tunnel Timeout
Units: seconds
Description: once a remote tunnel is opened, it will be given the specified time interval to establish an end-to-end connection before closing. For example, an SSH tunnel is opened but no client is attached to the endpoint.
Outcome: close tunnel, report error
Related timeouts: n/a

Idle Tunnel Timeout
Units: seconds
Description: once a tunnel and an end-to-end connection are established, this monitors the activity on the socket and reports a timeout if there is no read/write activity for the specified time interval.
Outcome: close tunnel, report error
Related timeouts: n/a

FileTransferSubsystem

File Transfer Idle Timeout
Units: seconds
Description: when the File Transfer Subsystem Copy service is executed, a series of secondary remote services are executed to complete the transfer. The File Transfer Idle Timeout monitors the activity of each secondary service and stops the entire Copy service if any one secondary service records no activity for the specified time interval.
Outcome: transfer stopped, error reported
Related timeouts: Copy (Service) Timeout

Copy (Service) Timeout
Units: seconds
Description: the number of seconds that the File Transfer Subsystem waits for the completion of a file transfer. This is set every time a transfer is executed.
Outcome: transfer stopped, error reported
Related timeouts: File Transfer Idle Timeout
View full tip
To set up Single Sign-On with Windchill, we can just follow the steps in the Windchill extension guide. However, there is a significant problem with using WebSocket connections from devices (the EMS or the Edge SDKs), since the Apache server in front of Windchill blocks the "ws" and "wss" protocols, much like a proxy server would. There might be a couple of ways to avoid this issue, but I suggest changing the filter mappings for the SSO filter. The Windchill extension guide says to set filters for all incoming ThingWorx URLs by using the "/*" filter mapping. Instead, use the settings below in the "web.xml" of the ThingWorx server to avoid the problem stated above. It looks quite long and complicated, but it basically reproduces the filter mappings already defined by default for the "AuthenticationFilter", except the WebSocket-related URLs.

<!-- Windchill Extension SSO Start-->
<filter>
<filter-name>IdentityProviderAuthenticationFilter</filter-name>
<filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderAuthenticationFilter</filter-class>
<init-param>
<param-name>idpLoginUrl</param-name>
<param-value>http(s)://<SERVERHOSTURL>/Windchill/wtcore/jsp/genIdKey.jsp</param-value>
</init-param>
</filter>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/extensions/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-authenticate/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-login/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-confirm-creds/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-change-password/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ThingworxMain.html</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ThingworxMain.html/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Server/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ApplicationKeys/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Networks/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Dashboards/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/DirectoryServices/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Authenticators/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/PersistenceProviderPackages/*</url-pattern> </filter-mapping>
<filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name>
<url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <filter> <filter-name>IdentityProviderKeyValidationFilter</filter-name> <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderKeyValidationFilter</filter-class> <init-param> <param-name>keyValidationUrl</param-name> <param-value>http(s)://<SERVERHOSTURL>/Windchill/login/validateIdKey.jsp</param-value> </init-param> </filter> <filter-mapping>   <filter-name>IdentityProviderKeyValidationFilter</filter-name>   <url-pattern>/extensions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-authenticate/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-login/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-confirm-creds/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-change-password/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Server/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ApplicationKeys/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Networks/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Dashboards/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DirectoryServices/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Authenticators/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviderPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <!-- Windchill Extension SSO End-->
View full tip
Not as simple a question as it sounds. There are more options than some might think, and choosing the right one can be the difference between a well-performing application and one that struggles as it scales up. There are options both internal and external to the ThingWorx platform, and each has its own use cases and cost considerations.

Internal to ThingWorx there are three options for the persistence provider: PostgreSQL, Microsoft SQL Server (Azure SQL for PTC-hosted systems), and InfluxDB. PostgreSQL can be used for storing both the ThingWorx model structure and data, and is an open source technology, meaning no additional cost. SQL Server allows the same model and data storage but has licensing costs associated with it. Both perform well up to an estimated 500 GB of data storage (a rough estimate that depends on the use case). For very high-volume data, InfluxDB is the choice; it performs well for large data sets.

External to ThingWorx you can use virtually any data storage technology that provides a JDBC connector, or even one with a driver that can be used to create a ThingWorx Extension via our SDK or edge SDKs. The platform knows how to use JDBC drivers, so this can easily be used to connect to relational data stores like Oracle.

The first real question to ask when choosing where to store data is: what does my data look like? Many systems are adapted or migrated from legacy systems which may include relational data; others simply have this structure by necessity. If the data will require complex SQL to retrieve (joins, LIKE clauses, cursors, temp tables, etc.), then store it in a true relational database. If it is simple historical data, time series data, or data that does not require compounding or recursive calculation to be useful, then keep it in platform data storage.

The second question to ask is: how much data will I be storing? This adds a bit of complexity to where data is best stored. There is no hard limit on the number of records in any data structure; however, ThingWorx platform storage is optimized to store and retrieve time series data using the ValueStream and Stream types built into the platform. This is the most common IoT data structure, and in this case you can refer back to the previous information when choosing the correct backend storage. DataTables can be used for small data sets (around 100,000 records or fewer); platform storage works well here because DataTables are intended for largely static data structures. Once a DataTable grows larger than this, retrieval performance degrades quickly. This is because ThingWorx currently does a full scan of the data in this type of structure when querying, since all of the logic for the query or filter is executed on the platform, not on the database (this will likely change in a future version). Small amounts of data, on the other hand, can be quickly loaded and parsed in memory. NOTE (Neo4j specific): in DataTables, if you add an index to a column, the index is used when calling "FindDataTableEntries" but not when using "QueryDataTableEntries".

Streams and ValueStreams, however, are optimized for time series data. For these structures ThingWorx has built-in datetime filters that allow very fast retrieval of data based on a date range. When the number of records returned after the date range is applied is still very large (100,000 - 200,000), you may see a drop in query performance at that point. Just as before, all records remaining after the date filter is applied are returned to the platform, and further querying and filtering are done in memory.

The querying/retrieval of data is commonly where the greatest performance issues are seen. Using a JDBC connector to send the query to the database (even if it is PostgreSQL, SQL Server, or InfluxDB) can help, or, if the historical data is not queried regularly, you can move it to a separate ThingWorx data store (another DataTable or Stream).

That leaves large data sets of non-time series data as the outlier. This scenario could perform equally well (or poorly) depending primarily on how the data will be retrieved. If there are loose relationships between the data that need to be used, then a relational system that allows those operations to be executed on the database server is preferred. Sequential data that does not need this type of processing could be stored in InfluxDB.

This is a base outline of considerations when designing data storage for your application. Most use cases are unique and may have additional considerations around process and cost.
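To make the date-filter point concrete, here is a minimal sketch of a ThingWorx service (JavaScript) that reads a week of time series data from a logged property. The Thing name "MyAsset" and the property name "temperature" are placeholder assumptions, not part of the tip above.

// Hedged sketch with placeholder names: pull the last 7 days of a logged NUMBER property.
// Because the property is logged to a ValueStream, the start/end dates are applied by the
// persistence provider, so only the matching rows come back to the platform for any
// further in-memory filtering.
var result = Things["MyAsset"].QueryNumberPropertyHistory({
    propertyName: "temperature",
    startDate: dateAddDays(new Date(), -7),
    endDate: new Date(),
    maxItems: 10000,
    oldestFirst: false
});

Keeping the date range tight is usually the single biggest lever for query performance on Streams and ValueStreams.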
Javascript, everyone knows it, at least a little bit. What if I told you that you could do serious data acquisition with just a little bit of Javascript, and that you may already have the tools to do it right now on your "off the shelf" device? Node.js is a command line implementation of Javascript that can be run on common, credit card sized devices like the Raspberry Pi or the Intel Edison. If you already know about Node.js, you may have encountered its non-blocking, asynchronous, "callback" style of programming, which can be a little different from most other languages, which block or wait for commands to complete. While this can be a benefit for increasing performance, it can also be a barrier to entry for new users. This is the problem that Node-RED really solves. Node-RED is a web based Integrated Development Environment (IDE) that turns the "callback" style Javascript programming of Node.js into a series of interconnected Nodes; each Node represents a Javascript function which is connected by a callback to another node/function. A simple hello world program in Node-RED would look something like this (with annotations in red):

You can re-create this program using the Node-RED IDE yourself. Here is a brief video (with no sound) which should familiarize you with how to create your own hello world flow. Video Link : 1333

How can you install Node-RED on your own system to try it out? The good news is, if you have a Raspberry Pi 2 with NOOBS installed on it, Node.js and Node-RED come pre-installed. If you do not already have it installed, or want to install it on your own system, it is still pretty simple. Here are the steps:
1. Download and install Node.js (https://nodejs.org/en/download/)
2. Run the command: sudo npm install -g --unsafe-perm node-red    (omit the sudo on Windows; see http://nodered.org/docs/getting-started/installation.html for more info)
3. You now have Node-RED. To run it, just type node-red on your command line.
4. Using your web browser, go to http://localhost:1880 and the Node-RED IDE will appear in your browser.

How about a real hardware integration example? Node-RED comes with many built-in Nodes and many more nodes you can add to connect to specific peripherals you may have on your device. Rather than provide a complete tutorial on Node-RED, I will focus on using this IDE to re-create a hardware integration that I created in the past using the Java SDK, the Raspberry Pi, and the AM2302 weather station (see Weather Applications with Raspberry Pi | ThingWorx). That example contains detailed specifics on attaching the AM2302 temperature/humidity sensor to your Raspberry Pi. I am going to assume you have the hardware already attached to your Raspberry Pi as described in this tutorial (https://learn.adafruit.com/dht-humidity-sensing-on-raspberry-pi-with-gdocs-logging/overview). I am also assuming that you have installed the python based sample program described in that tutorial, and that you now have a python script called "AdafruitDHT.py" installed on your Pi that produces the following output when it is run:

pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $ sudo ./AdafruitDHT.py 2302 4
Temp=22.3*  Humidity=30.6%
pi@raspberrypi:~/projects/Adafruit_Python_DHT/examples $

If you don't have any of this hardware installed, you can still proceed with this example and just create your own temperature and humidity values manually.
We are going to connect the output of this python script directly to ThingWorx and sample its output value every 5 seconds. I will start by assuming you do not have the AM2302 hardware and create simulated values. I will then replace them with the actual output of the python script as a final step.

Polling versus Interrupt Driven Data Collection
In the Java SDK version of this example, we poll for changes in data: every so many seconds our device wakes up and takes a reading. How do we recreate the same effect in Node-RED without having to push an inject button every 5 seconds? We need an input node that activates on its own every 5 seconds. The Inject node will do this. Drag out an inject node and configure it as shown below. This is an input node, so it will be starting a new flow. It will fire off every 5 seconds from the minute this sheet is deployed.

Simulate Data Collection
Let's generate a random humidity and temperature value before getting the actual data. For this node we will use a Function node. Drag one out and configure it as shown below. Here is the Javascript for this node so you can cut and paste it into the dialog:

var tempF = Math.random() * 40 + 60;
var tempC = (tempF - 32) / 1.8;
var humidity = Math.random() * 80 + 20;
msg.payload = {
    "tempF": tempF,
    "tempC": tempC,
    "humidity": humidity
};
return msg;

Remember that the returned message is the message that the next node will receive. The payload property is the standard or default property of a message that most nodes use to pass data between each other. Here, our payload is an object with all of our simulated data in it.

Let's Test It Out
Connect the two nodes together, add a debug output node, and deploy your sheet. The completed flow will look like this. As soon as you deploy, you should see the following output in your debug tab, and every five seconds another data sample will be generated.

So how does this data get to ThingWorx? What we need to do is take this data and deliver it to ThingWorx in the form of a REST web service call. This is easier to do than it sounds. First off, let's create a Thing on your ThingWorx server that looks like this. Now give it these properties. Next, create an Application Key in the Application Keys section of the Composer. Assign it to the "Administrator" user. Your keyId will of course be different. This key will be the credential you need to post your data.

Installing the ThingRest Node-RED Node
To simplify the process of posting the data to ThingWorx, I have created my own custom node to post data. To install a custom node into your Node-RED installation you have to find the directory Node-RED is using to store your sheets. By default this is a directory called ".node-red" in your home directory. On a Raspberry Pi this directory would be /home/pi/.node-red. If you are running Node-RED now, quit it by hitting control-c and cd into the .node-red directory. Below is the sequence of commands you would issue on your Pi to install the ThingRest node:

cd ~/.node-red
npm install git+https://git@github.com/obiwan314/node-red-node-thingrest.git
node-red

The node package manager (npm) will install this new node automatically into your .node-red directory. Now re-run node-red, go back to your browser, and refresh your Node-RED IDE. You should now have a "REST Thing" node.

Adding a REST Thing node to your flow
Drag a REST Thing output node into your flow and configure it as shown below.
Remember, your Application Key will be different from the one shown here. Also, your ThingWorx server URL may be different if your server is not on the same machine you are working on. Now connect it as shown below. When you deploy this sheet, you will be posting data to ThingWorx. Go back to your WeatherStation1 Thing in ThingWorx and use the Refresh button shown below to see your data changing.

Wait, that's it? That's the whole data collection program? Yes. The flow above is the equivalent of the Java SDK code from the Java weather station example.

Now for Some Real Data
As promised, we will now replace the simulated data in the Generate Data node with real data obtained from the "~/projects/Adafruit_Python_DHT/examples/AdafruitDHT.py 2302 4" python command on your Raspberry Pi using an Exec node. The exec node can be found at the very bottom of your node palette. It executes a command and returns the results as msg.payload to the next node in the flow. You may have noticed it has three outputs instead of one. In order, these outputs are the standard output, the standard error, and the integer return code of the process. Use the first output to get the results of this command. Now connect this in place of the Generate Data node as shown below.

At this point, we can't connect the collected data to the WeatherStation1 Thing because it is in the wrong format. It is console output, and we need it in the form of a Javascript object, so we need a function to parse the console output. Add the function node shown below. Here is the Javascript for cut and paste convenience:

var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*","").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%","").replace("\n","").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC
};
return msg;

Now msg.payload contains a Javascript object identical to the one we were generating at random, but now it is using real data. Connect up your nodes so they appear as shown below, but when you deploy, don't expect it to work yet, because there is still one problem to get around: this python script expects to be run as the root user.

How to Run Node-RED as Root
You can start Node-RED as root with the following command:

sudo node-red -u /home/pi/.node-red

Note that the -u argument is required to make sure you keep using the pi user's .node-red directory. If you lose your REST Thing node, you are not using the pi user's .node-red directory, but root's instead. If you see any error messages in your debug window, try re-attaching the debug node to the Collect Data node and see what is being produced by the exec node. Don't forget to verify that your tempC, tempF and humidity properties are updating in ThingWorx.

Let's Add a GPS Location
You may have noticed that there is a stationLocation property on the WeatherStation1 Thing. Let's set that to a fixed location of 40.0568764,-75.6720953,18 to complete this example. Below is the modified Javascript to update in the Parse Data node to add this location.
var temphumidArray = msg.payload.split(" ");
var tempC = parseFloat(temphumidArray[0].replace("*","").split('=')[1]);
var tempF = tempC * 1.8 + 32;
var humidity = parseFloat(temphumidArray[2].replace("%","").replace("\n","").split('=')[1]);
msg.payload = {
    "humidity": humidity,
    "tempF": tempF,
    "tempC": tempC,
    "stationLocation": "40.0568764,-75.6720953,18"
};
return msg;

What's Next?
Node-RED has many more nodes that you can add to your project through the use of the npm command. There is a GPIO node library you can install from https://github.com/monteslu/node-red-contrib-gpio which will give you input and output nodes for the GPIO pins on your Pi as well. This library also supports accessing Arduinos attached to the Pi over a USB cable, which expands the possibilities for data collection and peripheral control. Hopefully this article has exposed you to the many other possibilities for connecting devices to your ThingWorx server. The REST Thing node uses the HTTP REST protocol to talk to ThingWorx (a sketch of the kind of call it makes is shown below). In the near future, with the introduction of the ThingWorx Javascript SDK, a Node-RED library can be created that uses the ThingWorx AlwaysOn WebSockets protocol to communicate with your ThingWorx server, which will offer even more capabilities and better performance.
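For the curious, here is a hedged sketch (plain Node.js, not the custom node itself) of the kind of REST call the REST Thing node makes on your behalf. The host, port and application key value are placeholders for your own environment; the endpoint shown is the standard ThingWorx REST pattern for writing several properties at once.

// Hedged sketch: update WeatherStation1's properties over the ThingWorx REST API.
var http = require('http');

var payload = JSON.stringify({ tempC: 22.3, tempF: 72.1, humidity: 30.6 });
var options = {
    host: 'localhost',                 // your ThingWorx server
    port: 8080,
    path: '/Thingworx/Things/WeatherStation1/Properties/*',
    method: 'PUT',
    headers: {
        'Content-Type': 'application/json',
        'appKey': 'YOUR-APPLICATION-KEY'   // the Application Key created earlier
    }
};

var req = http.request(options, function (res) {
    console.log('ThingWorx responded with status ' + res.statusCode);
});
req.on('error', function (err) { console.log('Request failed: ' + err.message); });
req.write(payload);
req.end();

If the response status is 200, the new values should appear on the Thing after a refresh in Composer.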
A user can make a direct REST call to the ThingWorx platform, but when a website tries to make the same REST call, the platform server blocks the request because it is a cross-origin request. To enable this, the platform server needs to allow cross-origin requests from all or from specific websites. Enabling cross-origin requests is done by adding a CORS filter to the server. The CORS (Cross-Origin Resource Sharing) specification enables cross-origin requests from websites deployed on a different server. By enabling the CORS filter, a third-party tool or website can retrieve data from a ThingWorx instance. Follow the steps below to update the CORS filter:

Update the web.xml file (located in $CATALINA_HOME/conf/web.xml).

For a minimal configuration, add the code below:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: the url-pattern /* opens the ThingWorx application to every domain; it is recommended to use limited patterns.

For an advanced configuration, use the code below:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>http://www.customerwebaddress.com</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.methods</param-name>
    <param-value>GET,POST,HEAD,OPTIONS,PUT</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.headers</param-name>
    <param-value>Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
  </init-param>
  <init-param>
    <param-name>cors.exposed.headers</param-name>
    <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
  </init-param>
  <init-param>
    <param-name>cors.support.credentials</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>cors.preflight.maxage</param-name>
    <param-value>10</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: update the cors.allowed.origins parameter with the desired web address, and keep in mind that the url-pattern /* opens the platform to every domain; limited patterns are recommended.

Save the web.xml file and restart Tomcat.

For additional information, please see the official Tomcat reference document: http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#CORS_Filter

This was tested using an online Javascript editor (jsfiddle) by executing the script below:

<script>
var data = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://localhost:8080/Thingworx/Things", true);
xhr.withCredentials = true;
xhr.send();
</script>

The request was successful and the list of Things was returned.
Hi, I have attached a Postman collection that can be used as a template and modified. Steps to import the collection into Postman:
1. In your Postman window, click Import.
2. Once you have clicked Import, you can choose your file.
3. The collection is now visible on the left side of the window.
JavaMelody is an open source (LGPL) application that measures and calculates statistical information based on application usage. The resulting data can be viewed in a variety of formats, including evolution charts, which track various operations and server attributes over time. There are also robust reporting options that allow data to be exported in either HTML or PDF format.

Installation
Installation is fairly simple and can be done in just a few minutes. Download the distribution from the JavaMelody Wiki and extract javamelody.jar, available at https://github.com/javamelody/javamelody/releases

Step 1: Download the JavaMelody file (in Unix, use the following command):
wget javamelody.googlecode.com/files/javamelody-1.49.0.zip
Note: Check the link above for the latest available version before executing the command, and modify the version accordingly.

Step 2: Extract the zip file (using the following command in Unix; note the version from step 1):
unzip javamelody-1.49.0.zip

Step 3: Copy javamelody.jar and jrobin-x.jar from the JavaMelody installable to the WEB-INF/lib directory of the war file deployed in Tomcat, using the following command in Unix:
cp -pr javamelody-1.49.0 jrobin-x.jar /opt/tomcat/server/webapps/<application name>/WEB-INF/lib

Step 4: Edit the web.xml file in the WEB-INF directory of the war file deployed in Tomcat and add the following lines before the servlet descriptions, i.e. near the start of the web.xml file:

<filter>
  <filter-name>monitoring</filter-name>
  <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>monitoring</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
<listener>
  <listener-class>net.bull.javamelody.SessionListener</listener-class>
</listener>

Step 5: Restart the Tomcat server after editing web.xml and access the JavaMelody page using the following URL pattern:
http://<hostname on which tomcat is configured>:<Port number on which the application is accessed>/<application name>/monitoring

The URL can be customized in the configuration file. Reports can be viewed in weekly, daily, or monthly formats. They can also be downloaded or sent over email in PDF format. The iText library for web apps and Java's Mail and Activation libraries are required on the server in order to use the mail session. The report provides the same information that can be found on the monitoring web page, with both high-level and detailed information: CPU and memory usage, detailed SQL information, SQL statistics, server requests, system threads, caches, and data caches.

System Overhead
On the JavaMelody Wiki, https://github.com/javamelody/javamelody/wiki/Overhead, one can find a healthy discussion about system overhead. The general consensus is that the overhead caused by JavaMelody is very low and that the feature is safe to enable full-time in a QA environment:
- JavaMelody records only statistics and not events, so the memory overhead is quite minimal.
- No I/O on the wire and minimal I/O on disk.
If no problems arise, enabling JavaMelody in the production environment can be considered as well. Using a tool like JavaMelody can lead to valuable insights on how to optimize servers or uncover otherwise hidden issues, providing value that exceeds the overhead cost.
For those of you that aren't aware, the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because of the infancy of the product, there is not yet an official process for supplying release notes along with the plugin. The notes below are not official or all-encompassing, but they cover the main items worked on for 7.0.

New Features:
- Added Configuration Table Wizard for code generation
- SDK Javadocs are now automatically linked to SDK resources on project creation
- When creating a Service, Trace logging statements are generated inside of it (along with appropriate initializers)
- ThingWorx Source actions are now available from the right-click menu within a .java file

Bugs:
- Fixed problem where some BaseTypes were not uppercase in annotations when generating code
- Fixed error when creating and importing Extension Projects when the Eclipse install has a space in the file path
- Fixed inconsistent formatting in the metadata.xml when adding new Entities

We are hoping to have a more official release note process for the next release. Feel free to reply with questions or concerns.
ThingWorx provides the capability to use JDBC to connect to relational databases. What are the steps to take?
1. Find the proper JDBC JAR file; this can easily be located by keeping in mind your database and its version and doing an online search.
2. Download the JDBC Extension Creator from the Marketplace.
3. Follow the instructions to create the actual JDBC extension you will be using.
4. Create a Thing based on the ThingTemplate from the JDBC extension. This represents your actual connection to the database.
5. Set up the configuration:
     a. Connection String - usually I use connectionstrings.com to find that
     b. Validation String - this has to be a VALID SQL statement within the context of the database you are connecting to (like SELECT SYSDATE FROM DUAL for Oracle)
     c. Proper user name and password as defined in the database you are connecting to
6. Save.
7. To check if you are properly connected, go back into Edit mode, go to Services, create a new SQL Query or Command, and check the Tables and Columns tab. Actual tables should show up now.
8. If it doesn't work, check your Application Log.
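As a follow-on to step 7, SQL services defined on the database Thing can then be called from any other service. A minimal sketch, assuming a Thing named "OracleDatabaseThing" and a SQL Query service "GetOpenOrders" with a [[customerId]] parameter (all illustrative names, not part of the steps above):

// Hedged sketch: invoke a SQL Query service defined on the JDBC-backed Thing.
// SQL Query services return an InfoTable that can be bound to grids and charts
// or consumed by other services.
var orders = Things["OracleDatabaseThing"].GetOpenOrders({
    customerId: "C-1001"    // maps to the [[customerId]] marker in the SQL statement
});
logger.info("Rows returned: " + orders.rows.length);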
This project was developed out of curiosity about how ThingWorx communicates with sensors and vice versa. A Smart Parking system idea immediately came to mind, and I started working on it. While heading from home to the office I always worry about car parking space at the office, especially in the rainy season. This project helps users find a parking space. The project has 4 sections, as follows:
1) Smart Parking system: a system application developed in ThingWorx that guides the user to an empty car parking space. Sensors placed at each parking slot sense the presence of a car. A program running on a Raspberry Pi board collects the sensor information and sends it to the Smart Car Parking System application in ThingWorx. The data received through the sensors is displayed on a ThingWorx dashboard/mashup.
2) Live Traffic: embeds a Google Map and shows the traffic around the user's current location.
3) Traffic Blog: if users are visiting a place and have questions regarding parking, traffic conditions, etc., they can post their questions here and people around that area can answer them. Questions are not restricted to parking; they can also cover the best places to visit in an area, restaurants, shops, etc.
4) Automobile Wiki: this page provides documented help on anything related to automobiles, e.g. how to change car tyres or how to change car wipers.
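As an illustration of the dashboard side of such a project, here is a minimal sketch of a ThingWorx service that could count free slots. It assumes a hypothetical "ParkingSlot" ThingTemplate with a boolean "occupied" property updated by the Raspberry Pi program; these names are assumptions for the sketch, not taken from the project itself.

// Hedged sketch: count currently free parking slots across all slot Things.
var slots = ThingTemplates["ParkingSlot"].GetImplementingThings();
var free = 0;
for (var i = 0; i < slots.rows.length; i++) {
    if (!Things[slots.rows[i].name].occupied) {
        free++;
    }
}
result = free;   // bind this to a gauge or label widget on the parking mashup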
Troubleshooting platform issues is generally done using a layered approach, similar to a simplified OSI model. From bottom to top, the following layers represent the areas to analyze at each step:
1. Physical (server, power, wired connections): check the server status and condition, CPU and memory levels.
2. Software (operating system, Tomcat, Java versions, compatibility, and configuration): refer to the compatibility matrix to ensure the requirements are met; verify the Tomcat Java configuration. Note: the Tomcat Manager server status page conveniently provides this information in one place.
3. Network: ensure proper connectivity, port availability, firewall configuration, and additional security, if applicable.
4. Application.
The main focus of this blog post is step 4. As the ThingWorx application is driven by Tomcat, the first tool available "out of the box" is the built-in Tomcat Manager app. Clicking on "Server Status" provides information on versions, memory usage, processes, times and thread counts. Keep in mind that the default Tomcat maximum thread count is 200.
Some additional tools that can assist in troubleshooting Java applications and gathering performance metrics are JavaMelody, New Relic, and Profiler4j. These have to be obtained, installed, and configured separately.
JavaMelody: a free and lightweight monitoring tool which does not do any profiling and is safe to use in production environments. It comes with a series of plug-ins, including ones for Grails, Jenkins and Jira.
New Relic: real-time Java application monitoring; features code deployment reports, transaction tracing across different tiers and the ability to create alerts. A subscription fee applies.
Profiler4j: a free, open-source tool for profiling in Java. It is enabled by passing an argument at start-up with a path to the Profiler4j .jar file. It comes with several graphs and charts showing a call graph with method details, a call tree, a memory monitor, a class list and thread monitoring.
From the application perspective, ThingWorx Composer provides the PlatformSubsystem and LoggingSubsystem:
- PlatformSubsystem contains services such as GetPerformanceMetrics, GetSummaryInformation, GetThingworxVersion, and more, providing fundamental information for any troubleshooting scenario.
- LoggingSubsystem contains the logs, log settings, and other monitoring values.
List of recommended tools for troubleshooting all layers:
- Wireshark: monitors network traffic
- jstack: captures thread dumps to investigate the state of specific threads
- Dynatrace: system performance and web application performance
- jconsole: system or application performance
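The subsystem services mentioned above can also be invoked from a quick test service in Composer to capture a snapshot during an investigation; a minimal sketch, assuming the default subsystem names:

// Hedged sketch: gather basic platform health information from the PlatformSubsystem.
var version = Subsystems["PlatformSubsystem"].GetThingworxVersion();
var summary = Subsystems["PlatformSubsystem"].GetSummaryInformation();
var metrics = Subsystems["PlatformSubsystem"].GetPerformanceMetrics();
logger.info("ThingWorx version: " + version);
// summary and metrics are InfoTables; log them or bind them to a monitoring mashup.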
Scripto provides a RESTful endpoint for Groovy Custom Objects on the Axeda Platform.  Custom Objects exposed via Scripto can be accessed via a GET or a POST, and the script will have access to request parameters or body contents. Any Custom Object of the "Action" type will automatically be exposed via Scripto. The URL for a Scripto service is currently defined by the name of the Custom Object: GET: http://{{YourHostName}}/services/v1/rest/Scripto/execute/<customObjectName> Scripto enables the creation of "Domain Specific Services". This allows implementers to take the Axeda Domain Objects (Assets, Models, DataItems, Alarms) and expose them via a service that models the real-world domain directly (trucks, ATMs, MRI Machines, sensor readings). This is especially useful when creating a domain-specific UI, or when integrating with another application that will push or pull data. Authentication There are several ways to test your Scripto scripts, as well as several different authentication methods. The following authentication methods can be used: Request Parameter credentials: ?username=<yourUserName>&password=<yourPassword> Request Parameter sessionId (retrieved from the Auth service): ?sessionid=<sessionId> Basic Authentication (challenge): From a browser or CURL, simply browse to the URL to receive an HTTP Basic challenge. Request Parameters You can access the parameters to the Groovy script via two Objects, Call and Request. Request is actually just a sub-class of Call, so the values will always be the same regardless of which Object you use.  Although parameters may be accessed off of either object, Call is preferable when Chaining Custom Objects (TODO LINK) together.  Call also includes a reference to the logger which can be used to log debug messages. GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id>&serial_number=mySerialNumber Accessing Parameters through the Request Object import com.axeda.drm.sdk.scripto.Request // Request.parameters is a map of strings def serial_number = Request.parameters.serial_number assert serial_number == "mySerialNumber"       Accessing Parameters through the Call Object import com.axeda.drm.sdk.customobject.Call // Call.parameters is a map of strings def serial_number = Call.parameters.serial_number assert serial_number == "mySerialNumber"       Accessing the POST Body through the Request Object The content from a POST request to Scripto is accessible as a string via the body field in the Request object.  Use Slurpers for XML or JSON to parse it into an object. POST:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id> Response: { "serial_number":"mySerialNumber"} import com.axeda.drm.sdk.scripto.Request def body = Request.body def slurper = new JsonSlurper() def result = slurper.parseText(body) assert result.serial_number == "mySerialNumber"       Returning Plain Text Groovy custom objects must return some content.  The format of that content is flexible and can be returned as plain text, JSON, XML, or even binary files. The follow example simply returns plain text. GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name> // Outputs:  hello return ["Content-Type":"text/plain","Content":"hello"]       Returning JSON We use the JSONObject Class to format our Map-based content into a JSON structure. 
The eliminates the need for any concern around formatting, you just build up Maps of Maps and it will be properly formatted by the fromObject() utility method. GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name> import net.sf.json.JSONObject root = [   items:[    num_1: “one”,    num_2: “two”            ] ] /** Outputs {   "items": {  "num_1": "one", "num_2": "two"  } } **/ return ['Content-Type': 'application/json', 'Content': JSONObject.fromObject(root).toString(2)]       Link to JSONObject documentation Returning XML To return XML, we use the MarkupBuilder to build the XML response. This allows us to create code that follows the format of the XML that is being generated. GET:  http://{{YourHostName}}/services/v1/rest/Scripto/execute/<Your Script Name>?sessionid=<Session Id> import groovy.xml.MarkupBuilder def writer = new StringWriter() def xml = new MarkupBuilder(writer) xml.root(){     items(){         num_1("one")         num_2("two")     } } /** Outputs <root>   <items>     <num_1>one</num_1>     <num_2>two</num_2>   </items> </root> **/ return ['Content-Type': 'text/xml', 'Content': writer.toString()]       Link to Groovy MarkupBuilder documentation Returning Binary Content To return binary content, you typically will use the fileStore API to upload a file that you can then download using Scripto.  See the fileInfo section to learn more. In this example we connect the InputStream which is associated with the getFileData() method directly to the output of the Scripto script. This will cause the bytes available in the stream to be directly forwarded to the client as the body of the response. GET:  http://{{Your Host Name}}/services/v1/rest/Scripto/execute/{{Your Script Name}}?sessionid={{Session Id}}&fileId=123 import static com.axeda.sdk.v2.dsl.Bridges.* import com.axeda.services.v2.* import com.axeda.sdk.v2.exception.* def contentType = parameters.type ?: 'image/jpg' return ['Content':fileInfoBridge.getFileData(parameters.fileId), 'Content-Type':contentType]   The Auth Service - Authentication via AJAX Groovy scripts are accessible to AJAX-powered HTML apps with Axeda instance credentials.  To obtain a session from an Axeda server, you should make a GET call to the Authentication service. The service is located at the following example URL: https://{{YourHostName}}/services/v1/rest/Auth/login This service accepts a valid username/password combination in the incoming Request parameters and returns a SessionID. The parameter names it expects to see are as follows: Property Name Description principal.username The username for the valid Axeda credential. password The password for the supplied credential. 
A sample request to the Auth Service: GET: https://{{YourHostName}}/services/v1/rest/Auth/login?principal.username=YOURUSER&password=YOURPASS Would yield this response (at this time the response is always in XML): <ns1:WSSessionInfo xsi:type="ns1:WSSessionInfo" xmlns:ns1="http://type.v1.webservices.sl.axeda.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">   <ns1:created>2013-08-12T13:19:37 +0000</ns1:created>   <ns1:expired>false</ns1:expired>   <ns1:sessionId>19c33190-dded-4655-b2c0-921528f7b873</ns1:sessionId> <ns1:sessionTimeout> 1800 </ns1:sessionTimeout> </ns1:WSSessionInfo>       The response fields are as follows: Field Name Description created The timestamp for the date the session was created expired A boolean indicating whether or not this session is expired (should be false) sessionId The ID of the session which you will use in subsequent requests sessionTimeout The time (in seconds) that this session will remain active for The Auth Service is frequently invoked from JavaScript as part of Custom Applications. The following code demonstrates this style of invocation. function authenticate(host, username, password) {             try {                 netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");             } catch (e) {                 // must be IE             }             var xmlHttpReq = false;             var self = this;             // Mozilla/Safari             if (window.XMLHttpRequest) {                 self.xmlHttpReq = new XMLHttpRequest();             }             // IE             else if (window.ActiveXObject) {                 self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");             }             var SERVICES_PATH = "/services/v1/rest/"             var url = host + SERVICES_PATH + "Auth/login?principal.username=" + username + "&password=" + password;             self.xmlHttpReq.open('GET', url, true);             self.xmlHttpReq.onreadystatechange = function() {                 if (self.xmlHttpReq.readyState == 4) {                     getSessionId(self.xmlHttpReq.responseXML);                 }             }             self.xmlHttpReq.send() } function getSessionId(xml) {             var value             if (window.ActiveXObject) {                 // xml traversing with IE                 var objXML = new ActiveXObject("MSXML2.DOMDocument.6.0");                 objXML.async = false;                 var xmldoc = objXML.loadXML(xml);                 objXML.setProperty("SelectionNamespaces", "xmlns:ns1='http://type.v1.webservices.sl.axeda.com'");                 objXML.setProperty("SelectionLanguage","XPath");                 value =  objXML.selectSingleNode("//ns1:sessionId").childNodes[0].nodeValue;             } else {                 // xml traversing in non-IE browsers                 var node = xml.getElementsByTagNameNS("*", "sessionId")                 value = node[0].textContent             }             return value } authenticate ("http://mydomain.axeda.com", "myUsername", "myPassword")       Calling Scripto via AJAX Once you have obtained a session id through authentication via AJAX, you can use that session id in Scripto calls. The following is a utility function which is frequently used to wrap Scripto invocations from a UI. 
function callScripto(host, scriptName, sessionId, parameter) {             try {                 netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");             } catch (e) {                 // must be IE             }             var xmlHttpReq = false;             var self = this;             // Mozilla/Safari             if (window.XMLHttpRequest) {                 self.xmlHttpReq = new XMLHttpRequest();             }             // IE             else if (window.ActiveXObject) {                 self.xmlHttpReq = new ActiveXObject("Microsoft.XMLHTTP");             }             var url = host + SERVICES_PATH + "Scripto/execute/" + scriptName + "?sessionid=" + sessionId;             self.xmlHttpReq.open('GET', url, true);             self.xmlHttpReq.onreadystatechange = function() {                 if (self.xmlHttpReq.readyState == 4) {                     updatepage(div, self.xmlHttpReq.responseText);                 }             }             self.xmlHttpReq.send(parameter); } function updatepage(div, str) {             document.getElementById(div).innerHTML = str; } callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo")       A more modern jQuery-based example might look like the following: function callScripto(host, scriptName, sessionId, parameter) {     var url = host + '/services/v1/rest/Scripto/execute/' + scriptName + '?sessionid=' + sessionId     if ( parameter != null ) url += '&' + parameter     $.ajax({url: url,               success:  function(response) {  updatepage(div, response); }           }); } function updatepage(div, str) {     $("#" + div).innerHTML = str } callScripto("http://mydomain.axeda.com", "myGroovyScriptName", "mySessionId", "myparameter=foo") In Conclusion As shown above, Scripto offers a number of ways to interact with the platform.  On each version of the Axeda Platform, all supported v1 and v2 APIs are available for Scripto to interact with the Axeda domain objects and implement business logic to solve real-world customer problems. Bibliography ​(PTC.net account required)     Axeda v2 API/Services Developer's Reference Version 6.8.3 August 2015     Axeda® v1 API Developer’s Reference Guide Version 6.8 August 2014     Documentation Map for Axeda® 6.8.2 January 2015
In recent times, one of the frequent questions regarding PostgreSQL is which tools work well with it. With the growing functionality of PostgreSQL, more vendors are willing to produce tools for it. There are many tools for management, development, and data visualization, and the list is growing. Here is a list of a few tools that might be of interest to ThingWorx users.

psql terminal: the psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can directly interface with the PostgreSQL server. The psql client comes by default with the PostgreSQL database. Key features: issue queries either through commands or from a file; provides shell-like features to automate tasks. For more information, refer to http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III: pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It serves the needs of both admins and normal users, from writing simple SQL queries to developing complex databases. Key features: open source and cross-platform support; no additional drivers are required; supports more than 30 different languages. Note: pgAdmin III comes by default with the PostgreSQL 9.4 installer. For more information, refer to http://www.pgadmin.org/download/

phpPgAdmin: phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create tables, alter tables and query the data using SQL. Key features: open source and supports PostgreSQL 9.x; requires a web server; administers multiple servers; supports the Slony master-slave replication engine. For the phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL: TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere through the web browser. Key features: open source and cross-platform support; supports SSH for both the web interface and the database connections; GUI with tabbed SQL editors. For the TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger: pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is built in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer. Key features: open source community project; autodetects PostgreSQL log file formats (stderr, syslog or csvlog); provides SQL-query-related reports and statistics; can also be limited to reporting only errors; generates pie charts and time-based charts. For more information, refer to http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats: Postgrestats is software with automated scripts for easily viewing statistics such as commits, rollbacks, user inserts, updates and deletes at time-based intervals. Postgrestats is installed on and executes on the database server, and it customizes the main conf file. Postgrestats also provides an enterprise application for replication mode and high availability. Key features: open source and easy-to-set-up installation; takes snapshot reports based on time intervals; optional email-on-update; text-file data storage; also provides an enterprise application, PostgreStats Enterprise. For more information, refer to http://www.postgrestats.com/subs/docs.html

Slemma: Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license priced at $29 per user per month. Key features: create charts and interactive dashboards by selecting tables; non-developers can easily create visualizations (with no coding); email dashboards automatically to clients or your entire team. For more information, refer to https://slemma.com/

Ubiq: Ubiq is a web-based business intelligence and reporting tool for PostgreSQL server. Ubiq creates reports and online dashboards and provides the ability to export them in multiple formats. Ubiq is distributed with a commercial license. Key features: drag & drop interface to create interactive charts, dashboards and reports; apply powerful filters and functions to the data; share your work and schedule email reports. For more information, refer to http://ubiq.co/tour
From the documentation, a Solr node is only needed when using DataTables. If the Solr configuration field is left blank, the extension will ask for an input. Are Solr nodes required or optional in order to use DSE with ThingWorx (in the hypothetical case of not using DataTables)?
     -- As far as ThingWorx functionality is concerned, a Solr node is not required. However, the extension does try to validate the configuration, and hence, at this point, a Solr node is mandatory to properly configure the extension. This will be fixed in the future.
When there are two entries for addresses, one for a Cassandra cluster and one for a Solr cluster, are they the same cluster or different clusters?
     -- They could be either. There can be one machine with Solr enabled, using the same IP for both Cassandra and Solr. However, that is not recommended for production workloads; it would be perfectly fine for development or test environments. In a cluster, in order to have both Solr and Cassandra nodes, the use of datacenters is required. Even if a datacenter isn't explicitly defined, a default install of DSE will create two data centers called "Cassandra" and "Solr", which is what would be seen in the default "Cassandra Keyspace Settings" property in the configuration. If the user does create datacenters with specific names, then they will have to update the "Cassandra Keyspace Settings" property to reflect the same, i.e. replication = {'class':'NetworkTopologyStrategy', 'Cassandra':1, 'Solr':1}. The number (1 being the default) represents the replication factor (https://docs.datastax.com/en/cql/3.1/cql/cql_using/update_ks_rf_t.html), depending on the number of nodes in each data center.
The purpose of this post is to provide some ideas to help diagnose issues in mashups. First, check whether the problem occurs at mashup runtime or in design (edit) mode.
1. Runtime: is the issue visual or related to improper service execution? (e.g., "my data is displaying correctly but the styling or formatting is wrong" -- visual; "my data is displayed incorrectly but the styling and formatting is right" -- improper service execution)
For visual/styling/formatting issues, return to the edit mode of the mashup and ensure the proper style definitions were set up. Ensure the logic behind the connections is correct. Check the configuration of the widget(s) involved. Were there any changes made to the styles after the mashup was saved and run the first time? If so, try clearing the browser cache and reconnecting the dependent entity to the style involved in the issue. If the problem persists, contact technical support to raise a cosmetic defect ticket.
For improper service execution, return to Composer and use the "Test" button on the service to execute it and validate the output. If the outputs are incorrect, check the code inside the service. If the outputs come out as expected, try reconnecting the service in mashup design mode and clearing the browser cache. If the issue is related to data from the user database not displaying, ensure database connectivity and proper credentials. If the problem persists, reach out to technical support to raise a defect.
2. Design/edit mode: if the widgets are not displaying correctly or not appearing in the list, check that the extensions involved appear under the Extension Manager; re-upload them if needed and restart Composer. If the Google Maps widget is not showing in the mashup the first time it is used, allow up to 2 hours for it to load and cache, and submit a ticket to technical support including screenshots of the issue. For other styling, formatting, or improper display issues at design time, document the observation and supply screenshots to the technical support team for investigation.
Note: See Tools and approaches used in troubleshooting Twx issues.