
IoT Tips

I had just finished writing an integration test that needed to update a Thing on a ThingWorx server using only classes in the Java JDK, with as few dependencies as possible, and before I moved on I thought I would blog about this example since it makes a great starting point for posting data to ThingWorx. ThingWorx has a Java SDK, available from the ThingWorx IoT Marketplace, which uses the WebSocket protocol and offers far better performance and far more capabilities than this example. If you are looking, however, for the simplest, minimum-dependency example of delivering data to ThingWorx, this is it. This example uses the REST interface of your ThingWorx server. It requires only classes already found in your JDK (JDK 7) and optionally uses the JSON Simple jar. References to that jar can be removed if you prefer to build the property update JSON string yourself. Below is the Java class.

package com.thingworx.rest;

import org.json.simple.JSONObject;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.KeyManagementException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

/**
 * Author: bill.reichardt@thingworx.com
 * Date: 4/22/16
 */
public class SimpleThingworxRestPropertyUpdater {

    static {
        // Disable all SSL security checking (not for production!)
        try {
            disableSSLCertificateChecking();
        } catch (Exception e) {
            e.printStackTrace();
        }
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) { return true; }
        });
    }

    public static void main(String[] args) {
        // like http://localhost:8080 or https://localhost:443
        String serverUrl = args[0];
        // Generate one of these from Composer under Application Keys
        String appKey = args[1];
        String thingName = args[2];

        // You don't have to use the Simple JSON class, just pass a JSON string to restUpdateProperties()
        // This Thing has three properties, a (NUMBER), b (STRING) and c (BOOLEAN)
        JSONObject properties = new JSONObject();
        properties.put("a", new Integer(100));
        properties.put("b", "My New String Value");
        properties.put("c", true);
        String payload = properties.toJSONString();

        try {
            int response = restUpdateProperties(serverUrl, appKey, thingName, payload);
            System.out.println("Response Status=" + response);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static int restUpdateProperties(String serverUrl, String appKey, String thingName, String payload) throws IOException {
        String httpUrlString = serverUrl + "/Thingworx/Things/" + thingName + "/Properties/*";
        System.out.println("Performing HTTP PUT request to " + httpUrlString);
        System.out.println("Payload is " + payload);
        URL url = new URL(httpUrlString);
        HttpURLConnection httpURLConnection = (HttpURLConnection) url.openConnection();
        httpURLConnection.setUseCaches(false);
        httpURLConnection.setDoOutput(true);
        httpURLConnection.setRequestMethod("PUT");
        httpURLConnection.setRequestProperty("Content-Type", "application/json");
        httpURLConnection.setRequestProperty("appKey", appKey);
        OutputStreamWriter out = new OutputStreamWriter(httpURLConnection.getOutputStream());
        out.write(payload);
        out.close();
        httpURLConnection.getInputStream();
        return httpURLConnection.getResponseCode();
    }

    /**
     * Disables the SSL certificate checking for new instances of {@link HttpsURLConnection}. This has been created to
     * aid testing on a local box, not for use on production.
     */
    private static void disableSSLCertificateChecking() throws KeyManagementException, NoSuchAlgorithmException {
        TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() {
                return null;
            }
            public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
            public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
        } };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAllCerts, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
    }
}

When run, it prints out the request URL and what the request body looks like in JSON:

Performing HTTP PUT request to https://localhost:443/Thingworx/Things/SimpleThing/Properties/*
Payload is {"a":100,"b":"My New String Value","c":true}
Response Status=200

I have attached the full Gradle project that builds and runs this example class as a zip file to this article. When you download it, if you have Java JDK 7 already installed and on your path, you can run the example with the command:

On Linux or OS X: ./gradlew simplerest
On Windows: gradlew.bat simplerest

Don't forget to edit the build.gradle file to use your server's URL and application key. You will also find the Thing used in this example in the entities folder of this project; you can import it on your server to test it out. It is a Thing based on GenericThing with three properties, a (NUMBER), b (STRING) and c (BOOLEAN).
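If you drop the JSON Simple dependency, as mentioned above, the payload can be assembled by hand. The snippet below is a minimal sketch of that option for the same three properties; it assumes the string values need no JSON escaping and reuses the restUpdateProperties() method from the class above.

// Hand-built payload for the same Thing; only safe if values contain no characters that need JSON escaping
String payload = "{\"a\":100,\"b\":\"My New String Value\",\"c\":true}";
int response = restUpdateProperties(serverUrl, appKey, thingName, payload);
System.out.println("Response Status=" + response);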
View full tip
A user can make a direct REST call to the ThingWorx platform, but when a website hosted on a different origin tries to make the same REST call, the platform server blocks the request because it is a cross-origin request. To allow this, the platform server needs to accept cross-origin requests from all (or specific) websites, which is done by adding a CORS filter to the server. The CORS (Cross-Origin Resource Sharing) specification enables cross-origin requests from websites deployed on a different server; with the CORS filter enabled, a third-party tool or website can retrieve data from a ThingWorx instance. Follow the steps below to configure the CORS filter:

1. Update the web.xml file (located in $CATALINA_HOME/conf/web.xml).

For a minimal configuration, add the code below:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: the url-pattern /* opens the ThingWorx application to every URL pattern; it is recommended to use more limited patterns.

For an advanced configuration, use the code below:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>http://www.customerwebaddress.com</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.methods</param-name>
    <param-value>GET,POST,HEAD,OPTIONS,PUT</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.headers</param-name>
    <param-value>Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
  </init-param>
  <init-param>
    <param-name>cors.exposed.headers</param-name>
    <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
  </init-param>
  <init-param>
    <param-name>cors.support.credentials</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>cors.preflight.maxage</param-name>
    <param-value>10</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: update the cors.allowed.origins parameter with the desired web address.

2. Save the web.xml file.
3. Restart Tomcat.

For additional information, please see the official Tomcat reference document: http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#CORS_Filter

This was tested using an online JavaScript editor (jsfiddle) and executing the script below:

<script>
var data = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://localhost:8080/Thingworx/Things", true);
xhr.withCredentials = true;
xhr.send();
</script>

The request was successful and a list of Things was returned.
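To extend the advanced configuration above to more than one trusted website, the cors.allowed.origins init-param accepts a comma-separated whitelist of origins (per the Tomcat CORS filter documentation). The hostnames below are placeholders and should be replaced with your own sites:

<init-param>
  <param-name>cors.allowed.origins</param-name>
  <param-value>http://www.customerwebaddress.com,https://dashboard.customerwebaddress.com</param-value>
</init-param>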
View full tip
Below I will discuss a simple implementation of constructing a POST request in Java. I have embedded the entire source at the bottom of this post for easy copy and paste.

To start, you will want to define the URL you are trying to POST to:

String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";

Breaking down this URL string:

http:// -- a non-SSL connection is being used in this example
127.0.0.1:80 -- the address and port that ThingWorx is hosted on
/Thingworx -- this bit is necessary because we are talking to ThingWorx
/Things -- Things is used as an example here because the service I am posting to is on a Thing. Some alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems
/Thing_Name -- substitute in the name of your Thing where the service is located
/Services -- we are calling a service on the Thing, so this is how you drill down to it
/Service_to_Post_to -- substitute in the name of the service you are trying to invoke

Create a URL object:

URL obj = new URL(url);

Class URL, included via the java.net.URL import, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Adding the port is optional, but if it is omitted port 80 will be used by default.

Define an HttpURLConnection object to later open a single connection to the URL specified:

HttpURLConnection con = (HttpURLConnection) obj.openConnection();

Class HttpURLConnection, included via the java.net.HttpURLConnection import, provides a single instance to connect to the URL specified. The method openConnection is called to create a new instance of a connection, but no connection is actually made at this point.

Set the type of request and the header values to pass:

con.setRequestMethod("POST");
con.setRequestProperty("Accept", "application/json");
con.setRequestProperty("Content-Type", "application/json");
con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

You can see that we are performing a POST request, passing in an Accept header, a Content-Type header, and a ThingWorx-specific appKey header.

Pass true into the setDoOutput method because we are performing a POST request; when sending a PUT request we would pass in true as well. When there is no request body being sent, false can be passed in to denote there is no "output", as when making a GET request.

con.setDoOutput(true);

Create a DataOutputStream object that wraps the con object's output stream. We call the flush method on the DataOutputStream object to push the REST request from the stream to the URL defined for POSTing, and immediately close the DataOutputStream object because we are done making the request.

DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.flush();
wr.close();

The DataOutputStream class lets you write primitive Java data types to the con object's output stream. The next line returns the HTTP status code for the request. This will be something like 200 for success or 401 for unauthorized.

int responseCode = con.getResponseCode();

The final block of this code uses a BufferedReader that wraps an InputStreamReader that wraps the con object's input stream (the byte response from the server). This BufferedReader object is then used to iterate through each line in the response and append it to a StringBuilder object. Once that has completed, we close the BufferedReader object and print the response we just retrieved.

BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuilder response = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
    response.append(inputLine);
}
in.close();
System.out.println(response.toString());

The InputStreamReader decodes bytes to character streams using a specified charset. The BufferedReader provides a more efficient way to read characters from an InputStreamReader object. The StringBuilder object is an unsynchronized way of building a String representation of the content read from the BufferedReader; StringBuffer can be used instead where multi-threaded synchronization is necessary.

Below is the block of code in its entirety from the discussion above:

public void sendPost() throws Exception {
    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

    // add request headers
    con.setRequestMethod("POST");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

    // send POST request
    con.setDoOutput(true);
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();

    int responseCode = con.getResponseCode();
    System.out.println("\nSending 'POST' request to URL : " + url);
    System.out.println("Response Code : " + responseCode);

    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();

    // print result
    System.out.println(response.toString());
}
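The sendPost method above sends an empty request body, which works for services that take no inputs. As a small sketch (not part of the original post), if the target service expects input parameters, a JSON body matching the service's parameter names can be written to the same output stream before flushing; the parameter name "maxItems" below is only a placeholder. This fragment is a drop-in replacement for the "send POST request" block above.

// send POST request with service input parameters as a JSON body (placeholder parameter name)
con.setDoOutput(true);
String body = "{\"maxItems\": 50}";
DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.write(body.getBytes("UTF-8"));
wr.flush();
wr.close();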
View full tip
The App URI in the ThingWorx Remote Thing tunnel configuration specifies the endpoint of the specified tunnel. The default value (/Thingworx/tunnel/vnc.jsp) points to the built-in ThingWorx VNC client, which can be downloaded through the Remote Access Widget in a Mashup to provide VNC remote desktop access. Leaving the App URI blank will result in the tunnel being connected to the listen port on the user's machine as specified in the Remote Access Widget. In this case the user must supply the application client (e.g. an SSH client) in order to connect to the tunnel endpoint.
View full tip
The Axeda Platform has long had the ability to write custom logic to retrieve, manipulate and create data. In the current versions of the Platform there are two classes of API, Version 1 (v1) and Version 2 (v2). The v1 APIs allow a developer to work with data on the Platform, but all of those APIs are subject to the maxQueryResults configuration property, which by default limits the number of results per query to 1000. For some data sets this is inadequate. In comes the v2 API, which introduces pagination.

One of the first things a new user does when exploring the v2 API is something like the following:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)

And they get frustrated when they only get the same 100 rows of data. Repeat after me: v2 API invocations (find operations) are limited to batches of 100 results at a time! But that's not the end of the story. With a small change, the query above can be tuned to iterate through all results that match the search criteria:

HistoricalDataItemValueCriteria criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100 // Default.

DataItemBridge dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
FindDataItemValueResult results = dbridge.findHistoricalValues(criteria)
tcount = 0
while ( (results = dbridge.findHistoricalValues(criteria)) != null && tcount < results.totalCount) {
  results.dataItems.each { res ->
    tcount++
  }
  criteria.pageNumber = criteria.pageNumber + 1
}

I currently recommend that people avoid using the count() or countDomainObjectByCriteria() functions if you are then going to call a find. Both the count*() and find functions compute the total result count, so calling both roughly doubles the execution time of those two calls. The total count is already computed when running the first find() operation, so the code pattern above is so far the most efficient way I have seen to run these operations on the platform.

So having covered how to do this in code (custom objects), let's turn our attention to the REST APIs, the other entry point for using these capabilities. The REST API doesn't offer a count*() function, but the first find() invocation (if using XML) brings back totalCount as part of the result set. You can use this in your application to decide how many times to call the REST endpoint to retrieve your data.

So for the example above:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/xml
Accept: application/xml

BODY:
<?xml version="1.0" encoding="UTF-8"?>
<HistoricalDataItemValueCriteria xmlns="http://www.axeda.com/services/v2" pageSize="100" pageNumber="1">
  <assetId>9701</assetId>
  <startDate>2014-07-23T12:33:00Z</startDate>
  <endDate>2014-07-23T12:35:02Z</endDate>
</HistoricalDataItemValueCriteria>

RESULTS:
<v2:FindAssetResult totalCount="1882" xmlns:v2="http://www.axeda.com/services/v2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <v2:criteria pageSize="100" pageNumber="1">
    <v2:name>*</v2:name>
    <v2:propertyNames/>
  </v2:criteria>
  <v2:assets>
  </v2:assets>
</v2:FindAssetResult>

Or with JSON:

POST: https://customer-sandbox.axeda.com/services/v2/dataItem/findHistoricalValues

HEADERS:
Content-Type: application/json
Accept: application/json

BODY:
{
  "id": 9701,
  "startDate": "2014-07-23T12:33:00Z",
  "endDate": "2014-07-23T12:35:02Z",
  "pageNumber": 1,
  "pageSize": 2
}

And that's how you work around the maxQueryResults limitation of the v1 APIs. Some APIs do not currently have matching v2 Bridges (e.g. MobileLocation and DataItemAssociation), in which case the limitation will still apply. Creative use of the query criteria will allow you to work around these limitations as we continue to improve the v2 API.

Regards,
-Chris
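As a small variation on the Groovy loop above (a sketch, not official Axeda sample code), the extra up-front call to findHistoricalValues() can be avoided by driving the loop off the totalCount returned with the first page. It reuses the same bridge and criteria classes as the example above; the per-row processing is a placeholder.

def criteria = new HistoricalDataItemValueCriteria()
criteria.assetId = '9701'
criteria.startDate = '2014-07-23T12:33:00Z'
criteria.endDate = '2014-07-23T12:44:00Z'
criteria.pageNumber = 1
criteria.pageSize = 100

def dbridge = com.axeda.sdk.v2.dsl.Bridges.dataItemBridge
def processed = 0
def total = -1
while (true) {
    def results = dbridge.findHistoricalValues(criteria)
    if (results == null) break
    if (total < 0) total = results.totalCount   // totalCount comes back with the first page
    results.dataItems.each { item ->
        processed++                              // replace with real per-row processing
    }
    if (processed >= total || !results.dataItems) break
    criteria.pageNumber++
}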
View full tip
Hi everyone,

As everyone knows already, the main way to define Properties inside the EMS Java SDK is to use annotations at the beginning of the VirtualThing class implementation. There are some use cases where we need to define those properties dynamically, at runtime, for example when we use a VirtualThing to push a sensor's data from a Device Cloud to the ThingWorx server for multiple customers. In this case the number of properties differs from customer to customer and, due to the large number of variations, we need to be able to define the Properties programmatically. The following code does just that:

// int_PropertiesLength, device_Properties and str_NodeValue come from your own
// discovery of the sensor's properties, done before this loop runs.
for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    PropertyDefinition pd;
    AspectCollection aspects = new AspectCollection();
    if (NumberUtils.isNumber(str_NodeValue))
    {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.NUMBER);
    }
    else if (str_NodeValue.equals("true") || str_NodeValue.equals("false"))
    {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.BOOLEAN);
    }
    else
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.STRING);

    //Add the dataChangeType aspect
    aspects.put(Aspects.ASPECT_DATACHANGETYPE, new StringPrimitive(DataChangeType.VALUE.name()));
    //Add the dataChangeThreshold aspect
    aspects.put(Aspects.ASPECT_DATACHANGETHRESHOLD, new NumberPrimitive(0.0));
    //Add the cacheTime aspect
    aspects.put(Aspects.ASPECT_CACHETIME, new IntegerPrimitive(0));
    //Add the isPersistent aspect
    aspects.put(Aspects.ASPECT_ISPERSISTENT, new BooleanPrimitive(false));
    //Add the isReadOnly aspect
    aspects.put(Aspects.ASPECT_ISREADONLY, new BooleanPrimitive(true));
    //Add the pushType aspect
    aspects.put("pushType", new StringPrimitive(DataChangeType.ALWAYS.name()));
    aspects.put(Aspects.ASPECT_ISLOGGED, new BooleanPrimitive(true));
    //Add the defaultValue aspect if needed...
    //aspects.put(Aspects.ASPECT_DEFAULTVALUE, new BooleanPrimitive(true));

    pd.setAspects(aspects);
    super.defineProperty(pd);
}

// You need to comment out initializeFromAnnotations() and use initialize() instead for this to work.
// super.initializeFromAnnotations();
super.initialize();

Please put this code in the constructor of your VirtualThing-extending implementation. It needs to run exactly once, at instance creation. The code relies on the manual discovery of the sensor properties that you do before this point. Depending on the implementation you can either do the discovery of the properties here in the constructor (too slow), or pass the result in as a constructor parameter (better).

Hope it helps!
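Once the properties are defined, you still need to feed them values at runtime. The fragment below is a hedged sketch of that step (it is not from the original post): it assumes the Java Edge SDK's VirtualThing methods setPropertyValue() and updateSubscribedProperties() behave as described here, it treats every property as a NUMBER for brevity, and readSensor() is a hypothetical placeholder for your own data-collection code.

// Somewhere in your processing loop, after the properties have been defined above.
for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    // readSensor() is a hypothetical placeholder for your own device-cloud / sensor read
    double value = readSensor(nNode.getNodeName());
    // Assumed SDK call: set the local (edge-side) value of the dynamically defined property
    super.setPropertyValue(nNode.getNodeName(), new NumberPrimitive(value));
}
// Assumed SDK call: push all pending property updates to the ThingWorx server (timeout in ms)
super.updateSubscribedProperties(10000);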
View full tip
Official name: DataStax Enterprise (DSE), sometimes referred to as Cassandra.

Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can be obtained through Technical Support.

ThingWorx 6.0 introduces DSE as a backend database that scales to a much greater data volume, as Neo4j performance limitations are hit at around 50 GB. Some of the main reasons to consider DSE are:

1. Elastic scalability -- Allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture -- Contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance -- Enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on).
4. Flexible data storage -- Easily accommodates the full range of data formats (structured, semi-structured and unstructured) that run through today's modern applications.
5. Easy data distribution -- Read and write to any node, with all changes automatically synchronized across the cluster, giving maximum flexibility to distribute data by replicating across multiple data centers, the cloud, and even mixed cloud/on-premise environments.

Note: Windows + DSE is currently not fully supported.

Connecting ThingWorx (prerequisite: a fully configured DSE database):

1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as the Persistence Provider Package.
5. In the Configuration tab:
   - For Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally
   - Enter a new or existing Cassandra keyspace name
   - Enter the Solr Cluster URL
   - Other fields can be left at their defaults (*)
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream or Data Table, set its Persistence Provider to the one created in the previous steps.

Currently all reads and writes are done through ThingWorx and all ThingWorx data is encoded in DSE. OpsCenter still allows you to see the connected Streams, Data Tables and Value Streams.

(*) SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments because it is much easier to expand to multiple data centers when required by future growth.

Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality a node is not at all limited by the size of the data, only by the rate of operations. A node might have only 80 GB of data on it, but if it is continuously hit with random reads and doesn't have a lot of RAM, it might not be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it is rarely read from, or if only a small portion of the data is hot (so it can be effectively cached), it will do just fine. If the replication factor is above 1 and there are no reads at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from the client's perspective.
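As an illustration of the keyspace note above (a sketch, not part of the original tip), the replication strategy is chosen when the keyspace is created in CQL; the keyspace name TWXDB and the data-center names and replication factors below are placeholders that must match your own cluster topology.

-- Single data center (development / test)
CREATE KEYSPACE IF NOT EXISTS "TWXDB"
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

-- Multiple data centers (recommended for most deployments)
CREATE KEYSPACE IF NOT EXISTS "TWXDB"
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};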
View full tip
Hi Community,   I've recently had a number of questions from colleagues around architectures involving MQTT and what our preferred approach was.  After some internal verification, I wanted to share an aggregate of my findings with the ThingWorx Architect and Developer Community.   PTC currently supports four methods for integrating with MQTT for IoT projects. ThingWorx Azure IoT Hub Connector ThingWorx MQTT Extension ThingWorx Kepware Server Choice is nice, but it adds complexity and sometimes confusion.  The intent of this article is to clarify and provide direction on the subject to help others choose the path best suited for their situation.   ThingWorx MQTT Extension The ThingWorx MQTT extension has been available on the marketplace as an unsupported “PTC Labs” extension for a number of years.  Recently its status has been upgraded to “PTC Supported” and it has received some attention from R&D getting some bug fixes and security enhancements.  Most people who have used MQTT with ThingWorx are familiar with this extension.  As with anything, it has advantages and disadvantages.  You can easily import the extension without having administrative access to the machine, it’s easy to move around and store with projects, and can be up and running quite quickly.  However it is also quite limited when it comes to the flexibility required when building a production application, is tied directly to the core platform, and does not get feature/functionality updates.   The MQTT extension is a good choice for PoCs, demos, benchmarks, and prototypes as it provides MQTT integration relatively quickly and easily.  As an extension which runs with the core platform, it is not a good choice as a part of a client/enterprise application where MQTT communication reliability is critical.   ThingWorx Azure IoT Hub Connector Although Azure IoT Hub is not a fully functional MQTT broker, Azure IoT does support MQTT endpoints on both IoT Hub and IoT Edge.  This can be an interesting option to have MQTT devices publish to Azure IoT and be integrated to ThingWorx using the Azure IoT Hub Connector without actually requiring an MQTT broker to run and be maintained.  The Azure IoT Hub Connector works similarly to the PAT and is built on the Connection Server, but adds the notion of device management and security provided by Azure IoT.   When using Azure IoT Edge configured as a transparent gateway with buffering (store and forward) enabled, this approach has the added benefit of being able to buffer MQTT device messages at a remote site with the ability to handle Internet interruptions without losing data.   This approach has the added benefit of having far greater integrated security capabilities by leveraging certificates and tying into Azure KeyVault, as well as easily scaling up resources receiving the MQTT messages (IoT Hub and Azure IoT Hub Connector).  Considering that this approach is build on the Connection Server core, it also follows our deployment guidance for processing communications outside of the core platform (unlike the extension approach).   ThingWorx Kepware Server As some will note, KepWare has some pretty awesome MQTT capabilities: both as north and southbound interfaces.  The MQTT Client driver allows creating an MQTT channel to devices communicating via MQTT with auto-tag creation (from the MQTT payload).  Coupled with the native ThingWorx AlwaysOn connection, you can easily connect KepWare to an on-premise MQTT broker and connect these devices to ThingWorx over AlwaysOn.   
The IoT Gateway plug-in has an MQTT agent which allows publishing data from all of your KepWare connected devices to an MQTT broker or endpoint.  The MQTT agent can also receive tag updates on a different topic and write back to the controllers.  We’ve used this MQTT agent to connect industrial control system data to ThingWorx through cloud platforms like Azure IoT, AWS, and communications providers.   ThingWorx Product Segment Direction A key factor in deciding how to design your solution should be aligned with our product development direction.  The ThingWorx Product Management and R&D teams have for years been putting their focus on scalable and enterprise-ready approaches that our partners and customers can build upon.  I mention this to make it clear that not all supported approaches carry the same weight.  Although we do support the MQTT extension, it is not in active development due to the fact that out-of-platform microservices-based communication interfaces are our direction forward.   The Azure IoT Hub Connector, being built on the Connection Server is currently the way forward for MQTT communications to the ThingWorx Foundation.   Regards,   Greg Eva
View full tip
A student in the Axeda Groovy course had some good questions. Instead of answering via email, I thought I'd answer here, so we could share the knowledge. Domain Objects - How can I customize or extend Axeda provided domain objects? (Eg., I need to store whether a user is external or internal user) How can I achieve this? This is a perfect use case for the Axeda Extended Data API. Documentation on the API can be found in the Axeda Platform v1 API Developer's Reference Guide. A good, "Getting Started," topic is available on Axeda Mentor - Getting Started With the Axeda Extended Data API. To customize/extend/decorate Axeda-provided domain objects, use Extended Objects. Extended Objects can be thought of in the abstract as a collection of custom database table rows. The analogy isn't perfect, but it helps with understanding. The use case in the question is very common. Extended Objects can stand on their own, or they can be associated with an Axeda domain object with the setInternalID() method. In the following sample code, we "decorate" an Axeda User object with two new fields, isExternalUser and companyID: private void updateAdditionalFieldsForUser(User user, boolean isExternalUser, String companyID) {   // GET OBJECT TYPE - This example assumes the ExtendedObjectType has already been created   ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")   // GET PROPERTY TYPES   PropertyType isExternalUserPropertyType = findOrCreatePropertyType(extObjectType, "IS_EXTERNAL_USER")   PropertyType companyIdPropertyType = findOrCreatePropertyType(extObjectType, "COMPANY_ID")      /* GET THE EXTENDED OBJECT   * Note - the example provides a findExtendedObject() method that searches by INTERNAL ID. This linkage via the internal ID   * associates a domain object to an ExtendedObject, allowing us to "decorate" it with custom attributes   */   ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)   if (extObject == null)   {     extObject = new ExtendedObject()     extObject.setExtendedObjectType(extObjectType)     extObject.setInternalObjectId(user.id.value)     extObject.addProperty(createExtendedProperty(isExternalUserPropertyType, isExternalUser.toString()))     extObject.addProperty(createExtendedProperty(companyIdPropertyType, companyID))     extendedObjectService.createExtendedObject(extObject)   }   else   {     addOrUpdateExtendedProperty(extObject, isExternalUserPropertyType, "IS_EXTERNAL_USER", isExternalUser.toString())     addOrUpdateExtendedProperty(extObject, companyIdPropertyType, "COMPANY_ID", companyID)     extendedObjectService.updateExtendedObject(extObject)   } }  /**  * Adds or Updates an extended property.  *  * @param extendedObject the extended object to associate with the property  * @param propertyType the type of the property  * @param propertyName the name of the property  * @param propertyValue the value of the property  */ private void addOrUpdateExtendedProperty(ExtendedObject extendedObject, PropertyType propertyType, String propertyName, String propertyValue) {   Property extendedProperty = extendedObject.getPropertyByName(propertyName)   if (extendedProperty == null)   {   extendedProperty = createExtendedProperty(propertyType, propertyValue)   extendedObject.addProperty(extendedProperty)   }   else   {   extendedProperty.setValue(propertyValue)   } }  /**  * Retrieves and returns the extended object with the given type and internal id.  
*  * @param extObjectType the type of the desired external object  * @param internalObjectId the internal id of the desired extended object  * @return the extended object matching the given details or <code>null</code> in case there's no match  */ private ExtendedObject findExtendedObject(ExtendedObjectType extObjectType, Long internalObjectId) {   ExtendedObjectSearchCriteria searchCriteria = new ExtendedObjectSearchCriteria()   searchCriteria.setExtendedObjectTypeId(extObjectType.getId())   searchCriteria.setInternalObjectId(internalObjectId)    List<ExtendedObject> extObjects = extendedObjectService.findExtendedObjects(searchCriteria, -1, 0, "name")    if (!extObjects?.isEmpty())   {    return extObjects[0]   }   return null }  /**  * Gets a property type given its name and the associated extended object type.  * If it cannot find the property type it will create it.  *  * @param extendedObjectType the associated extended object type for the property type  * @param propertyTypeName the property type name  * @return the property type  */  private PropertyType findOrCreatePropertyType(ExtendedObjectType extendedObjectType, String propertyTypeName)  {     PropertyType propertyType = extendedObjectType.getPropertyTypeByName(propertyTypeName)     if (propertyType == null)     {       propertyType = new PropertyType()       propertyType.setDataType(PropertyDataType.String)       propertyType.setName(propertyTypeName)       propertyType.setExtendedObjectType(extendedObjectType)       extendedObjectService.createPropertyType(propertyType)       extendedObjectType.addPropertyType(propertyType)     }     return propertyType  } By using Extended Objects, and associating them with Axeda domain objects via an internal ID, we can decorate/extend/customize those domain objects with use-case-specific attributes.
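For completeness, here is a short sketch (not from the original post) of reading the decorated attributes back, reusing the findExtendedObject() helper defined above; it assumes the same extendedObjectService binding, extended object type and property names used in the example.

ExtendedObjectType extObjectType = extendedObjectService.findExtendedObjectTypeByClassname("com.axeda.drm.sdk.user.User")
ExtendedObject extObject = findExtendedObject(extObjectType, user.id.value)
if (extObject != null) {
    def isExternal = extObject.getPropertyByName("IS_EXTERNAL_USER")?.value
    def companyId  = extObject.getPropertyByName("COMPANY_ID")?.value
    // use the values, e.g. to branch permission or routing logic for external users
}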
View full tip
Attached (as PDF) are some steps to quickly get started with the Thingworx MQTT Extension so that you can subscribe / publish topics.
View full tip
The Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices to connect to and communicate with the ThingWorx Platform.   In this blog, I will be dabbling with the MQTT Sample Project that uses the MQTT Channel introduced in PAT 1.1.2.   Preamble All the PAT sample projects are documented in detail in their respective README.md. This post is an illustrated Walk-thru for the MQTT Sample project, please review its README.md for in depth information. More reading in Protocol Adapter Toolkit (PAT) overview PAT 1.1.2 is supported with ThingWorx Platform 8.0 and 8.1 - not fully supported with 8.2 yet.   MQTT Connector features The MQTT Sample project provides a Codec implementation that service MQTT requests and a command line MQTT client to test the Connector. The sample MQTT Codec handles Edge initiated requests read a property from the ThingWorx Platform write a property to the ThingWorx Platform execute a service on the ThingWorx Platform send an event to the ThingWorx Platform (also uses a ServiceEntityNameMapper to map an edgeId to an entityName) All these actions require a security token that will be validated by a Platform service via a InvokeServiceAuthenticator.   This Codec also handles Platform initiated requests (egress message) write a property to the Edge device execute a service without response on the Edge device  Components and Terminology       Mqtt Messages originated from the Edge Device (inbound) are published to the sample MQTT topic Mqtt Messages originated from the Connector (outbound) are published to the mqtt/outbound MQTT topic   Codec A pluggable component that interprets Edge Messages and converts them to ThingWorx Platform Messages to enable interoperability between an Edge Device and the ThingWorx Platform. Connector A running instance of a Protocol Adapter created using the Protocol Adapter Toolkit. Edge Device The device that exists external to the Connector which may produce and/or consume Edge Messages. (mqtt) Edge Message A data structure used for communication defined by the Edge Protocol.  An Edge Message may include routing information for the Channel and a payload for Codec. Edge Messages originate from the Edge Device (inbound) as well as the Codec (outbound). (mqtt) Channel The specific mechanism used to transmit Edge Messages to and from Edge Devices. The Protocol Adapter Toolkit currently includes support for HTTP, WebSocket, MQTT, and custom Channels. A Channel takes the data off of the network and produces an Edge Message for consumption by the Codec and takes Edge Messages produced by the Codec and places the message payload data back onto the network. Platform Connection The connection layer between a Connector and ThingWorx core Platform Message The abstract representation of a message destined for and coming from the ThingWorx Platform (e.g. WriteProperty, InvokeService). Platform Messages are produced by the Codec for incoming messages and provided to the Codec for outgoing messages/responses.   
Installation and Build  Protocol Adapter Toolkit installation The media is available from PTC Software Downloads : ThingWorx Connection Server > Release 8.2 > ThingWorx Protocol Adapter Toolkit Just unzip the media on your filesystem, there is no installer The MQTT Sample Project is available in <protocol-adapter-toolkit>\samples\mqtt Eclipse Project setup Prerequisite : Eclipse IDE (I'm using Neon.3 release) Eclipse Gradle extension - for example the Gradle IDE Pack available in the Eclipse Marketplace Import the MQTT Project : File > Import > Gradle (STS) > Gradle (STS) Project Browser to <protocol-adapter-toolkit>\samples\mqtt, then [Build Model] and select the mqtt project     Review the sample MQTT codec and test client Connector : mqtt > src/main/java > com.thingworx.connector.sdk.samples.codec.MqttSampleCodec decode : converts an MqttEdgeMessage to a PlatformRequest encode (3 flavors) : converts a PlatformMessage or an InfoTable or a Throwable to a MqttEdgeMessage Note that most of the conversion logic is common to all sample projects (websocket, rest, mqtt) and is done in an helper class : SampleProtocol The SampleProtocol sources are available in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project - it can be imported in eclipse the same way as the mqtt. SampleTokenAuthenticator and SampleEntityNameMapper are also defined in the <protocol-adapter-toolkit>\samples\connector-api-sample-protocol project. Client : mqtt > src/client/java > com.thingworx.connector.sdk.samples.MqttClient Command Line MQTT client based on Eclipse Paho that allows to test edge initiated and platform initiated requests. Build the sample MQTT Connector and test client Select the mqtt project then RMB > Gradle (STS) > Task Quick Launcher > type Clean build +  [enter] This creates a distributable archive (zip+tar) in <protocol-adapter-toolkit>\samples\mqtt\build\distributions that packages the sample mqtt connector, some startup scripts, an xml with sample entities to import on the platform and a sample connector.conf. Note that I will test the connector and the client directly from Eclipse, and will not use this package. Runtime configuration and setup MQTT broker I'm just using a Mosquitto broker Docker image from Docker Hub​   docker run -d -p 1883:1883 --name mqtt ncarlier/mqtt  ThingWorx Platform appKey and ConnectionServicesExtension From the ThingWorx Composer : Create an Application Key for your Connector (remember to increase the expiration date - to make it simple I bind it to Administrator) Import the ConnectionServicesExtension-x.y.z.zip and pat-extension-x.y.z.zip extensions available in <protocol-adapter-toolkit>\requiredExtensions  Connector configuration Edit <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf Update the highlighted entries below to match your configuration :   include "application" cx-server {   connector {     active-channel = "mqtt"     bind-on-first-communication = true     channel.mqtt {       broker-urls = [ "tcp://localhost:1883" ]       // at least one subscription must be defined       subscriptions {        "sample": [ "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec", 1 ]       }       outbound-codec-class = "com.thingworx.connector.sdk.samples.codec.MqttSampleCodec"     }   }   transport.websockets {     app-key = "00000000-0000-0000-0000-000000000000"     platforms = "wss://thingWorxServer:8443/Thingworx/WS"   }   // Health check service default port (9009) was in used on my machine. 
Added the following block to change it.   health-check {      port = 9010   } }  Start the Connector Run the Connector directly from Eclipse using the Gradle Task RMB > Run As ... > Gradle (STS) Build (Alternate technique)  Debug as Java Application from Eclipse Select the mqtt project, then Run > Debug Configurations .... Name : mqtt-connector Main class:  com.thingworx.connectionserver.ConnectionServer On the argument tab add a VM argument : -Dconfig.file=<protocol-adapter-toolkit>\samples\mqtt\src\main\dist\connector.conf Select [Debug]  Verify connection to the Platform From the ThingWorx Composer, Monitoring > Connection Servers Verify that a Connection Server with name protocol-adapter-cxserver-<uuid> is listed  Testing  Import the ThingWorx Platform sample Things From the ThingWorx Composer Import/Export > From File : <protocol-adapter-toolkit>\samples\mqtt\src\main\dist\SampleEntities.xml Verify that WeatherThing, EntityNameConverter and EdgeTokenAuthenticator have been imported. WeatherThing : RemoteThing that is used to test our Connector EdgeTokenAuthenticator : holds a sample service (ValidateToken) used to validate the security token provided by the Edge device EntityNameConverter : holds a sample service (GetEntityName) used to map an edgeId to an entityName  Start the test MQTT client I will run the test client directly from Eclipse Select the mqtt project, then Run > Run Configurations .... Name : mqtt-client Main class:  com.thingworx.connector.sdk.samples.MqttClient On the argument tab add a Program argument : tcp://<mqtt_broker_host>:1883 Select [Run] Type the client commands in the Eclipse Console  Test Edge initiated requests     Read a property from the ThingWorx Platform In the MQTT client console enter : readProperty WeatherThing temp   Sending message: {"propertyName":"temp","requestId":1,"authToken":"token1234","action":"readProperty","deviceId":"WeatherThing"} to topic: sample Received message: {"temp":56.3,"requestId":1} from topic: mqtt/outbound Notes : An authToken is sent with the request, it is validated by a platform service using the SampleTokenAuthenticator (this authenticator is common to all the PAT samples and is defined in <protocol-adapter-toolkit>\samples\connector-api-sample-protocol) EntityNameMapper is not used by readProperty (no special reason for that) The PlatformRequest message built by the codec is ReadPropertyMessage   Write a property to the ThingWorx Platform In the MQTT client console enter : writeProperty WeatherThing temp 20   Sending message: {"temp":"20","propertyName":"temp","requestId":2,"authToken":"token1234","action":"writeProperty","deviceId":"WeatherThing"} to topic: sample Notes : An authToken is sent with the request, it is validated by a platform service using the SampleTokenAuthenticator EntityNameMapper is not used by writeProperty The PlatformRequest message built by the codec is WritePropertyMessage No Edge message is sent back to the device   Send an event to the ThingWorx Platform   In the MQTT client console enter : fireEvent Weather WeatherEvent SomeDescription   Sending message: {"requestId":5,"authToken":"token1234","action":"fireEvent","eventName":"WeatherEvent","message":"Some description","deviceId":"Weather"} to topic: sample Notes : An authToken is sent with the request, it is validated by a platform service using the SampleTokenAuthenticator fireEvent uses a EntityNameMapper (SampleEntityNameMapper) to map the deviceId (Weather) to a Thing name (WeatherThing), the mapping is done by a platform service The 
PlatformRequest message built by the codec is FireEventMessage No Edge message is sent back to the device   Execute a service on the ThingWorx Platform ... can be tested with the GetAverageTemperature on WeatherThing ... Test Platform initiated requests     Write a property to the Edge device The MQTT Connector must be configured to bind the Thing with the Platform when the first message is received for the Thing. This was done by setting the bind-on-first-communication=true in connector.conf When a Thing is bound, the remote egress messages will be forwarded to the Connector The Edge initiated requests above should have done the binding, but if the Connector was restarted since, just bind again with : readProperty WeatherThing isConnected From the ThingWorx composer update the temp property value on WeatherThing to 30 An egress message is logged in the MQTT client console :   Received message: {"egressMessages":[{"propertyName":"temp","propertyValue":30,"type":"PROPERTY"}]} from topic: mqtt/outbound   Execute a service on the ThingWorx Platform ... can be tested with the SetNtpService on WeatherThing ...
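Outside of the bundled command-line client, any MQTT library can drive the same flow. Below is a minimal sketch (not part of the PAT samples) that uses Eclipse Paho to publish the readProperty request shown above to the inbound topic, assuming the broker URL and "sample" topic configured in connector.conf; to see the Connector's answer, subscribe to mqtt/outbound with any MQTT client.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ReadPropertyPublisher {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://localhost:1883", MqttClient.generateClientId());
        client.connect();

        // Same JSON shape as the readProperty example above
        String payload = "{\"propertyName\":\"temp\",\"requestId\":1,\"authToken\":\"token1234\","
                + "\"action\":\"readProperty\",\"deviceId\":\"WeatherThing\"}";
        MqttMessage message = new MqttMessage(payload.getBytes("UTF-8"));
        message.setQos(1);

        // "sample" is the inbound topic the Connector subscribes to in connector.conf
        client.publish("sample", message);
        client.disconnect();
    }
}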
View full tip
The Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices (without native AlwaysOn support) to connect to and communicate with the ThingWorx Platform. A typical use case is edge communication using a protocol that can't be changed (e.g. MQTT). Prior to PAT, developers had to use the ThingWorx (Edge or Platform) SDKs, or the ThingWorx REST interface, to enable the edge devices to communicate with ThingWorx.

Overview

PAT provides three main components: the Channel, the Codec, and the ThingWorx Platform Connection.

The Channel implements a network protocol to communicate directly with the Edge Device. Its responsibilities include reading data from an Edge Device, writing data to an Edge Device, and routing data to the correct Codec. You can implement your own custom Channel or use one of the out-of-the-box Channels provided by PTC: WebSocket, HTTP (1.0.x) and MQTT (1.1.x).

The Codec translates messages from your edge devices into messages that the ThingWorx platform can process (property read/write, service call, events), and provides a means to take the results of those actions and turn them back into messages for the device. You must implement the Codec.

The Platform Connection layer sends and receives messages with the ThingWorx platform.

Note: the PAT Connector capabilities depend on the edge protocol and the Channel implementation.

Installation

The PAT installation media contains:
- README.md - start here
- SDK (Java API) and runtime libraries
- PAT skeleton project (Gradle)
- Sample Codec implementations for the WebSocket, HTTP, and MQTT channels (Gradle)
- Sample custom Channel implementation (basic TCP protocol adapter) (Gradle)
- Required extensions to be installed on the platform: ConnectionServicesExtension and pat-extension

Reference Documents

- ThingWorx Protocol Adapter Toolkit Developers Guide 1.0.0
- README.md in various levels of the installation folders
- ThingWorx Connection Services and Compatibility Matrix 1.0.0

Related Knowledge

- Protocol Adapter Toolkit - MQTT Sample Project hands-on (1.1.x)
View full tip
Video Author: Stefan Tatka
Original Post Date: June 6, 2016
Description: This ThingWorx tutorial will demonstrate how to configure and initiate remote file transfers using the .NET SDK.
View full tip
I have put together a small sample of how to get property values from a Windows PowerShell command into ThingWorx through an agent using the Java SDK. In order to use it you need to import the entities from ExampleExport.xml and then run SteamSensorClient.java, passing in the parameters shown in run-configuration.txt (URL, port and AppKey must be adapted to your system). ExampleExport.xml is a sample file distributed with the Java SDK which I've also included in the zip file attached to this post. To import it, go in ThingWorx Composer to Import/Export > Import from File > Entities > Single File > Choose File > Import. Further instructions and details are given in this short video: Video Link: 2181
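The attached sample does the PowerShell call inside the agent; as a standalone sketch of that piece (not taken from the attached project), the snippet below shows one way to run a PowerShell command from Java and capture its output so the value can then be written to a property via the Java SDK. The PowerShell command and the final property handling are placeholders to adapt to your own Thing.

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PowerShellReader {
    public static void main(String[] args) throws Exception {
        // Placeholder command: read the CPU load; replace with whatever value your Thing needs
        ProcessBuilder pb = new ProcessBuilder("powershell.exe", "-NoProfile", "-Command",
                "(Get-CimInstance Win32_Processor).LoadPercentage");
        pb.redirectErrorStream(true);
        Process process = pb.start();

        StringBuilder output = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line.trim());
            }
        }
        process.waitFor();

        // The agent would now push this value to the platform, e.g. as a VirtualThing property update
        System.out.println("PowerShell returned: " + output);
    }
}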
View full tip
When I tried to set String property values with Chinese letters by using C SDK, I could see only broken characters in Thingworx. The cause of the problem is simple and it's encoding. In order to solve this problem, you need to convert encoding to UTF-8 in your C code. I used 'libiconv' library to do it. This guide is for Win32 version. If you want to make a library for other platforms such as Linux, you can create or get a library for your own by googling. How to Get the Source Code of libiconv At the moment, the most recent version of libiconv is 1.15. You can download the source code of libiconv from here. How to Build I used MS Visual Studio 2012, but the explanation can be applied to the earlier versions of MS Visual Studio and express editions. Step 1. Download the most recent version of libiconv and unzip the file. Step 2. Make a new Win32 Project. Let's say "libiconv" as the project name. Check to create directory for solution. Choose DLL as the application type and check Empty Project for additional options. Click the button "Finish" to generate the new project. Step 3. Copy files from the folders of libiconv to project folders. To build "libiconv", you need to compile three files "localcharset.c", "relocatable.c" and "iconv.c". That's the key! Copy three files "relocatable.h", "relocatable.c" and "iconv.c" in the folder "...\libiconv-1.15\lib\" to the project folder "...\libiconv\libiconv\". Copy "...\libiconv-1.15\libcharset\lib\localcharset.c" to the project folder "...\libiconv\libiconv\". Copy "...\libiconv-1.15\libcharset\include\localcharset.h.build.in" to the project folder "...\libiconv\libiconv\" and then, rename the copied "localcharset.h.build.in" to "localcharset.h". Copy "...\libiconv-1.15\windows\libiconv.rc" to the project folder "...\libiconv\libiconv\". Make folder "include" under the project folder "...\libiconv\". Copy "...\libiconv-1.15\include\iconv.h.build.in" to the project include folder "...\libiconv\include" and then, rename the copied "iconv.h.build.in" to "iconv.h". Copy "...\libiconv-1.15\config.h.in" to the project include folder "...\libiconv\include" and then, rename the copied "config.h.in" to "config.h". Copy all the header files (*.h) and definition files (*.def) in the folder "...\libiconv-1.15\lib" to the project include folder "...\libiconv\include". Step 4. Add existing items. Execute "project > Add Existing items..." at the main menu to add existing items to the project. Step 5. Project Settings. You can make 64-bit platform through configuration manager in order to generate libiconv.dll for 64-bit system. You can also make two other configurations "ReleaseStatic" and "DebugStatic" in order to generate libiconvStatic.lib as a static link library. At the project properties, change Output Directory as "$(SolutionDir)$(Configuration)_$(Platform)\" and Intermediate Directory as "$(SolutionDir)obj\$(ProjectName)\$(Configuration)_$(Platform)\". Change Include Directories as "..\include;$(IncludePath)": You have to add "BUILDING_LIBICONV" and "BUILDING_LIBCHARSET" to Peprocessor Definitions of all Platforms and of all configurations. You'd better set Runtime Library to "Multi-threaded" when building dynamic link library libiconv.dll. Then, the dependency on VC Runtime library can be controlled by the applications that will be built and dynamically linked with libiconv.dll because libiconv.dll does not need VC Runtime library but only the application that uses libiconv.dll may or may not need VC Runtime library. 
However, when building the static link library libiconvStatic.lib, you can choose Runtime Library option for libiconvStatic.lib depending on the application that uses libiconvStatic.lib. You have to change Precompiled Header option to "Not Using Precompiled Headers". Step 6. Tweak the source code. libiconv.rc Open libiconv.rc with text editor or the source code editor of Visual Studio IDE by double-clicking libiconv.rc in the Solution explorer and insert some code at line 4 as follows: /////////////    ADD    ///////////// #define PACKAGE_VERSION_MAJOR 1 #define PACKAGE_VERSION_MINOR 14 #define PACKAGE_VERSION_SUBMINOR 0 #define PACKAGE_VERSION_STRING "1.14" ///////////////////////////////////// You may be asked to change Line endings to "Windows (CR LF)". Then, let it do so. It will be more convenient for you if you mainly use Windows. localcharset.c Open localcharset.c and delete or comment the lines 80 - 83 as follows: //////////////////  DELETE //////////////// ///* Get LIBDIR.  */ //#ifndef LIBDIR //# include "configmake.h" //#endif /////////////////////////////////////////// iconv.c Open iconv.c and delete or comment the lines 250 - 252 and add three lines there as follows: ///////////////////////// DELETE /////////////////////// //size_t iconv (iconv_t icd, //              ICONV_CONST char* * inbuf, size_t *inbytesleft, //              char* * outbuf, size_t *outbytesleft) /////////////////////////   ADD   ////////////////////// size_t iconv (iconv_t icd,               const char* * inbuf, size_t *inbytesleft,               char* * outbuf, size_t *outbytesleft) //////////////////////////////////////////////////////// localcharset.h Open localcharset.h and delete or comment the lines 21 - 25 and add 7 lines there as follows: /////////////////////////   DELETE  //////////////////////// //#if @HAVE_VISIBILITY@ && BUILDING_LIBCHARSET //#define LIBCHARSET_DLL_EXPORTED __attribute__((__visibility__("default"))) //#else //#define LIBCHARSET_DLL_EXPORTED //#endif /////////////////////////    ADD    ////////////////////// #ifdef BUILDING_LIBCHARSET #define LIBCHARSET_DLL_EXPORTED __declspec(dllexport) #elif USING_STATIC_LIBICONV #define LIBCHARSET_DLL_EXPORTED #else #define LIBCHARSET_DLL_EXPORTED __declspec(dllimport) #endif //////////////////////////////////////////////////////////////////// config.h Open config.h in the project include folder "...\libiconv\include" and delete or comment the lines 29 - 30 as follows: ///////////////////////// DELETE /////////////////////// ///* Define as good substitute value for EILSEQ. */ //#undef EILSEQ //////////////////////////////////////////////////////// Otherwise you can redefine EILSEQ as good substitute value. 
iconv.h Open iconv.h in the project include folder "...\libiconv\include" and delete or comment the line 175 and add 1 line as follows: /////////////////////////  DELETE  /////////////////////// //#if @HAVE_WCHAR_T@ /////////////////////////    ADD   ////////////////////// #if HAVE_WCHAR_T //////////////////////////////////////////////////////////////////////////////// Delete or comment the line 128 and add 1 line as follows: /////////////////////////  DELETE  /////////////////////// //#if @USE_MBSTATE_T@ /////////////////////////   ADD   ////////////////////// #if USE_MBSTATE_T //////////////////////////////////////////////////////////////////////////////// Delete or comment the lines 107-108 and add 2 lines as follows: /////////////////////////  DELETE  /////////////////////// //#if @USE_MBSTATE_T@ //#if @BROKEN_WCHAR_H@ /////////////////////////  ADD  ////////////////////// #if USE_MBSTATE_T #if BROKEN_WCHAR_H //////////////////////////////////////////////////////////////////////////////// Delete or comment the line 89 and add 2 lines as follows: /////////////////////////  DELETE /////////////////////// //extern LIBICONV_DLL_EXPORTED size_t iconv (iconv_t cd, @ICONV_CONST@ char* * inbuf, //size_t *inbytesleft, char* * outbuf, size_t *outbytesleft); /////////////////////////    ADD   ////////////////////// extern LIBICONV_DLL_EXPORTED size_t iconv (iconv_t cd, const char* * inbuf,   size_t *inbytesleft, char* * outbuf, size_t *outbytesleft); //////////////////////////////////////////////////////////////////////////////// Delete or comment the lines 25 - 30 and add 8 lines as follows: /////////////////////////  DELETE /////////////////////// //#if @HAVE_VISIBILITY@ && BUILDING_LIBICONV //#define LIBICONV_DLL_EXPORTED __attribute__((__visibility__("default"))) //#else //#define LIBICONV_DLL_EXPORTED //#endif //extern LIBICONV_DLL_EXPORTED @DLL_VARIABLE@ int _libiconv_version; /* Likewise */ /////////////////////////    ADD   ////////////////////// #if BUILDING_LIBICONV #define LIBICONV_DLL_EXPORTED __declspec(dllexport) #elif USING_STATIC_LIBICONV #define LIBICONV_DLL_EXPORTED #else #define LIBICONV_DLL_EXPORTED __declspec(dllimport) #endif extern LIBICONV_DLL_EXPORTED int _libiconv_version; /* Likewise */ //////////////////////////////////////////////////////////////////////////////// How to Use When you use newly built libiconv, the only header file that you need is iconv.h. You will need to link either the import library libiconv.lib or the static library libiconvStatic.lib in your project property or write the code in one of your source file as follows: #pragma comment (lib, "libiconv.lib") or #pragma comment (lib, "libiconvStatic.lib") In the source of the application that uses this library either libiconv.dll or libiconvStatic.lib, if you don't define anything but only include iconv.h, your application will use libiconv.dll while it will use libiconvStatic.lib if you define USING_STATIC_LIBICONV before you include iconv.h in your application as follows: //#define USING_STATIC_LIBICONV #include <iconv.h> And in C SDK code, I used a "SteamSensorWithFileTransferAndTunneling" sample code and added codes in the "dataCollectionTask" function as below. 
void dataCollectionTask(DATETIME now, void * params) {
  iconv_t ic;
  char* in_buf = "Hi, 문건영";
  char *to_chrset = "UTF-8";
  char *from_chrset = "EUC-KR";
  size_t in_size = strlen(in_buf);
  size_t out_size = sizeof(wchar_t) * in_size * 4;
  char* out_buf = malloc(out_size);
  /* Caution: iconv's inbuf and outbuf are double pointers, so define separate
     working pointers and pass their addresses. */
  char* in_ptr = in_buf;
  char* out_ptr = out_buf;
  size_t out_buf_left;
  size_t result;
  memset(out_buf, 0x00, out_size);

  ic = iconv_open(to_chrset, from_chrset);
  if (ic == (iconv_t) -1)
  {
    printf("iconv_open failed: conversion from %s to %s is not supported\n", from_chrset, to_chrset);
    exit(1);
  }
  printf("input len = %d, %s\n", (int)in_size, in_buf);
  out_buf_left = out_size;
  result = iconv(ic, &in_ptr, &in_size, &out_ptr, &out_buf_left);
  /* printf("input len = %d, output len = %d %s\n", (int)in_size, (int)(out_size - out_buf_left), out_buf); */
  iconv_close(ic);

  /* TW_LOG(TW_TRACE,"dataCollectionTask: Executing"); */
  properties.TotalFlow = rand()/(RAND_MAX/10.0);
  properties.Pressure = 18 + rand()/(RAND_MAX/5.0);
  properties.Location.latitude = properties.Location.latitude + ((double)(rand() - RAND_MAX))/RAND_MAX/5;
  properties.Location.longitude = properties.Location.longitude + ((double)(rand() - RAND_MAX))/RAND_MAX/5;
  properties.Temperature  = 400 + rand()/(RAND_MAX/40);
  /* Point the String property at the converted UTF-8 buffer. Note that out_buf
     is not freed here, so each invocation leaks one buffer unless it is freed
     after the property update has been sent. */
  properties.BigGiantString = out_buf;

  /* Check for a fault. Only do something if we haven't already. */
  if (properties.Temperature > properties.TemperatureLimit && properties.FaultStatus == FALSE) {
    twInfoTable * faultData = 0;
    char msg[140];
    properties.FaultStatus = TRUE;
    properties.InletValve = TRUE;
    sprintf(msg,"%s Temperature %2f exceeds threshold of %2f",
      thingName, properties.Temperature, properties.TemperatureLimit);
    faultData = twInfoTable_CreateFromString("message", msg, TRUE);
    twApi_FireEvent(TW_THING, thingName, "SteamSensorFault", faultData, -1, TRUE);
    twInfoTable_Delete(faultData);
  }
  /* Update the properties on the server */
  sendPropertyUpdate();
}
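One refinement worth making (a sketch only, not part of the original sample): iconv() returns (size_t)-1 on failure and sets errno, so the conversion result can be checked before out_buf is assigned to the BigGiantString property. This assumes <errno.h> and <string.h> are included at the top of the file.

result = iconv(ic, &in_ptr, &in_size, &out_ptr, &out_buf_left);
if (result == (size_t)-1) {
    /* EILSEQ: invalid multibyte sequence in the input
       EINVAL: incomplete multibyte sequence at the end of the input
       E2BIG : the output buffer is too small */
    printf("iconv conversion failed: %s\n", strerror(errno));
}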
This post covers the challenges I ran into while setting up the .NET SDK based ADO Service for a SQL Server database connection. I'll start the setup from scratch to give the full picture.

Pre-requisites
1. Download and install Microsoft SQL Server Express or Enterprise edition; for testing I worked with the Express edition: https://www.microsoft.com/en-us/sql-server/sql-server-editions-express
2. Once installed, make sure the TCP/IP protocol is enabled for the SQL Server instance in the SQL Server Configuration Manager
3. Download the ThingWorx Edge ADO Service from the PTC Software download page

What is the ThingWorx ADO Service?
An ActiveX Data Objects (ADO) based service that connects a Microsoft data source, e.g. MS SQL Server or MS Excel, or an MS .NET application to the ThingWorx platform. It is based on the ThingWorx .NET SDK.

Installing the ADO Service
Let me begin by saying this is just a rough summary of the ThingWorx Edge ADO Service Configuration Guide, so when in doubt it's strongly recommended to go through the guide, which is also provided with the downloaded package. I'll be using ThingWorx ADO Service v5.6.1, the most recent release, for the purpose of this blog.

Depending on whether you are on x86 or x64 Windows, navigate to C:\Windows\Microsoft.NET to find InstallUtil.exe. You'll find the file under one of the following two locations; use the one that applies to your case:
i) For x64: C:\Windows\Microsoft.NET\Framework64\v4.0.30319
ii) For x86: C:\Windows\Microsoft.NET\Framework\v4.0.30319

1. Copy the appropriate InstallUtil.exe to the location where you unzipped the ADO Service package (the one downloaded above), e.g. I put mine at C:\Software\ThingWorxSoftware\ADOService\
2. Start a command prompt (Windows Start Menu > Command Prompt) and execute InstallUtil.exe ThingWorxADOService.exe
3. This should create a service and write additional information to the \\ADOService folder in the form of InstallUtil.InstallLog
4. Check the log for confirmation; you should see something similar to:
Running a transacted installation.
...
The Commit phase completed successfully.
The transacted install has completed.
5. In Windows Explorer, navigate to the folder containing the unzipped files and edit AdoThing.config
6. For this blog I have security disabled, though in production you would obviously want to enable it
7. Configure the ConnectionSettings as per your requirements (refer to the guide for more detail on the settings); below I'm noting the settings that require configuration in their most minimal form (I've also attached my complete AdoThing.config file for reference):

"rows": [
      {
        "Address": "localhost",
        "Port": 8080,
        "Resource": "/Thingworx/WS",
        "IsSecure": false,
        "ThingName": "AdoThing",
        "AppKey": "f7e230ac-3ce9-4d91-8560-ad035b09fc70",
        "AllowSelfSignedCertificates": false,
        "DisableCertValidation": true,
        "DisableEncryption": true
      }
    ]

8. Configure the connection string for the SQL Server in the following section of the same file:

"rows": [
      {
        "ConnectionType": "OleDb",
        "ConnectionString": "Provider=SQLNCLI11;Server=localhost\\SQLEXPRESS;Database=TWXDB;Uid=sa;Pwd=login123;",
        "AlwaysConnected": true,
        "QueryEnabled": true,
        "CommandEnabled": true,
        "CommandTimeout": 60
      }
    ]
9. Just to highlight what's what in the ConnectionString above:
"ConnectionString": "Provider=SQLNCLI11;Server=<Machine/ClientName>\\SQLServerInstanceName;Database=<databaseName>;Uid=<userName>;Pwd=<password>;"
10. For the correct connection string syntax for other data sources, refer to ConnectionStrings.com
11. Save the file
12. Navigate to the Windows services by opening Windows Start > Run > services.msc
13. Check for the service "ThingWorx .NET ADO Client"; you'll have to start it if it's set to Manual, as it was in my case

The following message is logged in DotNetSDK-X-X-X.log on a successful connection:
[Critical] twWs_Connect: Websocket connected!
At the end of the blog I'll share some of the errors I came across while working on this and how to go about addressing them.

Creating and connecting to a Remote Database Thing
Now, let's navigate to the ThingWorx Composer and create a Thing with the RemoteDatabase template to consume the resource created above in the form of the ADO Service. I named my Thing AdoThing while creating it in the ThingWorx Composer, which matches the ThingName configured in the AdoThing.config file. If everything went through as needed, you should see isConnected = true in the AdoThing's Properties section.

Since this is a database Thing, I can now create all the required services for the Create, Read, Update and Delete (CRUD) operations, just as for any database connected using the RDBMS Connector.

Handling errors while setting up the ADO Service
Here are some of the errors that I encountered while setting up the ADO Service for this blog:

Error 1: com.thingworx.ado.AdoThing Cannot connect to database. : System.Data.OleDb.OleDbException: Login timeout expired
Note: Logged in DotNetSDK-X-X-X.log
Cause & Resolution:
- The service is not able to reach or authenticate against the SQL Server Express DB instance
- Ensure that TCP/IP is enabled in the protocols for SQL Express, as shared in the screenshot above
- Make sure that the username/password used for authenticating with the database is correctly provided in the OLEDB section of AdoThing.config

Error 2: com.thingworx.ado.AdoThing GetTables OleDbException error : System.Data.OleDb.OleDbException
Note: Logged in Application.log from the ThingWorx platform
Cause & Resolution:
- This exception is thrown when the user attempts to check for the available tables while creating a service in the ThingWorx Composer
- The resolution is similar to that mentioned above for Error 1

Error 3: [U: SYSTEM] [O: com.thingworx.ado.AdoThing] OleDbException [code = -2147217865, message = Invalid object name 'TWXDB.DemoTable'.] executing SQL query
Note: Logged in Application.log from the ThingWorx platform while testing/executing a SQL service created in the ThingWorx Composer
Cause & Resolution:
- The error is due to prefixing the table name with the database name; this is not required since the database is already selected in the connection string

Error 4: [O: com.thingworx.Configuration] Could not read configuration file. : Newtonsoft.Json.JsonReaderException: Bad JSON escape sequence: \S. Path 'Settings.rows[0].ConnectionString', line 656, position 71.
Note: Logged in DotNetSDK-X-X-X.log
Cause & Resolution:
- This is caused by an unescaped backslash in "ConnectionString": "Provider=SQLNCLI11;Server=<machineNameOrIP>\SQLEXPRESS;Database=TWXDB;Uid=sa;Pwd=login123;"
- JSON requires the backslash to be escaped, so switching to "ConnectionString": "Provider=SQLNCLI11;Server=<machineNameOrIP>\\SQLEXPRESS;Database=TWXDB;Uid=sa;Pwd=login123;" resolved the issue
- Among many others, https://jsonformatter.curiousconcept.com/ is quite helpful for weeding out issues in the JSON syntax

Error 5: [O: com.thingworx.ado.AdoClient] Error while initializing new AdoThing, or opening connection to Platform. : System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at com.thingworx.communications.client.TwApiWrapper.twApi_Connect(UInt32 timeout, Int32 retries)
    at com.thingworx.communications.client.TwApiWrapper.Connect(UInt32 timeout, Int16 retries)
    at com.thingworx.communications.client.BaseClient.start()
    at com.thingworx.ado.AdoClient.run()
Note: Logged in DotNetSDK-X-X-X.log
Cause & Resolution:
- This error is observed when using the FIPS version of the ADO Service, especially when downloaded from the ThingWorx Marketplace
- Make sure to recheck the SSL configuration
- When not using SSL, check that the x64 and x86 directories contain only twApi.dll, as the FIPS version by default ships two additional DLLs (libeay32.dll and ssleay32.dll) in both the x64 and x86 directories
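If you would rather not store a SQL login and password in AdoThing.config (which also rules out the authentication side of Error 1), the SQL Server Native Client OLE DB provider supports Windows integrated authentication. The line below is only a sketch: it assumes the Windows account running the ADO service has been granted access to the TWXDB database, and the server and instance names must match your environment.

"ConnectionString": "Provider=SQLNCLI11;Server=localhost\\SQLEXPRESS;Database=TWXDB;Integrated Security=SSPI;",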
To set up Single Sign-On (SSO) with Windchill, you can simply follow the steps in the Windchill extension guide. However, there is a significant problem when devices connect over WebSocket (the EMS or the Edge SDKs): the Apache server in front of Windchill does not pass the "ws" or "wss" protocols, much like a restrictive proxy server. There are a couple of ways to work around this, but I suggest changing the filter mappings for the SSO filters. The Windchill extension guide tells you to apply the filters to all incoming ThingWorx URLs with a "/*" mapping. Instead, use the settings below in the ThingWorx server's web.xml to avoid the problem described above. It looks long and complicated, but it simply enumerates explicit filter mappings for the URLs that the default "AuthenticationFilter" already covers, while leaving out the WebSocket-related URLs.

<!-- Windchill Extension SSO Start-->
<filter>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderAuthenticationFilter</filter-class>
  <init-param>
    <param-name>idpLoginUrl</param-name>
    <param-value>http(s)://<SERVERHOSTURL>/Windchill/wtcore/jsp/genIdKey.jsp</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/extensions/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/action-authenticate/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/action-login/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/action-confirm-creds/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/action-change-password/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/ThingworxMain.html</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/ThingworxMain.html/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/Server/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/ApplicationKeys/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/Networks/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/Dashboards/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/DirectoryServices/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/Authenticators/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
  <url-pattern>/PersistenceProviderPackages/*</url-pattern>
</filter-mapping>
<filter-mapping>
  <filter-name>IdentityProviderAuthenticationFilter</filter-name>
<url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <filter> <filter-name>IdentityProviderKeyValidationFilter</filter-name> <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderKeyValidationFilter</filter-class> <init-param> <param-name>keyValidationUrl</param-name> <param-value>http(s)://<SERVERHOSTURL>/Windchill/login/validateIdKey.jsp</param-value> </init-param> </filter> <filter-mapping>   <filter-name>IdentityProviderKeyValidationFilter</filter-name>   <url-pattern>/extensions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-authenticate/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-login/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-confirm-creds/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-change-password/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Server/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ApplicationKeys/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Networks/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Dashboards/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DirectoryServices/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Authenticators/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviderPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <!-- Windchill Extension SSO End-->
The .NET SDK can be configured to emit very detailed debugging and diagnostic information to a log file during execution. The .NET SDK uses the standard .NET System.Diagnostics infrastructure for logging; as such, all configuration of the .NET SDK logger is done via the standard .NET logging configuration system. By default, logging is configured via the standard .NET "App.config" file. Log messages can be routed to any standard .NET TraceListener. Optionally, ThingWorx provides a FixedFieldTraceListener which can be used to write log messages to a file; using it is recommended. When configured, the FixedFieldTraceListener automatically creates a "logs" directory in the same location as (a sibling to) the running executable file (.exe), and this "logs" directory will contain the log files.

Every .NET class can be configured as a specific "Trace Source" that emits log messages. It is recommended to add at least the following trace sources to your App.config file to receive the most useful amount of information:

com.thingworx.communications.client.BaseClient
com.thingworx.communications.client.ConnectedThingClient
com.thingworx.communications.client.things.VirtualThing
com.thingworx.communications.client.TwApiWrapper
com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing
com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing

The amount of information emitted can range from very low-level trace messages (the Verbose setting) to nothing at all (the Off setting). The "SourceLevels" enumeration controls how much information is written to the log file; for reference, this is the <add name="SourceSwitch" value="Information" /> element in the sample below. Below is a sample App.config file.
<?xml version="1.0" encoding="utf-8"?> <configuration>     <system.diagnostics>       <sources>         <source name="com.thingworx.common.utils.JSONUtilities" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.TwApiWrapper" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.BaseClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.ConnectedThingClient" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.contentloader.ContentLoaderVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.filetransfer.FileTransferVirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.communications.client.things.VirtualThing" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>         <source name="com.thingworx.metadata.annotations.MetadataAnnotationParser" switchName="SourceSwitch" switchType="System.Diagnostics.SourceSwitch" >           <listeners>             <add name="file" />           </listeners>         </source>       </sources>       <switches>         <add name="SourceSwitch" value="Information" />       </switches>       <sharedListeners>         <add name="file" type="com.thingworx.common.logging.FixedFieldTraceListener, thingworx-dotnet-common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" initializeData="false"/>       </sharedListeners>       <trace autoflush="true" indentsize="4" />   </system.diagnostics> </configuration>
I created this recently for another group - it might be useful: Video Link to Create a Database Connection to your PostgreSQL ThingWorx Database
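For reference, a typical PostgreSQL JDBC configuration for a ThingWorx database Thing looks roughly like the following. This is only a sketch: the host, port, database name, and user shown here are assumptions and must be replaced with the values from your own PostgreSQL installation.

JDBC Driver Class Name:  org.postgresql.Driver
JDBC Connection String:  jdbc:postgresql://localhost:5432/thingworx
Database User Name:      twadmin
Database Password:       <your password>
Validation Query:        SELECT 1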
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also provided an easy and powerful way to deliver SMS notifications right to your phone: email to text!

Set up a 'notification' Thing using our MailServer template, configure your outgoing e-mail server, and you are ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
Meet Chris. Chris leads the ThingWorx Flow portfolio, which will be releasing in 8.4. Chris has "a long history with workflow and a lot of interest in it." He was originally hired as a product manager by Jim Heppelmann [PTC's CEO] about 20 years ago to build a workflow engine for Windchill, our PLM solution. As a matter of fact, this feature is still a part of Windchill today. Pretty neat, right? We were able to leverage Chris' workflow expertise and experience to create ThingWorx Flow. Check out more below.

Kaya: Why did we create ThingWorx Flow? What challenge does it solve?

Chris: In order for customers to realize the full business value of connected solutions, it typically isn't enough to just surface an alert or show a dashboard of health. Customers want to automatically do things like create service tickets or automatically order consumables as they run low. To do that—to realize that automated value—you need a couple of things. First, you need to be able to easily connect to enterprise systems (like SAP, Salesforce or Microsoft Dynamics) to create service cases. Second, you need to be able to define and execute your business process logic.

Kaya: Can you give me an example?

Chris: Definitely. The business process may involve the need to replace a filter based on a contamination alert. After I receive the alert, I need to look up in the bill of materials which filter is applicable to that particular device or product; if it's not in stock onsite, I need to order it. When it arrives, I then need to schedule the service, knowing that it's going to be there at a particular time, which entails working with another service system like Salesforce.

So, you have a number of these different enterprise systems that you need to connect to, and you need to support that orchestrated business process to realize the value of a fully automated flow. That's what has really led us to do it—because, again, the compelling business value in these connected device solutions is often around an automated approach to service or maintenance issues. The value is compelling because automated processes minimize delays, support business growth and scale without hiring as many additional people, and eliminate the human errors and variability that stand in the way of improved quality at lower costs or stronger margins.

[Image: Sample ThingWorx Orchestrate Workflow: Send RFQ to Supplier and Add Part to List If State Changes]

Kaya: So, who ultimately benefits from Flow?

Chris: The benefits are realized by an organization as a whole in terms of increased responsiveness, flexibility, reduced costs and higher availability. The ThingWorx solution is unique in that it allows an organization to quickly realize these benefits by optimizing the participation of several key roles. As a result, no role becomes a bottleneck to addressing an urgent business need.

In addition to developers, the many business analysts across the organization are now empowered to define and modify their business processes themselves through Flow. They do this by using the system connections (or subflows) created by their system experts and the ThingWorx models defined by developers. All of this work is achieved using our visual modeling tool without the need for extensive training.

The system experts can define portions of flows, or subflows, that get, create or edit information in the systems they understand most within the Flow editor.
This enables each role to focus on using their most valuable skills and ultimately eliminates the bottlenecks that reduce business responsiveness when everything must be done by a few highly specialized experts. Now, developers can leverage the outputs of flows to drive behavior in the application and visualize key KPIs and overall health.

Kaya: So, I see that we're not only connecting business systems, but we're also connecting people. We enable them to leverage each other's different perspectives and areas of expertise. I know we gave developers a sneak peek of ThingWorx Flow in one of our latest posts. Do you have a more detailed demo that we could share this week?

Chris: Sure thing. Check this out.

[Video demo: view in My Videos]

Kaya: Wow! I can definitely see how Flow saves immense time in workflow creation. Now, final question: what is your favorite aspect about working at PTC?

Chris: The most interesting thing to me is that I get to work with so many different customers in so many different verticals that have such a diverse set of challenging and interesting problems. It's fun to work together with these customers to understand their problems and their unique environments, and then to build solutions with them using a combination of our products and their existing systems and tooling—that's what I think is most fascinating.

I learn something new about the way things are made, manufactured, built and serviced every day, and it just makes my life much more interesting. I don't feel like I'm doing the same thing every day. Working with customers to understand and address their challenges and help them realize their business goals is really rewarding, and, again, the diversity of those requests is what makes the job particularly interesting and fascinating.