
IoT & Connectivity Tips

Does WSEMS use SSL pinning? Yes, it does. To explain the process a bit more: SSL pinning is the act of verifying a server certificate by comparing it to the exact certificate that is expected. We "install" a certificate on the EMS (copying the cert to the EMS device and specifying it in config.json); the EMS then checks all incoming certs against the cert in config.json, looking for an exact match and verifying the certificate chain.

Should it be expected that the client, WSEMS.exe, installs the client certificate in order to validate the server certificate using SSL pinning? This does not happen automatically; the cert must be downloaded and manually added to the device, and the config must also be manually updated.

If so, how do we issue the client cert for WSEMS.exe? Does it need to be issued the same way as the server certificate? If talking about pinning, one could use either the server cert directly or the root cert (recommended). The root cert is the public cert of the signing authority, e.g. if the server uses a Verisign-issued cert, one can verify the authenticity of the signer by having the Verisign public cert. If talking about client certs (two-way auth, where the client verifies the server and the server verifies the client), then the process is a little different.

What happens when the client certificate expires? Do all the devices go offline when the client certificate expires? The device won't connect, with a failure to authorize. Once the server cert expires, the server cert needs to be updated everywhere. This is the advantage of using something like the Verisign public cert (root cert) as the installed cert on the client: root certs usually last longer than the issued certs, but they will still have to be replaced eventually when they expire.

When using Entrust certificates, obtained at https://www.entrust.com/get-support/ssl-certificate-support/root-certificate-downloads/ , how do we know if the certificates are fed to the WSEMS configuration incorrectly? With the wrong configuration, WSEMS will error out rejecting the Entrust certificate:
Non-FIPS: Error code 20: Invalid certificate
FIPS: Error code 19: self signed certificate in certificate chain

How to properly install an Entrust certificate so it can be accessed by the EMS? The public certificate needs to be downloaded and put in a directory that is accessible to the EMS, and the path added to the cert_chain field of the configuration. Even if it's trusted, it will always need to be installed on the EMS. If a certificate is self-signed, meaning it's non-trusted by default, OpenSSL will throw errors and complain; to solve this, it needs to be installed as a trusted server certificate. If it's signed by a non-trusted CA, that CA's certificate needs to be installed as well. Once the certificates are obtained, this is the block of the configuration we are interested in:

"certificates": {
    "validate": true, //The non-validated model is not recommended
    "allow_self_signed": false, //The self-signed model is not recommended, yet theoretically better than non-validated
    "cert_chain": " " //Validated and trusted is recommended - also the only way to work with a FIPS-enabled EMS
}

You may use an SSL test tool (for example, https://www.ssllabs.com) to find all certificates in the chain.
For Entrust, both the Entrust L1k and G2 certificates need to be downloaded. Because cert_chain is an array-type field, the format for installing the certificate paths is:
"cert_chain": ["path1", "path2"]
For example:
"cert_chain": ["C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_l1k.pem","C:\\ThingworxLocation\\MicroServerLocation\\locationofcertificates\\entrust_g2_ca.pem"]
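Putting the settings above together, a complete certificates block in the EMS config.json could look like the following minimal sketch - the directory C:\\ems\\certs\\ is a placeholder; point the entries at wherever the downloaded Entrust .pem files actually reside on the EMS device:

"certificates": {
    "validate": true,
    "allow_self_signed": false,
    "cert_chain": ["C:\\ems\\certs\\entrust_l1k.pem", "C:\\ems\\certs\\entrust_g2_ca.pem"]
}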
View full tip
In case we would like to create an external application and we aren't sure what the best solution is, below are some useful tips.

Scenario: Let's say we use a gateway in order to access the external application we want to create. We would like to implement this gateway to translate the ThingWorx standard protocol to the SCADA protocol. The system administrator who manages the grid has their own secure system, with a standard for communication inside the SCADA system, and we want to be able to get data from our system into the system they have. Let's also consider that the data relates to the electrical field.

Tips: It is recommended to develop a 3rd-party application that on one side talks to ThingWorx and on the other side talks to the SCADA system. This external ThingWorx application would have a series-edge interface allowing it to enter our customer's Ethernet network, so that both systems can communicate. JDBC is not recommended - it's mostly for connecting to a database, which in our case is not the main purpose. Each REST API call to the platform uses a security credential (appKey or user/password); depending on the permissions contained in that token, access can be allowed to certain parts of the platform.

Reasons for using the REST API: The REST API is simple and not dependent on any format in which the data comes from ThingWorx. It can be used offline, online, synchronously or asynchronously, and is easy to manage from a formatting point of view. ThingWorx gives a lot of options, like exporting information via XML to a plain XML file, to parse into whatever protocol we have on the other end: either our application has to handle XML input from ThingWorx and process it into SCADA-compatible output, or we can talk directly via the REST API and read on a per-Thing basis (using the web services). The interface application just has to know how to read XML or REST API calls (which are provided with an XML-formatted response).

There are already SDK libraries written in C, C# and Java. The C, C# and Java SDKs use the AlwaysOn protocol (WebSocket) and are more firewall friendly. They are mostly for speed and automated processing, i.e. when we know exactly what happens, we trust the other side, and we know there is little chance of errors. If we go with the REST API or an SDK, the application that is developed will have complete access inside ThingWorx, e.g. it can change/edit Things. If we want access in both directions - not only reading data, but also updating/deleting information, etc. - the SDKs and REST API can be used, because we have the whole range of commands, like setting property values, calling services, etc. We can limit access, if we want, for security reasons. The SDKs offer access to the same services as the REST API, but in a different way. Otherwise, it's better to go with decoupled XML files.

Conclusion: for this particular scenario, it is better to use the SDK (see the connection sketch below).
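To illustrate the SDK route, below is a minimal sketch of connecting to ThingWorx with the Java Edge SDK over the AlwaysOn (WebSocket) protocol. The server URI, appKey and the MyGatewayVirtualThing class are placeholders/assumptions for this example, not part of the scenario above:

import com.thingworx.communications.client.ClientConfigurator;
import com.thingworx.communications.client.ConnectedThingClient;

public class ScadaGatewayClient {
    public static void main(String[] args) throws Exception {
        // Configure the AlwaysOn (WebSocket) connection - placeholder values
        ClientConfigurator config = new ClientConfigurator();
        config.setUri("wss://thingworx.example.com:443/Thingworx/WS");
        config.setAppKey("your-application-key-here");

        // Start the client and wait for the connection to be established
        ConnectedThingClient client = new ConnectedThingClient(config);
        client.start();
        while (!client.isConnected()) {
            Thread.sleep(1000);
        }

        // From here, bind one or more VirtualThings that model the SCADA-side data,
        // e.g. client.bindThing(new MyGatewayVirtualThing("GatewayThing", "SCADA gateway", client));
    }
}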
View full tip
Introduction
The In-Memory Column Store keeps data in a columnar format, contrary to the row format, allowing users to run faster analytics; the idea behind this is to push the computation as close to the data store as possible. In this post I'll configure the Oracle database to enable this feature and then populate one or more tables in the In-Memory Column Store. This could be particularly helpful if you are using Oracle 12c as an external data store - storing data in database tables via a JDBC connection, or current/historic values from a DataTable, Stream or ValueStream - for running analytics or DML with lots of joins that require a lot of computation before the data is finally presented on the Mashup(s). For this post I used the data generated by a temperature sensor stored in a ValueStream, exported to CSV from the ValueStream and imported into an Oracle table.

In-Memory Column Store vs In-Memory Database Usage
As mentioned above, Oracle 12c version 12.1.2 comes with a built-in In-Memory Column Store feature. As the name suggests, it allows data to be populated in RAM, enabling high-speed transactions and analytics on data without the need to traverse the HDD, and in some cases this is much faster than the buffer cache of the SGA. Without going into too much nitty-gritty, it's important to note that the In-Memory Column Store does not equate to an in-memory database. While it could be possible to move the entire schema, if it's small enough, to fit in the memory defined for the In-Memory Column Store, the idea is to speed up computations requiring analytics on one or more tables which are heavily queried by the users. If you are interested in an in-memory database as persistence provider for ThingWorx, please refer to the documentation Using SAP HANA as the Persistence Provider, which is one of the options among the other available persistence providers for ThingWorx.

What changes are required to the current Oracle 12c installation or the JDBC connection to ThingWorx?
The In-Memory Column Store is an inbuilt feature of Oracle 12.1.2 and only needs to be enabled, as it is not by default. It can be enabled without having to change any of the following:
1. The existing SQL services created within ThingWorx
2. The general application architecture accessing the tables in the Oracle database over JDBC
3. The existing Oracle 12c installation

Getting Started
What will it take to enable the In-Memory Column Store? The feature can be enabled in a few steps:
1. Enable the feature in the Oracle 12.1.2 installation by assigning some RAM to the In-Memory Column Store
2. Adjust the SGA size of the database to incorporate the memory assigned to the In-Memory Column Store
3. Bounce the database

As mentioned above, though this is an inbuilt feature of Oracle 12.1.2, it is not enabled by default. We can confirm this by executing the following SQL in SQL*Plus or Oracle SQL Developer, connected to the database for which we are enabling the feature:
SQL> show parameter INMEMORY;

Things to consider before enabling
1. Ensure that the hardware/VM hosting the Oracle installation has sufficient RAM.
2. Ensure to bump up the SGA by the amount of memory assigned to the In-Memory Column Store; failing to do so may lead to the database failing to start and will require recovery.
Note: The minimum memory that can be assigned to the In-Memory Column Store is 100M.

Setting it all up
For my test setup I will be assigning 5G to the In-Memory Column Store and will add this amount to the current SGA. To do this, let's start SQL*Plus with the rights that will allow me to make changes to the existing SGA, so I'm using sys@orcl as sysdba (ORCL is the name of my test database).
Step 1: Start SQL*Plus, e.g. sqlplus sys@orcl as sysdba
Step 2: ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
Step 3: ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;
Once done, bounce the database. And that's it! We should now be able to confirm via SQL*Plus that a certain amount of memory, 5G in my case, has been assigned to the In-Memory Column Store feature:
SQL> show parameter inmemory

Populating the In-Memory Column Store
The In-Memory Column Store will only populate the data from a table on first use, or if the table is marked critical, which tells Oracle to populate it as soon as the database comes online after a restart. For more detail on the commands concerning the In-Memory Column Store, refer to the OTN webpage. I'll now use the SENSORHISTORY table, in which I have the ValueStream's exported CSV data - currently this table holds ~32 million+ rows - and populate it into the columnar architecture of the In-Memory Column Store with the following command:
SQL> ALTER TABLE SENSORHISTORY INMEMORY; -- marks the table as eligible for the In-Memory Column Store with default parameters
Just to confirm that the data is still not populated, since we have only marked the table as eligible for the In-Memory Column Store: if I now query the dynamic view V$IM_SEGMENTS for the current In-Memory usage, it will confirm this (a consolidated SQL sketch is shown below, before the Conclusion). So now let's populate the In-Memory store with a query that requires a full table scan, e.g.
SQL> select property_name, count(*) from sensorhistory group by property_name;
Let's recheck the dynamic view V$IM_SEGMENTS. As mentioned above, this is completely transparent to the application layer, so if you already have an existing JDBC connection from ThingWorx to Oracle, all the existing services created for that table will continue to work as expected. If you don't have an existing JDBC connection to Oracle, it can be created with the usual steps, with no special configuration for In-Memory.

Creating the JDBC connection
I'm including this section for the purpose of completeness; if you already have a working JDBC connection to Oracle 12.1.2 you can skip to the Conclusion below. For accessing the above database, along with the In-Memory Column Store table, we'll now set up the JDBC connection to Oracle. For that, download and import the TW_Download_Relational Databases Connectors.zip (ThingWorx Marketplace) and unzip it to access the Oracle12Connector_Extension.zip.
Step 1: Import the extension into ThingWorx by navigating to Import/Export > Import > Extensions
Step 2: Create a Thing using the OracleDBServer12 Template, part of the extension we just imported
Step 3: Configure the Thing with valid connection settings for the database, ORCL in this case
Step 4: Navigate to the Properties in the Entity Information panel on the left and verify that the isConnected property value is True.
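For reference, a consolidated sketch of the SQL used in this post, including a sample query against V$IM_SEGMENTS to check what has actually been populated. The table name and memory sizes match my test setup - adjust them for your environment, and note that the available V$IM_SEGMENTS columns can differ slightly between Oracle releases:

-- Enable the In-Memory Column Store (bounce the database afterwards)
ALTER SYSTEM SET INMEMORY_SIZE = 5G SCOPE=SPFILE;
ALTER SYSTEM SET SGA_TARGET = 20G SCOPE=SPFILE;

-- Verify the feature after the restart
SHOW PARAMETER INMEMORY;

-- Mark the table as eligible, then trigger population with a full scan
ALTER TABLE SENSORHISTORY INMEMORY;
SELECT property_name, COUNT(*) FROM sensorhistory GROUP BY property_name;

-- Check what is currently populated in the In-Memory Column Store
SELECT segment_name, populate_status, bytes, inmemory_size
FROM v$im_segments;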
Conclusion
This is a very short introduction to a setup that can improve the analytics performed on stored data manyfold. The data in the In-Memory Column Store is not stored in the conventional row format, but rather in a large columnar format. If the need is for simple SQL queries without many joins, the SGA cache may be sufficient and probably faster, and you may not gain much by configuring the In-Memory Column Store. However, for queries requiring heavy computation on large data sets, having the In-Memory Column Store configured could bring a manifold increase in performance. If you need more guidelines on where you'd want to use the In-Memory Column Store, feel free to give the following good reads a try, along with real-world data use cases for reference. I will try to find some time to run my own benchmark and publish it in a separate blog on the performance gain.
1. Oracle Database In-Memory with Oracle Database 12c Release 2: Oracle white paper
2. When to Use Oracle Database In-Memory: Identifying Use Cases for Application Acceleration
3. Oracle Database 12c In-Memory Option
4. Testing Oracle In-Memory Column Store @ CERN
View full tip
Hi everyone,
As everyone knows already, the main way to define Properties inside the EMS Java SDK is to use annotations at the beginning of the VirtualThing class implementation. There are some use cases where we need to define those properties dynamically, at runtime - for example when we use a VirtualThing to push a sensor's data from a device cloud to the ThingWorx server, for multiple customers. In this case the number of properties differs per customer, and due to the large number of variations we need to be able to define the Properties themselves programmatically. The following code will do just that:

for (int i = 0; i < int_PropertiesLength; i++) {
    Node nNode = device_Properties.item(i);
    // Read the property's value from the XML node to infer its base type
    String str_NodeValue = nNode.getTextContent();
    PropertyDefinition pd;
    AspectCollection aspects = new AspectCollection();
    if (NumberUtils.isNumber(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.NUMBER);
    } else if ("true".equals(str_NodeValue) || "false".equals(str_NodeValue)) {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.BOOLEAN);
    } else {
        pd = new PropertyDefinition(nNode.getNodeName(), " ", BaseTypes.STRING);
    }
    //Add the dataChangeType aspect
    aspects.put(Aspects.ASPECT_DATACHANGETYPE, new StringPrimitive(DataChangeType.VALUE.name()));
    //Add the dataChangeThreshold aspect
    aspects.put(Aspects.ASPECT_DATACHANGETHRESHOLD, new NumberPrimitive(0.0));
    //Add the cacheTime aspect
    aspects.put(Aspects.ASPECT_CACHETIME, new IntegerPrimitive(0));
    //Add the isPersistent aspect
    aspects.put(Aspects.ASPECT_ISPERSISTENT, new BooleanPrimitive(false));
    //Add the isReadOnly aspect
    aspects.put(Aspects.ASPECT_ISREADONLY, new BooleanPrimitive(true));
    //Add the pushType aspect
    aspects.put("pushType", new StringPrimitive(DataChangeType.ALWAYS.name()));
    //Add the isLogged aspect
    aspects.put(Aspects.ASPECT_ISLOGGED, new BooleanPrimitive(true));
    //Add the defaultValue aspect if needed...
    //aspects.put(Aspects.ASPECT_DEFAULTVALUE, new BooleanPrimitive(true));
    pd.setAspects(aspects);
    super.defineProperty(pd);
}
//You need to comment out initializeFromAnnotations() and use initialize() instead in order for this to work.
//super.initializeFromAnnotations();
super.initialize();

Please put this code in the constructor of your VirtualThing-extending implementation. It needs to run exactly once, at instance creation. The code relies on the manual discovery of the sensor properties that you do beforehand. Depending on the implementation, you can either do the discovery of the properties inside this method (too slow), or pass the result as a parameter to the constructor (better).
Hope it helps!
View full tip
To set up Single Sign-On with Windchill, we can simply follow the steps in the Windchill extension guide. However, there is a significant problem when using WebSocket for the EMS or the Edge SDKs from devices, since the Apache server in front of Windchill blocks the "ws" and "wss" protocols - essentially the same kind of problem a proxy server can cause. There might be a couple of ways to avoid this issue, but I suggest changing the filter mappings for the SSO filter. The Windchill extension guide tells users to apply the filters to all incoming ThingWorx URLs by using a "/*" filter mapping. Please use the settings below in the "web.xml" of the ThingWorx server to avoid the problem stated above. It looks quite long and complicated, but it basically reproduces the filter mappings that the "AuthenticationFilter" already defines by default, while leaving out the WebSocket-related URLs.

<!-- Windchill Extension SSO Start--> <filter> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderAuthenticationFilter</filter-class> <init-param> <param-name>idpLoginUrl</param-name> <param-value>http(s)://<SERVERHOSTURL>/Windchill/wtcore/jsp/genIdKey.jsp</param-value> </init-param> </filter> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/extensions/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-authenticate/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-login/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-confirm-creds/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/action-change-password/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ThingworxMain.html</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ThingworxMain.html/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Server/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/ApplicationKeys/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Networks/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Dashboards/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/DirectoryServices/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/Authenticators/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name> <url-pattern>/PersistenceProviderPackages/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>IdentityProviderAuthenticationFilter</filter-name>
<url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderAuthenticationFilter</filter-name>     
<url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <filter> <filter-name>IdentityProviderKeyValidationFilter</filter-name> <filter-class>com.ptc.connected.plm.thingworx.wc.idp.client.filter.IdentityProviderKeyValidationFilter</filter-class> <init-param> <param-name>keyValidationUrl</param-name> <param-value>http(s)://<SERVERHOSTURL>/Windchill/login/validateIdKey.jsp</param-value> </init-param> </filter> <filter-mapping>   <filter-name>IdentityProviderKeyValidationFilter</filter-name>   <url-pattern>/extensions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-authenticate/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-login/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-confirm-creds/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/action-change-password/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingworxMain.html/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Server/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ApplicationKeys/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Networks/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Dashboards/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DirectoryServices/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Authenticators/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviderPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/wsadapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/tunnel/adapter.jsp</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Logs/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Resources/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Subsystems/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Users/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Home/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StateDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/StyleDefinitions/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ScriptFunctionLibraries/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/AtomFeedService/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Importer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImageEncoder/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Exporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportTheme/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExportDefaultEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ImportDatabase/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataExporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataImporter/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Widgets/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Groups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Things/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingTemplates/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ThingShapes/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/DataTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ModelTags/*</url-pattern>   </filter-mapping>   <filter-mapping>     
<filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Composer/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Squeal/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Runtime/index.html</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Mashups/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Menus/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/MediaEntities/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/loaders/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/demos/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackageUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/ExtensionPackages/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryUploader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositoryDownloader/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/FileRepositories/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/xmpp/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/LocalizationTables/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/Organizations/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/RemoteTunnel/*</url-pattern>   </filter-mapping>   <filter-mapping>     <filter-name>IdentityProviderKeyValidationFilter</filter-name>     <url-pattern>/PersistenceProviders/*</url-pattern>   </filter-mapping> <!-- Windchill Extension SSO End-->
View full tip
This project was developed out of curiosity about how ThingWorx communicates with sensors and vice versa. A Smart Parking system idea immediately came to mind and I started working on it. While heading from home to the office I always worry about car parking space at the office, especially in the rainy season. This project helps users find a parking space. The project has 4 sections, as follows:
1) Smart Parking system: A system application developed in ThingWorx guides the user to an empty car parking space. Sensors placed at each parking slot sense the presence of a car. A program running on a Raspberry Pi board collects the sensor information and sends it to the Smart Car Parking System application in ThingWorx. The data received through the sensors is displayed on a ThingWorx dashboard/mashup.
2) Live Traffic: This embeds a Google Map and shows the traffic around the user's current location.
3) Traffic Blog: If users are visiting a place and have questions regarding parking, traffic conditions, etc., they can post their questions here and people around that area can answer them. Questions are not restricted to parking; they can also cover the best places to visit in an area, restaurants, shops, etc.
4) Automobile Wiki: This page provides documented help regarding anything related to automobiles, e.g. how to change car tyres, how to change car wipers, etc.
View full tip
I've had a lot of questions over the years working with Azure IoT, Kepware, and ThingWorx that I really struggled to get answers to. I was always grateful when someone took the time to help me understand, and now it is time to repay the favour. People ask me many things about Azure (in a ThingWorx context), and one of the common ones has been about MQTT communications from Kepware to ThingWorx using IoT Hub. Recently the topic has come up again as more and more of the ThingWorx expert community start to work with Azure IoT. Today, I took the time to build, test, validate, and share an approach and utilities to do this in cases where the Azure Industrial IoT OPC UA integration is overkill or simply a step later in the project plan. Enjoy!
End to end Integration of Kepware to ThingWorx using MQTT over Azure IoT (YouTube, 45-minute deep-dive)
ThingWorx entities for import (ThingWorx 9.0)
This approach can be quite good for a simple demo if you have a Kepware Integrator or Kepware Enterprise license, but the use of the IoT Gateway for many servers and tags can be quite costly. Those looking to leverage Azure IoT Hub for MQTT integration to ThingWorx would likely also find this recorded session and the shared utilities quite helpful.
Cheers, Greg
View full tip
This document provides API information for all 51.0 releases of ThingWorx Machine Learning.
View full tip
Below I will discuss a simple implementation of constructing a POST request in Java. The entire source is embedded at the bottom of this post for easy copy and paste.

To start, you will want to define the URL you are trying to POST to:

String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";

Breaking down this URL string:
http:// - a non-SSL connection is being used in this example
127.0.0.1:80 - the address and port that ThingWorx is hosted on
/Thingworx - this bit is necessary because we are talking to ThingWorx
/Things - Things is used as an example here because the service I am posting to is on a Thing. Some alternatives to substitute in are ThingTemplates, ThingShapes, Resources, and Subsystems
/Thing_Name - substitute in the name of your Thing where the service is located
/Services - we are calling a service on the Thing, so this is how you drill down to it
/Service_to_Post_to - substitute in the name of the service you are trying to invoke

Create a URL object:

URL obj = new URL(url);

Class URL, included via the java.net.URL import, represents a Uniform Resource Locator, a pointer to a "resource" on the Internet. Adding the port is optional; if it is omitted, port 80 will be used by default.

Define an HttpURLConnection object to later open a single connection to the URL specified:

HttpURLConnection con = (HttpURLConnection) obj.openConnection();

Class HttpURLConnection, included via the java.net.HttpURLConnection import, provides a single instance to connect to the URL specified. The method openConnection is called to create a new instance of a connection, but no connection is actually made at this point.

Set the type of request and the header values to pass:

con.setRequestMethod("POST");
con.setRequestProperty("Accept", "application/json");
con.setRequestProperty("Content-Type", "application/json");
con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

You can see that we are performing a POST request, passing in an Accept header, a Content-Type header, and a ThingWorx-specific appKey header.

Pass true into the setDoOutput method because we are performing a POST request; when sending a PUT request we would pass in true as well. When there is no request body being sent, false can be passed in to denote there is no "output" and we are making a GET request.

con.setDoOutput(true);

Create a DataOutputStream object that wraps around the con object's output stream. We call the flush method on the DataOutputStream object to push the REST request from the stream to the URL defined for POSTing. We immediately close the DataOutputStream object because we are done making the request.

DataOutputStream wr = new DataOutputStream(con.getOutputStream());
wr.flush();
wr.close();

The DataOutputStream class lets the Java SDK write primitive Java data types to the con object's output stream. The next line returns the HTTP status code from the request. This will be something like 200 for success or 401 for unauthorized.

int responseCode = con.getResponseCode();

The final block of this code uses a BufferedReader that wraps an InputStreamReader that wraps the con object's input stream (the byte response from the server). This BufferedReader object is then used to iterate through each line in the response and append it to a StringBuilder object. Once that has completed we close the BufferedReader object and print the response we just retrieved.

BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
String inputLine;
StringBuilder response = new StringBuilder();
while ((inputLine = in.readLine()) != null) {
    response.append(inputLine);
}
in.close();
System.out.println(response.toString());

The InputStreamReader decodes bytes to character streams using a specified charset. The BufferedReader provides a more efficient way to read characters from an InputStreamReader object. The StringBuilder object is an unsynchronized way of creating a String representation of the content residing in the BufferedReader object. StringBuffer can be used instead in cases where multi-threaded synchronization is necessary.

Below is the block of code in its entirety from the discussion above:

public void sendPost() throws Exception {
    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Services/Service_to_Post_to";
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

    // add request headers
    con.setRequestMethod("POST");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

    // send the POST request
    con.setDoOutput(true);
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();

    int responseCode = con.getResponseCode();
    System.out.println("\nSending 'POST' request to URL : " + url);
    System.out.println("Response Code : " + responseCode);

    // read the response
    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();

    // print the result
    System.out.println(response.toString());
}
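For comparison, here is a minimal sketch of the GET variant mentioned above - no request body, so setDoOutput is not needed. The URL and appKey values are placeholders, reusing the same java.net classes as the POST example:

public void sendGet() throws Exception {
    // Placeholder URL - e.g. reading a property value of a Thing
    String url = "http://127.0.0.1:80/Thingworx/Things/Thing_Name/Properties/Property_Name";
    URL obj = new URL(url);
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();

    // GET is the default request method; set it explicitly for clarity
    con.setRequestMethod("GET");
    con.setRequestProperty("Accept", "application/json");
    con.setRequestProperty("appKey", "80aab639-ad99-43c8-a482-2e1e5dc86a2d");

    int responseCode = con.getResponseCode();
    System.out.println("Response Code : " + responseCode);

    // Read and print the response body
    BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
    String inputLine;
    StringBuilder response = new StringBuilder();
    while ((inputLine = in.readLine()) != null) {
        response.append(inputLine);
    }
    in.close();
    System.out.println(response.toString());
}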
View full tip
When using the Auto-bind section of an EMS configuration it is very important to note the difference between "gateway": true and "gateway": false. Either gateway value, when used with a valid "name" field, will result in the EMS attempting to bind the Thing with the ThingWorx platform and will allow the EMS to respond to file transfer and tunnel services related to the auto-bound Things, but this is roughly where the similarities end.

Non-Gateway: This type of auto-bound Thing can be thought of as a placeholder, because the EMS will still require a LuaScriptResource to be bound in order to respond to property/service/event related messages. There must be a corresponding Thing based on the RemoteThing template (or any RemoteThing-derived template, e.g. RemoteThingWithFileTransfer) on the ThingWorx server in order for the bind to succeed. There are many reasons to use this type of auto-bound Thing, but the most common is to bind a simple Thing that can facilitate file transfer and tunnel services but does not need any custom services, properties, or events that would be provided by custom Lua scripts within the LuaScriptResource.

Gateway: An auto-bound gateway can be bound to the ThingWorx platform ephemerally if there is no Thing present to bind with on the platform. To clarify: if no Thing exists with a matching Thing name on the platform and the EMS attempts to bind a gateway, a Thing will be automatically created on the platform to bind with the auto-bound gateway. This newly created ephemeral Thing will only be accessible through the ThingWorx REST API, and once the EMS unbinds the gateway the ephemeral Thing will be deleted. If there is a pre-existing Thing on the ThingWorx server, it must be based on the EMSGateway template in order for the bind to succeed. The EMSGateway template, used both normally and ephemerally, provides some gateway-specific services that would otherwise be inaccessible to a normal remote Thing. See the EMSGateway Class Documentation for more details. A sample auto_bind configuration illustrating both cases is sketched below.
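For illustration, here is a minimal sketch of an auto_bind section covering both cases described above. The Thing names, host and port are placeholders, only the fields relevant to this discussion are shown, and the // comments are for explanation only:

"auto_bind": [
    {
        // gateway: may bind ephemerally, or to a pre-existing Thing based on the EMSGateway template
        "name": "MyEdgeGateway",
        "gateway": true
    },
    {
        // non-gateway: requires a RemoteThing-based Thing on the platform and a bound LuaScriptResource
        // (reachable at the host/port below) to answer property/service/event messages
        "name": "MyRemoteThing",
        "gateway": false,
        "host": "localhost",
        "port": 8001
    }
]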
View full tip
This is a follow-up post on my initial document about Edge Microserver (EMS) and Lua Script Resource (LSR) security. While the first part deals with fundamentals on secure configurations, this second part will give some more practical tips and tricks on how to implement these security measurements.   For more information it's also recommended to read through the Setting Up Secure Communications for WS EMS and LSR chapter in the ThingWorx Help Center. See also Trust & Encryption Theory and Hands On for more information and examples - especially around the concept of the Chain of Trust, which will be an important factor for this post as well.   In this post I will only reference the High Security options for both, the EMS and the LSR. Note that all commands and directories are Linux based - Windows equivalents might slightly differ.   Note - some of the configuration options are color coded for easy recognition: LSR resources / EMS resources   Password Encryption   It's recommended to encrypt all passwords and keys, so that they are not stored as cleartext in the config.lua / config.json files.   And of course it's also recommended, to use a more meaningful password than what I use as an example - which also means: do not use any password I mentioned here for your systems, they might too easy to guess now 🙂   The luaScriptResource script can be used for encryption:   ./luaScriptResource -encrypt "pword123" ############ Encrypted String AES:A26fBYKHJq+eMu0Fm2FlDw== ############   The wsems script can be used for encryption:   ./wsems -encrypt "pword123" ############ Encrypted String AES:A26fBYKHJq+eMu0Fm2FlDw== ############   Note that the encryption for both scripts will result in the same encrypted string. This means, either the wsems or luaScriptResource scripts can be used to retrieve the same results.   The string to encrypt can be provided with or without quotation marks. It is however recommended to quote the string, especially when the string contains blanks or spaces. Otherwise unexpected results might occur as blanks will be considered as delimiter symbols.   LSR Configuration   In the config.lua there are two sections to be configured:   scripts.script_resource which deals with the configuration of the LSR itself scripts.rap which deals with the connection to the EMS   HTTP Server Authentication   HTTP Server Authentication will require a username and password for accessing the LSR REST API.     scripts.script_resource_authenticate = true scripts.script_resource_userid = "luauser" scripts.script_resource_password = "pword123"     The password should be encrypted (see above) and the configuration should then be updated to   scripts.script_resource_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="   HTTP Server TLS Configuration   Configuration   HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the LSR. To enable TLS and https, the following configuration is required:     scripts.script_resource_ssl = true scripts.script_resource_certificate_chain = "/pathToLSR/lsrcertificate.pem" scripts.script_resource_private_key = "/pathToLSR/key.pem" scripts.script_resource_passphrase = "keyForLSR"     It's also encouraged to not use the default certificate, but custom certificates instead. 
To explicitly set this, the following configuration can be added:     scripts.script_resource_use_default_certificate = false     Certificates, keys and encryption   The passphrase for the private key should be encrypted (see above) and the configuration should then be updated to     scripts.script_resource_passphrase = "AES:A+Uv/xvRWENWUzourErTZQ=="     The private_key should be available as .pem file and starts and ends with the following lines:     -----BEGIN ENCRYPTED PRIVATE KEY----- -----END ENCRYPTED PRIVATE KEY-----     As it's highly recommended to encrypt the private_key, the LSR needs to know the password for how to encrypt and use the key. This is done via the passphrase configuration. Naturally the passphrase should be encrypted in the config.lua to not allow spoofing the actual cleartext passphrase.   The certificate_chain holds the Chain of Trust of the LSR Server Certificate in a .pem file. It holds multiple entries for the the Root, Intermediate and Server Specific certificate starting and ending with the following line for each individual certificate and Certificate Authority (CA):     -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----     After configuring TLS and https, the LSR REST API has to be called via https://lsrserver:8001 (instead of http).   Connection to the EMS   Authentication   To secure the connection to the EMS, the LSR must know the certificates and authentication details for the EMS:     scripts.rap_server_authenticate = true scripts.rap_userid = "emsuser" scripts.rap_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="     Supply the authentication credentials as defined in the EMS's config.json - as for any other configuration the password can be used in cleartext or encrypted. It's recommended to encrypt it here as well.   HTTPS and TLS   Use the following configuration establish the https connection and using certificates     scripts.rap_ssl = true scripts.rap_cert_file = "/pathToLSR/emscertificate.pem" scripts.rap_deny_selfsigned = true scripts.rap_validate = true     This forces the certificate to be validated and also denies selfsigned certificates. In case selfsigned certificates are used, you might want to adjust above values.   The cert_file is the full Chain of Trust as configured in the EMS' config.json http_server.certificate options. It needs to match exactly, so that the LSR can actually verify and trust the connections from and to the EMS.   EMS Configuration   In the config.lua there are two sections to be configured:   http_server which enables the HTTP Server capabilities for the EMS certificates which holds all certificates that the EMS must verify in order to communicate with other servers (ThingWorx Platform, LSR)   HTTP Server Authentication and TLS Configuration   HTTP Server Authentication will require a username and password for accessing the EMS REST API. HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the EMS.   
To enable both, the following configuration can be used:

"http_server": {
    "host": "<emsHostName>",
    "port": 8000,
    "ssl": true,
    "certificate": "/pathToEMS/emscertificate.pem",
    "private_key": "/pathToEMS/key.pem",
    "passphrase": "keyForEMS",
    "authenticate": true,
    "user": "emsuser",
    "password": "pword123"
}

The passphrase as well as the password should be encrypted (see above) and the configuration should then be updated to

"passphrase": "AES:D6sgxAEwWWdD5ZCcDwq4eg==",
"password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="

See the LSR configuration for comments on the certificate and the private_key. The same principles apply here. Note that the certificate must hold the full Chain of Trust in a .pem file for the server hosting the EMS.

After configuring TLS and https, the EMS REST API has to be called via https://emsserver:8000 (instead of http).

Certificates Configuration

The certificates configuration holds all certificates that the EMS will need to validate. If ThingWorx is configured for HTTPS and the ws_connection.encryption is set to "ssl", the Chain of Trust for the ThingWorx Platform Server Certificate must be present in the .pem file. If the LSR is configured for HTTPS, the Chain of Trust for the LSR Server Certificate must be present in the .pem file.

"certificates": {
    "validate": true,
    "allow_self_signed": false,
    "cert_chain": "/pathToEMS/listOfCertificates.pem"
}

The listOfCertificates.pem is basically a copy of the lsrcertificate.pem with the added ThingWorx certificates and CAs. Note that all certificates to be validated, as well as their full Chain of Trust, must be present in this one .pem file. Multiple files cannot be configured.

Binding to the LSR

When binding to the LSR via the auto_bind configuration, the following settings must be configured:

"auto_bind": [{
    "name": "<ThingName>",
    "host": "<lsrHostName>",
    "port": 8001,
    "protocol": "https",
    "user": "luauser",
    "password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="
}]

This will ensure that the EMS connects to the LSR via https and with proper authentication.

Tips

Do not use quotation marks (") as part of the strings to be encrypted. This could result in unexpected behavior when running the encryption script.
Do not use a colon (:) as part of any username. Authentication tokens are passed from browsers as "username:password", and a colon in a username could result in unexpected authentication behavior leading to failed authentication requests.
In the Server Specific certificates, the CN must match the actual server name and also must match the name of the http_server.host (EMS) or script_resource_host (LSR).
In the .pem files first store the Server Specific certificates, then all required Intermediate CAs and finally all required Root CAs - any other order could affect the consistency of the files and the certificates might not be fully readable by the scripts and processes.
If the EMS is configured with certificates, the LSR must connect via a secure channel as well and needs to be configured to do so.
If the LSR is configured with certificates, the EMS must connect via a secure channel as well and needs to be configured to do so.
For testing REST API calls with resources that require encryption and authentication, see also How to run REST API calls with Postman on the Edge Microserver (EMS) and Lua Script Resource (LSR).

Export PEM data from KeyStore Explorer

To generate a .pem file I usually use the KeyStore Explorer for Windows - in which I have created my certificates and manage my keystores.
- In the keystore, select a certificate and view its details. Each certificate and CA in the chain can be viewed: Root, Intermediate and Server Specific.
- Select each certificate and CA and use the "PEM" button at the bottom of the interface to view the actual PEM content.
- Copy to clipboard and paste into the .pem file.
- To generate a .pem file for the private key, right-click the certificate > Export > Export Private Key.
- Choose "PKCS #8".
- Check "Encrypted" and use the default algorithm; define an "Encryption Password"; check the "PEM" checkbox and export it as a .pkcs8 file.
- The .pkcs8 file can then be renamed and used as a .pem file.
- The password set during the export process will be the scripts.script_resource_passphrase (LSR) or the http_server.passphrase (EMS).
- After generating the .pem files I copy them over to my Linux systems where they will need 644 permissions (-rw-r--r--).
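If KeyStore Explorer is not at hand (for example when working directly on the Linux system), the same PEM files can usually be produced with OpenSSL. This is only a rough sketch and assumes the key and certificates live in a PKCS#12 keystore named keystore.p12; all file names are placeholders.

  # export the private key and re-wrap it as an encrypted PKCS #8 PEM
  # (the passphrase chosen here becomes scripts.script_resource_passphrase / http_server.passphrase)
  openssl pkcs12 -in keystore.p12 -nocerts -nodes -out plainkey.pem
  openssl pkcs8 -topk8 -v2 aes-256-cbc -in plainkey.pem -out key.pem
  rm plainkey.pem

  # export the certificates and build the Chain of Trust in the required order
  openssl pkcs12 -in keystore.p12 -nokeys -clcerts -out server.pem
  openssl pkcs12 -in keystore.p12 -nokeys -cacerts -out cas.pem
  cat server.pem cas.pem > emscertificate.pem

  # permissions as described above
  chmod 644 emscertificate.pem key.pem

Double-check the order of the CA entries in the resulting file so that it follows the Server Specific > Intermediate > Root sequence mentioned in the tips.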
View full tip
Connectors allow clients to establish a connection to Tomcat via the HTTP / HTTPS protocol. Tomcat allows configuring multiple connectors so that users or devices can connect either via HTTP or HTTPS.

Usually users like you and me access websites by just typing the URL in the browser's address bar, e.g. "www.google.com". By default browsers assume that the connection should be established with the HTTP protocol. For HTTPS connections, the protocol has to be specified explicitly, e.g. "https://www.google.com".

However, Google automatically forwards HTTP connections to HTTPS, so that all connections use certificates and a secure channel, and you will end up on "https://www.google.com" anyway.

To configure ThingWorx to only allow secure connections there are two options...

1) Remove HTTP access

If HTTP access is removed, users can no longer connect to port 80 or 8080. ThingWorx will only be accessible on port 443 (or 8443).

If connecting to port 8080, clients will not be redirected. The redirectPort in the Connector only forwards requests internally in Tomcat; it does not switch protocols and ports and does not require a certificate when being used. The redirected port does not reflect in the client's connection but only manages internal port-forwarding in Tomcat.

By removing the HTTP ports for access, any connection on port 80 (or 8080) will end up in an error message that the client cannot connect on this port.

To remove the HTTP ports, edit the <Tomcat>\conf\server.xml and comment out sections like

  <!-- commented out to disallow connections on port 80
  <Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol" connectionTimeout="20000" redirectPort="443" />
  -->

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will fail with a "This site can’t be reached", "ERR_CONNECTION_REFUSED" error.

2) Forcing insecure connections through secure ports

Alternatively, port 80 and 8080 can be kept open to still allow users and devices to connect. But instead of only internally forwarding the port, Tomcat can be set up to also forward the client to the new secure port. With this, users and devices can still use e.g. old bookmarks and do not have to explicitly set the HTTPS protocol in the address.

To configure this, edit the <Tomcat>\conf\web.xml and add the following section just before the closing </web-app> tag at the end of the file:

  <security-constraint>
      <web-resource-collection>
          <web-resource-name>HTTPSOnly</web-resource-name>
          <url-pattern>/*</url-pattern>
      </web-resource-collection>
      <user-data-constraint>
          <transport-guarantee>CONFIDENTIAL</transport-guarantee>
      </user-data-constraint>
  </security-constraint>

In <Tomcat>\conf\server.xml ensure that all HTTP Connectors (port 80 and 8080) have their redirectPort set to the secure HTTPS Connector (usually port 443 or port 8443); a sketch of such a secure connector is shown at the end of this article.

Save and restart Tomcat. If opening Tomcat (and ThingWorx) in a browser via http://myServer/ the connection will now be forwarded to the secure port. The browser will show the connection as https://myServer/ instead and connections are secure and using certificates.

What next?

Configuring Tomcat to force insecure connections to the actually secure HTTPS connector just requires a simple configuration change.
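For completeness, the secure connector that both options rely on typically looks similar to the snippet below in <Tomcat>\conf\server.xml. This is only a rough sketch assuming a JSSE keystore; the keystore path, password and exact attributes are placeholders and depend on the Tomcat version and the existing HTTPS setup.

  <Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
             maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
             keystoreFile="conf/thingworx.jks" keystorePass="changeit"
             clientAuth="false" sslProtocol="TLS" />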
If you want to read more about certificates, encryption and how to set up ThingWorx for HTTPS in the first place, be sure to also have a look at

- Trust & Encryption - Theory
- Trust & Encryption - Hands On
View full tip
I have implemented an Edge Nano Server that offers the following advantages:

- Easy to set up
- Not limited to the HTTP protocol. For example, an edge device can be implemented that connects to devices via Bluetooth.
- Code can be found here: GitHub - cschellberg/EdgeGateway
- The code contains the EdgeNanoServer, Docker installation scripts (for installing the ThingWorx Platform), and a test client written in Python.

Don Schellberg
Consultant
View full tip
Mapping previous versions of the ThingWorx Analytics API to ThingWorx Analytics 8.1 Services

Since ThingWorx Analytics 8.1, the classic server monolith has been replaced by a series of independent microservices. This new structure groups services around specific elements of functionality (data, training, results). The previous API commands used to access ThingWorx Analytics functions have therefore been replaced by ThingWorx Services. Those Services exist within specific Microservice Things accessible in the ThingWorx Platform 8.1.

The list below maps the most common API commands from version 8.0 and earlier to the related 8.1 services. It is not an exhaustive listing of either API commands or Services. The API commands shown are samples for reference purposes and might require further information, such as headers and a body, when actually used.

1. Version Info
   - Previous API: GET http://<IP Address>:8080/1.0/about/versioninfo
   - 8.1 Service: VersionInfo, available in each Microservice Thing inheriting from Analytics Server
   - Description: Returns the internal version number for a specific microservice. The first two digits = ThingWorx Core version; the next three digits = version of the microservice.
2. Registering a new Dataset
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/
   - 8.1 Service: CreateDataset, on the Data Microservice
   - Description: Creates the dataset, uploads the data along with its metadata and optimizes it automatically.
3. Checking Dataset Status
   - Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>
   - 8.1 Service: ListCreatedDatasets, on the Data Microservice
   - Description: This old functionality is replaced by a service that lists all the created datasets.
4. Creating Metadata
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
   - 8.1 Service: CreateDataset, on the Data Microservice (see item 2 for further information)
5. Checking Dataset Configuration
   - Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/configuration
   - 8.1 Service: GetDatasetSchema, on the Data Microservice
   - Description: Retrieves the metadata from a dataset.
6. Loading Dataset CSV
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/data
   - 8.1 Service: CreateDataset, on the Data Microservice (see item 2 for further information)
7. Checking Job Status
   - Previous API: GET http://<IP Address>:8080/1.0/status/<Job ID>
   - 8.1 Service: GetJobStatus, available in all created Microservices inheriting from AnalyticsJob Server
   - Description: Retrieves the status of a specific job.
8. Signals Job
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals
   - 8.1 Service: CreateJob, on the Signals Microservice
   - Description: Creates a job to identify signals.
9. Signal Results Job
   - Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/signals/<Job ID>/results
   - 8.1 Service: RetrieveResult, on the Signals Microservice
   - Description: Retrieves the result of a Signals job.
10. Profile Job
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles
   - 8.1 Service: CreateJob, on the Profiling Microservice
   - Description: Creates a job to generate profiles.
11. Profile Result Job
   - Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/profiles/<Job ID>/results
   - 8.1 Service: RetrieveResult, on the Profiling Microservice
   - Description: Retrieves the results of a profiles job.
12. Train Model Job
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction
   - 8.1 Service: CreateJob, on the Training Microservice
   - Description: Creates a prediction model job.
13. Train Model Result Job
   - Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/prediction/<Job ID>/results
   - 8.1 Service: RetrieveModel, on the Training Microservice
   - Description: Only retrieves the PMML model. If a holdout for validation was specified in the CreateJob, a validation job is auto-created and run.
14. Scoring Job
   - Previous API: POST http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores
   - 8.1 Service: BatchScore, on the Prediction Microservice
   - Description: Submits a predictive scoring job.
15. Scoring Job Result
   - Previous API: GET http://<IP Address>:8080/1.0/datasets/<DataSet Name>/predictive_scores/<Job ID>/results
   - 8.1 Service: RetrieveResult, on the Prediction Microservice
   - Description: Retrieves the results from prediction scoring jobs.
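As a rough illustration of how the 8.1 services are invoked instead of the old endpoints, the request below checks a job status through the ThingWorx REST API. The Thing name and the parameter name (jobId) are placeholders / assumptions; the actual Thing names depend on how the Analytics microservices are registered on your Platform.

  curl -X POST \
    -H "Content-Type: application/json" \
    -H "Accept: application/json" \
    -H "appKey: <yourAppKey>" \
    -d '{ "jobId": "<yourJobId>" }' \
    https://<ThingWorxServer>/Thingworx/Things/<AnalyticsJobThing>/Services/GetJobStatus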
View full tip
About

This is part of a ThingBerry related blog post series.

ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to connect an ESP8266 module to the ThingBerry WIFI hotspot and send data from a DHT-11 sensor via the MQTT protocol.

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Install MQTT broker on the ThingBerry

To install mosquitto as a MQTT broker, log in to the ThingBerry and run

  sudo apt-get install mosquitto

This will provide a basic broker installation, which is good enough for this example. MQTT clients (including ThingWorx) will connect to this broker to exchange messages. There will be no added security like encrypted traffic shown in this example; it is however good practice to secure MQTT broker / client connections.

While the ESP8266 module is publishing information, ThingWorx will subscribe to the corresponding topics to update its internal property values with what is sent by the ESP8266 module.

For more information on MQTT, how to configure it for ThingWorx or more security relevant information also see

  https://community.thingworx.com/message/5063#5063
  https://community.thingworx.com/community/developers/blog/2016/08/08/securing-mqtt-connection-to-thingworx-platform?sr=tcontent

Configure the ESP8266

There are too many instructions on the web already on how to initially set up the ESP8266 and use it with the Arduino IDE. I'll therefore just refer to Google, which covers the topic more extensively than I ever could.

All coding in this example is done in the Arduino IDE and is pushed to the ESP8266 (NodeMCU) via USB. For this you might need to install a CH340g USB driver for the NodeMCU.

In the Arduino IDE under Tools, I have set my environment to

  Board: NodeMCU 1.0 (ESP-12E Module)
  CPU Frequency: 80 MHz
  Flash Size: 4M (3M SPIFFS)
  Upload Speed: 115200
  Port: COM3

Under Sketch > Include Library > Manage Libraries add / install the following libraries:

  DHT sensor library by Adafruit
  Adafruit Unified Sensor by Adafruit
  PubSubClient by Nick O'Leary

These bring the libraries necessary to read data from the DHT-11 sensor and to configure the ESP8266 as a MQTT client.

Wiring the DHT-11 sensor

The following image shows the PINs on the ESP8266.

I'm using a DHT-11 sensor with cables included and already fixed to a board with 3 PINs. In case you're using a different version, there might be additional components and wiring required, like a resistor etc. Google might help here as well.

Ensure that neither board nor sensor are plugged in, and the ESP8266 is powered off.

To hook the sensor up to the ESP8266, join

  ( - ) to GND
  ( + ) to 3.3V
  (out) to D3

After all the connections are made, connect the ESP8266 via USB to a computer / laptop with the Arduino IDE configured.

Coding

In the Arduino IDE use the following code - adjust the WIFI settings and the MQTT broker configuration. Ensure to rename the ESP_xx name / topic to something more meaningful, e.g. a specific device name (or just leave it as is if in doubt).

Use the ssid and wpa_passphrase from the hostapd.conf used to configure the ThingBerry as WIFI hotspot.
Copy & paste the code below into the Arduino IDE, verify it and upload it to the ESP8266.

If searching for a WIFI connection, the device's blue LED will blink. A successful connection to the broker and publishing the values will result in a static blue LED. In case the LED is off, the connection to the broker is lost or messages cannot be published.

For troubleshooting, use the Serial Monitor function (at 115200 baud) in the Arduino IDE. In case sensor data cannot be read but the wiring is correct and the code addresses the correct PIN, verify the sensor is indeed working. It took me a long time to figure out that the first sensor I used was a defective device.

The current configuration sends updates every 10 seconds - longer intervals might make more sense, but can trigger a timeout for the MQTT broker. In this case the program will re-connect automatically and log corresponding messages in the Serial Monitor. This might seem like an error, but is indeed intended behavior by the code and the MQTT broker.

Configure MQTT Thing in ThingWorx

Create a new Thing in ThingWorx based on the MQTT Template. Add two properties:

  temperature
  humidity

Set both to persistent and logged, and set the Data Change Type to ALWAYS. Also configure a Value Stream to log a history of values.

In the configuration, add two more subscriptions. Activate the "subscribe" checkbox and map name (local property) to topic (MQTT topic), e.g.

  name = temperature; topic = ESP_xx/temp
  name = humidity; topic = ESP_xx/hum

Ensure the correct servernames, ports etc. are configured (an empty servername will use the localhost).

Save the configuration. Property values should now be updated from the MQTT broker, depending on what the device is sending.

Code

#include "DHT.h"
#include "PubSubClient.h"
#include "ESP8266WiFi.h"

/*
 * Configure parameters for sensor and network / MQTT connections
 */

// setup DHT 11 pin and sensor
#define DHTPin D3
#define DHTTYPE DHT11

// setup WiFi credentials
#define WLAN_SSID "mySSID"
#define WLAN_PASS "WIFIpassword"

// setup MQTT
#define MQTTBROKER "mqttbrokerhostname"
#define MQTTPORT 1883

// setup built-in blue LED
#define LED 2

/*
 * ============================================================
 *
 * DO NOT CHANGE ANYTHING BELOW
 * (unless you know what you're doing)
 *
 */

// initiate DHT
DHT dht(DHTPin, DHTTYPE);

// initiate MQTT client
WiFiClient wifiClient;
PubSubClient client(MQTTBROKER, MQTTPORT, wifiClient);

/*
 * setup
 */
void setup() {
  // switch off internal LED
  pinMode(LED, OUTPUT);
  digitalWrite(LED, HIGH);

  // start serial monitor
  Serial.begin(115200);

  // start DHT
  dht.begin();

  // start WiFi
  WiFi.begin(WLAN_SSID, WLAN_PASS);
}

/*
 * the loop
 */
void loop() {
  // while not connected to WiFi, print "."
  // after connection exit the loop
  // blink LED while having no WiFi signal
  boolean wifiReconnect = false;

  while (WiFi.status() != WL_CONNECTED) {
    digitalWrite(LED, LOW);
    delay(200);
    Serial.print(".");
    digitalWrite(LED, HIGH);
    delay(300);
    wifiReconnect = true;
  }

  // if WiFi has reconnected, print new connection information and turn on LED
  if (wifiReconnect == true) {
    // print connection information and local IP address, mac address
    Serial.println();
    Serial.println("WiFi connected");
    Serial.println(WiFi.localIP());
    Serial.println(WiFi.macAddress());
    Serial.println();

    // turn on built-in LED to indicate successful WiFi connection
    digitalWrite(LED, LOW);
  }

  // if MQTT client is not connected, connect again
  // turn on built-in LED to indicate a successful connection
  if (!client.connected()) {
    Serial.println("Disconnected from MQTT server... trying to connect");

    if (client.connect("ESP_xx")) {
      Serial.println("Connected to MQTT server");
      Serial.println("Topic = ESP_xx");
      digitalWrite(LED, LOW);
    } else {
      Serial.println("MQTT connection failed");
      digitalWrite(LED, HIGH);
    }

    Serial.println();
  }

  // read temperature and humidity from sensor
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  if (isnan(t) || isnan(h)) {
    // if temperature or humidity is not a number, print error
    Serial.println("Failed retrieving data from DHT sensor");
  } else {
    // print temperature and humidity
    Serial.print(t);
    Serial.print("° - ");
    Serial.print(h);
    Serial.print("%");
    Serial.println();

    // only send values to MQTT broker, if client is connected
    if (client.connected()) {
      // boolean to check for errors during payload transfer
      bool isError = false;

      // create payload and publish values via MQTT client
      // use buffer to convert float to char*
      char buffer[10];

      dtostrf(t, 0, 0, buffer);
      if (client.publish("ESP_xx/temp", buffer)) {
        Serial.print(" published /temp ");
      } else {
        Serial.print(" failed /temp ");
        isError = true;
      }

      dtostrf(h, 0, 0, buffer);
      if (client.publish("ESP_xx/hum", buffer)) {
        Serial.print(" published /hum ");
      } else {
        Serial.print(" failed /hum ");
        isError = true;
      }

      Serial.println();

      // on error, turn off LED
      if (isError == true) {
        digitalWrite(LED, HIGH);
      } else {
        digitalWrite(LED, LOW);
      }
    }
  }

  // sleep for 10 seconds
  // if sleep > default mosquitto timeout: a reconnect is forced for each update-cycle
  delay(10000);
}
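To double-check that messages actually reach the broker before looking at ThingWorx, one can also subscribe to the topics directly on the ThingBerry (assuming mosquitto runs locally and the ESP_xx topic prefix was kept; the mosquitto_sub client may need to be installed separately via the mosquitto-clients package):

  mosquitto_sub -h localhost -t "ESP_xx/#" -v

If values show up here but not in ThingWorx, the problem is on the MQTT Thing configuration side rather than on the device.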
View full tip
When I tried to set String property values with Chinese letters by using the C SDK, I could see only broken characters in ThingWorx. The cause of the problem is simple: it's the encoding. In order to solve this problem, you need to convert the encoding to UTF-8 in your C code. I used the 'libiconv' library to do it. This guide is for the Win32 version. If you want to make a library for other platforms such as Linux, you can build or find a suitable library on your own.

How to Get the Source Code of libiconv

At the moment, the most recent version of libiconv is 1.15. You can download the source code of libiconv from here.

How to Build

I used MS Visual Studio 2012, but the explanation can be applied to earlier versions of MS Visual Studio and to the Express editions.

Step 1. Download the most recent version of libiconv and unzip the file.

Step 2. Make a new Win32 Project. Let's use "libiconv" as the project name. Check "Create directory for solution". Choose DLL as the application type and check Empty Project for additional options. Click the button "Finish" to generate the new project.

Step 3. Copy files from the folders of libiconv to the project folders. To build "libiconv", you need to compile three files: "localcharset.c", "relocatable.c" and "iconv.c". That's the key!
- Copy the three files "relocatable.h", "relocatable.c" and "iconv.c" from the folder "...\libiconv-1.15\lib\" to the project folder "...\libiconv\libiconv\".
- Copy "...\libiconv-1.15\libcharset\lib\localcharset.c" to the project folder "...\libiconv\libiconv\".
- Copy "...\libiconv-1.15\libcharset\include\localcharset.h.build.in" to the project folder "...\libiconv\libiconv\" and then rename the copied "localcharset.h.build.in" to "localcharset.h".
- Copy "...\libiconv-1.15\windows\libiconv.rc" to the project folder "...\libiconv\libiconv\".
- Make a folder "include" under the project folder "...\libiconv\".
- Copy "...\libiconv-1.15\include\iconv.h.build.in" to the project include folder "...\libiconv\include" and then rename the copied "iconv.h.build.in" to "iconv.h".
- Copy "...\libiconv-1.15\config.h.in" to the project include folder "...\libiconv\include" and then rename the copied "config.h.in" to "config.h".
- Copy all the header files (*.h) and definition files (*.def) in the folder "...\libiconv-1.15\lib" to the project include folder "...\libiconv\include".

Step 4. Add existing items. Execute "Project > Add Existing Items..." from the main menu to add the existing items to the project.

Step 5. Project Settings.
- You can add a 64-bit platform through the Configuration Manager in order to generate libiconv.dll for a 64-bit system.
- You can also make two other configurations, "ReleaseStatic" and "DebugStatic", in order to generate libiconvStatic.lib as a static link library.
- In the project properties, change the Output Directory to "$(SolutionDir)$(Configuration)_$(Platform)\" and the Intermediate Directory to "$(SolutionDir)obj\$(ProjectName)\$(Configuration)_$(Platform)\".
- Change Include Directories to "..\include;$(IncludePath)".
- You have to add "BUILDING_LIBICONV" and "BUILDING_LIBCHARSET" to the Preprocessor Definitions of all platforms and all configurations.
- You'd better set Runtime Library to "Multi-threaded" when building the dynamic link library libiconv.dll. Then the dependency on the VC Runtime library can be controlled by the applications that will be built and dynamically linked with libiconv.dll, because libiconv.dll itself does not need the VC Runtime library; only the application that uses libiconv.dll may or may not need it.
However, when building the static link library libiconvStatic.lib, you can choose the Runtime Library option for libiconvStatic.lib depending on the application that uses it.
- You have to change the Precompiled Header option to "Not Using Precompiled Headers".

Step 6. Tweak the source code.

libiconv.rc

Open libiconv.rc with a text editor or the source code editor of the Visual Studio IDE by double-clicking libiconv.rc in the Solution Explorer and insert some code at line 4 as follows:

  /////////////    ADD    /////////////
  #define PACKAGE_VERSION_MAJOR 1
  #define PACKAGE_VERSION_MINOR 14
  #define PACKAGE_VERSION_SUBMINOR 0
  #define PACKAGE_VERSION_STRING "1.14"
  /////////////////////////////////////

You may be asked to change the line endings to "Windows (CR LF)". Let it do so; it will be more convenient if you mainly use Windows.

localcharset.c

Open localcharset.c and delete or comment the lines 80 - 83 as follows:

  //////////////////  DELETE ////////////////
  ///* Get LIBDIR.  */
  //#ifndef LIBDIR
  //# include "configmake.h"
  //#endif
  ///////////////////////////////////////////

iconv.c

Open iconv.c and delete or comment the lines 250 - 252 and add three lines there as follows:

  ///////////////////////// DELETE ///////////////////////
  //size_t iconv (iconv_t icd,
  //              ICONV_CONST char* * inbuf, size_t *inbytesleft,
  //              char* * outbuf, size_t *outbytesleft)
  /////////////////////////   ADD   //////////////////////
  size_t iconv (iconv_t icd,
                const char* * inbuf, size_t *inbytesleft,
                char* * outbuf, size_t *outbytesleft)
  ////////////////////////////////////////////////////////

localcharset.h

Open localcharset.h and delete or comment the lines 21 - 25 and add 7 lines there as follows:

  /////////////////////////   DELETE  ////////////////////////
  //#if @HAVE_VISIBILITY@ && BUILDING_LIBCHARSET
  //#define LIBCHARSET_DLL_EXPORTED __attribute__((__visibility__("default")))
  //#else
  //#define LIBCHARSET_DLL_EXPORTED
  //#endif
  /////////////////////////    ADD    //////////////////////
  #ifdef BUILDING_LIBCHARSET
  #define LIBCHARSET_DLL_EXPORTED __declspec(dllexport)
  #elif USING_STATIC_LIBICONV
  #define LIBCHARSET_DLL_EXPORTED
  #else
  #define LIBCHARSET_DLL_EXPORTED __declspec(dllimport)
  #endif
  ////////////////////////////////////////////////////////////

config.h

Open config.h in the project include folder "...\libiconv\include" and delete or comment the lines 29 - 30 as follows:

  ///////////////////////// DELETE ///////////////////////
  ///* Define as good substitute value for EILSEQ. */
  //#undef EILSEQ
  ////////////////////////////////////////////////////////

Otherwise you can redefine EILSEQ with a good substitute value.
iconv.h

Open iconv.h in the project include folder "...\libiconv\include" and delete or comment the line 175 and add 1 line as follows:

  /////////////////////////  DELETE  ///////////////////////
  //#if @HAVE_WCHAR_T@
  /////////////////////////    ADD   //////////////////////
  #if HAVE_WCHAR_T
  //////////////////////////////////////////////////////////

Delete or comment the line 128 and add 1 line as follows:

  /////////////////////////  DELETE  ///////////////////////
  //#if @USE_MBSTATE_T@
  /////////////////////////   ADD   //////////////////////
  #if USE_MBSTATE_T
  //////////////////////////////////////////////////////////

Delete or comment the lines 107 - 108 and add 2 lines as follows:

  /////////////////////////  DELETE  ///////////////////////
  //#if @USE_MBSTATE_T@
  //#if @BROKEN_WCHAR_H@
  /////////////////////////  ADD  //////////////////////
  #if USE_MBSTATE_T
  #if BROKEN_WCHAR_H
  //////////////////////////////////////////////////////////

Delete or comment the line 89 and add 2 lines as follows:

  /////////////////////////  DELETE ///////////////////////
  //extern LIBICONV_DLL_EXPORTED size_t iconv (iconv_t cd, @ICONV_CONST@ char* * inbuf,
  //size_t *inbytesleft, char* * outbuf, size_t *outbytesleft);
  /////////////////////////    ADD   //////////////////////
  extern LIBICONV_DLL_EXPORTED size_t iconv (iconv_t cd, const char* * inbuf,
    size_t *inbytesleft, char* * outbuf, size_t *outbytesleft);
  //////////////////////////////////////////////////////////

Delete or comment the lines 25 - 30 and add 8 lines as follows:

  /////////////////////////  DELETE ///////////////////////
  //#if @HAVE_VISIBILITY@ && BUILDING_LIBICONV
  //#define LIBICONV_DLL_EXPORTED __attribute__((__visibility__("default")))
  //#else
  //#define LIBICONV_DLL_EXPORTED
  //#endif
  //extern LIBICONV_DLL_EXPORTED @DLL_VARIABLE@ int _libiconv_version; /* Likewise */
  /////////////////////////    ADD   //////////////////////
  #if BUILDING_LIBICONV
  #define LIBICONV_DLL_EXPORTED __declspec(dllexport)
  #elif USING_STATIC_LIBICONV
  #define LIBICONV_DLL_EXPORTED
  #else
  #define LIBICONV_DLL_EXPORTED __declspec(dllimport)
  #endif
  extern LIBICONV_DLL_EXPORTED int _libiconv_version; /* Likewise */
  //////////////////////////////////////////////////////////

How to Use

When you use the newly built libiconv, the only header file that you need is iconv.h. You will need to link either the import library libiconv.lib or the static library libiconvStatic.lib in your project properties, or write the following in one of your source files:

  #pragma comment (lib, "libiconv.lib")

or

  #pragma comment (lib, "libiconvStatic.lib")

In the source of the application that uses this library (either libiconv.dll or libiconvStatic.lib), if you don't define anything but only include iconv.h, your application will use libiconv.dll, while it will use libiconvStatic.lib if you define USING_STATIC_LIBICONV before you include iconv.h in your application as follows:

  //#define USING_STATIC_LIBICONV
  #include <iconv.h>

And in the C SDK code, I used the "SteamSensorWithFileTransferAndTunneling" sample code and added code to the "dataCollectionTask" function as below.
void dataCollectionTask(DATETIME now, void * params) {
  iconv_t ic;
  char* in_buf = "Hi, 文健英";
  char *to_chrset = "UTF-8";
  char *from_chrset = "EUC-KR";
  size_t in_size = strlen(in_buf);
  size_t out_size = sizeof(wchar_t) * in_size * 4;
  char* out_buf = malloc(out_size);
  // Caution: iconv's inbuf, outbuf are double pointers, so define separate pointers and pass their addresses.
  char* in_ptr = in_buf;
  char* out_ptr = out_buf;
  size_t out_buf_left;
  size_t result;

  memset(out_buf, 0x00, out_size);

  ic = iconv_open(to_chrset, from_chrset);
  if (ic == (iconv_t) -1)
  {
    printf("Not supported code \n");
    exit(1);
  }

  printf("input len = %d, %s\n", in_size, in_buf);

  out_buf_left = out_size;
  iconv(ic, &in_ptr, &in_size, &out_ptr, &out_buf_left);
  //printf("input len = %d, output len=%d %s\n", in_size, out_size - out_buf_left, out_buf);
  iconv_close(ic);

  /* TW_LOG(TW_TRACE,"dataCollectionTask: Executing"); */
  properties.TotalFlow = rand()/(RAND_MAX/10.0);
  properties.Pressure = 18 + rand()/(RAND_MAX/5.0);
  properties.Location.latitude = properties.Location.latitude + ((double)(rand() - RAND_MAX))/RAND_MAX/5;
  properties.Location.longitude = properties.Location.longitude + ((double)(rand() - RAND_MAX))/RAND_MAX/5;
  properties.Temperature  = 400 + rand()/(RAND_MAX/40);
  properties.BigGiantString = out_buf; // Set values for String property

  /* Check for a fault.  Only do something if we haven't already */
  if (properties.Temperature > properties.TemperatureLimit && properties.FaultStatus == FALSE) {
    twInfoTable * faultData = 0;
    char msg[140];
    properties.FaultStatus = TRUE;
    properties.InletValve = TRUE;
    sprintf(msg,"%s Temperature %2f exceeds threshold of %2f",
      thingName, properties.Temperature, properties.TemperatureLimit);
    faultData = twInfoTable_CreateFromString("message", msg, TRUE);
    twApi_FireEvent(TW_THING, thingName, "SteamSensorFault", faultData, -1, TRUE);
    twInfoTable_Delete(faultData);
  }

  /* Update the properties on the server */
  sendPropertyUpdate();
}
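As a possible refinement (not part of the original sample), the conversion could be pulled out into a small helper so that dataCollectionTask stays readable and the allocated buffer can be freed after sendPropertyUpdate(). A minimal sketch, assuming the patched const char* iconv() signature from above; the helper name is made up for illustration:

  #include <iconv.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical helper: converts an EUC-KR string into a newly allocated
     UTF-8 string. Returns NULL on failure; the caller must free() the result. */
  char * convertToUtf8(const char * in_buf) {
      iconv_t ic = iconv_open("UTF-8", "EUC-KR");
      size_t in_size = strlen(in_buf);
      size_t out_size = in_size * 4 + 1;      /* generous worst case for UTF-8 output */
      size_t out_left = out_size - 1;
      const char * in_ptr = in_buf;           /* separate pointers, as noted above */
      char * out_buf;
      char * out_ptr;

      if (ic == (iconv_t) -1) return NULL;    /* conversion pair not supported */
      out_buf = (char *) calloc(1, out_size); /* zero-filled, so output stays terminated */
      out_ptr = out_buf;
      if (iconv(ic, &in_ptr, &in_size, &out_ptr, &out_left) == (size_t) -1) {
          free(out_buf);
          out_buf = NULL;                     /* conversion failed */
      }
      iconv_close(ic);
      return out_buf;
  }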
View full tip
It often happens that a large file needs to be copied to the ThingWorx server periodically and, what's worse, the big file keeps changing (like a log file). This sample shows a simpler way to implement that. The main ideas in the sample are:

1. Lower the management burden on the ThingWorx server and instead put all the work on the edge SDK side.
2. Save network bandwidth by uploading only the incremented part of the file and appending it to the existing file on the ThingWorx server.

Java SDK version used in this sample: 6.0.1-255
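The attached sample is not reproduced here, but the core of the "upload only what was appended" idea can be sketched as follows: the edge keeps the offset of the last upload, reads only the bytes added since then, sends that chunk, and the platform appends it to the existing file. The class below is a simplified, hypothetical outline and not the sample's actual code.

  import java.io.IOException;
  import java.io.RandomAccessFile;

  public class IncrementalFileReader {
      /** Reads only the bytes appended since lastOffset; the caller uploads the
          returned chunk and then persists the new offset (lastOffset + chunk.length). */
      public static byte[] readAppendedBytes(String path, long lastOffset) throws IOException {
          try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
              long length = raf.length();
              if (length <= lastOffset) {
                  return new byte[0]; // nothing new, or the file was rotated/truncated
              }
              raf.seek(lastOffset);
              byte[] chunk = new byte[(int) (length - lastOffset)];
              raf.readFully(chunk);
              return chunk;
          }
      }
  }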
View full tip
Unless created and owned by the Administrator user, a MySQL Database Thing will not connect to the database by default, as it requires certain permissions on the user. In order for a user other than an administrator to create a working database thing, they need three permissions (in addition to the typical subsystem and resource permissions - refer to https://www.youtube.com/watch?v=HzFqxvgHtpI&index=8&list=PLz1ppcU_kaneagUT9qgQfz3HByf6-9zTF ):

1. Visibility to the Database Thing Template.
2. Execute service permission on the EncryptPropertyValue service in the Encryption Services resource.
3. Visibility to the DatabaseThing Thing Package.

Typically, the most convenient and easy way to track down permission issues is to use the browser developer tools. For example in Chrome, the developer tools can be used to view the API calls being sent by Composer, and the errors sent in response.

ThingWorx Composer doesn't expose Thing Packages, so in order to set visibility to the DatabaseThing Thing Package, one would need to throw a REST API call at it (a rough sketch follows below).

Hope this information helps in setting up a database thing owned by a non-administrator user!

*In addition refer to The use of System User
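For reference, such a REST call might look roughly like the sketch below. Both the ThingPackages collection path and the AddVisibilityPermission service name are assumptions here; verify the exact entity path, service and parameter names against the REST API of your ThingWorx version (the browser developer tools trick above is a good way to confirm them).

  curl -X POST \
    -H "Content-Type: application/json" \
    -H "appKey: <administratorAppKey>" \
    -d '{ "principal": "<organizationName>", "principalType": "Organization" }' \
    https://<server>/Thingworx/ThingPackages/DatabaseThing/Services/AddVisibilityPermission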
View full tip
Wanting to build a simple environment with ThingWorx, using a Raspberry Pi mini-computer and an Edge MicroServer (EMS), one has to bear in mind some apparently insignificant tips that can make a huge difference in a well-connected platform.

The Raspberry Pi can have either a few sensors attached to it and/or a Sense HAT integrated with it. A Sense HAT is an add-on board for the Raspberry Pi that has an 8x8 LED matrix, a five-button joystick and sensors, including a gyroscope.

Tips:
- The ThingWorx guide "Connecting ThingWorx EMS with Raspberry Pi Quickstart" can be expanded to read Sense HAT specific sensor information.
- A Sense HAT integrated with a Raspberry Pi can be accessed from anywhere, using the existing APIs to read/write.
- Getting location information while using the Pi board (because the Sense HAT doesn't have this feature): the Raspberry Pi has several USB ports, so inserting a USB GPS can be used as a workaround for connecting and reading location information (or the GPIO pins could be used to connect a GPS board).
- To include services and properties that read Sense HAT sensor data, start from the existing example.lua LUA template and build a Pi specific one.

The EMS is a pre-built application that can be installed on Windows or Linux. It provides a bridge between the remote device (via a local REST API) and the ThingWorx platform (via the AlwaysOn protocol).

Tips:
- To read data from a sensor, it is important to use the corresponding library for that sensor; if LUA scripting is used, then this library has to be easily integrated with LUA (the LSR is optional with the EMS; the EMS can be used standalone). Using the correct library, the EMS will be able to gather the data and push it to ThingWorx.
- The LSR sends properties to the EMS over the HTTP(S) protocol; the EMS converts them to the AlwaysOn protocol and sends them to the Platform.
- LUA is a process manager, useful if having to integrate devices from different vendors using different modes of interaction.
- To test if the LSR is working: go to Composer -> Properties tab of the remote thing, click "Manage Bindings", select the "Remote" tab next to "Local"; here you can see all the remote properties added to the thing by the LSR.
- For creating and binding more things: create the remote thing in ThingWorx and then add it to the config.lua file in the same way as the gateway.
- Any "Remote Things" that are ready to be connected and are not yet associated with any Thing are listed under Monitoring; one method to try to bind them is to type the name of the thing in the Identifier section of the General Information of the Thing.
- To request a local REST API call against the default EMS port 8000 from an LSR, make sure that a GET works fine, e.g. for a property read (see the curl sketch after this list): localhost:8000/Thingworx/Things/MyThing/Properties/myString should return something like this: {"rows":[{"myString":""}],"fieldDefinitions":{"myString":{"name":"myString","description":"","type":"STRING","aspects":{}}}}
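Building on the last tip, the same local REST check (and a property write) can be scripted from the Pi itself, for example with curl. A small sketch, assuming the default port 8000, no EMS http_server authentication, and a remote Thing named MyThing with a STRING property myString:

  # read a property value from the local EMS REST API
  curl -s http://localhost:8000/Thingworx/Things/MyThing/Properties/myString

  # write a property value (the JSON body uses the property name as the key)
  curl -s -X PUT -H "Content-Type: application/json" \
       -d '{"myString": "hello from the Pi"}' \
       http://localhost:8000/Thingworx/Things/MyThing/Properties/myString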
View full tip
Hello! I have just written a tutorial on how to set up Lua to be run from the command line.

As many already know, there is no good way to debug Lua scripts as they are used in package deployment in Software Content Manager, and building such a debugger is a vast and difficult undertaking. As an alternative, running small portions of code on the command line to ensure they work as expected is one way to verify the validity of the Lua syntax prior to attempting a deployment.

Here are the steps to set up Lua as a command line tool:

1. Grab the GCC compiler called TDM-GCC.
   - Run the exe file and follow the install instructions.
   - Remember the install directory, as the attached install script will need to be configured in a later step. The default location in the install file is "C:\Program Files\TDM-GCC\".
   - Note: leaving the "Add to PATH" option selected will also allow you to compile C code on the command line by typing "gcc" (this is not required for this set-up).
2. Download the Lua source code.
   - This comes as a ".tar.gz" file, which can be tricky to extract in Windows. Download 7zip (freeware), which can unzip this type of archive.
   - Extract the Lua source code and navigate to the top level directory, which should just contain one folder named like "lua-x.x.x", where the x's refer to the version.
3. Download and extract the attached zip file containing the build file.
   - Copy the "build.cmd" file from it to the top level directory of the Lua source code.
   - Modify the configuration as needed: the version number default is 5.3.4 (parameter lua_version); the GCC install path default is "C:\Program Files\TDM-GCC\bin" (parameter lua_build_dir).
4. Double click the "build.cmd" file.
   - A console window will appear with installation details.
   - If the build succeeds, the "lua\" directory will be created in the same folder as the "build.cmd" file.
5. Copy the "lua\" directory to "C:\Program Files\".
6. Open "Computer" > "System Properties" > "Advanced system settings".
   - Click "Environment Variables" > "New...".
   - Call the variable "LUA" and make the value "C:\Program Files\lua\bin".
   - Find the "Path" variable and click "Edit...". At the end of what is already there (do NOT delete anything that is already there), add "%LUA%" (make sure there is a ";" between the previous path and this entry). Click "Ok".
7. Open a new command prompt (it has to be new to load the new path) and type "lua" to see if it works.

I hope this is helpful to people! Let me know if you have any questions!

Example syntax test from the Lua command line:
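The screenshot from the original post is not reproduced here, so as an illustrative stand-in, a small helper of the kind that might later end up in a deployment script can be pasted straight into the lua interpreter (the function below is made up purely for illustration):

  -- quick syntax/logic check of a helper before using it in an SCM package script
  local function parseVersion(v)
    local major, minor, patch = string.match(v, "^(%d+)%.(%d+)%.(%d+)$")
    return tonumber(major), tonumber(minor), tonumber(patch)
  end

  print(parseVersion("8.5.12"))   --> 8   5   12

If the interpreter prints the expected values without errors, the syntax is safe to carry over into the package script.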
View full tip
The ThingWorx Platform is fully exposed through the REST API, including every property, service, subsystem, and function. This means that a remote device can integrate with ThingWorx by sending correctly formatted Hyper Text Transfer Protocol (HTTP) requests. Such an application could alter Thing properties, execute services, and more.

To help you get started using the REST API for connecting your edge devices to ThingWorx, our ThingWorx developers put together a few resources on the Developer Portal:

- New to developing with ThingWorx? Use our REST API Quickstart guide that explains how to create your first Thing, add a property to your Thing, then send and retrieve data.
- Advanced ThingWorx user? This new REST API how-to series features instructions on how to use the REST API for many common tasks, including a troubleshooting section.
- Use ThingWorx frequently but haven't learned the syntax by heart? We've got you covered. The REST API cheat sheet gives details of the most frequently used REST API commands.
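For orientation, requests to the platform generally follow the pattern /Thingworx/<Collection>/<Entity>/Services/<Service>. A minimal sketch of creating a Thing via the EntityServices resource (server name, application key and Thing name are placeholders; see the guides above for the full walkthrough):

  curl -X POST \
    -H "Content-Type: application/json" \
    -H "appKey: <yourApplicationKey>" \
    -d '{ "name": "MyFirstThing", "thingTemplateName": "GenericThing" }' \
    https://<server>/Thingworx/Resources/EntityServices/Services/CreateThing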
View full tip