
IoT & Connectivity Tips

A user can make a direct REST call to the ThingWorx platform, but when a website tries to make the same REST call, the platform server blocks the request because it is a cross-origin request. To allow this, the platform server must accept cross-origin requests from all (or specific) websites, which is done by adding a CORS filter to the server. The CORS (Cross-Origin Resource Sharing) specification enables cross-origin requests from websites deployed on a different server. With the CORS filter enabled, a third-party tool or website can retrieve data from a ThingWorx instance.

Follow the steps below to update the CORS filter:

Update the web.xml file (located in $CATALINA_HOME/conf/web.xml).

For a minimal configuration, add the below code:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <!-- "/*" opens the platform to all URL patterns; limited patterns are recommended -->
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: the url-pattern /* opens the ThingWorx application to every domain.

For an advanced configuration, use the below code:

<filter>
  <filter-name>CorsFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
  <init-param>
    <param-name>cors.allowed.origins</param-name>
    <param-value>http://www.customerwebaddress.com</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.methods</param-name>
    <param-value>GET,POST,HEAD,OPTIONS,PUT</param-value>
  </init-param>
  <init-param>
    <param-name>cors.allowed.headers</param-name>
    <param-value>Content-Type,X-Requested-With,accept,Origin,Access-Control-Request-Method,Access-Control-Request-Headers</param-value>
  </init-param>
  <init-param>
    <param-name>cors.exposed.headers</param-name>
    <param-value>Access-Control-Allow-Origin,Access-Control-Allow-Credentials</param-value>
  </init-param>
  <init-param>
    <param-name>cors.support.credentials</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>cors.preflight.maxage</param-name>
    <param-value>10</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CorsFilter</filter-name>
  <!-- "/*" opens the platform to all URL patterns; limited patterns are recommended -->
  <url-pattern>/*</url-pattern>
</filter-mapping>

NOTE: update the cors.allowed.origins parameter with the desired web address.

Save the web.xml file and restart Tomcat.

For additional information, please refer to the official Tomcat reference document: http://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#CORS_Filter

This was tested using an online JavaScript editor (jsfiddle) by executing the script below:

<script>
var data = null;
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://localhost:8080/Thingworx/Things", true);
xhr.withCredentials = true;
xhr.send();
</script>

The request was successful and the list of Things was returned.
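The filter can also be checked from the command line without a browser by sending a preflight request and inspecting the response headers. This is a quick sketch, assuming ThingWorx is reachable at localhost:8080 and the advanced configuration above (which allows http://www.customerwebaddress.com) is in place:

# Preflight request: if the CORS filter is active, the response should contain
# Access-Control-Allow-Origin and Access-Control-Allow-Methods headers
curl -i -X OPTIONS \
  -H "Origin: http://www.customerwebaddress.com" \
  -H "Access-Control-Request-Method: GET" \
  http://localhost:8080/Thingworx/Things

If the headers are missing, double-check that the filter was added to the correct web.xml and that Tomcat was restarted.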
View full tip
ThingWorx provides the capability to use JDBC to connect to relational databases. What would be the steps to take?

1. Find the proper JDBC JAR file; this can be easily located by keeping in mind your database and its version and doing an online search.
2. Download the JDBC Extension Creator from the Marketplace.
3. Follow the instructions to create the actual JDBC extension you will be using.
4. Create a Thing based on the ThingTemplate from the JDBC extension - this represents your actual connection to the database.
5. Set up the configuration:
     a. Connection String - usually I use connectionstrings.com to find that
     b. Validation String - this has to be a VALID SQL statement within the context of the database you are connecting to (like SELECT SYSDATE FROM DUAL for Oracle)
     c. Proper user name and password as defined in the database you are connecting to
6. SAVE.
7. To check if you are properly connected, go back into Edit mode, go to Services, create a new SQL Query or Command and check the Tables and Columns tab. Actual tables should show up now.
8. If it doesn't work, check your application log.
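As an illustration (the host and service names below are placeholders, not values from the original post), an Oracle configuration following steps 5a and 5b might look like:

Connection String: jdbc:oracle:thin:@//dbhost.example.com:1521/ORCL
Validation String: SELECT SYSDATE FROM DUAL

The validation string is used to validate connections (for example when they are taken from the connection pool), so it should be cheap to run and guaranteed to succeed on a healthy database.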
View full tip
Hello Developer Community, We are pleased to announce pre-release availability of the ThingWorx Edge SDK for Android! The Android SDK beta is built off the Java SDK code base, but replaces the Netty websocket client with Autobahn for compatibility with Android OS.  Those familiar with the Java SDK API will feel very much at home in the Android SDK.  We recommend beginning with the included sample application.  Please watch this thread for upcoming beta releases.  b4 adds file transfers between the ThingWorx platform and Android devices and an example application. We welcome your questions and comments in the thread below!  Happy coding! Regards, ThingWorx Edge Products Team
View full tip
Protocol Adapter Toolkit (PAT) is an SDK that allows developers to write a custom Connector that enables edge devices (without native AlwaysOn support) to connect to and communicate with the ThingWorx Platform. A typical use case is edge communication using a protocol that can't be changed (e.g. MQTT). Prior to PAT, developers had to use the ThingWorx (Edge or Platform) SDKs, or the ThingWorx REST interface, to enable the edge devices to communicate with ThingWorx.

Overview

PAT provides three main components: the Channel, the Codec, and the ThingWorx Platform Connection.

The Channel implements a network protocol to communicate directly with the Edge Device. Its responsibilities include reading data from an Edge Device, writing data to an Edge Device, and routing data to the correct Codec. You can implement your own custom channel or use one of the out-of-the-box channels provided by PTC: WebSocket, HTTP (1.0.x) and MQTT (1.1.x).

The Codec translates messages from your edge devices into messages that the ThingWorx platform can process (property read/write, service call, events), and provides a means to take the results of those actions and turn them back into messages for the device. You must implement the Codec.

The Platform Connection layer sends and receives messages with the ThingWorx platform.

Note: The PAT Connector capabilities depend on the edge protocol and channel implementation.

Installation

The PAT installation media contains:
README.md - start here
SDK (Java API) and runtime libraries
PAT skeleton project (Gradle)
Sample codec implementations for the WebSocket, HTTP, and MQTT channels (Gradle)
Sample custom Channel implementation (basic TCP protocol adapter) (Gradle)

Required extensions to be installed on the platform: ConnectionServicesExtension and pat-extension

Reference Documents

ThingWorx Protocol Adapter Toolkit Developers Guide 1.0.0
README.md in various levels of installation folders
ThingWorx Connection Services and Compatibility Matrix 1.0.0

Related Knowledge

Protocol Adapter Toolkit - MQTT Sample Project hands-on (1.1.x)
View full tip
This blog post has been written in collaboration with nexiles GmbH, a PTC Software Partner

The Edge Micro Server (EMS) and the LUA Scripting Resource (LSR) allow for an easy connection of sensors and / or data to the ThingWorx Platform.

Connections are usually established through the HTTP protocol and a REST API which sends unencrypted data and user specific credentials or ThingWorx application keys. This could be fine for a non-production environment. However, in a more secure environment or in production, encryption and authentication should be used wherever and whenever possible to ensure the safety and integrity of the data.

In this scenario a user, client machine or remote device can utilize the LUA endpoints via the REST API endpoints. For this an authentication mechanism with credentials is required to only allow authenticated sessions. Invocation of services etc. is protected and encrypted via HTTPS. A default HTTP connection would transfer the credentials as clear text, which is most likely not desired.

The LSR then communicates via HTTPS to the EMS.

The EMS communicates to the ThingWorx platform via the AlwaysOn protocol using an encrypted websocket on the platform side.

Prerequisite

ThingWorx is already fully configured for HTTPS. See Trust & Encryption Theory and Hands On for more information and examples.

Securing the EMS

EMS to ThingWorx connection

The config.json holds information on the complete EMS configuration. To add a trusted and secure connection to the ThingWorx platform, the servers, connection type and certificates have to be adjusted.

Check the config.json.complete for more information and individual settings.

As an example, the following configuration could be a rough outline:

Switch the websocket server port to 443 in ws_servers
Declare the connection to the websocket as SSL / TLS in ws_connection
Declare the certificate used by the ThingWorx platform in certificates
Validate the certificate
In case you're using a self-signed certificate, allow using it - otherwise just say "false"
If not using a self-signed certificate, but a certificate based on a chain of trust, point to the full chain of trust in the cert_chain parameter
I used the client certificate as an X509 (PEM) formatted .cer file
I used the client certificate's private key as PKCS #8 (PEM) and encrypted it with the super-secure password "changeme" (no really, change it!)

With this the EMS can connect securely to the ThingWorx platform websocket.

EMS as HTTP(S) server

To secure incoming connections to the EMS, it first of all needs to act as an HTTP server which is then configured to use a custom certificate.

In config.json add the http_server section
Define the host and port and set SSL to true
The full path of the certificate (.cer file) must be provided

With this the EMS can receive client requests from the LSR through a secure interface, protecting the (meta) data sent from the LSR to the EMS.

For my tests I'm using a self-signed certificate - created in the Keystore Explorer and exported as an X509 (PEM) formatted .cer file. The private key is not required for this part.

Authentication

To further secure the connection to the EMS acting as HTTP(S) server, it's recommended to use user and password for authentication.

With this, only connections are accepted that have the configured credentials in the HTTP header.
config.json

{
     "ws_servers": [ { "host": "supersecretems.ptc.com", "port": 443 } ],
     "appKey": "<#appKey>",
     "http_server": {
          "host": "localhost",
          "port": 8000,
          "ssl": true,
          "certificate": "C:\\ThingWorx\\ems.cer",
          "authenticate": true,
          "user": "EMSAdmin",
          "password": "EMSAdmin",
          "content_read_timeout": 20000
     },
     "ws_connection": { "encryption": "ssl" },
     "certificates": {
          "validate": true,
          "allow_self_signed": true,
          "client_cert": "C:\\ThingWorx\\twx70.cer",
          "key_file": "C:\\ThingWorx\\twx70.pkcs8",
          "key_passphrase": "changeme"
     }
}

Securing the LSR

LSR to EMS connection

All connections going from the LSR to the EMS are defined in the config.lua with the rap_ parameters.

To set up a secure connection to the EMS, we need to provide the server as defined in the config.json in the http_server section (e.g. the default localhost:8000). Define the usage of SSL / TLS as well as the certificate file.

Client to LSR connection

All connections going to the LSR from any client are defined in the config.lua with the script_resource_ parameters.

To ensure that all requests are done via authenticated users, set up a userid and password. Configure the usage of SSL / TLS to encrypt the connection between clients and the LSR. A custom certificate is not necessarily required - the LSR provides its own custom certificate by default.

Now opening https://localhost:8001 (default port for the LSR) in a browser will open an encrypted channel forcing authentication via the credentials defined above.

Of course this needs to be considered for other calls implementing e.g. a GET request to the LSR. This GET request also needs to provide the credentials in its header.

Authentication

It's recommended to also configure the LSR for using a credential based authentication mechanism.

When setting up the LSR to only accept incoming requests with credentials in the header, the script_resource_userid and script_resource_password can be used for authentication.

When connecting to an EMS using authentication, the rap_userid and rap_password can be used to authenticate with the credentials configured in the config.json.

config.lua

The following configuration can be posted anywhere in config.lua - the best place would be just below the log_level configuration.

scripts.rap_host = "localhost"
scripts.rap_port = 8000
scripts.rap_ssl = true
scripts.rap_deny_selfsigned = false
scripts.rap_cert_file = "C:\ThingWorx\ems.cer"
scripts.rap_server_authenticate = true
scripts.rap_userid = "EMSAdmin"
scripts.rap_password = "EMSAdmin"
scripts.script_resource_userid = "admin"
scripts.script_resource_password = "admin"
scripts.script_resource_ssl = true

EMS specific configuration: rap
LSR specific configuration: script_resource

Other considerations

When configuring the EMS and LSR for authentication and encrypted traffic, the configuration files hold information that not everyone should have access to.

Therefore the config.json and config.lua must also be protected from an operating system point of view. Permissions should only be granted to the (process) user that is calling the EMS / LSR. Ensure that no one else can read / access the properties and certificates to avoid password-snooping on an OS level.

It's best to grant restricted access to the whole microserver directory, so that only privileged users can gain access.

You're next...
 This blog should have given some insight on what's required and how it's configured to achieve a more secure and safer EMS / LSR integration in a real-life production environment. Of course it always depends on the actual implementation, but use these steps as a guideline to secure and protect your (Internet of) Things!
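As a quick command-line check of the setup described above (not part of the original post - a sketch assuming the default ports 8000 for the EMS and 8001 for the LSR and the example credentials from the configuration files), encryption and basic authentication can be verified with curl:

# LSR: an encrypted, authenticated request (use --cacert instead of -k to also validate the certificate)
curl -k -u admin:admin https://localhost:8001

# EMS HTTP(S) server: same idea with the EMS credentials from config.json
curl -k -u EMSAdmin:EMSAdmin https://localhost:8000

The exact response depends on the endpoint being called; the point is that the same request without -u (or over plain HTTP) should now be rejected.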
View full tip
Please refer to the release notes to find information on the new features/changes: PTC

Here are some common questions and answers in regards to the Installers feature:

Q: Does that mean ThingWorx 8 only supports Docker installation, or is a standalone installation still allowed?
A: Docker is only used by the new installer. The WAR file download will still be available and usable the same way as in the past for non-Docker installs. Users only need to use Docker if they use the installer.

Q: How do customers download/build the Docker image?
A: The image is not provided separately; it is installed and configured by the installer.

Q: Does the installer install Docker when necessary as well? Or is it expected that the user already has Docker installed?
A: No, the user must install it on their operating system before using the installer. The installer will detect if Docker is properly installed.
View full tip
In this blog I will be testing with the WindchillSwaggerConnector, but most of the steps also apply to the generic SwaggerConnector.

Overview

The WindchillSwaggerConnector enables the connection to the Windchill REST endpoints through the Swagger (OpenAPI) specification. It is a specialized implementation of the SwaggerConnector. See Integration Connectors for documentation.

It relies on three components:
Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses Web Socket to communicate with the ThingWorx platform (similar to EMS).
Integration Subsystem: available by default in 7.4 (no extension needed)
Integration Connectors (WindchillSwaggerConnector): available by default in 7.4 (no extension needed)

Currently, in 7.4, the WindchillSwaggerConnector does not support SSO with Windchill (it is more targeted for a "gateway type" integration). Note that the PTC Navigate PDM apps are using the WindchillConnector and not the WindchillSwaggerConnector.

Integration Runtime microservice setup

The ThingWorx Integration Runtime is a microservice that runs outside of ThingWorx. It can run on the ThingWorx server or a remote machine. It is available for download from the ThingWorx Marketplace (Windows or Linux). The installation media contains 2 files: 1 JAR and 1 JSON configuration file.

For this demo, I'm installing the Integration Runtime on a remote machine and will not be using SSL.

1. Prerequisite for the Integration Runtime: Oracle JRE 8 (and of course an accessible ThingWorx 7.4 platform server)
2. Create an ApplicationKey in the Composer for the Integration Runtime to use for communication to the ThingWorx platform.
3. Configure the Integration Runtime communication - ThingWorx host, port, appKey, ... - this is done on the Integration Runtime server via the JSON configuration file.

My integrationRuntime-settings.json (sslEnable=false, storagePath is ignored):

{
  "traceRoutes": "true",
  "storagePath": "/ThingworxStorage",
  "Thingworx": {
    "appKey": "1234abcd-xxxx-yyyy-zzzz-5678efgh",
    "host": "twx74neo",
    "port": "8080",
    "basePath": "/Thingworx",
    "sslEnable": "false",
    "ignoreSSLErrors": "true"
  }
}

Note: It is important to completely remove the "SSL": {} block when not using SSL.

4. Launch the Integration Runtime service (update the JAR and JSON filenames if needed):

java -DconfigFile=integrationRuntime-settings.json -jar integration-runtime-7.4.0-b12.jar

The Integration Runtime service uses Web Socket to communicate with the ThingWorx platform (similar to EMS). It registers itself with the ThingWorx platform.

Monitoring the Integration Runtime microservice

In the ThingWorx Composer: Monitoring > Subsystems > Integration Subsystem

SMAINENTE1D1 is the hostname of my Integration Runtime server.

Custom WindchillSwaggerConnector implementation

Use the New Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy composer).

1. Create a DataShape that is used to map the attributes being retrieved from Windchill
WNCObjectDS: oid, type, name (all fields of type STRING)

2. Create a Thing named WNC11Connector that uses WindchillSwaggerConnector as Thing Template

3. Set up the Windchill connection under WNC11Connector > Configuration
Authentication Type = fixed (SSO currently not supported)
Username = <Windchill valid user>
Password = <password for the Windchill user>
Base URL: <Windchill app URL> (e.g. http://wncserver/Windchill)

4. Create an API map under WNC11Connector > Services and API Maps > API Maps (New Composer only)
My API Map:
New API Map Mapping ID: FindBasicObjectsMap
EndPoint: findObjects (choose the first one)
Select DataShape: WNCObjectDS (created at step 1) and map the following attributes:
name <- objName ($.items.attributes)
type <- typeId ($.items)
oid <- id ($.items)

After pressing [done] verify that the API ID is '/objects GET' (and not /structure/objects - otherwise recreate the mapping and choose the other findObjects endpoint).

5. Create a "Route" service under WNC11Connector > Services and API Maps > Services (New Composer only)
Name: FindBasicObjects
Type (next to the [Done] button): Route
Route Info | Endpoint: findObjects (same as step 4)
Route Info | Mapping ID: FindBasicObjectsMap-xxxx created at step 4

Testing our custom WindchillSwaggerConnector

Test the WNC11Connector::FindBasicObjects service.
Note that the id (oid) and typeId (type) are returned by default by the /objects REST API - objName has to be explicitly requested.

Monitoring the Integration Connector

In the ThingWorx Composer: Monitoring > Integration Connectors
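Once the route service exists, it can also be invoked through the standard ThingWorx REST API. The sketch below assumes appKey header authentication and the host/port used earlier in this post (HTTP, port 8080); adjust to your environment:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "appKey: <your application key>" \
  http://twx74neo:8080/Thingworx/Things/WNC11Connector/Services/FindBasicObjects

The response should be the same infotable (oid, type, name) returned when testing the service in Composer.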
View full tip
If a ThingWorx 7.4 installation with an MSSQL database fails to start and shows a license error on the splash screen, despite taking the necessary steps for specifying the license path, the error might be misleading and the problem may actually lie in the database connection. Going to ThingworxStorage/logs and opening ApplicationLog.log might reveal the following (or similar) errors:

2017-04-13 10:26:17.993-0400 [L: INFO] [O: c.t.s.ThingWorxServer] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Sending Post Start Notifications...
2017-04-13 10:26:17.999-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.001-0400 [L: ERROR] [O: E.c.t.p.d.FileTransferDocumentModelProvider] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [context: Could not create a transaction for ThingworxPersistenceProvider][message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.004-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.005-0400 [L: ERROR] [O: c.t.s.s.f.FileTransferSubsystem] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Error loading queued transfer jobs from persistence provider
2017-04-13 10:26:18.006-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.006-0400 [L: ERROR] [O: E.c.t.p.d.FileTransferDocumentModelProvider] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [context: Could not create a transaction for ThingworxPersistenceProvider][message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.007-0400 [L: ERROR] [O: c.t.p.m.MssqlModelExceptionTranslator] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] [message: Could not create a transaction for ThingworxPersistenceProvider]
2017-04-13 10:26:18.008-0400 [L: ERROR] [O: c.t.s.s.f.FileTransferSubsystem] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Error loading queued transfer jobs from persistence provider
2017-04-13 10:26:18.008-0400 [L: INFO] [O: c.t.s.ThingWorxServer] [I: ] [U: SuperUser] [S: ] [T: localhost-startStop-1] Thingworx Server Application...ON

A few things to check here. First, how the database was set up. The standard expected port is 1433; however, if you chose a different port, ensure that the port is available. If it isn't, update the port setting in SQL Server Configuration Manager. Another possible root cause: the database scripts did not run properly during setup. Currently, there is an active Jira PSPT-3587 for resolving the script issue (NOTE: this thread will be updated once the Jira is fixed). There are two options here: either run the SQL files manually (found in the "install" folder of the downloadable), or edit the bat scripts manually. It's recommended to edit the script files instead, because running the SQL commands manually allows more room for a mistake.
The following line in the bat scripts needs to be edited to have the ".\" removed:

sqlcmd.exe -S %server%\%serverinstance%,%port% -U %adminusername% -v loginname=%loginname% -v database=%database% -v thingworxusername=%thingworxusername% -v schema=%schema%  -i .\thingworx-database-setup.sql

Note that the scripts need to be run from the same directory ("/install").
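For reference, after removing the ".\" prefix the same line should read as follows (no other changes):

sqlcmd.exe -S %server%\%serverinstance%,%port% -U %adminusername% -v loginname=%loginname% -v database=%database% -v thingworxusername=%thingworxusername% -v schema=%schema% -i thingworx-database-setup.sql

If the other setup/schema .bat scripts reference their .sql files with a ".\" prefix, apply the same edit there as well.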
View full tip
About

This is the second part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to set up the ThingBerry as a WIFI hotspot to directly connect with mobile devices or other Raspberry Pis. As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

In case this guide looks familiar, it's probably because it's based on https://frillip.com/using-your-raspberry-pi-3-as-a-wifi-access-point-with-hostapd/

WIFI Hot Spot

As the ThingBerry is currently connected via ethernet we can utilize the Raspberry Pi's WIFI connection to create a private network where all the wireless devices can connect to, e.g. another Raspberry Pi or an ESP8266.

First we need to install dnsmasq and hostapd. Those will help setting up the access point and create a private DNS server to dynamically assign IP addresses to connecting devices.

sudo apt-get install dnsmasq hostapd

Interfaces

We will need to configure the wlan0 interface with a static IP. For this the dhcpcd needs to ignore the wlan0 interface.

sudo nano /etc/dhcpcd.conf

Paste the following content to the end of the file. This must be ABOVE any interface lines you may have added earlier!

denyinterfaces wlan0

Save and exit. Let's now configure the static IP.

sudo nano /etc/network/interfaces

Comment out ALL lines for the wlan* configurations (e.g. wlan0, wlan1). By default there are three lines which need to be commented out by adding a # at the beginning of the line. After this the wlan0 configuration can be pasted in:

allow-hotplug wlan0
iface wlan0 inet static
  address 192.168.0.1
  netmask 255.255.255.0
  network 192.168.0.0
  broadcast 192.168.0.255

Save and exit. Now restart the dhcpcd service and reload the wlan0 configuration with

sudo service dhcpcd restart
sudo ifdown wlan0
sudo ifup wlan0

Hostapd

Hostapd is used to configure the actual WIFI hot spot, e.g. the SSID and the WIFI password (wpa_passphrase) that's required to connect to this network.

sudo nano /etc/hostapd/hostapd.conf

Paste the following content:

interface=wlan0
driver=nl80211
ssid=thingberry
hw_mode=g
channel=6
wmm_enabled=1
ieee80211n=1
country_code=DE
macaddr_acl=0
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme
rsn_pairwise=CCMP

If you prefer another SSID or a more secure password, please ensure you update the above configuration!

Save and exit. Check if the configuration is working via

sudo /usr/sbin/hostapd /etc/hostapd/hostapd.conf

It should return correctly, without any errors, and finally show "wlan0: AP-ENABLED". With this you can now connect to the "thingberry" SSID. However there's no IP assigned automatically - so that can still be improved...

Stop hostapd with CTRL+C and let's start it on boot.

sudo nano /etc/default/hostapd

At the end of the file, paste the following content:

DAEMON_CONF="/etc/hostapd/hostapd.conf"

Save and exit.

DNSMASQ

Dnsmasq allows to assign dynamic IP addresses. Let's back up the original configuration file and create a new one.
sudo mv /etc/dnsmasq.conf /etc/dnsmasq.conf.orig
sudo nano /etc/dnsmasq.conf

Paste the following content:

interface=wlan0
listen-address=192.168.0.1
bind-interfaces
dhcp-range=192.168.0.100,192.168.0.199,255.255.255.0,12h

Save and exit. This will make the DNS service listen on 192.168.0.1 and assign IP addresses between 192.168.0.100 and 192.168.0.199 with a 12 hour lease.

The next step is to set up IPv4 forwarding for the wlan0 interface.

sudo nano /etc/sysctl.conf

Uncomment the following line (you can search in nano with CTRL+W):

net.ipv4.ip_forward=1

Save and exit.

Hostname translation

To be able to call the ThingBerry with its actual hostname, the hostname needs to be mapped in the host configuration.

sudo nano /etc/hosts

Search the line with your hostname and update it to the local IP address (as configured in the listen-address above), e.g.

192.168.0.1 thingberry

Save and exit. Please note that this is the hostname and not the SSID of the network!

Finalizing the Configuration

Run the following commands to enable the forwarding and reboot.

sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sudo reboot

Optional: Internet Access

The ThingBerry is independent of any internet traffic. However if your connected devices or the ThingBerry itself need to contact the internet, the WIFI connection needs to route those packages to and from the (plugged-in) ethernet device. This can be done through iptables:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT

All internet related traffic will be forwarded between eth0 and wlan0 and vice versa. To load this configuration every time the ThingBerry is booted, it needs to be saved via

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"

To restore it on boot, edit

sudo nano /etc/rc.local

Above the line exit 0 add the following line:

iptables-restore < /etc/iptables.ipv4.nat

Save and exit.

Verification

Connect to the new WIFI hotspot via the SSID and the password configured earlier through any WIFI capable device. When connecting to your new access point check your device's IP settings (in iOS, Android or a laptop / desktop device). It should show a 192.168.0.x IP address and should be pingable from the ThingBerry console. Connect to http://192.168.0.1/Thingworx or http://<servername>/Thingworx to open the Composer.
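To see which clients actually received an address from the hotspot (not part of the original guide - a sketch assuming the default Raspbian paths and that the iw package is installed), the following can be run on the ThingBerry:

# DHCP leases handed out by dnsmasq
cat /var/lib/misc/dnsmasq.leases

# wireless stations currently associated with the access point
iw dev wlan0 station dump

Each connected device should appear with its MAC address and a 192.168.0.x address from the dhcp-range configured above.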
View full tip
Introduction

The Oracle 12c release introduced the concept of multi-tenant architecture for housing several databases running as services under a single database. I'll try to address the connectivity and required configuration to connect to one of the Pluggable Databases running in the multi-tenant architecture.

Multi-tenant database architecture in scope of ThingWorx External Data Source

What is multi-tenant database architecture?
Running multiple databases under a single database installation. Oracle 12c allows the user to create one database called the Container Database (CDB) and then spawn several databases called Pluggable Databases (PDB) running as services under it.

Why use multi-tenant architecture?
Such a setup allows users to spawn a new PDB as and when needed with limited resource requirements, to easily administer several PDBs just by administering the container database - since all the PDBs are contained within a single database's tablespace structure - and to start and stop individual PDBs, leading to a low cost of maintaining different databases, as the resource management is limited to one CDB.

When to use multi-tenant architecture?
In scenarios like creating PoCs, different test environments requiring external data storage, or maintaining different versions of a dataset, having these run in the multi-tenant architecture could help save time, money and effort.

Create Container Database (CDB)

Creation of a Container Database (CDB) is not very different from creating a non-Container Database; use the attached guide Installing Oracle Database Software and Creating a Database.pdf (the same is accessible online).

Create Pluggable Database (PDB)

Use the attached Multitenant: Create and Configure a Pluggable Database (PDB) in Oracle Database 12c PDF guide to create and plug a Pluggable Database into the Container Database created in the previous step (the same is accessible online).

Using the above guide I have a bunch of pluggable databases, as can be seen below. I'll be using TW724 for connecting to the ThingWorx server as an external data source in the following example.

Connect to a Pluggable Database (PDB) as external data source for ThingWorx

1. Download and unzip the Relational Databases Connectors Extension from the ThingWorx Marketplace and extract Oracle12Connector_Extension.
2. Import Oracle12Connector_Extension into ThingWorx using Extension -> Import.
3. Create a Thing using the OracleDBServer12 Thing Template, e.g. TW724_PDB_Thing.
4. Navigate to the Configuration for TW724_PDB_Thing to update the default configuration:

JDBC Driver Class Name: oracle.jdbc.OracleDriver
JDBC Connection String: jdbc:oracle:thin:@//oravm.ptcnet.ptc.com:1521/tw724.ptcnet.ptc.com
Database Username: <UserName>
Database Password: <password>

5. Once done, save the entity.

Note: A PDB in a container database can be reached only as a service and not using the CDB's SID. In the above configuration TW724 is a PDB which can be connected to via its service name, i.e. TW724.PTCNET.PTC.COM.

Let's head to the Services tab for TW724_PDB_Thing to query and access the PDB data.

Creating Services to access the PDB as external database source for ThingWorx

Once the configuration is done the TW724_PDB_Thing is ready for use. The queries remain the same as any other SQL query needed to access the data from Oracle.
Service for creating a table

Once on the Services tab for the TW724_PDB_Thing, click on Add My Service and select the service handler SQL Command to use the following script to create a testTable1 in the PDB:

create table testTable1 (
    id NUMBER GENERATED ALWAYS AS IDENTITY primary key,
    col1 varchar2(100),
    col2 number
)

Note: The GENERATED ALWAYS AS IDENTITY option is Oracle 12c specific and I included it here because with Oracle 12c the possibility to auto-generate values is now built in with that option, simplifying sequence generation when compared with older Oracle versions such as Oracle 11g. The user creating the table will need access rights for creating tables and sequences; check out the Oracle documentation on Identity for more on this.

Service for getting all the data from the table

Add another service with the script Select * from testTable1 for getting all the data from the table.

Service for inserting data into the table

Adding another service with the script insert into testTable1 (col1, col2) values ('TextValue', 123) will insert the data into the table created above.

Service for getting all tables from the PDB, i.e. TW724

Using Select * from tab lists all the available tables in the TW724 PDB.

Summary

Just a quick wrap-up on how this would look visually - refer to the following image. Since this is a scalable setup - given the platform has enough resources - it's possible to create up to 252 PDBs under a CDB; therefore 252 PDBs could be created and configured to as many Things extending the OracleDBServer12 Thing.

______________________________________________________________________________________________________________________________________________

Edit: Common Connection Troubleshooting

If you observe an error something like this:

Unable to Invoke Service GetAllPDBTables on TW724_PDB_Thing : ORA-01033: ORACLE initialization or shutdown in progress

Ensure that the pluggable database, in this error TW724 (since this is what I created and used above in my services), is opened and accessible. If it's not opened, log in as sys/system (with admin rights) to the CDB, which is ORCL, via SQL*Plus, SQL Developer or any SQL utility of your choice capable of connecting to Oracle DB, and open the pluggable database using the command:

alter pluggable database tw724 open;
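A quick way to check the state of all pluggable databases before (or after) opening one - a sketch using standard Oracle dictionary views, not taken from the original post - is to run the following while connected to the CDB as a privileged user:

-- lists every PDB and whether it is MOUNTED, READ ONLY or READ WRITE
SELECT name, open_mode FROM v$pdbs;

A PDB showing MOUNTED here is typically the state that produces the ORA-01033 error above; opening it with the alter pluggable database command should switch it to READ WRITE.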
View full tip
For those of you that aren't aware - the newest version of the Eclipse Plugin for Extension Development was made available last week in the ThingWorx Marketplace here. Because of the infancy of the product, there is not an official process for supplying release notes along with the plugin. These are not official or all encompassing, but cover the main items worked on for 7.0.

New Features:
Added Configuration Table Wizard for code generation
SDK Javadocs now automatically linked to SDK resources on project creation
When creating a Service, Trace logging statements are generated inside of it (along with appropriate initializers)
ThingWorx Source actions are now available from the right click menu within a .java file

Bugs:
Fixed problem where some BaseTypes are not uppercase in annotations when generating code
Fixed error when creating and importing Extension Projects when the Eclipse install has a space in the file path
Fixed inconsistent formatting in the metadata.xml when adding new Entities

We are hoping to have a more official Release Note process for the next release. Feel free to reply with questions or concerns.
View full tip
Official name: DataStax Enterprise (DSE), sometimes referred to as Cassandra. Note: DBA skills are required; free self-paced training can be found here: Training | DataStax. The extension package can further be obtained through Technical Support.

ThingWorx 6.0 introduces DSE as a backend database scaling to a much greater byte count, as Neo4j performance limitations hit at 50 GB. Some of the main reasons to consider DSE are:

1. Elastic scalability -- Allows you to easily add capacity online to accommodate more customers and more data when needed.
2. Always-on architecture -- Contains no single point of failure (as with traditional master/slave RDBMSs and other NoSQL solutions), resulting in continuous availability for business-critical applications that can't afford to go down.
3. Fast linear-scale performance -- Enables sub-second response times with linear scalability (double the throughput with two nodes, quadruple it with four, and so on) to deliver response time speeds.
4. Flexible data storage -- Easily accommodates the full range of data formats - structured, semi-structured and unstructured - that run through today's modern applications.
5. Easy data distribution -- Read and write to any node with all changes being automatically synchronized across a cluster, giving maximum flexibility to distribute data by replicating across multiple datacenters, cloud, and even mixed cloud/on-premise environments.

Note: Windows+DSE is currently not fully supported.

Connecting ThingWorx

Prerequisite: a fully configured DSE database.

1. Obtain the dse_persistancePackage.
2. Import it as an extension in Composer.
3. In Composer, create a new persistence provider.
4. Select the imported package as Persistence Provider Package.
5. In the Configuration tab:
     - For Cassandra Cluster Host, enter the IP address set in cassandra.yaml, or localhost if hosted locally
     - Enter a new or existing Cassandra Keyspace name
     - Enter the Solr Cluster URL
     - Other fields can be left at default (*)
6. Go to Services and execute the TestConnectivity service to ensure a True response.
7. When creating a new Stream, Value Stream, or Data Table, set Persistence Provider to the one created in the previous steps.

Currently all reads and writes are done through ThingWorx and all ThingWorx data is encoded in DSE. OpsCenter still allows you to see the connected streams, data tables and value streams.

(*) SimpleStrategy can be used for a single data center, but NetworkTopologyStrategy is recommended for most deployments, because it is much easier to expand to multiple data centers when required by future expansion.

Is there a limit of data per node? 1 TB is a reasonable limit on how much data a single node can handle, but in reality a node is not at all limited by the size of the data, only the rate of operations. A node might have only 80 GB of data on it, but if it's continuously hit with random reads and doesn't have a lot of RAM, it might not even be able to handle that number of requests at a reasonable rate. Similarly, a node might have 10 TB of data, but if it's rarely read from, or there is a small portion of data that is hot (so it could be effectively cached), it will do just fine. If the replication factor is above 1 and there are no reads at consistency level ALL, other replicas will be able to respond quickly to read requests, so there won't be a large difference in latency seen from a client perspective.
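If you pre-create the keyspace yourself instead of entering a new name in the persistence provider configuration (a sketch with placeholder names - the datacenter name must match your cluster's actual topology), this can be done from cqlsh:

cqlsh -e "CREATE KEYSPACE IF NOT EXISTS thingworx WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'};"
cqlsh -e "DESCRIBE KEYSPACES;"

The second command simply lists the keyspaces so you can confirm the one referenced in the persistence provider configuration exists.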
View full tip
ThingWorx provides multiple ways to deliver your data to the server. You can choose from the C-based EMS to your own C application that uses the C SDK, as well as SDKs for many popular languages. But what can you do if the device you want to collect data on is so small that it needs a very lightweight data delivery method? Normally you would consider using the REST web service interface and writing your own custom client to post your data, but there is an alternative: MQTT. MQTT is a lightweight protocol that can be used from an Arduino with an Ethernet Shield and can stream real time data directly to ThingWorx by installing the MQTT Marketplace Extension on your server. To learn more about how this kind of solution worked, I created this slide deck while building a hardware example: DeliveringArduinoDataToThingworx.pdf Hopefully, it can help others out who want to create this kind of solution as well.
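To get a feel for how little the device has to do, the kind of message the Arduino would publish can be simulated from any machine with the Mosquitto client tools installed (a sketch - the broker host and topic are placeholders, and the MQTT extension has to be configured to subscribe to that topic and map it to a Thing property):

# publish a single temperature reading to the broker the extension listens on
mosquitto_pub -h broker.example.com -t "arduino/device1/temperature" -m "21.5"

Because the payload is just a short topic string plus a value, this fits comfortably on microcontrollers that cannot carry a full HTTP/TLS stack.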
View full tip
We live in a connected world where we can (and want to!) receive instant updates and notifications. ThingWorx leverages the power of Web 2.0 and its Always-On technology to deliver that, but our friendly SMS providers have also provided an easy and powerful way to deliver SMS notifications right to your phone: Email to Text!

Set up a 'notification' Thing using our MailServer Template, set up your outgoing e-mail server, and you are now ready to invoke the 'SendMessage' service on a given event. All you need now is the email address of your SMS number, which you can find by following this link: List of e-Mail to SMS addresses (http://sms411.net/how-to-send-email-to-a-phone/)
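As an illustration of what the event subscription ends up calling, the notification service can also be exercised directly over the ThingWorx REST API. Everything below is an assumption for sketch purposes - the Thing name, the SMS gateway address and the SendMessage parameter names should be checked against the service definition on your own mail Thing:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "appKey: <your application key>" \
  -d '{"from": "alerts@mycompany.com", "to": "5551234567@txt.att.net", "subject": "ThingWorx alert", "body": "Pump temperature above threshold"}' \
  http://localhost:8080/Thingworx/Things/NotificationMailServer/Services/SendMessage

The "to" address is simply the email-to-SMS gateway address of the phone, which is the whole trick behind this tip.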
View full tip
Attached (as PDF) are some steps to quickly get started with the Thingworx MQTT Extension so that you can subscribe / publish topics.
View full tip
Previously
Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device
Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform

Overview

Using the C SDK Edge client configuration we did earlier, we'll now create the setup illustrated above. In this C SDK client we'll push the data to the ThingWorx publisher with server name TW802Neo, and from there to the ThingWorx subscriber with server name TW81. Notice that the SteamSensor2 on the publisher server is the one binding to the C SDK client and the FederatedSteamThing on the subscriber only gets data from the SteamSensor2. Let's crack on!

Content

Configure ThingWorx to publish
Configure ThingWorx to subscribe
Publish entities from the publisher to the subscriber
Create a Mashup to view data published to the subscriber

Pre-requisite
The minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher & subscriber at the same time.

Configure ThingWorx Publisher

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystems and configure the following highlighted sections.

Essentially it's required to configure:
The Server Identification, i.e. My Server name (FQDN / IP), My Server Description (optional)
Federation subscribers this server publishes to, i.e. all the servers you want to publish to from this server

Refer to the Federation Subsystem doc in the Help Center for a detailed description of each configurable parameter.

2. Save the federation subsystem.

Configuring a Thing to be published

1. Navigate back to the Composer home page and select the entity which you'd like to publish.
2. In this case I'm using SteamSensor2, which is created to connect to the C SDK client.
3. To publish, edit the entity and click on the Publish checkbox, like so.
4. Save the entity.

Configure ThingWorx Subscriber

Configuring the Federation Subsystem

1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystems and configure the following highlighted sections.

Essentially it's required to configure the Server Identification, i.e. My Server name (FQDN / IP), My Server Description (optional). Refer to the Help Center doc on Federation Subsystems should you need more detail on the configurable parameters. If you only want to use this server as a subscriber of entities from the publishing ThingWorx server, you don't have to configure the section Federation subscribers this server publishes to; I've configured it here anyway to show that both servers can work as publishers and subscribers.

2. Save the federation subsystem.

Configuring a Thing to subscribe to a published Thing

1. Subscribing to an entity is fairly straightforward; I'll demonstrate by utilizing the C SDK client which is currently pushing values to my remote thing called SteamSensor2 on server https://tw802neo:443/Thingworx
2. I have already published the SteamSensor2, see the above section Configuring a Thing to be published.
3. Create a Thing called FederatedSteamThing with RemoteThingWithTunnels as ThingTemplate.
4. Browse for the Identifier and select the required entity to which binding must be done, like so.
5. Navigate to the Properties section for the entity and click Manage Bindings to search for the remote properties, like so, for adding them to this Thing.
6. Save the entity and then we can see the values that were pushed from the C SDK client.
7. Finally, we can also quickly see the values pulled via a Mashup created in the subscriber ThingWorx; below is a simple mashup with a grid widget pulling values using the QueryPropertyHistory service.
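Once properties are being logged on the subscriber, the same QueryPropertyHistory call the mashup uses can be checked from the command line. This is a sketch, assuming appKey authentication, the subscriber hostname TW81 used above, and that the bound properties are being logged to a value stream:

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "appKey: <your application key>" \
  https://tw81/Thingworx/Things/FederatedSteamThing/Services/QueryPropertyHistory

An infotable with the recent property values should come back, mirroring what the grid widget displays.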
View full tip
Finally there is an article which combines all of the available resources on certificate configuration to better enable developers to complete their production-worthy edge devices. Please see the official PTC documentation located here. Please feel free to comment with any questions, comments, or feedback on this! Happy developing!
View full tip
Check out this new KCS article which links to all known best practice documents available for ThingWorx. This article will get larger in time as more articles are published related to the Dos and Don'ts of building an IoT application! Do you know when to use timers, and where to implement their subscriptions? How about ensuring info tables are used at the proper time, and data tables at others? Pesky performance issues wherein ThingWorx runs slow for apparently no reason? All of these questions and more are addressed here!
View full tip
This zip file contains my slides and buildable project from my LiveWorx 2017 developer chat presentation featuring the Thing-e Robot and Raspberry Pi Zero integration with ThingWorx using the C SDK.
View full tip
The Connection Server, InfluxDB and Grafana can work together to display Connection Server metrics in graphical charts.

Download and set up the Connection Server.

Download InfluxDB and configure it by editing the <influxdb home>\influxdb.conf file.
Note: [admin] is the InfluxDB admin login page, [http] holds the settings Grafana connects with, and [graphite] holds the settings the Connection Server connects with.

Run the influxd.exe file with the below command (Windows cmd):
<path>/influxd -config <path>/influxdb.conf

Run the influx.exe file with the below command (in my case the port is 8008):
<path>/influx -port 8008

Start the Connection Server.

Log in to InfluxDB using the URL http://localhost:8018. Choose the database graphite and run the query SHOW MEASUREMENTS.
Note: There should be many Connection Server metrics.

Double click the grafana-server.exe file (<path>/grafana-4.2.0.windows-x64/grafana-4.2.0/bin).

Log in to Grafana with the URL http://localhost:3000. The default login username and password is admin/admin.

Create a new Data Source.
Note: The Type, Url and Database properties are required. Save & Test. If the connection is set up successfully, a green message should pop up.

Create a new dashboard. Add a new Row, and set up its metrics by creating new queries.

Sample result:
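For reference, the [graphite] part of influxdb.conf that the Connection Server reports into typically looks something like the snippet below (illustrative values only, not taken from the original post - the bind address and database name must match what the Connection Server is configured to emit and what you query in Grafana):

[[graphite]]
  enabled = true
  bind-address = ":2003"
  database = "graphite"
  protocol = "tcp"

With this enabled, the metrics pushed by the Connection Server land in the "graphite" database, which is why SHOW MEASUREMENTS against that database lists them.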
View full tip
About

This is the third part of a ThingBerry related blog post series. ThingBerry is ThingWorx installed on a Raspberry Pi, which can be used for portable demonstrations without the need of utilizing e.g. customer networks. Instead the ThingBerry provides its own custom WIFI hotspot and allows Things to connect and send / receive demo data on a small scale.

In this particular blog post we'll discuss how to set up the ThingBerry for encrypted communication and trust via certificates. We will create a custom Root and Intermediate Certificate Authority as well as a server specific certificate. More information on the theory of Trust & Encryption can be found here: Trust & Encryption - Theory

As the ThingBerry is a highly unsupported environment for ThingWorx, please see this blog post for all related warnings.

Preparing the environment

To get started and keep all the working materials safe and secure within the scope of the pi user account, let's start by creating a workspace directory and then install the openssl package:

cd ~
mkdir certs
chmod 700 certs
cd certs
sudo apt-get install openssl

Troubleshooting Tips

Keytool

To look at a specific certificate use the keytool and specify the .crt file to look at:

keytool -printcert -file rootca.crt

This will show information on the certificate's configuration, such as signing authority or validity.

Timestamps

It's important that the Chain of Trust follows a timely fashion. The Root CA has the longest expiration date. Within its validity the Intermediate CA must be valid. The server specific certificate must be valid within the timeframe of the Intermediate CA validity. As the validity is calculated in seconds from the time of the signing action, I'm using the following validity periods:

Root CA: 999 days
Intermediate CA: 888 days
Server specific certificate: 777 days

Any other times that make sense can be used, as long as the time constraints are followed. Otherwise the certificate will not be recognized as valid and will therefore not be trusted.

Creating a Chain of Trust

For this example we'll create a Root and Intermediate Certificate Authority (CA) as well as the server specific certificate. In a first step we can create all of the required keys:

openssl genrsa -des3 -out rootca.key 4096
openssl genrsa -out intermediateca.key 4096
openssl genrsa -out server.key 4096

The -des3 option for the rootca will store a password and encrypt it within the key for more secure access. This tightens up security, so that the key for the rootca is not easily corrupted or exposed.

Root Certificate Authority

Having the keys, we now create the Root CA itself:

openssl req -new -x509 -sha256 -days 999 -key rootca.key -reqexts v3_req -extensions v3_ca -out rootca.crt

Enter the information, starting with the password for the Root CA key. Use an easy-to-identify name for the Common Name (CN), e.g. "My Root CA".

Intermediate Certificate Authority

As the Intermediate CA needs to be signed / approved by the Root CA, we need a Certificate Signing Request (CSR):

openssl req -new -key intermediateca.key -reqexts v3_req -extensions v3_ca -out intermediateca.csr

The v3_ca extension will mark this certificate as a CA itself. This means that we can use the Intermediate CA to sign other certificates as well.

The CSR will be signed with the Root CA's key and certificate. For this the CSR must be sent to whoever owns the Root CA. In our case, we're the owner ourselves and have everything in the same directory, so no need to send it off to Let's Encrypt, Google or VeriSign.
For signing the Intermediate CA as an actual CA, the Root CA must have a specific configuration to allow the v3_ca extension. This needs to be created via

nano v3_ca.ext

Into this new file, copy & paste the following:

basicConstraints        = CA:TRUE
subjectKeyIdentifier    = hash
authorityKeyIdentifier  = keyid:always,issuer:always
keyUsage                = cRLSign, dataEncipherment, digitalSignature, keyCertSign, keyEncipherment, nonRepudiation

Save and exit. Finally, the Root CA signs the Intermediate CA, using the v3_ca extension, via:

openssl x509 -req -in intermediateca.csr -CA rootca.crt -CAkey rootca.key -CAcreateserial -CAserial intermediateca.srl -extfile v3_ca.ext -days 888 -sha256 -out intermediateca.crt

Certificate Authority Chain

Having two CAs now, they need to be chained together. This is required later to create the keystore used in Tomcat. Creating the chain is probably the easiest part of this blog:

cat intermediateca.crt rootca.crt > cachain.crt

Server Specific Certificate

The server specific certificate has to be signed and obtained by the Intermediate CA; for this a CSR is required:

openssl req -new -key server.key -out server.csr

The request is then signed with the Intermediate CA's key and certificate. For commercial certificates this signing process will be done by publicly trusted CAs, like Let's Encrypt, VeriSign or Google. In our case, we're the Intermediate CA ourselves and can sign the CSR:

openssl x509 -req -in server.csr -CA intermediateca.crt -CAkey intermediateca.key -CAcreateserial -CAserial server.srl -days 777 -sha256 -out server.crt

Creating Keystores

For the keystore as used in Tomcat we need the CA chain as well as the server specific certificate and the corresponding key. To generate a keystore that can be used in Tomcat, we first need to create a PKCS12 type keystore with openssl:

openssl pkcs12 -export -certfile cachain.crt -in server.crt -inkey server.key -out server.p12

This PKCS12 keystore can now be transformed into a JKS keystore, by importing the PKCS12 information into a new JKS keystore:

keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS

Important: the passwords for the PKCS12 keystore and the JKS keystore must match! Tomcat requires those passwords to be the same - in case they are different, the configuration will not work and ThingWorx cannot be accessed through a secure connection.

Configuring ThingWorx

For ThingWorx, only the keystore is required. It can be copied e.g. to the certificates area in the storage directory:

sudo cp keystore.jks /thingworx/storage/certificates
sudo chmod 640 /thingworx/storage/certificates/keystore.jks
sudo chown tomcat:tomcat /thingworx/storage/certificates/keystore.jks

To enable it, Tomcat's server.xml needs to be updated:

sudo nano $CATALINA_HOME/conf/server.xml

Find the connector for port 80 and add a new connector after the port 80 connector:

<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" connectionTimeout="20000" redirectPort="8443"
           SSLEnabled="true" scheme="https" secure="true" clientAuth="false" sslProtocol="TLSv1.2" enableLookups="false"
           keystoreFile="/thingworx/storage/certificates/keystore.jks" keystorePass="<keystorePass>" />

Don't forget to update the password.
Stop Tomcat:

sudo service tomcat stop

Ensure Tomcat is actually down with

ps -ef | grep tomcat

If it's still running, just sudo kill it, and start Tomcat again to activate the new configuration:

sudo service tomcat start

And now?

Congratulations! ThingWorx is now running on your ThingBerry using a secure channel. Access it via

https://<thingberry>/Thingworx

As HTTPS automatically uses port 443, you will automatically connect to the port configured in the server.xml. Classic HTTP connections on port 80 can still be used.

For more information about setting up ThingWorx with (self-signed) certificates also see https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS193947 - there's also a note on forwarding all HTTP traffic to HTTPS. See also https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS246292 for more information about using specific SSL protocols or cipher suites.
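Before (or after) restarting Tomcat, the certificate chain itself can be sanity-checked with openssl. This is a quick sketch, not part of the original post; the second command assumes the new connector is already active on port 443:

# verify that server.crt validates against the intermediate/root chain
openssl verify -CAfile cachain.crt server.crt

# connect to the running server and check the presented chain against the Root CA
openssl s_client -connect localhost:443 -CAfile rootca.crt

The first command should print "server.crt: OK", and the s_client output should end with "Verify return code: 0 (ok)" if Tomcat presents the full chain from the keystore.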
View full tip