IoT & Connectivity Tips

An introduction to installing the ThingWorx platform. Information on the environment, prerequisites, and configuration steps when installing ThingWorx. Includes walkthroughs of installing with H2 and PostgreSQL databases, an introduction and demonstration of the Linux installation script, solutions to common installation problems and more.     For full-sized viewing, click on the YouTube link in the player controls.   Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
A confusion matrix is a technique for summarizing the performance of a classification algorithm. Classification accuracy alone can be misleading if you have an unequal number of observations in each class or if you have more than two classes in your data set. Calculating a confusion matrix gives you a better idea of what your classification model is getting right and what types of errors it is making.

Classification Accuracy and its Limitations
Classification Accuracy = Correct Predictions / Total Predictions
The main problem with classification accuracy is that it hides the detail you need to better understand the performance of your classification model. Two examples:
1. When your data has more than 2 classes. With 3 or more classes you may get a classification accuracy of 80%, but you don't know whether that is because all classes are being predicted equally well or whether one or two classes are being neglected by the model.
2. When your classes are not evenly distributed. You may achieve an accuracy of 90% or more, but this is not a good score if 90 records out of every 100 belong to one class, because you can achieve that score by always predicting the most common class value.
Classification accuracy can hide the detail you need to diagnose the performance of your model, but we can tease that detail apart by using a confusion matrix.

Confusion Matrix Terminology
A confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known. Let's start with an example for a binary classifier (N = 165):

              Predicted No   Predicted Yes
Actual No          50              10
Actual Yes          5             100

What can we learn from this confusion matrix?
There are two possible predicted classes: "yes" and "no". If we were predicting the presence of a disease, for example, "yes" would mean the patient has the disease and "no" would mean they don't. The classifier made a total of 165 predictions (e.g., 165 patients were tested for the presence of the disease). Out of those 165 cases, the classifier predicted "yes" 110 times and "no" 55 times. In reality, 105 patients in the sample have the disease and 60 do not.

Let's now define the most basic terms, which are whole numbers (not rates):
True positives (TP): we predicted yes (they have the disease), and they do have the disease.
True negatives (TN): we predicted no, and they don't have the disease.
False positives (FP): we predicted yes, but they don't actually have the disease. (Also known as a "Type I error.")
False negatives (FN): we predicted no, but they actually do have the disease. (Also known as a "Type II error.")

              Predicted No   Predicted Yes   Total
Actual No       TN = 50         FP = 10        60
Actual Yes      FN = 5          TP = 100      105
Total              55              110         165

The following rates are often computed from a confusion matrix for a binary classifier:
Accuracy: overall, how often is the classifier correct? (TP+TN)/total = (100+50)/165 = 0.91
Misclassification Rate: overall, how often is it wrong? (FP+FN)/total = (10+5)/165 = 0.09. Equivalent to 1 minus Accuracy; also known as the "Error Rate".
True Positive Rate: when it's actually yes, how often does it predict yes? TP/actual yes = 100/105 = 0.95. Also known as "Sensitivity" or "Recall".
False Positive Rate: when it's actually no, how often does it predict yes? FP/actual no = 10/60 = 0.17
Specificity: when it's actually no, how often does it predict no? TN/actual no = 50/60 = 0.83. Equivalent to 1 minus the False Positive Rate.
Precision: when it predicts yes, how often is it correct? TP/predicted yes = 100/110 = 0.91
Prevalence: how often does the yes condition actually occur in our sample? Actual yes/total = 105/165 = 0.64
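To make the arithmetic above concrete, here is a small JavaScript sketch (the same language used for ThingWorx services) that derives these rates from the four base counts. The variable names are illustrative only and the values mirror the example table above.

// Confusion-matrix counts from the example above
var tp = 100; // true positives
var tn = 50;  // true negatives
var fp = 10;  // false positives
var fn = 5;   // false negatives

var total = tp + tn + fp + fn;     // 165
var actualYes = tp + fn;           // 105
var actualNo = tn + fp;            // 60
var predictedYes = tp + fp;        // 110

var accuracy = (tp + tn) / total;          // 0.91
var errorRate = (fp + fn) / total;         // 0.09, equal to 1 - accuracy
var recall = tp / actualYes;               // 0.95 (true positive rate / sensitivity)
var falsePositiveRate = fp / actualNo;     // 0.17
var specificity = tn / actualNo;           // 0.83, equal to 1 - falsePositiveRate
var precision = tp / predictedYes;         // 0.91
var prevalence = actualYes / total;        // 0.64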
View full tip
The following PowerPoint contains some reference slides to get started with DSE/ThingWorx integration. Start by understanding the DSE architecture and, specifically, how it differs from regular relational databases. Free online courses offered by DataStax Academy: –https://academy.datastax.com/courses/understanding-cassandra-architecture –https://academy.datastax.com/courses/installing-and-configuring-cassandra   The following section will guide you through some of the specifics: http://datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAbout_c.html
View full tip
Key Functional Highlights
Production Advisor is now available in the Freemium and Developer Kit downloads. Plant managers are provided with real-time monitoring of production status and critical KPIs such as utilization, performance, quality and OEE, by unifying data from disparate lines, assets and sensors. With Production Advisor, plant managers can detect and react instantly to production issues, achieving lower downtime, higher production throughput and better quality from factory resources.
Compatibility
ThingWorx 8.0.1; KEPServerEX 6.2; KEPServerEX V6.1 and older, as well as different OPC servers (with the Kepware OPC aggregator).
Documentation
ThingWorx Manufacturing Apps Setup and Configuration Guide: https://support.ptc.com/WCMS/files/173133/en/ThingWorxManufacturingAppsSetup_8-0-1.pdf ThingWorx Manufacturing Apps Customization Guide: https://support.ptc.com/WCMS/files/173135/en/ThingWorxManufacturingAppsCust_8-0-1.pdf Get Started documentation on the portal: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard/Get-Started (PTC users should use their normal login credentials and do not need to register on the portal.)
Download
Freemium and Developer Kit (8.0.1) are available for download here: https://www.ptc.com/en/thingworx/manufacturing-apps/Dashboard (PTC users should use their normal login credentials and do not need to register on the portal.) ThingWorx Platform Extensions (8.1.0, released 1 Nov 2017) are available for download here: https://support.ptc.com/appserver/auth/it/esd/product.jsp?prodFamily=TWA
View full tip
Preface
This guide applies to a clean installation of the CentOS 7 Minimal distribution. This is labeled as "Minimal ISO" on the CentOS.org website, and the filename of the ISO image used to install the operating system will resemble "CentOS-7-x86_64-Minimal-1611.iso". The machine used in this guide was a virtual machine created using Oracle VirtualBox, but the same steps should apply to any machine with a clean CentOS 7 Minimal install. It is, however, possible that some installations may encounter slight variations due to hardware configurations.
Before starting
Unzip the downloaded "MED-..._ThingWorx-Analytics-Server-Linux-Standalone-8-0-0.zip". Inside the unzipped directory you will find a file called "ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run". Before running step 10, upload that file to your CentOS machine using an SFTP or SCP tool of your choice.
Configuration and installation steps
Step 1: Install Docker with the following commands (these steps are described at https://docs.docker.com/engine/installation/linux/docker-ce/centos/#install-using-the-repository):
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
Step 2: Create a group called docker (if this command reports that the group already exists, that is OK; you can move to the next step):
groupadd docker
Step 3: Add your non-root user to the docker group. In this example my non-root user is called "thingworx"; please replace it with the correct username:
usermod -aG docker thingworx
Step 4: Start the Docker service and enable it to auto-start after reboot:
systemctl start docker
systemctl enable docker
Step 5: Verify that Docker is working:
docker ps
Step 6: After running the above command you should see a single line of output that resembles the following:
"CONTAINER ID        IMAGE              COMMAND            CREATED            STATUS              PORTS              NAMES"
Step 7: Disable SELinux with the two following commands. Note that by doing this you will want to make sure, if this is a public-facing server, that you take appropriate security measures to lock down the system.
setenforce 0
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
Step 8: Set the hostname of your machine to something other than the default, which is "localhost.localdomain". In this case I am using the name "centos"; this can be replaced with a name of your choosing:
hostname centos
echo "centos" > /etc/hostname
Step 9: Allow traffic through the default CentOS firewall. Note that in a production environment the firewall should be configured more granularly, allowing incoming traffic only on the required ports (5432, 2181 and 8080). Please refer to the CentOS documentation and consult security best practices within your organization for more information. The following commands will completely disable the CentOS firewall.
systemctl disable firewalld
systemctl stop firewalld
Step 10: Ensure the ThingWorx Analytics Server installer is executable, then run the installer. You may have to change to the directory where the installer was uploaded to the machine; in this case I have it in the home directory of the user named thingworx. Please replace that path with the correct path for your machine. Note that below are 3 separate commands.
cd /home/thingworx
chmod +x ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run
./ThingWorxAnalyticsServer-8.0.0-linux-x64-installer.run
Step 11: Verify that the ThingWorx Analytics Server installation is successful. Note that it may take a few minutes for the system to become available; retry the command after a few minutes if an error is initially encountered.
curl http://127.0.0.1:8080/analytics/1.0/about/versioninfo
NOTE: The response from the above command should resemble the following:
{"implementationVersion":"8.0.0"}
View full tip
A test was run against Connection Server 7.2. On a machine with a 4-core CPU and 8 GB of memory, we sent 1,000 HTTP requests every second and 5% of the HTTP requests were lost. After changing the Connection Server configuration, the loss rate dropped to 0.86%. Here are some suggestions to improve Connection Server performance:
1. Adjust parameters in the Connection Server configuration file cxserver.conf (..\conf\cxserver.conf), in particular max-connection-pool-size and max-wait-queue-size (a hedged example follows below).
2. Change the default JVM settings (increase the memory allotted to the JVM appropriately). In this case, I created a new file named startMyConnectionServer.bat with the code below:
SET CONNECTION_SERVER_HOME=C:\connection-server-7.2.0.2095
SET JAVA_OPTS=-Xms2G -Xmx2G
%CONNECTION_SERVER_HOME%\bin\connection-server.bat
3. Increase the Connection Server's hardware (memory, CPU cores). The minimum system requirement is 16 GB of memory and 4 CPU cores. Refer to ThingWorx Core 8.0 System Requirements for more hardware information.
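As a sketch of the first suggestion, the two parameters might be raised as shown below. The values and the exact section they live under are illustrative assumptions, not product documentation; locate the existing entries in your shipped cxserver.conf and change their values in place.

# Hypothetical cxserver.conf fragment -- adjust to match the structure of your file
# max-connection-pool-size: upper bound on pooled connections to the platform
max-connection-pool-size = 200
# max-wait-queue-size: how many requests may queue while waiting for a connection
max-wait-queue-size = 5000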
View full tip
Please note that the below configuration is intended for testing purposes only. Make sure that your final deployment is within your business security policies. The installation guide can be found at: http://support.ptc.com/WCMS/files/173161/en/ThingWorxDockerInstaller.pdf
Postgres:
Reference the installation guide above for a supported version of Postgres. Once deployed, configure it to support remote connections:
Navigate to: <PostgresInstallPoint>\data
Open the following with a text editor: pg_hba.conf
Find the line with IPv4 local connections
Change 127.0.0.1/32 to 0.0.0.0/0 (see the example entry at the end of this tip)
Restart the PostgreSQL server
NOTE: This could open up security vulnerabilities on the database, so make sure you take appropriate security measures if the data will be sensitive.
Docker:
Find the appropriate Docker platform for your OS.
Docker Community Edition: for Windows Server 2016, there is a download for the Edge (Windows Server 2016) edition under the above link -> Docker CE for Windows -> and then scroll down a little bit.
Docker Toolbox: if you try to deploy the Docker Community Edition on a system that doesn't support it, it will direct you to this installation instead.
At some point during or after the installation, it will prompt you to enable Hyper-V. If this is a physical server, these settings will be in your BIOS. For VMware, while the VM is powered down, go to VM -> Virtual Machine Properties -> Hardware -> Processors -> enable 'Virtualize Intel VT-x/EPT or AMD-V/RVI'.
Restart, and make sure Docker is running (whale icon in your system tray for the Windows Server 2016 Edge version).
With Docker running, open a command prompt and look at your IP settings. For Windows Server 2016, right-click the Start menu -> Command Prompt (Admin) and run IPCONFIG. Write down the IP assigned to DockerNAT, as this will be your Postgres host later.
Share your main drive with Docker. In Windows Server 2016, right-click the Docker icon in the system tray -> Settings -> Shared Drives -> C:
ThingWorx Installation:
At this point you should have Docker installed and Postgres remotely configured with only the admin user (postgres). The installer will create the image/container inside of Docker, install Tomcat, and configure your database. Below is a capture of the settings used in the above screenshots. Anything not listed (like specifying the container name, which is twxfoundation by default) was left at the default values:
      Installation Directory: C:\Program Files (x86)\twxEnterpriseFoundationPostgresDocker
      ThingWorx License Directory: C:\Users\Administrator\Desktop\license.bin
      Local ThingWorx Foundation Port: 8080
      Java Initial Heap setting for TWX Foundation: 1024
      Java Max Heap setting for TWX Foundation: 2048
      RDS Instance: 1
      PostgreSQL Host: 10.0.75.1
      PostgreSQL Port: 5432
      PostgreSQL Admin Schema: postgres
      PostgreSQL Admin Username: postgres
      PostgreSQL Admin Password: <see note>
      PostgreSQL ThingWorx Foundation Schema: thingworx
      PostgreSQL ThingWorx Foundation Username: thingworx
      PostgreSQL ThingWorx Foundation Password: <see note>
      PostgreSQL ThingWorx Tablespace Location: /
NOTE: It is highly recommended to use a complex password (letters of all cases, numbers, and symbols) as we have opened up our database to remote connections.
RDS was set to Yes (the default is No). PostgreSQL Host is the IP taken from the earlier steps. In this example, the tablespace location is defined inside of Docker, not Windows.
Post Install:
Confirm that ThingWorx is running properly by opening a browser and attempting to log in. For our example, the URL is http://10.0.75.1:8080/Thingworx
Troubleshooting:
If the installation fails, refer to the end of the installation guide for where to look for logs and which items need to be cleaned up before attempting to install again.
If the install was successful but connecting fails, run the following in the command prompt to look at the Docker server's startup logs for hints:
docker logs -f twxfoundation
*Note that twxfoundation is the default during installation. If this was changed in your installation, use that name instead.
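To illustrate the pg_hba.conf change described above, the relevant IPv4 entry before and after might look like the following. This is a sketch only; keep the database, user, and authentication method columns of your own file and change just the address.

# TYPE  DATABASE  USER  ADDRESS        METHOD
# Original "IPv4 local connections" entry:
host    all       all   127.0.0.1/32   md5
# Changed to accept remote connections from any IPv4 address (testing only):
host    all       all   0.0.0.0/0      md5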
View full tip
Ran into this recently; thought I'd share an approach to getting a table with a multi-column distinct while retaining all the columns of the row. If you use Distinct, you get only the columns you run Distinct on. This isn't very helpful if you want the 'latest' or the 'first occurrences' of records in your table where a combination of fields is unique. For example, I had Process, Part, Dimension and Point, for which I had multiple value and date-time entries, but I only wanted the latest entries. The following is how I solved it; if you have a better way, please leave a comment!
P.S.: for the query I used the awesome query builder available in the snippet section!
---------------------------------------
var q1Result = Things["MyThing"].QueryStreamEntriesWithData({maxItems:99999, query:query1});

//Below creates a temporary measurement table to store the latest measurement values
var params = {
    infoTableName : "InfoTable",
    dataShapeName : "MyDatashape.DS"
};
// CreateInfoTableFromDataShape(infoTableName:STRING("InfoTable"), dataShapeName:STRING):INFOTABLE(MyDataShape.DS)
var tempTable1 = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape(params);

// Extract only the latest measurements for the PART from the measurement result table 'q1Result'
//The way we are going to reduce this to unique measurements is
//1. records are in reverse order of date time
//2. get distinct by Process Part Dim Point
//3. Step through and match against distinct set
//4. First match goes into final set
//5. Upon match remove from distinct set
//6. If no match then skip record
//7. If no more distinct match records break loop
var params = {
    t: q1Result /* INFOTABLE */,
    columns: 'ProcessID,PartID,Dimension,Point' /* STRING */
};
// result: INFOTABLE
var distinctResult = Resources["InfoTableFunctions"].Distinct(params);

for (var x = 0; x < q1Result.rows.length; x++) {
    var query = {
      "filters": {
        "type": "AND",
        "filters": [
          { "fieldName": "ProcessID", "type": "EQ", "value": q1Result.rows[x].ProcessID },
          { "fieldName": "PartID",    "type": "EQ", "value": q1Result.rows[x].PartID },
          { "fieldName": "Dimension", "type": "EQ", "value": q1Result.rows[x].Dimension },
          { "fieldName": "Point",     "type": "EQ", "value": q1Result.rows[x].Point }
        ]
      }
    };
    var params = {
        t: distinctResult /* INFOTABLE */,
        query: query /* QUERY */
    };
    // result: INFOTABLE
    var matchResult = Resources["InfoTableFunctions"].Query(params);
    if (matchResult.rows.length == 1) {
        tempTable1.AddRow(q1Result.rows[x]);
        var params = {
            t: distinctResult /* INFOTABLE */,
            query: query /* QUERY */
        };
        // result: INFOTABLE
        var distinctResult = Resources["InfoTableFunctions"].DeleteQuery(params);
        if (distinctResult.rows.length == 0) {
            break;
        }
    }
}
//I now have a tempTable1 with the full rows and the 4 fields distinct
result = tempTable1;
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
This is a useful trick for rolling up metrics in ThingWorx across various levels of a hierarchy by using Networks, ThingShapes, and recursive service definitions. Say that you have a hierarchy of Things in your model such as a Global view, a Region view, and a Store view -- this could be a manufacturing plant, a building, an asset, etc.; whatever the core metric-producing Thing is in your model -- where your Store has KPIs that you want to roll up across regions and globally.
First, create a template for each of your hierarchical levels. In my case these are GlobalTemplate, RegionalTemplate, and StoreTemplate. Add a property to your StoreTemplate that will be the KPI. Now, create a Thing for the Globe and Things for each of your Regions and Stores, and add them to a hierarchical Network.
Next, we create a ThingShape to aggregate our KPIs and apply it to the Global, Regional, and Store templates. We define a recursive service on our ThingShape called GetKPI with the following:
//define our base case, when the thing template we are on is the lowest level of our hierarchy, in this case the StoreTemplate
if (me.thingTemplate == "StoreTemplate") {
    //in our base case, the result is just the property for the metric we want to aggregate
    result = me.someMetric;
} else {
    //otherwise, we are at some other level in the hierarchy and we need to get our child connections from the network
    //this gets all the things below us in the network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);
    //loop through each of the things below us in the hierarchy and recursively add the result of GetKPI() to our result
    result = 0;
    for each (var row in network.rows) {
        result += Things[row.to].GetKPI();
    }
}
This is a simple case of just summing up a single property, but we can take this further using the Union and Aggregate snippets provided by ThingWorx to do other kinds of summarization. First add a new property called someAvgMetric to our StoreTemplate, and define a new service GetKPIProperties on the StoreTemplate, with an InfoTable result:
var params = {
    propertyNames: {"items": ["someMetric", "someAvgMetric"]} /* JSON */
};
// result: INFOTABLE dataShape: "undefined"
var result = me.GetNamedProperties(params);
Now, define a new service on our ThingShape to use this service as our base case and aggregate the resulting InfoTable when necessary. We'll call this service GetKPIAggregates:
//define our base case
if (me.thingTemplate == "StoreTemplate") {
    //this function will be on the StoreTemplate, and returns the base infotable
    result = me.GetKPIProperties();
} else {
    //grab our network
    var params = {
        name: me.name /* STRING */
    };
    // result: INFOTABLE dataShape: NetworkConnection
    var network = Networks["Network"].GetChildConnections(params);
    //need to create an empty infotable to union into. I glossed over this, but you'll need a datashape here
    //create empty infotable
    var result = Resources["InfoTableFunctions"].CreateInfoTableFromDataShape({ infoTableName: "InfoTable", dataShapeName: "KPIDataShape" });
    //loop through and union each of our results into our new infotable
    for each (var row in network.rows) {
        var params = {
            t1: result /* INFOTABLE */,
            t2: Things[row.to].GetKPIAggregates() /* INFOTABLE */
        };
        var result = Resources["InfoTableFunctions"].Union(params);
    }
    //aggregate each of our fields
    var params = {
        t: result /* INFOTABLE */,
        columns: "someMetric,someAvgMetric" /* STRING */,
        aggregates: "SUM,AVERAGE" /* STRING */,
        groupByColumns: undefined /* STRING */
    };
    // result: INFOTABLE
    var result = Resources["InfoTableFunctions"].Aggregate(params);
    //need to loop through each of our field names and make them match our base infotable
    // infotable datashape iteration
    var dataShapeFields = result.dataShape.fields;
    for (var fieldName in dataShapeFields) {
        var stringName = dataShapeFields[fieldName].name;
        var params = {
            t: result /* INFOTABLE */,
            from: stringName /* STRING */,
            to: stringName.split("_")[1] /* STRING */
        };
        // result: INFOTABLE
        var result = Resources["InfoTableFunctions"].RenameField(params);
    }
}
Now, in our mashups, we can use a DynamicThingShape and call our GetKPI or GetKPIAggregates service at any level in our network, and our data will be aggregated correctly for whatever level we are at in the hierarchy!
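For example, once the network and services exist, the rolled-up values can be read from any level with calls like the ones below. The Thing names are hypothetical and should match the Things you created for your Globe, Regions, and Stores.

// Total KPI summed across every store in every region
var globalTotal = Things["Globe"].GetKPI();
// Summed and averaged KPIs (as an infotable) for a single region's stores
var regionKPIs = Things["Region1"].GetKPIAggregates();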
View full tip
This has been moved to its new home in the Augmented Reality Category in the PTC Community.
View full tip
In this blog I will be testing the SAPODataConnector using the SAP Gateway - Demo Consumption System.
Overview
The SAPODataConnector enables connection to the SAP NetWeaver Gateway through the OData specification. It is a specialized implementation of the ODataConnector. See Integration Connectors for documentation. It relies on three components:
Integration Runtime: a microservice that runs outside of ThingWorx and has to be deployed separately; it uses a WebSocket to communicate with the ThingWorx platform (similar to the EMS).
Integration Subsystem: available by default since 7.4 (no extension needed).
Integration Connector: the SAPODataConnector is available by default in 8.0 (no extension needed).
ThingWorx can use OAuth to access SAP, but in this blog I will just use basic authentication.
SAP NetWeaver Gateway Demo system registration
1. Create an account on the Gateway Demo system (the credentials to be used on the connector are sent by email).
2. Verify that the account has access to the basic OData sample service: https://sapes4.sapdevcenter.com/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/
Integration Runtime microservice setup
1. Follow WindchillSwaggerConnector hands-on (7.4) - Integration Runtime microservice setup. Note: only one Integration Runtime instance is required for all your Integration Connectors (multiple instances are supported for high availability and scale).
SAPODataConnector setup
Use the New Composer UI (some settings, such as API maps, are not available in the ThingWorx legacy Composer).
1. Create a DataShape that is used to map the attributes being retrieved from SAP. SAPObjectDS: Id (STRING), Name (STRING), Price (NUMBER).
2. Create a Thing named TestSAPConnector that uses SAPODataConnector as its thing template.
3. Set up the SAP NetWeaver Gateway connection under TestSAPConnector > Configuration.
Generic Connector Connection Settings: Authentication Type = fixed
HTTP Connector Connection Settings: Username = <SAP Gateway user>; Password = <SAP Gateway pwd>; Base URL: https://sapes4.sapdevcenter.com/sap; Relative URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/; Connection URL: /opu/odata/IWBEP/GWSAMPLE_BASIC/$metadata
4. Create the API maps and service under TestSAPConnector > API Maps (New Composer only).
Mapping ID: sap; EndPoint: getProductSet; Select DataShape: SAPObjectDS (created at step 1) and map the following attributes: Name <- Name; Id <- ProductID; Price <- Price. Pick "Create a Service from this mapping".
Testing our Connector
Test the TestSAPConnector::getProductSet service (keep all the input parameters blank).
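Once the service has been generated from the API map, it can also be invoked from any other ThingWorx service. The snippet below is a minimal sketch, assuming the Thing and service names created above and that the generated service needs no input parameters (as in the test above).

// Call the generated OData-backed service and read the mapped fields
var products = Things["TestSAPConnector"].getProductSet();
// products is an infotable shaped by SAPObjectDS (Id, Name, Price)
if (products.rows.length > 0) {
    logger.info("First product: " + products.rows[0].Name + " @ " + products.rows[0].Price);
}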
View full tip
Introduction
The SAP HANA database is among the several options available for ThingWorx as a persistence provider. In this blog I'll share the experience and challenges I had while setting up SAP HANA and then configuring it to run as the persistence provider for ThingWorx.
Prerequisites
Eclipse Mars or higher
SAP HANA Express Edition 2.0
ThingWorx HANA Persistence Provider package
Setting up SAP HANA
SAP HANA Express Edition 2.0 is freely available from SAP as a download of a pre-built virtual machine running SUSE Enterprise Linux. For this blog I used the SAP HANA Express Edition 2.0 server-only version.
1. Download the Server only or Server + Application VM from SAP's website.
2. Unzip and open the hxe.ova file (the file name will differ if you are downloading the Server + Application VM) in a hypervisor of your choice; in my case I'll be using VMware Workstation, but it should work just as well with others, e.g. Oracle VirtualBox.
3. Once opened in the hypervisor, SUSE will boot up and you will be prompted for a new password, as the VM is configured to force a first-time password change.
4. If you are unsure of the next steps to follow, refer to the Getting Started with SAP HANA Express Edition guide, which is selected by default and gets downloaded along with the VM.
5. Note that the PDF also contains all the details on the username and password required for the initial setup.
6. Provide the password for the OS user, which is hxeadm; you can always refer to the PDF file downloaded with the VM from SAP's download manager.
7. Once the password is changed for the OS user, you will also be prompted to change the master DB user's password.
8. Make note of these passwords.
SAP HANA DB instance
After completing the initial steps for setting up the VM and changing the passwords, we can now check the pre-built database installed on the VM, which is called SystemDB. You can check that, along with some basic settings of the running instance, with the following command in the shell:
$ HDB info
This command will print out the features and components installed on the VM; if you are using the server-only VM as I did, you should see output like this printed to the shell. In case it doesn't show up, you can start the database and check the running services again:
$ HDB start
The output will be something like this; note that the highlighted part of the output below already includes a couple of additional services, because I have already created the ThingWorx database.
FYI: if you wish to shut down the database, you can use the following command:
$ HDB stop
With this, let's move to the next step: creating the new database that will be used by the ThingWorx application. Note that SAP HANA can run in multi-tenant or single-tenant mode; with the free edition, i.e. SAP HANA Express Edition, multi-tenant is the only available option. This is why we will create the ThingWorx database and schema and populate it with the tables and other DB entities needed to support the persistence provider for the ThingWorx web application.
Creating and configuring the ThingWorx database with the ThingWorx schema
For this you'll need to download the ThingWorx SAP HANA persistence provider package from the PTC Software Download page, which contains the DB creation scripts. You can then choose to create the schema in two ways:
a. Creating and configuring the database using the PTC-provided Bash scripts
b. Manually creating and configuring the SAP HANA database
Both of these are covered in the Getting Started with SAP HANA and ThingWorx guide. For this blog I created everything manually, connecting via the client I have installed as a plugin in my existing Eclipse Mars. If you are running a different version of Eclipse, check this link; if you are also using Eclipse Mars you can use this URL in Eclipse > Help > Install New Software > https://tools.hana.ondemand.com/mars/
Once installed, make sure to restart Eclipse and open the SAP HANA Administration Console perspective. Since I have already installed the tools and created some connections to the DB, they are visible on the left side of Eclipse.
Creating a connection to the SAP HANA database
1. Navigate to the Systems tab on the left in the SAP HANA Administration Console perspective in Eclipse.
2. Click on the Add System icon located on the Systems tab.
3. Provide some basic connection details:
a. Host Name: IP or machine name of the VM running SAP HANA Express Edition
b. Instance Number: 90
c. Mode: if you are using the free Express Edition as I have for this blog, you will be required to select Multiple containers, since we will create ThingWorx as a tenant DB alongside the SystemDB.
d. Select System Database, since we have not yet created a database called ThingWorx. Click Next.
4. On the next page provide the username and password. I will be logging in as the SYSTEM user, since I have not yet created any other user, and I'll use the master DB password set when configuring the VM for the first time.
5. Click Finish to complete and test the connection.
Creating the ThingWorx user and ThingWorx database
A successful connection creation will show the connection with a green icon under the Systems tab, which then lets you explore the objects existing under the SYSTEM database. To begin using SQL statements, select the connection and click on the SQL tab. With this you have now successfully created the connection and also have a SQL client to execute SQL tasks.
As mentioned previously, you have two options to set up the schema and the data model for the ThingWorx database:
1. Use the shell scripts to create and configure the ThingWorx database and its schema, together with the required user and the data model. For this you will need the following:
a. thingworxHanaDatabaseSetup.sh
b. thingworxHanaSchemaSetup.sh
Note: these two shell scripts call the SQL procedures provided in the install folder of the ThingWorx-Platform-Hana-X-X-X.zip you downloaded from PTC Software Download. Make sure that the user executing the scripts has all the required files in place, preferably in the same folder as the script files, and has sufficient access rights to execute them.
2. The other option is to do everything manually, i.e.
create the database, its schema, its user and the data model via the SQL scripts provided in the install folder.
Both of these options are covered in detail in the installation guide, so I'd recommend referring to that for the step-by-step process. For creating a SQL connection using the newly created TWADMIN user, follow the steps described in the section "Creating a connection to the SAP HANA database" above. With this we are now connected to an empty THINGWORX database, under which the TWADMIN schema will be created using the SQL scripts provided in the install directory of the ThingWorx-Platform-Hana-X-X-X.zip package. Note that it is important that the scripts listed below are executed in the following order:
1. thingworx-model-schema.sql
2. thingworx-data-schema.sql
3. thingworx-property-schema.sql
4. thingworx-data-proc.sql
Checking the port for the ThingWorx database created in SAP HANA
This can be checked by executing the following command directly from the SQL client in Eclipse. You may need to execute this command using the SYSTEM user account if any permission errors are raised in the client:
select * from "SYS_DATABASES".m_services
This will return all the databases and services with their ports; check the SQL port for the ThingWorx database.
Configuring the SAP HANA persistence provider
Now that we have met all the prerequisites to connect to our ThingWorx database in SAP HANA, let's configure the platform-settings.json file placed under \\<ThingworxInstallationDirectory>\ThingworxStorage\
Note: The sample platform-settings.json included with the downloaded ThingWorx-Platform-Hana-X-X-X.zip package incorrectly includes the PostgresPersistenceProviderPackage. A Jira is already filed and can be tracked via the knowledge base article "ThingWorx-Platform-Hana-X-X-X.zip package incorrectly provides sample platform-settings.json with PostgresPersistenceProviderPackage configuration for ThingWorx Installation". As a workaround for now, make sure to replace the PostgresPersistenceProviderPackage with HanaPersistenceProviderPackage, like so:
{ "PersistenceProviderPackageConfigs": { "HanaPersistenceProviderPackage": { "ConnectionInformation": { "driverClass": "com.sap.db.jdbc.Driver", "jdbcUrl": "jdbc:sap://<IPAddressForSAPHana>:39041/?autocommit=false&currentschema=TWADMIN", "dbSchema": "TWADMIN", "username": "TWADMIN", "password": "Thingworx123", "acquireIncrement": 5, "acquireRetryAttempts": 50, "acquireRetryDelay": 10000, "checkoutTimeout": 1000000, "fetchSize": 5000, "idleConnectionTestPeriod": 60, "initialPoolSize": 5, "maxConnectionAge": 0, "maxIdleTime": 0, "maxIdleTimeExcessConnections": 300, "maxPoolSize": 100, "maxStatements": 100, "minPoolSize": 5, "numHelperThreads": 8, "testConnectionOnCheckout": false, "unreturnedConnectionTimeout": 0 } } } }
Configuring Tomcat to connect to the SAP HANA ThingWorx database
The ThingWorx SAP HANA persistence provider connects to the SAP HANA database instance using ngdbc.jar. As mentioned in the Getting Started with SAP HANA and ThingWorx guide, this jar must be placed under <TomcatInstallationDirectory>\lib (for more detail see the ThingWorx SAP HANA guide). You can obtain this jar by installing the SAP HANA Client or the SAP HANA plugin in Eclipse.
Note: There is currently a known issue where ThingWorx fails to connect to SAP HANA Express Edition 2.0 when a newer version of ngdbc.jar (v2.0 or above) is used. A Jira, already filed for this, can be tracked in the PTC Knowledge Base article "Tomcat fails to start with error 'Failed to create com.thingworx.persistence.hana.HanaPersistenceProviderPackage. Cause was due to null' when installing ThingWorx with SAP HANA Persistence Provider". You can download the jar directly from the SAP HANA web page with a valid service account. Since the older version is not available for direct download from SAP's download page without a service account, it can also be obtained by installing an older Eclipse, e.g. Kepler, and then installing the SAP HANA plugin on top of it, which includes the 1.XX version of the jar in the folder created by the SAP HANA Tools.
For more detail on installing and configuring Tomcat or setting up the license file, refer to the ThingWorx Core installation guide.
Verifying the installation
Finally, start the Tomcat service and try to access the ThingWorx Composer by typing in the URL, e.g. http://localhost:8080/Thingworx
You can also verify the connectivity between ThingWorx and SAP HANA by navigating to the application.log located under ThingworxStorage\logs and looking for: INFO [T: localhost-startStop-1] Database initialization complete. This confirms the database was initialized and connected to successfully.
Common connectivity issues
Issue 1: After deploying and starting Tomcat, you may see that the ThingWorx web application stalls despite logging the message that database initialization started.
Cause: This can happen for a variety of reasons, e.g.
a. An incorrect port was used in the JDBC connection URL in platform-settings.json
b. The ThingWorx database name is spelled incorrectly
c. The connection is timing out, e.g. due to network issues
Solution:
a. For the port, verify which port the database service is listening on by executing the command shown above
b. Connection retries can be increased in the platform-settings.json file
c. Reconfirm the correct username
Issue 2: The application log in ThingWorx Tomcat logs an error that the persistence provider is null.
Cause: An incorrect jar version is being used.
Solution:
a. Recheck the version of the jar used under the tomcat\lib folder
View full tip
This blog post has been written in collaboration with nexiles GmbH, a PTC Software Partner.
The Edge Micro Server (EMS) and the LUA Scripting Resource (LSR) allow for an easy connection of sensors and/or data to the ThingWorx platform.
Connections are usually established through the HTTP protocol and a REST API, which sends unencrypted data and user-specific credentials or ThingWorx application keys. This may be fine for a non-production environment; however, in a more secure environment or in production, encryption and authentication should be used wherever and whenever possible to ensure the safety and integrity of the data.
In this scenario a user, client machine or remote device can utilize the LSR endpoints via the REST API. For this an authentication mechanism with credentials is required so that only authenticated sessions are allowed. Invocation of services etc. is then protected and encrypted via HTTPS. A default HTTP connection would transfer the credentials as clear text, which is most likely not desired.
The LSR then communicates via HTTPS with the EMS.
The EMS communicates with the ThingWorx platform via the AlwaysOn protocol, using an encrypted websocket on the platform side.
Prerequisite
ThingWorx is already fully configured for HTTPS. See Trust & Encryption Theory and Hands On for more information and examples.
Securing the EMS
EMS to ThingWorx connection
The config.json holds the complete EMS configuration. To add a trusted and secure connection to the ThingWorx platform, the servers, connection type and certificates have to be adjusted. Check config.json.complete for more information and individual settings. As an example, the following configuration could be a rough outline:
Switch the websocket server port to 443 in ws_servers
Declare the connection to the websocket as SSL/TLS in ws_connection
Declare the certificate used by the ThingWorx platform in certificates
Validate the certificate
In case you're using a self-signed certificate, allow using it -- otherwise just say "false"
If not using a self-signed certificate, but a certificate based on a chain of trust, point to the full chain of trust in the cert_chain parameter
I used the client certificate as an X509 (PEM) formatted .cer file
I used the client certificate's private key as PKCS #8 (PEM) and encrypted it with the super-secure password "changeme" (no really, change it!)
With this the EMS can connect securely to the ThingWorx platform websocket.
EMS as HTTP(S) server
To secure incoming connections to the EMS, it first of all needs to act as an HTTP server, which is then configured to use a custom certificate.
In config.json add the http_server section
Define the host and port and set SSL to true
The full path of the certificate (.cer file) must be provided
With this the EMS can receive client requests from the LSR through a secure interface, protecting the (meta)data sent from the LSR to the EMS. For my tests I'm using a self-signed certificate, created in the Keystore Explorer and exported as an X509 (PEM) formatted .cer file. The private key is not required for this part.
Authentication
To further secure the connection to the EMS acting as an HTTP(S) server, it's recommended to use a user and password for authentication. With this, only connections that carry the configured credentials in the HTTP header are accepted.
config.json

{
     "ws_servers": [ { "host": "supersecretems.ptc.com", "port": 443 } ],
     "appKey": "<#appKey>",
     "http_server": {
          "host": "localhost",
          "port": 8000,
          "ssl": true,
          "certificate": "C:\\ThingWorx\\ems.cer",
          "authenticate": true,
          "user": "EMSAdmin",
          "password": "EMSAdmin",
          "content_read_timeout": 20000
     },
     "ws_connection": { "encryption": "ssl" },
     "certificates": {
          "validate": true,
          "allow_self_signed": true,
          "client_cert": "C:\\ThingWorx\\twx70.cer",
          "key_file": "C:\\ThingWorx\\twx70.pkcs8",
          "key_passphrase": "changeme"
     }
}

Securing the LSR
LSR to EMS connection
All connections going from the LSR to the EMS are defined in config.lua with the rap_ parameters. To set up a secure connection to the EMS, we need to provide the server as defined in the config.json http_server section (e.g. the default localhost:8000) and configure the use of SSL/TLS as well as the certificate file.
Client to LSR connection
All connections going to the LSR from any client are defined in config.lua with the script_resource_ parameters. To ensure that all requests come from authenticated users, set up a userid and password. Configure the use of SSL/TLS to encrypt the connection between clients and the LSR. A custom certificate is not necessarily required -- the LSR provides its own custom certificate by default.
Now opening https://localhost:8001 (the default port for the LSR) in a browser will open an encrypted channel forcing authentication via the credentials defined above. Of course this needs to be considered for other calls as well, e.g. a GET request to the LSR: the GET request also needs to provide the credentials in its header (see the example request at the end of this post).
Authentication
It's recommended to also configure the LSR to use a credential-based authentication mechanism. When setting up the LSR to only accept incoming requests with credentials in the header, script_resource_userid and script_resource_password can be used for authentication. When connecting to an EMS that uses authentication, rap_userid and rap_password can be used to authenticate with the credentials configured in config.json.
config.lua
The following configuration can be placed anywhere in config.lua; the best place would be just below the log_level configuration.
scripts.rap_host = "localhost"
scripts.rap_port = 8000
scripts.rap_ssl = true
scripts.rap_deny_selfsigned = false
scripts.rap_cert_file = "C:\ThingWorx\ems.cer"
scripts.rap_server_authenticate = true
scripts.rap_userid = "EMSAdmin"
scripts.rap_password = "EMSAdmin"
scripts.script_resource_userid = "admin"
scripts.script_resource_password = "admin"
scripts.script_resource_ssl = true
EMS-specific configuration: rap_
LSR-specific configuration: script_resource_
Other considerations
When configuring the EMS and LSR for authentication and encrypted traffic, the configuration files hold information that not everyone should have access to. Therefore config.json and config.lua must also be protected from an operating-system point of view. Permissions should only be granted to the (process) user that is calling the EMS / LSR. Ensure that no one else can read or access the properties and certificates, to avoid password snooping at the OS level. It's best to grant restricted access to the whole microserver directory, so that only privileged users can gain access.
You're next...
 This blog should have given some insight on what's required and how it's configured to achieve a more secure and safer EMS / LSR integration in a real-life production environment. Of course it always depends on the actual implementation, but use these steps as a guideline to secure and protect your (Internet of) Things!
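As an illustration of a client call that passes the configured credentials, the command below is a sketch only: it assumes HTTP Basic authentication, the default LSR port of 8001, the script_resource_ credentials configured above, and a self-signed certificate (hence the -k flag to skip certificate validation, acceptable only for testing).

curl -k -u admin:admin https://localhost:8001/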
View full tip
Error: Failed to load SQL Modules into the database cluster. This error is usually seen during the "Initialising database cluster" phase of the PostgreSQL setup. To resolve the issue, follow these steps: create a PostgreSQL data folder before you start the installation (e.g. c:\postgres-data) and grant the installing user full control of it, then select the newly created data directory during the setup. After a successful installation, you can follow the remaining procedure for configuring PostgreSQL with ThingWorx from the respective installation document.
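For example, the folder can be created and the permission granted from an elevated command prompt before running the installer. This is a sketch; the folder path matches the example above, and the account being granted full control ("Users" here) is an assumption you should adjust to whichever account the installer actually runs as.

mkdir C:\postgres-data
icacls C:\postgres-data /grant Users:(OI)(CI)F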
View full tip
This is a note/reminder in cross-reference to https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS267248&lang=en_US Java 9 is expected to be released by Oracle September 21 2017 and is not currently supported for ThingWorx, refer to ThingWorx System Requirement guides for support details: ThingWorx 8 ThingWorx 7 ThingWorx 6
View full tip
Learning is a wide domain, and thus the field of machine learning has branched into several subfields dealing with different types of learning tasks.
Supervised vs Unsupervised Learning
Since learning involves an interaction between the learner and the environment, we can divide tasks according to the nature of that interaction. Consider the task of detecting handwritten numbers among a large set of digital numbers versus the task of anomaly detection. For the handwritten-number detection task, we consider a setting in which the learner receives a set of training numbers, labeled as 'Hand-Written' or 'Digital'. On the basis of such training, the learner should figure out a rule for labeling a newly arriving number. In contrast, for the task of anomaly detection, all the learner gets as training is a large set of numbers, without any labels, and the learner's task is to detect 'unusual' numbers.
Consider learning as a process of 'using experience to gain expertise'. Supervised learning describes a scenario in which the 'experience', a training example, contains significant information (the labels Hand-Written or Digital) that is missing from the future arriving numbers to which the learned expertise is to be applied. Here, the acquired expertise is aimed at predicting the missing information for the validation data. In unsupervised learning, however, there is no difference between the training data and the validation data. The learner processes input data with the goal of coming up with some summary or compressed version of the data. Clustering a data set into subsets of similar objects is a good example of such a task.
There is also an intermediate learning setting in which the training set contains more information than the validation set, and the learner is required to predict even more information for the validation set. Consider a game of chess, where a value function describes each configuration of the chess board. One may want to learn the degree by which White's position is better than Black's. The only information available to the learner at training time is positions that occurred in actual chess games and who won them. Such learning frameworks are investigated under reinforcement learning.
Source: 'Understanding Machine Learning: From Theory to Algorithms' by Shai Shalev-Shwartz and Shai Ben-David
View full tip
Check out this new KCS article which links to all known best practice documents available for ThingWorx. This article will get larger in time as more articles are published related to the Dos and Don'ts of building an IoT application! Do you know when to use timers, and where to implement their subscriptions? How about ensuring info tables are used at the proper time, and data tables at others? Pesky performance issues wherein ThingWorx runs slow for apparently no reason? All of these questions and more are addressed here!
View full tip
Persistent properties are stored in the ThingWorx database, while non-persistent properties are stored in memory. This means that persistent values do not get erased or deleted when the Thing or the platform restarts. Persistent properties can be retrieved in the same way as non-persistent properties. To explain how we can retrieve persistent data, consider the example below:
I took a device group that has multiple devices and defined 2 persistent properties: one is the serial number and the other is the firmware version number. In a mashup, the user enters a serial number and the corresponding firmware version is retrieved. I achieved this by writing services on the Thing.
Created a DataShape with the name DeviceData and defined two field definitions: one for the serial number and one for the firmware version.
Created a thing template with the name DevGroup. Added a property to the thing template with the name DeviceData, selected the base type InfoTable, and selected the previously created data shape (DeviceData) in the data shape field. Made this property persistent by selecting Is Persistent, then saved the property.
Created two devices with the names Device1 and Device2; both devices use the above template. Now, for each device, set the property values by clicking the Set button on the Properties page. These values will be persistent, meaning they do not change even after a refresh or when you restart ThingWorx. You can also set these properties at run time by creating a service.
Created a GetFirmversion service which retrieves the firmware version for a given serial number (a sketch of such a service follows below).
Created a mashup: once you select Device1, enter the serial number in the numeric entry field and click the Query button, the firmware version for the given serial number is displayed along with the data of Device1. If we enter a wrong number, no data is displayed; you could also show an error message instead of displaying empty values. The same applies when we select the Device2 data.
This is one way to retrieve persistent data; the same result can also be achieved in other ways.
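A minimal sketch of what the GetFirmversion service described above could look like. The field name "SerialNumber" and the service input name are assumptions; adjust them to match the fields actually defined in the DeviceData data shape.

// Input: serialNumber -- the value entered in the mashup's numeric entry field
// Output: INFOTABLE (DeviceData) containing only the matching row(s), including the firmware version
var query = {
    "filters": {
        "fieldName": "SerialNumber",
        "type": "EQ",
        "value": serialNumber
    }
};
var params = {
    t: me.DeviceData /* the persistent INFOTABLE property */,
    query: query /* QUERY */
};
// result: INFOTABLE -- empty when no device with that serial number exists
var result = Resources["InfoTableFunctions"].Query(params);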
View full tip
This video will provide you with a brief introduction to the New Composer interface, which has been made available in the 7.4 and later releases of the ThingWorx Platform.  For complete details on what functionality is available within this next generation composer interface, and to also see what lies ahead on our road map, please refer to the following post in the ThingWorx Community: NG Composer feature availability
View full tip
With the new licensing introduction, it can get confusing at first how to obtain and apply licenses, especially with more than one app in place. This is an example of how to apply both the Foundation and the Manufacturing Apps license when installing ThingWorx 8.
1) Install Manufacturing Apps 8.0 and the needed components (e.g. Kepware) per the guide, using the manufacturing apps license - the manufacturing apps widget can now be accessed.
2) Accessing /Thingworx reports a licensing issue.
3) Download the ThingWorx license from the license portal.
4) Rename the manufacturing apps license.bin to <name>.bin and put the ThingWorx license.bin in the ThingworxPlatform folder.
5) Restart the ThingWorx service.
6) Access /Thingworx and accept the license agreement.
7) Change license.bin back to the original manufacturing apps license.bin (step 4).
8) Restart the ThingWorx server.
9) Both the manufacturing apps and Foundation functions are available.
View full tip