
IoT & Connectivity Tips

Video Author: Asia Garrouj
Original Post Date: June 13, 2017
Applicable Releases: ThingWorx Analytics 8.0

Description: This video is the first of a three-part series walking you through how to set up ThingWatcher for Anomaly Detection. In this first video you will learn the basics of how to establish connectivity between KEPServer and the ThingWorx Platform.
View full tip
Video Author: Durgesh Patel
Original Post Date: June 12, 2017
Applicable Releases: ThingWorx

Description: In this video we review the different features available in Grid Advanced Extension version 2.0.
View full tip
The purpose of this document is to show how you can set up an MXChip IoT DevKit and how to send this board's readings to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
View full tip
Video Author: Asia Garrouj
Original Post Date: June 13, 2017
Applicable Releases: ThingWorx Analytics 8.0

Description: This video is the third of a three-part series walking you through how to set up ThingWatcher for Anomaly Detection. In this third video you will learn how to use the Anomaly Mashup to visualize data received from a remote device.
View full tip
One of the frequent questions regarding PostgreSQL these days is which tools work well with it. With PostgreSQL's growing functionality, a growing number of vendors are producing tools for it. There are a lot of tools for management, development, and data visualization, and the list keeps growing. Here I'm listing a few tools that might be of interest to ThingWorx users.

psql terminal: The psql client is a command-line client distributed with PostgreSQL, often called the interactive terminal. psql is a simple yet powerful tool with which you can directly interface with the PostgreSQL server. The psql client comes with the PostgreSQL database by default.
Key features:
- Issue queries either through commands or from a file.
- Provides shell-like features to automate tasks.
For more information, refer to http://www.postgresql.org/docs/9.5/static/app-psql.html

pgAdmin III: pgAdmin III is a GUI-based administration and development tool for the PostgreSQL database. It serves the needs of both administrators and regular users, from writing simple SQL queries to developing complex databases.
Key features:
- Open source and cross-platform support.
- No additional drivers are required.
- Supports more than 30 different languages.
Note: pgAdmin III comes by default with the PostgreSQL 9.4 installer. For more information, refer to http://www.pgadmin.org/download/

phpPgAdmin: phpPgAdmin is a web-based client for managing PostgreSQL databases. It provides the user with a convenient way to create databases, create tables, alter tables, and query the data using SQL.
Key features:
- Open source and supports PostgreSQL 9.x.
- Requires a web server.
- Administers multiple servers.
- Supports the Slony master-slave replication engine.
For the phpPgAdmin download: http://phppgadmin.sourceforge.net/doku.php?id=download

TeamPostgreSQL: TeamPostgreSQL is a browser-based tool for PostgreSQL administration. Using TeamPostgreSQL, database objects can be accessed from anywhere in the web browser.
Key features:
- Open source and cross-platform support.
- Supports SSH for both the web interface and the database connections.
- GUI with tabbed SQL editors.
For the TeamPostgreSQL download: http://www.teampostgresql.com/download.jsp

Monitoring Tools

pgBadger: pgBadger is a PostgreSQL log analyzer for generating reports from PostgreSQL log files. It is built in Perl and uses JavaScript and Bootstrap libraries. It is often seen as a replacement for the pgFouine log analyzer.
Key features:
- Open source community project.
- Autodetects PostgreSQL log file formats (stderr, syslog or csvlog).
- Provides reports and statistics on SQL queries.
- Can also be limited to reporting errors only.
- Generates pie charts and time-based charts.
For more information, refer to http://dalibo.github.io/pgbadger/. Git download: https://github.com/dalibo/pgbadger/releases

PostgreStats: PostgreStats is software with automated scripts for easily viewing statistics such as commits, rollbacks, user inserts, updates and deletes over time-based intervals. PostgreStats is installed and executed on the database server and customizes the main conf file. PostgreStats also provides an enterprise application for Replication mode and High Availability.
Key features:
- Open source and easy-to-set-up installation.
- Takes snapshot reports based on time intervals.
- Optional email-on-update.
- Text-file data storage.
- Also available as an enterprise application, PostgreStats Enterprise.
For more information, refer to: http://www.postgrestats.com/subs/docs.html

Slemma: Slemma is a collaborative data visualization tool for the PostgreSQL database. Slemma allows database connections with near one-click integration and can generate a dashboard from files. Slemma comes with a commercial license priced at $29 per user per month.
Key features:
- Create charts and interactive dashboards by selecting tables.
- Non-developers can easily create visualizations (with no coding).
- Email dashboards automatically to clients or your entire team.
For more information, refer to https://slemma.com/

Ubiq: Ubiq is a web-based business intelligence and reporting tool for the PostgreSQL server. Ubiq creates reports and online dashboards and provides the ability to export them in multiple formats. Ubiq is distributed with a commercial license.
Key features:
- Drag & drop interface to create interactive charts, dashboards and reports.
- Apply powerful filters and functions to the data.
- Share your work and schedule email reports.
For more information, refer to http://ubiq.co/tour
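As a quick illustration of the psql features mentioned above, here is a minimal sketch - assuming a local PostgreSQL server, a database named thingworx and a role named twadmin (adjust names to your environment):

    # run a single query from the command line
    psql -h localhost -U twadmin -d thingworx -c "SELECT version();"

    # execute all statements from a file - useful for automating tasks
    psql -h localhost -U twadmin -d thingworx -f nightly_checks.sql

Both invocations can be wrapped in cron jobs or shell scripts, which is where psql's shell-like automation features come into play.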
View full tip
Video Author: Asia Garrouj
Original Post Date: March 31, 2017
Applicable Releases: ThingWorx Analytics 7.4 to 8.1

Description: This video is the second part of a two-part series walking you through the configuration of Analysis Event, which is applied for Real-Time Scoring. This second video will walk you through the configuration of Analysis Event for Real-Time Scoring and validating that a predictions job has been executed based on new input data.
View full tip
The Axeda Platform provides a few mechanisms for putting user-defined pages or UI modules into the dashboards, or allowing end-users to host AJAX-based applications from the same instance their data is retrieved from. This simple application illustrates the use of jQuery to call Scripto and return a JSON-formatted array of current data for an Axeda asset.

Prerequisites:
- First steps taken with Axeda Artisan
- Basic understanding of HTML, JavaScript and jQuery
- Axeda Platform v6.5 or greater (Axeda Customers and Partners)
- Artisan project attached to this article

Features:
- Authentication from a Web app
- Use of the CurrentDataFinder API
- Scripto from jQuery

Files of Note (locations are from the root of the Artisan project):
- index.html – main HTML index page
  ..\artisan-starter-html\src\main\webapp\index.html
- app.js – JavaScript code to build the application and call Scripto
  ..\artisan-starter-html\src\main\webapp\scripts\app.js
- axeda.js – Axeda web services JavaScript code
  ..\artisan-starter-html\src\main\webapp\scripts\axeda.js
- DataItemsWithScripto.groovy – custom object on the Axeda platform
  ..\artisan-starter-scripts\src\main\groovy\DataItemsWithScripto.groovy

Further Reading:
- Developing with Axeda Artisan
- Extending the Axeda Platform UI - Custom Tabs and Modules
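To give a flavor of what app.js does, here is a minimal sketch of calling a Scripto custom object from jQuery. The host, credentials and parameter names are illustrative assumptions; the endpoint follows the general Scripto REST pattern (/services/v1/rest/Scripto/execute/<CustomObjectName>):

    // call the DataItemsWithScripto custom object and log the returned JSON
    var base = "https://myaxeda.example.com/services/v1/rest/Scripto/execute/";
    $.ajax({
        url: base + "DataItemsWithScripto",
        dataType: "json",
        data: {
            username: "webuser",        // illustrative credentials
            password: "secret",
            model_name: "MyModel",      // illustrative parameters for the custom object
            device: "SN0001"
        },
        success: function (result) {
            // result holds the JSON-formatted array of current data for the asset
            console.log(result);
        }
    });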
View full tip
Video Author: Asia Garrouj
Original Post Date: December 9, 2016
Applicable Releases: ThingWorx Analytics 52.0 to 8.1

Description: This video walks you through how to upload data and shows the configuration settings. Please note: the configuration settings page shown in this video is different for ThingWorx Analytics 8.1.
View full tip
The following script takes a parameter of a model name, a device serial number and a data item name, finds the asset location and uses that longitude to determine the current TimeZone. It then converts the timezone of the data item timestamp to an Eastern Standard Timezone timestamp.

import groovy.xml.MarkupBuilder
import com.axeda.drm.sdk.Context
import java.util.TimeZone
import com.axeda.drm.sdk.data.*
import com.axeda.drm.sdk.device.*
import com.axeda.common.sdk.jdbc.*;
import net.sf.json.JSONObject
import net.sf.json.JSONArray
import com.axeda.drm.sdk.mobilelocation.MobileLocationFinder
import com.axeda.drm.sdk.mobilelocation.MobileLocation
import com.axeda.drm.sdk.mobilelocation.CurrentMobileLocationFinder

def response
try {
    Context ctx = Context.getUserContext()
    ModelFinder mfinder = new ModelFinder(ctx)
    mfinder.setName(parameters.model_name)
    Model m = mfinder.find()
    DeviceFinder dfinder = new DeviceFinder(ctx)
    dfinder.setModel(m);
    dfinder.setSerialNumber(parameters.device)
    Device d = dfinder.find()
    CurrentMobileLocationFinder cmlFinder = new CurrentMobileLocationFinder(ctx);
    cmlFinder.setDeviceId(d.id.getValue());
    MobileLocation ml = cmlFinder.find();
    def lng = -72.158203125
    if (ml?.lng){
        lng = ml?.lng
    }
    // set boundaries for timezones - longitudes
    def est = setUSTimeZone(-157.95415000000003)
    def tz = setUSTimeZone(lng)
    CurrentDataFinder cdfinder = new CurrentDataFinder(ctx, d)
    DataValue dvalue = cdfinder.find(parameters.data_item_name)
    def adjtime = convertToNewTimeZone(dvalue.getTimestamp(), tz, est)
    def results = JSONObject.fromObject(lat: ml?.lat, lng: ml?.lng, current: [name: dvalue.dataItem.name, time: adjtime.format("MM/dd/yyyy HH:mm"), value: dvalue.asString()]).toString(2)
    response = results
} catch (Exception e) {
    response = [
        message: "Error: " + e.message
    ]
    response = JSONObject.fromObject(response).toString(2)
}
return ['Content-Type': 'application/json', 'Cache-Control': 'no-cache', 'Content': response]

def setUSTimeZone(lng){
    TimeZone tz
    // set boundaries for US timezones by longitude
    if (lng <= -67.1484375 && lng > -85.517578125){
        tz = TimeZone.getTimeZone("EST");
    }
    else if (lng <= -85.517578125 && lng > -96.591796875){
        tz = TimeZone.getTimeZone("CST");
    }
    else if (lng <= -96.591796875 && lng > -113.90625){
        tz = TimeZone.getTimeZone("MST");
    }
    else if (lng <= -113.90625){
        tz = TimeZone.getTimeZone("PST");
    }
    logger.info(tz)
    return tz
}

public Date convertToNewTimeZone(Date date, TimeZone oldTimeZone, TimeZone newTimeZone){
    // oldTimeZone.rawOffset returns the difference (in milliseconds) of time in that timezone with the time in GMT
    // date.time returns the milliseconds of the date
    long oldDateinMilliSeconds = date.time - oldTimeZone.rawOffset
    Date dateInGMT = new Date(oldDateinMilliSeconds)
    long convertedDateInMilliSeconds = dateInGMT.time + newTimeZone.rawOffset
    Date convertedDate = new Date(convertedDateInMilliSeconds)
    return convertedDate
}
View full tip
Design Your Data Model Guide Part 3

Step 7: Prioritize

The first step in the design process is to use the Thing-Component Matrix to identify and prioritize groups of Components that are shared across multiple Things. These groups will be prioritized by number of shared Components, from highest to lowest, enabling us to break out the most commonly used groups of Components and package them into reusable pieces. Let's examine our example Thing-Component Matrix to identify and prioritize groups. In the table below, we have done this and recognized that there are FOUR groups.

NOTE: Each item in our unique Thing-Component Matrix would also count as a group on its own. This can be dealt with almost separately from our process, though, because there is no overlap between different Things. The "Templates for Unique Components" and "Adding Components Directly to Things" sections in the Iterate step of this guide cover these "one-offs."

Step 8: Largest Group

The base building block we use most often is the Thing Template. To start the design process, the first step is to create a Thing Template for the largest Component group. Applying this to our Smart Factory scenario, we'll take the largest group ("Group 1") and turn it into a Thing Template using the Entity Relationship Diagram schematic.

Since every item on our production line shares these Components, we will name this Thing Template Line Asset. Now, let's build this using our Entity Relationship Diagram. The result is a ThingWorx Thing Template with five Properties, one Event, and one Subscription (a schematic sketch of such a template appears at the end of this article).

Step 9: Iterate

Once an initial base template has been created for the largest group, the rest of the groups can be added by selecting the appropriate entity type (Thing Template, Thing Shape, or directly-instantiated Thing). The following Entity Decision Flowchart explains which entity type is used in which scenario:

Now that we have established our "Line Asset" Thing Template for our largest group, the next step is to iterate through each of our remaining groups. Following the flowchart, we will identify what entity type each should be and add it to our design.

Group 2 - "System Connector"

The second group represents connectors into both of our internal business systems. We will call them System Connectors. If we look at Group 2 versus our "Largest Group" Thing Template, we can see that there is no overlap between their Components. This represents the third branch of the Entity Decision Flowchart, which means we want to create a new Thing Template. Following this rule, here is the resulting template:

Group 3 - "Hazardous Asset"

The third group represents line assets that require emergency shutdown capabilities because under certain conditions, the machinery can become dangerous. We will call these Hazardous Assets. If we look at our two previous Thing Templates, we can see that there is full overlap of these Components with our previous largest group, the "Line Asset" Thing Template. This represents the fourth branch of the Entity Decision Flowchart, which means we want to create a CHILD Thing Template. Following this rule, here is the resulting Thing Template:

Group 4 - "Inventory Manager"

The fourth group represents line assets that keep track of inventory count, to ensure the number of assembled products is equal to the number of checked products for quality.
If we look at our existing Thing Templates, we can see that there is some overlap of the Components in our "Hazardous Asset" and "Line Asset" Thing Templates. This represents the second branch of the Entity Decision Flowchart, which means we want to create a Thing Shape. Following this rule, here is the resulting Thing Shape:

Templates for Unique Components

Now that we have handled all our shared Component groups, we also want to look at the unique Component groups. Since we have already established that each group in our Unique Thing-Component Matrix does not share its Components with other Things, we can create Child Thing Templates for these line assets.

NOTE: Refer to the finished design at the bottom to reference all of the inherited Thing Templates and Thing Shapes for these Child Thing Templates.

Adding Components Directly to Things

In many cases, we will have Components that only exist for a single Thing. This frequently occurs when there will be only one of something in a system. In our case, we will only need one "System Connector" for each of the Maintenance System and the Production Order System.

Instantiate Your Things

At this point, the Data Model is fully built. All we need to do now is instantiate the actual Things which represent our real-world machines and digital connections.

Step 10: Validate

The final step of the model breakdown design process is validation through instantiation. This is the process by which we take our designed Thing Templates / Thing Shapes / Things and actually create them on the ThingWorx platform to ensure they meet all of our requirements. This is done by tracing back through the chain of inheritance for all the Things in the data model to ensure they contain all of the required Components from the Thing-Component Matrix. Once we have verified that each Thing contains all of its requirements, the data model is complete. Using this technique on each of your Things, you can explicitly prove that all of the requirements have been met.

Step 11: Next Steps

Congratulations! You've successfully completed the Design Your Data Model Guide, covering the first three steps of the proposed data model design strategy for ThingWorx. This guide has given you the basic tools to:
- Create user stories
- Identify endpoints in your system
- Break down your data model using an Entity Relationship Diagram
- Decide when to use Thing Templates vs. Thing Shapes vs. directly-instantiated Things

The next guide in the Design and Implement Data Models to Enable Predictive Analytics learning path is Data Model Implementation.
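To make Step 8 concrete, below is a hypothetical sketch of what the Line Asset Thing Template's contents could look like. All names are invented for illustration only - the guide's Entity Relationship Diagram defines the actual Components:

    ThingTemplate: LineAsset
        Properties (5):    SerialNumber, State, PowerConsumption, IdealSpeed, DefectRate   // hypothetical names
        Events (1):        LineAssetFault                                                  // hypothetical
        Subscriptions (1): OnLineAssetFault - reacts to LineAssetFault, e.g. to log the fault or notify an operator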
View full tip
This is a follow-up post on my initial document about Edge Microserver (EMS) and Lua Script Resource (LSR) security. While the first part deals with fundamentals of secure configurations, this second part will give some more practical tips and tricks on how to implement these security measures.

For more information it's also recommended to read through the Setting Up Secure Communications for WS EMS and LSR chapter in the ThingWorx Help Center. See also Trust & Encryption Theory and Hands On for more information and examples - especially around the concept of the Chain of Trust, which will be an important factor for this post as well.

In this post I will only reference the High Security options for both the EMS and the LSR. Note that all commands and directories are Linux based - Windows equivalents might slightly differ.

Note - some of the configuration options are color coded for easy recognition: LSR resources / EMS resources

Password Encryption

It's recommended to encrypt all passwords and keys, so that they are not stored as cleartext in the config.lua / config.json files. And of course it's also recommended to use a more meaningful password than what I use as an example - which also means: do not use any password mentioned here for your systems, they might be too easy to guess now 🙂

The luaScriptResource script can be used for encryption:

./luaScriptResource -encrypt "pword123"
############
Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

The wsems script can be used for encryption:

./wsems -encrypt "pword123"
############
Encrypted String
AES:A26fBYKHJq+eMu0Fm2FlDw==
############

Note that the encryption for both scripts will result in the same encrypted string. This means either the wsems or the luaScriptResource script can be used to retrieve the same results.

The string to encrypt can be provided with or without quotation marks. It is however recommended to quote the string, especially when the string contains blanks or spaces. Otherwise unexpected results might occur, as blanks will be considered delimiter symbols.

LSR Configuration

In the config.lua there are two sections to be configured:

scripts.script_resource, which deals with the configuration of the LSR itself
scripts.rap, which deals with the connection to the EMS

HTTP Server Authentication

HTTP Server Authentication will require a username and password for accessing the LSR REST API.

scripts.script_resource_authenticate = true
scripts.script_resource_userid = "luauser"
scripts.script_resource_password = "pword123"

The password should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

HTTP Server TLS Configuration

Configuration

HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the LSR. To enable TLS and https, the following configuration is required:

scripts.script_resource_ssl = true
scripts.script_resource_certificate_chain = "/pathToLSR/lsrcertificate.pem"
scripts.script_resource_private_key = "/pathToLSR/key.pem"
scripts.script_resource_passphrase = "keyForLSR"

It's also encouraged to not use the default certificate, but custom certificates instead.
To explicitly set this, the following configuration can be added:

scripts.script_resource_use_default_certificate = false

Certificates, keys and encryption

The passphrase for the private key should be encrypted (see above) and the configuration should then be updated to

scripts.script_resource_passphrase = "AES:A+Uv/xvRWENWUzourErTZQ=="

The private_key should be available as a .pem file and starts and ends with the following lines:

-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----

As it's highly recommended to encrypt the private_key, the LSR needs to know the password for how to decrypt and use the key. This is done via the passphrase configuration. Naturally the passphrase should be encrypted in the config.lua to not allow spoofing the actual cleartext passphrase.

The certificate_chain holds the Chain of Trust of the LSR Server Certificate in a .pem file. It holds multiple entries for the Root, Intermediate and Server Specific certificates, starting and ending with the following lines for each individual certificate and Certificate Authority (CA):

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

After configuring TLS and https, the LSR REST API has to be called via https://lsrserver:8001 (instead of http).

Connection to the EMS

Authentication

To secure the connection to the EMS, the LSR must know the certificates and authentication details for the EMS:

scripts.rap_server_authenticate = true
scripts.rap_userid = "emsuser"
scripts.rap_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="

Supply the authentication credentials as defined in the EMS's config.json - as for any other configuration, the password can be used in cleartext or encrypted. It's recommended to encrypt it here as well.

HTTPS and TLS

Use the following configuration to establish the https connection using certificates:

scripts.rap_ssl = true
scripts.rap_cert_file = "/pathToLSR/emscertificate.pem"
scripts.rap_deny_selfsigned = true
scripts.rap_validate = true

This forces the certificate to be validated and also denies self-signed certificates. In case self-signed certificates are used, you might want to adjust the above values.

The cert_file is the full Chain of Trust as configured in the EMS's config.json http_server.certificate options. It needs to match exactly, so that the LSR can actually verify and trust the connections from and to the EMS.

EMS Configuration

In the config.json there are two sections to be configured:

http_server, which enables the HTTP Server capabilities for the EMS
certificates, which holds all certificates that the EMS must verify in order to communicate with other servers (ThingWorx Platform, LSR)

HTTP Server Authentication and TLS Configuration

HTTP Server Authentication will require a username and password for accessing the EMS REST API. HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the EMS.
To enable both, the following configuration can be used:

"http_server": {
    "host": "<emsHostName>",
    "port": 8000,
    "ssl": true,
    "certificate": "/pathToEMS/emscertificate.pem",
    "private_key": "/pathToEMS/key.pem",
    "passphrase": "keyForEMS",
    "authenticate": true,
    "user": "emsuser",
    "password": "pword123"
}

The passphrase as well as the password should be encrypted (see above) and the configuration should then be updated to

"passphrase": "AES:D6sgxAEwWWdD5ZCcDwq4eg==",
"password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="

See the LSR configuration for comments on the certificate and the private_key. The same principles apply here. Note that the certificate must hold the full Chain of Trust in a .pem file for the server hosting the EMS.

After configuring TLS and https, the EMS REST API has to be called via https://emsserver:8000 (instead of http).

Certificates Configuration

The certificates configuration holds all certificates that the EMS will need to validate. If ThingWorx is configured for HTTPS and the ws_connection.encryption is set to "ssl", the Chain of Trust for the ThingWorx Platform Server Certificate must be present in the .pem file. If the LSR is configured for HTTPS, the Chain of Trust for the LSR Server Certificate must be present in the .pem file.

"certificates": {
    "validate": true,
    "allow_self_signed": false,
    "cert_chain": "/pathToEMS/listOfCertificates.pem"
}

The listOfCertificates.pem is basically a copy of the lsrcertificate.pem with the added ThingWorx certificates and CAs. Note that all certificates to be validated as well as their full Chain of Trust must be present in this one .pem file. Multiple files cannot be configured.

Binding to the LSR

When binding to the LSR via the auto_bind configuration, the following settings must be configured:

"auto_bind": [{
    "name": "<ThingName>",
    "host": "<lsrHostName>",
    "port": 8001,
    "protocol": "https",
    "user": "luauser",
    "password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="
}]

This will ensure that the EMS connects to the LSR via https and with proper authentication.

Tips

Do not use quotation marks (") as part of the strings to be encrypted. This could result in unexpected behavior when running the encryption script.
Do not use a colon (:) as part of any username. Authentication tokens are passed from browsers as "username:password" and a colon in a username could result in unexpected authentication behavior, leading to failed authentication requests.
In the Server Specific certificates, the CN must match the actual server name and also must match the name of the http_server.host (EMS) or script_resource_host (LSR).
In the .pem files, first store the Server Specific certificates, then all required Intermediate CAs and finally all required Root CAs - any other order could affect the consistency of the files and the certificate might not be fully readable by the scripts and processes.
If the EMS is configured with certificates, the LSR must connect via a secure channel as well and needs to be configured to do so.
If the LSR is configured with certificates, the EMS must connect via a secure channel as well and needs to be configured to do so.
For testing REST API calls with resources that require encryption and authentication, see also How to run REST API calls with Postman on the Edge Microserver (EMS) and Lua Script Resource (LSR)

Export PEM data from KeyStore Explorer

To generate a .pem file I usually use the KeyStore Explorer for Windows - in which I have created my certificates and manage my keystores.
In the keystore, select a certificate and view its details. Each certificate and CA in the chain can be viewed: Root, Intermediate and Server Specific. Select each certificate and CA and use the "PEM" button on the bottom of the interface to view the actual PEM content. Copy to clipboard and paste into a .pem file.

To generate a .pem file for the private key:
Right-click the certificate > Export > Export Private Key
Choose "PKCS #8"
Check "Encrypted" and use the default algorithm; define an "Encryption Password"; check the "PEM" checkbox and export it as a .pkcs8 file
The .pkcs8 file can then be renamed and used as a .pem file

The password set during the export process will be the scripts.script_resource_passphrase (LSR) or the http_server.passphrase (EMS). After generating the .pem files I copy them over to my Linux systems, where they will need 644 permissions (-rw-r--r--).
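If KeyStore Explorer is not at hand, roughly the same PEM artifacts can be produced on the command line. A minimal sketch, assuming a JKS keystore mykeystore.jks with the alias server-cert - file names, aliases and passwords are illustrative:

    # export a certificate in PEM format (repeat with the alias of each CA in the chain)
    keytool -exportcert -rfc -alias server-cert -keystore mykeystore.jks -file server-cert.pem

    # convert the keystore to PKCS#12, then extract the private key as an encrypted PEM;
    # the PEM pass phrase set here becomes scripts.script_resource_passphrase (LSR)
    # or http_server.passphrase (EMS)
    keytool -importkeystore -srckeystore mykeystore.jks -destkeystore mykeystore.p12 -deststoretype PKCS12
    openssl pkcs12 -in mykeystore.p12 -nocerts -out key.pem

    # assemble the Chain of Trust in the recommended order: server, intermediates, roots
    cat server-cert.pem intermediate-ca.pem root-ca.pem > lsrcertificate.pem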
View full tip
ThingWorx Analytics Builder - Upload Data

This video walks you through how to upload data and shows the configuration settings. Please be aware that the configuration settings page shown is different for version 8.1.

Updated Link for access to this video: ThingWorx Analytics Builder: Upload Data
View full tip
This video is the second of a series of two videos walking you through the configuration of Analysis Event, which is applied for Real-Time Scoring. This part-2 video will walk you through the configuration of Analysis Event for Real-Time Scoring and validate that a predictions job has been executed based on new input data.

Updated Link for access to this video: Analytics Manager 7.4: Configure Analysis Event & Real Time Scoring Part 2 of 2
View full tip
OpenDJ is a directory server which is also the base for WindchillDS. It can be used for centralized user management, and ThingWorx can be configured to log in with users from this Directory Service.

Before we start

Prerequisites:
Docker on Ubuntu
JKS keystore with a valid certificate
The JKS keystore is stored in /docker/certificates - on the machine that runs the Docker environments
The certificate is generated with a Subject Alternative Name (SAN) extension for the hostname, fully qualified hostname and IP address of the opendj (Docker) server
Change the blue phrases to the correct passwords, machine names, etc. when following the instructions
If possible, use a more secure password than "Password123456"... the one I use is really bad

Related Links
https://hub.docker.com/r/openidentityplatform/opendj/
https://backstage.forgerock.com/docs/opendj/2.6/admin-guide/#chap-change-certs
https://backstage.forgerock.com/knowledge/kb/article/a43576900

Configuration

Generate the PKCS12 certificate

Assume this is our working directory on the Docker machine (with the JKS certificate in it):

cd /docker/certificates

Create a .pin file containing the keystore password:

echo "Password123456" > keystore.pin

Convert the existing JKS keystore into a new PKCS12 keystore:

keytool -importkeystore -srcalias muc-twx-docker -destalias server-cert -srckeystore muc-twx-docker.jks -srcstoretype JKS -srcstorepass `cat keystore.pin` -destkeystore keystore -deststoretype PKCS12 -deststorepass `cat keystore.pin` -destkeypass `cat keystore.pin`

Export the keystore and import it into the truststore:

keytool -export -alias server-cert -keystore keystore -storepass `cat keystore.pin` -file server-cert.crt
keytool -import -alias server-cert -keystore truststore -storepass `cat keystore.pin` -file server-cert.crt

Docker Image & Container

Download and run:

sudo docker pull openidentityplatform/opendj
sudo docker run -d --name opendj --restart=always -p 389:1389 -p 636:1636 -p 4444:4444 -e BASE_DN=o=opendj -e ROOT_USER_DN=cn=Manager -e ROOT_PASSWORD=Password123456 -e SECRET_VOLUME=/var/secrets/opendj -v /docker/certificates:/var/secrets/opendj:ro openidentityplatform/opendj

After building the container, it MUST be restarted immediately in order to recognize the new certificates:

sudo docker restart opendj

Verify that the certificate is the correct one; execute on the machine that runs the Docker environments:

openssl s_client -connect localhost:636 -showcerts

Load the .ldif

Use e.g. JXplorer and connect. Select the opendj node, then LDIF > Import File (my demo breakingbad.ldif is attached to this post). Skip any warnings and messages and continue to import the file.

ThingWorx Tomcat

If ThingWorx runs in Docker as well, a pre-defined keystore could be copied during image creation. Otherwise connect to the container via the command line:

sudo docker exec -it <ThingWorxContainerName> /bin/sh

Tomcat configuration:

cd /usr/local/openjdk-8/jre/lib/security
openssl s_client -connect 10.164.132.9:636 -showcerts

Copy the certificate between BEGIN CERTIFICATE and END CERTIFICATE of the above output into opendj.pem, e.g.
echo "<cert_goes_here>" > opendj.pem

Import the certificate:

keytool -keystore cacerts -import -alias opendj -file opendj.pem -storepass changeit

ThingWorx Composer

As the IP address is used (the hostname is not mapped in the Docker container), the certificate must have a SAN containing the IP address.

This only works with the TWLDAPExample Directory Service, not with ADDS1, because ADDS1 uses hard-coded Active Directory queries and structures and therefore does not work with OpenDJ. The user ID (cn) must be pre-created in ThingWorx so that the user can log in - there is no automatic user creation by the Directory Service. Make sure the Thing is Enabled under General Information.

Appendix

LDAP structure for breakingbad.ldif:
cn=Manager / Password123456
All users with password Password123456
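To quickly verify the imported entries outside of ThingWorx, an OpenLDAP ldapsearch call can be used - a minimal sketch, assuming the client tools are installed and the container's port mapping from above (adjust host, base DN and credentials to your setup):

# list the common names of all person entries under the o=opendj base DN
ldapsearch -H ldap://localhost:389 -D "cn=Manager" -w Password123456 -b "o=opendj" "(objectClass=person)" cn

If the entries from breakingbad.ldif come back, the directory is ready for the ThingWorx Directory Service configuration.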
View full tip
This Expert Session consists of a general overview of multitenancy and platform security. It discusses the available security levels and the necessary basic resources, provides information on the system user, and includes several how-to examples. It is assumed that the audience is familiar with the Composer and its navigation.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
When predicting a Boolean goal, such as Failure in the next hour or any other goal that has a yes/no answer, ThingWorx Analytics (TWXA) models will output a 'risk' of the event occurring. TWXA will intelligently pick a threshold beyond which that risk warrants attention.

1. In Analytics Builder, click on the Export button.
2. This will export a PMML model and download it for you.
3. Open up the PMML model; in the output section, you will find a condition that explains the threshold that was selected by TWX Analytics.

In this example case, TWXA chose 0.5 as the best threshold.

Note: The Export button will only be available in Builder for TWXA 8.4+.
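For orientation, the condition lives in the PMML Output section. The fragment below is a schematic, hand-written sketch following the general PMML standard - the element and field names in an actual TWXA export may differ:

<Output>
    <!-- the predicted risk of the event, expressed as a class probability -->
    <OutputField name="probability_true" feature="probability" value="true" dataType="double"/>
    <!-- the selected threshold appears as a condition of the form: alert when probability_true > 0.5 -->
</Output>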
View full tip
This Expert Session goes over ways to identify and develop a successful use case for ThingWorx Analytics. The example use case presented here is on employee retention in a fictional company, with the goal of maximizing employee retention. This presentation will provide you with all the fundamentals you need to develop your own ThingWorx Analytics use cases from the ground up.

For full-sized viewing, click on the YouTube link in the player controls.

Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
View full tip
Overview

This document is targeted towards covering basic PostgreSQL monitoring and the health-check-related system objects like tables, views, etc. This allows simple monitoring of the PostgreSQL database via some custom services, which I'll attach at the end of this document, from the ThingWorx Composer itself. I'll also cover short details on some of the services that are included with the Thing: PostgreSQLHealthCheck, which implements the Database ThingTemplate.

Pre-requisite

The document assumes that the user already has ThingWorx running with PostgreSQL as a Persistence Provider.

How to install

Usage for this is fairly straightforward: import the Entities.twx and it will create the required Thing, which implements the Database ThingTemplate, and some DataShapes. Each Service under the Thing: PostgreSQLHealthCheck has its own DataShape. Feel free to edit these services / DataShapes if you are looking to use the output of these services as part of your mashup(s).

Make sure to edit the PostgreSQL JDBC Connection String, username & password under the Configuration section in order to connect to your PostgreSQL instance under the Thing PostgreSQLHealthCheck, which will be created when Entities.twx is imported (attached with this blog).

Note: Users can use these services to query non-ThingWorx-related databases created with PostgreSQL as part of the external JDBC connection.

Reviewing Services from Thing: PostgreSQLHealthCheck

1. DescribeTableStructure
- Takes two inputs, **Table Name** and the **Schema Name** in which the ThingWorx database tables exist; both inputs have default values that can be modified to match your PostgreSQL schema setup and required table name
- Provides information on a table's structure, see below

2. GetAllPSQLConfig
- Provides runtime details on all the configurations done in the postgresql.conf which are in effect
- For detail on pg_settings see the PostgreSQL 9.4 documentation

3. GetAllPSQLConfigLimited
- Similar to GetAllPSQLConfig, however with limited information

4. GetAllPSQLRoles
- Lists all the database roles/users
- Also lists their access rights and permissions together with the OID
- Helpful in identifying if a role is active/inactive or carries any limitation on the DB connections

5. GetPG_Stat_Activity
- Part of the Statistics Collector subsystem for the PostgreSQL DB
- Shows the current state of the schema, e.g. connections, queries, etc.
- For more detail on the output refer to the PostgreSQL 9.4 documentation

6. GetPSQLDBLocksInformation
- Shows the kind of locks in effect, on which database and on which relation (table)
- Particularly useful in identifying the relations and what lock mode is enabled on them

7. GetPSQLDBStat
- Shows database-wide statistics
- Like commits, reads, block reads, tuples (rows) fetched, inserted, deadlocks, etc.
- For more detail refer to the PostgreSQL 9.4 documentation

8. GetPSQLLogDesitnation
- Checks where the PostgreSQL server logs are directed to
- I.e. stderr, csvlog or syslog
- Default is stderr

9. GetPSQLLogFileName
- Fetches the PostgreSQL log file name and the filename format
- E.g. postgresql-%Y-%m-%d_%H%M%S.log

10. GetPSQLLoggingLocation
- Fetches the location where the logs are stored for PostgreSQL
- E.g. pg_log, which is also the default location
- The desired location for the logs can be set in the postgresql.conf file

11. GetPSQLRelationIndexes
- Gets information on the indexes
- Information like index size, number of rows, table names on which the index is created

12. GetPSQLReplicationStat
- Shows information related to replication on the PostgreSQL DB
- Applicable to PostgreSQL DBs where replication is enabled

13. GetPSQLTablespaceInfo
- Takes the tablespace name as input (String DataType); the service defaults to 'thingworx' - modify if needed
- Fetches information like owner OID, tablespace ACL

14. GetPSQLUserIndexIO
- Fetches indexes that are created only on the user-created DB objects
- Shows relations (table) vs. index relation IDs (index on table), together with their names
- Also shows additional info like the number of disk blocks read from this index & the number of buffer hits in this index

15. GetPSQLUserSequencesIOStats
- Fetches information on sequence objects used on user-defined relations (tables)
- Number of disk blocks read from this sequence & buffer hits in this sequence

16. GetPSQLUserTableIOStat
- Fetches disk I/O information on the user-created tables

17. GetPSQLUserTables
- Fetches all the user-created tables, together with their name, OID, disk I/O, and last auto-vacuum / vacuum
- Also lists the number of rows each relation (table) has

Finally

The attached entity has some additional services not yet covered in this blog; as they are minor services, for brevity I've left them out for now - feel free to explore or enhance this. I will continue to look for any additional services and will enhance this document and the entities belonging to it.

If you are looking to enhance this, feel free to fork from twxPostgreSqlHealthCheck over GitHub.
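To give a feel for what these services wrap, here is a minimal sketch of the kind of queries against the standard PostgreSQL system views involved - the exact SQL inside the attached services may differ:

-- current connections and running queries (cf. GetPG_Stat_Activity)
SELECT pid, usename, state, query FROM pg_stat_activity;

-- effective server configuration (cf. GetAllPSQLConfig)
SELECT name, setting, source FROM pg_settings;

-- database-wide statistics: commits, rollbacks, rows fetched, deadlocks (cf. GetPSQLDBStat)
SELECT datname, xact_commit, xact_rollback, tup_fetched, deadlocks FROM pg_stat_database;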
View full tip