
IoT & Connectivity Tips

I got this excellent question and thought it worthwhile to share my answer here as well. There are two ways to segregate information between clients.

By default we use a 'software' approach to segregation by using Organizations. This allows you to assign a client to an Organization or Organizational node and give those nodes 'visibility' to specific entities within the software. Through software logic, users can then only see what they have been given visibility to see. This mainly applies to the client's equipment (ThingWorx Things): each client can only see their own equipment. It also applies to a specific set of their data, namely ValueStream data, because that can only be retrieved from the perspective of a ThingWorx Thing.

Blogs, Wikis, DataTables and Streams can store data across clients and do not apply visibility on a row basis. In this case, appropriate queries need to be created so that each client can only retrieve their own data. Here we use a security construct built around what we call the 'system user' and wrapper routines that work off the CurrentUser context; this allows you to create enforced, validated queries against the data that return only the records belonging to the calling user.

Regarding the data itself, if you need to, you can provide actual 'physical' segregation by using multiple persistence providers and mapping Blogs/Wikis/DataTables/Streams/ValueStreams to different persistence providers. Persistence providers are essentially additional database schemas (in the same database or in different databases) of the ThingWorx data storage schema, allowing you to completely separate where data is stored for each client. Note that simply creating unique Blogs/Wikis/DataTables/Streams/ValueStreams per client and using visibility is still only a logical / software form of segregation, because the data is stored in one and the same database schema, also known as the ThingWorx data persistence provider.
Hi, ThingWorx users! We’re excited to share that we have partnered with InfluxData to make time series analysis in ThingWorx even easier. InfluxDB is a database by InfluxData that is “built specifically for metrics and events that empower developers to build next-generation IIoT, analytics and monitoring applications.”

Why InfluxData?

Today, application developers expect robust querying capabilities, fast response times, easy ways of aggregating and pivoting on data, and the ability to leverage results for reporting and visualization. IT and DevOps administrators also expect cost-effective storage, easy ways of aging data through archiving, and the ability to keep large amounts of historical data to satisfy analysis requirements.

That’s why we’ve partnered with InfluxData to make it easier for developers to store, analyze and act on IIoT data in real time. With InfluxData, developers can build connected IIoT applications more quickly while still incorporating the following capabilities:
- monitoring
- real-time alerting
- predictive maintenance
- streaming data
- anomaly and event detection
- visual and report-based analysis

We considered a few technologies for the purpose of improving ThingWorx time series analysis. Here are a few reasons we chose InfluxData:
- high compression of data (~45x)
- ability to handle millions of writes per second*
- ability to read thousands of rows in milliseconds*
- support for the standard time series functions of sampling, interpolation, time bucketing, aggregation, selector, transformation, predictor, etc.

* Query and write times will vary based on an individual ThingWorx application’s implementation with Influx. For example, as the number of concurrent reads increases, the query speed decreases. With the upcoming 8.4 release, the ThingWorx Sizing Guide will be updated to reflect representative performance for ThingWorx developers.

In addition to improved query capability, ThingWorx time series with Influx can now use less memory and CPU, giving your platform servers a bit of a break.

To start strategizing on how InfluxData can help you in your ThingWorx journey, here is a sneak preview of what it will look like:

New Features

The new ThingWorx Influx Persistence Provider will make query services like the ValueStream Thing's QueryPropertyHistory and the Stream Thing's QueryStreamData and QueryStreamEntries even better. Simply create a new instance of the persistence provider, configure it to use your InfluxDB instance, create a new value stream (or stream) from the new persistence provider, and you’ll be writing, reading and analyzing your time series data like never before.

We’re also introducing a new enhancement to improve InfoTable support with time series data, including the ability to use a driver property. The driver property can be specified with the QueryPropertyHistoryWithDriverProperty service for time alignment and filling backward/forward in your stream queries.

Let’s walk through a driver property example where you have the properties of temperature, speed and battery level.

Timestamp            Temperature  Speed  Battery Level
1480589076592000000  80.003       5012   79
1480589077537000000  80.010       5011   79
1480589077550000000  80.010       5009   79
1480589077562000000  80.030       5011   78

Let’s say temperature is the key driver for your analysis. In other words, you are not concerned if speed or battery level changes—you only care about when temperature changes. We can specify temperature to be the driver property and only return stream values for temperature, speed and battery level when temperature changes. If speed or battery level changes (but temperature does not change), the rows associated with those changes are not included in the result set, because neither speed nor battery level is a driver property. See the table below.

Timestamp            Temperature (driver)  Speed  Battery Level
1480589076592000000  80.003                5012   79
1480589077537000000  80.010                5011   79
1480589077562000000  80.030                5011   78

Note that only three of the four rows are returned above, because one entry in the original table did not have a change in temperature.

Stay Tuned

Look out for these time series improvements and InfluxData integration in our upcoming 8.4 release. I’ll be sure to keep you updated on additional new features coming in our next release (like Orchestration and Mashup Builder 2.0), so check back shortly or subscribe to this Community so we can stay in touch. As always, if you have any questions, just ask Kaya!

Stay connected,
Kaya
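If you want to try these query services from outside Composer, here is a rough Go sketch that calls QueryPropertyHistory on a value-stream-backed Thing through the ThingWorx REST API. Treat it as a starting point only: the server URL, appKey and Thing name are placeholders, and maxItems is just one optional input - check the service definitions (including the inputs of QueryPropertyHistoryWithDriverProperty) in Composer before relying on specific parameter names.

// Hedged sketch: invoke QueryPropertyHistory on a Thing via the ThingWorx REST API.
// Placeholders/assumptions: server URL, appKey and Thing name "SteamSensor2";
// verify parameter names against the service definition in Composer.
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "strings"
)

func main() {
    url := "https://twxserver:8443/Thingworx/Things/SteamSensor2/Services/QueryPropertyHistory"
    payload := strings.NewReader("{\"maxItems\": 100}") // limit the number of rows returned
    req, _ := http.NewRequest("POST", url, payload)
    req.Header.Add("appKey", "your-app-key") // placeholder application key
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Accept", "application/json") // ask for the InfoTable result as JSON
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        fmt.Println("QueryPropertyHistory call failed:", err)
        return
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    fmt.Println("HTTP", res.StatusCode, string(body))
}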
GOBOT Framework

GOBOT is a framework written in the Go programming language, useful for connecting robotic components, a variety of hardware and IoT devices.

The framework consists of:
- Robots -> a virtual entity representing rovers, drones, sensors, etc.
- Adaptors -> allow connectivity to the hardware, e.g. the connection to an Arduino is done using the Firmata Adaptor, which defines how to talk to it
- Drivers -> define specific functionality to support on specific hardware devices, e.g. buttons, sensors, etc.
- API -> provides a RESTful API to query Robot status

There are additional core features of the framework that I recommend having a look at, especially Events and Commands, which allow subscribing to and publishing events to the device; for more, refer to the doc. There is already a long list of platforms for which drivers and adaptors are available. For this blog I will be working with an Arduino + Garmin LidarLite v3. There are cheaper options available for distance measurement, however if you are looking for a high-performance, high-precision optical distance measurement sensor, then this is it.

Pre-requisite
- Install Go (see doc)
- Install Gort
- Install Gobot
- Wire up the LidarLite sensor with the Arduino

How to connect

For our current setup I have the Arduino connected to Ubuntu 16 over the serial port; see here if you are looking for a different platform. For Ubuntu you just need the following 3 commands to connect and upload Firmata as our Adaptor to prepare the Arduino for connectivity:

// Look for the connected serial devices
$ gort scan serial
// install avrdude to upload firmata to the Arduino
$ gort arduino install
// upload the firmata to the serial port found via the first scan command, mine was found at /dev/ttyACM0
$ gort arduino upload firmata /dev/ttyACM0

Reading sensor data

Since there is an available driver for the LidarLite, I will be using it in the Go code below, in a file called main.go, which connects and reads the sensor data. For connecting and reading the sensor data we need the adaptor, the driver and the task / work that the robot is supposed to perform.

Adaptor

firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0") // this is the port on which my Arduino is connected

Driver

As previously mentioned, Gobot provides several drivers; one of them is the LidarLite driver, which we use like so:

d := i2c.NewLIDARLiteDriver(firmataAdaptor)

Work

Now that we have the adaptor and the driver set up, let's assign the work this robot needs to do, which is to read the distance:

work := func() {
    gobot.Every(1*time.Second, func() {
        dist, err := d.Distance()
        if err != nil {
            log.Fatalln("failed to get dist")
        }
        fmt.Println("Fetching the dist", dist, "cms")
    })
}

Notice the Every function provided by gobot, used to define that we want to perform a certain action as time elapses; here we are gathering the distance every second.

Note: The distance returned by the LidarLite sensor is in cm, and the max range for the sensor is 40 m.

Robot

Now we create the robot representing our entity, which in this case is simple - it's just the sensor itself:

lidarRobot := gobot.NewRobot("lidarBot",
    []gobot.Connection{firmataAdaptor},
    []gobot.Device{d},
    work)

This defines the virtual representation of the entity, the driver and the work this robot needs to do. The complete code follows below. Before running this package, make sure to build it, as you will likely have to execute the binary with sudo.
To build, simply navigate in the shell to the folder where main.go exists and execute

$ go build

This will create a runnable file with the package name. Execute it with sudo if needed, like so:

$ sudo ./GarminLidarLite

If everything is done as required, the following output will appear, with sensor readings printed out every second:

2018/08/05 22:46:54 Initializing connections...
2018/08/05 22:46:54 Initializing connection Firmata-634725A2E59CBD50 ...
2018/08/05 22:46:54 Initializing devices...
2018/08/05 22:46:54 Initializing device LIDARLite-5D4F0034ECE4D0EB ...
2018/08/05 22:46:54 Robot lidarBot initialized.
2018/08/05 22:46:54 Starting Robot lidarBot ...
2018/08/05 22:46:54 Starting connections...
2018/08/05 22:46:54 Starting connection Firmata-634725A2E59CBD50 on port /dev/ttyACM0...
2018/08/05 22:46:58 Starting devices...
2018/08/05 22:46:58 Starting device LIDARLite-5D4F0034ECE4D0EB...
2018/08/05 22:46:58 Starting work...
Fetching the dist 166 cms
Fetching the dist 165 cms
Fetching the dist 165 cms

Here's the complete code for reference:

package main

import (
    "fmt"
    "log"
    "time"

    "gobot.io/x/gobot"
    "gobot.io/x/gobot/drivers/i2c"
    "gobot.io/x/gobot/platforms/firmata"
)

func main() {
    lidarLibTest()
}

// reading Garmin LidarLite data
func lidarLibTest() {
    firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")
    d := i2c.NewLIDARLiteDriver(firmataAdaptor)
    work := func() {
        gobot.Every(1*time.Second, func() {
            dist, err := d.Distance()
            if err != nil {
                log.Fatalln("failed to get dist")
            }
            fmt.Println("Fetching the dist", dist, "cms")
        })
    }
    lidarRobot := gobot.NewRobot("lidarBot",
        []gobot.Connection{firmataAdaptor},
        []gobot.Device{d},
        work)
    lidarRobot.Start()
}
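If you also want to push each reading into ThingWorx, the REST pattern covered elsewhere in this space (an HTTP PUT to the Thing's Properties endpoint with an appKey header) can be dropped into the work function. The sketch below is illustrative only: the server URL, application key, Thing name LidarThing and its numeric Distance property are placeholders that assume a suitable Thing already exists on your platform.

// Minimal sketch: push each LidarLite reading to a ThingWorx property.
// Assumptions (not from the original post): server URL, appKey, Thing name
// "LidarThing" and property "Distance" are placeholders - adjust to your setup.
package main

import (
    "fmt"
    "log"
    "net/http"
    "strings"
    "time"

    "gobot.io/x/gobot"
    "gobot.io/x/gobot/drivers/i2c"
    "gobot.io/x/gobot/platforms/firmata"
)

func pushDistance(dist int) {
    url := "http://twxserver:8080/Thingworx/Things/LidarThing/Properties/Distance" // placeholder
    payload := strings.NewReader(fmt.Sprintf("{\"Distance\": %d}", dist))
    req, _ := http.NewRequest("PUT", url, payload)
    req.Header.Add("appKey", "your-app-key") // placeholder application key
    req.Header.Add("Content-Type", "application/json")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("failed to push distance to ThingWorx", err)
        return
    }
    defer res.Body.Close()
}

func main() {
    firmataAdaptor := firmata.NewAdaptor("/dev/ttyACM0")
    d := i2c.NewLIDARLiteDriver(firmataAdaptor)
    work := func() {
        gobot.Every(1*time.Second, func() {
            dist, err := d.Distance()
            if err != nil {
                log.Fatalln("failed to get dist")
            }
            pushDistance(dist) // send the reading to the platform
        })
    }
    gobot.NewRobot("lidarBot",
        []gobot.Connection{firmataAdaptor},
        []gobot.Device{d},
        work).Start()
}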
Prerequisite
- Install Go
- Install VSCode or your preferred IDE for writing Go code, e.g. GoLand (commercial license required, 30-day trial)
- Install the Go extension for VSCode (if you are working with VSCode)

Content
- Building a GET request
- Building a PUT request
- Building a POST request

Building a GET request

I'll be using the net/http package from Go to perform the GET request to the ThingWorx Server, by importing it:

import (
    "net/http"
)

Next, we use NewRequest(), which takes a method, a URL and a body. Since I'm sending a GET request, my method will be GET, with the URL of the ThingWorx server and no body, so we leave that as nil:

url := myurl
req, _ := http.NewRequest("GET", url, nil)

We are ignoring the error that NewRequest returns, as it is already handled within NewRequest() for us. Use Header to add the request headers to be received by the ThingWorx Server; note that Header is of type map[string][]string (a key : value pair):

req.Header.Add("appKey", appkey) // passing the appKey from the ThingWorx Server for authentication
req.Header.Add("Accept", "application/json") // accept JSON as the response
req.Header.Add("Cache-Control", "no-cache") // do not use cache to fetch data

Now we invoke the DefaultClient to perform the request, handling the error:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to get all entity list from the server", err)
}

We need to close the body once we have received it, and then we try to read the body returned by our request:

defer res.Body.Close()
body, _ := ioutil.ReadAll(res.Body)

Here's the complete function, accepting the URL and Application Key as strings. Notice that I am starting the function name with a capital letter, which denotes that this is an exported function; see Exported/Unexported Identifiers In Go for more:

func GetTwxServerEntities(myurl string, appkey string) {
    url := myurl
    req, _ := http.NewRequest("GET", url, nil)
    req.Header.Add("appKey", appkey)
    req.Header.Add("Accept", "application/json")
    req.Header.Add("Cache-Control", "no-cache")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to get all entity list from the server", err)
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    //fmt.Println(res)
    fmt.Println(string(body))
}

Building a PUT request

To send property updates to the ThingWorx Server, I'll create a NewReader to read the string, which is JSON in this example:

payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")

Like the GET request, NewRequest is invoked to perform the PUT request, like so:

req, _ := http.NewRequest("PUT", url, payload)

Adding the header details:

req.Header.Add("appKey", appkey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Invoke the client to perform the request:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Failed to Put the value to the ThingWorx server", err)
}

Here's the complete function, which takes a URL and appKey and then updates two property values for a Thing on the ThingWorx Server, e.g.
myurl = http://tw831psql:8080/Thingworx/Things/RESTThing/Properties/*

func TwxPut(myurl string, appkey string) {
    url := myurl
    payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")
    req, _ := http.NewRequest("PUT", url, payload)
    req.Header.Add("appKey", appkey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Cache-Control", "no-cache")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Failed to Put the value to the ThingWorx server", err)
    }
    fmt.Println(res)
}

And I can now verify that the property has been updated for the Thing called RESTThing.

Building a POST request

Similar to GET and PUT, we have to create a new request with the POST method to invoke a service. For this example I have already created a service that counts up a numeric property value stored in the CountUpProp property, which already exists under the RESTThing entity:

req, _ := http.NewRequest("POST", url, nil)

Adding the headers to the request:

req.Header.Add("appKey", appKey)
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Cache-Control", "no-cache")

Handling the response and the error in case of an issue:

res, err := http.DefaultClient.Do(req)
if err != nil {
    log.Println("Posting to Thingworx server failed with error", err)
}
fmt.Println(res)

Here's the complete function:

func TwxPost(myurl string, appKey string) {
    // e.g. http://tw831psql:8080/Thingworx/Things/RESTThing/Services/CountUpService
    url := myurl
    req, _ := http.NewRequest("POST", url, nil)
    req.Header.Add("appKey", appKey)
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Cache-Control", "no-cache")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Println("Posting to Thingworx server failed with error", err)
    }
    fmt.Println(res)
}

We can then verify the property update after the service invoke.

All of the above functions can now be called, for example, in a main():

func main() {
    var myurl string
    var appkey string
    // Provide URL for ThingWorx
    fmt.Println("Enter URL, e.g. http://localhost:8080/Thingworx/Server") // accepting URL at runtime
    fmt.Scanln(&myurl)
    // Provide appKey from the ThingWorx platform
    fmt.Println("Enter valid ThingWorx Application Key") // accepting appKey at runtime
    fmt.Scanln(&appkey)
    GetTwxServerEntities(myurl, appkey)
    TwxPut(myurl, appkey)
    TwxPost(myurl, appkey)
}
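One refinement worth considering for the helpers above: the response body is never closed and the HTTP status code is never checked, so an authentication or permission problem only shows up as a printed response object. Here is a hedged variation of the PUT helper (same placeholder URL and appKey) that closes the body and surfaces non-2xx statuses.

// Sketch only: same placeholder URL and appKey conventions as the post above.
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "strings"
)

// TwxPutChecked is a variation of TwxPut that closes the response body and
// reports non-2xx HTTP statuses instead of just printing the response object.
func TwxPutChecked(myurl string, appkey string) error {
    payload := strings.NewReader("{\"Prop1\" : \"Demo 101\",\"Prop2\" : 1001}")
    req, err := http.NewRequest("PUT", myurl, payload)
    if err != nil {
        return err
    }
    req.Header.Add("appKey", appkey)
    req.Header.Add("Content-Type", "application/json")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        return fmt.Errorf("failed to reach the ThingWorx server: %v", err)
    }
    defer res.Body.Close() // always release the connection
    if res.StatusCode < 200 || res.StatusCode > 299 {
        body, _ := ioutil.ReadAll(res.Body)
        return fmt.Errorf("ThingWorx returned HTTP %d: %s", res.StatusCode, string(body))
    }
    return nil
}

func main() {
    // Placeholder URL and appKey - replace with your own values.
    if err := TwxPutChecked("http://tw831psql:8080/Thingworx/Things/RESTThing/Properties/*", "your-app-key"); err != nil {
        fmt.Println(err)
    }
}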
The use of the term “SSO” means different things to different people. Among Navigate Admins, it became shorthand for using PingFederate to provide both authentication with a single sign-on component, as well as authorization (checking permissions for access to files). In Navigate 1.5, this was the only option for configuring a production system, and many people were not ready for it. That was the origin of the “must have SSO” statement. Beginning with Navigate 1.6, PTC added a scenario called “Windchill Authentication”, that is suitable for Production and uses your Enterprise LDAP to authenticate users. It will issue a token so you get some of the benefits of single sign-on, but not all the bells and whistles that come with PingFederate. It’s also easier to configure. People have begun referring to Windchill Authentication as “non-SSO”, to distinguish it from PingFederate, even though Windchill Authentication has some SSO functions.   In the install manual, there are three scenarios: Fixed Authentication, Windchill Authentication, and Single Sign-On with PingFederate. People usually begin with Fixed Authentication (the easiest to configure, but not secure so it’s only good for Proof of Concept demonstrations), then do Windchill Authentication before tackling PingFederate. Windchill Authentication can take a couple of days while Webexing with us to get working, but for PingFederate we plan several Webexes over a period of 8 days for a typical install. During that time you will be coordinating with other administrators (such as the AD admin) and waiting for emails etc. to get remote admin tasks done as part of the install. Be prepared, timewise.
In this session, we pick up where we left off with the mashup which was worked on in part 1 of our Advanced Mashup Expert Session series. Specifically, we will explore the concepts of master mashups, session variables, and media entities, using each to further enhance the look and feel of our mashup.     For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.
This video builds upon the mashup created in the basic session, and strives to create a more polished, user-friendly interface that is ready for deployment. In part 1, we’ll take a look at advanced layout designs and include a more varied set of widgets to help draw attention to some of the more pertinent properties being captured within the mashup.   For full-sized viewing, click on the YouTube link in the player controls. Visit the Online Success Guide to access our Expert Session videos at any time as well as additional information about ThingWorx training and services.  
Database backups are vital when it comes to ensuring data integrity and data safety. PostgreSQL offers simple solutions to generate backups of an existing ThingWorx database instance and to recover them when needed.

Please note that this does not replace a proper and well-defined disaster recovery plan. Export and import are part of such a strategy, but do not constitute the complete strategy. The commands used in this post are for Windows, but can be adjusted to work on Linux-based systems as well.

Backup

To create a backup, the export / dump functionality of PostgreSQL can be used:

pg_dump -U postgres -C thingworx > thingworxDump.sql

The -C option will include the statement to create the database in the .sql file and map it to the existing tablespace and user (e.g. 'thingworx' and 'twadmin'). The tablespace and user can be seen in the .sql file in the line with "ALTER DATABASE <dbname> OWNER TO <user>;"

In the above example, we're backing up the thingworx database schema and dumping it into the thingworxDump.sql file. The tablespace, username and password are also included in the platform_settings.json.

Restore

To restore the database, we assume an empty PostgreSQL installation. We need to create the DB schema user first via the following command line:

psql -U postgres -c "CREATE USER twadmin WITH PASSWORD 'ts';"

With the user created, we can now re-generate the tablespace and grant permissions to the twadmin user:

psql -U postgres -c "CREATE TABLESPACE thingworx OWNER twadmin LOCATION 'C:\ThingWorx\ThingworxPostgresqlStorage';"
psql -U postgres -c "GRANT ALL PRIVILEGES ON TABLESPACE thingworx TO twadmin;"

Finally the database itself can be restored by using the following command line:

psql -U postgres < thingworxDump.sql

This will create the database and populate it with tables, functions and sequences, and will also restore the data from the .sql file.

It is important that the database, username and tablespace match the original system - otherwise granting permissions and re-creating data might fail during recovery. The user and tablespace can also be reused, so only the database has to be deleted before restoring it.

Part of the strategy

Part of the backup and recovery strategy should be to actually test the backup as well as the recovery part of the procedure. It should be well tested on a test environment before being deployed to any production environment. Take backups on a regular basis and test for disaster recovery once or twice a year to ensure the procedure is still valid.

Data is the most important asset in your application - protect it!
The purpose of this document is to show how you can set up an MXChip IoT DevKit and send the readings from this microprocessor to ThingWorx through an Azure cloud server. You will also learn how to view the values that are being sent.
See the Help Center for how to control file transfers from the edge client using the EdgeControlled ThingShape.

The EdgeControlled ThingShape is a default entity included with ThingWorx that allows you to manage the amount of egress being sent from the platform to the Edge.

At the time of writing this post, the available 'When Disconnected' settings for a remotely bound property in ThingWorx are 'Fold' and 'Ignore'. Setting a property to 'Fold' while using this EdgeControlled ThingShape is necessary, whether the device is connected all the time or only for brief updates.

To use this ThingShape in a real-world scenario, you might code your edge client to invoke the DequeueEgress REST API function available through this ThingShape. The parameter you pass in is the number of messages you would like the client to receive. The result of this function is how many messages the platform then actually sent.

A quick setup (an illustrative REST call for step 10 is sketched after this list):
1. Create a RemoteThing entity in ThingWorx.
2. Create an ApplicationKey entity in ThingWorx.
3. Set up an edge client to bind to that RemoteThing using the specified ApplicationKey.
4. Manage Bindings on the Properties page of the RemoteThing, and pull in a few properties you would like to send property updates to.
5. Set the 'When Disconnected' value to 'Fold' for each property you want to queue messages for.
   5a. Set any other settings on the properties you'd like, e.g. persistence, logged.
6. Save the Thing.
7. Add the EdgeControlled ThingShape to the Thing.
8. Save the Thing.
9. Update property values; exceptions are thrown, but the values will be queued.
10. Invoke DequeueEgress on the RemoteThing, with the number of messages to send to the edge client passed in as the parameter value.
    10a. Notice 'Fold' means only the last value set for a property will be sent to the edge client. There is currently no retention available for any values previously set to the property and stored as the message to be sent. Those values are lost if a new value comes in before it is dequeued.
11. Verify the edge client has received the expected egress, and that the return result of the DequeueEgress function was the expected number of messages sent.
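To illustrate step 10, here is a rough Go sketch of invoking DequeueEgress over the ThingWorx REST API. The host, appKey and Thing name are placeholders, and the input parameter name used in the JSON body (maxCount) is my assumption - check the actual parameter name of DequeueEgress on your RemoteThing in Composer before using this.

// Hedged sketch: invoke DequeueEgress on a RemoteThing via REST.
// Placeholders/assumptions: server URL, appKey, Thing name "MyRemoteThing",
// and the input parameter name "maxCount" (verify in Composer).
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "strings"
)

func main() {
    url := "https://twxserver:8443/Thingworx/Things/MyRemoteThing/Services/DequeueEgress"
    payload := strings.NewReader("{\"maxCount\": 5}") // ask the platform to release up to 5 queued messages
    req, _ := http.NewRequest("POST", url, payload)
    req.Header.Add("appKey", "your-app-key") // placeholder
    req.Header.Add("Content-Type", "application/json")
    req.Header.Add("Accept", "application/json")
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        fmt.Println("DequeueEgress call failed:", err)
        return
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    // The service result should indicate how many messages were actually sent.
    fmt.Println("HTTP", res.StatusCode, string(body))
}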
The video below is about ThingWorx Analytics and walks through the following functions: uploading a dataset, creating a training model, and creating a scoring job.
The video below walks through how to publish a model from Analytics Builder into Analytics Manager using the connector named TW.AnalysisServices.AnalyticsServer.AnalyticsServerConnector.
Previously Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform Overview     Using the C SDK Edge client configuration we did earlier, we'll now create above illustrated setup. In this C SDK Client we'll push the data to ThingWorx Publisher with servername : TW802Neo to ThingWorx Subscriber servername : TW81. Notice that the SteamSensor2 on the pulisher server is the one binding to the C SDK client and the FederatedSteamThing on subscriber is only getting data from the SteamSensor2. Let's crack on!   Content   Configure ThingWorx to publish Configure ThingWorx to subscribe Publish entities from Publisher to the Subscriber Create Mashup to view data published to the subscriber Pre-requisite Minimum requirement is to have two ThingWorx servers running. Note that both ThingWorx systems can be publisher & subscriber at the same time.   Configure ThingWorx Publisher   Configuring Federation Subsystem   1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystems and configure the following highlighted sections   Essentially its required to configure the Server Identification, i.e. My Server name (FQDN / IP) , My Server Description (optional) Federation subscribers this server publishes to, i.e. all the server you want to publish to from this server. Refer to the Federation Subsystem doc in the Help Center to check detail description on each configurable parameter.   2. Save the federation subsystem   Configuring a Thing to be published   1. Navigate back to the Composer home page and select the entity which you'd like to publish 2. In this case I'm using SteamSensor2 which is created to connect to the C SDK client 3. To publish edit the entity and click on Publish checkbox, like so 4. Save the entity   Configure ThingWorx Subscriber     Configuring Federation Subsystem   1. Navigate to ThingWorx Composer > Subsystems > Federation Subsystems and configure the following highlighted sections   Essentially its required to configure the Server Identification, i.e. My Server name (FQDN / IP) , My Server Description (optional) Refer to help center doc on Federation Subsystems should you need more detail on the configurable parameter If you only want to use this server as a subscriber of entities from the publishing ThingWorx Server, in that case you don't have to Configure the section Federation subscribers this server publishes to, I've configured here anyway to show that both servers can work as publishers and subscribers   2. Save the federation subsystem   Configuring a Thing to subscribe to a published Thing 1. Subscribing to an entity is fairly straight forward, I'll demonstrate by utilizing the C SDK client which is currently pushing values to my remote thing called SteamSensor2 on server https://tw802neo:443/Thingworx 2. I have already Published the StreamSensor2, see above section Configuring a Thing to be published 3. Create a Thing called FederatedStreamThing with RemoteThingWithTunnels as ThingTemplate, 4. Browser for the Identifier and select the required entity to which binding must be done, like so   5. Navigate to the Properties section for the entity, click Manage Bindings to search for the remote properties like so for adding them to this thing:     6. Save the entity and then we can see the values that were pushed from the client C SDK     7. 
Finally, we can also quickly see the values pulled via a Mashup created in the subscriber ThingWorx , below a is a simple mashup with grid widget pulling values using QueryPropertyHistory service  
Previously Installing & Connecting C SDK to Federated ThingWorx with VNC Tunneling to the Edge device   Pre-requisite Download and install Web Sockets Tunnels Widget and Library Extension from PTC Marketplace   Configuring Tunnel Subsystem   1. Logon to ThingWorx Composer > System > Tunnel Subsystem > Configuration 2. Public host name used for tunnels & Public port used for tunnels parameters require publically address FQDN or IP of the instance running ThingWorx server and the port on which its listening. 3. From the screenshot above the TW802Neo is the name of the server and 443 is the port configured for ThingWorx to listen on 4. Navigate back to the RemoteThing we created above to connect our C SDK client to on the ThingWorx platform and ensure that the Enable Tunnelling is turned on   5. Click on Configuration and click Add My Tunnel button to configure where should the tunnel be opened to   6. In the above example Host and Port parameter is from the Ubuntu machine were my C SDK client is running together with the VNC server. If you are looking for more detail on how to configure these topics refer to the Simple diagnostic utility to analyse tunneling performance in ThingWorx 7. Once done, Save the entity   Configuring Remote Access & WebSocket tunnel widgets in a Mashup   1. Navigate to the ThingWorx Composer > Visualization > Mashups > New to create a Mashup 2. From the list of Widgets drag and drop following two widgets that are added as part of the Web Socket Tunnel Widget and Library extension Remote Access Web Socket Tunnel AcceptSelfSignedCert (if you are configuring ThingWorx with self-signed certificate)   4. Since I have my C SDK client binding to SteamSensor2 (remember it was created with RemoteThingWithTunnel ThingTemplate) I have that selected that as RemoteThingName 5. TunnelName is the vnc as configured in the SteamSensor2 configuration 6. For Web Socket Tunnel Widget following configuration is required RemoteThingName TunnelName VNCPassword 7. Parameters defined in a & b point are exactly the same as already done for the Remote Access Widget parameter above, the VNCPassword is the same password with which you have configured your VNC server with   8. Once done Save the mashup and View it   9. Click on the Remote Access to download the websocket adapter plugin, once downloaded click on it to initiate the websocket you will be prompted with following     Accept and Run   This will open the Remote tunnel with following confirmation, Don't click on OK else it will close   You can now utilize this tunnel to perform required action. Note that if in case there is no connection through this opened tunnel it will time out and will close automatically. 10. For doing remote desktop to the edge device I will use the Remote Device button , which will open a new browser window like so 11. Click on Connect to initiate the remote desktop session, like so 12. I can now start the terminal on the edge device and navigate through    Up Next Configuring ThingWorx Federation for Fetching data from the C SDK client from Publisher to subscriber ThingWorx entity
Content Configuring ThingWorx C SDK Configuring C SDK client example for Tunneling Accepting Self Signed certificate Installing and configuring VNC Extension on ThingWorx Platform Configuring Tunnel Subsystem Configuring Remote Access & WebSocket tunnel widgets in a Mashup Setting up Federated ThingWorx   NOTE : For sake of brevity I'll divide this blog in sub blogs and will link them where ever needed.   Configuring ThingWorx C SDK   Pre-requisite Ensure that following two utilities are installed Cmake Make For linux platform both will likely be pre-installed Have ThingWorx configured with self-signed certificate (optional), refer to this article ThingWorx setup SSL / HTTPS on Tomcat with Self-Signed Certificate Create a Thing to which you want to bind to from the csdk example, e.g. create SteamSensor2 with ThingTemplate RemoteThingWithTunnels (below screenshot shows ThingTemplate RemoteThingWithTunnelsAndFileTransfer - this is not must for this blog we just need RemoteThingWithTunnels as we want to have our thing connected to receive property updates and also do tunneling)   Create an Application key with user that has sufficient access rights to work with remote entities Install and configure VNC server for tunneling using this blog Tunneling with ThingWorx and an Edge Device Note : Above mentioned blog is done using ThingWorx Edge Microserver, however we are only interested in installation and configuration related to the VNC server. You don't need to configure the edge device for tunneling from the above mentioned blog as we will do this later in this blog for C SDK.   Installing, configuring and launching the C SDK based client on Linux platform   To begin with ensure that you have downloaded the ThingWorx C SDK from PTC Marketplace or PTC Software Download page. Let's configure the CSDK 1. Navigate to the C SDK root directory and create a directory e.g. /csdk/cmake 2. Change directory into cmake/ 3. Check for the available compilers on the platform, with command cmake -G list   4. Choose as per your requirement, for our example here 'll pick "Unix Makefiles" Note : I'm compiling on Ubuntu 16 OS 5. Execute following command from csdk/cmake/ , eg. cmake -G "Unix Makefiles" ..   This will result in following compilation details : 6. Cmake folder now have some build files created  7. Configure the example SteamSensor for tunneling and for accepting self-signed certificate, as my ThingWorx is currently configured with HTTPS 8. Navigate to root directory and edit the main.c for SteamSensor, e.g. sudo gedit main.c Note: Feel free to use any editor of your choice, I'm using gedit Add following to enable tunneling and accepting self-signed certificates     #define ENABLE_TUNNELING 1 void twApi_SetSelfSignedOK();   9. Save the file and close 10. Make the solution to create executable out of the example SteamSensor we edited 11. Navigate to the SteamSensor example under the cmake e.g. 12. Key in command make and press enter, e.g. ~/csdk/cmake/examples/SteamSensor$ make This will build the solution and once finished it will put some new files and an executable file in the /csdk/cmake/examples/SteamSensor       Once the build is finished,    Great! now then, let's crack on with the SteamSensor executable.   13. To launch the executable it requires following syntax ./SteamSensor <hostname> <port> <appKey> <thingname> , e.g. ./SteamSensor tw802neo 443 11c784ab-dde7-400f-b244-fd7c4b217869 SteamSensor2 14. 
Once launched & if its successfully connected you can check the remote entity created in ThingWorx Composer, as we can see below entity as connected. Notice that in below screenshot there are some sample properties that are being pushed from the C SDK based client. You may not have them at first   15. In case you see isConnected as true but no property listed for your Thing, edit the Thing and click on Manage Bindings     Click on Remote tab to see which properties are pushed, then you have the option to either click on Add All Above Properties on the right side, which will automatically create properties under the Thing, or you can drag and drop properties on the right to bind with if you are using properties with different name. Ensure that the data type matches with the remote properties.     Done and Save.   So far C SDK installed and configured VNC server installed A Thing created with RemoteThingWithTunnel ThingTemplate An Application key to connect C SDK based client to RemoteThing in ThingWorx Bound remote properties from the C SDK client built above to the Remote Thing on ThingWorx Up Next Installing and configuring Web Socket Tunnel Extension on ThingWorx Platform Configuring ThingWorx Federation
Disclaimer: This post does of course not express any political views.   Pie Chart Coloring   In ThingWorx Pie Charts use a default color schema based on the DefaultChartStyle Definitions. These schemas are using fixed numbering and coloring systems, e.g. 1 is blue, 2 is green, 3 is red and so on. All Pie Charts will be rendered with these colors in the same order, no matter which data the chart is using. Visualization of data with the default colors might not necessarily help in creating an easy to read chart.   Just take a look at the following example with the default color schema. Let's just take political parties - as they are usually associated with a distinct color - to illustrate how the default color schema will fail depending on the data displayed.         In the first example, just by sheer coincidence the colors are perfectly matching the parties. When introducing a new party to the pool suddenly the blues are rendered green and the yellows rendered light-blue etc. This can be quite confusing, especially on election night 😉   Custom Color Schema   PoliticalParties Thing   To test a custom color schema, we first need to create a new Thing: PoliticalParties as a GenericThing Add a dataset property with the following PoliticalParties DataShape.         Save the Thing and set the InfoTable to:   Key Value The Purples 20 The Blues 20 The Greens 20 The Reds 20 The Yellows 20 Others 20   Number values don't actually matter too much, as the Pie Chart will automatically distribute them according to their percentage.   PoliticalParties Mashup   Create a new Mashup and add a PieChart to the canvas. Bind the PoliticalParties > GetPropertyValues > dataset to the Data input of the Widget. Ensure to set the LabelField to key and the ValueField to value for a correct mapping.     Save the Mashup and preview it.   It should show a non-matching color for each party listed in the InfoTable.   Custom Styles and States   Create new custom Style Definitions for each political party. As the Pie Chart is only using the Background Color other properties can stay on the default. I chose to go with a more muted version of the colors to make the chart easier to look at.         With the newly defined colors we can now generate a new State Definition as follows:       The States allow to evaluate the key-Strings in the Thing's InfoTable and assign a Style Definition depending on the actual value. In this definition we map a color schema based on the InfoTable's key-value to create a 1:1 mapping for the Strings.   This means, no matter where a certain party is positioned in the chart it will be tinted with its associated color.   Refining the Mashup   Back in the Mashup, select the PieChart. In the ColorFormat property choose the newly created State Definition.     Save the Mashup and preview it. With the States and Styles applies, colors are now displayed correctly.       Even when changing positions and numbers in the original InfoTable of the PoliticalParties Thing, the chart now considers the mapping of Strings and still displays the colors correctly.  
Continuing our series on troubleshooting ThingWorx Analytics installations, in this IoT Tech Tip we will cover two items that have been appearing for many users.

Error 1069 encountered with a native Windows installation of ThingWorx Analytics 8.2

In some instances, when a user successfully installs ThingWorx Analytics (TWAS) on a Windows Server operating system, they will encounter an error where TWAS reports Error 1069: The service did not start due to logon failure. This can occur with any individual service that is created by the installation; the following fix should address the issue.

Primary reason this happens:

This error can be encountered when the user provides incorrect credentials for associating the services with an account during installation. In TWAS 8.2, there is a utility that enables the user to change the account associated with the services. It is important that the user provides the password for the user account on Windows, and not the user/password combination for the ThingWorx Foundation Platform Server.

Steps to fix the issue

Solution 1:

Open a Command Prompt as Administrator, via Start Menu > Run > type CMD, then right-click cmd.exe and Run As Administrator. In the elevated command prompt, change your directory to the ThingWorxAnalyticsServer/bin directory; for example, with the default installation path:

cd C:\Program Files (x86)\ThingWorxAnalyticsServer\bin

Then execute changeServiceUserAccount.bat <username>, for example:

changeServiceUserAccount.bat user1

You will be prompted to change the password for the user.

Solution 2:

If Solution 1 does not resolve the issue, you can alternately change the Log On properties for each of the services manually. The changeServiceUserAccount.bat does this via script, but on occasion the manual approach may work. Open the Control Panel and navigate to Services, for example: Control Panel > All Control Panel Items > Administrative Tools. You will have to right-click each individual service, go to Properties > Log On tab, and enter the account name and password for the local account. Note: the Local System account will not resolve this issue.

This issue was resolved in the ThingWorx Analytics Server 8.3 release, where all services are associated with the Network Service account. More information can be found in this Knowledge Article.

Uploading of a dataset hangs or does not complete in ThingWorx Analytics 8.3

On occasion, after a fresh installation of ThingWorx Analytics Server 8.3 on a Windows Server operating system, a dataset will not complete its upload. Typically no error message is displayed, and the upload wizard UI will just hang on the upload progress after:

Creating copy of Configuration File...
Submitting Create Dataset request...
Creating copy of Data File...

Primary reason this happens:

This is caused by the twas-zookeeper service being stuck in a PAUSED state, which means that post-installation, twas-zookeeper did not start.

Steps to fix the issue

Double-check that the JAVA_HOME variable was defined as a System Variable. In the ThingWorx Analytics Installation Guide, pages 12-14 outline the steps required as prerequisites. You can change this in Control Panel > System > Advanced Settings > Environment Variables, and add a new variable named JAVA_HOME under System Variables. The value should be the location of your deployment of the Java software.
Typically this is located in C:\Program Files\Java\<jre or jdk>_<version number>     More information can be found in this Knowledge Article
Key Functional Highlights Changes to the Free Trial for Manufacturing Merged Manufacturing apps and DevKit downloads into one Free Trial  120 day free trial Access to ThingWorx Foundation  Includes manufacturing accelerator Controls Advisor, Asset Advisor (supported apps) Production KPI's (demo app within SCO accelerator) Available via ThingWorx Developer Portal Asset Advisor Merged Manufacturing and Service Apps into a single deliverable Merged ThingWorx Utilities Capabilities into Asset Advisor Equipment Export/Import via Excel Added console links for workflow, composer and SCM Utilities customers can easily migrate to Asset Advisor Support of Quality of Data for assets and lines Building Blocks Additional connectors can be configured in Controls Advisor Edge Microserver (EMS) and Azure IoT can be configured as a data source Operator Advisor Beta     Compatibility - ThingWorx Manufacturing and Service Apps ThingWorx 8.3.x KEPServerEX 6.2 and later Earlier Version of KEPServerEX and 3rd party OPC will be supported via Aggregator All other TWX supported data sources but specifically: NI, EMS and Azure IOT Hub Upgrade Support 8.0.1 and later National Instruments TestStand 1.1.0 and later     Compatibility – ThingWorx Manufacturing Operator Advisor Beta ThingWorx 8.2.x and later MPMLink 11.1 with WRS 1.2     Documentation What’s New in ThingWorx Apps ThingWorx Apps Setup and Configuration Guide ThingWorx Apps Customization Guide Operator Advisor Beta Guide     Additional information The National Instruments Connector can be found on PTC Marketplace, link below     Download ThingWorx Manufacturing and Service Apps & Operator Advisor Beta Extensions National Instruments TestStand Connector
Precision and recall are evaluation metrics used to evaluate a machine learning algorithm. This post needs some prior understanding of the confusion matrix, and I would recommend going through it here.

Example of animal image recognition

Consider the confusion matrix below for an input of animal images, with the algorithm trying to identify each animal correctly (rows are the actual animals, columns are the predictions):

ANIMALS   Cat  Dog  Leopard  Tiger  Jaguar  Puma
Cat        62    2        0      0       1     0
Dog         1   50        1      0       4     0
Leopard     0    2       98      4       0     0
Tiger       0    0       10     78       2     0
Jaguar      0    1        8      0      46     0
Puma        2    0        0      1       1    42

Explaining a few random cells:
- [Cat, Cat]: The cell has the value 62. It means the image of a cat was identified as a cat 62 times.
- [Cat, Dog]: The cell has the value 2. It means the image of a cat was identified as a dog twice.
- [Leopard, Tiger]: The cell has the value 4. It means the image of a leopard was identified as a tiger 4 times.

Questions to find some answers

Q. How many times did our algorithm predict the image to be a tiger?
A. Looking at the Tiger column: 0+0+4+78+0+1 = 83

Q. What is the probability that a puma will be classified correctly?
A. Looking at the matrix above, we can see that a puma was classified correctly 42 times. But twice it was classified as a cat, once as a tiger and once as a jaguar. So the probability comes down to: 42/(42+2+1+1) = 42/46 = 0.91

This concept is called RECALL. It is the fraction of correctly predicted positives out of all actual positives. So we can say that Recall = (True Positives) / (True Positives + False Negatives)

Q. What is the probability that when our algorithm identifies an image as a cat, it is actually a cat?
A. Looking at the matrix above, we can see that our algorithm identified a dog as a cat once, a puma as a cat twice, and a cat as a cat 62 times. So the probability comes down to: 62/(62+1+2) = 0.95

This concept is called PRECISION. It is the fraction of correctly predicted positives out of all predicted positives. So we can say that Precision = (True Positives) / (True Positives + False Positives)
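For reference, here are the same two definitions written as formulas, with the counts from the example above plugged in (TP = true positives, FP = false positives, FN = false negatives):

\[
\text{Recall} = \frac{TP}{TP + FN}
  \qquad\text{e.g. for Puma: } \frac{42}{42 + 4} = \frac{42}{46} \approx 0.91
\]
\[
\text{Precision} = \frac{TP}{TP + FP}
  \qquad\text{e.g. for Cat: } \frac{62}{62 + 3} = \frac{62}{65} \approx 0.95
\]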
This is a follow-up post on my initial document about Edge Microserver (EMS) and Lua Script Resource (LSR) security. While the first part deals with fundamentals on secure configurations, this second part will give some more practical tips and tricks on how to implement these security measurements.   For more information it's also recommended to read through the Setting Up Secure Communications for WS EMS and LSR chapter in the ThingWorx Help Center. See also Trust & Encryption Theory and Hands On for more information and examples - especially around the concept of the Chain of Trust, which will be an important factor for this post as well.   In this post I will only reference the High Security options for both, the EMS and the LSR. Note that all commands and directories are Linux based - Windows equivalents might slightly differ.   Note - some of the configuration options are color coded for easy recognition: LSR resources / EMS resources   Password Encryption   It's recommended to encrypt all passwords and keys, so that they are not stored as cleartext in the config.lua / config.json files.   And of course it's also recommended, to use a more meaningful password than what I use as an example - which also means: do not use any password I mentioned here for your systems, they might too easy to guess now 🙂   The luaScriptResource script can be used for encryption:   ./luaScriptResource -encrypt "pword123" ############ Encrypted String AES:A26fBYKHJq+eMu0Fm2FlDw== ############   The wsems script can be used for encryption:   ./wsems -encrypt "pword123" ############ Encrypted String AES:A26fBYKHJq+eMu0Fm2FlDw== ############   Note that the encryption for both scripts will result in the same encrypted string. This means, either the wsems or luaScriptResource scripts can be used to retrieve the same results.   The string to encrypt can be provided with or without quotation marks. It is however recommended to quote the string, especially when the string contains blanks or spaces. Otherwise unexpected results might occur as blanks will be considered as delimiter symbols.   LSR Configuration   In the config.lua there are two sections to be configured:   scripts.script_resource which deals with the configuration of the LSR itself scripts.rap which deals with the connection to the EMS   HTTP Server Authentication   HTTP Server Authentication will require a username and password for accessing the LSR REST API.     scripts.script_resource_authenticate = true scripts.script_resource_userid = "luauser" scripts.script_resource_password = "pword123"     The password should be encrypted (see above) and the configuration should then be updated to   scripts.script_resource_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="   HTTP Server TLS Configuration   Configuration   HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the LSR. To enable TLS and https, the following configuration is required:     scripts.script_resource_ssl = true scripts.script_resource_certificate_chain = "/pathToLSR/lsrcertificate.pem" scripts.script_resource_private_key = "/pathToLSR/key.pem" scripts.script_resource_passphrase = "keyForLSR"     It's also encouraged to not use the default certificate, but custom certificates instead. 
To explicitly set this, the following configuration can be added:     scripts.script_resource_use_default_certificate = false     Certificates, keys and encryption   The passphrase for the private key should be encrypted (see above) and the configuration should then be updated to     scripts.script_resource_passphrase = "AES:A+Uv/xvRWENWUzourErTZQ=="     The private_key should be available as .pem file and starts and ends with the following lines:     -----BEGIN ENCRYPTED PRIVATE KEY----- -----END ENCRYPTED PRIVATE KEY-----     As it's highly recommended to encrypt the private_key, the LSR needs to know the password for how to encrypt and use the key. This is done via the passphrase configuration. Naturally the passphrase should be encrypted in the config.lua to not allow spoofing the actual cleartext passphrase.   The certificate_chain holds the Chain of Trust of the LSR Server Certificate in a .pem file. It holds multiple entries for the the Root, Intermediate and Server Specific certificate starting and ending with the following line for each individual certificate and Certificate Authority (CA):     -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----     After configuring TLS and https, the LSR REST API has to be called via https://lsrserver:8001 (instead of http).   Connection to the EMS   Authentication   To secure the connection to the EMS, the LSR must know the certificates and authentication details for the EMS:     scripts.rap_server_authenticate = true scripts.rap_userid = "emsuser" scripts.rap_password = "AES:A26fBYKHJq+eMu0Fm2FlDw=="     Supply the authentication credentials as defined in the EMS's config.json - as for any other configuration the password can be used in cleartext or encrypted. It's recommended to encrypt it here as well.   HTTPS and TLS   Use the following configuration establish the https connection and using certificates     scripts.rap_ssl = true scripts.rap_cert_file = "/pathToLSR/emscertificate.pem" scripts.rap_deny_selfsigned = true scripts.rap_validate = true     This forces the certificate to be validated and also denies selfsigned certificates. In case selfsigned certificates are used, you might want to adjust above values.   The cert_file is the full Chain of Trust as configured in the EMS' config.json http_server.certificate options. It needs to match exactly, so that the LSR can actually verify and trust the connections from and to the EMS.   EMS Configuration   In the config.lua there are two sections to be configured:   http_server which enables the HTTP Server capabilities for the EMS certificates which holds all certificates that the EMS must verify in order to communicate with other servers (ThingWorx Platform, LSR)   HTTP Server Authentication and TLS Configuration   HTTP Server Authentication will require a username and password for accessing the EMS REST API. HTTP Server TLS configuration will enable TLS and https for secure and encrypted communication channels from and to the EMS.   
To enable both the following configuration can be used:   "http_server": { "host": "<emsHostName>", "port": 8000, "ssl": true, "certificate": "/pathToEMS/emscertificate.pem", "private_key": "/pathToEMS/key.pem", "passphrase": "keyForEMS", "authenticate": true, "user": "emsuser", "password": "pword123" }   The passphrase as well as the password should be encrypted (see above) and the configuration should then be updated to   "passphrase": "AES:D6sgxAEwWWdD5ZCcDwq4eg==", "password": "AES:A26fBYKHJq+eMu0Fm2FlDw=="   See LSR configuration for comments on the certificate and the private_key. The same principals apply here. Note that the certificate must hold the full Chain of Trust in a .pem file for the server hosting the EMS.   After configuring TLS and https, the EMS REST API has to be called via https://emsserver:8000 (instead of http).   Certificates Configuration   The certificates configuration hold all certificates that the EMS will need to validate. If ThingWorx is configured for HTTPS and the ws_connection.encryption is set to "ssl" the Chain of Trust for the ThingWorx Platform Server Certificate must be present in the .pem file. If the LSR is configured for HTTPS the Chain of Trust for the LSR Server Certificate must be present in the .pem file.   "certificates": { "validate": true, "allow_self_signed": false, "cert_chain" : "/pathToEMS/listOfCertificates.pem" } The listOfCertificates.pem is basicially a copy of the lsrcertificate.pem with the added ThingWorx certificates and CAs.   Note that all certificates to be validated as well as their full Chain of Trust must be present in this one .pem file. Multiple files cannot be configured.   Binding to the LSR   When binding to the LSR via the auto_bind configuration, the following settings must be configured:   "auto_bind": [{ "name": "<ThingName>", "host": "<lsrHostName>", "port": 8001, "protocol": "https", "user": "luauser", "password": "AES:A26fBYKHJq+eMu0Fm2FlDw==" }]   This will ensure that the EMS connects to the LSR via https and proper authentication.   Tips   Do not use quotation marks (") as part of the strings to be encrypted. This could result in unexpected behavior when running the encryption script. Do not use a semicolon (:) as part of any username. Authentication tokens are passed from browsers as "username:password" and a semicolon in a username could result in unexpected authentication behavior leading to failed authentication requests. In the Server Specific certificates, the CN must match the actual server name and also must match the name of the http_server.host (EMS) or script_resource_host (LSR) In the .pem files first store Server Specific certificates, then all required Intermediate CAs and finally all required Root CAs - any other order could affect the consistency of the files and the certificate might not be fully readable by the scripts and processes. If the EMS is configured with certifcates, the LSR must connect via a secure channel as well and needs to be configured to do so. If the LSR is configured with certifcates, the EMS must connect via a secure channel as well and needs to be configured to do so. For testing REST API calls with resources that require encryptions and authentcation, see also How to run REST API calls with Postman on the Edge Microserver (EMS) and Lua Script Resource (LSR)   Export PEM data from KeyStore Explorer   To generate a .pem file I usually use the KeyStore Explorer for Windows - in which I have created my certificates and manage my keystores. 
In the keystore, select a certificate and view its details Each certificate and CA in the chain can be viewed: Root, Intermediate and Server Specific Select each certificate and CA and use the "PEM" button on the bottom of the interface to view the actual PEM content Copy to clipboard and paste into .pem file To generate a .pem file for the private key, Right-click the certificate > Export > Export Private Key Choose "PKCS #8" Check "Encrypted" and use the default algorithm; define an "Encryption Password"; check the "PEM" checkbox and export it as .pkcs8 file The .pkcs8 file can then be renamed and used as .pem file The password set during the export process will be the scripts.script_resource_passphrase (LSR) or the http_server.passphrase (EMS) After generating the .pem files I copy them over to my Linux systems where they will need 644 permissions (-rw-r--r--)
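To sanity-check a configuration like the one above from code rather than Postman, a small Go client can load the same chain-of-trust .pem into its trust store and send an authenticated request to the EMS or LSR. This is only a sketch: the host, port, credentials and request path are placeholders (consult the EMS/LSR REST API documentation for real endpoints), and it assumes the user/password pair is presented as HTTP basic authentication, as described in the Tips section.

// Hedged sketch: call an HTTPS-enabled EMS/LSR REST endpoint using the custom
// CA chain and HTTP basic authentication configured in this post.
// Placeholders: host/port, user/password, pem path and the request path.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Load the same chain-of-trust .pem used in the EMS/LSR configuration.
    caPem, err := ioutil.ReadFile("/pathToEMS/listOfCertificates.pem")
    if err != nil {
        panic(err)
    }
    pool := x509.NewCertPool()
    if !pool.AppendCertsFromPEM(caPem) {
        panic("no certificates could be parsed from the .pem file")
    }

    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool}, // trust only our chain
        },
    }

    // Placeholder path - replace with a real EMS/LSR REST resource.
    req, _ := http.NewRequest("GET", "https://emsserver:8000/", nil)
    req.SetBasicAuth("emsuser", "pword123") // credentials from config.json (cleartext here, encrypted there)

    res, err := client.Do(req)
    if err != nil {
        fmt.Println("request failed (check certificates and credentials):", err)
        return
    }
    defer res.Body.Close()
    body, _ := ioutil.ReadAll(res.Body)
    fmt.Println("HTTP", res.StatusCode, string(body))
}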
This is going to cover one way of configuring an SSL passthrough using HAProxy. This guide is intended to be a reference document, and administrators looking to configure an SSL passthrough should make sure the end solution meets both their company's business and security needs.

Why use SSL passthrough instead of SSL termination?

The main reason for ThingWorx would be if a company requires encrypted communication internally as well as externally. With SSL termination, the request between the load balancer and the client is encrypted, but the load balancer takes on the role of decrypting the traffic and passes it back to the server in the clear. With SSL passthrough, the request goes through the load balancer as-is, and the decryption happens on the ThingWorx application server.

What you will need to continue with this guide:
- HAProxy installed
- A working ThingWorx application server (a guide to getting one set up can be found here)
- Tomcat configured for SSL
  NOTE: Always contact your Security team and make sure you have a certificate that meets your business policy.
  For this tutorial, I created a self-signed certificate following along with the guide below. If you have already obtained a valid certificate, you can just skip over the step of creating it and follow along with the Tomcat portion: https://www.ptc.com/en/support/article?n=CS193947

Once configured, restart Tomcat and verify it is working by navigating to https://<yourServer>:<port>/Thingworx

With ThingWorx running over SSL and HAProxy installed, we just need to make sure the HAProxy configuration is set up to allow SSL traffic through. We use 'mode tcp' to accomplish this.

On your HAProxy machine, open /etc/haproxy/haproxy.cfg for editing. While most of this can be customized to fit your business needs, some variation of the highlighted portions below needs to be included in your final configuration:

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        # An alternative list with additional directives can be obtained from
        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log global
        option tcplog
        mode tcp
        option http-server-close
        timeout connect 1s
        timeout client  20s
        timeout server  20s
        timeout client-fin 20s
        timeout tunnel 1h
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend https
        bind *:443
        mode tcp
        default_backend bk_app

backend bk_app
        mode tcp
        server TWXAPP01  <twxapp01IP>:<port>

In this example, the user would connect to https://<loadbalancer>/Thingworx and the load balancer would forward the requests to https://<twxapp01IP>:<port>/Thingworx

That’s it!

A couple of side notes:
- The load balancer port the clients connect to does not need to be the same as the ThingWorx port the load balancer forwards to.
- If working in a Highly Available configuration, each ThingWorx application server needs to have its own certificate configured.
- If HAProxy seems unstable, try updating to the latest release. If it is already on the latest release according to the Unix repository, check https://www.haproxy.org/ and see if there is a later stable release. There have been some issues where Ubuntu's latest update in the repository is actually a few years old.
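A quick way to confirm the passthrough is behaving as expected is to check which certificate is presented on the load balancer's port: with SSL passthrough it should be the ThingWorx application server's certificate (in this tutorial, the self-signed one), not one belonging to HAProxy. The Go sketch below does that check; the load balancer hostname is a placeholder, and certificate verification is skipped only because a self-signed certificate was used above.

// Sketch: connect to the load balancer and print the certificate it presents.
// With SSL passthrough this should be the backend Tomcat/ThingWorx certificate.
// Placeholder: replace "loadbalancer:443" with your HAProxy host and port.
package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    conn, err := tls.Dial("tcp", "loadbalancer:443", &tls.Config{
        InsecureSkipVerify: true, // only for inspecting a self-signed cert - do not use in production code
    })
    if err != nil {
        fmt.Println("TLS handshake failed:", err)
        return
    }
    defer conn.Close()

    for _, cert := range conn.ConnectionState().PeerCertificates {
        fmt.Println("Subject:", cert.Subject.CommonName, "Issuer:", cert.Issuer.CommonName)
    }
}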