In our application we are having performance issues: there are a lot of users accessing our mashup, and a single server is struggling to keep up.
We would like to use multiple servers behind a load balancer to distribute the load. Could you please provide us with pointers on how to achieve that, if it is even possible? Do you have any other recommendations on how to deal with the load?
(We have Postgres installed on a separate server and are able to replicate the database, but just doing that does not work as expected: in our experience, the changes only become visible in the ThingWorx platform after restarting Tomcat, which is an issue.)
Thanks a lot in advance.
Support for connecting to a Windchill cluster was introduced in Navigate 1.8. It is not supported in earlier versions.
Your options for configuration depend on your load balancer. For our purposes, load balancers come in two basic designs: OSI Layer 4 and OSI Layer 7. Layer 4 load balancers can pass the ThingWorx request without altering the SSL content (SSL off-loading can be turned OFF) and without changing the hostname and IP address in the data packets (pass-through). Layer 7 load balancers cannot provide this option; they always terminate the SSL connection (off-loading) and change the hostname and IP address to their own.
The most secure authentication options provided by Navigate are available with a Layer 4 load balancer: you can use PingFederate SSO or Windchill Authentication (two-way authentication using SSL certificates). With a Layer 7 load balancer, the only authentication option that will work is Windchill trustedHost authentication. This has a security hole: anyone who can gain access to Windchill through the load balancer will be authenticated without presenting credentials. PTC recommends using it only in a proof-of-concept demonstration scenario, and never in a production environment.
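For illustration only, here is a minimal sketch of what Layer 4 pass-through looks like in practice, using Nginx's stream module as a stand-in for a generic TCP load balancer. The hostnames and ports are placeholders, and this is not a PTC-documented configuration:

```nginx
# Layer 4 (TCP) pass-through: traffic is forwarded unchanged.
# SSL is NOT terminated here, so the backend sees the original TLS session
# and client certificates still work end to end.
stream {
    upstream windchill_nodes {
        server windchill1.example.com:443;   # placeholder hostnames
        server windchill2.example.com:443;
    }

    server {
        listen 443;
        proxy_pass windchill_nodes;   # no ssl_* directives: pure pass-through
    }
}
```

A Layer 7 balancer, by contrast, would use an `http {}` block with `ssl_certificate` directives: it terminates TLS itself and re-originates the request, which is why the backend sees the balancer's identity rather than the client's.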
Configuration details are included in the Navigate install guides for Navigate 1.8, 1.9, and 8.5 under the heading Configure ThingWorx Navigate with a Clustered Windchill Environment. The same instructions can be found in the Navigate Help Center for each version (example: http://support.ptc.com/help/navigate/18/en/#page/ThingWorx_Navigate%2FConfigure_Navigate_with_Cluster_Windchill_Environment.html%23).
Thank you for your reply!
I've looked into PTC Windchill and ThingWorx Navigate, and I still do not fully understand how to use them in our use case. As I understand it, PTC Windchill is resource-sharing software: it allows multiple users to share documents, CAD files, etc., which is not what I am trying to achieve.
Let me explain what my use case is.
We mostly develop monitoring apps for the IoT. Let's say we collect data from environment-monitoring sensors, store it in the ThingWorx database (a Stream), and then create a mashup that shows the data to the users. (Simplified; in the real application there are multiple mashups, streams, things, etc.)
Once the mashup is created, we want to make it publicly accessible from the web. We do that by disabling the login prompt, connecting it to a domain, and so on. Now anyone who looks up the appropriate URL in a browser can see the mashup.
This is all tested and works just fine; the problem is the load. We can serve, let's say, 2000 users at a given time; anything beyond that slows things down significantly and eventually crashes the ThingWorx server. So we are looking for a way to run multiple servers in an active/active configuration behind a load balancer to distribute the load.
Let me list the questions I have about the whole process, because it is not clear to me based on the HA guide or the ThingWorx Navigate and PTC Windchill guides.
1. Given I have two dedicated ThingWorx servers, how do I get them working in an active/active configuration? Based on the ThingWorx HA guide, I can configure them to fail over: once one server has difficulties or crashes, I can switch to the other one (active/passive). That is not what I want; I want both servers to run concurrently and share the load.
2. What do I do with the database? We are running Postgres 11, which is also installed on a dedicated server (servers, in case we need more than one instance running). Do I just connect all the servers to the same database? (This doesn't seem to work: the servers share data only upon a Tomcat restart, overwriting each other's data, which is to be expected.) Do I need to set up master-master replication for the databases? And how would that help, given that I always need to restart Tomcat to see the changes in the database? I need this to happen in real time.
3. Once replication works as expected and we share data across all ThingWorx instances (mind that data will be arriving from the monitoring sensors at all times), we need to set up a load balancer; the Nginx web server seems to suit the task. Are there any other recommendations?
Is this even possible with ThingWorx? As far as we have seen, nobody is doing this. We also understand that ThingWorx's main purpose may be different, but we still need to solve the issue somehow.
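To make question 3 concrete, a minimal round-robin setup of the kind we have in mind could look like the sketch below. The hostnames are placeholders, and it assumes the two ThingWorx/Tomcat instances could actually share state behind the balancer, which is exactly what we are unsure about:

```nginx
http {
    upstream thingworx_backends {
        # Placeholder hostnames for the two ThingWorx/Tomcat servers.
        server twx1.example.com:8080;
        server twx2.example.com:8080;
        # Session stickiness may be required, e.g. uncomment:
        # ip_hash;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://thingworx_backends;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```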
Thank you for your time, I know that this is a complex task which requires a lot of resources.
Navigate is used to access data from the Windchill database, through Windchill, so when you began talking about load balancers on the Navigate board, I assumed you meant accessing a Windchill cluster through a load balancer that manages the Windchill nodes. I understand now that you want to manage multiple ThingWorx servers and are not using Navigate. This is a question for the ThingWorx board, not the Navigate board; that team has the knowledge about core ThingWorx capabilities.
To my knowledge, two separate ThingWorx instances cannot share the same Postgres database, so that appears to be a blocker. Things like the encryption key and credentials are stored in the database, and one instance would overwrite the other's information, causing the other instance to fail. You can get confirmation of that, and possibly of what alternatives are available, on the ThingWorx board.