
Monolithic to Cluster in 10.2 M030

ssingh-2
3-Newcomer


Hello All,

We are planning to convert our existing monolithic WC 10.2 M030 installation into a cluster environment. I have gone through the WCADVDeployguide for 10.2 M030 and decided that we will go with an identical node configuration.

But to configure this, we have two basic questions:

1) How do we configure an Apache load balancer during the testing/trial phase? Later we will use a hardware load balancer.

2) When we run the PSI on the existing setup, we do not get the "enable cluster support" option. I do not know where to start if we have to convert our existing WC into a cluster.

Please let us know the steps from the start if anybody has already configured identical-node clustering in 10.2 M030.

Thanks & Regards,

Shekhar

3 REPLIES
BineshKumar1
13-Aquamarine
(To:ssingh-2)

Hello Shekhar,

First of all, I don't think it is a good idea to use Apache as a load balancer when you move to production, as it lacks the essential capabilities of a commercial LTM (HA and load-based routing are poor, and administration is tedious).

To use Apache as a load balancer, you have two options: HTTP-based load balancing or mod_jk-based load balancing.

HTTP based - You can google and find a lot of how-tos on HTTP-based load balancing using the proxy feature of Apache (mod_proxy). A minimal sketch for a trial setup is shown below.
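For example, a rough mod_proxy_balancer sketch might look like this (the host names, ports, and balancer name are placeholders, and mod_proxy, mod_proxy_http, and mod_proxy_balancer must be loaded):

<Proxy balancer://wccluster>
    BalancerMember http://wcnode1.example.com:80 route=node1
    BalancerMember http://wcnode2.example.com:80 route=node2
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass /Windchill balancer://wccluster/Windchill
ProxyPassReverse /Windchill balancer://wccluster/Windchill

For session stickiness to work, the route values have to match the route/jvmRoute that each node appends to JSESSIONID.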

mod_jk based load balancing - This is in fact already used in Windchill: when you have multiple method servers in a monolithic environment, Apache load-balances requests via mod_jk to the embedded Tomcats in the method servers. When you have a cluster, what you have to do is define load-balancing workers on the different hosts and define routes (see the sketch below). This is detailed in the 10.1 advanced deployment guide and in https://support.ptc.com/appserver/cs/view/solution.jsp?n=CS40808
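As a rough sketch of what the workers.properties for that could look like (worker and host names are placeholders, 8009 is only the generic Tomcat AJP default, and the real ports and routes come from the Windchill embedded Tomcat configuration described in the guide and CS40808):

# load-balancer worker that Apache mounts, e.g. JkMount /Windchill/* wclb
worker.list=wclb
worker.wclb.type=lb
worker.wclb.balance_workers=node1,node2

# one AJP worker per cluster host; worker names should match each node's route/jvmRoute
worker.node1.type=ajp13
worker.node1.host=wcnode1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=wcnode2.example.com
worker.node2.port=8009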

I don't think you can run the PSI to convert an existing install to a cluster; it is used when you are doing a new install. Anyhow, the PSI doesn't do anything other than provide the additional wt.properties entries specific to the master and slaves. To configure a cluster, you will still have to copy the load points (JDK, Apache, Windchill and so on) to all the members. So the best option for you is to copy the load points to the slave hosts and reconfigure the properties wt.cache.master.codebase, wt.cache.master.hostname, wt.cache.master.slaveHosts and so on. The example detailed in the advanced deployment guide is easy to follow. Ensure that you have commonly accessible storage for the vaults as well. Since you are on 10.2, do configure the dynamic cache master, which helps you get past the single master being a single point of failure (SPOF).
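As a rough sketch of the kind of entries involved (host names are placeholders; the exact codebase form, the separator for slaveHosts, and the full list with the xconfmanager commands to set and propagate them are in the advanced deployment guide's cluster configuration example):

# shared cluster-wide values, typically set with xconfmanager and propagated to every member
wt.cache.master.hostname=wcmaster.example.com
wt.cache.master.codebase=http://wcmaster.example.com/Windchill/codebase
wt.cache.master.slaveHosts=wcnode1.example.com wcnode2.example.com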

Thank you,

Binesh Kumar

Barry Wehmiller

Hello Binesh,

First of all thanks for quick update.

As I said above, we will be configuring the Apache load balancer only for the trial/testing/rehearsal phase. For the production system we will use a hardware load balancer.

After going through your answer above, I have a few more questions:

1) So in 10.2, if we copy the load point folders to the cluster/slave nodes and add wt.rmi.server.hostname and wt.cache.master.slaveHosts, will that be enough to configure the cluster?

2) Also, we have multiple BGMSs named differently for indexing (INDX), publishing, and the standard queues. What will be the best strategy to configure the BGMSs in the cluster for failover?

Thanks & Regards,

Shekhar

BineshKumar1
13-Aquamarine
(To:ssingh-2)

Hi Shekhar,

That's right, you have to make those property changes after you copy the Windchill/JDK/HTTP server load points. You can find the complete property list in the advanced deployment guide's cluster configuration sample.
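To make your question 1 concrete, a minimal sketch of the split between shared and per-node values, using placeholder host names (the guide's sample has the authoritative list):

# same on every cluster member
wt.cache.master.hostname=wcmaster.example.com
wt.cache.master.slaveHosts=wcnode1.example.com wcnode2.example.com

# different on each member - set to that node's own host name
wt.rmi.server.hostname=wcnode1.example.com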

For Solr, you have a few options: run Solr either on the master node or on a dedicated slave. Solr clustering (failover) is supported only with OS clustering (Microsoft Cluster). Starting with 10.2 you can assign queue groups to multiple background method servers running on separate nodes, so you can achieve high availability for the queues; a sketch of that idea follows below. You can also assign dedicated CAD workers to each node. If you want to use the publish monitor details, you need to have a shared pubtemp directory.
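A rough, unverified sketch of the queue-group idea in wt.properties on the node that should run a dedicated publishing BGMS (the service name PublishingBMS and the group name "publishing" are made up here, the exact property pattern should be confirmed against the advanced deployment guide, and it assumes the publishing queues have already been put into that queue group via Queue Management):

# add the extra BGMS to the services the server manager starts (keep your existing entries)
wt.manager.monitor.services=MethodServer BackgroundMethodServer PublishingBMS
# launch it like a normal BGMS but bind it to the "publishing" queue group
wt.manager.cmd.PublishingBMS=$(wt.manager.cmd.BackgroundMethodServer) -Dwt.queue.queueGroup=publishing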

Thank you,

Binesh Kumar

Barry Wehmiller
