
How to host and configure a Windchill slave with the master server

KGH
1-Newbie

How to host and configure a Windchill slave with the master server

Hi,

I have hosted the master server successfully and copied the installation folder, with the directory structure maintained, to the slave server. After this, I do not know how to proceed. I have gone through some of the Windchill clustering documents, but they didn't help me much.

Please help me with the steps and settings to host the slave server. This would be a real value-add for me.

.....Ravi

5 REPLIES
bcavanaugh
1-Newbie
(To:KGH)

Hello,

The master node and the slave nodes each have a site.xconf file. For each node, you must edit its site.xconf file; all of the slaves will probably have the same content in their site.xconf, but the master's site.xconf will be different.
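
For example (purely illustrative; the property value, host name, and path are placeholders, not taken from this thread), a setting can be recorded in a node's site.xconf and propagated into the target properties file with xconfmanager instead of hand-editing:

    # Run from <Windchill>/bin on the node being configured (placeholder host name)
    xconfmanager -s "wt.cache.master.hostname=masterhost.example.com" -t codebase/wt.properties -p
    # -s records the declaration in site.xconf, -t names the target file, -p propagates it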

KGH
1-Newbie
(To:bcavanaugh)

Could you please check whether the following is okay?

Slave configuration

wt.properties (my comments follow each property):
  • java.rmi.server.hostname: set to the name of the local server being configured
  • wt.rmi.server.hostname: load balancer host name or URL
  • wt.server.hostname: do we need to leave this blank; if yes, how do we set it to null?
  • wt.queue.executeQueues: set to false; how do we define that specific types of tasks/requests should go to the background method server on the master?
  • wt.cache.master.codebase: need to set the master server name or the load balancer name/URL
  • wt.cache.master.hostname: D (master server host name)
  • wt.cache.master.slaveHosts: B, C, D (all slave names in the cluster)

db.properties:
  • wt.pom.serviceName: how do we find the details of these if we are not aware of them; are there any Windchill commands?
  • wt.pom.dbUser: Windchill user name for the database server
  • wt.pom.dbPassword: Windchill password for the db user

ie.properties:
  • ie.ldap.serviceName: A
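
As a purely illustrative sketch (the host names below are placeholders standing in for the A/B/C/D labels above, not confirmed values), the slave-side wt.properties settings could be declared with xconfmanager rather than edited by hand:

    # Run from <Windchill>/bin on slave node B (all host names are placeholders)
    xconfmanager -s "java.rmi.server.hostname=b.example.com" -t codebase/wt.properties
    xconfmanager -s "wt.rmi.server.hostname=loadbalancer.example.com" -t codebase/wt.properties
    xconfmanager -s "wt.queue.executeQueues=false" -t codebase/wt.properties
    xconfmanager -s "wt.cache.master.codebase=http://loadbalancer.example.com/Windchill" -t codebase/wt.properties
    xconfmanager -s "wt.cache.master.hostname=d.example.com" -t codebase/wt.properties
    # Check the Windchill documentation for the exact list delimiter expected by slaveHosts
    xconfmanager -s "wt.cache.master.slaveHosts=b.example.com c.example.com d.example.com" -t codebase/wt.properties
    xconfmanager -p   # propagate the declarations recorded in site.xconf into wt.properties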
Chanhasen
5-Regular Member
(To:KGH)

Hi,

I am also in the process of configuring a cluster environment. Could you please share your ideas/suggestions?

Thanks in Advance

jessh
5-Regular Member
(To:bcavanaugh)

Do yourself a favor and move to 10.2 (and ideally M020 or higher) before deploying a cluster if at all possible.

10.2 has numerous benefits in this area:

  • The master is dynamically determined and another node dynamically becomes the master if the master fails. This eliminates both the need for special configuration of the master and the single point of failure of the cluster master.
  • Cluster configuration is much simpler and less error-prone overall than in previous releases. For instance, except where you actually want a node to be different from the others, there is no longer any need for differences in the configuration between nodes.
  • Asymmetric clusters (those with different numbers of foreground method servers on different nodes) are no longer an issue for Info*Engine.

In short, in 10.2 clusters are easier to set up and work better once they are set up.

If you're setting up a cluster on 10.2 and assuming it is just like doing so on a previous release, don't. Read the documentation first, as things have changed in some very critical ways.

jessh
5-Regular Member
(To:KGH)

One other note on clusters:

Do not simply do an ad hoc copy from one cluster node to another to produce the cluster.

Sure, that will work, but taking this approach will be the source of endless errors later -- as someone applies a change to one cluster node and does not do so fully or properly to some of the others.

Instead, anyone deploying a cluster should use rsync, robocopy, or a source control system to robustly mirror one node to all the other nodes (apart from node-specific configuration where this is necessary, which could be kept in a separate xconf). This eliminates unintended discrepancies between cluster nodes.
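
For instance, a minimal sketch of the mirroring step (the paths, host name, and the excluded file name are assumptions for illustration only):

    # Mirror the master's Windchill installation to a slave node, keeping the
    # separate node-specific xconf (hypothetically named node.xconf here) out of the copy
    rsync -av --delete --exclude 'node.xconf' /opt/ptc/Windchill/ slave2.example.com:/opt/ptc/Windchill/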

Using source control is the most compelling approach. You check the entire node installation into a source control repository like Git (or Subversion or whatever). Producing other nodes then just involves pulling the installation from the source control repository.

The compelling part is that one can easily obtain a full, traceable history of all changes made to the installation, complete with administrative commentary as to why each change was made. The lack of such traceability is often a problem even at non-clustered sites, when some issue suddenly arises and there is no truly reliable record of exactly what was changed in the software installation recently.

One can also use branches in the source control system to represent similar but slightly different deployments, e.g. node-specific configurations or test vs. production nodes. That is, of course, easiest on a source control system where branching is really fast and easy, like Git.
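
A minimal sketch of that workflow, assuming Git and hypothetical paths and repository URL:

    # On the first node: put the installation under version control
    cd /opt/ptc/Windchill
    git init
    git add .
    git commit -m "Baseline Windchill installation for cluster node image"
    git remote add origin git@scm.example.com:windchill/node-image.git
    git push -u origin HEAD

    # On each additional node: produce the installation by pulling from the repository
    git clone git@scm.example.com:windchill/node-image.git /opt/ptc/Windchill

    # Similar-but-different deployments (node-specific or test vs. production) can live on branches
    git checkout -b node/slave2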
