I have hosted the master server successfully and copied the installation folder, with the directory structure maintained, to the slave server. Beyond this I do not know how to proceed. I have gone through some of the Windchill clustering documents, but they didn't help me much.
Please help me with the steps and settings to host the slave server. This would be a real value-add for me.
The master node and slave nodes each have a site.xconf file. For each node, you must edit its site.xconf file; all of the slaves will probably have the same content in their site.xconf files, but the master's site.xconf will be different.
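For illustration, the per-node entries in a slave's site.xconf look something like the following. This is only a sketch: the host names are placeholders, and you should verify the exact attributes against your release's files, since xconfmanager normally writes these entries for you.

```xml
<!-- Sketch of per-node overrides in a slave's site.xconf.
     Host names are placeholders; attributes follow the usual
     xconfmanager-generated form. -->
<Property name="java.rmi.server.hostname" overridable="true"
          targetFile="codebase/wt.properties" value="slave-b.example.com"/>
<Property name="wt.cache.master.hostname" overridable="true"
          targetFile="codebase/wt.properties" value="master-d.example.com"/>
```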
Please check whether this is okay.
| wt.properties entry | My comments |
|---|---|
| `java.rmi.server.hostname=` | Set to the name of the local server being configured |
| `wt.rmi.server.hostname=` | Load balancer host name or URL |
| `wt.server.hostname=` | Do we need to leave this blank? If yes, how do we set it to null? |
| `wt.queue.executeQueues=` | Set to false; how do we route specific types of tasks/requests to the background method server on the master? |
| `wt.cache.master.codebase=` | Do we set the master server name or the load balancer name/URL? |
| `wt.cache.master.hostname=D` | Master server host name |
| `wt.cache.master.slaveHosts=B, C, D` | Names of all slaves in the cluster |
| `wt.pom.serviceName=` | How do we find these details if we don't know them? Are there any Windchill commands? |
| `wt.pom.dbUser=` | Windchill user name for the database server |
| `wt.pom.dbPassword=` | Windchill password for the database user |
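On the "any Windchill commands" question: properties are normally set and propagated with the xconfmanager utility from the Windchill bin directory rather than by hand-editing wt.properties. A hedged sketch, with a placeholder host name; confirm the exact options against your release's documentation:

```shell
# Sketch only: run from <Windchill>/bin on the node being configured.
# Sets a property in site.xconf and propagates it to wt.properties.
xconfmanager -s wt.cache.master.hostname=master-d.example.com \
             -t codebase/wt.properties -p
```

Running it this way keeps site.xconf as the single source of truth for the node, which also matters later if you mirror the installation between nodes.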
I am also in the process of configuring a cluster environment. Could you please share your ideas/suggestions?
Thanks in advance.
Do yourself a favor and move to 10.2 (and ideally M020 or higher) before deploying a cluster if at all possible.
10.2 has numerous benefits in this area. In short, in 10.2 clusters are easier to set up and work better once they are set up.
If you're setting up a cluster on 10.2 and assuming it is just like doing so on a previous release, don't. Read the documentation first as things have changed in some very critical ways.
One other note on clusters:
Do not simply do an ad hoc copy from one cluster node to another to produce the cluster.
Sure, that will work, but taking this approach will be a source of endless errors later, as someone applies a change to one cluster node and fails to apply it fully or properly to some of the others.
Instead, anyone deploying a cluster should use rsync, robocopy, or a source control system to robustly mirror one node to all the other nodes (apart from node-specific configuration where this is necessary, which can be kept in a separate xconf file). This eliminates unintended discrepancies between cluster nodes.
Using source control is the most compelling approach. You check the entire node installation into a source control repository like Git (or Subversion, or whatever). Producing other nodes then involves pulling the installation from the source control repository.

The compelling part is that one can easily obtain a full, traceable history of all changes made to the installation, complete with administrative commentary on why each change was made. The lack of such traceability is often a problem even at non-clustered sites, when some issue suddenly arises and there is no truly reliable record of exactly what was changed in the software installation recently.

One can also use branches in the source control system to represent similar but slightly different deployments, e.g. node-specific configurations or test vs. production nodes. That is, of course, easiest on a source control system where branching is really fast and easy, like Git, for instance.
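The workflow above can be sketched with Git as follows; the install path, branch name, and commit messages are purely illustrative assumptions:

```shell
# Sketch: version a node installation in Git for traceability.
WT_HOME=/opt/ptc/Windchill      # assumed install root
cd "$WT_HOME"
git init
git add -A
git commit -m "Baseline: master node installation"

# Keep node-specific configuration on its own branch:
git checkout -b node-b
# ...edit site.xconf for node B, then record why:
git add site.xconf
git commit -m "Node B site.xconf overrides"
```

Producing node B is then a clone/checkout of the `node-b` branch, and `git log` gives you the full change history with the administrative commentary described above.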