Jess, good recap. I think the point missed here is not technical in origin,
but business: how do you let users back onto a system post-deployment
without assurance that the new changes will not negatively impact a system
that is used 24 hours a day?
Smaller deployments reduce failure risk, the scope of any rollback, and
potential data loss; consider, say, a WTPart or WTDocument created or
iterated after the change is deployed. Big deployments do carry more risk.
Perhaps a snapshot capability for configuration in the API would allow
someone to roll back changes, while all data created in the past 6 hours
gets administratively auto-adjusted to use the old baseline?
That is a heavy lift, but not impossible. Workflow and lifecycle templates
are already iterated, but OIRs and preferences, for example, are not.
Obviously with clustering you can cycle nodes, but there is still some lack
of elasticity. As the old saying goes, now that PTC has improved clustering:
give a customer a cookie, and they'll wisely ask for that glass of milk.
The idea is not unreasonable at its core; R&D dollars versus opportunity
cost just have to play in here. (And Danny, to be fair, I have no idea if
this is even an accurate reading of what you were asking the blaster
about.)
David
On Wed, Feb 25, 2015 at 2:49 PM, Jess Holle <->
wrote:
> In general, once you have a set of changes to apply to your production
> environment, these should be applied to one node and then replicated to
> the other cluster nodes.
>
> In the *worst* case this should be done via rsync or robocopy. It should
> *never* be done by ad hoc per-node installation or file copying -- as this
> almost unavoidably becomes a source of system inconsistencies that then
> cause countless problems down the road.
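>
> For example (a sketch; the /opt/ptc/Windchill path and node2 hostname are
> made up), a one-way sync from the node where changes were applied out to
> another node might be:
>
>     # push the installation to node2, deleting files removed at the source
>     rsync -av --delete /opt/ptc/Windchill/ node2:/opt/ptc/Windchill/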
>
> *Ideally* you should use a source control system rather than
> simple rsync or robocopy. In this approach you make all changes to one
> node, check-in/commit/push these changes to a source control repository --
> along with a detailed comment as to the changes being made and their
> purpose, and then checkout/pull these changes to all other nodes. This
> ensures that you have a detailed history of changes to the software
> installation including any file-based configuration, complete with quick
> and easy access to difference reports for all files. This is quick, easy,
> and free to set up via open source software like Git. Pulling changes from
> a git repository is also quite fast and efficient (especially as compared
> to some older source control systems, free or commercial).
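>
> As a rough sketch of that workflow (the install path, remote, and branch
> names here are illustrative):
>
>     # on the node where the changes were made
>     cd /opt/ptc/Windchill
>     git add -A
>     git commit -m "describe the changes being made and their purpose"
>     git push origin master
>
>     # on each of the other nodes
>     cd /opt/ptc/Windchill
>     git pull origin master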
>
> In either case, i.e. whether you're using rsync/robocopy or a source
> control system, the only downtime for nodes after the original installation
> is that required for a shutdown, pull/rsync, and restart. If there are
> database changes (beyond simply adding new tables and columns or that sort
> of purely additive thing), then these also must generally be done during
> the downtime.
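>
> Per node, that window is roughly the following (assuming the standard
> windchill start/stop commands and the git setup sketched above):
>
>     windchill stop
>     git pull origin master    # or the rsync/robocopy equivalent
>     windchill start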
>
> Prior to Windchill 10.2, one had to manage node-specific properties,
> particularly in the case of the master, but every node at least had a
> unique value for java.rmi.server.hostname. As of Windchill 10.2, the
> master is dynamically elected (both initially and upon master failure) and
> the only node specific properties required are those which are specifically
> desired (particularly to run Solr on a specific node, since there can only
> be one Solr instance in the cluster). In cases where per-node changes
> *are* required (or even between test and production) one can use a version
> control system's branching capabilities to manage node or test vs.
> production specifics.
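>
> For instance, the Solr node's overrides might live on their own branch
> (branch name is made up), periodically refreshed from the shared line:
>
>     # one-time: branch off the shared configuration for the Solr node
>     git checkout -b solr-node master
>     git commit -am "node-specific Solr properties"
>
>     # later: fold new shared changes into the node branch
>     git checkout solr-node
>     git merge master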
>
> If not running a cluster one should still be able to essentially clone the
> production node, do the installation, testing, etc., there, and then pull
> the changes back to the production node. Additionally, if using a cluster,
> the node to which changes are originally applied doesn't have to be one of
> the normal production cluster nodes either.
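>
> In source control terms, that clone-and-pull-back cycle might look like
> this (hostnames and paths are illustrative):
>
>     # on a test machine: clone the production installation
>     git clone prod-node:/opt/ptc/Windchill /opt/ptc/Windchill
>     # ...do the installation and testing there, commit the results...
>
>     # on the production node: pull the vetted changes back
>     git pull test-node:/opt/ptc/Windchill master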
>
> Finally, as of 10.2 M010, one should fairly easily be able to add a new
> cluster node by doing the following:
>
>    1. Add a new hostname to wt.cache.master.slaveHosts and propagate it
>    to wt.properties via xconfmanager (see the sketch after this list)
> 2. Push this change to the source control repository
> 3. Pull this change to all existing cluster nodes (without restarting
> them)
> - As of 10.2 M010, this change to wt.properties will immediately be
> noticed and immediately be incorporated into the running server processes
> 4. Pull an installation from source control to a new cluster node,
> and start it up.
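>
> Step 1 might look like the following via xconfmanager (hostnames are
> examples; node3 is the one being added):
>
>     # set the full slave-host list and propagate to wt.properties
>     xconfmanager -s "wt.cache.master.slaveHosts=node1 node2 node3" \
>         -t codebase/wt.properties -p
>
> with steps 2-4 being the usual commit/push/pull cycle described earlier.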
>
> --
> Jess Holle