I have been trying to reassess our deployment process. The process defines how we start projects and enhancements and how they move up the chain: Dev -> Test -> UAT -> Prod. It also covers how we investigate when things inevitably go wrong.
Recently I have been looking into Git for source control management of our projects, namely everything in wtSafeArea and wtCustom. This works alright until you try to account for system-specific settings: which systems to integrate with, who my workers are, how many method servers to run, etc.
So I have created a long-lived branch for each environment: Dev, Test, UAT, and Prod. This seems to be working for the most part, but it might be a bit cumbersome to maintain.
That's when I thought that maybe our integrated systems would be better served as repos, plus a project repo that represents all of our actual project work. Essentially, the system repos become backups of our systems in which we can see the diffs in real time. And if we're doing that, why capture just wtSafeArea and wtCustom? Why not expand out into codebase and src and our actual properties and xconf files? That way we have good, accurate snapshots of our integrated systems... which is probably too big for Git (I assume).
That's just the thing, though: I'm assuming. This is hard to test and practice without causing some real confusion, so I'm wondering: what does the Windchill community, as a whole, do to achieve these ideas of source control AND system awareness?
((attached is a .gitignore file I'm playing with to keep production files out of Git; let me know what you think))
Personally, I use separate repos.
A good .gitignore can do the job, and yours looks good to me (I will try it on my test system).
We use one repo and prefix the environment-specific files in the repo (PROD.site.xconf vs DEV.site.xconf).
This strategy works for us because we use a comparison tool I highly recommend called 'Beyond Compare': https://www.scootersoftware.com
We create environment specific compare sessions within Beyond Compare, which can either capture changes on a specific server into the local working repo folders, or deploy files from the working repo folders to a specific server. The sessions deal with the prefixed environment specific files.
For example, a session that compares the local working repo to the PROD server would ignore any files in the repo prefixed with DEV.*. At the same time, there are equivalence rules in the session that allow files prefixed with PROD.* in the repo to be compared with files of the same base name on the server without the prefix.
This allows us to maintain a single repo that serves multiple environments. We also have sessions that compare equivalent environment-specific files against each other, e.g. PROD.site.xconf against DEV.site.xconf.
Not the only way but it works well for us.
Regards
Darren
We essentially follow the same conventions that Darren described. We have a site.xconf prefixed with a hostname for each of our environments and have a tool that is "smart" enough to deploy the correct site.xconf to the corresponding environment. We developed a deployment tool (integrated with Bamboo) to deliver and deploy an executable JAR file to each environment. Our deployment tool reads the hostname of the environment, compares it to the site.xconf prefix, and then deploys the appropriate file.
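A minimal sketch of that hostname-matching selection step, assuming repo files named `<hostname>.site.xconf` that get deployed to the server as plain `site.xconf` (the function and file names here are illustrative, not the actual tool):

```python
import socket
from pathlib import Path

def select_xconf(repo_dir, hostname=None):
    """Pick the site.xconf whose prefix matches this host's name.

    Assumes files are named '<hostname>.site.xconf' in the repo; the
    prefix is dropped on deployment so the target is always 'site.xconf'.
    Returns (source_path_in_repo, target_name_on_server).
    """
    hostname = hostname or socket.gethostname()
    candidate = Path(repo_dir) / f"{hostname}.site.xconf"
    if candidate.is_file():
        return candidate, "site.xconf"
    raise FileNotFoundError(f"no site.xconf found for host {hostname!r}")
```

The real tool reads the hostname at deploy time, so the same artifact can be shipped to every environment unchanged.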
Our branching strategy is relatively close to the one described in this post:
I just got back from another environment where Ansible was used to maintain the infrastructure completely. It was kind of amazing.
During that time I found that the core power of Ansible is its Template module, which uses YAML files and Jinja2.
I'm working on a new strategy for my git project. I'm looking to maintain just the wtSafeArea and utilizing jinja2 to maintain the differences between systems. I will have system-specific branches and system-specific yml files for variable captures.
When a new commit is made in a system-specific branch (from a pull request), a pipeline will fire. In that pipeline is a Python script that cycles through every file in wtSafeArea: if it ends in .j2, render it; if it doesn't, just copy it.
I'll let you know how it turns out.
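A dependency-free sketch of that walk-and-render loop, assuming the renderer is passed in as a callable (in the real pipeline that would be Jinja2 loaded with the system-specific YAML variables; paths and names here are illustrative):

```python
import shutil
from pathlib import Path

def build_tree(src_root, dest_root, render):
    """Walk src_root; render *.j2 templates via `render`, copy everything else.

    `render` is any callable taking template text and returning rendered
    text. Keeping it pluggable keeps this sketch free of the jinja2
    dependency while preserving the pipeline's shape.
    """
    src_root, dest_root = Path(src_root), Path(dest_root)
    for src in src_root.rglob("*"):
        if src.is_dir():
            continue
        rel = src.relative_to(src_root)
        if src.suffix == ".j2":
            dest = dest_root / rel.with_suffix("")  # drop the .j2 extension
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(render(src.read_text()))
        else:
            dest = dest_root / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
```

With Jinja2 installed you would pass something like `lambda text: jinja2.Template(text).render(**system_vars)` as `render`.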
We got the build process stood up. We're using Azure DevOps and its pipelines, but the concept should work in any CI/CD solution.
We identified three different circumstances for our files.
1. A normal copy: the file is the same across all systems, so copy it to its destination.
2. A Jinja copy: the file differs across systems but is plain text and simple, so the differences are captured between the variables file and the Jinja expressions in the file. Run it through template rendering and place the result in the destination.
3. A "specto" file: each system has a dedicated copy of this file, named <filename>.specto.<system name>. The logic checks whether the filename contains "specto", then whether it ends with the current system name. If so, it copies it to <filename> (stripping off ".specto.<system name>"); otherwise it ignores the file (i.e., the other systems' spectos).
*note* specto is something we made up; we needed a signature to identify the situation (specto) and a signature to tell whether a copy exists for a given system (<system name>).
We have been using this for our file generation and are now managing all of our windchill instances in one repo using traditional branching strategies.
This same concept has expanded out to our publishers and other systems managed by our neighboring team in IT. It's been working pretty great.
Hi all
I am just trying to get all our report templates into git. Export them all from WC and you get a zip file with all report template filenames prefixed with 'Exported_QML_'.
My next task: write a script to trim 'Exported_QML_' off each filename. There must be a simpler way.
cheers -- Rick
$ find . -name "Exported*.xml" -exec sh -c 'echo mv "$1" "$(echo "$1" | sed s/\\.\\/Exported_QML_//)"' _ {} \; > rename.sh
When I export my report templates, I export them as "report name", replace spaces with _'s, and add _report_template to the zip name... I then unzip it before adding it to source control. Then, as part of my CI/CD, I loop through all first-level directories of my Report_Templates folder and zip them back up... unfortunately I'm uploading the resulting zips manually for my deployment.
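That re-zipping loop can be done in a few lines with the standard library (a sketch assuming one directory per template under a Report_Templates folder; the folder names are illustrative):

```python
import shutil
from pathlib import Path

def zip_report_templates(templates_root, out_dir):
    """Zip each first-level directory under templates_root into its own
    archive, ready for upload to Windchill (still a manual step here)."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    archives = []
    for d in sorted(Path(templates_root).iterdir()):
        if d.is_dir():
            # make_archive appends ".zip" to the base name itself
            archives.append(
                shutil.make_archive(str(out_dir / d.name), "zip", root_dir=d)
            )
    return archives
```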
Hi all
I am trying to get the workflows into git. Export them, and you get many XML files that are numbered, like TAG-NmLoader-92.xml. The name of the workflow is contained in the <csvname> tag in the file.
You need the files to be named [workflowname].xml, so it's time to write a script. There must be a better way to get workflows into git; please share!
thanks -- Rick
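Until someone shares a better way, one sketch of that renaming script: pull the first <csvname> out of each export and rename the file after it. This assumes the tag is not namespaced in the export, which may not hold for every Windchill version:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def workflow_name(xml_path):
    """Return the text of the first <csvname> element in an exported
    workflow XML file, or None if the tag is absent."""
    # iterparse avoids loading the whole document just to find one tag
    for _, elem in ET.iterparse(str(xml_path)):
        if elem.tag == "csvname":
            return elem.text
    return None

def rename_workflows(export_dir):
    """Rename each numbered export (e.g. TAG-NmLoader-92.xml) to
    <workflowname>.xml so diffs in git stay stable across exports."""
    renamed = []
    # sorted() materializes the listing so renames don't affect iteration
    for f in sorted(Path(export_dir).glob("*.xml")):
        name = workflow_name(f)
        if name:
            target = f.with_name(f"{name}.xml")
            f.rename(target)
            renamed.append(target.name)
    return renamed
```

Note there is no collision handling here; two workflows with the same <csvname> would clobber each other, so a real script should check for that.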