Best Practices for Deploying ThingWorx Projects (DEV → TEST → PROD)

MA8731174

Hi Community,

Currently, we deploy by simply exporting projects from DEV and importing them into TEST/PROD. This works, but it also allows quick changes directly on PROD, which isn’t ideal for tracking and versioning.

I’ve seen that PTC recommends packaging as extensions, which gives versioning, rollback, and cleaner deployments, but it also means every change requires a new package, and PROD configs can get overwritten.

What’s the best recommended way in practice?

  • Stick with simple export/import?

  • Or always use extensions with versioning?

  • How do you handle quick fixes vs. safe, reproducible deployments?

Thanks for your advice!

ACCEPTED SOLUTION

Here's what I've been doing on about 15 projects so far:

 

  1. Developers have individual ThingWorx sandboxes / servers using one of the following:
    1. K8s
    2. Local Docker
    3. Installed locally
  2. All code integration happens via Git pull requests. A PR serves as a quality gate, allowing you to do code reviews, run automated checks, test the change on a live instance before merging, etc.
  3. All deployments are automated; you never use the Import/Export feature manually, not even on dev servers.
    1. For dev sandboxes: Import as Source Control.
      1. In practice this is done by running some sort of deploy.sh script on a developer's machine.
    2. For all other environments, including test, pre-prod and prod: Import as Extension.
      1. This is done by a CD pipeline in your Git platform, e.g. via GitHub Actions or Jenkins.
  4. Every time you need to deploy any change (even a small bugfix), you push it to Git and open a PR; once the PR is merged, the new extension is built and deployed -- all automatically. In real projects this takes ~5 minutes.
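The "build extension and deploy" step from point 4 can be sketched roughly as below. This is a minimal illustration, not PTC tooling: the project name, version, source directory, and the server URL / app key are all assumptions, and you should verify the upload endpoint and headers against your ThingWorx version before relying on them.

```python
"""Sketch: package exported entity XML files as an extension zip and
upload it. All names/URLs here are illustrative assumptions."""
import io
import os
import urllib.request
import zipfile


def build_extension(src_dir: str, name: str, version: str) -> bytes:
    """Zip entity XML files from src_dir together with a minimal
    metadata.xml descriptor (real projects list every entity file)."""
    metadata = f"""<Entities>
  <ExtensionPackages>
    <ExtensionPackage name="{name}" packageVersion="{version}"
                      minimumThingWorxVersion="9.0.0" vendor="acme"/>
  </ExtensionPackages>
</Entities>"""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("metadata.xml", metadata)
        for fname in sorted(os.listdir(src_dir)):
            if fname.endswith(".xml"):
                zf.write(os.path.join(src_dir, fname), f"Entities/{fname}")
    return buf.getvalue()


def deploy(pkg: bytes, base_url: str, app_key: str) -> None:
    """Upload the package via a multipart POST to the extension import
    endpoint (check the exact URL/headers for your platform version)."""
    boundary = "----twxdeploy"
    body = (
        f"--{boundary}\r\n"
        'Content-Disposition: form-data; name="file"; filename="ext.zip"\r\n'
        "Content-Type: application/zip\r\n\r\n"
    ).encode() + pkg + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        f"{base_url}/Thingworx/ExtensionPackageUploader?purpose=import",
        data=body, method="POST",
        headers={"appKey": app_key,
                 "X-XSRF-TOKEN": "TWX-XSRF-TOKEN-VALUE",
                 "Content-Type": f"multipart/form-data; boundary={boundary}"})
    urllib.request.urlopen(req)
```

In a CD pipeline this would run after the PR merges, with the app key injected as a secret rather than hard-coded.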

 

Let me know if you have some specific questions about this setup.

 

/ Constantine


Vilia (my company) | GitHub | LinkedIn


6 REPLIES

Hi @MA8731174, take a look at the reference article below.

 

https://www.ptc.com/en/support/article/CS403579


That’s an excellent setup, and exactly how such processes should ideally be organized.

 

In my current role, however, we’re not following these standards since I don’t have rights on the production environment. My access extends only to the development server, and we are not the product owner. Still, I was very interested to understand what an ideal setup looks like, so thank you for providing such a clear and detailed explanation.

 

For now, my project will handle deployments as extensions on test and production, which is at least better than manual import/export. There's no PR-based build or CI/CD pipeline in place, and our access is limited to the development environment, where we build solutions for other departments within the company. I have recently started using the GitHub extension (by Vladimir Rosu) to work more professionally.

 

If in the future I get an opportunity to work more directly with production deployments and come across challenges, I may reach out here again. Thanks for sharing your approach.

You're welcome! I found that exporting as XML locally and using a proper Git client like SourceTree gives me more control over what goes into the repo, and allows for a cleaner PR / less work for the reviewer.

 

You can improve your process by writing a pair of shell scripts, e.g. export.sh / deploy.sh, to ensure that at least the build, deployment and configuration are fully automated. You don't need PRs or CD for that; this is something you can do today.
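The export half of such a script pair typically just invokes a platform service over REST. The snippet below shows the generic ThingWorx service-invocation URL shape; the specific service name (ExportSourceControlledEntities), its parameters, and the server URL / app key are assumptions you should check against the actual service definitions on your own server.

```python
"""Sketch of an export step: build a REST call that invokes a ThingWorx
service. Service and parameter names are placeholders to verify."""
import json
import urllib.request


def service_request(base_url, app_key, entity_path, service, params):
    """POST request for the standard service-invocation URL form:
    {base}/Thingworx/{Collection}/{Entity}/Services/{Service}"""
    return urllib.request.Request(
        f"{base_url}/Thingworx/{entity_path}/Services/{service}",
        data=json.dumps(params).encode(),
        method="POST",
        headers={"appKey": app_key,
                 "Content-Type": "application/json",
                 "Accept": "application/json"})


# export.sh equivalent: ask the platform to export project entities into
# a file repository (hypothetical server, key, and parameter names)
req = service_request(
    "https://twx-dev.example.com", "APP-KEY",
    "Resources/SourceControlFunctions", "ExportSourceControlledEntities",
    {"projectName": "MyProject", "repositoryName": "SystemRepository",
     "path": "/export"})
# urllib.request.urlopen(req)  # then pull the exported files and commit to Git
```

A deploy.sh counterpart would do the reverse: upload the committed XML (or a built extension) back to the target server.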

 

Sharing a dev server with other developers prevents you from doing a proper feature-based Git workflow with code reviews, but it doesn't prevent you from doing basic source control as such. It's important to use Git, not the dev server, as the source of truth for the application's code. Then you can deploy safely at any moment, even without CD.


Vilia (my company) | GitHub | LinkedIn

I appreciate your engagement in this regard. Could you please point me to some resources for this automation? How would you approach it if you were starting from scratch? That explanation would help me implement this approach.

 

Note: I only have access to the development server (and, to some extent, the test server). When we want to release the project, we call the production department, and on that call they perform the deployment on PROD so customers can use the mashups in production. I don't have any credentials for PROD, but the production department would always welcome new ways of doing things, which is actually good.

Like any CI/CD pipeline, those deployment scripts would be unique for each project. For example, yours might contain some build steps to convert Excel files to mashups, while for another project we would use Excel spreadsheets to import some master data as part of the upgrade. Different people have different points of view on what should be part of the code, what is reference data, and what is configuration, and how to manage those three -- this has a major effect on your pipeline. A typical example is localization tokens: some customers export them as XML and treat them as code, while others manage them in an external system like Weblate and deploy them dynamically as they translate. So there's no one-size-fits-all approach; all projects are different.

 

I mentioned a few details in another response: https://community.ptc.com/t5/ThingWorx-Developers/GitHub-Integration-for-ThingWorx-Latest-Tools-for-CI-CD/td-p/988176

 

If you have specific questions like "what is the best way to merge xxx configuration?", it's better to ask them as separate questions, as there are lots of things to discuss. Implementing proper deployment automation is far from trivial, but it brings good value.

 

/ Constantine


Vilia (my company) | GitHub | LinkedIn