
Publish queue: avoid duplication


When a CAD model or drawing is checked in to Windchill, it gets sent to the queue for publishing.

My admins tell me that this auto-publish enters the queue at 'medium' priority. If I access the item in Windchill and click the 'viewable' box, this apparently creates a second publish job at 'high' priority, and as they rightly point out, if everyone does this the queue could grow significantly.

Surely clicking 'viewable' should simply raise the priority of the existing publish job rather than create a duplicate? By definition this is the same checked-in version of the CAD item, so I can see no reason to publish it twice, and it shouldn't be difficult to check the current queue for an existing job for the selected item.


I think this may be an even bigger issue than just having the publisher run the same job twice. I've noticed that when the CAD worker tries to process two jobs for the same file in close succession, the worker crashes. This leaves a suspended xtop process which consumes the CAD worker license. When this occurs, all subsequent jobs also fail. These failed jobs continue to stack up in the queue until the services are stopped, the xtop process is killed, and the services are restarted.



For the CAD worker, PTC has a specific license called PROE_FoundationVis.

It's free, but you have to ask PTC to get it.


Thanks, Marco. Yes, that is the one I am running via FLEXlm on the WVS client. I saw no reason to run the node-locked version, as there would just be more suspended xtop processes. I would much prefer an "avoid dups" option.


Jonathan, don't forget to vote up your own idea.


The publishers can be set up to do this, but it takes a bit of custom coding and it doesn't always work as planned. This functionality should be available by default.


As the administrator of a PITA-to-manage multi-CAD publishing system, where users repeatedly republish jobs manually because they are too impatient to wait for the automatically generated viewable (and, in fairness, can't see the entire publish queue to find out where their jobs are), I think this is a good idea.

It would make a significant improvement to publishing system performance and appears simple to implement, so please let's see this in the next maintenance update. Thanks, PTC.


A total no brainer. This should be OOTB behavior.


It would also be good to have the thumbnail on the info page show that publication is in process so that the impatient users know that something is happening.


Completely agree


One can set the WVS property publish.service.filterpublishmethod to a custom Java class and method that does this task: preventing publishing when the object is already in a publish queue.

In fact I have already implemented that a while ago.

Because of that, I know why PTC hasn't included it OOTB: it is a performance issue.

Every time a new job is about to be created, the entries of the existing publisher queues have to be checked. I made it configurable to include only certain queues (for example, only PublisherQueueH). But still, every entry has to be checked, and this takes time. For bulk jobs this is a killer.

In my Windchill system, checking 1 new job against 1,000 existing entries takes about 1 second. After that job has been added there are 1,001 existing entries, and the next new job has to be compared against all of them. And so on.

However I can deal with this, because I am aware of it. But if I was PTC I wouldn't make this available OOTB as well... 🙂
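The cost being described can be sketched outside Windchill. The following is a minimal, self-contained Java sketch (the class and method names are illustrative, not the actual WVS API): it contrasts the linear scan over queue entries that a filterpublishmethod-style check has to perform, roughly O(n) per new job, with a constant-time lookup against a set of in-progress keys, which is in the spirit of the "marker column" idea.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only: these are NOT Windchill/WVS classes.
public class DuplicateJobFilter {

    // A publish job is identified here by object number + iteration.
    record JobKey(String number, String iteration) {}

    private final List<JobKey> queue = new ArrayList<>();    // existing queue entries
    private final Set<JobKey> inProgress = new HashSet<>();  // "marker column" analogue

    // Linear scan: what a filter-method-style check must do -- O(n) per new job.
    boolean isDuplicateByScan(JobKey candidate) {
        for (JobKey entry : queue) {
            if (entry.equals(candidate)) {
                return true;
            }
        }
        return false;
    }

    // Marker lookup: a per-iteration flag makes the same decision in O(1).
    boolean isDuplicateByMarker(JobKey candidate) {
        return inProgress.contains(candidate);
    }

    void submit(JobKey job) {
        queue.add(job);
        inProgress.add(job);
    }

    public static void main(String[] args) {
        DuplicateJobFilter filter = new DuplicateJobFilter();
        filter.submit(new JobKey("0001234", "A.2"));
        System.out.println(filter.isDuplicateByScan(new JobKey("0001234", "A.2")));   // true
        System.out.println(filter.isDuplicateByMarker(new JobKey("0001234", "A.2"))); // true
        System.out.println(filter.isDuplicateByScan(new JobKey("0001234", "A.3")));   // false
    }
}
```

With the scan, total work across n submissions grows roughly quadratically, which matches the 1-second-per-job observation above; a per-iteration marker keeps each check constant-time regardless of queue depth.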


OK, after re-thinking: PTC could probably figure out other ways to solve this problem. But it would take much more effort than just a little filtering class.


PTC could perhaps add a column in the database to mark whether a job is already running for a given iteration, similar to "change pending".


We get duplicate jobs for two reasons - renamed objects and state changes.

  • For the first, PTC should definitely be able to catch these.  If a rename is being performed, Windchill should be smart enough to not spawn two publish jobs - one for the file name change and a second for the name/number change.  (It used to be three separate jobs - one for name, one for number, and one for filename.)  These duplicate jobs shouldn't have been created in the first place.
  • For the second case, these occur when someone checks in something and then releases it before the original check-in publish has had a chance to complete. Being able to check the queue for older unfinished jobs and purge them automatically would be helpful here.
Status changed to: Acknowledged
There are numerous ways duplication can occur in the publishing queue:

  • Manual entries: a user clicks the paper/rollers icon or uses New Representation.
  • On check-in of a CAD Document.
  • On a state change (a.k.a. Unknown Source); this includes rename and move.
  • A scheduler.

Let me also note that there are two primary states of publishing: Ready and Executing. The current implementation for duplicate jobs addresses only the first item above, and only in the Ready state. I'm sorry to hear that only that case has been addressed, at least as I understand the Windchill 11.0 we have recently deployed. I really would have liked to see more use cases covered by job deduplication.

Following our upgrade to Windchill 12, I believe this is (at least to some degree) implemented / resolved! Hoorah!

Status changed to: Delivered

Hi! Thrilled to say that this has been implemented, slightly differently, using the property publish.service.duplicatepublishjobvalidation.enabled=true in 11.1 and later.
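For anyone looking for the mechanics: site-level WVS properties are normally set with the xconfmanager utility so the change survives reconfigures, rather than editing the file directly. A sketch, assuming a default installation layout; verify the property name and path against your release's documentation:

```shell
# Run from <Windchill>/bin; path and release availability may vary.
xconfmanager -s publish.service.duplicatepublishjobvalidation.enabled=true \
  -t codebase/WEB-INF/conf/wvs.properties -p

# Restart the method servers for the change to take effect.
```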


You may notice that jobs remain in the queue until they reach a certain point before being removed, but you should see something along the lines of "removing duplicate job": the medium-priority job (despite being submitted first) will not process, but should be marked successful because the representation already exists. So rather than bumping up the priority of the existing job, we would expect the high-priority job to finish first and the medium one to be cancelled out.


This assumes that the service can identify that the jobs are the same. If the object is iterated or hits a separate trigger (for example, you also moved the object or changed its lifecycle state), it may not be considered a duplicate, since some of the details have changed. That's expected, but if you see something outside of that scenario that still looks like we're processing a duplicate, do feel free to reach out through TS and let us know!



There are significant performance impacts when this property is enabled.

The concept is great but the implementation is not.