PTC recommends that you keep an eye on the queues, and for good reason: workflows depend heavily on queues to function. Take for example this very useful OOTB expression robot:
This robot sets the state of all the changeables on an EC. The helper code from PTC will fail if items are checked out, and users being users, they often complete tasks with checked-out objects, causing this robot to fail. It is very tempting to just continue the failed task in the Process Manager or set the state manually, but that leaves the failed expression in the queue as a ticking time bomb, waiting for you to accidentally restart it. What's the harm in that, you ask? Let me tell you what happens...
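The failure mode can be sketched outside Windchill with plain Java. Here `Changeable` and `setStateAll` are simplified stand-ins for the real Windchill objects and lifecycle helper call, not the actual API; the point is that the helper bails out mid-loop the moment it hits a checked-out object:

```java
import java.util.List;

// Minimal model of the set-state expression robot's logic. Changeable and
// the checked-out flag are illustrative stand-ins for Windchill's workable
// lifecycle-managed objects, not the real PTC API.
class SetStateRobot {
    static class Changeable {
        final String name;
        final boolean checkedOut;
        String state;
        Changeable(String name, String state, boolean checkedOut) {
            this.name = name;
            this.state = state;
            this.checkedOut = checkedOut;
        }
    }

    // Mirrors the OOTB helper's behavior: sets the state on every changeable
    // in turn, but throws as soon as it hits a checked-out object, failing
    // the whole robot and leaving the queue entry behind.
    static void setStateAll(List<Changeable> changeables, String newState) {
        for (Changeable c : changeables) {
            if (c.checkedOut) {
                throw new IllegalStateException(c.name + " is checked out");
            }
            c.state = newState;
        }
    }
}
```

Note that any objects processed before the checked-out one have already had their state changed when the exception fires, which is part of why the leftover queue entry is so dangerous to restart later.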
So you have a queue that looks like this:
You are having workflow issues and PTC Tech Support recommends you restart all these Severe tasks (robots may be in another queue, I forget which, and by the way, this did happen). By now the workflow has finished, and your change notice is released along with the changeables. When the failed task is resumed, it can now complete, and it does: setting your changeables from Released back to Under Review. Nice. But wait...there's more. Since this is a workflow, it starts re-executing from that position, kicking off the next task. The workflow rises from the dead. WorkItem tasks are sent out again and users get confused. What's worse, you've lost data integrity, since released data can start changing on you.
The takeaway: put error checking around set-state robots (or any bot that can fail) and redirect to a user task to try again; never blindly restart queue processes without knowing what they do; and clear out any failed robots you continue past so they are gone for good.
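The guard pattern from the takeaway can be sketched the same way. This is a hypothetical illustration, not Windchill code: in a real expression robot you would set the workflow's routing event, while here the method simply returns the event name. The key design choice is to check every object before touching any of them, so a checked-out object routes to a "retry" user task with nothing half-modified and no failed entry left in the queue:

```java
import java.util.List;

// Sketch of a guarded set-state robot. Item and the event strings
// ("retry"/"default") are illustrative stand-ins, not the Windchill API.
class GuardedSetState {
    static class Item {
        final String name;
        final boolean checkedOut;
        String state;
        Item(String name, String state, boolean checkedOut) {
            this.name = name;
            this.state = state;
            this.checkedOut = checkedOut;
        }
    }

    // Returns the routing event the robot would emit: "retry" sends the
    // workflow to a user task asking someone to check the objects in;
    // "default" means every state change succeeded.
    static String setStateGuarded(List<Item> items, String newState) {
        // Pre-check everything first so we never partially update.
        for (Item i : items) {
            if (i.checkedOut) {
                return "retry"; // nothing modified; no failed queue entry
            }
        }
        for (Item i : items) {
            i.state = newState;
        }
        return "default";
    }
}
```

With this shape, a checked-out object becomes an ordinary conditional branch in the workflow instead of a Severe queue entry waiting to be restarted months later.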