We have a longstanding issue with the “server” engine not releasing memory. If we do not restart the Connect Server after large jobs, the next jobs are produced with blank pages. Support has looked into this, but there is no resolution at this point. Our standard practice is to reboot the VM/OS every day at 2:15pm so the next large overnight jobs won’t crash.
Our jobs keep getting larger and larger. I’m at the point where I need to split the incoming data files into smaller batches (40k records per batch; rough sketch below) and run each batch separately, restarting Connect Server between batches. This isn’t optimal; I shouldn’t have to “babysit” jobs that are meant to run automatically.
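The splitting itself is the easy part. A minimal sketch of what I mean, assuming one record per line (paths and file names here are just placeholders, not my real data):

```powershell
# Split a line-based data file into 40,000-record batch files.
# Assumes one record per line; header-row handling is omitted.
$batchSize = 40000
$batch = 0
$count = 0
$writer = $null

# Placeholder paths -- substitute your actual input/output locations.
[System.IO.Directory]::CreateDirectory('C:\Data\batches') | Out-Null

foreach ($line in [System.IO.File]::ReadLines('C:\Data\input.csv')) {
    if ($count % $batchSize -eq 0) {
        if ($writer) { $writer.Close() }
        $batch++
        $writer = [System.IO.StreamWriter]::new("C:\Data\batches\batch_$batch.csv")
    }
    $writer.WriteLine($line)
    $count++
}
if ($writer) { $writer.Close() }
```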
Until/unless this bug is resolved, though, I have to consider workarounds.
If I had a PowerShell script that restarted Connect Server, and executed that script via the “External Program” plugin, would that “crash” Workflow? The idea: as the final step of a Workflow Process, restart the Server, then verify it is running again (perhaps by having the PowerShell script return an exit code that Workflow can test) before looping back to capture the next batch of data.
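Something like this minimal sketch is what I have in mind. The service name is a guess on my part; check services.msc for the exact name of the Connect Server service on your machine:

```powershell
# Restart the Connect Server service and report success/failure via the
# exit code, so Workflow's External Program task can act on it.
# NOTE: $serviceName is an assumption -- verify the real service name
# in services.msc before using this.
$serviceName = 'OLConnect_Server'

try {
    Restart-Service -Name $serviceName -Force -ErrorAction Stop

    # Wait up to two minutes for the service to report Running again.
    $svc = Get-Service -Name $serviceName
    $svc.WaitForStatus('Running', (New-TimeSpan -Minutes 2))

    exit 0   # success: Workflow sees return code 0
}
catch {
    Write-Error $_
    exit 1   # failure: non-zero return code for Workflow to branch on
}
```

One caveat I can see: the Windows service may report Running before the engines are fully initialized, so a more robust check might keep polling the Server until it actually responds before releasing the next batch.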
What would be the impact on other running Workflow Processes? My guess is that if I restart Connect Server while a Connect task is running, that job would be killed, so I would need to ensure that my large job is the only Connect job running at the time.
What are the pros/cons of this approach?
Does anyone have a better suggestion?