Changing memory availability and improving efficiency in workflow

Morning,

For the last two nights I have been receiving an Out of Memory error on our timed bulk workflows. I convinced our Infrastructure department to increase server memory to 16GB, but I am still receiving it. The issue appears to be that the Workflow service runs out of memory before the server does.

So, question 1: Is there a way to increase the memory that PlanetPress Workflow can access, and how do I go about this? I was sure I remembered seeing it in the user guide, but now I cannot find it.

Question 2: Is it better to have many (20-30) small workflows, or 5-10 much longer ones? Does this affect memory usage, and would that offset the speed reduction of having a larger workflow?

We are using Workflow 8.6. Files are a mix of evening ‘bulk’ files (4x 500-1000 records) and daytime individual or small batches of files (1-40 records).

Workflow is still a 32-bit application, so it cannot use more than 4GB of RAM anyway. Your RAM increase wasn’t useless, though, because several modules in Workflow (all 32-bit) run in different address spaces, so having more RAM ensures that Workflow gets as close as possible to the maximum amount it can use.

That being said, smaller Workflow processes are more likely to perform faster individually, but they use more resources on the PC since each process runs as a separate thread. This is compounded further if you set your processes to be self-replicating. As far as the overall RAM used, though, it shouldn’t make much of a difference.

Given the numbers you quote (bulk = 4x1000 records, individual = 40 records), there is no reason why Workflow would run out of memory with such small files. Workflow routinely handles jobs with tens of thousands of records, so your files should certainly not put a strain on the system. I would therefore think there is something very specific about the way you process those files that causes the system to use up a lot of memory. Are you generating huge PDF files out of this data? Are you connecting to an external system like SharePoint to pull/push the files? When printing, have you looked at the size of the spool files to determine whether they are very large?

If you’re generating a lot of PDF files through Workflow (which is one of the most RAM and CPU-intensive tasks), you may want to manually set the number of PP-Alambic instances to a lower value than the current default (Preferences>Plug-in>Messenger). But other than that, there is little you can do to control the amount of memory being used by the application.

Thanks for the answer. Unfortunately, the majority of the evening work is pulling from SharePoint and then creating, changing, and moving PDF files.

There are around 6 SharePoint libraries with roughly 50 files each per day. Is there any way to help balance this load (the memory issue often pops up just after this has finished), or should I look for a better way to move the files in the evening (for example, a native SharePoint way to move the files to a watch folder)?

We’ve had reports before that connecting to SharePoint uses memory that isn’t returned to the memory pool as soon as the task has finished, so that may very well be part (or all) of your issue.

If you can find a way for SharePoint to export the nightly jobs so that Workflow can capture them directly - from a folder, for instance - then I think it would go a long way towards resolving your memory issue.
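As a starting point, here is a minimal sketch of what that export could look like: a small script, scheduled outside Workflow, that downloads each night's files from the SharePoint document libraries into a local watch folder via the Microsoft Graph API and then removes them from SharePoint. The tenant ID, client credentials, drive IDs, and watch-folder path below are all placeholders you would need to adapt; this is just one possible approach, not a documented PlanetPress feature.

```python
# Hypothetical sketch: pull nightly files from SharePoint document libraries
# into a local Workflow watch folder using the Microsoft Graph API.
# All IDs, secrets, and paths are placeholders - adapt to your own tenant.
import os
import requests

TENANT_ID = "your-tenant-id"                # placeholder
CLIENT_ID = "your-app-client-id"            # placeholder (app registration with Files.ReadWrite.All)
CLIENT_SECRET = "your-app-secret"           # placeholder
DRIVE_IDS = ["drive-id-1", "drive-id-2"]    # one ID per SharePoint document library
WATCH_FOLDER = r"C:\Workflow\WatchFolder"   # folder your Workflow process monitors

def get_token():
    """Request a client-credentials token for Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    data = {
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    }
    resp = requests.post(url, data=data)
    resp.raise_for_status()
    return resp.json()["access_token"]

def move_library_to_watch_folder(drive_id, headers):
    """Download every file in the library root, then delete it in SharePoint."""
    list_url = f"https://graph.microsoft.com/v1.0/drives/{drive_id}/root/children"
    items = requests.get(list_url, headers=headers)
    items.raise_for_status()
    for item in items.json().get("value", []):
        if "file" not in item:   # skip sub-folders
            continue
        content_url = f"https://graph.microsoft.com/v1.0/drives/{drive_id}/items/{item['id']}/content"
        content = requests.get(content_url, headers=headers)
        content.raise_for_status()
        with open(os.path.join(WATCH_FOLDER, item["name"]), "wb") as f:
            f.write(content.content)
        # Remove the source file so the same job is not captured twice.
        requests.delete(
            f"https://graph.microsoft.com/v1.0/drives/{drive_id}/items/{item['id']}",
            headers=headers,
        ).raise_for_status()

if __name__ == "__main__":
    headers = {"Authorization": f"Bearer {get_token()}"}
    for drive_id in DRIVE_IDS:
        move_library_to_watch_folder(drive_id, headers)
```

Workflow would then only need to pick the files up from the watch folder, keeping the SharePoint connection out of the bulk run entirely.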