Overall performance

Dear colleagues,

We are running performance tests with the following scenario:

1 - Production environment: PReS Connect running on a single 4-core (8-thread) CPU, 32 GB RAM, Windows Server 2019 (commercial license with no additional performance pack)

2 - Test environment: PReS Connect running on two 10-core (20-thread) CPUs (20 cores / 40 threads in total), 64 GB RAM, Windows Server 2019 (demo license)

We have tried several configurations of parallel jobs, engines, and allocated RAM on the test server, but we have seen no performance increase at all. With some configurations, the test environment was actually slower than the production environment.

We are wondering whether this could be caused by some limitation of the demo license. Is that the case?

We have read the performance guidelines and followed the advice as closely as we could understand it.

Any advice on this is much appreciated.

Best regards, Renato

You have to determine the bottleneck in your solution. Check the logs to find out which operation is the slowest; that will give you an indication of where you should focus your efforts.
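As a rough illustration, here is a minimal sketch of that kind of log triage in Python. The log format is hypothetical (an ISO timestamp followed by a task name on each line); adapt the parsing to whatever your actual logs contain.

```python
# Minimal sketch: rank pipeline steps by elapsed time, using a
# hypothetical log format ("<ISO timestamp> <task name>" per line).
from datetime import datetime

def slowest_steps(log_lines):
    """Return (seconds, task) pairs, slowest first."""
    entries = []
    for line in log_lines:
        stamp, task = line.split(" ", 1)
        entries.append((datetime.fromisoformat(stamp), task.strip()))
    durations = [
        ((t1 - t0).total_seconds(), task)
        for (t0, task), (t1, _) in zip(entries, entries[1:])
    ]
    return sorted(durations, reverse=True)

log = [
    "2024-01-01T10:00:00 Data mapping started",
    "2024-01-01T10:00:05 Content creation started",
    "2024-01-01T10:04:45 Job creation started",
    "2024-01-01T10:04:50 Output creation started",
]
for secs, task in slowest_steps(log):
    print(f"{secs:8.1f}s  {task}")
```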

It is likely that the most time-intensive task is Content Creation, which means you will probably want to revisit your templates and make sure they are optimized (for instance, as explained in this how-to article, making sure that your scripts target ID selectors instead of text selectors).
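As a loose analogy (this is deliberately not the Designer scripting API, just an illustration of why the lookup strategy matters): an ID selector behaves like a keyed lookup, whereas a text selector has to scan every text node in the template.

```python
# Illustrative analogy only, not the actual Designer scripting API:
# an ID selector behaves like a keyed lookup, while a text selector
# forces a scan of every text node in the template.
elements_by_id = {"custName": "<span id='custName'>", "custAddr": "<span id='custAddr'>"}
text_nodes = ["Dear @name@,", "Your address: @addr@"] + ["filler text"] * 10_000

# ID selector: constant-time, jumps straight to the element.
target = elements_by_id["custName"]

# Text selector: linear-time, inspects every node for the placeholder.
matches = [t for t in text_nodes if "@name@" in t]
print(target, len(matches))
```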

If you can’t find an area to focus on more specifically, report your findings back here and we may be able to provide some additional feedback.

I should add that trying to optimize a solution without knowing where the bottlenecks are can actually worsen the situation, because you might be changing settings that would otherwise be correct.

Hi Phil!
Thanks for your reply!

Yes, we have identified the bottlenecks and are working on the optimization. That’s clear.

What concerns me is the fact that increasing the number of physical CPU cores 2.5 times gave no overall performance increase at all.

Regarding workflow tasks, it is clear that we can assign more CPU cores to the merger, the weaver, and so on. But how does it work when it comes to workflow scripts? For instance, if you have 10 jobs running in parallel and all of them have time-consuming workflow scripts, would having more CPU cores be of any use?

It’s very rare that the Workflow processes are themselves the bottleneck: after all, Workflow simply executes a sequence of commands that are (mostly) handled by other applications/devices. But I can’t speak for your specific case because I don’t know what your time-consuming scripts do.

For instance, if you have a script that makes REST calls to a server in order to obtain additional data that must be embedded in your original data, adding more CPUs to your system will not have any impact on the network latency that the REST calls introduce into the process.
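Here is a minimal sketch of that effect; the endpoint is a placeholder and the timings are illustrative. The point is that the elapsed time is dominated by network round-trips, which extra cores do nothing to shorten.

```python
# Minimal sketch: per-record REST calls from a script. ENDPOINT is a
# placeholder; in real life this would be your data-enrichment service.
import time
import urllib.request

ENDPOINT = "https://example.com/api/enrich"  # hypothetical service

def enrich_record(record_id: int) -> float:
    """Make one REST call and return the time spent waiting on it."""
    start = time.perf_counter()
    try:
        urllib.request.urlopen(f"{ENDPOINT}?id={record_id}", timeout=5).read()
    except OSError:
        pass  # placeholder endpoint; only the elapsed time matters here
    return time.perf_counter() - start

# 20 records at ~50 ms of round-trip latency each is ~1 s spent idle.
# Doubling the core count leaves that second untouched; only concurrency
# or server-side batching shortens the wall-clock time.
waited = sum(enrich_record(i) for i in range(20))
print(f"time spent waiting on the network: {waited:.1f}s")
```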

Or if your scripts are updating a database with large queries, increasing the number of parallel processes may actually end up slowing down the entire solution because the Database Engine may be overwhelmed by all the simultaneous requests.
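For example, using SQLite purely as a stand-in for whatever database engine you run: one batched statement in a single transaction typically beats many parallel writers each paying their own commit and locking costs.

```python
# Minimal sketch, with SQLite standing in for your database engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, status TEXT)")
rows = [(i, "done") for i in range(10_000)]

# Batched: one transaction, one statement, one set of locks.
with conn:
    conn.executemany("INSERT INTO jobs VALUES (?, ?)", rows)

# The anti-pattern would be N parallel processes each inserting row by
# row in separate transactions: every insert pays commit and locking
# costs, and the writers end up queueing behind one another.
print(conn.execute("SELECT COUNT(*) FROM jobs").fetchone()[0])
```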

If your processes are writing huge files to disk, the same is true: you may pound your hard drive with so much data that it has trouble keeping up with the volume of I/O requests. I'm sure you have experienced this kind of thing with your anti-virus software at some point: your entire machine seems to slow down for a few minutes, mostly because of disk activity.
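Before adding more parallel writers, it is worth measuring what your job disk can actually sustain. A rough sketch (the file location and sizes are placeholders):

```python
# Minimal sketch: sequential write throughput of the current directory's
# disk. If this number is low, adding writers only deepens the I/O queue.
import os
import tempfile
import time

CHUNK = b"\0" * (1024 * 1024)  # 1 MiB
TOTAL_MB = 256                 # modest size so the test stays quick

with tempfile.NamedTemporaryFile(dir=".", delete=False) as f:
    start = time.perf_counter()
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())       # make sure the data actually hit the disk
    elapsed = time.perf_counter() - start
    path = f.name

os.remove(path)
print(f"sequential write: {TOTAL_MB / elapsed:.0f} MB/s")
```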

In the case of Workflow, which uses and generates a large number of files, investing in high-speed storage (SSDs, or even better, NVMe drives) is likely to have a lot more impact than increasing the number of processes. In the case of the Merge Engines, which require a fair amount of computing power to merge data onto templates, more processors mean you can run more merge engines concurrently, thereby providing better throughput… up to a limit, because at some point it's the database engine that may start to feel starved for resources.
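The "up to a limit" part is easy to demonstrate: for CPU-bound work on an N-core machine, throughput stops scaling once the worker count passes N, and extra workers only add scheduling overhead. A small sketch:

```python
# Minimal sketch: time the same CPU-bound workload with more and more
# worker processes. Past the physical core count, the elapsed time
# stops improving (and can get worse).
import time
from concurrent.futures import ProcessPoolExecutor

def busy(n: int) -> int:
    # Pure CPU work, no I/O.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    for workers in (1, 2, 4, 8, 16):
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(busy, [200_000] * 32))
        print(f"{workers:2d} workers: {time.perf_counter() - start:.2f}s")
```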

Self-replicating processes in Workflow are best suited for short, on-demand processes (delivering web pages or one-page print jobs, for instance). For large jobs that generate thousands of PDF pages, you should not go crazy on the number of engines that run in parallel, because they will all be competing for system resources, which can ultimately have the opposite effect to what you were expecting.

Sorry to speak in such broad, generic terms, but each situation is different, so it's almost impossible to give you specific pointers unless I were to analyze your entire solution's architecture.