Faster to map a huge dataset or map individual statements and merge PDFs?

The data is a set of about 2 million records which form around 65,000 statements. The goal is a single large PDF containing all statements.
Is it more efficient/faster to map the large dataset and directly create one large PDF containing all the statements/invoices, or to first split the dataset into individual statements in Workflow, map the data for each statement, create a PDF per statement, and then merge the PDFs into the single large PDF?
My first thought was that the first way would be faster, but then I started thinking the software might parallelize better with the second way, making it faster… maybe?

If the order of the statements in the final big PDF is not a concern, you could try setting it up with the Workflow self-replicating process property, provided that your server has a lot of resources.
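Not PlanetPress-specific, but just to illustrate the shape of that approach: split the 65,000 statements into chunks, render the chunks concurrently, and keep the partial PDFs for a later merge. In Workflow the parallelism comes from the self-replicating process itself; the `render_chunk` function below is a hypothetical stand-in for whatever step actually produces each partial PDF.

```python
# Generic sketch of "split, render chunks in parallel, merge later".
# render_chunk() is a hypothetical placeholder, not a PlanetPress API.
from concurrent.futures import ProcessPoolExecutor

def render_chunk(chunk_id, statements):
    out_path = f"chunk_{chunk_id:04d}.pdf"
    # ... map the data and generate the partial PDF for this slice ...
    return out_path

def split(statements, chunk_size=1000):
    # Yield (chunk_id, slice) pairs of at most chunk_size statements.
    for i in range(0, len(statements), chunk_size):
        yield i // chunk_size, statements[i:i + chunk_size]

def render_all(statements, workers=4):
    # Run several renderers at once; returns the list of partial PDF paths.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(render_chunk, cid, chunk)
                   for cid, chunk in split(statements)]
        return [f.result() for f in futures]
```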

Merging the PDFs at the end will require a script using the Alambic API: with 65,000 statements, attempting it with the Merge PDF plugin or the Send to Folder concatenation option would probably crash your system or take days.
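I don't have an Alambic example handy, but the principle of the merge script is the same in any PDF library: append the partial files to one output incrementally instead of trying to concatenate everything in a single plugin call. A rough sketch using Python's pypdf, purely to show the idea (the real Workflow script would use the Alambic API instead):

```python
# Incremental merge sketch using pypdf (illustrative only; not the Alambic API).
import glob
from pypdf import PdfWriter

def merge_pdfs(pdf_paths, output_path):
    writer = PdfWriter()
    for path in pdf_paths:
        # append() copies all pages of the source file into the writer.
        writer.append(path)
    with open(output_path, "wb") as handle:
        writer.write(handle)

# Example: merge the partial chunk PDFs in name order into one big file.
merge_pdfs(sorted(glob.glob("chunk_*.pdf")), "all_statements.pdf")
```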

It will also require you to properly set up your Connect server to ensure multi-threading.

But your best course of action would be to test both scenarios.

As for me, I would go with one big query and have it done in one shot. Then again, if you understand multi-threading properly, it could be an interesting challenge to test the other approach just to see whether much speed can be gained.