Based on the provided screenshot and error, I assume the reason no error occurred the first time the Execute Data Mapping Workflow plugin ran, but an error occurred the second and third time, is that the split CSV file contained a header the first time but not the second and third time, while the applied DataMapper configuration expects this header.
Looking back at the whole post, you might be right!
Good catch, Marten!
If your hypothesis turns out to be true, then it’s easy to fix: either remove the entire first line from the initial data file (that will require some changes in the DM Config), or store the first line from the data file in a variable and prepend it to all the chunks after the first one, using the Add/Remove text task.
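In case it helps to see the second option spelled out as plain logic, here is a standalone Python sketch of the idea (this is not Workflow task configuration, and the file names are made up for the example):

```python
# Sketch of option 2: keep the header line aside and prepend it to every
# chunk, so each chunk stays a valid CSV for the DataMapper.
CHUNK_SIZE = 1000

with open("contacts.csv", encoding="utf-8") as f:
    header = f.readline()      # first line = field names
    records = f.readlines()    # remaining data lines

for i in range(0, len(records), CHUNK_SIZE):
    with open(f"chunk_{i // CHUNK_SIZE + 1}.csv", "w", encoding="utf-8") as out:
        out.write(header)                        # prepend the stored header
        out.writelines(records[i:i + CHUNK_SIZE])
```

In Workflow terms, the stored header would live in a JobInfo variable and the prepending would be done with the Add/Remove text task.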
Thank you for all the info. Here is a screenshot of the DataMapper. Concerning the CSV file, it is not split; it is in the Workflow that I want the output PDF to be split into batches of 1000 pages. Currently, with a CSV file of 2000 contacts, the Workflow generates 2 PDFs of 1000 pages each, but they contain the same contacts, from 1 to 1000. I never get to the end of the address file (from 1001 to 2000), and I do not understand why. Thanks for your help.
I see that the option First row contains field names (FR: La première ligne contient le nom des champs) is checked in the Settings pane of the DataMapper configuration, which confirms the assumption described in this reply. Therefore I would recommend applying one of the solutions described in Phil’s reply.
I placed the Add/Remove Text task there, but I still get the error message. I’m new to the software and not yet completely comfortable with Workflow. Thanks
Good morning,
We managed to store the first line of the address file in a variable. However, while I now get several output files, they are all identical; I don’t get the rest of the records. Can you tell me which settings to change? Thanks
You were very close. Here’s what you need:
- Folder Capture input task
- **Set JobInfo: store the first line of the data file in one of the jobInfos**
- **Add/Remove text: remove first line from the file**
- Splitter
- Set Job Info (for your other variables)
- Add/Remove text: prepend the header line to the file
- … rest of process
The difference is the two tasks in bold: you need to store the first line of the data file in a JobInfo before the splitter runs. And immediately after that, you need to remove it from the file, otherwise your first chunk of data will have it twice (since you are adding that line to each chunk inside the splitter).
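If you want to double-check the result outside of Workflow, here is a small Python sketch (the file names are hypothetical) that verifies each chunk contains the header exactly once:

```python
import glob

# Hypothetical file names; adjust to wherever your chunks are written.
with open("contacts.csv", encoding="utf-8") as f:
    header = f.readline()

for path in sorted(glob.glob("chunk_*.csv")):
    with open(path, encoding="utf-8") as f:
        lines = f.readlines()
    # Each chunk should start with the header and contain it only once;
    # a duplicate means the header wasn't removed before the splitter ran.
    assert lines and lines[0] == header, f"{path}: missing header"
    assert lines.count(header) == 1, f"{path}: header appears more than once"
    print(path, len(lines) - 1, "records")
```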