I’ve done some digging through other posts here with similar errors, but I haven’t been able to find one that matches mine. Fair warning: I’m still a novice with this new PlanetPress software, and I’m in the process of rebuilding the design documents we had into the new template format. So perhaps the issue here is obvious and I just can’t see it.
I’ve got a design template and an associated data mapper configuration that I’ve built, and there are no errors in the Connect Designer software; the data mapper config uses a CSV data sample file. I’ve sent the data mapper config and template to the Workflow.
In the Workflow, I have a relatively simple process: a Folder Capture on an empty folder, then an Execute Data Mapping task that uses my data mapper configuration, and then a Create Preview PDF action (not sure if I need both of these steps or not) that uses my data mapper configuration and template. Both steps result in basically the same error, as follows:
[0002] W3001 : Error while executing plugin: HTTP/1.1 500 There was an error running the data mapping process caused by ApplicationException: Error executing DM configuration: Error occurred [Record 1, Step Extraction, Field BHBATSEQ]: An error occurred while trying to find the document column [BHBATSEQ] (DME000062) (DME000216) (DM1000031) (SRV000012)
[0003] W3001 : Error while executing plugin: HTTP/1.1 500 There was an error running the content creation process caused by ApplicationException: Error executing DM configuration: Error occurred [Record 1, Step Extraction, Field BHBATSEQ]: An error occurred while trying to find the document column [BHBATSEQ] (DME000062) (DME000216) (DM1000031) (SRV000022)
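(Aside: both DME errors boil down to the extraction step not finding a BHBATSEQ column in the data the DataMapper received. A quick, PlanetPress-agnostic way to confirm that a given CSV actually contains that column, sketched in plain Python with a made-up file path:)

```python
import csv

# Hypothetical path; substitute whatever file the process is (or should be) capturing.
path = r"C:\capture\batch.csv"

with open(path, newline="") as f:
    header = next(csv.reader(f))  # read only the header row

# The extraction step in the error above is looking for this column.
if "BHBATSEQ" in header:
    print("BHBATSEQ found in header:", header)
else:
    print("BHBATSEQ missing; columns are:", header)
```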
From what you’re saying, it seems you’re not passing the CSV file to the DataMapper. By default, the Execute data mapping task uses the current data file in the process, so you have to make sure that’s what you capture in your Folder Input task.
Try setting your CSV file as the default sample data file in your Workflow process, then run through it step-by-step. If that works, then it means all you have to do to trigger the process is to drop that CSV file into the monitored folder.
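For example, a tiny CSV along these lines, dropped into the monitored folder, would then flow through the process as the current data file (only the BHBATSEQ column name comes from your error message; the other columns and values here are made up, and your real file needs whatever columns your data mapper config expects):

```
BHBATSEQ,CUSTNAME,AMOUNT
1,Alice Smith,100.00
2,Bob Jones,250.50
```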
Thanks for the tip! Okay, this does work, but after making a slight change to the sample data file (which is otherwise identical to the CSV file in the Folder Capture folder), I can see that the Create Preview PDF task is generating its output from the sample data file instead of the file that should be captured by Folder Capture. Is there something I’m missing?
In step-by-step mode, you are never actually capturing the file from your initial Input task, because that initial Input task never runs. Instead, the process uses the sample data file.
But once the process runs as a service, the sample data file is no longer used; it is replaced by whatever file the Input task captures.
Ah okay, that wasn’t clear to me. In that case, is there a way to capture/use more than just the one sample data file in a process in debug (step-by-step) mode?
You can always duplicate your initial input task. In debug mode, it’s only the very first step that gets bypassed, so nothing stops you from duplicating it as your second step and then running step by step. Once the process gets to the end, it will loop back to that second step and pick up additional files.
Warning: be sure to remove that duplicated step before you send your process to the Workflow service.
So say I wanted to create separate CSV files with database calls in a process, and then send those files to a specific location: could I then use a Folder Capture within the same process to pick those files up as the active job file to work on? Or is there a more graceful way to handle separate files in one Workflow process?
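For example, something like this (purely illustrative, standalone Python with a made-up SQLite database and table, just to show the kind of per-batch CSV files I mean):

```python
import csv
import os
import sqlite3

# Hypothetical database and table; in reality this would be whatever
# database the process queries.
conn = sqlite3.connect("orders.db")
rows = conn.execute("SELECT batch_id, customer, amount FROM orders").fetchall()
conn.close()

# Group rows by batch so each batch becomes its own CSV file.
batches = {}
for batch_id, customer, amount in rows:
    batches.setdefault(batch_id, []).append((customer, amount))

# Write one CSV per batch into the folder a secondary Folder Capture would watch.
os.makedirs("capture", exist_ok=True)
for batch_id, batch_rows in batches.items():
    with open(f"capture/batch_{batch_id}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["CUSTOMER", "AMOUNT"])
        writer.writerows(batch_rows)
```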
Also, would any Folder Capture tasks after the initial one also trigger the workflow process to start (assuming they’re part of the same process), or does only the initial Folder Capture have that triggering behavior?
The process is triggered by the initial Input task only.
But within your process, you can have other, secondary input tasks. Each of these acts as a loop that processes all the files captured by that secondary task before returning control to the main process, which will itself keep looping as many times as the number of files it initially captured.
You can nest these input tasks to any depth you want, although if you end up with many levels of nesting, you should probably revisit how you designed the process in the first place, because it can become difficult to read when you come back to it 6 months from now.
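Conceptually, the looping behaves something like this sketch (plain Python standing in for Workflow’s behavior; the file names and the secondary_capture helper are made up, not a real API):

```python
# Rough model of how nested input tasks loop. Each list stands in for
# the files an input task happens to capture; none of this is a real API.
initial_files = ["a.csv", "b.csv"]  # captured by the initial Input task

def secondary_capture():
    # A secondary input task grabs whatever is in its folder at that moment.
    return ["x.csv", "y.csv", "z.csv"]

for job_file in initial_files:            # main process: one iteration per captured file
    print("main process working on", job_file)
    for sub_file in secondary_capture():  # secondary input task: inner loop
        print("  inner loop working on", sub_file)
    # control returns to the main process here, which moves on to its next file
```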