Simple workflow from start to finish not working

Hi,

Sorry to spam this forum so much, but I would like to get a simple workflow working: I send a JSON body via an API to my Automate workflow, it gets processed correctly, and I end up with a bunch of invoices.
Here is a little background. In my OL Connect Workflow the invoices come in separately, one by one. The big advantage of this is that if something goes wrong with one or two, I can take them out of the batch and process them manually. Once the batch of invoices is complete I merge them into a single XML (as our system still outputs XML), convert it to JSON, and feed it to the all-in-one process as a single file, as this is a lot faster than processing the files one by one. The only downside is that I afterwards need to split the invoices again, so I have an output preset for that and also a job preset for some metadata.
For my OL Connect Workflow this works fine, but when I use the all-in-one approach I run into a lot of issues. When I leave out the presets I can get a single big PDF file in a zip file, but I want the invoices split. When I add the output preset I do get the invoices, but they are stored on my server and I don’t see them locally.
Other times I get a Server Error, but when I look at the server log I see no errors. Everything is processed and ends with a summary for the Job Set.
The location I use for the server log is: C:\Users\<user account>\Connect\logs\Server.

Here is a screenshot with as much detail as I could fit on it:

It looks a little different from the all-in-one, but overall I do not see an error at all.
To me Automate looks really cool and it is a great tool to convert my workflows to; using APIs and creating endpoints is so much easier than in OL Connect Workflow. Even so, the OL Connect nodes unfortunately have a pretty steep learning curve.

And here is a screenshot for the paginated output node (and the Output Preset is a dropdown now, so that works):

I sometimes get a server error when attaching something after the node, so I will take it one node at a time and add anything extra later, once everything is working. So far all the nodes are working except for the pdf creation node.
When I set the Output Options to “Managed by Output Preset” it works fine without any errors, except that the zip file ends up on the server (which is not the machine where Automate is running), so how do I then get my invoices?

Hi dvdmeer,

Great to see you’re diving into OL Connect Automate! Feel free to ask any questions here on the forum.

Your observation is correct: by default, the output is stored on the OL Connect Server. The system is designed in such a way that OL Connect Automate can run on remote systems or even within Docker containers. In those scenarios it does not have direct access to the file system of the OL Connect Server. The output can either be directed to a location accessible by the OL Connect Server using an output preset, or downloaded by the OL Connect Automate flow and stored locally on the respective machine.

Currently, when downloading the output it comes as either a zip file containing the separated files or an array of file paths that you’ll need to process in your flow. We recognize that setting this up can be a bit cumbersome, and we’re working on a solution: a “Send to Folder” node, planned for version 0.9.19 (we’re finalizing 0.9.18 right now). This node will streamline the process by downloading the files and storing them directly on the server running OL Connect Automate.
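For the zip variant, a minimal sketch of how you could store the download locally is a small function node in front of the core “write file” node. The assumptions here are that the previous node leaves the zip as a Buffer in msg.payload and that the local folder used below exists on the Automate machine; adjust both to your own setup.

// Sketch of a function node placed before a core "write file" node.
// Assumption: the download step left the zip as a Buffer in msg.payload;
// adjust the property name to match your own flow.
const outputDir = "C:\\Automate\\Output";                   // hypothetical local folder
msg.filename = outputDir + "\\invoices_" + Date.now() + ".zip";
return msg;  // the "write file" node writes msg.payload to msg.filename

For the array-of-paths variant, keep in mind that those paths refer to the OL Connect Server’s file system, so, as mentioned above, they are only directly usable if that location is also reachable from the machine running OL Connect Automate.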

In the meantime, I’ve put together a tutorial explaining the various techniques for downloading output. While this is how things work today, the process will be simplified in a future release.

Hope this helps,

Erik

I’ve focussed on your first paragraph:
I would like to get a simple workflow working where I send a JSON body via an API to my Automate workflow, it then gets processed correctly and I end up with a bunch of invoices.

It’s not entirely clear whether you’re receiving the invoice data in JSON or XML. If it’s JSON, you might want to try experimenting with the following sample flows.

I’ve created example flows to illustrate how to process JSON data in the request body.
json-endpoint-flow.zip (2.4 KB)

Hope this helps,

Erik

Sample data
Flow 1: Emulated data submission (fka load sample data ;))
This is how I typically organize loading sample data, whether it’s emulated submitted data or loading sample files from disk. In this scenario, the inject node contains sample JSON data, which is then sent to the endpoint flow using an HTTP request node. This is a neat way to test your setup.

Alternatively, you could use tools like Postman to submit the data, but I find this method easier because you don’t need to switch between different applications.

In my case, I’ve set it up with two inject nodes, each loading a different sample data set.
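If you want to build something similar, the inject (or a small function) node only needs to put a JSON object in msg.payload before it reaches the HTTP request node. A minimal sketch, with invoice fields invented purely for illustration:

// Emulated submission (a function node standing in for an inject node).
// The invoice fields below are made up for this example; use your own data model.
msg.payload = {
    invoices: [
        { invoiceno: "1234567890", customer: "ACME", amount: 125.50 },
        { invoiceno: "1234567891", customer: "Globex", amount: 89.95 }
    ]
};
// The HTTP request node pointing at the endpoint flow sends this as a JSON body.
msg.headers = { "content-type": "application/json" };
return msg;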

With 0.9.18 (not published yet)
Flow 2: Endpoint for content creation
This flow begins with the HTTP In node, followed by the Data Mapping node. In this case, the data file is set to JSON in msg.payload (a new feature in 0.9.18). There is no need to store the JSON data on disk; it is passed in memory and automatically uploaded to the OL Connect Server for processing as a JSON file.
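If you want the endpoint to fail fast on bad input, you could optionally place a small function node between the HTTP In and Data Mapping nodes. This is just a sketch; you would still wire an HTTP Response node for the error path:

// Optional guard between the HTTP In and Data Mapping nodes.
// Assumption: with 0.9.18 the Data Mapping node reads the JSON object from msg.payload.
if (!msg.payload || typeof msg.payload !== "object") {
    node.error("Request body is not valid JSON", msg);  // picked up by a catch node
    return null;                                        // stop the flow here
}
return msg;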

Next, I chained the paginated content, paginated job (passing data fields for output file names), and paginated output nodes to create a traditional print production chain.

With 0.9.17 or earlier
Flow 2: Endpoint for content creation
In 0.9.17 or earlier things require a bit more magic, which explains the change in the upcoming 0.9.18 release. In this approach, we prepare the JSON data file (as a Buffer in memory) and upload it directly to the file store. Alternatively, you can store files on disk, but that approach would require you to implement a cleanup strategy to manage storage.

As before, the flow starts with the HTTP In node, followed by a function node. The function node generates a unique file name (based on the current timestamp) stored in msg.basename and creates a Buffer of the submitted data.

// Use the current timestamp (in milliseconds) as a quasi-unique file name.
const currentMilliseconds = Date.now();
msg.basename = currentMilliseconds.toString();

// Convert the submitted JSON object into a Buffer for the file store upload node.
const jsonDataBuffer = Buffer.from(JSON.stringify(msg.payload));
msg.payload = jsonDataBuffer;
return msg;

Next, the file store upload node (configured to upload the Buffer in msg.payload) uploads the JSON ‘data file’ to the OL Connect file store and returns the file ID in msg.managedFileId.

Following this, the standard chain processes the uploaded JSON data file using msg.managedFileId in the paginated content node.
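If you want the upload step to fail loudly, an optional function node between the file store upload and the paginated content node could look like this (just a sketch):

// Optional sanity check after the file store upload node.
if (!msg.managedFileId) {
    node.error("File store upload did not return a managed file ID", msg);
    return null;
}
return msg;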

It’s not the most elegant flow, but as mentioned, this process will be simplified with the upcoming 0.9.18 release.

Thank you for your amazing and detailed explanation. I created my new example workflow based on yours, but I am still getting a server error.

Note: my input data is JSON, which contains the invoice information.

The server log does not reflect any error:

I assume something in either the job preset or the output preset is causing this issue, but I have no idea what is happening: the logs do not show any issues, and still it fails.

Could you add a catch node (i.e. on error), wire a debug node to this and set it to output the complete msg object? Perhaps this reveals more information in the debug panel.
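If the raw msg object is hard to read, you could optionally put a small function node between the catch and debug nodes to summarize what the catch node reports; a rough sketch:

// Optional summary between the catch and debug nodes.
// msg.error.message and msg.error.source are set by the catch node itself.
node.warn({
    failedNode: msg.error && msg.error.source ? (msg.error.source.name || msg.error.source.id) : "unknown",
    errorMessage: msg.error ? msg.error.message : "no error details"
});
return msg;  // still pass the complete msg on to the debug node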

Unfortunately this does not give any output that I can work with:

Is there any info in the Automate log file? The default location for this file is:
C:\ProgramData\Objectif Lune\OL Connect Automate\olca-node-red.log

This gives me a little bit more info, but I have no idea what it means:

[2025-08-11T15:17:46.242] [INFO ] [olcnr-paginated-output:pdf creation] 252251272 200 OK text/plain 4 bytes [859d79669e406edf]
[2025-08-11T15:17:46.242] [DEBUG] [olcnr-paginated-output:pdf creation] 252251272 done [859d79669e406edf]
[2025-08-11T15:17:46.243] [INFO ] [olcnr-paginated-output:pdf creation] 252251273 252251274 POST http://ws16nl40191:9340/rest/serverengine/workflow/outputcreation/getManagedResult/43742d50-8277-41aa-95a0-9f492f936a6c  [859d79669e406edf]
[2025-08-11T15:17:46.246] [INFO ] [olcnr-paginated-output:pdf creation] 252251273 252251274 400 Bad Request application/json 226 bytes [859d79669e406edf]
[2025-08-11T15:17:46.247] [DEBUG] [olcnr-paginated-output:pdf creation] 252251273 252251274 {"error":{"status":400,"message":"The result of the Output Creation requ ... 3127454276029765304\\Main_1234567890\\1234567890_0.pdf","parameter":""}} [859d79669e406edf]
[2025-08-11T15:17:46.247] [ERROR] [olcnr-paginated-output:pdf creation] Internal Server Error, please check the server log [859d79669e406edf]
[2025-08-11T15:17:46.248] [DEBUG] [olcnr-paginated-output:pdf creation] ServerStatusCodeNotExpected { name: 'ServerStatusCodeNotExpected', serverMessage: '', statusCode: 400, body: '{"error":{"status":400,"message":"The result of the Output Creation request was not a managed entity: C:\\\\Users\\\\SA_Planet_DTA\\\\Connect\\\\filestore\\\\4508.13127454276029765304\\\\Main_1234567890\\\\1234567890_0.pdf","parameter":""}}' } [859d79669e406edf]

Interesting, haven’t seen that one before :s. Could you check the contents of that file?

Hi Erik,

What content do you mean? There is no resulting PDF created (it probably fails before that). The JSON input is correct, otherwise it would already have failed in the previous steps. The job preset and output preset work in production, but when I leave the job preset out, use only an output preset, and keep the path as determined by the output preset, things appear to work fine. So there must be something there, but I have no idea what exactly.
I could send all the resources that I use but I would rather not do that in a public forum post.

Could the output preset rely on metadata set in a job preset, for example for the names of the output files? Even then the error logging would be a bit weak. Could you check the Weaver logs?

The settings for the output preset:
First screen: Separation is checked
Second screen: Job Output Mask: Main_${segment.metadata._meta_GUID}${segment.metadata.meta_GUID}${segment.metadata._meta_invoiceno}.pdf
Job Output Folder: D:\Data\Output
Third screen: Separation: Job Segment
Fourth screen: Metadata / Title: ${segment.metadata._meta_invoiceno}

Job Preset:
First screen: Use Grouping is checked, Include meta data
Second screen: Job Grouping Fields:
Field Names: _meta_invoiceno, _meta_GUID, olTempdir
Third screen: Job Segment Tags:
_meta_invoiceno, Field, _meta_invoiceno
_meta_GUID, Field, _meta_GUID
olTempdir, Field, olTempdir

There are no relevant logs for the Weaver engine, Data mapper engine, Designer, and Merge Engine.
The server log does not mention anything new that I had not already sent. Only informative messages and nothing close to an error.

Would it be possible to share the resources so I can run some tests locally? Ideally with some dummy data. Send me a DM for the details.