HTTP Client Input plugin fails to process the next item correctly after a prior item triggers a 404 error during PDF download.

We’re encountering an issue in our OL Connect Workflow (2024.2.1.7137) using the HTTP Client Input plugin. When a request to download a PDF fails (e.g., due to a 404 error), the very next job passed to the same subprocess does not process correctly. The plugin simply logs:

DEBUG: [plugin] HTTP Client Input found no data to capture

…then skips ahead to the next item in the batch.


Workflow Context

We’re relatively new to OL Connect Workflow, so this might be something we’re not handling correctly — but here’s how our setup works:

  • A Folder Capture grabs a merged XML file.
  • We use an XML Splitter to break the batch into individual order records.
  • Each piece is routed to a subprocess (DownloadPDFs) via Send to Process.
  • In DownloadPDFs, we use Set Job Infos and Variables to extract the PDF URL (into %1) and other metadata.
  • Then HTTP Client Input attempts to download the PDF.

When one item (correctly) fails due to a missing or broken URL (404 error), the very next item, which has a good URL, doesn’t even try. It immediately logs that it “found no data to capture.”


Log Snippet – showing failed job followed by silent skip:

ERROR: 09:23:53.559 [0006] 400: ERROR: Failed to download PDF from https://images.printable.com/..._press_ERROR.pdf
INFO  : 09:23:53.559 [0006] Store ID "400" in variable "%7"

... then the next job ...

DEBUG: 09:23:53.602 [0006] Starting plugin HTTP Client Input - 09:23:53
DEBUG: 09:23:53.602 [0006] Plugin HTTP Client Input found no data to capture - 09:23:53

There is no actual error on the second item — it just silently skips.


Question

Has anyone run into this behavior?

  • Is this a known limitation or bug in the HTTP Client Input plugin?
  • Is there something we need to do to reset job info or clear error states between jobs?
  • Or are we misunderstanding how error handling behaves between subprocess calls?

Any help or ideas appreciated — thanks in advance!

To help you properly, we would need to see the complete configuration and reproduce the issue. I suggest you open a technical support ticket through our website.

@brassabeck,

You may want to double-check how your data value is being assigned to %1 in the Set Job Infos and Variables task. If you are using the %c variable to read the contents of the file received from your main process, the plugin may include line-break characters in the value, which would make the URL invalid.

Try using a data selection instead, making sure to trim the results so as to eliminate any extra blank characters.
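As a quick sanity check, you could also drop a small Run Script task just before the HTTP Client Input to trim %1 explicitly; something along these lines should work (rough sketch, adjust the job info number to your own setup):

// Rough sketch: strip any stray CR/LF or spaces from %1 before the download
var url = Watch.GetJobInfo(1);
url = url.replace(/^\s+|\s+$/g, "");   // JScript has no String.trim(), so use a regex
Watch.SetJobInfo(1, url);
Watch.log("URL after trimming: " + url, 3);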

Thanks for the suggestion! We definitely see how line breaks or whitespace issues could cause problems if the value for %1 was coming directly from %c or a raw data file. However, that’s not what we’re doing here.

To clarify: we’re not using %c, and we’re not reading from the raw data stream. The value of %1 is being set using a Set Job Infos and Variables plugin, with an explicit XPath-based xmlget() call, like this:

xmlget('/FulfillmentOrderBatch[1]/FulfillmentOrder[1]/Order[1]/orderDetails[1]/orderDetail[1]/Product[1]/ContentFile[1]', Value, KeepCase, NoTrim)

So the value in %1 is being pulled directly from the XML element <ContentFile>, and we’re already using the NoTrim and KeepCase options to ensure it’s a clean, unmodified string.
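For context, each record that reaches the subprocess follows this general shape (heavily simplified here, with a placeholder file name in the URL):

<FulfillmentOrderBatch>
  <FulfillmentOrder>
    <Order>
      <orderDetails>
        <orderDetail>
          <Product>
            <ContentFile>https://images.printable.com/.../file.pdf</ContentFile>
          </Product>
        </orderDetail>
      </orderDetails>
    </Order>
  </FulfillmentOrder>
</FulfillmentOrderBatch>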

Why this doesn’t seem to be the issue:

  • The URL in the failed request is parsed and logged correctly, and the 404 error is expected in those cases — the plugin does reach out to the correct (though missing) file.
  • The next job, which contains a fully valid and downloadable URL, gets its %1 set and logged correctly — the value is present and clean in the Logger plugin output — but the HTTP Client Input doesn’t even attempt to use it. It logs “no data to capture” and skips the job.
  • We’ve tested this thoroughly by reordering the XML data so different jobs follow the one that’s known to fail. Each time, the job immediately following the expected error is skipped with the same “no data to capture” message — even though its data is perfectly valid.
  • The job after that (i.e., two after the error) processes just fine, confirming that it’s not a persistent error or data issue — it’s something transient and isolated to the job right after the 404.

So the issue doesn’t appear to be with how %1 is populated — it’s behaving as expected. It’s specifically the HTTP Client Input plugin’s behavior immediately after a failed request that’s concerning.

Happy to post more logs or a fuller sample of the XML if that helps diagnose further. Thanks again!

@brassabeck: it was worth testing. On my system, I wasn’t able to replicate the issue unless the data selection itself was invalid. With valid selections (even when the URL target is ultimately invalid), I am not encountering the problem.

So that seems to indicate the issue is not necessarily with the HTTP Client Input (or at least, not consistently so). I would therefore recommend you follow @jchamel’s suggestion and open a ticket with the Support team. Please report back here once they’ve figured out what the issue is, to help the rest of the community.

Thanks @Philippe_F. I’ve submitted a support ticket with the same detailed info and some workflow screenshots for their review. Once I hear back with any resolution or insight, I’ll be sure to update this thread to help others who might run into the same behavior. Appreciate your help!

Update from OL Support – HTTP Client Input Behavior Confirmed

Just wanted to circle back and share what we learned after working with Objectif Lune Support on this.

They confirmed that the issue we encountered — where a job following a failed HTTP request silently skips — is a bug in the HTTP Client Input plugin. Specifically, when a request returns an error (e.g., 404), the plugin doesn’t reset properly, and the next job passed to it ends up being ignored, logging only:

DEBUG: HTTP Client Input found no data to capture

They were able to reproduce the behavior on their end and have escalated it to the development team for a fix in a future release.

Interim Workaround

Support provided a workaround using JavaScript in a Run Script task to handle the PDF download manually. This bypasses the plugin and avoids the issue:

// Download the PDF directly, bypassing the HTTP Client Input plugin
var xhttp = new ActiveXObject("Microsoft.XMLHTTP");
var url = Watch.ExpandString('%1');   // PDF URL set earlier by Set Job Infos and Variables
xhttp.open("GET", url, false);        // synchronous request
xhttp.send();
Watch.SetJobInfo(7, xhttp.status);    // keep the HTTP status code in %7
Watch.log(Watch.GetJobInfo(7), 3);
if (xhttp.status != 200) {            // any status other than 200 is treated as a failed download
  Watch.SetJobInfo(6, 'ERROR: Failed to download PDF from ' + Watch.GetJobInfo(1) + ' for Line ' + Watch.GetJobInfo(2) + ' -');
}

We’re testing this in place now and it seems to be working well so far.
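One note if anyone adapts the script above: as posted it only records the HTTP status (%7) and builds the error message (%6), it doesn’t write the downloaded PDF anywhere. If the response body also needs to end up in the job file (as it would with the HTTP Client Input), something along these lines can be added right after xhttp.send(); this is an untested sketch, so adjust as needed:

// Untested sketch: save the downloaded PDF as the current job file on success
if (xhttp.status == 200) {
  var stream = new ActiveXObject("ADODB.Stream");
  stream.Type = 1;                              // binary
  stream.Open();
  stream.Write(xhttp.responseBody);             // raw bytes returned by Microsoft.XMLHTTP
  stream.SaveToFile(Watch.GetJobFileName(), 2); // 2 = overwrite if the file exists
  stream.Close();
}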

Thanks

Appreciate everyone who chimed in and helped troubleshoot this. If anyone else runs into this, hopefully this helps save some time. We’ll also update again once the official fix is released.
