Getting Workflow logs before a process has finished

Is there any way to get access to the logs for a Workflow process before a job has actually finished, so you have some sort of idea where jobs are up to? I know we can add manual logging by adding a Run Script after each stage and writing to a file, but that's a pain when we have over 100 live processes at the moment.
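
For reference, the kind of Run Script I mean is something along these lines. This is only a rough sketch in JScript; the log file path, the "CurrentStage" local variable and the log level are just examples, not anything built in:

```javascript
// Rough sketch of a "Run Script" task dropped after each stage of a process.
// It appends one status line per stage to a shared text file, so you can see
// roughly where each job is up to while it is still running.
var fso = new ActiveXObject("Scripting.FileSystemObject");

// 8 = ForAppending, true = create the file if it does not exist yet
var statusLog = fso.OpenTextFile("C:\\WorkflowStatus\\progress.log", 8, true);

var stage   = Watch.GetVariable("CurrentStage");  // local variable set just before this script
var jobFile = Watch.ExpandString("%o");           // original input file name
var stamp   = new Date().toLocaleString();

statusLog.WriteLine(stamp + "\t" + jobFile + "\t" + stage);
statusLog.Close();

// Echo the same information to the Workflow log as well
Watch.Log("Reached stage '" + stage + "' for " + jobFile, 3);
```

Drop one of these after each stage and point them all at the same file, and a quick look at that file shows the last stage each live job reached. But doing that across 100+ processes is the painful part.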

Hi @jbeal84,

Can you please provide us with some detail about what you mean by the following:

… so you have some sort of idea where jobs are up to?

Hi Marten

So I mean what step a job is on. At the moment we drop files in and don't have any idea where it's up to. Once the job is done you can see in the Workflow logs how long each step took, etc., but that's only once it's all done, and it would be good to have an idea of where a job is up to while it's running.

You also have no idea what jobs are running, so if we want to send the Workflow configuration to the server we have to stop the service and hope nothing is running; if something is, we have to leave it stopping until that job is done. That can take a significant amount of time, and while it's stopping, other jobs can't run until it's finished, so we can then have issues where scheduled processes don't run either.


Hi @jbeal84,

This is not natively available in Workflow.

Full disclosure: there is an undocumented option to instruct Workflow to produce synchronous logs (in which each task writes to the logs as it gets executed), but that option is extremely verbose and generates as many log files as there are active processes in your configuration. All those files take up a huge amount of disk space, which is why we only recommend using this option for a short period of time, when debugging a process.
But even if you decided to turn this option on, you would then have to write a process that parses all the log files from all processes… and that process would itself generate a huge amount of log information.

We are currently looking at various telemetry tools that would allow the system to report on its current status, but this is a long term project.


Hi @Phil

Yeah, that's not ideal, but I do think this is a massive hole in PReS Connect. We have almost 200 processes/jobs running through our Workflow for all our clients, and when we need to make something live we have no idea whether we can, what's running, or how far through it is. Most jobs are hot folders receiving files 24/7.

So we have to stop the Workflow service and cross our fingers that nothing big is running. If there is, it can take a long time before it even says which process it is, or worse, if it's a queue with replication on, it doesn't tell us what it is at all. We then have to wait for that job to finish, but we process jobs of over 100k pages at a time, so this can take a long time. Sometimes we have to kill the process because other work queues up behind it, but when we do that we can very easily miss jobs, as we have no idea what was running or how far it had got.

James

Thanks for this, I will definitely keep a link to this post for the telemetry ticket.

This is a longstanding issue. With Connect jobs, I have coded my processes to log various counts (for example, when you run a Data Mapper in validation-only mode, you get a record count) along with the active step, process name, data file name, etc. into the Data Repository. This is displayed in a "Job Status" web page I built that uses Datatables.net (Ajax/jQuery) to periodically call another process that queries the repository. It's not quite real-time, and it has the downside that I wrote it all using Workflow as the HTTP server, so it adds load to Workflow. But conceptually, you can work around the actual log files and do your own logging within the processes. Knowing the record count gives you some sense of the size of the job.

I've also written an Action step inside data maps that writes out logging information every X records, so that even while a Data Mapping step is running, we can get "100 of 2,000 records mapped", "200 of 2,000 records mapped" information. Again, the downside is that you slow things down to do this.
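
For what it's worth, the browser side of that status page is roughly along these lines. It's a simplified sketch only; the /jobstatus URL, the column names and the 30-second interval are placeholders for whatever your own HTTP Server Input process and repository layout look like, and it assumes jQuery and DataTables are already loaded on the page:

```javascript
// Simplified sketch of the "Job Status" page: a DataTables table that polls a
// Workflow HTTP Server Input process returning the repository rows as JSON,
// e.g. { "data": [ { "process": "...", "step": "...", ... }, ... ] }.
$(document).ready(function () {
  var table = $('#jobStatus').DataTable({
    ajax: { url: '/jobstatus', dataSrc: 'data' },
    columns: [
      { data: 'process' },   // Workflow process name
      { data: 'dataFile' },  // input data file name
      { data: 'step' },      // last step the process reported
      { data: 'records' },   // e.g. "200 of 2,000 records mapped"
      { data: 'updated' }    // timestamp of the last status write
    ]
  });

  // Re-query the status process every 30 seconds; not real time, but close enough.
  setInterval(function () {
    table.ajax.reload(null, false); // false = keep the current paging position
  }, 30000);
});
```

Polling like this is part of the extra load on Workflow I mentioned, so keep the interval reasonable.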

Yeah, I've started to add manual logging to some new large jobs we have which take several hours. The problem is that we already have over 150 processes within our Workflow, so it's a huge task to go through and add all the extra logging steps, and when creating new jobs the developers have to remember to add them all in. Like you say, there are workarounds, but I'm just surprised there's nothing in there doing this automatically, as every other piece of doc comp software I've worked with has all this by default.
