OL Connect plugins stuck while printing

Hello to everyone!

First of all, thank you very much for the help, I really appreciate it!

I’m having a big problem with all the workflows I’ve created. After debugging many issues, I realized that they all occur when the server is creating output: every OL Connect plugin gets stuck until the output is finished. So all my workflows stop working, since most of them use the data mapping plugin.

Here are a few screenshots from my debugging:

Normal data mapping, just a few seconds:
[screenshot]

After printing starts:
[screenshot]

It stays stuck until the output is created:
[screenshot]

I’m sure there must be a way to avoid this and keep using the workflows while a job is being printed. Some jobs take 40 minutes, so I can’t just wait.

Thank you very much again!

Federico

Looks like you don’t have enough resources on your server, which causes Content Creation to suck up all the juice.

What are the resources available on your server? RAM, CPU?

Is your server a physical machine or a VM? If the latter, can you check with your IT if its resources are dedicated or shared?

You can also read the following information about the minimum requirements and the Connect Server engine configuration.

Hi hamelj,

Thank you very much for your response!
I checked the available resources; they are the following (RAM and CPU):

This is a physical machine that we use to generate continuous printing output.
Do you think the resources are enough?
I’ll check the Server engine configuration page you sent me, and I’ll post my current settings here.

Thank you very much again!

If I read this correctly, you have about 86 GB of RAM?

@hamelj: no, that’s 8 GB per bank, so a total of 48 GB

Thanks for the reply, and apologies for the delay; I did not have access to the server on Friday.
Yes, I believe the RAM is 48 GB, and I took a screenshot of the server configuration:

Is it possible that the problem is that I only have 1 DataMapper engine?
Is the memory allocated to the engines a good fit for a server that prints continuously?

Thank you very much, I really appreciate it!!

I’m posting it again in case the picture was too small:

[screenshot]

If the problem were the number of data mapping engines, you wouldn’t see the one in use being slower; jobs would simply be queued.

From what you have explained, it seems like resources are heavily used while a big job goes through, which leaves less for other processes, like the DataMapper.

When you compare both scenarios (with and without Content Creation), are you using the same data file (the one in use by the DataMapper) and the same output job (the one in use by Content Creation)?

Also, do you have an anti-virus which is scanning the jobs? If so, can you turn it off for testing purposes?

While it is going slow (as we have seen, it can take up to 40 minutes), can you look at the Windows Task Manager and see where the resources are being used?

As for your question about the number of DataMapper engines: if you have a lot of processes calling the DataMapper engines simultaneously, having more engines makes sense. You might also want to increase the memory of the DataMapper engines, as they could require more “juice” when dealing with big data files.

As for the memory allocated, it all depends on the number of simultaneous jobs and their size. You might try increasing it, but remember that your OS and the other software running on your server need memory as well.
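To make that trade-off concrete, here is a rough, hypothetical memory budget for a 48 GB server. Every number below is an assumption for the sake of illustration, not a recommendation; the engine names and counts are placeholders, not actual Connect defaults:

```python
# Rough, illustrative memory budget for a 48 GB server.
# All figures below are assumptions for illustration only.
total_gb = 48
os_and_other_gb = 8            # headroom for Windows and other software
server_engine_gb = 4           # the Connect Server process itself

engines = {                    # engine name: (count, GB per engine)
    "datamapper": (2, 4),
    "content_creation": (2, 8),
    "output_creation": (1, 8),
}
allocated = sum(n * mem for n, mem in engines.values())
headroom = total_gb - os_and_other_gb - server_engine_gb - allocated

print(f"allocated to engines: {allocated} GB, headroom left: {headroom} GB")
```

The point of the exercise: once you subtract the OS and the Server itself, raising one engine’s allocation shrinks the headroom available to everything else, which is exactly the contention described in this thread.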

I wish there were an equation to calculate the best configuration for each user, but there is none. It depends on so many factors that finding the proper settings is a matter of trial and error. Then again, should your job profile (the mix of small, medium, and big jobs running simultaneously) change a lot, your settings will need to change as well.

Also, please note that with Connect 2020.1, the Connect Server Configuration has been improved when it comes to engine configuration. It has been made simpler, and the default values better represent the “usual” customer environment; then again, not all environments are the same.

Hope that guides you a little more.

To add one bit of info to @hamelj’s comprehensive response: the DataMapper’s memory usage does not increase with overall job size; it increases with individual record size.

So if you have a million small records (think for instance of a simple Postal Address block extracted from a large mailing), then the DataMapper will require very little memory to run.

However, if you have a single telecom invoice that contains thousands of detail lines, then the DataMapper will require more memory because it has to hold that record in memory for the duration of the extraction process, for that record. But that memory usage isn’t cumulative: once the DataMapper is done processing a record, its memory gets flushed and a new record can be processed.
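The per-record model described above can be sketched in a few lines. This is a toy illustration of the principle only; the function and field names are made up and are not Connect’s API:

```python
# Toy sketch of per-record memory behavior: peak memory tracks the largest
# single record, not the total job size, because each record is dropped
# after extraction. All names here are illustrative.

def process_records(records):
    """Extract each record in turn; only one record is held at a time."""
    results, peak = [], 0
    for record in records:
        detail_lines = record["lines"]        # the whole record in memory
        peak = max(peak, len(detail_lines))   # footprint of biggest record
        results.append(sum(detail_lines))     # extract, then let it go
    return results, peak

# Many small records stay cheap; one huge invoice sets the peak.
small_job = [{"lines": [1, 2]} for _ in range(1000)]
big_record_job = small_job + [{"lines": list(range(5000))}]
_, peak_small = process_records(small_job)
_, peak_big = process_records(big_record_job)
```

In the sketch, adding 1000 more small records never raises the peak; a single record with 5000 detail lines does, which mirrors the mailing-list vs. telecom-invoice comparison above.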

Thank you very much for taking the time to explain and analyze the situation!

I performed a test with the antivirus disabled and the error keeps happening; before I disabled it, Windows Defender was taking up a lot of resources.

I recorded my screen so that you can see what I’m doing (it is only 1 min 47 sec): I cancel the content creation, and the task that was stuck completes automatically.

It is not the same data file; in the workflow I am using a very small XML file.

When the content creation reaches a certain percentage (e.g. 80%), the task is unlocked, and it is not locked again at higher percentages.

Regarding the memory allocation, thank you very much; I’ll research my case to optimize it!

Here is a short video that may help: Recording #335

And these are the Task Manager processes:

Thank you very much again!

At this point it would be better to have a technician involved directly, to run further tests and even try to replicate the issue locally.
I suggest you open a technical support ticket through our website. Once the technician has contacted you, refer them to this post so they can start from what we have gone through so far.

Thank you for the help!

For some reason, changing these settings fixed the problem.
I’m not sure why, but now it’s working well!