I have the following questions.
1. What is the Java heap low-memory error (it shows as an out-of-memory error), and what version of Java does PlanetPress use?
2. I have read that, under Scheduling, a small job (up to the maximum-records threshold) is processed by only one merge engine. I have one 14 MB file containing a very large number of records. I am using the Performance Pack, and my hardware is a 10-core CPU with 32 GB of RAM, configured with 6 merge engines and 4 parallel merge engines.
When I submit the file and watch Task Manager, only one merge engine is utilized: it uses one core (about 10% of the 10-core CPU) and takes memory at the same time, but the remaining parallel merge engines are not used.
Please let me know: is it a rule of thumb that a small job, even one with the maximum number of records, uses only one merge engine and cannot split its processing across multiple cores?
If the processing could be split across multiple cores it would finish faster, but with a single engine it takes 9 or more hours to process.
One more thing: I have a trigger file that passes through all the modules successfully. After data mapping it reaches the template, takes a very long time to process, and then fails in the template with error W3001, which is related to the Java heap low-memory error.
I opened a support ticket and was told to increase the memory allocated to the merge engine so the process would complete successfully.
I can successfully process files up to about 4 MB, but the 14 MB file with its large number of records will not process, even though I have set everything as advised.
Lastly, can you tell me how to process a small job with the maximum number of records quickly? In my experience, a small job with many records uses only one engine and cannot be processed in a short time no matter what I do.
This article does a good job explaining the Java Heap space and what it means when it runs out of resources: Splunk® Application Performance Monitoring | Splunk
In the latest versions of Connect, the memory allocation can be easily modified through the Connect Server Configuration.
You can see here that I’ve increased my allocation to 2 GB per Weaver engine, for example.
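For context, each Connect engine runs as its own Java process, and the allocation you set here corresponds to that process’s maximum heap (the JVM’s -Xmx limit). Here is a minimal, generic Java sketch, not Connect-specific code, showing how a process can report its heap ceiling and what happens when that ceiling is exceeded:

```java
// Generic JVM illustration, not PlanetPress Connect code.
// Run with e.g. `java -Xmx2g HeapInfo` to mirror a 2 GB engine allocation.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("Max heap (-Xmx): %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("Currently used:  %d MB%n",
                (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
        // If live objects ever need more than the max heap, the JVM throws
        // java.lang.OutOfMemoryError: Java heap space - the "low memory"
        // condition behind the W3001 failure described above.
    }
}
```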
Now, as for what constitutes a small, medium, or large job, that has to do with the number of records in the data file, not the file size. This is configurable, again in the Connect Server Configuration, under the Scheduling menu.
So for any file that has 100 records or fewer, it is treated as a small job and only one engine is allocated. For anything between 100 and 10000, it’s medium, and 10000 or more is large. This relates directly to the allocations you make for the merge, weaver, and datamapper engines. Again, you can configure these to suit your needs.
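To make those thresholds concrete, here is a hypothetical sketch of the classification logic. The class, method names, and boundary handling are my own illustration using the default 100 / 10,000 values mentioned above; they are not Connect’s actual implementation.

```java
// Hypothetical illustration of how record-count thresholds decide job size.
// Adjust the constants to match what you configure under
// Connect Server Configuration > Scheduling.
public class JobSizer {
    enum JobSize { SMALL, MEDIUM, LARGE }

    static final int SMALL_MAX_RECORDS = 100;     // "maximum records for a small job"
    static final int LARGE_MIN_RECORDS = 10_000;  // "minimum records for a large job"

    static JobSize classify(int recordCount) {
        if (recordCount <= SMALL_MAX_RECORDS) return JobSize.SMALL;  // one engine
        if (recordCount >= LARGE_MIN_RECORDS) return JobSize.LARGE;  // parallel engines allowed
        return JobSize.MEDIUM;
    }

    public static void main(String[] args) {
        // A 1.4 million record file is well past the large-job threshold.
        System.out.println(classify(1_400_000)); // LARGE
        System.out.println(classify(80));        // SMALL
    }
}
```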
At the end of the day, finding the right settings for your setup is going to be a balancing act. If you’re running a lot of small jobs, for instance, but have the occasional large job come through that must be treated as high priority, then you might have to set up some reservations for large jobs so that the engines are available when those jobs come through. Otherwise, multiple parallel small jobs may end up using all of your allocated engines.
Dear AlbertsN
I have a 14 MB file that contains 14 lakh (1.4 million) transactional detail records, i.e. 1.4 million lines. In Scheduling I set the maximum records for a small job to 100 and the minimum records for a large job to 1000.
So the file should be treated as a large job, and I have set 4 parallel merge engines for large jobs, but I see that only one engine takes memory and a processor core. It does not split across multiple cores even though we have 4 parallel engines for large jobs. May I ask: is it a rule of thumb in PlanetPress that a single job, whether large or small, is processed by only one engine and cannot be split across multiple engines? If the transactional detail is more than 14 lakh records, it takes a very long time to process with one engine. It takes 9 hours to process one file; how can we offer a workable solution to banks and to health and insurance organizations?
These organizations' customers have a great deal of transactional detail, and if we use the PlanetPress print server for bulk printing it is very time-consuming. Customers need to receive their details on time for timely reconciliation.
Any job that’s considered Small is only ever going to use one engine, yes. You could be running multiple Small jobs in parallel, however, each on its own engine. You just have to have enough engines allocated (and the resources to power them) and then start feeding multiple jobs through Connect at once. They’ll spread out to the first unoccupied engine that they’re allowed to use (again, based on your engine allocations).
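If it helps to picture that behaviour, it is analogous to a fixed-size thread pool: a single task only ever runs on one worker, but several independent tasks submitted together spread across the free workers. This is only an analogy in plain Java, not how Connect itself is implemented:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Analogy only: engines behave like workers in a fixed-size pool.
// One submitted job occupies one worker; parallelism comes from
// submitting several independent jobs at the same time.
public class EnginePoolAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService engines = Executors.newFixedThreadPool(4); // e.g. 4 merge engines

        for (int job = 1; job <= 4; job++) {
            final int id = job;
            engines.submit(() -> {
                // Each "small job" runs entirely on whichever engine picked it up.
                System.out.println("Job " + id + " on " + Thread.currentThread().getName());
            });
        }

        engines.shutdown();
        engines.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```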
I think you’re going to want to open a support ticket on this. That way we can look into the logs to determine where the slowdowns are occurring and make some more targeted recommendations. Nine hours for a single job does sound incredibly long, but there are many factors at play here and it’s important to determine which of them are actually impacting processing time. From what I can gather, it sounds like there may be other factors to consider first in these jobs.