Where can I find examples of how to set up a commingling workflow? Checks are in one data map/template and Remittance Advices in another, but they need to be combined into a single mail piece.
We don’t have any currently, though I suspect some are in the works.
A basic commingling setup is pretty straightforward, however.
First and foremost, ensure that you have a unique ID in the two data sets that can be used to merge them: a customer ID or invoice number, for example. This is what you’re going to use to define what belongs in the same mail piece.
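For instance (made-up sample data), both files might carry the same account number, and that shared column becomes the merge key:

Checks file:      AcctID, CheckNo,   Amount
                  10234,  55501,     1250.00
Remittance file:  AcctID, InvoiceNo, AmountPaid
                  10234,  INV-88,    1250.00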
Next, for each template, you’ll run the DataMapper and Create Print Content portions. It’s recommended that you also tag these by way of the Set Properties action.
See below, where we run an Invoice and a Letter; each set gets tagged to allow for some sorting later on.
Note that we’re tagging the Content Items themselves with this. Commingling only occurs with Content Items, so it’s important that we put these tags here if we expect to be able to use them for the commingling portion.
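In this example, the Set Properties action on each branch adds a single Content Item property along these lines:

Set Properties (scope: Content Items)
    Invoice branch:  Type = Invoice
    Letter branch:   Type = OASISLetter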
With those two executed, the next step is retrieving them and performing the actual commingling. This all occurs in the Retrieve Items action.
Here, we’re simply stating what we’re retrieving: Content Items that have a Type property of either Invoice or OASISLetter. You may need to get more specific here to ensure you’re only getting the items you want, but for our simple example it will suffice.
Finally, we define the actual commingling itself.
So, from the items retrieved, we’re telling it to match them up based on the ID (Document Contents - Pick items based on). We’re also telling it to sort on Type, which defines the order the content sets appear in within each mail piece. In this example, we’re just doing a simple alphabetic sort based on the Property we set earlier. This lets us put each invoice followed by its corresponding letter.
And just for the sake of clarity, note that we’ve specified the ID as type Value. This tells it to look at the fields from the DataMapper for each content item; Property would instead pull from the values we set via the Set Properties action.
These selections can use multiple fields as well. I have another example I was working on that matches on the First Name field plus the Last Name field, since I have no single unique value to match on in those data sets.
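To make the mechanics concrete, here’s a standalone sketch (runnable with cscript, made-up data) of the logic being applied: group the content items on a shared ID, then sort each group alphabetically on its Type property. This is not the plugin’s actual code, just the idea.

Option Explicit
Dim groups, id, items, i, j, tmp
Set groups = CreateObject("Scripting.Dictionary")

Sub AddItem(key, itemType)
    ' Collect every item that shares the same merge key
    If Not groups.Exists(key) Then groups.Add key, Array()
    Dim arr : arr = groups(key)
    ReDim Preserve arr(UBound(arr) + 1)
    arr(UBound(arr)) = itemType
    groups(key) = arr
End Sub

' Hypothetical content items from the two runs; a composite key
' (e.g. FirstName & "|" & LastName) works just as well here.
AddItem "INV-1001", "OASISLetter"
AddItem "INV-1001", "Invoice"
AddItem "INV-1002", "Invoice"

For Each id In groups.Keys
    items = groups(id)
    ' Simple bubble sort on Type: "Invoice" sorts before "OASISLetter"
    For i = 0 To UBound(items) - 1
        For j = 0 To UBound(items) - i - 1
            If items(j) > items(j + 1) Then
                tmp = items(j) : items(j) = items(j + 1) : items(j + 1) = tmp
            End If
        Next
    Next
    WScript.Echo "Mail piece " & id & ": " & Join(items, ", ")
Next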
That’s essentially it for the actual commingling. You can then go on to specify the Group contents, if you need to; this just lets you do some grouping at a higher level. For example, perhaps I’ve got invoices for the US and Canada and I want to group by country. I’d define a Pick items by Value on Country, and then perhaps do a multi-level sort by State/Province and City, or keep it simple and sort by ZIP. I’d end up with all of Canada, sorted by province, followed by all of the US, sorted by state.
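In settings terms (field names assumed for the illustration), that would look something like:

Group contents
    Pick items based on:  Value -> Country
    Sort by:              StateProvince, then City (or ZIP)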
That was very helpful, and I got it all working fine.
I did rack my brain playing with the various metadata settings of each plugin, wondering how this all works and why I wasn’t getting good output.
I finally narrowed my specific issue down to NOT setting the OL Proxy settings in the Retrieve Items step. Why is this necessary? Without it, Retrieve Items doesn’t work and I can’t generate any output. I have my Workflow Preferences set to use the default port 9340 and the ol-admin account. None of the other Connect plugins require the OL Proxy tab settings to be filled out…
Honestly, I can’t say what’s going on there. It isn’t supposed to be necessary.
I have never set the Proxy settings for any action as the Global setting in Preferences has always been sufficient. Even while testing Commingling, I haven’t encountered this.
What happens if you delete the Retrieve Items action and add it again? Does it still require the proxy settings? Is it only when you’re also configuring commingling, or does it also require proxy settings for a simple retrieve?
Deleting it and adding it again without the proxy settings worked. I’m having the strangest issues lately with things clearly not working, but after a reboot the problem goes away. Had this happen with the Word-to-PDF plugin and now this. Oh well.
Hi AlbertsN,
When testing this with one record per database, I should get a two-page PDF. However, the Retrieve Items task is retrieving old attempts from the Connect database. How do I either empty the database or grab only the current run from it?
Regards,
S
In general, whenever I’m working with a Retrieve Items task during testing, I tag the records with a unique string.
So at the start, I assign %u to a variable so that it remains the same throughout that run but resets on a new test. I then use that variable in a Set Properties action after producing the records I’ll be retrieving. So if I want to retrieve a Content Set, I’ll run the DataMapper, run the Create Content, then Set Properties on the Content Set.
Now when I do my retrieve later down the process, I can simply specify that unique string from my variable as part of the conditions for the retrieve. This gives me only the current session’s set.
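The usual way to capture it is the Set Job Infos and Variables plugin, but a Run Script action works too; the equivalent is roughly this one-liner ("runID" is just an example variable name):

' Capture %u once so the whole run shares the same unique string
Watch.SetVariable "runID", Watch.ExpandString("%u")

The Set Properties action then uses %{runID} as the property value, and the Retrieve Items condition compares against that same %{runID}.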
I’d suggest this as a general rule when doing anything with a Retrieve Item action as you may also encounter re-runs in a production environment. You don’t want to start multiplying your output just because an end user sent the same job twice.
That’s definitely the proper way to do things.
And if you are wondering how to determine which unique ID (%u) was used for which job, you can store the values in the Data Repository along with any other information that will allow you to determine exactly which ID belongs to which run.
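The Data Repository plugins are the clean way to do that. Purely to illustrate the idea, though, a Run Script could just as easily append the ID and a timestamp to a plain log file (the path here is hypothetical):

Dim fso, log
Set fso = CreateObject("Scripting.FileSystemObject")
' 8 = ForAppending, True = create the file if it doesn't exist
Set log = fso.OpenTextFile("C:\Workflow\run-ids.log", 8, True)
log.WriteLine Now & vbTab & Watch.GetVariable("runID") & vbTab & Watch.GetJobFileName
log.Close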
Thanks @AlbertsN and @Phil. I thought the %u would be the answer since I did see a video on YouTube by Evie that utilises that system variable. I’m at home right now but will continue testing tomorrow morning.
Regards,
S
I managed to get it to work with what you suggested but have a question or two.
1. Do we really have to have three processes for commingling to work? I’d prefer not to waste my limited number of processes. (If not, then disregard question 2.)
2. If I’m forced to use three processes: I have each process’s Folder Capture monitoring one folder. Process Payslip has a mask of Payslip.txt and process Statement has a mask of Statement.txt, so that the correct process picks up the correct file. I then have the commingling process’s mask set to trigger.txt. Is this the best way to do it? I’d prefer this to be automated, in the sense that I don’t have to manually trigger the commingling process.
Regards,
S
No, not at all… You can run all the jobs in different branches of the same process. It just depends on what you’re doing.
For example, say you’re running invoices all week long, storing the results in the DB, then at the end of the week you retrieve them and commingle them with some sort of customer letter.
Well, it may be simpler to split this into two processes, one dealing strictly with the invoices and the second with the letter. However, I could very easily imagine a scenario where you’ve got a conditional letter branch and a conditional invoice branch. As the invoices come in, the condition sends them down the invoice branch. At the end of the week, a special data file comes through that trips the condition for the letter branch. That branch processes the file, then retrieves both sets for the commingling and final output.
Likewise, if you’re not interested in storing things for later retrieval, but just need to split up the job into multiple templates that get commingled at the end, each of them can run in their own branch and they’re all commingled at the end.
For your specific scenario, it sounds like you don’t want to store things; you’re just running them through and merging them. So maybe begin your process with a Folder Listing that you then analyze to determine whether both of your files are there. You don’t want to start if both files aren’t present; otherwise, you’d need to store batch IDs in a way that a future job can make use of, and you’d still need to check later whether both batches were completed.
So, if both files are present, one branch picks up the payslip and processes it, then the next branch picks up the statement and processes it. Finally, it returns to the main trunk, where both jobs are retrieved, commingled, and output.
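A Run Script version of that “are both files here?” check could look something like this (the paths and the choice of job info %9 are just examples); a downstream condition on %9 then decides whether to proceed:

' Flag the job as ready only when both inputs have arrived
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists("C:\In\Payslip.txt") And fso.FileExists("C:\In\Statement.txt") Then
    Watch.SetJobInfo 9, "ready"
Else
    Watch.SetJobInfo 9, "waiting"
End If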
The biggest thing to watch out for, I’d say, is just ensuring that all of the pieces are ready before you begin commingling. At the end of the day, there are going to be numerous ways of accomplishing this (the Data Repository can be immensely useful here), but you’re absolutely not required to do it in multiple processes.
AlbertsN already provided a very thorough reply, but let me simply add a bit of general information about processes:
The maximum number of individual processes you can define in Workflow is 512. So unless you have an extremely busy server handling an incredible number of business rules, this should not be a limitation (if it is, maybe I should be selling you an additional server!).
Now, whether you run a single process with multiple branches or several smaller processes simultaneously is pretty much a matter of taste, but it may also have an impact on performance:
- Multiple processes are usually easier to maintain than a single one, if only because navigating through the tasks can quickly become cumbersome when there are many of them
- Multiple processes can be grouped into logical categories in the Workflow tool, allowing you to organize them visually, again making management a bit easier
- Multiple processes and single processes can both be set to self-replicate, so the number of processes that can actually run simultaneously can be much, MUCH higher than the number designed with the Workflow Configuration tool. This setting can be adjusted through the Workflow preferences.
- Single processes are better at handling actions that need to run synchronously (i.e. do this, and THEN do that), which is more difficult to achieve with parallel processing
Hope that clarifies a few things.
Hi AlbertsN,
Thanks for the quick response. I script certain jobs to merge the data via VBScript, using dictionaries with keys so that the payslip matches the statement data. (Some recipients don’t get a statement while some do.)
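Much simplified, the idea is something like this (made-up data):

' Key the statements on an ID, then walk the payslips and pull
' the matching statement when one exists
Dim statements : Set statements = CreateObject("Scripting.Dictionary")
statements.Add "EMP-001", "statement data for EMP-001"

Dim payslips, rec, parts
payslips = Array("EMP-001|payslip data", "EMP-002|payslip data")

For Each rec In payslips
    parts = Split(rec, "|")
    If statements.Exists(parts(0)) Then
        WScript.Echo parts(1) & " + " & statements(parts(0))
    Else
        WScript.Echo parts(1) & " (no statement for this recipient)"
    End If
Next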
With all my jobs, I get both databases together for print, so there’s no need to commingle at a later stage. So, regarding the use of a single process: if I branch out using a condition for the payslip and then another conditional branch for the statement, the process stops because both databases have been used (no more files for the Folder Capture to grab). Therefore, the main branch does not continue past the second conditional to complete the commingling part. I just tried, and there is no output.
I can’t send a screenshot because remote accessing my Workflow machine from home does not give me enough screen to show the whole process. Perhaps I should open a support ticket?
Regards,
S
Hi Phil,
Thanks for your input, greatly appreciated. However I did only see your response after posting mine to AlbertsN. If you can take a look at that and advise too.
Regards,
S
Not easy to figure out without seeing the entire process, but if you want to continue past your conditions, make sure those conditions are themselves housed inside a standard branch. That way, even after all your conditions have been evaluated and dealt with, control will go back to the trunk (or to the parent branch), which can then keep the process going.
Thanks Phil. I think I get what you are saying, but it is getting late and I need to get supper sorted. I will test again in the morning, and should I hit a brick wall, I will send a screenshot of how I currently have it set up. There is more to this job that might change how one should address it… e.g. I get 11 payslip databases to match to one statement database. Each payslip might have a statement, and I need OMR and individual file breakdowns of payslips/statements. I have this working without commingling but want to see if I can achieve the same with it.
I will try what I think you are saying in the morning and if I hit a brick wall I will let you guys know and we can take it from there.
Regards,
S