We have a data file that comes out of one of our back-office systems. It contains two different headers and two different groups of data, arriving in the order below:
HEADER-DATA-DEFINITION
OccNum,
TncyUsrCde,
PayRef,
Name1,
Name2,
StrDte,
NAddr1,
EndDte,
NAddr2,
NAddr3,
TABLE-DATA-DEFINITION
WeekNo,
TrnDte,
TrnDteA,
PerCom,
Debit,
CnASgn,
CnAVal,
RntVal,
ChgVal,
Credit,
START-OF-DATA
1111111
11111111
1111111111
MRS X XXXX
19th December 2016
2 Street Address
15th March 2017
Town
County
38
########
19-Dec-16
19/12/2016
Field CnASgn does not exist.
39
########
26-Dec-16
26/12/2016
Field CnASgn does not exist.
40
########
02-Jan-17
########
56.7
Field CnASgn does not exist.
128.37
########
03-Jan-17
Field CnASgn does not exist.
65
41
########
09-Jan-17
########
56.7
Field CnASgn does not exist.
128.37
########
12-Jan-17
Field CnASgn does not exist.
65
42
########
16-Jan-17
########
56.7
Field CnASgn does not exist.
128.37
43
########
23-Jan-17
########
56.7
Field CnASgn does not exist.
128.37
44
########
30-Jan-17
########
56.7
Field CnASgn does not exist.
128.37
I can create 2 separate datamappers (one for the person data and one for the rent table), but then I cannot use both on the same template. Does anyone have any advice on either using two data mapping files on one template, or creating one datamapper that can handle the change in columns?
The first line contains the names of the fields for the header data (which is the customer information)
The second line contains the names of the fields for the transactional line items
The third line contains the actual customer information
The remaining lines contain the actual transactional line items
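The four-part structure described above can be sketched in plain JavaScript. This is a product-neutral illustration of how the file splits apart, not the DataMapper's own parsing; the assumption (per the description above) is that line 1 holds the header field names, line 2 the table field names, line 3 the customer data, and everything after that the transactions:

```javascript
// Split the raw file into its four logical parts, following the
// structure described above: line 1 = header field names,
// line 2 = table field names, line 3 = the single customer record,
// lines 4+ = transactional line items.
function splitFile(raw) {
  var lines = raw.split(/\r?\n/).filter(function (l) { return l !== ''; });
  return {
    headerFields: lines[0].split(','), // e.g. OccNum, TncyUsrCde, PayRef, ...
    tableFields:  lines[1].split(','), // e.g. WeekNo, TrnDte, TrnDteA, ...
    customerLine: lines[2],            // the customer information
    transactions: lines.slice(3)       // everything else
  };
}
```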
You could preprocess the file using Workflow: store the contents of the third line (the customer information) in a Job Info variable, then use the Add/Remove Text task to remove the first three lines entirely. The file will then contain only the transactional line items. When you call the DataMapper, the Job Info variables are passed to it automatically from Workflow, and you can use a script to extract each separate value from the Job Info into its appropriate field without impacting the rest of your data mapping process.
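The extraction script could look something like this. The field names come from the sample's HEADER-DATA-DEFINITION section; how you actually read the Job Info inside a DataMapper script depends on your product version, so here the value is simply passed in as a string:

```javascript
// Field names taken from the sample file's HEADER-DATA-DEFINITION section.
var HEADER_FIELDS = ['OccNum', 'TncyUsrCde', 'PayRef', 'Name1', 'Name2',
                     'StrDte', 'NAddr1', 'EndDte', 'NAddr2', 'NAddr3'];

// Turn the comma-separated customer line (as stored in the Job Info)
// into an object of named values, one per header field.
function parseCustomerInfo(jobInfoValue) {
  var values = jobInfoValue.split(',');
  var record = {};
  for (var i = 0; i < HEADER_FIELDS.length; i++) {
    record[HEADER_FIELDS[i]] = (values[i] || '').trim();
  }
  return record;
}
```

Each named value can then be assigned to the matching field in your data mapping configuration.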
Thanks! Stupidly, I have used something similar before but didn't think of it.
When I used it before, however, I had to create a datamapper for the file, then a Print Content, then a Create Job in order to set some fields as local variables (workflow below). Is there a quicker way to preprocess a data file?
Just use a SetJobInfos task. Say you want to set JobInfo 9 to the contents of that entire third line: select %9 as the JobInfo #, then right-click the value field and select Get Data Location. Highlight the third line and press OK. You can adjust the width of your selection (From/To Column) manually. It should look something like this: