I have a question along these lines: if I use the DataMapper to mine/map a text file with tables, and I export the data needed by my addressing software (name, address, city, state, zip) to a CSV, how does Connect match that data back up to the "detail" data when I bring it back in to create my print output in presorted order?
If you want to export the data directly from the DataMapper, you will need to add a Unique ID field to each and every record. When you pass the file to your addressing software, it should remain untouched, which will allow you to match the new records to the previous ones, presumably through a script in Workflow.
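The re-matching step described above can be sketched in plain JavaScript. The record and row shapes below are hypothetical mocks of exported/returned data, not an actual Connect API; in practice this join would typically run in a Workflow script:

```javascript
// Original records, each tagged with a Unique ID before export.
var originalRecords = [
  { uid: 1, name: "Alice", zip: "12345", detail: "invoice #1" },
  { uid: 2, name: "Bob",   zip: "99999", detail: "invoice #2" }
];

// Rows returned by the addressing software (IDs untouched, addresses corrected).
var processedRows = [
  { uid: 2, zip: "90210" },
  { uid: 1, zip: "12345-0001" }
];

// Index the processed rows by uid, then merge the corrections back in.
var byUid = {};
processedRows.forEach(function (row) { byUid[row.uid] = row; });

originalRecords.forEach(function (rec) {
  var update = byUid[rec.uid];
  if (update) rec.zip = update.zip; // detail data stays attached to the record
});
```

Because the join key is the Unique ID rather than row position, the addressing software is free to reorder (presort) the rows without breaking the match.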
Alternatively - and this would be the recommended way of doing things - you can export the data from Workflow itself by using the Retrieve Items task and using a script to convert the resulting JSON file to CSV. In this case, a unique Record ID is already provided for each record. Once the addressing software has done its magic, you can update the original JSON file with a script, using the Record ID as your indexing field. You can then use the Update Data Records task to store the modified information in the Connect Database.
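The JSON-to-CSV conversion mentioned above might look like the following sketch. The item structure here is an assumption for illustration, not the actual schema produced by the Retrieve Items task:

```javascript
// Hypothetical shape of retrieved items: an ID plus a values object.
var items = [
  { id: 101, values: { name: "Alice", city: "Springfield", zip: "12345" } },
  { id: 102, values: { name: "Bob",   city: "Shelbyville", zip: "99999" } }
];

// Emit the Record ID as the first column so the round trip can be indexed on it.
var columns = ["name", "city", "zip"];
var lines = ["RecordID," + columns.join(",")];
items.forEach(function (item) {
  var row = columns.map(function (c) { return item.values[c]; });
  lines.push(item.id + "," + row.join(","));
});
var csv = lines.join("\r\n");
```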
I recommend adding a .replace(/"/g, '""') to escape double quotes if you are going for standard CSV. Not doing so can really mess with the data, which I learned the hard way.
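For standard (RFC 4180-style) CSV, a small helper along these lines covers not just embedded quotes but also commas and newlines; `toCsvField` is a hypothetical name, not a Connect function:

```javascript
// Quote a single CSV field: double any embedded quotes, then wrap the
// value in quotes if it contains a quote, comma, or line break.
function toCsvField(value) {
  var s = String(value);
  if (/[",\r\n]/.test(s)) {
    s = '"' + s.replace(/"/g, '""') + '"';
  }
  return s;
}
```

For example, `toCsvField('Acme "HQ", Inc.')` yields `"Acme ""HQ"", Inc."`, which a standards-compliant CSV parser reads back as a single field.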
One more question: is there a way to write out the detail data instead of the records? I have one record with many detail lines, and I'm looking for the same output.
data.records[i].fields[j].toString() needs to be replaced with…?
You’ll have to add an additional level of looping to use something like:
data.records[i].tables.MyDetailTableName[j].fields[k].toString()
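The nested loop can be sketched against a mock of the `data.records` object model. Note the mock is an assumption for illustration (`MyDetailTableName` is a placeholder for your actual detail table name, and the real DataMapper field collection may not be a plain object):

```javascript
// Hypothetical mock of the DataMapper's record/detail-table structure.
var data = {
  records: [{
    fields: { name: "Alice" },
    tables: {
      MyDetailTableName: [
        { fields: { item: "Widget", qty: "2" } },
        { fields: { item: "Gadget", qty: "1" } }
      ]
    }
  }]
};

var lines = [];
for (var i = 0; i < data.records.length; i++) {
  var rows = data.records[i].tables.MyDetailTableName;
  for (var j = 0; j < rows.length; j++) {        // one output line per detail row
    var fields = rows[j].fields;
    var values = [];
    for (var k in fields) values.push(fields[k].toString());
    lines.push(values.join(","));
  }
}
```

The extra `j` loop over the detail table is what turns one record into many output lines.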
Is there a way to add the number of pages per document record to this script?
The DataMapper cannot know how many pages the document will generate using the data you extracted.
Is there a script to get the column names in an extraction?
No there isn’t. However, if you use Workflow to launch the data mapping operation, then you could store the first line of the CSV (which contains the column names) in a runtime parameter. Then it becomes a simple task in the DataMapper to extract any column name from that runtime parameter.
That is unfortunate. Thank you for the alternative, but I need it to work from the DataMapper and Designer only. My workaround is to load the CSV as an external file in the DataMapper and store the first line,
like this:
External Library wont refresh - PlanetPress Connect / DataMapper - OL Learn (objectiflune.com)
Hello,
Try this code :
var getColumnNames = function() {
    var names = [];
    for (var name in record.fields)
        names.push(name);
    return names;
}

if (record.index == 1) {
    var colNames = getColumnNames();
    logger.info(colNames.join(','));
}
Herve
Your script extracts the field names after they have been extracted. What @Filemon was asking about is the ability to extract the original column names from a CSV data file before they get extracted to fields.
Can't you store the headers and use them in a post-processor script to export the file?
Not the original headers, no.
Then you can read the first line of the CSV in a pre-processor script, before it is read by the extraction steps.
Ahhhhh… of course you're right! I had a brain fart!
A pre-processing script like the following would work:
var dataFile = openTextReader(data.filename,"");
data.properties.csvHeaders = dataFile.readLine().split(",");
dataFile.close();
The above script assumes you create a data property named csvHeaders and that your CSV uses commas as separators.
Note that the above script is not fail safe: it assumes a simple split(",") operation is good enough to separate all column names as long as none of the names include a comma. For instance, this kind of header, although perfectly valid, wouldn’t work:
"Street","Appartment","City,State", "Country"
Thanks for setting me straight, Hervé!
Try this :
data.properties.csvHeaders = dataFile.readLine().split(/,(?=(?:(?:[^"]*"){2})*[^"]*$)/);
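To see why this works, here is that quote-aware split applied to the kind of header a plain split(",") cannot handle (the lookahead only allows a split at commas followed by an even number of quotes):

```javascript
// Header with an embedded comma inside a quoted column name.
var header = '"Street","Apartment","City,State","Country"';

// Split on commas that sit outside quoted sections.
var names = header.split(/,(?=(?:(?:[^"]*"){2})*[^"]*$)/);
// names → ['"Street"', '"Apartment"', '"City,State"', '"Country"']
```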
Good job, Hervé! I didn't have the patience to dig into the proper RegEx, but I'm glad to see someone went the extra mile.
Google power …
“regexp split javascript csv ignore comma”