fs use in the DataMapper

Hello,
I want to read a text file into an array and then get values from it with a lookup on a specific column. The following script works fine in Node.js and does exactly what I need, but I’m getting an error when I try to run it in a DataMapper action.

It doesn’t recognize the “for” syntax used to read the lines, returning this error:

" Multiple markers at this line

  • missing ; after for-loop condition
  • missing ; after for-loop initializer"

Any idea why it is not working? Could it be an issue with the fs library?

const fs = require('fs');
let cloudtxt = fs.readFileSync('C:\\Secure Folder\\Portal\\OneDrive - ΠΑΠΑΔΟΠΟΥΛΟΣ ΑΕ\\WEST\\WEST_from\\output.txt', 'UTF-8');

let CloudOut = [];

for (let line of cloudtxt.trim().split('\n')) {    //<--- error:
	let s = line.trim().split('\t');

	CloudOut.push({
		UniqueEnvelopeID: s[0],
		Inf_DetailsID: s[1],
		Envelope_Barcode: s[2],
		Phase1_ID: s[3],
		TicketID: s[4],
		EnvelopeID: s[5],
		HubID: s[6],
		TokenID: s[7],
		Aux3: s[8],
		PrimarySortingColumn: s[9],
		pgStr: s[10],
		TK_Recepient: s[11],
		Address_Recepient: s[12],
		Address_Recepient_No: s[13],
		PFinalDeliveryID: s[14],
		FinalDeliveryDescription: s[15]
	});
}

let index = CloudOut.find(x => x.UniqueEnvelopeID === 'PirPagio_A_2021102340870_10_00138').pgStr;

thank you
Akis

First, require() is specific to Node.js; it is not part of standard JavaScript. Fortunately, you don’t need it, because the DataMapper provides the openTextReader() method for reading text files.
Second, the DataMapper is not ES6-compliant, so constructs like let or arrow functions are not supported.

The following code should achieve the same thing:

var index;
var CloudOut = [];
var line;

var myFile = openTextReader('C:\\Secure Folder\\Portal\\OneDrive - ΠΑΠΑΔΟΠΟΥΛΟΣ ΑΕ\\WEST\\WEST_from\\output.txt', "UTF-8");
while ((line = myFile.readLine()) != null) {
	var s = line.trim().split('\t');

	CloudOut.push({
		UniqueEnvelopeID: s[0],
		Inf_DetailsID: s[1],
		Envelope_Barcode: s[2],
		Phase1_ID: s[3],
		TicketID: s[4],
		EnvelopeID: s[5],
		HubID: s[6],
		TokenID: s[7],
		Aux3: s[8],
		PrimarySortingColumn: s[9],
		pgStr: s[10],
		TK_Recepient: s[11],
		Address_Recepient: s[12],
		Address_Recepient_No: s[13],
		PFinalDeliveryID: s[14],
		FinalDeliveryDescription: s[15]
	});
	if(s[0] == 'PirPagio_A_2021102340870_10_00138') {
		index = s[10];
	}
}

Hi Phil, I see. Many thanks for the code and the clarifications!

I’m trying to figure out how to make it load the text file into the array just once, and then query it (probably with the find() method in non-arrow syntax?) to get a couple more fields for each record in the DataMapper.

Sure, the find() method will allow you to do that after the array is loaded:

var pgStr = CloudOut.find(function(item) { return item.pgStr == "some value"; });
var Aux3 = CloudOut.find(function(item) { return item.Aux3 > "17"; });
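
Note that find() returns the matching item itself (or undefined when nothing matches), so you read the field you need off the result. Here is a minimal sketch, reusing the CloudOut array and the ID from the original script, with a small guard for the case where the ID is not found:

var match = CloudOut.find(function(item) {
	return item.UniqueEnvelopeID == 'PirPagio_A_2021102340870_10_00138';
});
// find() yields the whole row object, so pick the field you need and
// fall back to an empty string when no matching row exists
var pgStr = (match != null) ? match.pgStr : '';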

Works great, thank you @Phil!
At last I got rid of the ODBC text driver, which was very unstable.

I have an action step that loads the array right after the first extraction step, and then a second extraction step with the fields that are acquired from the array.
Is this the right place for it, or does the array get reloaded for every record of the first extraction step?

If you load your array with an action step, then that action step will be executed for each record in the file. So that’s not the most efficient way to proceed (although this is relative: it all depends on how fast that action step actually runs).

If you want to make it really efficient, I would suggest you perform the loading of the array in Workflow, and then pass that array as a runtime parameter to your data mapping configuration. That way, the array is only loaded once (through Workflow) and each record in your file can do a simple lookup without having to constantly reload the text file since the runtime parameter is available for the entire duration of the data mapping process…

The JavaScript syntax in Workflow will be slightly different (you’ll have to use the Scripting.FileSystemObject object to parse the file), but the overall logic remains the same. You just store the resulting array in a process variable or in a JobInfo, and then specify that JobInfo as a runtime parameter for your data mapping config.
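
As a rough illustration of that approach, here is a minimal Run Script (JScript) sketch for the Workflow side. Treat it as an assumption-laden starting point rather than a drop-in solution: it assumes JobInfo 9 is free, that the script engine exposes JSON.stringify() (if not, the JSON string has to be assembled by hand), and note that FileSystemObject does not decode UTF-8, so a file with non-ASCII content may need to be read through ADODB.Stream instead.

// Hypothetical Workflow-side loader (Run Script task, JScript).
// Reads the tab-delimited lookup file once and stores it, serialized as JSON,
// in JobInfo 9 so the data mapping configuration can receive it as a runtime parameter.
var fso = new ActiveXObject("Scripting.FileSystemObject");
var ts = fso.OpenTextFile("C:\\Secure Folder\\Portal\\OneDrive - ΠΑΠΑΔΟΠΟΥΛΟΣ ΑΕ\\WEST\\WEST_from\\output.txt", 1); // 1 = ForReading

var CloudOut = [];
while (!ts.AtEndOfStream) {
	var s = ts.ReadLine().split("\t");
	CloudOut.push({
		UniqueEnvelopeID: s[0],
		pgStr: s[10],
		Aux3: s[8]
		// ...add the remaining columns exactly as in the DataMapper version
	});
}
ts.Close();

// JSON.stringify() may not be available in every script engine version;
// if it is not, build the JSON string manually instead.
Watch.SetJobInfo(9, JSON.stringify(CloudOut));

On the data mapping side, that JobInfo can then be declared as a runtime parameter and turned back into an array with JSON.parse(), after which the find() lookups shown above work unchanged.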