Extract data from two sources

Hi Folks
I have a plain-text file from which I extract data by position on each line.
Each line contains an identification code for a municipality, but no address, no municipality name and no sub-item number.

The logic would be as follows:
When the DataMapper extracts the code on a line, a field called "addsAlt" is filled with the address that corresponds to that code.
The next line has a different code, which changes the address stored in "addsAlt".
Another field, "nameAlt", holds the corresponding municipality names.

Is there any way to keep this information in a separate TXT, CSV or even JSON file, and add new addresses to that file, so that we can abandon the routine of opening the DataMapper configuration to change or add addresses?

This case reflects my own needs, but the same idea could apply to other scenarios, such as fruits: each fruit product has its own EAN code, and when the DataMapper extracts that EAN, the other fields would be tied to the respective fruit name and description.

Feel free to propose alternatives for the same case.
BR

You can do so by updating your data records.

Basically, in your DataMapper you create the empty field that you'll fill after the data mapping process.

Then, in Connect Workflow, you update either the metadata or the JSON (JSON is easier to work with via script) coming out of your DataMapper.

Then, in your Create Content task, check Update Records from Metadata if you opted for the metadata approach; if you went with JSON, change the Data Source to JSON instead.
The JSON String should be set to %c if you modified the current JSON file coming out of the data mapping.
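For instance, here is a minimal sketch of the JSON route as a Run Script task (JScript) placed between the Execute Data Mapping and Create Content tasks. It assumes the data mapping output is a JSON array of records whose fields sit under a fields property (adjust the paths to your actual JSON layout); the IDalt/nameAlt lookup is purely illustrative, and readUtf8/writeUtf8 are just helper names:

var jsonFile = Watch.GetJobFileName();            // current job file = the JSON coming out of the DataMapper
var records  = JSON.parse(readUtf8(jsonFile));

for (var i = 0; i < records.length; i++) {
  // hypothetical lookup: fill the empty field created in the DataMapper
  if (records[i].fields.IDalt == "119100") {
    records[i].fields.nameAlt = "TORONTO";
  }
}

writeUtf8(jsonFile, JSON.stringify(records));     // Create Content then picks it up via %c

function readUtf8(path) {
  var s = new ActiveXObject("ADODB.Stream");
  s.Type = 2;                                     // 2 = Text
  s.CharSet = "utf-8";
  s.Open();
  s.LoadFromFile(path);
  var txt = s.ReadText();
  s.Close();
  return txt;
}

function writeUtf8(path, text) {
  var s = new ActiveXObject("ADODB.Stream");
  s.Type = 2;
  s.CharSet = "utf-8";
  s.Open();
  s.WriteText(text);
  s.SaveToFile(path, 2);                          // 2 = overwrite existing file
  s.Close();
}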

@Lazim: in addition to jchamel’s suggestions, there are also various ways of doing this entirely inside your data mapping configuration, but picking the proper method depends on many factors:

  • is that second file large (i.e. does it have only a few records or thousands of records)?
  • can you share a data mapping config that demonstrates what you are currently doing?

Phil, I currently have a list of 74 cities where our client is located.
It happens a lot that they change an address or add another city to the list.
This particular client has an average of 12 'processes' in the workflow (here we call them 'matrices').
Depending on the location in the document, we make these address changes so that the PDF is consistent with the registered city.
We do this using JavaScript, but it's a lot of work to change even a single comma across all 12 of these processes (I sincerely pray that this client doesn't invent another 'process' or update any of these addresses, LOL).
Here are some details:

  • To set the city name, another field named 'nameAlt' was created in the DataMapper with the following script
    (IDalt is the field that was extracted at the given position):

//
var IDalt = record.fields.IDalt;
var nmorgao = "";
switch (IDalt) {
  case '119100': nmorgao = 'TORONTO'; break;
  case '119200': nmorgao = 'QUEBEC'; break;
  case '207700': nmorgao = 'BANFF'; break;
  case '207720': nmorgao = 'OTTAWA'; break;
  case '260090': nmorgao = 'HALIFAX'; break;
  default: nmorgao = ''; break;
}
record.fields.nameAlt = nmorgao;
//

  • The script that determines the addresses for the 74 cities is the same, just for the 'addsAlt' field.
    That way I can place the 'addsAlt' and 'nameAlt' fields in the template (a minimal template-side sketch follows below).
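For reference, a minimal sketch of the template side in the Designer, assuming a placeholder element with the hypothetical id address-block; the script simply injects the two DataMapper fields:

// Standard Designer script, selector: #address-block (hypothetical)
results.html(record.fields.nameAlt + "<br>" + record.fields.addsAlt);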

If it’s only 74 cities, then you should have Workflow pass a JSON array of those 74 addresses as a Runtime Parameter to your data mapping configuration. That way, the data mapping process can look up the field value inside the array it receives from Workflow.

You can set things up so that Workflow reads the array from a disk file before passing it to the DataMapper. So whenever the customer changes an address or adds a new one, you simply update the file on disk and your data mapping config will adjust accordingly.

I must have expressed myself badly.
The JSON file doesn’t exist yet; I only mentioned it as a possible alternative.
In fact, there is no file other than the one used in the DataMapper, which is where we extract the 'IDalt' field.

The entire list of 74 cities lives in the scripts I shared earlier:
'nameAlt' is the script with a 'case' entry for each city name, and
'addsAlt' is the script with a 'case' entry for each city address.

So every time an address is updated or a new city is added, I need to open the DataMapper in each of the 14 matrices and change the field scripts one by one.

Very hard work, I confess.

That’s why I think there must be a way to simplify this.

Thank you very much for your help.

Then I don’t understand the issue. If you are able to, send me the DataMapper configuration via private message and I will be able to understand what you’re asking for.

I believe I have sent the file correctly.

@Lazim: I received your file and I looked at the data mapping config, but I cannot find any reference to the IDalt and nameAlt fields you mentioned in your earlier posts. I checked in the template as well.

I don’t see any spot where an address is being replaced by something else.

In other words, I still have no idea what you’re asking for…

@Lazim,

Now that you have explained it via private message, I understand the issue, and it is very easy to implement a solution so that you never have to touch the data mapping config again when an address changes or a new one is added to the list.

You need to create a JSON file that you will store on disk (let’s call it "C:/dm/addresses.json").
This JSON must be formatted like so:

{ 
  "119100": "MONTREAL",
  "119101": "TORONTO",
  "119102": "VANCOUVER",
  "119103": "OTTAWA",
  ... // and so on for all your addresses
}

In your data mapping configuration, add a Runtime parameter named addr. For its default value, just put something like { "119100": "DEFAULT VALUE" }.

Then replace your entire script with:

var IDalt = record.fields.IDalt;
var addr = JSON.parse(automation.parameters.addr);   // the runtime parameter passed in by Workflow
var nmorgao = addr[IDalt] || "NO VALUE";
record.fields.nameAlt = nmorgao;

This will look in the runtime parameter for the address that corresponds to the value of the IDalt field. If it doesn’t find a match, it will simply store "NO VALUE" instead (you can change that to an empty string if you prefer).
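For instance, if you would rather fall back to an empty string, the lookup line simply becomes:

var nmorgao = addr[IDalt] || "";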

That’s it for your data mapping configuration; just make sure to send it to Workflow.
Now you need to make a small change to your Workflow configuration so that it reads the JSON file from disk and passes its content to the data mapping config.

First, in your process that calls the DM config, insert a Run Script task immediately before the Execute Data Mapping task, with the following code:

var addr = loadFile("C:/dm/addresses.json");
Watch.setJobInfo(9, addr);

// Reads a UTF-8 text file from disk and returns its content as a string
function loadFile(filename){
  var stream = new ActiveXObject("ADODB.Stream");
  var fileStr;
  stream.Type    = 2;              // 2 = Text
  stream.CharSet = "utf-8";
  stream.Open();

  stream.LoadFromFile(filename);
  fileStr = stream.ReadText();
  stream.Close();
  return fileStr;
}

This script stores the content of the JSON file into JobInfo 9 (we have to use a script instead of the Load External File task because the JSON may contain Unicode characters).

And finally, in your Execute Data Mapping task, set the addr runtime parameter to %9.
From now on, whenever an address changes or if a new code with a new address is added, you just need to edit the JSON file on disk, without having to make any changes to your Workflow process, your data mapping config, or your template!

Thanks Phil
I’ve followed all the steps but still can’t get it to work.
I’ll try again to see if I get the expected result.
BR