The loadjson() function is available in both the DataMapper and Designer modules, wherever a script is used (for example, in preprocessor scripts and Action steps in the DataMapper, or in Scripts in the Designer).
This gave me the idea of using such an approach in the DataMapper…
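For illustration, here is a minimal sketch of what I had in mind for a script-based Extract field. The file path, field names, and JSON structure are made up, and I am assuming loadjson() accepts a file: URL and returns the parsed object:

```javascript
// Minimal sketch for a JavaScript-based Extract field.
// The path, field names and JSON structure are hypothetical.
var lookup = loadjson("file:///C:/data/customers.json");

// Look up the current record's ID in the external JSON;
// the script's result becomes the extracted field value.
var id = record.fields.CustomerID;
lookup[id] ? lookup[id].name : "";
```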
The referenced How-To only partly works: there is no way to get the enriched metadata back to the main branch after sequencing.
Unfortunately, the information in that tips & tricks article was either outdated or wrong; I have removed the DataMapper references from it. Worse still, there is indeed an issue with the Metadata Sequencer when it is used in a branch.
We are investigating the sequencer issue. For now, the only workaround is to use a Script that loops through the metadata and complements it directly (in such a script you could also load the JSON file(s) and JSON.parse() them to access their data).
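To illustrate the idea, here is a rough sketch of such a Run Script task (JScript). The file path, field names, and JSON structure are made up, and the exact metadata API calls may differ in your version, so treat this as a starting point rather than a finished script:

```javascript
// Rough sketch for a Workflow Run Script task (JScript).
// Path, field names and JSON structure are hypothetical.
var fso = new ActiveXObject("Scripting.FileSystemObject");
var stream = fso.OpenTextFile("C:\\data\\lookup.json", 1);
var lookup = JSON.parse(stream.ReadAll());
stream.Close();

// Load the current metadata, complement each document, save it back.
var meta = new ActiveXObject("MetadataLib.MetaFile");
meta.LoadFromFile(Watch.GetMetadataFilename());
var group = meta.Job().Group(0);
for (var i = 0; i < group.Count; i++) {
    var doc = group.Document(i);
    var id = doc.FieldByName("CustomerID");
    if (lookup[id]) {
        doc.Fields.Add("CustomerName", lookup[id].name);
    }
}
meta.SaveToFile(Watch.GetMetadataFilename());
```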
The sequencer replaces the original metadata file with a new one on every iteration. Updating the original metadata file has never worked (at least in my PlanetPress lifetime); I always had to save an intermediary file, reload it, interpret it, and so on…
I have already solved my problem by passing a delimited string as a variable from the Workflow to the DataMapper and splitting it there. Loading a text file and using JSON.parse() would also work; I still have to test that.
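For reference, the splitting side of that approach looks roughly like this in a DataMapper script. I am assuming the Workflow passes the string in Job Info 9; the delimiter and the job info number are just examples:

```javascript
// Minimal sketch: read a delimited string passed from Workflow.
// The job info number and the delimiter are just examples.
var values = automation.jobInfo.JobInfo9.split(";");

// values[0], values[1], ... are now available to the extraction logic.
```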
But now I have another problem: is there any way to skip a record in the DataMapper?
Let's say I have a list of statements to skip, and I don't want them to be generated as records.
I know I could also do this in the Workflow, but it would be much easier in the DataMapper. I have already done the definition work there, and the input data is pretty weird… It doesn't make sense to replicate that logic in the Workflow.