This is a request from a couple of customers. The idea is to have an optional condition on a process at the top level, a condition that can be based on the value of a global variable. The global variable could be set to a value by another process and in effect turn off a process (still active, just not pulling in files). A “PreCondition” or “PreProcess” step of sorts. This step would be checked just prior to the process being run. If the condition is true, the process runs as normal, but if false, the process would be skipped until the next cycle.
One of the potential uses is for DEV/DR vs PROD, where they want the same configuration for every environment, but want to “turn off” a process for DEV/DR that is on for PROD. Another use would be to “turn off” a process in a specific instance for “user workflow” steps/actions for flow control (i.e. user workflow in the sense that User 1 performs a task and it is then routed to User 2 to perform another task, as opposed to the concept of a PlanetPress workflow). There are other situations where this would be helpful.
I see what you’re getting at, but this is already largely doable with the workflow as it stands.
Consider that you’ve got 2 processes that you want to dynamically enable/disable. Currently they each start with some input. Maybe printer inputs or folder captures; it doesn’t really matter what. That initial input will be changed to be the second task in the process. The first will be a new folder capture that is specifically looking for a ‘trigger’ file. That trigger file in turn would be created by another process that looks something like this:
Each of those conditions is checking your global variable. If the condition is true, the trigger file goes off to the trigger folder that then lets that process run once. This whole process then becomes the timing mechanism for your other processes. So if this is set to run every 4 seconds, all of the processes it triggers are also potentially running as fast as once per 4 seconds.
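If you were to implement one of those condition checks as a script condition instead of a Text Condition, a minimal sketch could look like the following; the variable name global.ProcessA_Enabled is just an assumption for the sake of the example:

// Minimal sketch of a script-based condition (a Text Condition on
// %{global.ProcessA_Enabled} works just as well).
// The variable name is an assumption; use whatever global you actually defined.
var enabled = Watch.GetVariable("global.ProcessA_Enabled");
// Script.ReturnValue = 1 means the condition is true and that branch runs.
Script.ReturnValue = (enabled == "true") ? 1 : 0;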
Those processes pick up the trigger file through the initial Folder Capture, and the very next step destroys the trigger as they pick up whatever files they’re meant to from whatever other source.
Inputs themselves can also use variables for their configurations. So, perhaps instead of the above method you use your global variable to set the input path. Take a folder capture, for instance. When it’s “on” it’s pulling from the normal source of C:\Work\ProcessA. When it’s “off” it’s pointed at some empty folder like C:\NoCapture. The same concept should work for any input.
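As a rough sketch, a script in the startup (or control) process could flip that path; the names global.ProcessA_Enabled and global.ProcessAInput are assumptions here, and the Folder Capture’s folder field would then simply reference %{global.ProcessAInput}:

// Sketch only: point the Folder Capture at the real folder or at an
// empty "parking" folder, depending on an on/off flag read earlier.
// global.ProcessA_Enabled and global.ProcessAInput are assumed names.
var enabled = Watch.GetVariable("global.ProcessA_Enabled");
Watch.SetVariable("global.ProcessAInput",
    enabled == "true" ? "C:\\Work\\ProcessA" : "C:\\NoCapture");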
This actually works nicely with the whole “Same config, multiple servers” concept. Each server has its own startup config file stored on the disk that sets a pile of global variables unique to it. This is picked up by a startup process, so as the server is turned on, the first thing it does is read in its config file to set the variables. Those variables are what are being accessed in the various inputs, rather than hard-coded values. If you want to change them on the fly, that can be done by feeding a secondary config file into a special process designed for the purpose.
Currently I use a similar approach. With a startup process I read values from an external XML file to set global variables. If the input is a folder, then I set the path with a global variable. This allows me to have different paths for DEV and PROD. When there isn’t a DEV location, I use a local folder such as the “NoCapture” folder you mentioned. At other times I use a text condition to check if the global environment variable is set to PROD and perform actions based on the result of this condition.
The customer’s objection is that if this process is running every 4 seconds, there is increased load on the server and additional entries in the log: create file, check condition, delete file every 4 seconds when the value is false.
To my knowledge there isn’t a way to make a specific process inactive using a script (such as in the startup process). The goal is to use the same config file in DEV as in PROD without any modifications. I have a process that works, but the customer is questioning whether it could be done better with the feature request.
I don’t even know if the feature request is feasible, but said that I would post the idea.
The customer’s objection is that if this process is running every 4 seconds, there is increased load on the server and additional entries in the log: create file, check condition, delete file every 4 seconds when the value is false.
All very good points. I’d just like to address this one in particular. Primarily playing devil’s advocate here as I think there’s some merit to this otherwise.
In my first method, this is absolutely the case. The trigger process will continue to log every run, writing to the disk both to create its working file and to write its log. Though the load is minuscule, a write is a write.
However, with the second method, where you have the input paths in global variables and you change them mid-run, the logging is at least halted. If a process checks a folder and finds nothing, nothing is logged. This also requires no conditions to be checked. It simply changes its input location and finds nothing.
Still, it does indeed continue to check the folder, which in turn adds a very tiny amount of overhead as it reads the disk. So disabling the process on the fly would allow you to save that small amount of processing time.
Like I said, this is a feature request by a couple of customers because of their perceptions. I’m guessing they are primarily annoyed by the logging. Both customers want verbose logging, but don’t like the entries that check and then delete the file when the condition does not match.
I appreciate the detailed responses. I hadn’t thought of the argument that the checks are still being performed, so the performance difference would be small.
I enjoyed reading this thread because it validates the changes we have planned for future releases of Workflow. Many of Uomo’s concerns (as well as many of Albert’s workarounds) will be addressed by those changes. I unfortunately can’t say much more (if I did, they’d have to kill me… ) but know that the first major parts of this evolution are scheduled for release this year.
We have a config which requires a Print to Printer in Production, but Print to PDF in Development/Test. We also require SMTP to server A in Prod, and SMTP to server B in Development/Test.
The concept of conditional environment output presets which sit above print and email elements seems to be quite critical in the context of our organisation. As this feature is not currently known to me, I’m thinking I need to implement it using host file tweaks or scripts.
May I please get further advice about conditional Print and Email output presets using server-specific config files?
Currently I have an External XML file that contains nodes with the various items that differ between the servers. Within a startup process, using a common script, I read in the XML node values and populate the global variables. The External XML file needs to be present in all server instances and in a static location.
For example the SMTP server name or ip address would be one of the XML node values.
With print to printer versus print to PDF, I would set an “Environment” global variable and then use a text condition to perform one of the options based on the value.
<Environment>PRODUCTION</Environment>
<Environment>DEVELOPMENT</Environment>
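Putting that together, the settings file could look something like this (the element names other than Environment, and the values, are placeholders for this sketch):

<Settings>
  <Environment>PRODUCTION</Environment>
  <SMTPServer>smtp.prod.example.com</SMTPServer>
  <InputFolder>C:\Work\ProcessA</InputFolder>
</Settings>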
In the script it would look something like this. In my case I have a function, but you could simply change the emulation to XML and set the global variables using the Set JobInfos and Variables plugin.
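A minimal sketch of such a script, assuming the captured settings file is the current job file and uses node names like the ones above (Environment, SMTPServer), could be:

// Sketch only: parse the XML settings file picked up by the startup process,
// then copy the values into global variables.
// Node names are assumptions for this example.
var xmlDoc = new ActiveXObject("MSXML2.DOMDocument.6.0");
xmlDoc.async = false;
xmlDoc.load(Watch.GetJobFileName());   // the captured settings file
var env  = xmlDoc.selectSingleNode("//Environment").text;
var smtp = xmlDoc.selectSingleNode("//SMTPServer").text;
Watch.SetVariable("global.Environment", env);
Watch.SetVariable("global.SMTPServer", smtp);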
What @UomoDelGhiaccio explains is the proper way of doing things. In some of our environments, we even take it a step further. Our settings file is a JSON file (it could be XML, but JSON is easier to handle in a script). In that file, we have settings for each of our DTAP servers. The file looks something like this:
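(Sketch only: the machine names and values below are placeholders, but the connectServerIP and resourceFolder keys match what the startup script further down expects.)

{
  "PRODSERVER01": {
    "connectServerIP": "192.168.1.10",
    "resourceFolder": "C:\\Resources\\Prod"
  },
  "TESTSERVER01": {
    "connectServerIP": "192.168.1.20",
    "resourceFolder": "C:\\Resources\\Test"
  }
}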
In our startup process, we then rely on the Windows computer name to determine which portion of the settings applies to the system that is currently running the configuration. That way, we don’t have to create special environment settings:
...
// Get Computer Name
var wshShell = new ActiveXObject('WScript.Shell');
var computerName = wshShell.ExpandEnvironmentStrings('%COMPUTERNAME%');
// variable allSettings contains the entire JSON settings file as an object
var localSettings = allSettings[computerName];
This type of config is easily portable to any new system because we don’t have to create new environment variables for each machine: we just use what Windows already provides.
I agree with @Phil that using the COMPUTERNAME is pretty slick, assuming there are only a few variables that need to be set and only a few servers. Using your example of two different email servers and two different “Print to” methods, a simple JavaScript switch or VB Select Case could be used (a rough sketch follows below).
This example method would eliminate the need for the External XML file.
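A rough sketch of that switch, where the machine names, SMTP hosts and global variable names are all assumptions for illustration:

// Sketch only: pick per-environment values based on the computer name.
// Machine names, SMTP hosts and variable names are all assumptions.
var wshShell = new ActiveXObject('WScript.Shell');
var computerName = wshShell.ExpandEnvironmentStrings('%COMPUTERNAME%');
switch (computerName) {
    case 'PRODSERVER':
        Watch.SetVariable('global.SMTPServer', 'smtp-a.example.com');
        Watch.SetVariable('global.PrintMethod', 'Printer');
        break;
    case 'TESTSERVER':
    default:
        Watch.SetVariable('global.SMTPServer', 'smtp-b.example.com');
        Watch.SetVariable('global.PrintMethod', 'PDF');
        break;
}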
The External XML file method I mentioned is more useful if there are numerous servers and lots of global variables being set. At times there are other functions that may be switched on and off, which makes directly editing the configuration file problematic. In these situations we simply edit the external XML file and restart the services. This may be something such as switching on a custom-built debug mode to assist troubleshooting.
One of the advantages of using an external Settings file is that you can switch values without having to restart the services (as long as you have a process that monitors the folder where that Settings file is stored).
Another advantage is that you can store everything in a Revision Control system. That way, you make all your changes in your Dev environment and commit them to a BitBucket or a GitHub repository. You then move to your Test server and fetch the changes and you don’t have to do anything else: the machine knows its own name and then can pull its own custom settings from the Settings file. Granted, we could have made the changes in the Workflow config file itself, but then you have to be absolutely certain that no other change was implemented in that config.
We had a case in-house a few months back where our QA department wanted to run one of our solutions on an additional machine. We simply added an entry to the Settings file and that’s all that was needed for the entire configuration to work on that new machine. It greatly simplified the QA process because we didn’t have to touch anything in the configuration itself: all changes were constrained to the Settings file.
Sorry to dig out an old post but I’m trying to replicate this setup with our live/test servers and make the testing and rollout of new processes smoother.
I have created and populated the JSON file, but it’s the second part that is confusing me: how do you take the settings and use them in your process?
Create a Startup process that captures your JSON file, then use a scripting task to parse the content of that file and store the values it contains in global variables.
The process could be as simple as this:
Using the sample JSON file I posted earlier, the script would then look something like this:
// Parse the captured settings file (%c expands to the content of the current job file)
var jsonSettings = JSON.parse(Watch.ExpandString("%c"));
// Ask Windows which machine we are running on
var computerName = (new ActiveXObject('WScript.Shell')).ExpandEnvironmentStrings('%COMPUTERNAME%');
// Copy this machine's settings into global variables
Watch.SetVariable("global.IP", jsonSettings[computerName].connectServerIP);
Watch.SetVariable("global.Folder", jsonSettings[computerName].resourceFolder);
The script fetches the COMPUTERNAME environment variable from Windows. Assuming this value can either be “Prod” or “Test”, it then picks the appropriate property from the JSON file to assign the IP address and ResourceFolder values to global variables.