Version control 2020

Is there a solution for version control?

I saw a similar question in 2015, and the answer back then was that there was no version control.

I am working on multiple templates and workflows in a test environment.
I find it difficult to separate the live version, the backup, and the version that is in testing.

Does anyone have an example of how you are handling this?

If you are familiar with Git and/or GitHub, then you might want to use that for version control.

With Git, each developer has their own copy of every resource, so they can always work locally without having to access online resources directly. However, before you start working with a local resource, you have to make sure that it is up to date, so you ask Git to fetch the most recent version from the common server (which only tracks versions and changes). And once you’re done working with a resource, you commit it to that common server, which makes it the most recent version for all to use. You can commit it to the main trunk of development or to any specific branch, which can be merged into the trunk or into another branch later on.
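
For readers who are new to Git, here’s a minimal sketch of that cycle on the command line; the remote name (origin) and the branch names are purely illustrative:

```powershell
# Get the latest changes from the shared server before starting work
git fetch origin
git checkout main
git pull origin main

# Do your work on a separate branch (branch name is illustrative)
git checkout -b feature/invoice-template

# ... edit your templates / workflow configs locally ...

# Record your changes and publish them to the shared server
git add .
git commit -m "Update invoice template headers"
git push origin feature/invoice-template

# Later, merge the branch back into the main line of development
git checkout main
git merge feature/invoice-template
git push origin main
```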

It sounds complicated but it actually is a pretty simple and straightforward process… that is, until you run into conflicts. Those conflicts are generally due to users failing to either fetch or commit properly, which leads to conflicting contents between two versions of the resource. When that happens, you usually have one developer in charge of resolving those conflicts, which can sometimes be a painstaking process. Oftentimes, it’s easier to revert to the last known valid version and redo the work, or to accept one of the versions as the new updated version and then ask the other coder, whose changes were rejected, to integrate their own changes into that updated version once again.
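
To illustrate, here are the kinds of commands typically involved when that happens; the branch and file names are hypothetical, and since templates and configs are essentially packaged binary files, you usually keep one side wholesale rather than merging line by line:

```powershell
# Suppose this merge reports a conflict in a shared Workflow config (file name is hypothetical)
git merge feature/ar-processes

# Option 1: keep one version wholesale
git checkout --ours workflow-config.OL-workflow    # keep the current branch's version
# or: git checkout --theirs workflow-config.OL-workflow
git add workflow-config.OL-workflow
git commit -m "Resolve conflict by keeping the current version of the config"

# Option 2: abandon the merge and go back to the last known valid state
git merge --abort
```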

Using Git allows several users to work on different resources simultaneously, that’s easy. But if several users have to work on the same resources at the same time (think, for instance, of different Workflow Processes being worked on separately but that are ultimately intended to be run inside a single config), then you have to plan things carefully. In our case, here’s how we use it to manage some of our OL Connect-based solutions (the ones available on demo.objectiflune.com):

Coder 1: works on POD processes
Coder 2: works on AR processes
Coder 3: works on CM processes

All three of us work on a different Workflow config, containing only our own stuff. When we are ready to put all of them on the test server, we each commit our config to Git. Then one of us logs on to the test server and fetches all three configs from Git. That person then opens Workflow and imports all three configurations into a single config. Of course, we have all previously agreed on a naming convention to prevent any naming collisions in our processes and variables, and we have also engineered the system so that common folders also follow a set hierarchical structure.

To accommodate the dev/test/prod environments, we commit to and fetch from different branches in Git. This allows us to “promote” a test branch to production status, and it also allows us to go back to a previous version if anything goes wrong while upgrading our resources.
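
As a hedged sketch, with one branch per environment (the dev/test/prod branch names and the tags are assumptions, not a prescribed layout), the promotion and rollback steps might look like this:

```powershell
# Promote the current test branch to production
git checkout prod
git merge test
git tag -a v2020.1 -m "Promote test to production"
git push origin prod --tags

# Roll back to a previously tagged version if the upgrade misbehaves
git checkout v2019.2    # detached HEAD; fetch the resources from here and redeploy
```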

This is just a very broad overview of how we work; there are a number of additional details that also come into play, but it should give you an idea of the kind of process you will have to plan for if you want to work concurrently on different resources and start managing versions of those resources.

Hi Phil,

Thank you for your very detailed answer! 🙂
Our development team already uses Git, so I will also have a look into that.

I was wondering how others are handling this, thank you for sharing!

I am new to this product and have been brought into a team currently maintaining v7. We will be upgrading to Connect in Q1-Q2. I am very interested in whether other teams have implemented version control yet. I would like to take it a step further and implement a type of CI/CD. My initial vision: our company currently uses Azure DevOps. I would like to check Design documents and Workflow configurations into ADO Git repositories, and have build pipelines “send to workflow”. I believe that would mean that a piece of the suite which converts the pp7 into a format consumable by Workflow would need to be extracted and executed via PowerShell. How far from reality is this?

Well, you have to understand that Connect Templates and DataMapper configurations are essentially ZIP files, so you wouldn’t be able to merge multiple parts of code together as you would with programming languages.

The Workflow configuration is an XML file.

PlanetPress Suite (PPSuite) forms are usable as-is in the Connect Workflow environment; no conversion is needed. Of course, you’ll need to either keep a VM with PPSuite installed or have it installed on the same machine as Connect (I suggest the former), should you decide to keep the forms in that format and modify them at some point. You could also redesign them completely using Connect.

As for the rest of your enquiry, I’ll let other forum users who are more knowledgeable than me on this answer it.

We use GitHub in our environment as well. I’m relatively new to both GitHub and PlanetPress, plus we’re migrating from PP7 to OL Connect 2020.

My colleagues tell me to save and commit often, which is good. What I initially failed to understand was how to use branches for development work while leaving the main branch alone until I’m ready to merge them back together.

Another tip is to leave detailed commit comments since you cannot do text diff comparisons between your design files. Workflow XML is a different matter, of course.

May I please confirm what the behind-the-scenes steps are in PlanetPress Connect Designer “Send to Workflow” and PlanetPress Workflow Configuration “Send Configuration”?

Is it possible to put files into a specific location, restart services, then have the server apply the new files?

Typically using CI/CD work practices means minimal manual steps. If you wanted to perform these steps without using a GUI, what would you do?

Both of these operations are handled by the Messenger service, which manages communications between modules/machines.

When using the Send To Workflow option from Designer, Messenger provides a list of other instances on the same subnet, allowing you to pick a destination for the resources you want to send. Messenger then forwards those resources to the Messenger service on the target machine (or “to itself”, if the destination is on the same PC). The receiving Messenger service then copies all the resources to its local C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\Documents\In folder, which is constantly being monitored by the Workflow service (and the Workflow configuration tool). There is no need to stop/restart the services; the resources become available immediately.

So in a CI/CD environment, you can automate your own process for sending the resources to that folder on the target machine.
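
For instance, a deployment step could be as simple as copying the exported resources into that monitored folder; here is a minimal PowerShell sketch, where the target path and the file names are assumptions for illustration:

```powershell
# Folder monitored by the Workflow service on the target machine (via its admin share)
$inFolder = '\\TARGET-SERVER\C$\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch\Documents\In'

# Copy the resources produced by your build (file names are hypothetical)
Copy-Item -Path '.\build\*.OL-template', '.\build\*.OL-datamapper' -Destination $inFolder

# No service restart is needed; Workflow picks the resources up automatically
```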

When using the Send configuration option from the Workflow configuration tool, the process is almost exactly the same except for the following:

  • The receiving Messenger service stores the configuration file in the C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch folder under the name ppwatch.cfg.
  • The receiving Messenger service issues shutdown/restart requests to the Workflow service, ensuring that any jobs currently being processed complete as expected before the service is restarted.

So in a CI/CD environment, you can automate your own process for sending the configuration file to that folder on the target machine (don’t forget to name it ppwatch.cfg). Then you can issue the following commands to stop/restart the Workflow service:
net stop ppwatch8
net start ppwatch8

Note that your process must have the proper rights to shut down and restart services.
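
Putting it together, a deployment step for the configuration might look like the following PowerShell sketch; the source file name and build folder are illustrative assumptions, and ppwatch8 is the service name used in the commands above:

```powershell
# Folder where the Workflow service expects its configuration
$watchFolder = 'C:\ProgramData\Objectif Lune\PlanetPress Workflow 8\PlanetPress Watch'

# Copy the configuration exported from the Workflow tool, renaming it to ppwatch.cfg
Copy-Item -Path '.\build\my-processes.OL-workflow' -Destination (Join-Path $watchFolder 'ppwatch.cfg')

# Restart the Workflow service so the new configuration is loaded
# (equivalent to the net stop / net start commands above; requires admin rights)
Restart-Service -Name 'ppwatch8'
```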

Phil, there is STILL a long-standing issue with Workflow not consuming all the documents in the “IN” folder until/unless the Workflow Services are restarted.

@TDGreer: you are right, there are some edge cases when resources get stuck in the Documents\In folder until the next service restart, but for the vast majority of users, the method described above works.

To make it absolutely foolproof, you could also restart the service just like for the Workflow configuration.

Alternatively, you could do it all from inside a Workflow script by using the Watch.InstallResource() method. This process is a little more involved, but it’s guaranteed to work without restarting the services.