We have 3 PP Connect 2019.1 servers running in production.
All 3 servers access the same MS SQL database. Sometimes the SQL updates done by the Database Query plugin in the workflows fail.
All with this error: Error running SQL: [Microsoft][ODBC SQL Server Driver][SQL Server]Transaction (Process ID 222) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction
This means the data is not written to the database correctly.
We had the same issue in the PlanetPress Suite when the customer was still running on that version.
I have never seen the deadlock when using INSERT statements or SELECT statements - only when I’m using the UPDATE statement.
I’m beginning to think that the Database Query plugin is not good enough for this many transactions (around 4000 a day for this process), and every day 2-5 of the updates end up in a deadlock.
Does anybody have a hint on how I can move this simple UPDATE statement to a VB script?
%{global.DSN}
UPDATE dbo.Docs
SET
EboksEnabled = 'True',
EboksTjecked = 'True',
Production = 'eboks'
WHERE (Filename = '%{Filename_PDF}')
UPDATE statements are quite different from INSERT and SELECT statements:
INSERT adds new records, so it can’t run into a deadlock issue, since it is not writing to existing records that might be in use elsewhere
SELECT just reads records, so it doesn’t care if someone is currently writing to a record, it just takes the latest snapshot of that record’s content
UPDATE, on the other hand, writes data to existing records, which could themselves currently be being written to by another process. In that case, one of the operations must fail, and it’s the MSSQL server that decides which one it rejects. Since a single UPDATE statement can write to several records at once, it is issued as a transaction: if any of the records fail, then the entire transaction is reverted. And when the entire transaction is reverted, that’s when the DB Query task fails with the error you quoted.
You could modify your Workflow process so that in the event the DB Query plugin fails because of a deadlock, you would run it again automatically.
Alternatively, you could write a script that handles each individual record update as its own transaction, but that would be extremely detrimental to performance. Instead, your script should issue the whole UPDATE as a single transaction, just like the DB Query task does, but retry on deadlock errors as many times as necessary until the update is accepted by the server.
There are plenty of tutorials on the web explaining how to use the ADODB object to access databases from scripting. You can find a fairly complete reference for that object on this page: ADO Connection Object.
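For what it’s worth, here is a rough sketch of what such a script could look like in a Run Script task (VBScript). It assumes %{global.DSN} expands to a full ODBC connection string and %{Filename_PDF} holds the file name, exactly as in your Database Query task; the retry count and delay are arbitrary, and I’m assuming Watch.Sleep and Watch.Log are available in your Workflow version (check the scripting reference).

' Rough sketch only: %{global.DSN} is assumed to expand to a full ODBC
' connection string; retry count and delay are arbitrary.
Const MAX_ATTEMPTS = 5
Dim conn, sql, fileName, attempt, success

' Double up single quotes so the file name cannot break the statement
fileName = Replace(Watch.ExpandString("%{Filename_PDF}"), "'", "''")
sql = "UPDATE dbo.Docs SET EboksEnabled='True', EboksTjecked='True', " & _
      "Production='eboks' WHERE Filename = '" & fileName & "'"

Set conn = CreateObject("ADODB.Connection")
conn.Open Watch.ExpandString("%{global.DSN}")

success = False
For attempt = 1 To MAX_ATTEMPTS
    On Error Resume Next
    conn.Execute sql
    success = (Err.Number = 0)
    On Error GoTo 0
    If success Then Exit For
    Watch.Log "UPDATE attempt " & attempt & " failed", 2
    Watch.Sleep 500   ' brief pause before the next attempt (assumes Watch.Sleep exists)
Next

conn.Close
Set conn = Nothing

' Make the task itself fail if every attempt was rejected
If Not success Then Err.Raise vbObjectError + 1, "Docs update", _
    "UPDATE still failing after " & MAX_ATTEMPTS & " attempts"

A parameterized ADODB.Command (or the stored-procedure approach further down this thread) would be the cleaner way to pass the file name, but the Replace() on the quotes at least keeps a stray apostrophe from breaking the statement.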
Not necessarily: you could simply log a custom message (e.g. “FAILED”) and store it in a local variable or jobinfo. Then immediately after the task, check if that variable contains the word “FAILED”.
You could even do that inside a loop, but that might be overkill: a deadlock only occurs for a brief period of time, retrying once or twice should be sufficient.
Like in the picture…?
Normally I just kill the process when an error occurs - in this case I just kill the branch
Then, outside the branch, I check the “Update_Check” variable, and if the error is there I just run the update process again.
In this case I would probably also have to remove the Send to process check mark.
You could set the action to Default and uncheck the Send to process option. Immediately after the task, use a condition to check if your Update_Check variable contains your custom error message. If it does, run the task a second time, but now using the same settings you currently have. So essentially, you’re running the task twice before calling it a day.
This would likely get rid of most, if not all, of your issues. Obviously, the more retries, the more likely the process is to succeed in the end.
I have encountered this before. What I did to overcome it was create a global variable called “hold” which was set to 0.
In the process that is writing to the DB, change the UPDATE statements to INSERTs into a holding table (temp table) with timestamps, or change the write output to a log file.
Then, periodically (set an interval on the process), change the “hold” variable to 1 (i.e. True), and in that process do an update to your main table, but in your update select the top 1 ordered by timestamp DESC from your temp table (see the sketch after this post). Once complete, wipe the remaining records.
This way you are writing the latest update. Once this process is complete, set the “hold” variable back to 0.
(If not all the records are a full update, this will need some fine-tuning; you could possibly do away with the temp table altogether and just pause the main process for each update, but that would be time-consuming.)
If the main process is set within a condition, such as a do-while loop, you will get the desired result.
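To make the flush step concrete, here is a rough sketch of what the interval process could run (again via ADODB from a Run Script task). The holding table name (Docs_Holding) and its timestamp column (InsertedAt) are just placeholders for whatever you actually create, and the “hold” variable itself would still be toggled by the Workflow process around this script.

' Sketch only: dbo.Docs_Holding and its InsertedAt column are hypothetical names
Dim conn
Set conn = CreateObject("ADODB.Connection")
conn.Open Watch.ExpandString("%{global.DSN}")

' Apply the most recent pending row per file name to the main table...
conn.Execute _
    "UPDATE d SET d.EboksEnabled = h.EboksEnabled, " & _
    "             d.EboksTjecked = h.EboksTjecked, " & _
    "             d.Production   = h.Production " & _
    "FROM dbo.Docs d " & _
    "CROSS APPLY (SELECT TOP 1 x.EboksEnabled, x.EboksTjecked, x.Production " & _
    "             FROM dbo.Docs_Holding x " & _
    "             WHERE x.Filename = d.Filename " & _
    "             ORDER BY x.InsertedAt DESC) h"

' ...then wipe the remaining holding records
conn.Execute "DELETE FROM dbo.Docs_Holding"

conn.Close
Set conn = Nothing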
I’ve been looking into this one recently, so if anybody is still searching for an answer in 2022, here’s the code for SQL Server 2012 and up. I STRONGLY advise turning this into a stored procedure (that’s the only way I’ve ever run this code in Workflow, so I’m not even sure the syntax will be fully supported) with the filename as a parameter, and sanitizing the input properly in case somebody decides to pull a fast one and feeds the following file
The code attempts the update and fails if no records match the file name. If a deadlock is encountered, it retries with an incrementing delay, giving up after 50 attempts:
/* @FileName is the stored procedure's input parameter (the file name to update) */
/* Retry state: give up after @Attempts tries */
DECLARE @Attempts INT = 50
DECLARE @Attempt INT = 1

TryUpdate:
BEGIN TRY
    BEGIN TRAN

    IF EXISTS (SELECT 1 FROM dbo.Docs WHERE Filename = @FileName)
    BEGIN
        UPDATE dbo.Docs
        SET
            EboksEnabled = 'True',
            EboksTjecked = 'True',
            Production = 'eboks'
        WHERE Filename = @FileName
    END
    ELSE
    BEGIN
        DECLARE @Msg VARCHAR(8000) = CONCAT('File ', @FileName, ' does not exist in dbo.Docs')
        ;THROW 51000, @Msg, 1
    END

    COMMIT TRAN
END TRY
BEGIN CATCH
    /* A deadlock victim's transaction is already rolled back by the server */
    IF @@TRANCOUNT > 0 ROLLBACK TRAN

    IF ERROR_NUMBER() = 1205 AND @Attempt <= @Attempts
    BEGIN
        /*--PRINT CONCAT('DEADLOCK ON ATTEMPT ', @Attempt)--*/
        /* Wait a little longer on every retry, then jump back and try the update again */
        DECLARE @WaitFor VARCHAR(12) = FORMAT((@Attempt - 1) * 0.01, '00:00:00.000')
        SET @Attempt = @Attempt + 1
        WAITFOR DELAY @WaitFor
        GOTO TryUpdate
    END
    ELSE
    BEGIN
        ;THROW
    END
END CATCH