Frequent deadlocks

Last Post 21 Mar 2008 08:26 PM by HarryNejad. 7 Replies.
New Member

08 Mar 2008 04:57 PM

I used to have some other queries that always deadlocked on the same tables (REC_TRX and REC_TRX_SCRUB) and those deadlocks have now been resolved.

Yet there is another query, embedded in the application and used frequently by several users (about 10), that deadlocks very frequently (with another instance of the same query).

The query selects a single record and inserts into another table, and the select is right on the primary key.

All the indexes including the PK on both tables are non-clustered.

It is SQL 2000

I have tried everything to resolve this, but it keeps happening about 50 times a day.

I have dropped and recreated all the indexes and PKs.

I have run the following (and they all completed with no errors reported):

dbcc checktable ('REC_TRX') with all_errormsgs

dbcc checktable ('REC_TRX_SCRUB') with all_errormsgs

dbcc checkalloc with all_errormsgs

Is it reasonable to believe that there could be a hardware problem?

Is there any other dbcc query or the like that could shed more light on this?

Any ideas?

Here is the query:

begin transaction
insert REC_TRX ( ... very long column list ... )
select ... very long column list ...
where ARC_CTR = '..........'
and FAS_FILE_NAME = '..........'

if @@error <> 0 rollback transaction
delete from REC_TRX_SCRUB
where ARC_CTR = '............'
and FAS_FILE_NAME = '.............'

if @@error <> 0
rollback transaction
commit transaction
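Note that as posted, ROLLBACK does not stop the batch: after a failed INSERT, the DELETE and the final COMMIT still run. A control-flow sketch with explicit RETURNs follows; the elided column lists and literals are left elided, and the FROM clause on the SELECT is inferred from the rest of the thread, which says the SELECT reads REC_TRX_SCRUB:

```sql
begin transaction

insert REC_TRX ( ... )           -- elided column list, as in the post
select ...                       -- elided column list
from REC_TRX_SCRUB               -- inferred from the discussion in this thread
where ARC_CTR = '..........'
  and FAS_FILE_NAME = '..........'

if @@error <> 0
begin
    rollback transaction
    return                       -- without this, execution falls through to the DELETE
end

delete from REC_TRX_SCRUB
where ARC_CTR = '............'
  and FAS_FILE_NAME = '.............'

if @@error <> 0
begin
    rollback transaction
    return
end

commit transaction
```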

New Member

09 Mar 2008 12:57 AM
This is one of the simplest textbook cases of a transaction bound to deadlock.

General advice: don't use the same table more than once in a transaction.

One process is waiting to do a SELECT on REC_TRX_SCRUB while another is waiting to do a DELETE on REC_TRX_SCRUB.

They are waiting for each other, and the deadlock detector kicks in.

A transaction has to be small and fast. DELETE is rarely fast. The INSERT may not be fast either if it inserts multiple rows. If the indexes have deteriorated, everything slows down further.

100% solution: remove the DELETE from the transaction. Setting up a nightly batch delete is one option.

95% solution: do an update-in-place on an IsActive flag. Update-in-place may solve your issue since it is faster than a delete.

90% solution: put NOLOCK in the SELECT statement (this may cause dirty reads).
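Rough sketches of the 95% and 90% options. The IsActive flag column is an assumption (it is not in the posted schema), and the hypothetical @arc_ctr / @fas_file_name parameters stand in for the generated literals:

```sql
-- 95% option: mark the row inactive instead of deleting it.
update REC_TRX_SCRUB
set IsActive = 0                 -- hypothetical flag column, would need to be added
where ARC_CTR = @arc_ctr
  and FAS_FILE_NAME = @fas_file_name

-- 90% option: dirty read of the source row.
select ...                       -- elided column list
from REC_TRX_SCRUB with (nolock)
where ARC_CTR = @arc_ctr
  and FAS_FILE_NAME = @fas_file_name
```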

Are your indexes in order, or fragmented? Nightly reindexing of dynamic tables with a 70% fill factor may also solve your problem, since it would speed up the transaction.
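On SQL 2000, the nightly rebuild can be done with DBCC DBREINDEX; an empty index name rebuilds every index on the table:

```sql
-- Rebuild all indexes on both tables with a 70% fill factor (SQL 2000 syntax).
dbcc dbreindex ('REC_TRX_SCRUB', '', 70)
dbcc dbreindex ('REC_TRX', '', 70)
```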

Kalman Toth - Database, Data Warehouse & Business Intelligence Architect
BI Workshop NYC - March 17-20:

New Member

09 Mar 2008 05:50 PM

Here are the answers to your questions plus my notes on this issue:
1. You asked for the exact where clause.
The query is composed by the application based on the selections on the screen, so the where clause is different each time, but it always contains literals like the example below:

where ARC_CTR = '3082748' and FAS_FILE_NAME = 'RV_Conversion'

as for the above columns, they are as follows:

ARC_CTR int not null
FAS_FILE_NAME varchar(17) not null

and the PK is like this (copied from sp_helpindex):

PK_REC_TRX_SCRUB: nonclustered, unique, primary key located on PRIMARY; index keys: ARC_CTR, FAS_FILE_NAME

2. You are proposing the following (I am quoting just the last part of your code):

<<< ...
IF @@TRANCOUNT > 0 -- Always check for the @@Trancount to be > 0 before commit or rollback
... >>>

and what should I do "else" (that is if the @@TRANCOUNT is NOT greater than zero)?
Should I rollback transaction?
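For reference, the usual shape of the pattern being quoted, as I understand it: if @@TRANCOUNT is not greater than zero, there is no open transaction, so there is nothing to roll back or commit, and the "else" branch is simply to do nothing. A sketch:

```sql
if @@error <> 0
begin
    if @@trancount > 0           -- only roll back if a transaction is actually open
        rollback transaction
    return
end

if @@trancount > 0               -- nothing to commit if no transaction is open
    commit transaction
```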

3. You are saying:

<<< ... This should only allow one user to get an exclusive lock on the row for the select at a time ... >>>

OK, then does this not mean that the second user's attempt to get the same record will result in a deadlock (because the first user already has it exclusively)?

4. Neither table has any clustered index (including PKs), because the inserts are never sequential based on any of the indexes, and they could result in a lot of data re-organisation on the disk when an insert is attempted.
In fact, the two tables DID have clustered primary keys, and at that time we got far more deadlocks.
I dropped the clustered PKs, and all other indexes, and rebuilt them all as nonclustered, and the deadlocks dropped to the current level (of about 50 a day), which is STILL way too high.

I don't think clustered indexes are the solution. I think they will worsen the problem.

Any help on this will be greatly appreciated.


New Member

09 Mar 2008 06:55 PM

The transaction structure is the problem.

SELECT - followed by a DELETE on the same table within the transaction.

DELETE is generally not a good idea because it trashes the indexes.

DELETE may be slow because it reworks the indexes.

Kalman Toth - Database, Data Warehouse & Business Intelligence Architect

New Member

11 Mar 2008 07:05 AM

1. With regards to your proposed query:
<<< ...
... >>>

I looked at the SQL help and it says this about "HOLDLOCK":
<<< ...
Hold a shared lock until completion of the transaction instead of releasing the lock as soon as the required table, row, or data page is no longer required. HOLDLOCK is equivalent to SERIALIZABLE.
... >>>

So it looks like HOLDLOCK applies to shared locks only and cannot be used in combination with XLOCK the way you are proposing.

Please comment on this.

2. Also, SQL help says this about "XLOCK":
<<< ...
Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement. This lock can be specified with either PAGLOCK or TABLOCK, in which case the exclusive lock applies to the appropriate level of granularity
... >>>

Does this mean that there is no way to use xlock with rowlock (if I wish to lock only one row)?

For example, can I write a query like this:
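(The example query did not survive in the post. Judging from the question just above, it would presumably have had this shape, with hypothetical @arc_ctr / @fas_file_name standing in for the generated literals:)

```sql
select ...                       -- elided column list
from REC_TRX_SCRUB with (xlock, rowlock, holdlock)
where ARC_CTR = @arc_ctr
  and FAS_FILE_NAME = @fas_file_name
```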


Please advise.


New Member

12 Mar 2008 04:17 PM

I have doubts that SQL is scanning the whole table, because no performance issues have been reported even though the tables have many records (REC_TRX_SCRUB, the one being selected, has 215,000 records), but you may be right after all.

Please note that this happens at our client site only (and not in our office), and that the query is composed by the front-end application, so it is very difficult to experiment with the solutions. Yet I plan to do so.

If I were to summarize all of your suggestions, am I right to come to the following action plan:

1. Use the locking hints: with (paglock, holdlock, xlock)

2. Change the where clause to use an actual integer literal (get rid of the single quotes).
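Both changes together would look roughly like this, using the sample values quoted earlier in the thread:

```sql
select ...                       -- elided column list
from REC_TRX_SCRUB with (paglock, holdlock, xlock)
where ARC_CTR = 3082748          -- int literal, no quotes
  and FAS_FILE_NAME = 'RV_Conversion'
```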

Are the above the only things that I can do, or are there other suggestions?

Please note that your recommendations require that I modify the front-end application that composes the query and have our client install it at their site, and this is very difficult as there is a lot of red tape involved.

That is why I need to make sure I do all the recommended changes at once to avoid having to send them new versions of the front-end application repeatedly.

Please advise.

Your help is greatly appreciated.


New Member

21 Mar 2008 08:26 PM

1. --------------------------------------------------------------
Unfortunately, I do not think that clustering the PK could help, because the values of the key columns (ARC_CTR, FAS_FILE_NAME) are not sequential, and when it comes to inserting a record into the table, the totally random values of these fields can put stress on the server to re-organize the records (even with a fill factor, etc.).

The PK used to be clustered and at that time we got a whole lot more deadlocks.

As soon as I recreated the PK as nonclustered, the number of deadlocks fell sharply to the current level (which is still high).

2. --------------------------------------------------------------
I can certainly change the query into a stored procedure call, and I will do so as soon as I get the permission to modify the front-end application.
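A sketch of what that stored procedure could look like; the procedure name is made up, and the column lists stay elided as in the original query:

```sql
create procedure usp_MoveScrubRow    -- hypothetical name
    @arc_ctr       int,
    @fas_file_name varchar(17)
as
begin transaction

insert REC_TRX ( ... )               -- elided column list
select ...                           -- elided column list
from REC_TRX_SCRUB
where ARC_CTR = @arc_ctr
  and FAS_FILE_NAME = @fas_file_name

if @@error <> 0
begin
    rollback transaction
    return
end

delete from REC_TRX_SCRUB
where ARC_CTR = @arc_ctr
  and FAS_FILE_NAME = @fas_file_name

if @@error <> 0
begin
    rollback transaction
    return
end

commit transaction
go
```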

3. --------------------------------------------------------------
One of you suggested that I check the execution plan, because SQL server may actually be scanning the table (and not using the index).

How can I do so when the query is composed by the front-end application and runs at our client's site?

I may be able to get remote access to their server (for example, via Query Analyzer), so I could run a query there, for example to rebuild the indexes. But again, the problematic query that deadlocks frequently is composed by the front-end application, so how can I check its execution plan?
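One option, if the remote Query Analyzer access comes through: capture the exact generated SQL with Profiler, paste a representative copy into Query Analyzer, and get the plan without executing the statement:

```sql
set showplan_text on
go
-- Paste a captured instance of the generated query here; the values below
-- are just the sample literals quoted earlier in the thread.
select ...                       -- elided column list
from REC_TRX_SCRUB
where ARC_CTR = '3082748'
  and FAS_FILE_NAME = 'RV_Conversion'
go
set showplan_text off
go
```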

4. --------------------------------------------------------------
One of you had asked where in B.O.L I read that matter about the locking hints.
It was SQL 2000.
And the whole system that I am dealing with is SQL 2000

5. --------------------------------------------------------------
I am still a little confused about the impact of the locking hints that you recommended.
Let's say, I use the holdlock, xlock hints.
Then the first user that attempts to access the record gets hold of it successfully and exclusively.
If another user then attempts to access the same record, he will be blocked until the first one is finished, because the first one holds it exclusively.
But does this not mean that the second user will be deadlocked (chosen as the deadlock victim)?
This is what appears to me.
Am I missing anything?

Any help will be greatly appreciated.



New Member

21 Mar 2008 11:57 PM

Deadlocks cannot be avoided TOTALLY.

Your issue is that you have too many. I would say once a week is not too many. How many do you get a day?

Most important guidelines:

1. Transactions must be THIN and FAST, like a marathon runner who is also a great sprinter.
2. Keep indexes well maintained with a fill factor; avoid DELETEs (they destroy the fill factor).
3. Definitely use stored procedures, never direct queries!
4. Don't hit the same table twice in one transaction, like A - B - A.
5. Don't hit tables in different orders in different sprocs, like A - B in one and B - A in another; they should use the same order.
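Guidelines 4 and 5 in sketch form; the procedure names are made up and the statement bodies are elided:

```sql
-- Both procedures touch the tables in the same order (A then B), so neither
-- can end up holding a lock the other needs for its first step.
create procedure usp_Step1 as        -- hypothetical
    update REC_TRX_SCRUB set ...     -- table A first
    update REC_TRX set ...           -- table B second
go
create procedure usp_Step2 as        -- hypothetical
    update REC_TRX_SCRUB set ...     -- table A first again, same order
    update REC_TRX set ...
go
```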

Kalman Toth - Database, Data Warehouse & Business Intelligence Architect
BI Workshop NYC SSAS, SSIS, SSRS - April 21-24:
