Deadlock on single table

Last Post 23 Dec 2004 02:12 AM by alesj. 6 Replies.
alesj
New Member
17 Nov 2004 04:36 AM
Hi!

I have a scheduled data-fetching service (a Java app) that starts n threads concurrently. All of these threads read/insert/update the same table (missingdata). No two services/threads read the same data (rows) -- the intersection of the read sets is empty.

In each thread I start a transaction, read a short list (< 20 rows) of missingdata, mark each row 'dirty' by changing a property (the Java ORM tool detects the change and issues an update), and insert a few new missingdata rows.

I get a deadlock. How is this possible on a single table?

Then I tried 'select * from missingdata with (updlock, rowlock) where service_id = <id>', also without success.

Next try: '... with (rowlock, holdlock)' on the select and also on the insert. Nope.

Only the next attempt works, but it explicitly locks the whole table ('with (tablockx, holdlock)' on both select and insert), and since I have a large number of threads this is not a good solution.

How can I make a more fine-grained solution work?

Rgds, Ales

alesj
New Member
18 Nov 2004 02:49 AM
OK, that speeds things up on the select. I tried 'nolock' on the select and 'rowlock' on the insert: deadlock. I then ran the inserts with 'tablockx, holdlock' and it worked, but that's still not what I want, since it locks the whole table.
Adding 'holdlock' as an additional hint in that second attempt didn't help either. Btw, what does 'holdlock' actually do? Aren't all lock hints held until commit/rollback?

How can I get fine-grained locking on insert, given the limited hint options there?

Why does it fail when I'm only accessing a single table? Shouldn't the database provide all the (smart) synchronization?

The data isn't critical right now, but how do you make this work when mission-critical data is accessed by multiple threads and you cannot rely on the DB to synchronize access to it? It would mean hacking all my ORM tool's statements, so I lose the power of the ORM tool. And of course you don't want serializable isolation.

Rgds, Ales
BobBarker
New Member
22 Dec 2004 10:13 AM
A few things:

1. I think you have to use the HOLDLOCK hint in your SELECT statement; otherwise the lock is released as soon as the select completes (not very useful).

2. When placing row locks, SQL Server also places "intent" locks on the page and table containing those rows, which can prevent other threads from getting an exclusive lock at the page level.

3. I'm not sure about this, but I believe that even if you specify ROWLOCK, the database engine may escalate the lock to the page or table level if the number of locks grows large. But 20 does not seem like a very high number.
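Also, whatever hint combination you settle on, deadlocks can still happen under load; the standard complement is to catch the deadlock error (1205 in SQL Server, where the engine picks one transaction as the victim and rolls it back) and retry. A rough sketch of the retry idea in Python; `DeadlockError` and `flaky_job` are made-up stand-ins, since a real JDBC/ODBC driver raises its own exception for error 1205:

```python
import random
import time

class DeadlockError(Exception):
    """Made-up stand-in for the driver's deadlock exception (SQL Server error 1205)."""

def run_with_retry(work, attempts=5, base_delay=0.05):
    """Run work() and retry with randomized backoff when it deadlocks.

    The deadlock victim's transaction was rolled back by the server,
    so it can usually just be run again.
    """
    for attempt in range(attempts):
        try:
            return work()
        except DeadlockError:
            if attempt == attempts - 1:
                raise                # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Hypothetical job that deadlocks twice and then succeeds.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "done"

result = run_with_retry(flaky_job)
```

The same pattern works in the Java app: catch the driver's deadlock exception around the whole transaction and re-run it.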

I know this isn't the answer you were looking for, but have you considered combining the n threads into a single thread that performs all of the updates round-robin? Believe it or not, having 20 threads work on the same table in parallel is probably slower than having one thread do the updates serially.
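The single-writer idea can be sketched with a work queue: the n producer threads enqueue changes, and one writer thread applies them serially, so the writers never contend for locks on the table at all. A minimal Python illustration (in the real app the producers would be the scheduled jobs, and the `applied.append` line would be the actual INSERT/UPDATE):

```python
import queue
import threading

work_q = queue.Queue()
applied = []

def writer():
    # The only thread that ever touches the table.
    while True:
        item = work_q.get()
        if item is None:        # sentinel: shut down
            break
        applied.append(item)    # here you would run the INSERT/UPDATE

t = threading.Thread(target=writer)
t.start()
for i in range(20):             # the n producer threads would do this part
    work_q.put(i)
work_q.put(None)
t.join()
```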

BB


alesj
New Member
22 Dec 2004 11:22 PM
Thanks for the info. Will look into hints more thoroughly.

As for merging the threads: although this is scheduled data (I know in advance when each action will fire), the schedules are set by users, and at the moment it is easier for me to use the multithreaded Java scheduler API. If the problems become unmanageable, I will look into moving to a single-threaded service architecture.

AJ
tom27
New Member
23 Dec 2004 01:48 AM
Hi
Do you have an index on the service_id column? If not, create one; otherwise the select has to scan, and lock, the whole table. What is your transaction isolation level?
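To illustrate why the index matters, here is a quick sketch using Python's built-in sqlite3 module (SQLite, not SQL Server, but the query-plan effect is analogous; table and index names are just for the example): without the index, the `where service_id = ...` select must scan every row, which in a server also means locking far more rows than it returns.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "create table missingdata (id integer primary key, service_id integer, payload text)"
)

def plan():
    # Last column of EXPLAIN QUERY PLAN output is the human-readable detail.
    return con.execute(
        "explain query plan select * from missingdata where service_id = 1"
    ).fetchall()[0][-1]

before = plan()   # no index: a full table scan
con.execute("create index ix_missingdata_service_id on missingdata (service_id)")
after = plan()    # with the index: only the matching rows are touched
```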

Thanks
Tom
alesj
New Member
23 Dec 2004 02:12 AM
I probably need an index on the missingdata table (the one that deadlocks)? What will that solve?
My transaction isolation level is the default, read committed.

How does one manage highly critical data in multithreaded systems (banking, ...)?
Especially with more than one critical table, where the order of locking really matters, so you don't end up with an 'expected' deadlock.
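To illustrate what I mean by lock ordering, here is the classic two-resource case sketched in Python (the two locks stand in for two tables; the point is that every transaction acquires them in the same global order, so the cycle "thread 1 holds A and wants B while thread 2 holds B and wants A" can never form):

```python
import threading

lock_a = threading.Lock()  # stands for "table A"
lock_b = threading.Lock()  # stands for "table B"

def transaction(first, second, log, label):
    # Sort the locks into one global order before acquiring them,
    # regardless of the order the caller named them in.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            log.append(label)

log = []
# The two threads name the locks in opposite orders, which without the
# sorting discipline is exactly the shape that deadlocks.
t1 = threading.Thread(target=transaction, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=transaction, args=(lock_b, lock_a, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
```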

AJ
BobBarker
New Member
11 Jan 2005 01:45 PM
I suspect that highly mission-critical applications do not update records in tables that are likely to fall into contention. Banking systems probably use a ledger-style table into which the application only ever inserts. For example, to change an account balance from 200 to 150, you'd avoid the obvious:

update account set balance = balance - 50 where id = @id


and instead insert a transaction row:

insert [transaction](id, amount) values (@id, -50)


You can then define a view which gives you the account balance by adding the transactions up:

create view account as select id, sum(amount) as balance from [transaction] group by id


You won't get the same kinds of deadlocks on an insert as you do on an update.
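The ledger approach can be tried end to end in any SQL engine; here is a runnable sketch using Python's built-in sqlite3 module (the table is named tx instead of transaction, since 'transaction' is a reserved word in most SQL dialects, and the names are otherwise ad hoc):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table tx (id integer, amount integer)")
# The 'account' view sums the ledger rows per account id.
con.execute(
    "create view account as select id, sum(amount) as balance from tx group by id"
)

con.execute("insert into tx (id, amount) values (1, 200)")  # opening balance
con.execute("insert into tx (id, amount) values (1, -50)")  # the 'withdrawal'

balance = con.execute("select balance from account where id = 1").fetchone()[0]
```

Since the history is insert-only, concurrent writers never update the same row, which is exactly why the update-style deadlocks disappear.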

BB

