Thread: Could not lock the database
Wed, Feb 26 2014 1:29 AM

Danie van Eeden

Hi,

Would appreciate any responses to this question.

We are getting the above error (subject line) at some of our clients. I would like to think that I understand why.
The amount of I/O is really intense. I have monitored the tables and have determined the following:

The amount of write time to the tables in this database is never more than 0.003 or 0.004 seconds. Hence I don't believe that a single transaction is holding a write lock on any tables for too long.

However, the number of transactions that occur within any given time period is massive. This leads me to believe that a terminal / client would, for example, attempt to obtain a lock and fail, and then, before it has the opportunity to lock successfully on the second attempt, another client has already obtained the lock in the meantime. It seems that, by pure bad luck, a single terminal may never obtain the lock and then fails with the relevant error message.

Am I correct in this assumption?

If so, would it be at all possible to prioritize the locks?

Notice: This application is still running in "File Server" mode (i.e. local sessions) and not in "Client Server" mode.

Help appreciated
Danie
Wed, Feb 26 2014 2:58 PM

Danie van Eeden

I admit that looking into the design of the table structures and the process design would be a logical approach in an attempt to limit concurrent access to the same set of data. I will attempt this; however, it might prove more difficult than it seems, simply due to the vast number of terminals / DB clients involved.

Another thought would be to look at the TableReadLockRetryCount property of the Engine.

Currently it's set to High(ShortInt)-1.
Would it be prudent to increase this?

I noticed the RetryWaitTime, but my instinct is to leave this alone, as a change would affect all transactions with certainty (possibly negatively). Increasing the retry count, however, simply provides more opportunity to obtain the lock.
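
For clarity, the kind of change I have in mind is roughly this (a sketch only - the new value is purely illustrative, and the property names are the ones referred to above):

// Sketch: give each terminal more attempts at the lock before the error
// is raised, while leaving the wait time between retries as it is.
Engine.TableReadLockRetryCount := 32000;  // illustrative; currently High(ShortInt)-1
// Engine.RetryWaitTime deliberately left unchanged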

Just another thought.
Any input appreciated.
Wed, Feb 26 2014 10:24 PM

Raul

Team Elevate

On 2/26/2014 1:29 AM, Danie van Eeden wrote:
> We are getting the above error (subject line) at some of our clients. I would like to think that I understand why.

What's the error number - 10229 or something else?

Also what version of DBISAM are you using ?

> The amount of I/O is really intense I have monitored the tables and have determined the following:
> The amount of write time to the tables in this database is never more than 0.003 or 0.004 seconds. Hence I don't believe that a single transaction is holding a write lock on any tables for too long.

I might be missing something but that does not sound too intense. Are
you saying you have a lot of clients executing short writes and that's
intense volume?

Based on this I assume you're using transactions for all of the writes -
are they all writing to the same tables? If not then you could consider
a restricted transaction, where the transaction only specifies certain
tables. Not sure this would help though.
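
Something roughly along these lines (a sketch only - the table names are made up, just to show the restricted form):

var
  TableList: TStrings;
begin
  TableList := TStringList.Create;
  try
    // Only the tables named here get locked, not the whole database.
    TableList.Add('Orders');       // made-up table names
    TableList.Add('OrderLines');
    Database.StartTransaction(TableList);
    try
      // ... the short insert / edit work ...
      Database.Commit;
    except
      Database.Rollback;
      raise;
    end;
  finally
    TableList.Free;
  end;
end;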

Also any chance the client app is forcibly closed or otherwise does not
get a chance to clean up locks?

>
> Notice: This application is still running in "File Server" mode (ie. local sessions) and not in "Client Server" mode.

File server mode will have limitations no matter what you try. What are
the files hosted on (actual Windows server or NAS box or some other OS)?
File sharing has gotten quite a bit worse with recent Windows releases
from Microsoft. Generally going to c/s is the only way to get more control
over this.

Raul
Thu, Feb 27 2014 2:18 AM

Danie van Eeden

Hi and thanks for the reply.


<<<What's the error number - 10229 or something else?>>>

Yes, the precise error is:
DBISAM Engine Error # 10229 Transaction cannot lock the database

<<<Also what version of DBISAM are you using ?>>>

Version 4.36 Build 1

<<<I might be missing something but that does not sound too intense. Are
you saying you have a lot of clients executing short writes and that's
intense volume?

Based on this I assume you're using transactions for all of the writes -
are they all writing to the same tables? If not then you could consider
a restricted transaction, where the transaction only specifies certain
tables. Not sure this would help though.>>>

Yes, my thoughts exactly - the writes are short and simple - mostly adding / editing single records.
But yes again, the number of clients is (in our context) quite large, and the writes happen often BUT independently (at various stages of the business process).

All our Database Transactions specify specific / required tables only. The whole database never gets locked.


<<<Also any chance the client app is forcibly closed or otherwise does not
get a chance to clean up locks?>>>

I believe this may be a secondary cause - the problem is that after the first error occurs (for example 10229), the session / database connection is closed automatically. The exception isn't handled, as it shouldn't occur - well, according to my mind.
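
(One option, I suppose, would be to catch that specific error and retry the transaction, something like the sketch below - the helper name, the 5 attempts and the 50 ms delay are purely illustrative, not what we run today:)

procedure WriteWithRetry(Database: TDBISAMDatabase; Tables: TStrings);
var
  Attempts: Integer;
begin
  Attempts := 0;
  repeat
    try
      Database.StartTransaction(Tables);
      try
        { ... the actual insert / edit ... }
        Database.Commit;
      except
        Database.Rollback;
        raise;
      end;
      Exit; // success
    except
      on E: EDBISAMEngineError do
        if (E.ErrorCode = 10229) and (Attempts < 5) then
        begin
          Inc(Attempts);
          Sleep(50); // brief back-off before the next attempt
        end
        else
          raise;
    end;
  until False;
end;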


<<<File server mode will have limitations no matter what you try. What are
the files hosted on (actual Windows server or NAS box or some other OS)?
File sharing has gotten quite a bit worse with recent Windows releases
from Microsoft. Generally going to c/s is the only way to get more control
over this.>>>

I totally agree. It seems the overhead of Windows file sharing has become greater and greater. To answer your question, yes, the files are hosted on a Windows server (dedicated to our data). They are running solid-state drives (processing and memory seem to be no issue). The drives aren't building up any caching queues; everything flushes immediately, so there is no bottleneck here. The only thing I can think of is that, because we aren't running C/S, there is no central method for handling locks - each terminal is doing a 'best effort' - hence the reason why, even within 32768-1 retries, it might still fail.



Thanks again.
Danie
Fri, Feb 28 2014 4:47 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Danie,

<< We are getting the above error (subject line) at some of our clients. I
would like to think that I understand why.
The amount of I/O is really intense. I have monitored the tables and have
determined the following:

The amount of write time to the tables in this database is never more than
0.003 or 0.004 seconds. Hence I don't believe that a single transaction is
holding a write lock on any tables for too long. >>

Are you using transactions with record locks, or are you specifically
locking the entire table with LockTable/UnlockTable calls ?
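
i.e., the explicit whole-table form would look something like this (a sketch only - the table name is just an example):

MyTable.LockTable;      // full write lock on the entire table
try
  { ... edits ... }
finally
  MyTable.UnlockTable;  // release the table lock
end;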

Tim Young
Elevate Software
www.elevatesoft.com
Tue, Mar 4 2014 2:25 AM

Danie van Eeden

Hi Tim,

thanks for the reply (and apologies for the delay in mine).

I am using Database.StartTransaction(S); //Where S is a TStrings containing Tables

We haven't made any changes to the locking protocol for record locking - thus I believe it's still on lpPessimistic.

Hope this answers the question - please direct me otherwise.

Thanks again
Kind Regards
Danie


"Tim Young [Elevate Software]" wrote:

Danie,

<< We are getting the above error (subject line) at some of our clients. I
would like to think that I understand why.
The amount of I/O is really intense. I have monitored the tables and have
determined the following:

The amount of write time to the tables in this database is never more than
0.003 or 0.004 seconds. Hence I don't believe that a single transaction is
holding a write lock on any tables for too long. >>

Are you using transactions with record locks, or are you specifically
locking the entire table with LockTable/UnlockTable calls ?

Tim Young
Elevate Software
www.elevatesoft.com
Tue, Mar 11 2014 9:00 AM

Danie van Eeden

Hi,

An update: I have increased the retry count on the locking (as per the posts above). This has alleviated some of the issues but, as expected, not all. The problem still persists at a problematic level.

My only resort at this stage would be to split the data structures into multiple tables according to business processes, in an attempt to lessen the amount of I/O to each of them.

Any other help (taking into account the posts above) would be appreciated.

Danie
Wed, Mar 19 2014 9:22 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Danie,

<< I am using Database.StartTransaction(S); //Where S is a TStrings
containing Tables >>

And that is where the locks are failing ?  Do you have the exact error
message that you're seeing ?

If the transaction locks are what is causing the error message, then the
problem may simply be related to the TDBISAMDatabase.Commit call, which, by
default, will perform a flush to disk also.  You could be looking at a
scenario where these flushes are starting to tax the drive and cause locks
to queue.  The solution would be to call TDBISAMDatabase.Commit(False)
instead.
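
In other words, in your write path, something along these lines (a sketch only - S is the table list you're already passing):

Database.StartTransaction(S);
try
  { ... the insert / edit work ... }
  Database.Commit(False);   // commit without forcing an OS flush to disk
except
  Database.Rollback;
  raise;
end;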

Tim Young
Elevate Software
www.elevatesoft.com