Thread: Why does StartTransaction() slow down query execution?
Wed, Apr 9 2008 3:40 PM

Max Terentiev
Hi,

I noticed a strange performance hit when I execute a query inside a transaction.

My program executes a parameterized query to insert 50K records:

-------
Query->SQL->Add("INSERT INTO \"List0\" (Email,FullName) VALUES (:eml, 'test')");
Query->Prepare();
Database->StartTransaction();
for (int i=0;i<Data->Count;i++)
   {
   Query->ParamByName("eml")->AsString=Data->Strings[i];
   Query->ExecSQL();
   }
Database->Commit();
Query->UnPrepare();
--------

If the query is executed without StartTransaction/Commit, the 50K records are inserted in 5 seconds.

With StartTransaction, the same query takes 25-30 seconds.

I tried adding COMMIT INTERVAL 500, 1000, 5000 at the end of the query string, but it
doesn't help.

My program must work in a multi-user environment, so I must use transactions. But is it
possible to avoid this performance hit?

Thanks for your help!
Wed, Apr 9 2008 4:17 PM

"Robert"
Why should a list of inserts have to be inside a transaction? A transaction
is usually for a situation where the various updates are related to each
other. In fact, in a multiuser system, wrapping a loop like yours inside a
transaction will have the unwanted effect of freezing the other users.

Anyway, I certainly don't have an answer as to why it would be a lot slower.
Would you mind reversing the prepare and the starttransaction (start the
transaction BEFORE you prepare the query) and post the results? Thanks.

Robert


Wed, Apr 9 2008 6:23 PM

Max Terentiev
>Why should a list of inserts have to be inside a transaction? A transaction
>is usually for a situation where the various updates are related to each
>other. In fact, in a multiuser system, wrapping a loop like yours inside a
>transaction will have the unwanted effect of freezing the other users.

The bulk insert operation will only be performed from time to time, so there's
no problem with freezing other users...

>Anyway, I certainly don't have an answer as to why it would be a lot slower.
>Would you mind reversing the prepare and the starttransaction (start the
>transaction BEFORE you prepare the query) and post the results? Thanks.

Yes, I tried it. Same results (inside a transaction the query is up to 3 times slower).
Wed, Apr 9 2008 7:00 PM

"Robert"

"Max Terentiev" <support@bspdev.com> wrote in message
news:48F1DD7F-C68C-48F5-A016-64D0C0EF84EF@news.elevatesoft.com...
>>Why should a list of inserts have to be inside a transaction? A transaction
>>is usually for a situation where the various updates are related to each
>>other. In fact, in a multiuser system, wrapping a loop like yours inside a
>>transaction will have the unwanted effect of freezing the other users.
>
> The bulk insert operation will only be performed from time to time, so
> there's no problem with freezing other users...
>

Still, if another user needs to do something with the database, he'll have to
wait until the loop is completed. I guess my question was what the
transaction buys you when all you're doing is a series of inserts.

>>Anyway, I certainly don't have an answer as to why it would be a lot
>>slower.
>>Would you mind reversing the prepare and the starttransaction (start the
>>transaction BEFORE you prepare the query) and post the results? Thanks.
>
> Yes, I tried it. Same results (inside a transaction the query is up to 3
> times slower).
>

I have no idea. It would be nice to find out though.

Robert

Thu, Apr 10 2008 3:57 AM

Max Terentiev
>Still, if another user needs to do something with the database, he'll have to
>wait until the loop is completed. I guess my question was what the
>transaction buys you when all you're doing is a series of inserts.

It's possible to start a partial transaction on only one table, so not all database
users will have to wait until the loop completes...

I tried starting a transaction for each insert in the loop, but that makes the query
extremely slow! Now it takes about 5 minutes (instead of 5 seconds without transactions!):

-------
Query->SQL->Add("INSERT INTO \"List0\" (Email,FullName) VALUES (:eml, 'test')");
Query->Prepare();
for (int i=0;i<Data->Count;i++)
  {
  Database->StartTransaction();
  Query->ParamByName("eml")->AsString=Data->Strings[i];
  Query->ExecSQL();
  Database->Commit();
  }
Query->UnPrepare();
--------


>I have no idea. It would be nice to find out though.

I have no idea either :-)
Thu, Apr 10 2008 6:11 AM

"Jose Eduardo Helminsky"
Max

<< It's possible to start a partial transaction on only one table, so not all
database users will have to wait until the loop completes... >>

Just remember that *ALL* routines that update data in the database MUST use
partial transactions; if they do not, the process can end up waiting on other
users that are locking the entire database instead of just a couple of tables.

Eduardo

Thu, Apr 10 2008 6:19 AM

"Robert"

"Max Terentiev" <support@bspdev.com> wrote in message
news:24D6DD88-B787-4EB3-806E-3AA7D50F3052@news.elevatesoft.com...
>>Still, if another user needs to do something with the database, he'll have to
>>wait until the loop is completed. I guess my question was what the
>>transaction buys you when all you're doing is a series of inserts.
>
> It's possible to start a partial transaction on only one table, so not all
> database users will have to wait until the loop completes...
>

Sure. I was questioning why you would use transactions in the first place.

> I tried starting a transaction for each insert in the loop, but that makes
> the query extremely slow! Now it takes about 5 minutes (instead of 5 seconds
> without transactions!):
>

You're on to something here. You should not be getting these huge timing
differences, I think.

>
>
>>I have no idea. It would be nice to find out though.
>
> I have no idea either :-)
>

Maybe Tim can give us some clues.

Robert

Thu, Apr 10 2008 7:33 AM

Max Terentiev
I found the solution!

Commit() must be called every 1500-2000 inserts! In this case, inserting 100K records takes
about 7 seconds!

So: inserting 100K records in one big transaction is slow.
Inserting 100K records with 100K transactions (one per record) is VERY slow!

Inserting 100K records with 50-70 transactions (one for each 2000 inserts) is VERY FAST!

Thank you very much for your help!
Thu, Apr 10 2008 9:14 AM

"Robert"

"Max Terentiev" <support@bspdev.com> wrote in message
news:C17BBB54-88AE-4529-852C-3110CEC9C686@news.elevatesoft.com...
>I found the solution!
>

Thanks for the information.

Robert

Thu, Apr 10 2008 1:18 PM

Tim Young [Elevate Software]


Max,

<< Commit() must be called every 1500-2000 inserts! In this case, inserting
100K records takes about 7 seconds!

So: inserting 100K records in one big transaction is slow.
Inserting 100K records with 100K transactions (one per record) is VERY slow!

Inserting 100K records with 50-70 transactions (one for each 2000 inserts) is
VERY FAST! >>

Yes, this is correct.  Transactions are buffered in memory, so any
transaction over 20-30 thousand rows will start to slow down as the number
of records in memory becomes large enough to cause the buffer manager to
slow down a bit due to the cache management overhead.  DBISAM uses an LRU
cache replacement algorithm, which can experience slowdowns when DBISAM has
to search the entire LRU chain only to find that there are no record buffers
that can be re-used because all of them are modified as part of the
transaction.

--
Tim Young
Elevate Software
www.elevatesoft.com
