Thread: Low performance of prepared TDBISAMQuery
Wed, Apr 10 2019 2:10 AM

Roy Lambert

NLH Associates

Team Elevate

Evgeny

I am very suspicious of this construct

      repeat
         try
           FSyncTable.ExecSQL;
           ok := true;
         except
         end;
       until ok;

It's just going to keep on until it works. Your experiment with commenting out the ExecSQL will simply allow the loop to be exited. I understand what you've done and why, but you do need some proper exception handling in place, at least during development and debug.

During debug it's worth displaying the error message that triggers the exception handler; in production, write it to a log file.

At the very least add a Sleep(10) in the except clause to give things a chance to sort themselves out.

It's also worth considering moving this type of stuff to a thread, especially in a multi-user system.
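For instance, a rough sketch of a retry loop with logging, a Sleep and a retry cap might look like this (LogError, the retry limit and the surrounding method are mine, not Evgeny's actual code):

      procedure TSyncModule.ExecSyncQuery;
      var
        ok: boolean;
        Attempts: integer;
      begin
        ok := false;
        Attempts := 0;
        repeat
          try
            FSyncTable.ExecSQL;
            ok := true;
          except
            on E: Exception do
            begin
              Inc(Attempts);
              LogError('ExecSQL failed: ' + E.Message); // show it during debug, log it in production
              if Attempts >= 100 then
                raise;     // give up rather than loop forever
              Sleep(10);   // give things a chance to sort themselves out
            end;
          end;
        until ok;
      end;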

Roy Lambert
Wed, Apr 10 2019 6:06 AM

ds

Roy Lambert wrote:

>I am very suspicious of this construct

      repeat
         try
           FSyncTable.ExecSQL;
           ok := true;
         except
         end;
       until ok;

It's just going to keep on until it works. Your experiment with commenting out the ExecSQL will simply allow the loop to be exited. I understand what you've done and why, but you do need some proper exception handling in place, at least during development and debug.

Yes - this is currently "experimental" code; don't mind the strange construction - it doesn't actually affect ExecSQL performance (and it isn't finished yet - the goal is to catch and process exceptions in a specific way). As I warned, there are a lot of additional things that can look strange and confusing. What I sent is a dev unit. Commenting out the exec just checks whether the bottleneck is the query execution - and it confirms that it is.
So the question is still: why is the prepared SQL exec so slow? The funny thing is that updating the host's 130K-record table (116 different fields with 8 indexes) in a "while not eof Update..Post next" loop runs several times faster than a loop with these 130K ExecSQL calls - how???

regards, Evgeny
Wed, Apr 10 2019 8:34 AM

Roy Lambert

NLH Associates

Team Elevate

Evgeny


>Commenting out the exec just checks whether the bottleneck is the query execution - and it confirms that it is.

This is, possibly, where you're going wrong.

Try this little routine out

procedure TForm1.Button1Click(Sender: TObject);
var
  t1: integer;
  t2: integer;
  cntr: integer;
  ok: boolean;
const
  loops = 250000;
begin
  t1 := GetTickCount;
  for cntr := 0 to loops do begin
    ok := true;
  end;
  t2 := GetTickCount;
  ShowMessage(IntToStr(t2 - t1));
end;

On my machine it takes zero ticks. If I add a Sleep(1) into the loop (i.e. 1 millisecond, which is about one sixteenth of the time your test of the query in DBSys took), it's up to 250023 ticks.

What you're actually testing by commenting out FSyncTableU.ExecSQL is how fast the loop is if it only runs once. With the code as shown, and without tracing into the loop, you have no way of knowing whether the query was successful or whether there were many, many errors prior to success.

To prove my point, try this:

procedure TForm1.Button1Click(Sender: TObject);
var
  t1: integer;
  t2: integer;
  cntr: integer;
  sl: TStringlist;
  ok: boolean;
const
  loops = 250000;
begin
  t1 := GetTickCount;
  ok := false;
  cntr := 0;
  repeat
    try
      Sleep(1);
      Inc(cntr);
      sl.Add('xxx'); // sl is never created, so this raises an exception on every pass
      ok := true;
    except
      if cntr >= 10 then Break;
    end;
  until ok;
  t2 := GetTickCount;
  ShowMessage(IntToStr(t2 - t1));
end;


Then comment out "if cntr >= 10 then Break;" and get ready to reboot your PC.

Roy
Wed, Apr 10 2019 9:08 AM

ds

Roy
Sure, everything you wrote is correct, but that is not the reason... When I replaced the loop with a single ExecSQL call I got exactly the same situation. Sorry if this construction confuses you. I've tried different cases myself before asking on the forum. So (again) - why is it that a construction like
table.first;
while not table.eof do begin
 table.update;
 table.fieldbyname('name').asstring := '***';
 table.post;
 table.next;
end;
worked several times faster than
table.first;
while not table.eof do begin
 query.params[0].asinteger := table.fieldbyname('syncid').asinteger;
 query.params[1].asdatetime := DT;
 query.execsql;
 table.next;
end;
?
(Of course, this is simplified code; triggers are off; the query is prepared as I posted before.)
The table has 116 columns and 8 indexes (as I wrote).

regards, Evgeny
Wed, Apr 10 2019 10:22 AM

Roy Lambert

NLH Associates

Team Elevate

ds


OK, you now have a simple test case we can look at. Post it, and the tables, to the attachments newsgroup and I'll have a play with it.

Roy Lambert
Wed, Apr 10 2019 12:55 PM

Raul

Team Elevate

<<
ds wrote:
So (again) - why is it that a construction like
table.first;
while not table.eof do begin
 table.update;
 table.fieldbyname('name').asstring := '***';
 table.post;
 table.next;
end;
worked several times faster than
table.first;
while not table.eof do begin
 query.params[0].asinteger := table.fieldbyname('syncid').asinteger;
 query.params[1].asdatetime := DT;
 query.execsql;
 table.next;
end;

>>

These 2 are not comparable - the 2nd executes the query for every row, which is 118 SQL operations (meaning transaction, open table, update etc., finish), while the 1st is simple table navigation with a record update.

Yes, I would expect the 1st to be several times faster than the 2nd.

Raul
Thu, Apr 11 2019 2:06 AM

Roy Lambert

NLH Associates

Team Elevate

Raul


I'd expect some difference, with c/s or f/s, but not as much as Evgeny is quoting.

My bet is still that the problem is caused by some type of re-entrancy.

Roy Lambert
Thu, Apr 11 2019 8:52 AM

Raul

Team Elevate

On 4/11/2019 2:06 AM, Roy Lambert wrote:
> I'd expect some difference, with c/s or f/s, but not as much as Evgeny is quoting.

I would expect a massive difference between these 2:

- the dataset loop is an already open cursor and involves just a field update (so just a record lock)

- the other one is a query that needs to open a new cursor, apply conditions, do an implicit transaction, update and close (even if DBISAM can somehow reuse the open cursor, it would need to refilter on each one).

Without running any tests I would guess that the 1st one with 118 records should take less than 100 msec.

The 2nd I would expect to take up to a few seconds, so maybe 10-20 times slower.

I'd need to run a test rig here to check, but if we're talking say a 10x difference, then that is what I would expect between these 2.

Raul

Thu, Apr 11 2019 9:44 AM

Roy Lambert

NLH Associates

Team Elevate

Raul


>- other one is a query that needs to open new cursor, apply conditions,
>do implicit transaction, update and close (even if DBISAM can somehow
>use open curor it would need to refilter on each one).

My understanding is that a prepared query (which is what Evgeny says he's using) already has all the table handles there. There must be some form of locking on the Post event, even if not a transaction (otherwise I'd expect some serious trouble on a multi-user system), but I do agree with the filter part, which is why I think there will be some difference.
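For reference, the prepared pattern under discussion looks roughly like this - a minimal sketch with guessed table, field and SQL names, not Evgeny's actual code:

procedure UpdateSyncTimes(SyncTable: TDBISAMTable; DT: TDateTime);
var
  Qry: TDBISAMQuery;
begin
  Qry := TDBISAMQuery.Create(nil);
  try
    Qry.DatabaseName := SyncTable.DatabaseName;
    // hypothetical statement - the real SQL has not been posted here
    Qry.SQL.Text := 'UPDATE synctable SET syncdate = :dt WHERE syncid = :id';
    Qry.Prepare; // prepared once, so the handles stay open across executions
    SyncTable.First;
    while not SyncTable.Eof do
    begin
      Qry.ParamByName('dt').AsDateTime := DT;
      Qry.ParamByName('id').AsInteger := SyncTable.FieldByName('syncid').AsInteger;
      Qry.ExecSQL; // still one full SQL operation per record, even though prepared
      SyncTable.Next;
    end;
    Qry.UnPrepare;
  finally
    Qry.Free;
  end;
end;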

It would be good to see his test application so we can judge it properly


Roy
Thu, Apr 11 2019 9:29 PM

Raul

Team Elevate

On 4/11/2019 9:44 AM, Roy Lambert wrote:

> My understanding is that a prepared query (which is what Evgeny says he's using) already has all the table handles there. There must be some form of locking on the Post event, even if not a transaction (otherwise I'd expect some serious trouble on a multi-user system), but I do agree with the filter part, which is why I think there will be some difference.

From what I saw, the 2nd example does not use Post at all - the dataset is simply looping through records in read-only mode and executing a separate query based on record data.

So the query is the only one modifying data, and yes, the dataset has a read lock, but I don't think that is the issue here.

I did a quick test rig application that mimics this (not 100% the same, but close enough).

I have a table with an Autoinc, a Timestamp and a String field, and I populated it with 1000 records, so the ID field is 1..1000.

1. Opened the table and used "while not table.eof" to loop through and update the string field - this took 78 msec.

2. Prepared a query and executed it 1000 times, using the values 1..1000 as the param for the update - this took 938 msec.

3. Finally, did the same as #2 but unprepared and re-prepared the query inside the loop - this took 1422 msec.

In this scenario there was no locking at all, and there was almost a 12x performance difference.
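Roughly, the rig boils down to something like this (a sketch only - the table, field and component names here are my guesses, not Raul's actual code):

procedure RunTimingTest(db: string);
var
  tbl: TDBISAMTable;
  qry: TDBISAMQuery;
  t: Cardinal;
  id: integer;
begin
  tbl := TDBISAMTable.Create(nil);
  qry := TDBISAMQuery.Create(nil);
  try
    tbl.DatabaseName := db;
    tbl.TableName := 'timing_test';   // assumed fields: ID (autoinc), TS (timestamp), Name (string)
    qry.DatabaseName := db;

    // 1. dataset navigation: Edit/Post against an already open cursor
    tbl.Open;
    t := GetTickCount;
    tbl.First;
    while not tbl.Eof do
    begin
      tbl.Edit;
      tbl.FieldByName('Name').AsString := 'updated';
      tbl.Post;
      tbl.Next;
    end;
    ShowMessage('Dataset loop: ' + IntToStr(GetTickCount - t) + ' msec');

    // 2. prepared query executed once per record
    qry.SQL.Text := 'UPDATE timing_test SET Name = ''updated2'' WHERE ID = :id';
    qry.Prepare;
    t := GetTickCount;
    for id := 1 to 1000 do
    begin
      qry.ParamByName('id').AsInteger := id;
      qry.ExecSQL;
    end;
    qry.UnPrepare;
    ShowMessage('Prepared query loop: ' + IntToStr(GetTickCount - t) + ' msec');
  finally
    qry.Free;
    tbl.Free;
  end;
end;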

Raul