Thread: Question about transactions
Fri, Dec 4 2009 7:42 AM

"John Hay"
Is there a difference (in terms of safety/data written to disk) in the
following procedure if I use true or false in the DB.Commit?  I am opening
and closing the table as part of the procedure.  The reason I ask is that
in my tests it is about 5 times as fast with false as with true.

Table.Active := True;
DB.StartTransaction;
try
  Table.Append;
  Table.FieldByName('test').AsInteger := 1;
  Table.Post;
  DB.Commit(true);
except
  DB.Rollback;
end;
Table.Active := False;

John

Fri, Dec 4 2009 8:22 AM

Roy Lambert

NLH Associates

Team Elevate

John

My, potentially flawed, understanding is that

> DB.Commit(true);

commits the data, flushes DBISAM's buffers to the OS, and instructs the OS to flush the data to disk

DB.Commit(false);

commits the data and flushes DBISAM's buffers to the OS, but does not tell the OS to flush the data to disk, so it may still be in memory. That explains why it's about 5 times faster. I also think that if there's then a power outage or something you will probably lose data and may potentially have corruption.

With your little bit of test code I wouldn't worry :-) but in real life you may want to add a counter and do a DB.Commit(true); at suitable intervals, as in the sketch below. If you have flaky power with no UPS, make the intervals shorter; if you have rock solid power, a nice new PC, a stable environment etc., you can afford to make them longer.
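
Something along these lines (an untested sketch; Done, NextValue and
FlushInterval are placeholders for whatever drives the real loop):

Counter := 0;
DB.StartTransaction;
try
  while not Done do
  begin
    Table.Append;
    Table.FieldByName('test').AsInteger := NextValue;  // placeholder work
    Table.Post;
    Inc(Counter);
    if Counter mod FlushInterval = 0 then
    begin
      DB.Commit(True);      // hard flush at suitable intervals
      DB.StartTransaction;  // fresh transaction for the next batch
    end;
  end;
  if DB.InTransaction then
    DB.Commit(True);        // flush whatever is left at the end
except
  if DB.InTransaction then
    DB.Rollback;
  raise;
end;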

Roy Lambert [Team Elevate]
Fri, Dec 4 2009 8:54 AM

"Robert"

"Roy Lambert" <roy.lambert@skynet.co.uk> wrote in message
news:C3BED0E8-58BB-4594-AE7B-F83BC4CF489F@news.elevatesoft.com...
>
> With your little bit of test code I wouldn't worry :-) but in real life
> you may want to add a counter and do a DB.Commit(true); at suitable
> intervals. If you have flaky power with no UPS, make the intervals
> shorter; if you have rock solid power, a nice new PC, a stable
> environment etc., you can afford to make them longer.
>

The purpose of encapsulating a series of updates in a transaction is to make
sure that either every update takes place, or none does. Doing a commit at
"suitable intervals" can leave a database in a semi-updated state. And
anyway, there is no bulletproof software protection against a hardware
failure.
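
The canonical all-or-nothing shape, for what it's worth (a sketch; the
UpdateXxx calls are invented stand-ins for the real batch of updates):

DB.StartTransaction;
try
  UpdateOrderHeader;  // every update in the batch...
  UpdateOrderLines;
  UpdateStockLevels;
  DB.Commit;          // ...becomes permanent together,
except
  DB.Rollback;        // ...or none of them do
  raise;              // and the caller gets to see the failure
end;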

Robert


Fri, Dec 4 2009 10:07 AM

Roy Lambert

NLH Associates

Team Elevate

Robert

>The purpose of encapsulating a series of updates in a transaction is to make
>sure that either every update takes place, or none does. Doing a commit at
>"suitable intervals" can leave a database in a semi-updated state. And
>anyway, there is no bulletproof software protection against a hardware
>failure.

I agree with you, but it is possible to have elements within a transaction batch that are pretty much stand-alone, providing the software can handle a restart from an intermediate point. E.g. if you're processing invoices for customers, then depending on how you update the accounts side of it, the suitable point to commit would be between customers, as long as you can restart the run from customer x. Something like the sketch below.
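
A rough, untested sketch of what I mean (LastProcessedCustomer,
PostInvoicesFor and SaveRestartPoint are invented for illustration, and it
assumes customers are processed in ID order):

// skip customers already handled by a previous, interrupted run
Customers.First;
while not Customers.Eof do
begin
  if Customers.FieldByName('ID').AsInteger > LastProcessedCustomer then
  begin
    DB.StartTransaction;
    try
      PostInvoicesFor(Customers);   // the stand-alone unit of work
      SaveRestartPoint(Customers.FieldByName('ID').AsInteger);
      DB.Commit(True);              // safe point between customers
    except
      DB.Rollback;                  // this customer is untouched
      raise;
    end;
  end;
  Customers.Next;
end;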

Roy Lambert
Fri, Dec 4 2009 10:11 AM

"John Hay"
Roy

The real procedure :-) happens on a RemObjects Server datamodule.  The
datamodule is created, does its stuff and is freed (disconnected client), so
I can't even call FlushBuffers periodically.

In this case does table.active := false not flush the data to disk?  If
not, when is it safely on disk?

John

Fri, Dec 4 2009 10:15 AM

"John Hay"
Robert

> The purpose of encapsulating a series of updates in a transaction is to
> make sure that either every update takes place, or none does.

You got it in one!

> "suitable intervals" can leave a database in a semi-updated mode. And
> anyway, there is no bulletproof software protection against a hardware
> failure.

I accept that nothing is completely foolproof.  I am just trying to be as
safe as possible with the minimum of overhead.  I guess what I was really
asking is: is table.active := false good enough to ensure the data will be
written back to disk?

John

Fri, Dec 4 2009 10:56 AM

"Robert"

"Roy Lambert" <roy.lambert@skynet.co.uk> wrote in message
news:0BB8AF47-B268-4133-97E0-5E459DF21172@news.elevatesoft.com...
> Robert
>
>>The purpose of encapsulating a series of updates in a transaction is to
>>make sure that either every update takes place, or none does. Doing a
>>commit at "suitable intervals" can leave a database in a semi-updated
>>state. And anyway, there is no bulletproof software protection against a
>>hardware failure.
>
> I agree with you, but it is possible to have elements within a transaction
> batch that are pretty much stand-alone, providing the software can handle
> a restart from an intermediate point. E.g. if you're processing invoices
> for customers, then depending on how you update the accounts side of it,
> the suitable point to commit would be between customers, as long as you
> can restart the run from customer x.
>

IMO safety for those "batch" runs is better handled by first doing a backup.
If anything goes kaput, at least you can do a restore and start all over.
How do you REALLY know after a power failure if everything (including the
commit) got written to disk?
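
For what it's worth, with a file-based engine like DBISAM the "backup
first" step can be as simple as copying the table files before the run
starts (a sketch; it assumes nobody has the tables open and that the tables
use DBISAM 4's .dat/.idx/.blb files - DBISAM also has built-in backup
support on the database component, so check the manual for that as well):

uses
  Windows, SysUtils;

// Copy one table's files to a backup folder before the batch run.
procedure BackupTable(const DataDir, BackupDir, TableName: string);
const
  Exts: array[0..2] of string = ('.dat', '.idx', '.blb');
var
  I: Integer;
  Src, Dst: string;
begin
  for I := Low(Exts) to High(Exts) do
  begin
    Src := IncludeTrailingPathDelimiter(DataDir) + TableName + Exts[I];
    Dst := IncludeTrailingPathDelimiter(BackupDir) + TableName + Exts[I];
    if FileExists(Src) then
      if not CopyFile(PChar(Src), PChar(Dst), False) then
        RaiseLastOSError;
  end;
end;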

In the past, multiple updates in DBISAM were much faster if done within a
transaction and that was a reason to use transactions even if recovery was
not the deal. I have code such as

UpdateCount := 0;
Table.First;
while not Table.Eof do
begin
  if UpdateCount = 0 then
    Database.StartTransaction;
  Table.Edit;
  // whatever updates need doing
  Table.Post;
  Inc(UpdateCount);
  if UpdateCount = 500 then
  begin
    Database.Commit;
    UpdateCount := 0;
  end;
  Table.Next;
end;
if Database.InTransaction then
  Database.Commit;  // very important - commits the final partial batch

that I used to do just for speed.  I don't know if this is still the case
with current versions of DBISAM.

Robert

Fri, Dec 4 2009 11:07 AM

Roy Lambert

NLH Associates

Team Elevate

Robert


Almost certainly the same or very similar, since a transaction goes into RAM until it's flushed down.

Roy Lambert
Fri, Dec 4 2009 11:17 AM

Roy Lambert

NLH Associates

Team Elevate

John

>The real procedure :-) happens on a RemObjects Server datamodule. The
>datamodule is created, does its stuff and is freed (disconnected client), so
>I can't even call FlushBuffers periodically.
>
>In this case does table.active := false not flush the data to disk? If not,
>when is it safely on disk?

You have exceeded my memory limits. I'm sure it's been answered, but a quick search didn't show the answer.

Roy Lambert [Team Elevate]
Sun, Dec 6 2009 6:42 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

John,

<< The real procedure :-) happens on a RemObjects Server datamodule.  The
datamodule is created, does its stuff and is freed (disconnected client), so
I can't even call FlushBuffers periodically.

In this case does table.active := false not flush the data to disk?  If
not, when is it safely on disk? >>

It used to be the case that Windows flushed a file's modified file system
buffers to disk when a file was closed, but I'm not sure if that is still
the case since Windows 2000 and the newer NTFS file systems, so it isn't
something that I would rely upon.
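
In Win32 terms, the "hard flush" under discussion corresponds to the
FlushFileBuffers API; a minimal illustration with an ordinary file (the
path is made up, and this is not DBISAM internals):

uses
  Windows, SysUtils, Classes;

procedure HardFlushDemo;
var
  FS: TFileStream;
  B: Byte;
begin
  FS := TFileStream.Create('c:\temp\flushdemo.dat', fmCreate);
  try
    B := 1;
    FS.WriteBuffer(B, SizeOf(B));
    // The write above only lands in the OS file system cache.
    // FlushFileBuffers forces the cached data out to the physical disk -
    // the same kind of flush a Commit(True) asks the OS to perform.
    if not FlushFileBuffers(FS.Handle) then
      RaiseLastOSError;
  finally
    FS.Free;  // closing the handle does not guarantee a flush to disk
  end;
end;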

--
Tim Young
Elevate Software
www.elevatesoft.com