Question about transactions |
Fri, Dec 4 2009 7:42 AM
"John Hay" | Is there a difference (in terms of safety/data written to disk) in the
following procedure if I use true or false in the DB.Commit? I am opening and closing the table as part of the procedure. The reason I ask is that it in my tests it is about 5 times as fast with false as with true. Table.Active := True; DB.StartTransaction; try Table.Append; Table.Fieldbyname('test').asinteger := 1; Table.Post; DB.Commit(true); except DB.RollBack; end; Table.Active := False; John |
Fri, Dec 4 2009 8:22 AM
Roy Lambert NLH Associates Team Elevate

John

My, potentially flawed, understanding is that:

DB.Commit(True) commits the data, flushes DBISAM's buffers to the OS, and instructs the OS to flush the data to disk.

DB.Commit(False) commits the data and flushes DBISAM's buffers to the OS, but does not tell the OS to flush the data to disk, so it may still be in memory. This explains why it's about 5 times faster. It also means, I think, that if there's then a power outage or something, you will probably lose data and may potentially have corruption.

With your little bit of test code I wouldn't worry, but in real life you may want to add a counter and do a DB.Commit(True) at suitable intervals (see the sketch after this message). If you have flaky power with no UPS, make the intervals shorter; if you have rock-solid power, a nice new PC, a stable environment, etc., you can afford to make them longer.

Roy Lambert [Team Elevate]
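[A minimal sketch of the counter-and-periodic-flush idea described above, reusing the DB and Table components from the original post. The FlushInterval value and the loop itself are illustrative, not from the thread.]

const
  FlushInterval = 500; // illustrative; shorten for flaky power, lengthen for a stable environment
var
  I: Integer;
begin
  Table.Active := True;
  DB.StartTransaction;
  try
    for I := 1 to 10000 do
    begin
      Table.Append;
      Table.FieldByName('test').AsInteger := I;
      Table.Post;
      // Every FlushInterval rows, commit with a forced OS flush and
      // open a fresh transaction for the next batch.
      if I mod FlushInterval = 0 then
      begin
        DB.Commit(True); // flush DBISAM's buffers AND tell the OS to write to disk
        DB.StartTransaction;
      end;
    end;
    if DB.InTransaction then
      DB.Commit(True); // final partial batch, forced flush
  except
    if DB.InTransaction then
      DB.RollBack;
    raise;
  end;
  Table.Active := False;
end;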
Fri, Dec 4 2009 8:54 AM
"Robert" | "Roy Lambert" <roy.lambert@skynet.co.uk> wrote in message news:C3BED0E8-58BB-4594-AE7B-F83BC4CF489F@news.elevatesoft.com... > > With you little bit of test code I wouldn't worry but in real life you > may want to add a counter and do a DB.Commit(true); at suitable intervals. > If you have flaky power with no UPS make the intervals shorter, if you > have rock solid power, nice new PC, stable environment etc you can afford > to make it longer. > The purpose of encapsulating a series of updates in a transaction is to make sure that either every update takes place, or none does. Doing a commit at "suitable intervals" can leave a database in a semi-updated mode. And anyway, there is no bulletproof software protection against a hardware failure. Robert |
Fri, Dec 4 2009 10:07 AM
Roy Lambert NLH Associates Team Elevate

Robert

> The purpose of encapsulating a series of updates in a transaction is to
> make sure that either every update takes place, or none does. Doing a
> commit at "suitable intervals" can leave a database in a semi-updated
> state. And anyway, there is no bulletproof software protection against a
> hardware failure.

I agree with you, but it is possible to have elements within a transaction batch that are pretty much stand-alone, provided the software can handle a restart from an intermediate point. E.g. if you're processing invoices for customers, then depending on how you update the accounts part of it, the suitable point to commit would be between customers, as long as you can restart the run from customer X (see the sketch after this message).

Roy Lambert
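[A sketch of the restartable per-customer batch described above. The CustomerIDs list and the ProcessInvoicesFor/SaveRestartPoint routines are hypothetical names invented for illustration; only the commit-between-customers pattern comes from the thread.]

var
  CustomerID: Integer;
begin
  for CustomerID in CustomerIDs do
  begin
    // One customer = one self-contained unit of work.
    DB.StartTransaction;
    try
      ProcessInvoicesFor(CustomerID); // hypothetical: update invoices/accounts for this customer
      SaveRestartPoint(CustomerID);   // hypothetical: record progress inside the same transaction
      DB.Commit(True);                // forced flush; each commit is a safe restart point
    except
      DB.RollBack; // this customer's work is undone as a whole
      raise;       // on restart, resume from the last saved customer
    end;
  end;
end;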
Fri, Dec 4 2009 10:11 AM
"John Hay" | Roy
The real procedure haapens on a RemObjects Sever datamodule. The datamodule is created does its stuff and freed (disconnected client) so I can't even call flushbuffers periodically. In this case does table.active := false not flush the data to disk? If not when is it safely on disk? John |
Fri, Dec 4 2009 10:15 AM
"John Hay" | Robert
> The purpose of encapsulating a series of updates in a transaction is to make > sure that either every update takes place, or none does. You got it in one! > "suitable intervals" can leave a database in a semi-updated mode. And > anyway, there is no bulletproof software protection against a hardware > failure. I accept that nothing is completely foolproof. I am just trying to be as safe as possible with the minimum of overhead. I guess I was really asking is table.active := false good enough the to ensure the data will be written back to disk? John |
Fri, Dec 4 2009 10:56 AM
"Robert" | "Roy Lambert" <roy.lambert@skynet.co.uk> wrote in message news:0BB8AF47-B268-4133-97E0-5E459DF21172@news.elevatesoft.com... > Robert > >>The purpose of encapsulating a series of updates in a transaction is to >>make >>sure that either every update takes place, or none does. Doing a commit at >>"suitable intervals" can leave a database in a semi-updated mode. And >>anyway, there is no bulletproof software protection against a hardware >>failure. > > I agree with you but it is possible to have elements within a transaction > batch that are pretty much stand alone, providing the software can handle > a restart from an intermediate point. EG if you're processing invoices for > customers then depending on how you update the accounts part of it then > the suitable point to commit would be between customers as long as you can > restart the run from customer x. > IMO safety for those "batch" runs is better handled by first doing a backup. If anything goes kaput, at least you can do a restore and start all over. How do you REALLY know after a power failure if everything (including the commit) got written to disk? In the past, multiple updates in DBISAM were much faster if done within a transaction and that was a reason to use transactions even if recovery was not the deal. I have code such as UpdateCount := 0; table.First; while not table.eof do begin if updatecount = 0 starttransaction; table.edit; //whatever table.post; inc(updatecount); if updatecount = 500 then begin commit; updatecount := 0; end; table.next; end; if database.intransaction then commit; //very important that i used to do just for speed. I don't know if this is the case with current versions of DBISAM. Robert |
Fri, Dec 4 2009 11:07 AM
Roy Lambert NLH Associates Team Elevate

Robert

Almost certainly the same, or very similar, since a transaction goes into RAM until it's flushed down.

Roy Lambert
Fri, Dec 4 2009 11:17 AM
Roy Lambert NLH Associates Team Elevate

John

> The real procedure happens on a RemObjects Server datamodule. The
> datamodule is created, does its stuff, and is freed (disconnected client),
> so I can't even call FlushBuffers periodically.
>
> In this case does Table.Active := False not flush the data to disk? If
> not, when is it safely on disk?

You have exceeded my memory limits. I'm sure it's been answered, but a quick search didn't show the answer.

Roy Lambert [Team Elevate]
Sun, Dec 6 2009 6:42 AM
Tim Young [Elevate Software] Elevate Software, Inc. timyoung@elevatesoft.com

John,

<< The real procedure happens on a RemObjects Server datamodule. The datamodule is created, does its stuff, and is freed (disconnected client), so I can't even call FlushBuffers periodically.

In this case does Table.Active := False not flush the data to disk? If not, when is it safely on disk? >>

It used to be the case that Windows flushed a file's modified file system buffers to disk when a file was closed, but I'm not sure if that is still the case since Windows 2000 and the newer NTFS file systems, so it isn't something that I would rely upon.

--
Tim Young
Elevate Software
www.elevatesoft.com