Thread: Mangled Tables ... any possible reasons?
Mon, Oct 16 2006 5:42 AM

adam
Dear All,

I attach a zip file with 2 tables in it. Actually I have extracted small segments of much
larger tables to send as an example of what has happened.

The application runs in a factory in Uganda. Power supply here is not stable, but the
server is on a UPS & stabilizer. I write apps so that all Views of data are canned & users
only interact with 1 record at a time, posting data piece by piece & only holding a live
view on a single record.

(i.e. I use a SELECT A, B, C FROM Table WHERE ID = 232 style SQL to generate live queries
of a single record).
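
Concretely, each edit form just does something along these lines (LiveQry being a TDBISAMQuery; the component names are only for illustration):

LiveQry.SQL.Text := 'SELECT A, B, C FROM MyTable WHERE ID = :ID';
LiveQry.ParamByName('ID').AsInteger := 232;   // the record currently being edited
LiveQry.RequestLive := True;                  // editable, single-record result set
LiveQry.Open;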

I thought this was a reasonably safe way to proceed, regardless of occasional power
disconnection ... I thought that the base data in a table would be OK & that only the data
from the live record would be lost if there was an outage, or system crash for some other
reason.

--

You can see the tables are totally mangled ... I ran a "repair table" with DBSys & this is
the result: virtually all the data is gone & the Autoinc fields are way out ... I had
thought I would retrieve at least a decent part of the table, but ALL the data is gone!

This has resulted in a severe loss of confidence from the client & I really would like to
be able to give them some concrete reason why. Ideally I would like to be able to act on
the findings from this disaster by re-coding my systems to ensure it won't happen again.

Any suggestions?

Adam

Attachments: BrokenTables.zip
Mon, Oct 16 2006 7:50 AM

Roy Lambert

NLH Associates

Team Elevate

adam


Some random thoughts

- duff network connections so garbage is being sent to the tables
- duff hard drive
- someone deliberately screwing it up
- another program overwriting bits of the hard drive

If power is very problematic then I wouldn't use live queries at all. I'd select into a memory table, edit that and post the result using a transaction. I'd also look at flushbuffers and probably some form of mirroring or at least run a backup once an hour for transactional files.
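
The transaction bit is nothing fancy - roughly this shape (untested; Db is a TDBISAMDatabase, UpdQry a TDBISAMQuery, and the rest of the names are just placeholders):

Db.StartTransaction;
try
  UpdQry.SQL.Text := 'UPDATE MyTable SET A = :A, B = :B WHERE ID = :ID';
  UpdQry.ParamByName('A').AsString   := EditA.Text;
  UpdQry.ParamByName('B').AsString   := EditB.Text;
  UpdQry.ParamByName('ID').AsInteger := CurrentID;
  UpdQry.ExecSQL;
  Db.Commit;    // check whether your DBISAM version's Commit can force an OS flush
except
  Db.Rollback;  // nothing reaches the table if the update fails
  raise;
end;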

It's also probably worth zipping the original corrupted tables and sending them to Tim (assuming he's got power) :).

Roy Lambert
Mon, Oct 16 2006 8:25 AM

adam
Dear Roy

Thanks for the speedy reply.

>Some random thoughts

>- duff network connections so garbage is being sent to the tables

... yup this could be an issue given power variation, but would a single-record post back
actually damage any part of the DB Table other than the particular record it was posting
back & could this actually damage the rest of the Table?


>- duff hard drive
>- someone deliberately screwing it up
>- another program overwriting bits of the hard drive

... any of the above 3 *might* be an issue, though I don't know how someone would screw up
a table to mangle it in the way I am seeing ... particularly so that DBSys can 'repair' it
but leave me with garbage! If someone wanted to screw up the system they would really just
have to delete parts of the DB.

>If power is very problematic then I wouldn't use live queries at all. I'd select into a
>memory table, edit that and post the result using a transaction. I'd also look at
>flushbuffers and probably

... I use Flushbuffers in every "post" from my user's app to try to ensure no data is held
anywhere in any Cache. The idea you suggest of using a Memory Table is interesting ... Do
you have any code you wouldn't mind sharing to give me an idea of how to do it??? Wink

>some form of mirroring or at least run a backup once an hour for transactional files.

I have always shied away from stuff like automated hourly back-ups as I worry about how to
cope with issues like testing the integrity of every table prior to doing the back up
(since you don't want to back-up a corrupted version) plus a number of other issues ...
it's a complex area which I have poked around in, only to come out thinking I don't have
time to make a good enough job of it!

This is a feature that is built in to some other DBs (i.e. create a mirror of DB A as DB
B, while checking that the files being mirrored are not corrupt & doing any other
housekeeping) ... any chance of it being in V5?

(Or does anyone know of a third party component that would build it into DBISAM??)

>It's also probably worth zipping the original corrupted tables and sending them to Tim
>(assuming he's got power) :).

... good suggestion, I'll add that to this thread later. Unfortunately I think I only have
the "repaired" versions, I don't think I kept a copy of the unrepaired files ... I am sure
the unrepaired files would have shown Tim something interesting.



Mon, Oct 16 2006 9:20 AM

Roy Lambert

NLH Associates

Team Elevate

adam


I'm sure Tim will give the real answers but here's my drivel

>>- duff network connections so garbage is being sent to the tables
>
>.. yup this could be an issue given power variation, but would a single-record post back
>actually damage any part of the DB Table other than the particular record it was posting
>back & could this actually damage the rest of the Table?

It could damage the header, e.g. when writing back the autoinc value, and if the header is damaged it could give rise to your problem.

>
>>- duff hard drive
>>- someone deliberately screwing it up
>>- another program overwriting bits of the hard drive
>
>.. any of the above 3 *might* be an issue, though I don't know how someone would screw up
>a table to mangle it in the way I am seeing ... particularly so that DBSys can 'repair' it
>but leave me with garbage! If someone wanted to screw up the system they would really just
>have to delete parts of the DB.

Again the main area of concern would be the header - I'm dubious about human intervention but the other two could do it.

>>If power is very problematic then I wouldn't use live queries at all. I'd select into a
>memory table, edit that and post the result using a transaction. I'd also look at
>flushbuffers and probably
>
>.. I use Flushbuffers in every "post" from my user's app to try to ensure no data is held
>anywhere in any Cache. The idea you suggest of using a Memory Table is interesting ... Do
>you have any code you wouldn't mind sharing to give me an idea of how to do it??? Wink

Not really - Ole posted some code ages ago in the binaries. In principle it's very easy.

1. Design the form using the real table, then when happy alter the datasource to point to the memory table.
2. In the memory table's AfterPost event build an SQL statement and use it to UPDATE the real table.

If there are no lookups just SELECT INTO 'memory\table'..... If there are lookups I'd create a memory table in the IDE, set the lookups, and populate it with data from the result of a SELECT statement.
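
Very roughly, and completely untested (MemQry and UpdQry are TDBISAMQuery components, MemTable a TDBISAMTable - all names invented, and check the exact memory table path syntax for your version):

procedure TEditForm.LoadRecord(ID: Integer);
begin
  // Copy the record into an in-memory table and point the form's datasource at it
  MemQry.SQL.Text := 'SELECT ID, A, B, C INTO "\Memory\Edit" FROM MyTable WHERE ID = :ID';
  MemQry.ParamByName('ID').AsInteger := ID;
  MemQry.ExecSQL;
  MemTable.Close;
  MemTable.DatabaseName := 'Memory';   // DBISAM's in-memory database
  MemTable.TableName := 'Edit';
  MemTable.Open;                       // the grid/edits are bound to MemTable, not the real table
end;

procedure TEditForm.MemTableAfterPost(DataSet: TDataSet);
begin
  // Push the edited values back to the real table in a single UPDATE
  UpdQry.SQL.Text := 'UPDATE MyTable SET A = :A, B = :B, C = :C WHERE ID = :ID';
  UpdQry.ParamByName('A').Value := DataSet.FieldByName('A').Value;
  UpdQry.ParamByName('B').Value := DataSet.FieldByName('B').Value;
  UpdQry.ParamByName('C').Value := DataSet.FieldByName('C').Value;
  UpdQry.ParamByName('ID').AsInteger := DataSet.FieldByName('ID').AsInteger;
  UpdQry.ExecSQL;                      // wrap in StartTransaction/Commit if you want it atomic
end;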

The only real problems are 1) it's a bit more work and 2) you lose the locking built into DBISAM. There are various ways round the latter, from simply checking if the record has changed since you got it (if so refresh the data and let the user have another go) to creating a small locking file of your own (write the record id in) and using that.

>>some form of mirroring or at least run a backup once an hour for transactional files.
>
>I have always shied away from stuff like automated hourly back-ups as I worry about how to
>cope with issues like testing the integrity of every table prior to doing the back up
>(since you don't want to back-up a corrupted version) plus a number of other issues ...
>its a complex area which I have poked around in only to come out thinking I don't have
>time to make a good enough job of it!

In that case what about logging ALL edits (table, record, field) into a separate transaction file and having a process of backing up the application tables once a day. Validate them and if OK clear down the transaction file. Then if you have a crash, check the transaction file (if corrupted start crying) and if OK bring back yesterday's data and update it with the transaction log.

I've done this myself in a couple of apps with low transaction volumes - it's not advisable for high levels of input.
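
The logging itself doesn't need to be clever - just an extra INSERT fired from each AfterPost, roughly like this (untested, all names invented, LogQry being a TDBISAMQuery and EditLog the log table):

procedure TEditForm.LogEdit(const ATable, AField: string; ARecID: Integer;
  const ANewValue: string);
begin
  // Append one row per field edit to the log table
  LogQry.SQL.Text :=
    'INSERT INTO EditLog (LoggedAt, TblName, RecID, FldName, NewValue) ' +
    'VALUES (:LoggedAt, :TblName, :RecID, :FldName, :NewValue)';
  LogQry.ParamByName('LoggedAt').AsDateTime := Now;
  LogQry.ParamByName('TblName').AsString    := ATable;
  LogQry.ParamByName('RecID').AsInteger     := ARecID;
  LogQry.ParamByName('FldName').AsString    := AField;
  LogQry.ParamByName('NewValue').AsString   := ANewValue;
  LogQry.ExecSQL;
end;

Recovery is then just restoring yesterday's backup and replaying EditLog in LoggedAt order.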

Final point - I'm doubtful that V4 would be that much more robust but why not upgrade?

Roy Lambert
Mon, Oct 16 2006 9:36 AM

"David Farrell-Garcia"
> I thought this was a reasonably safe way to proceed, regardless of
> occasional power disconnection ... I thought that the base data in a
> table would be OK & that only the data from the live record would be
> lost if there was an outage, or system crash for some other reason.

I would suggest that you move to DBISAM C/S.  If you are using local
DBISAM in a LAN environment, then your data can be in an unstable state
during a power-out situation and table corruption is not only possible,
it is likely.

--
David Farrell-Garcia
Whidbey Island Software, LLC
Mon, Oct 16 2006 9:38 AM

"John Hay"
Adam

>
> You can see the tables are totally mangled ... I ran a "repair table" with DBSys & this is
> the result: virtually all the data is gone & the Autoinc fields are way out ... I had
> thought I would retrieve at least a decent part of the table, but ALL the data is gone!

Without seeing the table pre-repair it is a bit hard to say, but my guess
would be that possibly the FAT has been messed up and the file's sectors
have become misaligned.  It may be possible to largely recover the data (as
long as it is in fact there) with a "smart" low-level (byte-by-byte) tool.
I have had this happen a couple of times in the past.

Does the site use FAT or NTFS?  If it is the former then I would advise
switching to NTFS.

John



Mon, Oct 16 2006 9:42 AM

adam
Dear Roy,

Thanks for the many useful suggestions, plenty for me to consider as I have an evening
beer overlooking the source of the river Nile here in Uganda ;)

You are right that the memory table implementation would not be difficult; I could even use
a ClientDataset, which I wrote code for a long time ago but have only used up to now on
systems which sometimes lose connection to their servers & can retry several times to post data.

Have spent today looking at the likely source of the problem & there is 1 computer to
which both problems have been traced. It is a pretty old Win2000 machine which is a bit
dodgy & which has been used by staff in casual ways which may have caused problems. I am
reinstalling & if it works OK for the week I am here I think I will get my client's
confidence back.

--

I haven't gone up to V4 yet, as by the time I wanted to / needed to, Tim said V5 was "just
about ready" (July I think :)) so I thought I would wait & do a bigger leap. Hum.

Adam
Mon, Oct 16 2006 9:57 AM

Roy Lambert

NLH Associates

Team Elevate

adam


>I haven't gone up to V4 yet, as by the time I wanted to / needed to, Tim said V5 was "just
>about ready" (July I think :)) so I thought I would wait & do a bigger leap. Hum.

errrrrr which year was that :)

Roy Lambert

Mon, Oct 16 2006 10:51 AM

adam
SPOT ON ... the faulty Windows machine was using FAT (& is not being used for the DB any more) AND I
was operating LAN not C/S ... I will change over, as I have C/S & can implement it with a
single switch.

Adam
Mon, Oct 16 2006 3:13 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Adam,

<< .. yup this could be an issue given power variation, but would a
single-record post back actually damage any part of the DB Table other than
the particular record it was posting back & could this actually damage the
rest of the Table? >>

DBISAM won't, but Windows sure will.  With single-user installations and no
OS flushing going on (see the link below), Windows can and will insert all
sorts of random junk into a table that has been written to if the machine
goes down while there is modified data for the file sitting in the Windows
file system buffers.   Basically the file system gets all screwed up and
other areas of memory end up in the file such as bits of running processes,
web pages, etc.

http://www.elevatesoft.com/dbisam4d5_buffering_caching.htm

See the section entitled "OS Buffering".
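
At the application level the counter-measure is simply to follow every Post with a FlushBuffers call, i.e. something like this (MyTable being any DBISAM table or query component):

MyTable.Post;
MyTable.FlushBuffers;  // flush DBISAM's buffers and have the OS write its file buffers to disk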

<< .. I use Flushbuffers in every "post" from my user's app to try to ensure
no data is held anywhere in any Cache. >>

In that case, the problem is most likely something other than the above,
i.e. a hardware issue of some sort or foul play.  There is still a small
chance that Windows is doing what I describe above, however,
especially if the power outage occurs in-between the write of the DBISAM
buffers to Windows and the Windows OS physical flush to disk.

<< This is a feature that is built in to some other DBs (i.e. create a
mirror of DB A as DB B, while checking that the files being mirrored are not
corrupt & doing any other housekeeping) ... any chance of it being in V5? >>

Not initially, no.  However, fail-safety is a priority with ElevateDB.

--
Tim Young
Elevate Software
www.elevatesoft.com
