Thread: Table Corruption
Mon, Apr 24 2006 3:09 PM

Vincent
DBISAM 3.29/3.30

I have a rather large table (over 1.1 million records) with large records (over 1800 bytes each). Recently the table started to have problems: SQL and filters that had worked well using the available
indexes still work, only extremely slowly, as if they are no longer using those indexes. SetRange and FindKey still work OK.

I have tried everything I can think of. Here is a brief (not necessarily complete) list; a rough code sketch of some of these steps follows below:

-Verify
-Repair
-Optimize
-CopyTable
-Upgrade to DBISAM 3.30 (from 3.29)
-Recreate indexes with AddIndex method
-Recreate indexes with RestructureTable method (individually and with single command)
-With/Without index compression
-Defined indexes in different orders
-Defined only single index on one field
-Even renaming the current .idx file, putting an empty one built from scratch in its place, and repairing
-Creating a new empty table and copying the records from one table to the other with a quick fix-up program

Anyway, all end in the same result.
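
For readers who have not used DBISAM's maintenance methods, here is a rough Delphi sketch of the kind of pass described in the list above. The method names are the ones mentioned in this post; the directory, table, and index names are made up, and the exact parameter lists vary between DBISAM versions, so check them against your version's documentation rather than treating this as a verified reference:

// Rough sketch of a maintenance pass over a suspect table.
// Requires DBISAM's table unit in the uses clause (dbisamtb in the
// versions I have seen; an assumption, as is each parameter list).
var
  Table: TDBISAMTable;
begin
  Table := TDBISAMTable.Create(nil);
  try
    Table.DatabaseName := 'd:\data';  // hypothetical data directory
    Table.TableName := 'history';     // hypothetical table name
    // Maintenance methods work on a closed table and need exclusive
    // access to the physical files.
    Table.RepairTable;                // repair any structural damage
    Table.OptimizeTable;              // rebuild and compact the table and indexes
    // Recreate a single index from scratch (BDE-style AddIndex
    // signature assumed: name, fields, options):
    Table.Exclusive := True;
    Table.Open;
    Table.AddIndex('ixAccount', 'AccountNo', []);
    Table.Close;
  finally
    Table.Free;
  end;
end;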

Two things worked:

1. Upgrading the table from DBISAM 3 to DBISAM 4. Once I did that, the queries against the table ran correctly. This is not an option for production just yet.
2. Creating a new empty table and rebuilding the data from a different set of text files. This is not the best option; I am not sure all of the data is there yet. A cursory glance reveals more records
in the old table than in the new one, and I am not sure whether that is accurate or a function of some kind of corruption.

Any ideas on something I might not have done, or what could be done to fix my problem?

Mon, Apr 24 2006 5:01 PM

Vincent
One other question: if the application does not update data, is used in file-sharing mode, and all of the tables/queries are set to read-only, would that prevent corruption in the event of an
abnormal shutdown, etc.?
Mon, Apr 24 2006 11:49 PM

Vincent
Found the cause of the first problem:

In the GetCostOptimizeLevel function, when determining whether to use an index or not, multiplying 1819 (the physical record size) by 1,185,625 (the record count) resulted in -2,138,315,421: the
actual value should be 2,156,651,875, but the upper limit of a 32-bit Integer is 2,147,483,647, so the product wrapped around. I got around it by typecasting PhysicalRecordSize and CurrentCount
to Int64 and changing RecordCost to Int64.
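
The wraparound is easy to reproduce outside DBISAM. Here is a minimal standalone Delphi sketch of the same arithmetic; the variable names echo the ones above rather than DBISAM's actual source:

program OverflowDemo;
{$APPTYPE CONSOLE}
{$OVERFLOWCHECKS OFF} // let the 32-bit multiply wrap instead of raising EIntOverflow

var
  PhysicalRecordSize, CurrentCount: Integer;
  RecordCost32: Integer;
  RecordCost64: Int64;
begin
  PhysicalRecordSize := 1819;  // bytes per record, from above
  CurrentCount := 1185625;     // records in the table, from above
  // The true product, 2,156,651,875, exceeds High(Integer) = 2,147,483,647,
  // so a 32-bit multiply wraps around to -2,138,315,421.
  RecordCost32 := PhysicalRecordSize * CurrentCount;
  // Casting one operand to Int64 promotes the whole multiply to 64 bits.
  RecordCost64 := Int64(PhysicalRecordSize) * CurrentCount;
  WriteLn('32-bit RecordCost: ', RecordCost32);  // prints -2138315421
  WriteLn('64-bit RecordCost: ', RecordCost64);  // prints 2156651875
end.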

Tue, Apr 25 2006 12:11 AM

"Clive"
I may be off base, but I suspect your problem is due to large file support,
or more specifically that you don't have it enabled in the engine and your
tables are big.

The TDBISAMEngine component has a property called LargeFileSupport; try
setting it to True. That should help.
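
For anyone who does need large file support, enabling it is only a couple of lines. A minimal sketch, assuming DBISAM's default global Engine instance and assuming the property must be changed while the engine is inactive (check your version's docs):

// Sketch: enable support for table files larger than 4 GB.
// Assumes the global Engine instance; with a TDBISAMEngine component
// dropped on a form or data module, set the same property there instead.
Engine.Active := False;            // assumption: engine must be inactive to change it
Engine.LargeFileSupport := True;
Engine.Active := True;             // tables opened from now on may grow past 4 GB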


Cheers
Clive


"Vincent" <zcrewbcs@cox.net> wrote in message
news:053E2529-13A3-4694-9536-85CA639B48B5@news.elevatesoft.com...
> Found the cause of the first problem:
>
> In the GetCostOptimizeLevel function, when determining whether to use an
> index or not, multiplying 1819 (the physical record size) by 1,185,625
> (the record count) resulted in -2,138,315,421: the actual value should be
> 2,156,651,875, but the upper limit of a 32-bit Integer is 2,147,483,647,
> so the product wrapped around. I got around it by typecasting
> PhysicalRecordSize and CurrentCount to Int64 and changing RecordCost to
> Int64.

Tue, Apr 25 2006 12:35 AM

Vincent
Thanks for your response. I am aware of the LargeFileSupport property; I have run into the need for it in the past with some other tables growing beyond 4 GB. However, in this case
the .dat file is a little over 2 GB, the .idx file is a little over 1 GB, and the .blb file is a little over 300,000. As I understand it, the 4 GB limitation applies to each file individually. I do believe
the index usage problem is caused by the result of the above function being an Integer that overflows the maximum allowed value; once tweaked as mentioned, the table performed like it
used to.

Thanks again for your response; I was starting to get a little lonely. ;-)

"Clive" <dd@dddd.com> wrote:

>I may be off base, but I suspect your problem is due to large file support,
>or more specifically that you don't have it enabled in the engine and your
>tables are big.
>
>The TDBISAMEngine component has a property called LargeFileSupport; try
>setting it to True. That should help.
>
>Cheers
>Clive

Tue, Apr 25 2006 12:53 AM

"Clive"
We in New Zealand are all still up!

I'm also having index problems, in that joins are getting done incorrectly
when there is no index, unless you turn query optimisation off. It may or
may not be related, but it used to be OK in previous versions. I don't have
tables as large as yours, though.


"Vincent" <zcrewbcs@cox.net> wrote in message
news:EB847E5D-8106-4E3D-8649-5757D6BFD85C@news.elevatesoft.com...
> Thanks for your response. I am aware of the LargeFileSupport property; I
> have run into the need for it in the past with some other tables growing
> beyond 4 GB. However, in this case the .dat file is a little over 2 GB,
> the .idx file is a little over 1 GB, and the .blb file is a little over
> 300,000. As I understand it, the 4 GB limitation applies to each file
> individually. I do believe the index usage problem is caused by the result
> of the above function being an Integer that overflows the maximum allowed
> value; once tweaked as mentioned, the table performed like it used to.
>
> Thanks again for your response; I was starting to get a little lonely. ;-)
>
> "Clive" <dd@dddd.com> wrote:
>
>>I may be off base, but I suspect your problem is due to large file support,
>>or more specifically that you don't have it enabled in the engine and your
>>tables are big.
>
>>The TDBISAMEngine component has a property called LargeFileSupport; try
>>setting it to True. That should help.
>
>>Cheers
>>Clive
>
>

Tue, Apr 25 2006 4:59 AM

"Walter Matte"
Vincent:

As I too have not fully leaped to version 4... (waiting for v5 to completely
move to 4, lol)

I have been following this thread.  Thanks for sharing your findings.

Walter

"Vincent" <zcrewbcs@cox.net> wrote in message
news:EB847E5D-8106-4E3D-8649-5757D6BFD85C@news.elevatesoft.com...
> Thanks for your response. I am aware of the LargeFileSupport property; I
> have run into the need for it in the past with some other tables growing
> beyond 4 GB. However, in this case the .dat file is a little over 2 GB,
> the .idx file is a little over 1 GB, and the .blb file is a little over
> 300,000. As I understand it, the 4 GB limitation applies to each file
> individually. I do believe the index usage problem is caused by the result
> of the above function being an Integer that overflows the maximum allowed
> value; once tweaked as mentioned, the table performed like it used to.
>
> Thanks again for your response; I was starting to get a little lonely. ;-)
>
> "Clive" <dd@dddd.com> wrote:
>
>>I may be off base, but I suspect your problem is due to large file support,
>>or more specifically that you don't have it enabled in the engine and your
>>tables are big.
>
>>The TDBISAMEngine component has a property called LargeFileSupport; try
>>setting it to True. That should help.
>
>>Cheers
>>Clive
>
>

Tue, Apr 25 2006 11:03 AM

Tim Young [Elevate Software]

Vincent,

<< Found the cause of the first problem:

In the GetCostOptimizeLevel function, when determining whether to use an
index or not, multiplying 1819 (the physical record size) by 1,185,625 (the
record count) resulted in -2,138,315,421: the actual value should be
2,156,651,875, but the upper limit of a 32-bit Integer is 2,147,483,647, so
the product wrapped around. I got around it by typecasting PhysicalRecordSize
and CurrentCount to Int64 and changing RecordCost to Int64. >>

This was a problem with both version 3 and version 4, and it has been
corrected in version 4 (as of 4.14), which now uses an Int64 for the
optimization calculations:

http://www.elevatesoft.com/scripts/incident.dll?action=viewrep&release=4.14&type=f&incident=1899

--
Tim Young
Elevate Software
www.elevatesoft.com

Tue, Apr 25 2006 11:24 AM

Vincent
Thanks. I figured it was fixed in 4 since it worked right off the bat. I should
have looked through the incident reports.