Thread Buffers
Tue, Feb 20 2007 3:57 AM

Roy Lambert

NLH Associates

Team Elevate

Following another thread, I thought I'd try the impact of increasing the buffers, so I set the following to 25 times the default:

MaxTableDataBufferSize
MaxTableDataBufferCount
MaxTableIndexBufferSize
MaxTableIndexBufferCount
MaxTableBlobBufferSize
MaxTableBlobBufferCount
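
In code, the change looks roughly like this (a sketch only, not the actual
routine: it scales whatever the current values are by 25, and assumes the
settings are changed while the engine is inactive):

 with Engine do
 begin
   Active := False;                                         // deactivate before changing settings
   MaxTableDataBufferSize := MaxTableDataBufferSize * 25;
   MaxTableDataBufferCount := MaxTableDataBufferCount * 25;
   MaxTableIndexBufferSize := MaxTableIndexBufferSize * 25;
   MaxTableIndexBufferCount := MaxTableIndexBufferCount * 25;
   MaxTableBlobBufferSize := MaxTableBlobBufferSize * 25;
   MaxTableBlobBufferCount := MaxTableBlobBufferCount * 25;
   Active := True;                                          // reactivate with the new buffer limits
 end;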

The result was that my optimization routine, which I run overnight (as and when I remember, and feel like it), only reached 61% complete on the big table after 12 hours (c. 130k records, 1.2GB, custom FTI on several fields).

I deleted the code, and optimization again runs in c. 6-7 hours.


Roy Lambert
Tue, Feb 20 2007 7:36 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Roy,

<< Following another thread I thought I'd try the impact of increasing the
buffers so I set the following to 25 times the default >>

There are diminishing returns when it comes to increasing the buffering, and
25 times is way past the point of getting any return for the increased
memory.  It doesn't surprise me that the performance gets worse.  DBISAM's
buffering isn't really intended to be used for buffering huge amounts of
data per table.

--
Tim Young
Elevate Software
www.elevatesoft.com

Wed, Feb 21 2007 3:46 AM

Roy Lambert

NLH Associates

Team Elevate

Tim


So any idea of the maximum useful increment?

Roy Lambert
Wed, Feb 21 2007 10:40 AM

"Donat Hebert \(Worldsoft\)"
This has improved performance for our application. I tried various
combinations, and it has been quite a while since I changed this on our
server ...

 with Engine do
 begin
   Active := False;                    // engine must be inactive while changing settings
   EngineSignature := SetSig;
   CreateTempTablesInDatabase := True;
   MaxTableDataBufferSize := 131072;   // default 32,768  x 4
   MaxTableIndexBufferSize := 131072;  // default 65,536  x 2
   Active := True;                     // reactivate with the new settings
 end;

hth  Donat.

"Roy Lambert" <roy.lambert@skynet.co.uk> wrote in message
news:F8BA1C12-E5EF-448D-8841-6AE2CD8B7274@news.elevatesoft.com...
> Tim
>
>
> So any idea of the maximum useful increment?
>
> Roy Lambert
>

Wed, Feb 21 2007 11:05 AM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Roy,

<< So any idea of the maximum useful increment? >>

I wouldn't go beyond a couple of megabytes per table buffering type (2 MB for
data, 2 MB for indexes, 2 MB for BLOBs).
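
In code, that ceiling would look something like this (a sketch; it assumes the
*Size properties are in bytes, so 2 MB = 2,097,152, and that the engine is
inactive while the settings are changed):

 with Engine do
 begin
   Active := False;
   MaxTableDataBufferSize := 2097152;  // ~2 MB for data buffering
   MaxTableIndexBufferSize := 2097152; // ~2 MB for index buffering
   MaxTableBlobBufferSize := 2097152;  // ~2 MB for BLOB buffering
   Active := True;
 end;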

--
Tim Young
Elevate Software
www.elevatesoft.com

Thu, Feb 22 2007 10:29 AM

"Donat Hebert \(Worldsoft\)"
Just to share findings with others who may be interested: we tested a heavy
build/update process on two machines (the slow machine takes around 24
minutes, the other around 14 minutes, as a 'base').

We also tested the other routines we normally run, to ensure we weren't
getting degradation somewhere else.

The best increases for us were:

X4 data and X2 indices (X4 indices slowed things down a bit ??), or
X8 data and X8 indices

We found that if we went too large, i.e. X12 or X16, speed started to
decrease. We weren't using BLOBs.

The performance increase over the 'factory' release settings is around 10%,
which certainly helps. We also played with the commit interval, and the best
overall performance came from setting it to 500. I'm sure there are a number
of factors at play that you have to test in your own apps. Most performance
gains typically come from optimizing the SQL we run or, as noted above, from
a better equipment configuration.
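
Donat doesn't show the commit-interval code; a common way to get a commit
interval of 500 is to wrap the batch in a transaction and commit every 500
records. A sketch (DBISAMDatabase1, i, Count and ApplyOneUpdate are
placeholder names, not his actual routine):

 DBISAMDatabase1.StartTransaction;
 try
   for i := 1 to Count do
   begin
     ApplyOneUpdate(i);           // placeholder: post one record update
     if (i mod 500) = 0 then      // commit interval of 500
     begin
       DBISAMDatabase1.Commit;
       DBISAMDatabase1.StartTransaction;
     end;
   end;
   DBISAMDatabase1.Commit;        // commit any remaining updates
 finally
   if DBISAMDatabase1.InTransaction then
     DBISAMDatabase1.Rollback;    // roll back if an error interrupted the batch
 end;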

Donat.

Thu, Feb 22 2007 11:16 PM

Sam Davis
Roy Lambert wrote:

> Following another thread I thought I'd try the impact of increasing the buffers so I set the following to 25 times the default
>
> MaxTableDataBufferSize
> MaxTableDataBufferCount
> MaxTableIndexBufferSize
> MaxTableIndexBufferCount
> MaxTableBlobBufferSize
> MaxTableBlobBufferCount
>
> The result was my optimization routine which I run overnight (as and when I remember, and feel like it) only reached 61% complete on the big table after 12 hours (c130k records, 1.2Gb, custom FTI on several fields)
>
> Deleted the code and optimization again runs in c 6 - 7 hours.
>
>
> Roy Lambert

Roy,
   A couple of things. Have you tried optimizing the table using the
same index that you're using to access it? If you are going sequentially
through the table, that should help.

   Are you writing to this table? If not, have you tried making it
readonly or exclusive? Maybe there is some overhead with locking that is
slowing it down. Failing that, you can always buy a big hulking RAM disk. :-)
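
Both suggestions in code, as a sketch (MyTable and the index name are
placeholders, and the OptimizeTable(IndexName) form is assumed from the
description above of optimizing on the access index):

 MyTable.Close;
 MyTable.Exclusive := True;                 // exclusive open avoids shared-locking overhead
 MyTable.Open;
 MyTable.OptimizeTable('SequentialIndex');  // rebuild in the order of the index used to access it
 MyTable.Close;
 MyTable.Exclusive := False;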

Sam
Fri, Feb 23 2007 3:26 AM

Roy Lambert

NLH Associates

Team Elevate

Sam


> A couple of things. Have you tried optimizing the table using the
>same index that you're using to access it? If you are going sequentially
>through the table, that should help.

It gets optimised every so often. In fact optimisation was the test I used to see if increasing the buffers had any good effect.

> Are you writing to this table? If not then have you tried making it
>readonly or exclusive? Maybe there is some overhead with locking that is
>slowing it down.

Yup, it is written to.

>Failing that, you can always buy a big hulking RAM disk. :-)

OK, good solution. I'm accepting donations. :-)

Roy Lambert