Thread: BLOB BLOCK SIZE
Wed, Feb 3 2010 10:03 AM

Roy Lambert

NLH Associates

Team Elevate

What are the chances on this ever being settable on a per column rather than a per table basis?

Roy Lambert
Wed, Feb 3 2010 10:51 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Roy,

<< What are the chances on this ever being settable on a per column rather
than a per table basis? >>

The enterprise server can/will be able to do this.  The current version
could do so, but it would involve either a) a lot more .edbblb files per
table, or b) a pretty fragmented .edbblb file.

--
Tim Young
Elevate Software
www.elevatesoft.com

Wed, Feb 3 2010 11:23 AM

Roy Lambert

NLH Associates

Team Elevate

Tim


><< What are the chances on this ever being settable on a per column rather
>than a per table basis? >>
>
>The enterprise server can/will be able to do this. The current version
>could do so, but it would involve either a) a lot more .edbblb files per
>table, or b) a pretty fragmented .edbblb file.

I'm not keen on b) (I'm guessing performance degradation and a lot more disk space). I could live with a) as long as I don't have to manage it (i.e. not as I would have to if I simply created several tables).

I don't know how many of us have my sort of problem with multiple CLOB columns in a table with the data size varying wildly across them (and naturally down some of them).

I'm going to write myself a little utility to calculate space usage / slack space. Can you let me know what the blob structure is? I'm guessing that a blob block size of 512 will not actually store 512 characters because of pointers.
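The core of such a utility might look something like this (a sketch in Python; the per-block overhead figure is just a placeholder guess, not EDB's real header size, and it ignores the larger header on a BLOB's first block):

```python
import math

def slack_space(blob_bytes: int, block_size: int, overhead: int) -> int:
    """Bytes of slack for one BLOB stored in fixed-size blocks.

    overhead is an assumed per-block header size; EDB actually uses a
    larger header on the first ("start") block than on later blocks.
    """
    usable = block_size - overhead           # payload bytes per block
    blocks = max(1, math.ceil(blob_bytes / usable))
    allocated = blocks * block_size          # total file space consumed
    used = blob_bytes + blocks * overhead    # payload plus block headers
    return allocated - used

# e.g. a 1000-byte CLOB in 512-byte blocks with 13 bytes of overhead per
# block needs 3 blocks, so roughly a third of the allocated space is slack
```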

Roy Lambert
Wed, Feb 3 2010 2:10 PM

Roy Lambert

NLH Associates

Team Elevate

Tim


I just realised that unless I can get the actual space used after any compression is applied, I have no way of calculating slack. Tish and tush.

Roy Lambert
Wed, Feb 3 2010 4:29 PM

"Simon Page"
Roy Lambert wrote:

> I just realised, unless I can get the actual space used after any
> compression is applied I have no way of calculating slack.

I wrote this when we upgraded to v4.x to check the benefit of various
compression levels on our blob data (written for D6):

function GetCompressedSize(Field: TField; CompLevel: Integer): Integer;
var
  InBuf: string;
  OutBuf: Pointer;
begin
  InBuf := Field.AsString;
  if CompLevel = 0 then
    Result := Length(InBuf)
  else
    begin
      CompressBuf(PChar(InBuf), Length(InBuf), CompLevel, OutBuf, Result);
      DeAllocMem(OutBuf);
    end;
end;

If you're checking the current compression level, just use the applicable
TDBISAMFieldDef's Compression property for the level. It's a brute-force
approach, but I couldn't find another way at the time without directly
accessing the internals of the blob file.

Maybe Tim can suggest an easier way?

--

Simon
Thu, Feb 4 2010 2:56 AM

Roy Lambert

NLH Associates

Team Elevate

Simon


Thanks, I'll give it a go

Roy Lambert
Thu, Feb 4 2010 9:37 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Roy,

<< I'm not keen on b) (I'm guessing performance degradation and lots more
disk space). I could live with a) as long as I don't have to manage it (ie
not as I would have to if I simply created several tables). >>

No, you wouldn't have to manage it.

<< I don't know how many of us have my sort of problem with multiple CLOB
columns in a table with the data size
varying wildly across them (and naturally down some of them). >>

It's probably fairly common.

<< I'm going to write myself a little utility to calculate space usage /
slack space. Can you let me know what the blob structure is. I'm guessing
that a blob block size of 512 will not actually store 512 characters because
of pointers >>

The block header size depends upon the type of block:

  START_BLOCK_HEADER_SIZE = (BUFFER_HEADER_SIZE+SizeOf(Byte)+(SizeOf(Integer)*3));
  NEXT_BLOCK_HEADER_SIZE = (BUFFER_HEADER_SIZE+SizeOf(Byte)+SizeOf(Integer));

A "start" block is the first block in a particular BLOB, so there's always
just one of those; the rest are "next" blocks.
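Working those constants through for a 512-byte block gives the per-block payload. A quick sketch (BUFFER_HEADER_SIZE is an internal EDB constant whose value isn't stated here, so the figure below is purely an assumption for illustration):

```python
SIZEOF_BYTE = 1
SIZEOF_INTEGER = 4              # Delphi's 32-bit Integer
BUFFER_HEADER_SIZE = 8          # assumed value, not EDB's documented one

START_BLOCK_HEADER_SIZE = BUFFER_HEADER_SIZE + SIZEOF_BYTE + SIZEOF_INTEGER * 3
NEXT_BLOCK_HEADER_SIZE = BUFFER_HEADER_SIZE + SIZEOF_BYTE + SIZEOF_INTEGER

BLOCK_SIZE = 512
start_payload = BLOCK_SIZE - START_BLOCK_HEADER_SIZE  # first block of a BLOB
next_payload = BLOCK_SIZE - NEXT_BLOCK_HEADER_SIZE    # every later block
```

Under that assumption, each 512-byte block does indeed store somewhat fewer than 512 characters, as Roy suspected.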

--
Tim Young
Elevate Software
www.elevatesoft.com

Thu, Feb 4 2010 10:13 AM

Roy Lambert

NLH Associates

Team Elevate

Tim

Thanks for that info. However, as I said below, I then realised the compression problem. Simon Page has posted some code from DBISAM, but CompressBuf isn't recognised. Is there a unit I need to include, or am I stuffed?


Roy Lambert
Mon, Feb 8 2010 10:14 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Roy,

<< Thanks for that info. However, as I said below I then realised about the
compression problem. Simon Page has posted some code from DBISAM but
CompressBuf isn't recognised. Is there a unit I need to include or am I
stuffed? >>

Check out the TEDBCompressionManager object in the edbcompressmgr.pas unit.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Feb 8 2010 11:24 AM

Roy Lambert

NLH Associates

Team Elevate

Tim

>Check out the TEDBCompressionManager object in the edbcompressmgr.pas unit.

Will do when I've sorted out my threaded bug.

Thanks

Roy Lambert