Thread: BLOB questions
Sun, May 12 2013 8:29 AM

Mark Wilsdorf

I've searched the Forums and looked at the User Guide, and think I know the answers to my BLOB questions, but I'd appreciate comments if any of the following are NOT correct:

(1) BLOB field limit per DBISAM table is 128.

(2) If no data is stored in a BLOB field, that field takes up NO SPACE in the .BLB file...until data is stored there.

(3) Once a set of BLOB blocks has been allocated for a field in a specific record, they are not released if smaller BLOB data is stored in that field, or even if the record is deleted, unless OptimizeTable is called.

(3a) Therefore, implementing my own scheme for re-using deleted records should reduce the growth in size of the .BLB file.

Are these assertions correct?
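
As a rough illustration of the model being asked about, here is a toy Python sketch. None of it is DBISAM's actual code; the 512-byte block size and the free-list behaviour are assumptions, which the replies below refine:

import math

BLOCK_SIZE = 512  # assumed for illustration; DBISAM's BLOB block size is configurable per table

class BlbFile:
    """Toy stand-in for a .BLB file: a block count plus a free list."""
    def __init__(self):
        self.total_blocks = 0  # the file itself never shrinks without an optimize
        self.free_blocks = 0   # blocks released by deletes, available for re-use

    def write_blob(self, num_bytes):
        if num_bytes == 0:
            return 0  # assertion (2): a NULL BLOB takes no .BLB space
        needed = math.ceil(num_bytes / BLOCK_SIZE)
        reused = min(needed, self.free_blocks)
        self.free_blocks -= reused
        self.total_blocks += needed - reused  # grow only when the free list runs dry
        return needed

    def delete_blob(self, blocks):
        self.free_blocks += blocks  # released for re-use; file size is unchanged

f = BlbFile()
blocks = f.write_blob(10_000)  # ceil(10000 / 512) = 20 blocks
f.delete_blob(blocks)
f.write_blob(100)              # re-uses 1 freed block; 19 stay on the free list
print(f.total_blocks)          # 20: a high-water mark until OptimizeTable rewrites the file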
Mon, May 13 2013 4:47 AM

Matthew Jones

Hmm, my experience is that you are generally right about the re-use behaviour, but I think you don't have to do anything clever. Simply delete a record when you are done with it (or null its blob) and DBISAM will re-use the blocks as much as it can. It won't grow bigger than it needs to in order to hold all the data you give it. I can't see how managing the re-use yourself would help at all.

The only thing you may want to do is tune the BLOB block size, which may or may not let you fit every blob into a single block.

/Matthew Jones/
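
To make that block-size trade-off concrete, here is the arithmetic as a short Python sketch; the block sizes and the 3,000-byte blob below are made up for illustration:

import math

def blocks_and_slack(blob_bytes, block_size):
    """Blocks a blob occupies, and the bytes wasted in its final block."""
    blocks = math.ceil(blob_bytes / block_size)
    return blocks, blocks * block_size - blob_bytes

# Larger blocks mean fewer blocks per blob, but more slack at each blob's tail;
# smaller blocks waste less space but spread each blob over more blocks.
for block_size in (512, 4096, 65536):
    blocks, slack = blocks_and_slack(3_000, block_size)
    print(f"{block_size:>6}-byte blocks: {blocks} block(s), {slack} byte(s) of slack")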
Tue, May 14 2013 12:15 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Mark,

<< (3a) Therefore, implementing my own scheme for re-using deleted records
should reduce the growth in size of the .BLB file. >>

I wouldn't recommend this at all.  Just trying to manage the "hiding" of the
deleted records alone would be immensely complex, especially since DBISAM
already does this for you.

Perhaps you can give me an idea of what you're having an issue with in terms
of .BLB file size?  The only time you should see outrageous growth in the
.BLB file is if you were constantly deleting records with very large BLOB
fields and then re-inserting records with very small/non-existent BLOB
fields.  The BLOB block re-use design assumes a similar average size of
BLOBs in a given table for a given BLOB field, so this is pretty rare for
most applications.

Tim Young
Elevate Software
www.elevatesoft.com
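
To see why that delete-large/insert-small pattern is the problem case, here is a hedged Python simulation of the churn Tim describes, using the same toy free-list model as earlier (again, not DBISAM's actual code):

import math

BLOCK_SIZE = 512    # assumed for illustration
total, free = 0, 0  # blocks in the .BLB file / blocks on the free list

def write(n_bytes):
    global total, free
    needed = math.ceil(n_bytes / BLOCK_SIZE)
    reused = min(needed, free)
    free -= reused
    total += needed - reused
    return needed

# Churn: insert a very large blob, delete its record, replace it with a tiny one.
for _ in range(100):
    big = write(1_000_000)  # roughly 1 MB: 1954 blocks
    free += big             # record deleted: all of its blocks go on the free list
    write(10)               # tiny replacement re-uses just one of them

print(f"live data: {100 * 10} bytes; .BLB file: {total * BLOCK_SIZE} bytes")
# The file settles near the high-water mark of one large blob (about 1 MB), because
# re-use caps further growth, but it still dwarfs the roughly 1 KB of live data
# until OptimizeTable rewrites the file.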
Wed, May 15 2013 10:36 AM

Matthew Jones

> The BLOB block re-use design assumes a similar average size of
> BLOBs in a given table for a given BLOB field,

If this ever gets worked on again, it would be good to re-evaluate that assumption. One of my core applications stores files of up to 1GB in records, alongside files of just a few bytes, and I can't tell in advance which it will be. That probably explains why its users occasionally complain about the large file sizes it grows to. Perhaps I should use two tables, one for large and one for small...

/Matthew Jones/
Wed, May 15 2013 11:00 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Matthew,

<< If it ever got worked on, it would be good to re-evaluate this. One of my
core applications stores files of up to 1Gb in records, alongside files of
just a few bytes. >>

In the *same* BLOB field?  The BLOBs have to use the same BLOB field in
order to cause this issue.  IOW, if you have a memo field and a binary BLOB
field, the two will each have a separate allocation with the more
appropriate size for each.

Tim Young
Elevate Software
www.elevatesoft.com
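
One reading of that reply is that block re-use is tracked per BLOB field, so freed blocks from one field are not handed to another. A minimal Python sketch of that interpretation (the field names and sizes are invented):

import math
from collections import defaultdict

BLOCK_SIZE = 512
total = defaultdict(int)  # blocks allocated per BLOB field
free = defaultdict(int)   # freed blocks per BLOB field; re-use never crosses fields

def write(field, n_bytes):
    needed = math.ceil(n_bytes / BLOCK_SIZE)
    reused = min(needed, free[field])
    free[field] -= reused
    total[field] += needed - reused

write("Memo", 1_000_000)       # large memo: 1954 blocks charged to "Memo"
write("Thumb", 2_000)          # small binary blob: 4 blocks charged to "Thumb"
free["Memo"] += total["Memo"]  # the memo record is deleted
write("Thumb", 2_000)          # allocates fresh "Thumb" blocks; "Memo"'s freed
                               # blocks are not borrowed by the other field
print(dict(total))             # {'Memo': 1954, 'Thumb': 8}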
Wed, May 15 2013 11:52 AM

Matthew Jones

Ah, no, I would create a record, put data in a blob of unknown size, and then later delete the record completely. Repeat forever. If the blob's storage is fully released for re-allocation when the record is deleted, so that storing another blob of 1 byte will not tie up 1GB of storage, then it's just down to my data. (And the data can get big if they go mad with the options...)

/Matthew Jones/
Wed, May 15 2013 12:57 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Matthew,

<< Ah, no, I would create a record, put data in a blob of unknown size, and
then later delete the record completely. Repeat forever. If the blob's
storage is fully released for re-allocation when the record is deleted, so
that storing another blob of 1 byte will not tie up 1GB of storage, then
it's just down to my data. (And the data can get big if they go mad with the
options...) >>

Yeah, if you're storing both 1B and 1GB in the same BLOB field across
multiple records, then yes, you're going to have issues with the re-use of
deleted records and you'll need to optimize the table occasionally.

Tim Young
Elevate Software
www.elevatesoft.com
Thu, May 16 2013 4:35 AM

Matthew Jones

> Yeah, if you're storing both 1B and 1GB in the same BLOB field
> across

Okay, so would using two different blob fields make a difference?
That is, if I have a field called "SmallBlob" and another called "LargeBlob", and I store the data in the appropriate one according to whether it is over a threshold, would I get less "fragmentation"? That way big data would only ever go in the LargeBlob field, never 1-byte data. If the management is per-field, then I could do that instead of using different tables.

It's not the world's biggest problem, but it would be nice to reduce the maintenance for this particular use.

/Matthew Jones/
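
A sketch of that two-field routing as application logic in Python; the field names and the 64 KB threshold are hypothetical, and DBISAM itself provides nothing like route_blob:

SMALL_THRESHOLD = 64 * 1024  # 64 KB, an invented cut-off

def route_blob(data: bytes) -> dict:
    """Decide which BLOB field one record's data belongs in, keeping each
    field's blobs at a similar magnitude so block re-use stays efficient."""
    if len(data) <= SMALL_THRESHOLD:
        return {"SmallBlob": data, "LargeBlob": None}
    return {"SmallBlob": None, "LargeBlob": data}

small = route_blob(b"x" * 100)         # goes to SmallBlob; LargeBlob stays NULL
large = route_blob(b"x" * 10_000_000)  # goes to LargeBlob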
Thu, May 16 2013 6:08 AM

Jose Eduardo Helminsky

HPro Informatica

Matthew Jones

It doesn't matter how big your data is or how many BLOB fields you have; DBISAM will handle this according to how the table is used. If you delete a record, the space in the BLOB file will be re-used.

I have tables with .BLB files reaching 10GB and I have never seen a problem there.

Eduardo
Thu, May 16 2013 6:28 AM

Matthew Jones

That's what I thought, but obviously if that isn't the case I'd like to know. I may
just be over-interpreting the situation.

> The only time you should see outrageous growth in the
> .BLB file is if you were constantly deleting records with very
> large BLOB fields and then re-inserting records with very
> small/non-existent BLOB fields.  The BLOB block re-use design
> assumes a similar average size of BLOBs in a given table for a
> given BLOB field, so this is pretty rare for most applications.

This does sound as though once you have allocated a 1GB blob in a particular field, the space used by that blob will remain allocated to that field, so storing 10 bytes would actually take up the space previously allocated, wasting a lot. If using a different field overcomes this, then I'm happy to do that.

The problem I have is that if there are frequent internet outages, my service will load these large files several times, so the records can get these large blobs. They will be deleted within 10 minutes, but then "used again" for small data. Now, if the large blob's space is allocated on an as-needed basis for other data, then fine; but if a record's blob space stays allocated to that field forever, then it would help to do something about it. IIRC I added a 3am optimisation to my code, but sometimes not all of my threads can release the database, so it can't optimise the table.

Anyway, it's not a big deal, but I like to understand things so I can consider
options (or decide that it doesn't matter).

/Matthew Jones/
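
For the 3am maintenance problem, one option is to retry the optimise over a window instead of attempting it once. A rough Python-shaped sketch of the pattern; acquire_exclusive_access, release_exclusive_access, and optimize_table are placeholders for however your application pauses its worker threads and calls DBISAM's OptimizeTable:

import time

def nightly_optimize(acquire_exclusive_access, release_exclusive_access,
                     optimize_table, attempts=12, wait_seconds=300):
    """Try for up to an hour (12 x 5 minutes) rather than giving up the first
    time a worker thread still has the table open."""
    for _ in range(attempts):
        if acquire_exclusive_access():
            try:
                optimize_table()  # e.g. DBISAM's OptimizeTable on this table
                return True
            finally:
                release_exclusive_access()
        time.sleep(wait_seconds)  # a thread is still busy; back off and retry
    return False  # could not get exclusive access tonight; log and try tomorrow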