Messages 21 to 30 of 49 total
Thread Varchar2
Mon, Mar 13 2006 5:17 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Ole,

<< VARCHAR data is truncated at retrieval, >>

VARCHAR data wouldn't normally be truncated during retrieval, it would
simply retrieve whatever was stored in the column.  Also, for Roy - trailing
spaces are not removed  - they are kept in place if that is what was entered
or specified.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Mar 13 2006 5:20 PM

"Ole Willy Tuv"
Tim,

<< Why ?  Do your users or your application traditionally add spaces to the
end of strings ? >>

This is weird. All you have to do to implement CHAR and VARCHAR in an
industry-compliant manner is to right-pad CHAR data (if necessary,
depending on how it's stored) and right-trim VARCHAR data at *retrieval*.
Whether the data is *stored* fixed- or variable-length is entirely a
vendor decision.
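A minimal sketch of the semantics Ole describes, in Python for illustration (the column width and values are invented examples; whether VARCHAR should be trimmed at retrieval at all is disputed later in this thread):

```python
def store_char(value: str, width: int) -> str:
    """CHAR storage: right-pad with spaces to the declared width."""
    if len(value) > width:
        raise ValueError("value exceeds declared column width")
    return value.ljust(width)

def retrieve_varchar(stored: str) -> str:
    """VARCHAR retrieval, as Ole proposes: right-trim trailing spaces."""
    return stored.rstrip(" ")

# CHAR(8): 'abc' comes back padded to the full declared width
assert store_char("abc", 8) == "abc" + " " * 5
# VARCHAR: trailing spaces are dropped at retrieval time
assert retrieve_varchar("abc   ") == "abc"
```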

Ole

Mon, Mar 13 2006 7:03 PM

Michael Baytalsky
Tim,

What's the maximum length of a VARCHAR field in DBISAM vs. ElevateDB?
The lack of a VARCHAR that can hold up to 2 KB (ideally 4-16 KB) is a
major PITA for me. I have fields that need to contain a relatively small
amount of text (but more than 250 bytes) and that must be read for every
record. The table is medium-sized, about 50,000 records. I had to use
memo fields for that, and I need 2 or more such fields per row, so in
order to retrieve a record I have to go back and forth to the server
several times. This slows down scrolling incredibly!

Now, let's calculate what happens if I use large CHAR fields instead, say
I'm allowed 2 KB fields. The record will then be close to 6 KB (also
because of the other 10 CHAR fields). A table with only 50,000 records
will be about 300 MB (50,000 x 6 KB). And this is just one small table. I
have tables with similar requirements holding 200,000 records, and they
naturally grow with time. I think the lack of a proper VARCHAR is a major
missing feature nowadays, and you can't say that it's up to the vendor
how to store them, because if you don't store them compactly your tables
will just grow out of proportion. I'm not sure I understand "10 megs in 1
meg chunks" - it depends a lot on how you implement it, doesn't it? Are
you saying that you can read data noticeably faster than MySQL, which
supports all flavors of VARCHAR? Given a proper implementation you would
read 2 megs in 1 meg chunks and 50 KB in 8 KB chunks - something like that.

The most reasonable suggestion, I think, would be to treat VARCHARs as
CHARs if they are under 250 bytes, and otherwise keep ~100 bytes in the
record and move the rest to a BLOB (remember the way memos were done in
Paradox?), *but read_it_with_the_record_when_fetched!*
This would not require any structural changes to the engine and would
allow for large VARCHAR fields that are not BLOBs. When I need an
extra-large field, I don't want it read with every record. However, when
I specify a text field, I do want it fetched with each record, so I don't
have to go back and forth only to read 200 bytes of data from a small
memo field.
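A hypothetical sketch of the hybrid scheme proposed above (Python for illustration; the 250-byte inline limit and ~100-byte prefix come from the message, while the names and data structures are invented):

```python
from dataclasses import dataclass
from typing import Optional

INLINE_LIMIT = 250   # values at or under this size live entirely in the record
PREFIX_BYTES = 100   # bytes kept in the record when the value overflows

@dataclass
class StoredVarchar:
    prefix: bytes               # stored inside the record itself
    overflow: Optional[bytes]   # stored out-of-row, but fetched with the record

def store(value: bytes) -> StoredVarchar:
    if len(value) <= INLINE_LIMIT:
        return StoredVarchar(prefix=value, overflow=None)
    return StoredVarchar(prefix=value[:PREFIX_BYTES],
                         overflow=value[PREFIX_BYTES:])

def fetch(sv: StoredVarchar) -> bytes:
    # unlike a memo/BLOB, the overflow travels with the record fetch,
    # so reading the full value costs no extra server round trip
    return sv.prefix + (sv.overflow or b"")

assert store(b"short").overflow is None          # small value stays inline
assert fetch(store(b"x" * 300)) == b"x" * 300    # large value round-trips
```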


Regards,
Michael


Tim Young [Elevate Software] wrote:
> Michael,
>
> << Well i think that VarChar can save a lot of space when talking about
> thousand of records. Also since it's smaller it would be easier and faster
> to backup! (either compressed either not). >>
>
> Space savings always comes at the price of computational complexity, and
> that adds more time than it saves in a lot of cases.   Think of it this way,
> do you think that it is faster to read 10 megs in 1 meg chunks, or 4 megs in
> 1k chunks ?
>
Tue, Mar 14 2006 3:11 AM

Roy Lambert

NLH Associates

Team Elevate

Tim

>VARCHAR data wouldn't normally be truncated during retrieval, it would
>simply retrieve whatever was stored in the column. Also, for Roy - trailing
>spaces are not removed - they are kept in place if that is what was entered
>or specified.

WHY?

This one, in my view at least, is a MASSIVE change. The behaviour of all string fields in my databases is about to alter in a way I don't want. I would (unhappily) put in the work needed to cope with the "null change", but this one I don't even want to think about.

Please have a rethink.

Roy Lambert
Tue, Mar 14 2006 3:16 AM

Roy Lambert

NLH Associates

Team Elevate

Ole


Sounds like a good approach to me

Roy Lambert
Tue, Mar 14 2006 3:16 AM

Roy Lambert

NLH Associates

Team Elevate

Tim

>Why ? Do your users or your application traditionally add spaces to the end
>of strings ?

No, but should a database engine depend on end-user behaviour?

Roy Lambert

ps they do sometimes
Tue, Mar 14 2006 5:24 AM

Chris Erdal
"Tim Young [Elevate Software]" <timyoung@elevatesoft.com> wrote in
news:5032232B-C024-4708-9A42-5857E92165E6@news.elevatesoft.com:


> Why ?  Do your users or your application traditionally add spaces to
> the end of strings ?
>

Most certainly YES !

Especially the more elderly secretaries who used typewriters not so long
ago, and who always ended a word with a space just in case there was
another word coming, before glancing at the text once more...

It caught me out a few times when searching indexes for typed-in words
without trimming them first.
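A tiny illustration of the failure mode Chris describes (Python; the key values are invented):

```python
# "smith " was typed with a trailing space, as typists often do
index_keys = ["smith ", "jones"]

# an exact lookup on the untrimmed key misses the entry
assert "smith" not in index_keys

# trimming the stored keys first makes the lookup succeed
assert "smith" in [k.rstrip() for k in index_keys]
```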

--
Chris
Tue, Mar 14 2006 11:33 AM

"Mike Shkolnik"
> This is weird. All you have to do to implement CHAR and VARCHAR in an
> industry-compliant manner is to right-pad CHAR data (if necessary,
> depending on how it's stored) and right-trim VARCHAR data at *retrieval*.
> Whether the data is *stored* fixed- or variable-length is entirely a
> vendor decision.

Agreed - most databases work this way.

On the client side, the engine (usually the session instance) could
additionally offer an option to trim string fields or not.

--
With best regards, Mike Shkolnik
EMail: mshkolnik@scalabium.com
http://www.scalabium.com

Tue, Mar 14 2006 1:21 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Ole,

<< This is weird. All you have to do to implement CHAR and VARCHAR in an
industry-compliant manner is to right-pad CHAR data (if necessary,
depending on how it's stored) and right-trim VARCHAR data at *retrieval*.
Whether the data is *stored* fixed- or variable-length is entirely a
vendor decision. >>

I understand the standard.  The reason for not padding the CHAR type was to
prevent some serious issues with existing data that is transferred to
ElevateDB.  IOW, it *will* break things pretty badly, whereas not
right-trimming VARCHAR columns does not affect existing data.  And as I
stated already, VARCHAR is not right-trimmed according to the standard (9.2
Store assignment).

However, I guess my best bet is to simply make the whole thing standard and
let the chips fall where they may.

--
Tim Young
Elevate Software
www.elevatesoft.com

Tue, Mar 14 2006 8:14 PM

Charalabos Michael
Hello Tim,

> Space savings always comes at the price of computational complexity, and
> that adds more time than it saves in a lot of cases.   Think of it this way,
> do you think that it is faster to read 10 megs in 1 meg chunks, or 4 megs in
> 1k chunks ?

1st Case)
10 MB = 10,485,760 bytes
1 MB = 1,048,576 bytes

10,485,760 div 1,048,576 = 10 operations

2nd Case)
4 MB = 4,194,304 bytes
1 KB = 1,024 bytes

4,194,304 div 1,024 = 4,096 operations

The 1st, I guess. Let me ask you: what's better and faster for the
operating system or TDataset? To cache 10 MB of data or 50 MB?

Also, with VARCHAR we can avoid using memo fields for storing
small string data.

The only thing that worries me about VARCHAR is fragmentation
of the database, which I guess is the only thing that can
slow it down. (But hey! That's why OptimizeTable exists!)
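The chunk-count arithmetic in this exchange can be sketched as follows (Python for illustration; ceiling division covers a trailing partial chunk):

```python
MB = 1024 * 1024

def read_ops(total_bytes: int, chunk_bytes: int) -> int:
    # number of sequential reads needed, rounding up for a partial final chunk
    return -(-total_bytes // chunk_bytes)

assert read_ops(10 * MB, 1 * MB) == 10   # 10 MB in 1 MB chunks
assert read_ops(4 * MB, 1024) == 4096    # 4 MB in 1 KB chunks
```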


--
Charalabos Michael - [Creation Power] - http://www.creationpower.com -
http://www.creationpower.gr