Messages 1 to 10 of 12 total
Thread: Tables resulting size from migrators
Thu, Aug 23 2018 1:35 PM

Jorge Ortiz

Rianxeira S.A.

Hi,

I just imported a table with the ADS Migrator to create an EDB database.

Just one table:

The table was:



EXECUTE IMMEDIATE 'CREATE TABLE "agtapunte"
(
"codigo" VARCHAR(6) COLLATE "UNI",
"fecha" DATE,
"hora" VARCHAR(8) COLLATE "UNI",
"tipo" VARCHAR(1) COLLATE "UNI",
"neto" DECIMAL(19,4),
"comision" DECIMAL(19,4),
"importe" DECIMAL(19,4),
"nota" VARCHAR(60) COLLATE "UNI",
"operador" VARCHAR(30) COLLATE "UNI",
"banco" VARCHAR(6) COLLATE "UNI",
"empresa" VARCHAR(6) COLLATE "UNI",
"envios" INTEGER
)
VERSION 1.00
READWRITE
UNENCRYPTED
INDEX PAGE SIZE 8192
BLOB BLOCK SIZE 1024
PUBLISH BLOCK SIZE 1024
PUBLISH COMPRESSION 0
MAX ROW BUFFER SIZE 32768
MAX INDEX BUFFER SIZE 65536
MAX BLOB BUFFER SIZE 32768
MAX PUBLISH BUFFER SIZE 32768';



EXECUTE IMMEDIATE 'CREATE INDEX "REPORTE" ON "agtapunte" ("codigo" COLLATE "UNI", "empresa" COLLATE "UNI", "fecha", "tipo" COLLATE "UNI")';

EXECUTE IMMEDIATE 'CREATE INDEX "ORDEN" ON "agtapunte" ("codigo" COLLATE "UNI", "empresa" COLLATE "UNI", "fecha", "hora" COLLATE "UNI")';

EXECUTE IMMEDIATE 'CREATE INDEX "CODIGO" ON "agtapunte" ("codigo" COLLATE "UNI")';

EXECUTE IMMEDIATE 'CREATE INDEX "FECHA" ON "agtapunte" ("fecha")';

EXECUTE IMMEDIATE 'CREATE INDEX "HORA" ON "agtapunte" ("hora" COLLATE "UNI")';

EXECUTE IMMEDIATE 'CREATE INDEX "FECHAEXT" ON "agtapunte" ("fecha", "hora" COLLATE "UNI")';


The result was:

SOURCE DATA ->

agtapunte.adt -> 2,124,159 MB (table)
agtapunte.adi -> 588,022 MB (index)

RESULTING DATA ->

agtapunte.EDBIdx -> 3,369,840 MB
agtapunte.EDBTbl -> 4,413,436 MB

rows -> 14,122,942

The resulting data is 2.8 times bigger.

With almost 1 TB of databases, and only 2 TB on the server's DB disk array, this is a concern for us.

We have already migrated some data to EDB, and it works very smoothly and fast.

Any suggestions to reduce the size of the resulting databases?

Thanks for your kind response.
Fri, Aug 24 2018 2:40 AM

Roy Lambert

NLH Associates

Team Elevate

Jorge


Have you moved from ANSI to UNICODE or was the ADS database UNICODE as well?

Roy Lambert
Fri, Aug 24 2018 7:45 AM

Adam Brett

Orixa Systems

Jorge

>>The resulting data is 2.8 times bigger.

>>With almost 1 TB of databases, and only 2 TB on the server's DB disk array, this is a concern for us.

There are a number of options for compression of data while it is stored in an EDB data-table / database. The default setting does not compress at all.

I have only rarely used compression at the table and database level, as my systems don't really need it, so I am not an expert. But I remember that when I did use it the result was much smaller data-files without too much loss of performance.

EDB Manager help says: "ElevateDB uses the standard ZLib compression algorithm for compressing data"  ...
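As a sketch only: in ElevateDB, compression is specified per BLOB/CLOB column as a ZLib level. The column name and table below are invented for illustration (Jorge's table has no BLOB/CLOB columns), so check the EDB Manager help for the exact syntax:

```sql
-- Hypothetical sketch: compression is declared per BLOB/CLOB column,
-- as a ZLib level (0 = none .. 9 = maximum); "ejemplo" and "nota_larga"
-- are invented names, not part of Jorge's schema
EXECUTE IMMEDIATE 'CREATE TABLE "ejemplo"
(
"codigo" VARCHAR(6) COLLATE "UNI",
"nota_larga" CLOB COLLATE "UNI" COMPRESSION 6
)';
```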
Fri, Aug 24 2018 10:08 AM

Roy Lambert

NLH Associates

Team Elevate

Adam


Are you thinking about compression for transmission to/from a server? The only compression I know about is the BLOB/CLOB compression for on-disk storage. Whilst that's what my memory and a quick check say, it's very possible I'm wrong.

Even with compression it's not going to help much, looking at the size of the fields.

Roy Lambert
Fri, Aug 24 2018 10:43 AM

Matthew Jones

Jorge Ortiz wrote:

> The resulting data is 2.8 times bigger.

I presume the old database was Unicode, and the new one is too? Obviously Ansi to Unicode would give you a doubling.

--

Matthew Jones
Fri, Aug 24 2018 11:45 AM

Jorge Ortiz

Rianxeira S.A.

<<"Matthew Jones" wrote:

Jorge Ortiz wrote:

> The resulting data is 2.8 times bigger.

I presume the old database was Unicode, and the new one is too? Obviously Ansi to Unicode would give you a doubling.

--

Matthew Jones>>

You are right! The old tables are
Collation=ansi

Would you recommend staying with ANSI then?

Can I migrate ANSI to ANSI?

Thanks for any advice.
Fri, Aug 24 2018 11:50 AM

Jorge Ortiz

Rianxeira S.A.

Also...

Is there a benefit to migrating ANSI to Unicode?
Fri, Aug 24 2018 1:49 PM

Jorge Ortiz

Rianxeira S.A.

<<Jorge Ortiz wrote:

<<"Matthew Jones" wrote:

Jorge Ortiz wrote:

> The resulting data is 2.8 times bigger.

I presume the old database was Unicode, and the new one is too? Obviously Ansi to Unicode would give you a doubling.

--

Matthew Jones>>


You are right! The old tables are
Collation=ansi

Would you recommend staying with ANSI then?

Can I migrate ANSI to ANSI?

Thanks for any advice.
>>


UPDATE with ANSI TO ANSI
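For reference, the ANSI version of the declaration just swaps the collation in the DDL above. A sketch, assuming the "ANSI" collation name (check the migrator's generated script for the exact collation it chose):

```sql
-- Sketch: same table as above, with COLLATE "ANSI" instead of COLLATE "UNI"
-- (abbreviated; remaining columns and table options as in the original script)
EXECUTE IMMEDIATE 'CREATE TABLE "agtapunte"
(
"codigo" VARCHAR(6) COLLATE "ANSI",
"fecha" DATE,
"hora" VARCHAR(8) COLLATE "ANSI",
...
)';
```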

The result was:

SOURCE DATA ->

agtapunte.adt -> 2,126,026 MB (table)
agtapunte.adi -> 588,783 MB (index)

RESULTING DATA ->

agtapunte.EDBIdx -> 2,748,120 MB
agtapunte.EDBTbl -> 2,760,746 MB

rows: 14,134,935

The table sizes are acceptable.
Any tips to get smaller indexes?

Thanks for any advice
Sat, Aug 25 2018 3:36 AM

Roy Lambert

NLH Associates

Team Elevate

Jorge

>Is there a benefit to migrating ANSI to Unicode?

Yes, if you want to take advantage of native languages, allowing users to enter queries in their own language and have reports etc. sorted by their language; otherwise no.

Just the prejudiced view of an English-speaking monoglot.

Roy
Sat, Aug 25 2018 4:09 AM

Roy Lambert

NLH Associates

Team Elevate

Jorge


>The result was:
>
>SOURCE DATA ->
>
>agtapunte.adt -> 2,126,026 MB (table)
>agtapunte.adi -> 588,783 MB (index)
>
>RESULTING DATA ->
>
>agtapunte.EDBIdx -> 2,748,120 MB
>agtapunte.EDBTbl -> 2,760,746 MB
>
>rows: 14,134,935
>
>The table sizes are acceptable.
>Any tips to get smaller indexes?

I've had a quick look at the ADS indexing web page, and it looks as though they follow a different approach from Tim's, so it's not a like-for-like comparison.

I would have expected a bigger drop than that moving from Unicode to ANSI, so the first thing is to check that they have all been switched to ANSI. After that, about all I can think of to suggest is to review what each index is required for and whether it's still needed.

It might be worth dropping all the indices and then rebuilding them one by one to see what the individual sizes are. Just looking at the column sizes and multiplying by the row count, you'd expect about one third of that size.
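The drop-and-rebuild test could be sketched like this, using the index names from the migration script above (I believe EDB's DROP INDEX takes a FROM clause, but verify against the manual):

```sql
-- Sketch: drop each secondary index, then re-create them one at a time,
-- noting the size of agtapunte.EDBIdx after each CREATE
EXECUTE IMMEDIATE 'DROP INDEX "REPORTE" FROM "agtapunte"';
EXECUTE IMMEDIATE 'DROP INDEX "ORDEN" FROM "agtapunte"';
EXECUTE IMMEDIATE 'DROP INDEX "FECHAEXT" FROM "agtapunte"';
-- ... drop CODIGO, FECHA and HORA the same way, then, for example:
EXECUTE IMMEDIATE 'CREATE INDEX "CODIGO" ON "agtapunte" ("codigo" COLLATE "ANSI")';
-- check the .EDBIdx file size on disk, then add the next index and check again
```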

Roy Lambert