Thread: How to improve speed for a large number of inserts?
Thu, Mar 31 2022 2:35 PM

Javierus

CYBER Informatica Avanzada, S.L.

Hello,
I'm migrating from BDE/Paradox (TTable) to EDB (TEDBTable), and so far everything is fine.
When the EDB version of my program runs, it automatically creates the database, creates the EDB tables, and transfers all the data from the Paradox tables to the EDB tables.

However, that transfer is very slow, and I'd like to know how to improve it.
I don't mind using a TEDBQuery, an SQL script, or whatever is needed.
Is there something that can be done?
NexusDB has a nice and easy trick where you tell the TnxQuery how many rows you are going to insert, and Params then becomes an array in which you fill in all the values.
Do we have something like that in ElevateDB? What can be done to improve the speed without compromising the data?

Thanks in advance
Fri, Apr 1 2022 2:33 AM

Roy Lambert

NLH Associates

Team Elevate

Javierus


ElevateDB's native transfer routine has to be generic, which makes it slower than a custom routine. If speed is really a problem (how slow is slow?) then I'd suggest building your own routine with both the Paradox and ElevateDB engines built in. Open the Paradox tables, loop through each with a while .. not Eof loop, and use a parameterised query to transfer the data into ElevateDB. Ignore anything like indices, referential integrity or triggers until the data is in; after all the data is transferred, then do the indices etc. Naturally you need to be sure the Paradox data is clean before you start.

You'd have something like:



function MakeEDBQuery(iDatabase: TEDBDatabase; Live: Boolean = True): TEDBQuery;
begin
  Result := TEDBQuery.Create(nil);
  Result.SessionName := iDatabase.SessionName;
  Result.DatabaseName := iDatabase.DatabaseName;
  Result.RequestSensitive := Live; // sensitive = live (editable) result set
end;

procedure DoTheTransfer;
var
  Updater: TEDBQuery;
begin
  Updater := MakeEDBQuery(db, False); // db is your TEDBDatabase
  try
    Updater.SQL.Text := 'INSERT INTO edbtable (fld1,fld2,fld3....fldn) VALUES (:fld1,:fld2,:fld3,.. :fldn)';
    Updater.Prepare; // prepare once, outside the loop

    pdoxTable.Open;
    pdoxTable.First;
    while not pdoxTable.Eof do
    begin
      Updater.ParamByName('fld1').Value := pdoxTable.FieldByName('fld1').Value;
      Updater.ParamByName('fld2').Value := pdoxTable.FieldByName('fld2').Value;
      Updater.ParamByName('fld3').Value := pdoxTable.FieldByName('fld3').Value;
      ....
      Updater.ParamByName('fldn').Value := pdoxTable.FieldByName('fldn').Value;
      Updater.ExecSQL; // actually run the insert for this record
      pdoxTable.Next;
    end;
  finally
    Updater.Free;
  end;
end;

It's down to you to make sure the fld and value types are compatible. Finally you can create the indices - have a look in the OLH for the syntax.
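
For example, once the data is in - table, index and column names here are made up, so check the OLH for the exact CREATE INDEX options:

// made-up names; index creation is just one SQL statement per index
Updater.SQL.Text := 'CREATE INDEX "ByFld1" ON "edbtable" ("fld1")';
Updater.ExecSQL;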

Roy Lambert
Fri, Apr 1 2022 12:33 PM

Javierus

CYBER Informatica Avanzada, S.L.

Hi Roy,

My fault; I didn't explain myself clearly.

What I'm doing is opening the Paradox table through the BDE, and the ElevateDB table using a TEDBTable.
Then I loop through the BDE table, inserting each record into the TEDBTable.

I'm testing it with one of my customers' databases, 12GB in 200 tables, which takes more than 12 hours, and that's a problem.

I didn't measure it, but reading is a minimal part of that time: Paradox-to-Paradox would take 2 hours or less.

How can I improve the write speed, then?
Would it be faster to use a query with INSERT INTO than doing it through a table?
Is it possible to use a multiple-values INSERT INTO? Would that be faster?
Would opening the tables Exclusive improve things?
I'm not using transactions; would it help if I open a transaction, post 10 or 1000 records and then commit?
Does any kind of bulk insert exist, as in other DB engines?
Should I create the tables without indexes and add them at the end?

Thanks in advance
Fri, Apr 1 2022 3:56 PM

Jose Eduardo Helminsky

HPro Informatica

Javierus

<<
What I'm doing is opening the Paradox table through the BDE, and the ElevateDB table using a TEDBTable.
Then I loop through the BDE table, inserting each record into the TEDBTable.
>>

My tips:

1) Open the destination tables (TEDBTable) in Exclusive mode

2) Do not use TCP/IP or network shared drives. Copy all the source tables to a local drive and do the copies locally.

3) Disable any AV software (though I doubt it will help much).

Regards

Eduardo
Sat, Apr 2 2022 6:09 AM

Roy Lambert

NLH Associates

Team Elevate

Javierus

>Would it be faster to use a query with INSERT INTO than doing it through a table?

Very probably, since queries don't carry a lot of the overhead that tables do - but you MUST remember to prepare the query.

>Is it possible to use a multiple-values INSERT INTO? Would that be faster?

It is possible, but I've never done it myself; there are examples on the SQL newsgroup. Whether it's faster I have no idea.
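
I've not tried it, so treat this as an untested sketch (names made up, and check the SQL manual or the newsgroup examples to confirm your version accepts a multi-row VALUES list):

// untested sketch - three rows per INSERT, all names hypothetical
Updater.SQL.Text :=
  'INSERT INTO edbtable (fld1, fld2) VALUES ' +
  '(:r1f1, :r1f2), (:r2f1, :r2f2), (:r3f1, :r3f2)';
Updater.Prepare;
// fill the six parameters from three source records, then:
Updater.ExecSQL;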

>Would opening the tables Exclusive improve things?

As Eduardo says, run everything off a local disk, no network. It's generally file sharing and locking that slows multi-user access down, so opened exclusively you should be OK.

>I'm not using transactions; would it help if I open a transaction, post 10 or 1000 records and then commit?

Probably, since changes are initially written to RAM and only flushed to disk on commit - it depends on the trade-off.
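
Something along these lines, bolted onto the loop from my earlier post - 1000 is an arbitrary batch size, so experiment; db is the TEDBDatabase the query was created on:

Counter := 0; // Counter: Integer, declared with the other locals
db.StartTransaction;
while not pdoxTable.Eof do
begin
  { ... set the params and call ExecSQL as before ... }
  Inc(Counter);
  if Counter mod 1000 = 0 then
  begin
    db.Commit;           // flush this batch to disk
    db.StartTransaction; // and start the next batch
  end;
  pdoxTable.Next;
end;
db.Commit; // commit whatever is left over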

>Does any kind of bulk insert exist, as in other DB engines?

Not that I know of.

>Should I create the tables without indexes and add them at the end?

YES

********************************************************************************************

Two things that can massively impact speed: using an SSD, and limiting user feedback (updating a progress display on every record is surprisingly expensive).

If you have money to spend you might consider seeing if Tim has spare time and could write a customised transfer suite for you.

Another thought has just occurred to me: is the conversion process bottlenecked on disk writes, memory or CPU? Depending on the trade-offs it might be faster to use multiple threads to do the conversion.
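
If you do try threads, remember each thread needs its own session objects on both sides - neither the BDE nor ElevateDB likes a session being shared across threads. Very roughly (untested, all names made up):

procedure TransferTableInThread(const TableName: string);
begin
  TThread.CreateAnonymousThread(
    procedure
    var
      Session: TEDBSession;
    begin
      Session := TEDBSession.Create(nil); // one private session per thread
      try
        Session.SessionName := 'Xfer_' + TableName;
        // open a TEDBDatabase and TEDBQuery on this session, plus a
        // BDE TSession/TTable for the source, then run the same
        // prepared INSERT loop as in the single-threaded version
      finally
        Session.Free;
      end;
    end).Start;
end;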


Roy