Thread: ImportTable Question
Wed, Apr 22 2015 3:12 PM

indyrhett

IndySoft

Is there any way to efficiently pull all the data from one TDataset to another? For example, if I did a query on a SQL Server database, could I then call a function like "ImportTable" to bring all the data over into an identical ElevateDB structure? I could obviously loop through all the data, but we want to migrate from another DB product called AbsoluteDB, and they had this ImportTable function. It would be tedious to replace it in 300+ places.

Thanks,

Rhett Price
Thu, Apr 23 2015 5:47 AM

Roy Lambert

NLH Associates

Team Elevate

Rhett

>Is there any way to efficiently pull all the data from one TDataset to another? For example, if I did a query on a SQL Server database, could I then call a function like "ImportTable" to bring all the data over into an identical ElevateDB structure? I could obviously loop through all the data, but we want to migrate from another DB product called AbsoluteDB, and they had this ImportTable function. It would be tedious to replace it in 300+ places.

Just had a quick look on the web for AbsoluteDB and ImportTable, and it looks as though it just does it: no data conversion or anything. I'm guessing here, but I think it's just a small routine that loops through the source dataset and copies the records into their dataset. There is no way I know of to provide anything else for an unlimited set of database types. Whilst ElevateDB doesn't have that facility built in, it would take a very short time to subclass TEDBTable and add the capability.

I've never done it for TEDBTable, but I have built a clone facility into my string-based TDataset descendant.
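
Something along these lines would do it. This is just a rough, untested sketch to show the shape of it: the unit name (edbcomps) may differ on your ElevateDB version, and it assumes both tables have identical columns in the same order.

  unit ImportableTable;

  interface

  uses
    DB, edbcomps; // edbcomps is the ElevateDB VCL unit - adjust if yours differs

  type
    TImportableEDBTable = class(TEDBTable)
    public
      procedure ImportTable(Source: TDataSet);
    end;

  implementation

  procedure TImportableEDBTable.ImportTable(Source: TDataSet);
  var
    I: Integer;
  begin
    Source.First;
    while not Source.Eof do
    begin
      Append;
      // Copy every column by position - assumes identical structures
      for I := 0 to Source.FieldCount - 1 do
        Fields[I].Assign(Source.Fields[I]);
      Post;
      Source.Next;
    end;
  end;

  end.

If you keep the method name the same, your existing ImportTable calls should need little more than a change of component type.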

Roy Lambert
Thu, Apr 23 2015 9:49 AM

indyrhett

IndySoft

Thanks for the response. Yeah, we actually do fall back to a manual loop if an ImportTable call ever fails (it has happened). However, our #1 priority is speed (we are using it for a cache file), and when we do just a standard loop to insert all the data, it is slower by a factor of 10. I'm guessing ImportTable uses some specially tuned way of getting large amounts of data into the table. Thanks for confirming there is currently no built-in way to do this.

- Rhett

Roy Lambert wrote:

Rhett

>Is there any way to efficiently pull all the data from one TDataset to another? For example, if I did a query on a SQL Server database, could I then call a function like "ImportTable" to bring all the data over into an identical ElevateDB structure? I could obviously loop through all the data, but we want to migrate from another DB product called AbsoluteDB, and they had this ImportTable function. It would be tedious to replace it in 300+ places.

Just had a quick look on the web for AbsoluteDB and ImportTable, and it looks as though it just does it: no data conversion or anything. I'm guessing here, but I think it's just a small routine that loops through the source dataset and copies the records into their dataset. There is no way I know of to provide anything else for an unlimited set of database types. Whilst ElevateDB doesn't have that facility built in, it would take a very short time to subclass TEDBTable and add the capability.

I've never done it for TEDBTable, but I have built a clone facility into my string-based TDataset descendant.

Roy Lambert
Thu, Apr 23 2015 11:32 AM

Roy Lambert

NLH Associates

Team Elevate

Rhett


Without access to AbsoluteDB's source code I can only guess at what they're doing. There are a few things that can be done to speed up the process, depending on the size of the data, datatype matching, etc.

The fastest way to do a transfer generally consists of grabbing the raw data from disk as bytes, doing a bit of massaging, and writing it back to disk in the new format. If the data is held in memory then a slightly different technique is required, and if it's virtual data it gets even worse.

If you're just looping through, then as long as you're sure that the columns are identical you can a) address fields by index and b) use .Assign rather than :=, e.g.

  NewTable.Fields[0].Assign(OldTable.Fields[0]);

Sometimes it's faster to generate a bunch of SQL statements and run them in batches (just to make sure you don't swamp memory), or possibly to use parameterised queries.
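
The parameterised variant might look roughly like this (untested, and the table/column names are made up; it just assumes the usual Prepare/ParamByName/ExecSQL handling on TEDBQuery):

  procedure CopyViaParams(Source: TDataSet);
  var
    Target: TEDBQuery;
  begin
    Target := TEDBQuery.Create(nil);
    try
      Target.DatabaseName := 'CacheDB'; // adjust to your session/database setup
      Target.SQL.Text :=
        'INSERT INTO Cache (ID, Name) VALUES (:ID, :Name)';
      Target.Prepare; // prepared once, executed once per row
      Source.First;
      while not Source.Eof do
      begin
        Target.ParamByName('ID').AsInteger  := Source.FieldByName('ID').AsInteger;
        Target.ParamByName('Name').AsString := Source.FieldByName('Name').AsString;
        Target.ExecSQL;
        Source.Next;
      end;
    finally
      Target.Free;
    end;
  end;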

Having tables opened in exclusive mode can also be faster.

Finally, disconnect the tables from all visual controls during the transfer.
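
Putting those last two tips together, the housekeeping around the copy might look roughly like this (untested; Exclusive has to be set before the table is opened, and I'm assuming nothing else needs the table during the transfer):

  NewTable.Exclusive := True;   // open the target exclusively if you can
  NewTable.Open;
  OldTable.DisableControls;     // detach both datasets from any visual controls
  NewTable.DisableControls;
  try
    // ... the record-copying loop goes here ...
  finally
    NewTable.EnableControls;
    OldTable.EnableControls;
  end;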

I don't have SQL Server or AbsoluteDB so I can't try and do any speed comparisons for you :-(


Roy Lambert