Same database record compare |
Wed, Mar 29 2006 12:56 PM
"Jerry Clancy"

I have to rid some large files of duplicate records. I would like to do this using generic (file-independent) field-level comparisons, but since I can't have two records active at once, how would you suggest storing the first record in memory before skipping to the next record and performing the comparison? Any thoughts welcome.

Jerry
Wed, Mar 29 2006 2:01 PM
Michael Baytalsky

How about using two datasets?

Michael
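A minimal sketch of this two-dataset approach: two TDBISAMTable components opened on the same physical table, with one cursor running a record behind the other. It assumes an active index that orders duplicates adjacently and that comparing each field's string representation is sufficient; the directory path and table name below are placeholders.

procedure DeleteAdjacentDuplicates;
var
  Prev, Cur: TDBISAMTable;  // requires the DBISAM unit (dbisamtb) in the uses clause
  I: Integer;
  Same: Boolean;
begin
  Prev := TDBISAMTable.Create(nil);
  Cur := TDBISAMTable.Create(nil);
  try
    // Both components point at the same physical table
    Prev.DatabaseName := 'd:\data';   // placeholder directory
    Cur.DatabaseName := 'd:\data';
    Prev.TableName := 'Clients';      // placeholder table
    Cur.TableName := 'Clients';
    Prev.Open;
    Cur.Open;
    Prev.First;
    Cur.First;
    Cur.Next;  // Cur runs one record ahead of Prev
    while not Cur.Eof do
    begin
      // Generic, file-independent comparison via the Fields collection
      Same := True;
      for I := 0 to Cur.FieldCount - 1 do
        if Cur.Fields[I].AsString <> Prev.Fields[I].AsString then
        begin
          Same := False;
          Break;
        end;
      if Same then
        Cur.Delete  // Delete positions Cur on the following record
      else
      begin
        Prev.Next;
        Cur.Next;
      end;
    end;
  finally
    Cur.Free;
    Prev.Free;
  end;
end;

The field loop is what keeps this generic: nothing in it names a specific table structure, so the same routine works on any table whose duplicates sort together under the active index.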
Wed, Mar 29 2006 2:36 PM
"Scott Martin"

Jerry,

Here are several options I have used; hopefully one will help in your situation:

1. Create a hash field that is a hash of the fields you want to compare (hashvalue = field1 + field2 + ... + fieldN), then copy the records with unique hash values to another file.

2. Here's a SQL statement I received that locates duplicates:

SQL.Text :=
  'SELECT DisplayName DupeName, COUNT(*) AS DupCount INTO MEMORY\Temp FROM'+
  ' Clients GROUP BY'+
  ' DisplayName;'+

  'SELECT '+cCLIENTSQLFIELDS+' FROM Clients'+
  ' JOIN MEMORY\Temp Temp ON DisplayName = DupeName'+
  ' WHERE Temp.DupCount > 1 ORDER BY DisplayName';
// execute the query here

3. Use a memtable/variant array as a placeholder?

hth,
Scott.
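A rough sketch of option 1 above, with one simplification to flag: instead of storing a hash field in the table, it keeps the concatenated field values in a sorted TStringList and copies only first occurrences to a destination table. A real hash (e.g. MD5) over the same concatenation would cut memory use on large files. Src and Dst are assumed to be open TDBISAMTable components with identical structures.

procedure CopyUniqueRecords(Src, Dst: TDBISAMTable);
var
  Seen: TStringList;  // from the Classes unit
  Key: string;
  I: Integer;
begin
  Seen := TStringList.Create;
  try
    Seen.Sorted := True;  // sorted list gives binary-search lookups via IndexOf
    Src.First;
    while not Src.Eof do
    begin
      // Build the comparison key from every field value; #1 is a
      // separator assumed not to occur in the data
      Key := '';
      for I := 0 to Src.FieldCount - 1 do
        Key := Key + Src.Fields[I].AsString + #1;
      if Seen.IndexOf(Key) = -1 then
      begin
        // First occurrence: keep it
        Seen.Add(Key);
        Dst.Append;
        for I := 0 to Src.FieldCount - 1 do
          Dst.Fields[I].Value := Src.Fields[I].Value;
        Dst.Post;
      end;
      Src.Next;
    end;
  finally
    Seen.Free;
  end;
end;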
Wed, Mar 29 2006 4:28 PM
Tim Young [Elevate Software]

Jerry,

I would do what Michael suggests and simply use two separate TDBISAMTable components that point to the same physical table, or follow Scott's advice and use the SQL method.

--
Tim Young
Elevate Software
www.elevatesoft.com
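For the SQL method, a hedged sketch of running a script like Scott's through a TDBISAMQuery. The directory path is a placeholder, SELECT * stands in for Scott's cCLIENTSQLFIELDS constant, and it assumes, as Scott's snippet does, that DBISAM executes the two-statement script and opens the result set of the final SELECT.

procedure ListDuplicateClients;
var
  Query: TDBISAMQuery;
begin
  Query := TDBISAMQuery.Create(nil);
  try
    Query.DatabaseName := 'd:\data';  // placeholder directory
    Query.SQL.Text :=
      'SELECT DisplayName DupeName, COUNT(*) AS DupCount INTO MEMORY\Temp'+
      ' FROM Clients GROUP BY DisplayName;'+
      'SELECT * FROM Clients'+
      ' JOIN MEMORY\Temp Temp ON DisplayName = DupeName'+
      ' WHERE Temp.DupCount > 1 ORDER BY DisplayName';
    Query.Open;  // result set of the final SELECT: only the duplicated rows
    while not Query.Eof do
    begin
      // Each row is a Clients record whose DisplayName occurs more than once
      Query.Next;
    end;
    Query.Close;
  finally
    Query.Free;
  end;
end;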
Wed, Mar 29 2006 10:39 PM
"Jerry Clancy"

Michael,

Thanks. This is exactly what I've done in the interim. It's working but has a minor bug I'm checking out.

Jerry
Wed, Mar 29 2006 10:48 PM
"Jerry Clancy"

Scott,

Good to hear from you. See my previous post to Michael. I took the dual-dataset route, which seems to be working, almost anyway. Not an SQL guy, but this is an interesting approach; I'll hang onto it.

Jerry
Wed, Mar 29 2006 10:50 PM
"Jerry Clancy"

That's actually what I did. I initially considered this but wasn't sure it would work. Seems it does. Thanks.

Jerry