Thread: Oracle

Messages 1 to 10 of 28 total
Sat, Feb 23 2008 6:16 PM

Sanford Aranoff
If EDB can easily deal with millions of records, how about
billions? Can it compete with Oracle? If so, maybe Larry
will buy ElevateSoft, and then you guys can retire!
Sat, Feb 23 2008 6:38 PM

Dave Harrison
Sanford Aranoff wrote:
> If EDB can easily deal with millions of records, how about
> billions? Can it compete with Oracle? If so, maybe Larry
> will buy ElevateSoft, and then you guys can retire!

Keep dreaming. :-)

It will take a long, long time to add tens of millions of rows to an EDB
table, or to DBISAM or NexusDB, or to any other Delphi database for that
matter. They are not really enterprise-class databases, and Oracle's
reign is quite safe. The main problem is the inefficient use of memory
when it comes to building indexes. It will take 10+ hours to add 10
million rows to an EDB table; MySQL can add the same data in 10 minutes,
and Oracle won't be far behind.

All of these Delphi databases are fine for vertical market apps that
have less than 1 million rows. But they all slow down drastically when
adding more than 1 million rows.

Dave
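
The usual workaround for the index bottleneck Dave describes looks roughly
like this in BDE-style Delphi code. This is a minimal sketch, assuming
DBISAM 4's TDBISAMDatabase/TDBISAMTable components; the table, field names,
batch size, and the AddIndex call are illustrative, and exact method
signatures may differ. The idea: keep only the primary index during the
load, commit in batches, and build secondary indexes at the end.

// Sketch: bulk load with batched transactions; the secondary index is
// built once at the end instead of being updated on every Post.
// uses SysUtils, DB, dbisamtb;  (assumed DBISAM 4 VCL unit name)
procedure BulkLoad(Db: TDBISAMDatabase; Tbl: TDBISAMTable);
const
  BatchSize = 10000;
  TotalRows = 10000000;
var
  I: Integer;
begin
  Tbl.Open;
  Db.StartTransaction;
  try
    for I := 1 to TotalRows do
    begin
      Tbl.Append;
      Tbl.FieldByName('ID').AsInteger := I;        // hypothetical fields
      Tbl.FieldByName('Name').AsString := 'Row ' + IntToStr(I);
      Tbl.Post;
      if I mod BatchSize = 0 then
      begin
        Db.Commit;          // flush a batch instead of one huge transaction
        Db.StartTransaction;
      end;
    end;
    Db.Commit;
  except
    Db.Rollback;
    raise;
  end;
  // Add the secondary index only after the data is loaded, so the engine
  // is not rebalancing it on every insert (may need exclusive access).
  Tbl.AddIndex('ByName', 'Name', []);
end;
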
Sat, Feb 23 2008 7:53 PM

"Rita"

"Dave Harrison" <daveh_18824@spammore.com> wrote in message
news:939DF01A-5D12-4C3F-B6B7-552276950BDE@news.elevatesoft.com...
>
> All of these Delphi databases are fine for vertical market apps that have
> less than 1 million rows. But they all slow down drastically when adding
> more than 1 million rows.
>

Dave, yeah, but what if the tables are separate, like in a UK address
database? They know the highest ABC order on names and file
the info in separate tables, and they are as fast as lightning.
Your 1st and 2nd names are common, so both will be in separate
tables.
Rita
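
For what it's worth, the alphabetical routing Rita describes boils down to
something like this little sketch (the per-letter table naming scheme is
invented for illustration):

// Route each name to a small per-letter table, e.g. 'Smith' -> 'Names_S',
// so no single table grows past the point where it slows down.
function TableForSurname(const Surname: string): string;
begin
  if (Surname <> '') and (UpCase(Surname[1]) in ['A'..'Z']) then
    Result := 'Names_' + UpCase(Surname[1])
  else
    Result := 'Names_Other';   // catch-all for empty/odd names
end;
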

Sun, Feb 24 2008 1:47 AM

Dave Harrison
Rita wrote:

> "Dave Harrison" <daveh_18824@spammore.com> wrote in message
> news:939DF01A-5D12-4C3F-B6B7-552276950BDE@news.elevatesoft.com...
>
>>All of these Delphi databases are fine for vertical market apps that have
>>less than 1 million rows. But they all slow down drastically when adding
>>more than 1 million rows.
>>
>
>
> Dave, yeah, but what if the tables are separate, like in a UK address
> database? They know the highest ABC order on names and file
> the info in separate tables, and they are as fast as lightning.
> Your 1st and 2nd names are common, so both will be in separate
> tables.
> Rita
>
>

Rita,
    Retrieving rows from a large EDB/DBISAM table is fast enough if you
are using ranges. But SQL can be slower, especially if sorting is
required, because it creates a temporary table.

The big problem is loading the data into the table, which can take a day
or more. I tried splitting the 15-million-row table into 15 one-million-row
tables, but the overall load times stayed the same. That's why I went
back to using MySQL.

Dave
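
The range approach Dave mentions looks roughly like this with a BDE-style
table component. A minimal sketch: the index and field values here are
invented, and exact DBISAM behaviour may differ.

// Key-range retrieval on an indexed table, which avoids the temporary
// result table that a sorted SQL SELECT can create.
Table1.IndexName := 'BySurname';        // hypothetical secondary index
Table1.SetRange(['Smith'], ['Smith']);  // keep only rows whose key = 'Smith'
try
  Table1.First;
  while not Table1.Eof do
  begin
    // ...process the row here...
    Table1.Next;
  end;
finally
  Table1.CancelRange;                   // restore the full table view
end;
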
Sun, Feb 24 2008 2:49 AM

Arnd Baranowski
Well Dave,

you are right to a certain point.

We manage to load 100 million records from ASCII files (call data) into
DBISAM, evaluate, number-crunch, and save the information dramatically
fast with DBISAM. However, we never go beyond 700,000 - 1,000,000 records
per table. The point is to respect this limit also when retrieving the
information. If you follow this rule and bring in your own intelligence,
then you will be able to do all of the above within 4 hours (running
6 threads in parallel and developing your own DBISAM servers).

Arnd


Dave Harrison wrote:
> Rita wrote:
>
>> "Dave Harrison" <daveh_18824@spammore.com> wrote in message
>> news:939DF01A-5D12-4C3F-B6B7-552276950BDE@news.elevatesoft.com...
>>
>>> All of these Delphi databases are fine for vertical market apps that
>>> have less than 1 million rows. But they all slow down drastically
>>> when adding more than 1 million rows.
>>>
>>
>>
>> Dave yea but what if the tables are seperate like in a UK address
>> database they know the highest ABC order on names and file
>> the info in seperate tables and they are as fast as lightning.
>> Your 1st and 2nd names are common so both will be in seperate
>> tables.
>> Rita
>>
>>
>
> Rita,
>     Retrieving rows from a large EDB/DBISAM table is fast enough if you
> are using ranges. But SQL can be slower, especially if sorting is
> required, because it creates a temporary table.
>
> The big problem is loading the data into the table, which can take a day
> or more. I tried splitting the 15-million-row table into 15 one-million-row
> tables, but the overall load times stayed the same. That's why I went
> back to using MySQL.
>
> Dave
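
The parallel load Arnd describes hinges on one rule DBISAM documents for
multithreading: each thread needs its own session. A rough sketch, with
the file names, table names, and path all invented; component property
names follow the BDE-style API and may differ in detail:

// Each worker thread owns its own session/database/table; a DBISAM
// session must not be shared across threads.
// uses Classes, dbisamtb;  (assumed unit names)
type
  TLoaderThread = class(TThread)
  private
    FFileName, FTableName: string;
  protected
    procedure Execute; override;
  public
    constructor Create(const AFileName, ATableName: string);
  end;

constructor TLoaderThread.Create(const AFileName, ATableName: string);
begin
  inherited Create(True);            // create suspended
  FFileName := AFileName;
  FTableName := ATableName;
  FreeOnTerminate := True;
  Resume;                            // start once the fields are set
end;

procedure TLoaderThread.Execute;
var
  Session: TDBISAMSession;
  Db: TDBISAMDatabase;
  Tbl: TDBISAMTable;
begin
  Session := TDBISAMSession.Create(nil);
  Db := TDBISAMDatabase.Create(nil);
  Tbl := TDBISAMTable.Create(nil);
  try
    Session.SessionName := 'Loader_' + FTableName;  // must be unique
    Session.Active := True;
    Db.SessionName := Session.SessionName;
    Db.DatabaseName := 'CallDataDB';                // invented
    Db.Directory := 'C:\Data';                      // invented local path
    Db.Connected := True;
    Tbl.SessionName := Session.SessionName;
    Tbl.DatabaseName := Db.DatabaseName;
    Tbl.TableName := FTableName;
    Tbl.Open;
    // ...read FFileName line by line and Append/Post in batches here...
  finally
    Tbl.Free;
    Db.Free;
    Session.Free;
  end;
end;

// Launching six workers, one per ASCII input file, would look like:
//   TLoaderThread.Create('calls1.txt', 'Calls_1');  // ...and so on
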
Sun, Feb 24 2008 2:51 PM

"Rita"

"Arnd Baranowski" <baranowski@oculeus.de> wrote in message
news:DD86F644-E41A-4849-8190-8DC8D13A8F39@news.elevatesoft.com...
> Well Dave,
>
> you are right to a certain point.
>

That point being very large tables, not DATABASES.

> However, we never go beyond 700,000 - 1,000,000 records per table. The
> point is to respect this limit also when retrieving the information.

DBISAM handles that no problem, and when the data is spread
amongst tables via if/endif blocks it's faster than Oracle, so
now Google wants EDB.

Rita

Mon, Feb 25 2008 3:21 AM

Arnd Baranowski
>
> That point being very large tables, not DATABASES.
>

Absolutely.

>
>>However, we never go beyond 700,000 - 1,000,000 records per table. The
>>point is to respect this limit also when retrieving the information.
>
>
> DBISAM handles that no problem, and when the data is spread
> amongst tables via if/endif blocks it's faster than Oracle, so
> now Google wants EDB.
>

Having looked at some of these "enterprise" database systems, they do
nothing more than we do. They spread the content of large tables across
different files/tables themselves. Much the way we do, they simply
respect limits before things start getting creepily slow!

Once, we took a customer's workload that ran on a Microsoft SQL Server
database (done by developers!), moved it straight to DBISAM, and then
optimized it. We got the following timings: the operation on the Microsoft
SQL Server database took 70 minutes. Simply moved to DBISAM, the whole
operation took 10 minutes. After optimization, the whole operation took
16 seconds.

Arnd
Mon, Feb 25 2008 11:12 AM

Dave Harrison
Rita wrote:
> "Arnd Baranowski" <baranowski@oculeus.de> wrote in message
> news:DD86F644-E41A-4849-8190-8DC8D13A8F39@news.elevatesoft.com...
>
>>Well Dave,
>>
>>you are right to a certain point.
>>
>
>
> That point being very large tables, not DATABASES.
>
>
>>However, we never go beyond 700,000 - 1,000,000 records per table. The
>>point is to respect this limit also when retrieving the information.
>
>
> DBISAM handles that no problem, and when the data is spread
> amongst tables via if/endif blocks it's faster than Oracle, so
> now Google wants EDB.
>
> Rita
>
>

Rita,
    >now Google wants EDB.<
    Huh? Are you saying Google is buying ElevateSoft? Is Google switching
its proprietary database over to EDB? Enquiring minds want to know. :-)

Dave
Mon, Feb 25 2008 11:24 AM

Dave Harrison
Arnd Baranowski wrote:

> Well Dave,
>
> you are right to a certain point.
>
> We manage to load 100 million records from ASCII files (call data) into
> DBISAM, evaluate, number-crunch, and save the information dramatically
> fast with DBISAM. However, we never go beyond 700,000 - 1,000,000 records
> per table. The point is to respect this limit also when retrieving the
> information. If you follow this rule and bring in your own intelligence,
> then you will be able to do all of the above within 4 hours (running
> 6 threads in parallel and developing your own DBISAM servers).
>
> Arnd

Arnd,
    So if you have 100 million records and a max of 1 million rows per
table, then you have 100 tables. If someone wanted to store 1 billion
rows, you're looking at 1,000 tables, and for 10 billion rows, 10,000
tables.

    You brought up an interesting point about the 6 threads. When
loading in a lot of data, the process is disk-bound. The only advantage I
see in using multiple threads with multiple tables is to put the tables
on separate hard drives or to load the data using separate machines. Of
course, you have to have data that is easily separated into distinct
tables in order to make searches work properly. It's not going to work
very well if you need to do a full-text search on 1,000 tables.

Dave
>
>
> Dave Harrison wrote:
>
>> Rita wrote:
>>
>>> "Dave Harrison" <daveh_18824@spammore.com> wrote in message
>>> news:939DF01A-5D12-4C3F-B6B7-552276950BDE@news.elevatesoft.com...
>>>
>>>> All of these Delphi databases are fine for vertical market apps that
>>>> have less than 1 million rows. But they all slow down drastically
>>>> when adding more than 1 million rows.
>>>>
>>>
>>>
>>> Dave yea but what if the tables are seperate like in a UK address
>>> database they know the highest ABC order on names and file
>>> the info in seperate tables and they are as fast as lightning.
>>> Your 1st and 2nd names are common so both will be in seperate
>>> tables.
>>> Rita
>>>
>>>
>>
>> Rita,
>>     Retrieving rows from a large EDB/DBISAM table is fast enough if
>> you are using ranges. But SQL can be slower, especially if sorting is
>> required, because it creates a temporary table.
>>
>> The big problem is loading the data into the table, which can take a
>> day or more. I tried splitting the 15-million-row table into 15
>> one-million-row tables, but the overall load times stayed the same.
>> That's why I went back to using MySQL.
>> Dave
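
To make Dave's last point concrete: a search that cannot be confined to
one partition ends up visiting every table. A naive sketch of that fan-out,
with the query component usage shown against an invented table scheme and
column name:

// One logical search becomes a separate scan of every partition table.
// uses SysUtils;  (for Format and QuotedStr)
function SearchAllPartitions(Query: TDBISAMQuery;
  const Pattern: string): Integer;
var
  I: Integer;
begin
  Result := 0;
  for I := 1 to 1000 do
  begin
    Query.SQL.Text := Format('SELECT COUNT(*) AS Hits FROM Names_%d ' +
      'WHERE Notes LIKE %s', [I, QuotedStr('%' + Pattern + '%')]);
    Query.Open;
    try
      Inc(Result, Query.FieldByName('Hits').AsInteger);
    finally
      Query.Close;
    end;
  end;
  // 1,000 separate scans: this is why cross-partition full-text
  // search falls over.
end;
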
Mon, Feb 25 2008 11:26 AM

Dave Harrison
Arnd Baranowski wrote:

>>
>> That point being very large tables, not DATABASES.
>>
>
> Absoloute
>
>>
>>> However, we never go beyond 700,000 - 1,000,000 records per table. The
>>> point is to respect this limit also when retrieving the information.
>>
>>
>>
>> DBISAM handles that no problem, and when the data is spread
>> amongst tables via if/endif blocks it's faster than Oracle, so
>> now Google wants EDB.
>>
>
> Having looked at some of these "enterprise" database systems, they do
> nothing more than we do. They spread the content of large tables across
> different files/tables themselves. Much the way we do, they simply
> respect limits before things start getting creepily slow!
>
> Once, we took a customer's workload that ran on a Microsoft SQL Server
> database (done by developers!), moved it straight to DBISAM, and then
> optimized it. We got the following timings: the operation on the
> Microsoft SQL Server database took 70 minutes. Simply moved to DBISAM,
> the whole operation took 10 minutes. After optimization, the whole
> operation took 16 seconds.
>
Arnd,
    What was the operation that took 70 minutes on SQL Server and only
16 seconds on DBISAM? Did you have to denormalize the tables to
eliminate table joins? Was that why it was so slow?

Dave