Thread: LIMIT keyword for SELECT
Messages 31 to 40 of 41 total

Thu, Mar 1 2007 4:49 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

David,

<< You probably don't remember conversations with my boss (Tom) regarding
DBISAM 3.x and Petra in the past.  If you did, you would remember the
lengths he has gone to maintain his own cache of data locally and why. >>

No, I remember speaking with you guys, but I usually don't remember the
actual conversations for more than a week.   I deal with a lot of different
customers, as you can imagine, and keeping all of that stuff in my head
would make it explode. :-)

<< If I understand you correctly, ElevateDB does not give us the control we
need when specifying the data we want from a table of millions of records.
>>

As long as your table has a primary key, you can use a WHERE clause to tell
EDB exactly what rows you wish to retrieve.
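
For illustration, a minimal Delphi sketch of that kind of keyed retrieval, assuming standard TDataSet-style parameter access (the table and column names here are made up):

  // Retrieve an exact range of rows by primary key with a parameterized
  // WHERE clause; the server returns only the keyed rows.
  Query.SQL.Text := 'SELECT * FROM ZoneData ' +
                    'WHERE RecordID >= :FromID AND RecordID <= :ToID';
  Query.ParamByName('FromID').AsInteger := 1000;  // first key wanted
  Query.ParamByName('ToID').AsInteger := 1099;    // last key wanted
  Query.Open;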

--
Tim Young
Elevate Software
www.elevatesoft.com

Thu, Mar 1 2007 10:06 AM

David Loving
Tim,

I understand, as we used to have similar business models until we were acquired.  ;-) Tom is our Tim and customers have daily direct contact
with him.  We know who the problem customers are.  ;-)

<< As long as your table has a primary key, you can use a WHERE clause to tell
EDB exactly what rows you wish to retrieve. >>

Your reply says you still don't understand.  

My table has a primary key, two foreign keys and a million or more records.  Old records are deleted and new records are added, leaving gaps in
the auto-assigned primary key sequence.  The problem gets worse as I use SELECT and WHERE to pull the few hundred thousand records that
are related to one or both of the foreign key fields.  What do you think the primary key field looks like in this result set?  (1, 8, 225, 100025,
473289, etc.)  Now, filter that in a WHERE to pull only 100 records local.  Now request the next 100 records.  A LIMIT [offset, n] would help... but
as I understand you, it doesn't help if every record has to be pulled local just to build the result set of 100 at each request.
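
One way around both the key gaps and the offset arithmetic is keyset paging: remember the last primary key fetched and ask for the next block beyond it. A rough sketch, with hypothetical names and a DBISAM-style TOP clause assumed for the row count:

  // Fetch the next block of 100 rows past the last key already seen.
  Query.SQL.Text := 'SELECT TOP 100 * FROM ZoneData ' +
                    'WHERE WellID = :WellID AND RecordID > :LastID ' +
                    'ORDER BY RecordID';
  Query.ParamByName('WellID').AsInteger := WellID;
  Query.ParamByName('LastID').AsInteger := LastFetchedID;  // 0 on the first call
  Query.Open;
  // After processing, set LastFetchedID to the highest RecordID seen
  // and re-run for the next block.

This still costs one round trip per block, which is exactly the overhead being discussed here.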

We currently have to get creative and do things like pull just the unique record ids local and then turn around and request the full records one at
a time as we need them.  We could select based on a list of ids but can't count on all of the records returning as one or more could be deleted by
other users sometime during the process.
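
A rough sketch of that two-step pattern, tolerating rows deleted between the two requests (all names here are hypothetical):

  // Step 1: pull just the unique record IDs local.
  IDQuery.SQL.Text := 'SELECT RecordID FROM ZoneData WHERE WellID = :WellID';
  IDQuery.ParamByName('WellID').AsInteger := WellID;
  IDQuery.Open;
  IDList := TList.Create;  // integer IDs stored as pointers, classic Delphi style
  while not IDQuery.Eof do
  begin
    IDList.Add(Pointer(IDQuery.FieldByName('RecordID').AsInteger));
    IDQuery.Next;
  end;
  IDQuery.Close;

  // Step 2: request the full records one at a time; a row that comes back
  // empty was deleted by another user in the meantime, so just skip it.
  RowQuery.SQL.Text := 'SELECT * FROM ZoneData WHERE RecordID = :ID';
  for I := 0 to IDList.Count - 1 do
  begin
    RowQuery.ParamByName('ID').AsInteger := Integer(IDList[I]);
    RowQuery.Open;
    if not RowQuery.IsEmpty then
      CopyRowToLocalCache(RowQuery);  // hypothetical helper
    RowQuery.Close;
  end;
  IDList.Free;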

If we could construct a server side cursor on a fixed snapshot of data and navigate a block of records at a time, we could really speed up this
process of pulling the data local by reducing the round trip on each of the subset of records and not have to worry about the snapshot of data
changing sometime during the process... until we wish to post a change to a record that is no longer there.  ;-)

If I understand you correctly, the above paragraph is not possible with the current implementation of ElevateDB C/S.

Database replication is an option.  Each user could work in a briefcase model and pick the time they submit their inserts, updates and deletes
while accepting a new snapshot of the data locally... moving only the data that has changed since the last update.  They would be working
normally in a separate, exclusive, local environment (with all the speed and protection that provides) between these updates.

I don't think ElevateDB C/S will do what we want.  I will watch for your implementation of database replication and hope it does what we need.

David L.
Thu, Mar 1 2007 11:00 AM

Roy Lambert

NLH Associates

Team Elevate

David


I'm coming from a position of not really understanding what's going on. You seem to be talking mainly display and if that's in a grid DBISAM already only tran

Roy Lambert
Thu, Mar 1 2007 11:35 AM

Roy Lambert

NLH Associates

Team Elevate

David


Woops, forgot to finish what I was saying

I'm coming from a position of not really understanding what's going on. You seem to be talking mainly about display, and if that's in a grid, DBISAM already transfers only the records needed for the grid. Even if you're not using a grid it might be a way to achieve what you want.

I think Tim's demo c/s system has c. 1 million records and it seems to work fast enough.

Roy Lambert
Thu, Mar 1 2007 1:12 PM

David Loving
Hey Roy,

Believe me, I am simplifying the issue.  I'm giving just enough info to define the problem.  The actual case and solution are much more complex
and involve numerous tables / relationships of scientific data.  The actual display is graphical (both two and three dimensional) but has been
simplified to filling a data grid for discussion purposes.  The application is a Geological and Geophysical (G&G) package used for locating gas and
oil.  Our mapping module will take the information and produce a two dimensional map on a standard x / y projection for GIS and location
purposes.  The 3d visualization module will produce a three dimensional view of the earth's strata complete with wellbore paths, color digital log
recordings from the well and geologist-selected tops that help them locate (hopefully) producing formations of gas and oil.

We have three related tables that allow our customer (the geologist) to define zones (depth ranges) and associate information with those ranges.  
The largest of these tables is the one that has a unique record primary key, a foreign key relationship with a wellbore and a foreign key
relationship with a user defined zone... and then associates user defined fields of data to that zone.  The job of a team of geologists is to work
with a shared set of data for an area of interest containing many wells and interpret or identify these zones, associate data with these zones and
calculate (estimate) volumes of material in these areas.  Believe me, this is a simple summary and not the detail of everything involved.

So, teams of geologists are constantly picking tops, defining zones, associating them with various collections of wells (inserting, updating and
deleting records) in a central corporate database that is physically far away from the facility or other geologists who want to see the very latest
work by other members of the team.  If you do the math... take the number of wells (700,000 in Oklahoma alone), times the number of zones,
times the number of data items per zone and then want to rapidly display this data on a local display in three dimensions... you have an idea of
the scope of the problem.  Depending on the zoom level, I need to display only a subset of all the data associated with the subset of all wells in
the current display, and I do not want to have to copy the entire table of data to the client workstation to do this.  LIMIT [offset, n] would only help
if there were a static snapshot of the subset of data on the server that the client could navigate, insert, update and delete individual records within.

I will have to wait until we can replicate the corporate data store on the local workstation, do everything in the speed and safety of an exclusive
access local environment and then only transport the modified data to and from the corporate datastore at certain times.

Hope that explains it.

David L.


Roy Lambert <roy.lambert@skynet.co.uk> wrote:

David


Woops, forgot to finish what I was saying

I'm coming from a position of not really understanding what's going on. You seem to be talking mainly about display, and if that's in a grid,
DBISAM already transfers only the records needed for the grid. Even if you're not using a grid it might be a way to achieve what you want.

I think Tim's demo c/s system has c. 1 million records and it seems to work fast enough.

Roy Lambert
Thu, Mar 1 2007 1:33 PM

Roy Lambert

NLH Associates

Team Elevate

David


Wow!!!!!!!

Roy Lambert
Thu, Mar 1 2007 2:12 PM

David Loving
Roy,

I want to clarify something about my post... we love the power of DBISAM.  How else could we have developed such a fast and complex piece of
software for the independent (small) companies and all the majors?  Unlike our competitors or even our GIS mapping partner, we do not ask our
customers to install a $5k-a-CPU database engine, pay even more for a database schema to sit on top of that, and then sell them mapping or
visualization software on top of that.  The database is built in and works on the client workstation or in a corporate environment.  It does upset
the big IT empires in some of the companies, 'cause it doesn't require a lot of admin.  ;-)

I'm simply asking for more from Tim... customers always ask for more, or we are not doing our job!  ;-)

David L.

Roy Lambert <roy.lambert@skynet.co.uk> wrote:

David


Wow!!!!!!!

Roy Lambert
Thu, Mar 1 2007 9:35 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

David,

<< I understand, as we used to have similar business models until we were
acquired.  ;-) Tom is our Tim and customers have daily direct contact with
him.  We know who the problem customers are.  ;-) >>

We're lucky in that we don't have any real "problem" customers, although
there are days when certain customers tend to try my patience.  However,
that usually has more to do with how busy I am than with anything the
customer did.  There are certain days when I should step away from things
for a while and I don't.

<< My table has a primary key, two foreign keys and a million or more
records.  Old records are deleted and new records are added, leaving gaps in
the auto-assigned primary key sequence.  The problem gets worse as I use
SELECT and WHERE to pull the few hundred thousand records that are related
to one or both of the foreign key fields.  What do you think the primary key
field looks like in this result set?  (1, 8, 225, 100025, 473289, etc.)
Now, filter that in a WHERE to pull only 100 records local.  Now request the
next 100 records.  A LIMIT [offset, n] would help... but as I understand you,
it doesn't help if every record has to be pulled local just to build the
result set of 100 at each request. >>

All you said was that you wanted to pull 100 rows at a time to a
locally-cached table.  I assumed that you knew which rows you wanted to
pull, since both DBISAM and EDB would allow you to find out that information
rather quickly and easily.

<< We currently have to get creative and do things like pull just the unique
record ids local and then turn around and request the full records one at a
time as we need them.  We could select based on a list of ids but can't
count on all of the records returning as one or more could be deleted by
other users sometime during the process.

If we could construct a server side cursor on a fixed snapshot of data and
navigate a block of records at a time, we could really speed up this process
of pulling the data local by reducing the round trip on each of the subset
of records and not have to worry about the snapshot of data changing
sometime during the process... until we wish to post a change to a record
that is no longer there.  ;-)

If I understand you correctly, the above paragraph is not possible with the
current implementation of ElevateDB C/S. >>

Of course it's possible.  The question I have is why you are bothering with
trying to cache things locally when both DBISAM and EDB will manage a
server-side cursor just fine without you having to do anything at all?  Are
you trying to run in a disconnected mode, or is the connection always
active?  If it's the latter, then an EDB remote session does a really good
job of caching rows and can navigate the local row cache bi-directionally.
It also caches the BLOBs for the rows as necessary.  Just set the TEDBTable
or TEDBQuery RemoteReadSize property to the number of rows that you wish to
read/cache at one time.  If you want to run in a disconnected mode, then
just keep the RemoteReadSize property the same and simply read the rows into
a local temporary table.  EDB won't manage change detection for you anymore,
but it will certainly pull X number of rows at a time from the database
server.  This is where I was trying to point out earlier that you and Jan
are getting the LIMIT and TOP clauses mixed up with how many rows are pulled
from the database server to the client-side cache.  MS has really confused
the issue by implying that the two are one and the same in ADO.NET, when in
fact MS simply made that the default behavior of ADO.NET.  ODBC, OLEDB, ADO,
and all the other older CLI implementations that MS offered provided a way
to specify how many rows were pulled from the database server.
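
In component terms, the connected-mode suggestion amounts to something like this (a rough sketch; RemoteReadSize is the property named above, while the rest assumes standard TDataSet usage and hypothetical names):

  // Let the remote session do the block caching for you.
  Query.RemoteReadSize := 100;  // rows pulled from the server per round trip
  Query.SQL.Text := 'SELECT * FROM ZoneData WHERE WellID = :WellID ' +
                    'ORDER BY RecordID';
  Query.ParamByName('WellID').AsInteger := WellID;
  Query.Open;
  // Next/Prior navigation now fetches 100-row blocks across the wire as
  // needed, and the local row cache can be walked in both directions.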

<< I don't think ElevateDB C/S will do what we want. >>

I think you've convinced yourself of this, but I can't say that I agree at
all.  Everything that you've talked about is perfectly doable within the
current EDB C/S.

--
Tim Young
Elevate Software
www.elevatesoft.com

Fri, Mar 2 2007 10:40 AM

David Loving
Tim,

I'm seriously interested in seeing your implementation of database replication.  It solves more than this issue.

Our customers want the speed, performance and security of a stand-alone, single-user, local installation with the ability to share their most
current data with multiple team members in many remote sites (some overseas).  Yes, the ability to work disconnected is a common request.  Having
the most current data from our new parent company AND other Geologists / Engineers within the same company working on the same project is
the highest priority.  I was looking at the problem a little too narrowly and trying to stay live with the data connection to a multi-user server.  The
focus of the solution has changed.  Work local, think (and share data) global.  ;-) The connection speed and amount of data transferred over the
net only matter during an update.  The efficiency burden falls on the data replication software: how fast it can apply the updates to a corporate
server and receive anything that has changed since the last connect.  I like that... puts the stress on you and not me.  ;-)

Keep me in mind, if you need someone to test your pre-release of database replication.

David L.
Fri, Mar 2 2007 12:33 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

David,

<< Keep me in mind, if you need someone to test your pre-release of database
replication. >>

Will do.

--
Tim Young
Elevate Software
www.elevatesoft.com
