Thread: How to get around no Live Result set problem?
Wed, Jun 4 2008 8:43 PM

Kerry Neighbour
I use Delphi 7 and DBISAM 4.26

I have one situation where I am stumped. I have two queries, one with
about 5,000 records in it, the other with about 10,000. These two
queries take some time to run - about 3-6 seconds for each one.

These queries are attached to DevExpress grids so the user can see and
select from them. The user selects a number of records from one grid and
processes them into the second grid.

Processing involves removing items from the first grid (query) and
updating or removing items from the second grid (query), depending on
the processing.

I have everything working ok, and in fact it worked just fine when I
first wrote this screen. The problem is as the record counts increase to
levels that I mention (ie 5,000 or so), it becomes too slow to use.
After I process one record, I have to update the displays in each grid
to reflect the changes. Since this takes around 6 seconds for each
update, it is hopeless. The user just gives up.

Processing takes very little time. I can also process more than one
record at a time, if many have been selected. The problem is showing the
updates after every transaction. I have to reload the two queries, which
is where the time is. I cannot make them Live result sets. The queries
are just too complex, I guess. So what I have are two canned snapshots
that I am playing with.

I am currently doing the processing to the base tables behind the
queries. I am not making changes to the cells in the grids themselves.
Since the resultset is not live, this seems the way to go - ie any
change I make in the grids themselves will not be saved anyway. And of
course,

My first idea was to load these two queries into local memory tables. I
do not like this idea much as I guess the tables can get pretty big.
Most users would not go over 5,000 - 10,000 records, but I guess they
could. This seems a bit large for in-memory tables.

In any case, while I can do the processing and viewing with in-memory
tables, at some point I am going to have to put all the changes made in
these memory tables back into the real tables. And this is where I have
a real problem. I can see no way to make this seamless and robust. ie
what if the user just quits my application? I can try and trap this, but
it is not foolproof.

Should I create 'real' temporary tables? ie not in memory, but normal
tables on the disk. This gets around the size problem, but slows the
system down a lot in creating, filling and deleting these tables on startup.

Any suggestions?
Wed, Jun 4 2008 11:17 PM

Jan Ferguson

Data Software Solutions, Inc.

Team Elevate

Kerry,

The problem is not with your queries or with DBISAM. The problem is
with loading the records into the DevExpress grid. If you have grid
mode set to False in the DevEx grid, it will take time for the grid to
load all the records so it can work with them (i.e., sorting, grouping,
etc.) If you set the grid mode to True, it will only load/show the
number of records you specify (i.e., 100 at a time). This will load the
grid quickly. The only drawback to this is that you cannot use the
nifty capabilities of a DevEx grid (i.e., sorting, grouping,
calculating, etc.)
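
A minimal sketch of the grid-mode setting (untested, and assuming the
standard ExpressQuantumGrid API; the view name is illustrative):

// Switch the DB table view into grid mode so it fetches records in
// chunks instead of loading the entire result set up front.
cxGridDBTableView1.DataController.DataModeController.GridMode := True;
// Roughly how many records are buffered at a time (value is a guess).
cxGridDBTableView1.DataController.DataModeController.GridModeBufferCount := 100;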

Check out this link to the knowledge base article on DevEx's support
site. (The URL was VERY long so I used TinyURL to make it manageable.)

http://tinyurl.com/5symby

HTH,

--
Regards,
Jan Ferguson [Team Elevate]


Kerry Neighbour wrote:

<<I use Delphi 7 and DBISAM 4.26
<<
<<I have one situation where I am stumped. I have two queries, one with
<<about 5,000 records in it, the other with about 10,000. These two
<<queries take some time to run - about 3-6 seconds for each one.
<<
<<These queries are attached to DevExpress grids so the user can see
<<and select from them. The user selects a number of records from one
<<grid and processes them into the second grid.
<<
<<Processing involves removing items from the first grid (query) and
<<updating or removing items from the second grid (query), depending on
<<the processing.
<<
<<I have everything working ok, and in fact it worked just fine when I
<<first wrote this screen. The problem is as the record counts increase
<<to levels that I mention (ie 5,000 or so), it becomes too slow to
<<use. After I process one record, I have to update the displays in
<<each grid to reflect the changes. Since this takes around 6 seconds
<<for each update, it is hopeless. The user just gives up.
<<
<<Processing takes very little time. I can also process more than one
<<record at a time, if many have been selected. The problem is showing
<<the updates after every transaction. I have to reload the two
<<queries, which is where the time is. I cannot make them Live result
<<sets. The queries are just too complex, I guess. So what I have are
<<two canned snapshots that I am playing with.
<<
<<I am currently doing the processing to the base tables behind the
<<queries. I am not making changes to the cells in the grids
<<themselves. Since the resultset is not live, this seems the way to go
<<- ie any change I make in the grids themselves will not be saved
<<anyway. And of course,
<<
<<My first idea was to load these two queries into local memory tables.
<<I do not like this idea much as I guess the tables can get pretty
<<big. Most users would not go over 5,000 - 10,000 records, but I guess
<<they could. This seems a bit large for in-memory tables.
<<
<<In any case, while I can do the processing and viewing with in-memory
<<tables, at some point I am going to have to put all the changes made
<<in these memory tables back into the real tables. And this is where I
<<have a real problem. I can see no way to make this seamless and
<<robust. ie what if the user just quits my application? I can try and
<<trap this, but it is not foolproof.
<<
<<Should I create 'real' temporary tables? ie not in memory, but normal
<<tables on the disk. This gets around the size problem, but slows the
<<system down a lot in creating, filling and deleting these tables on
<<startup.
<<
<<Any suggestions?
>>

Kerry
Thu, Jun 5 2008 12:52 AM

Kerry Neighbour
> The problem is not with your queries or with DBISAM. The problem is
> with loading the records into the DevExpress grid. If you have grid
> mode set to False in the DevEx grid, it will take time for the grid to

Well, the Grid Mode idea is a good one. But it is not my real problem, I
think. The real problem is that I cannot get the data in the queries in
Live mode. ie I cannot edit the data live. So every time I make a change
to the base tables, I have to refresh the display via the close/open
method. ie if I make a change to one record, I have to reload all 10,000.

Or am I missing something?
Thu, Jun 5 2008 3:09 AM

Roy Lambert

NLH Associates

Team Elevate

Kerry


I think you need to do a bit more coding :-) I don't know if it's c/s or f/s, which will probably affect your decision, but here are a few thoughts:

1. Switch to tables using filters, master-detail links and lookup fields
2. I can't remember if canned datasets are still editable in DBISAM; if so, post the data to the underlying table, the result set, and if necessary the grid (I do something similar in an app)
3. If you're concerned about an in-memory table blowing physical memory, a) stop worrying, it's very unlikely with swap files being what they are, or b) switch to writing a temporary table to disk (see the sketch below). With either in-memory or disk you can (effectively) edit the result set and so keep everything in sync.
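
For (3), the in-memory variant might look something like this (untested,
and I'm going from memory on the "\Memory\" syntax; the component and
table names are made up):

// Copy the canned result into DBISAM's built-in in-memory database
// so it can be edited locally. qryWork is a TDBISAMQuery.
qryWork.SQL.Text :=
  'SELECT * INTO "\Memory\WorkTable" FROM sometable';  // your existing SELECT goes here
qryWork.ExecSQL;

// Open the copy as an editable table and point the grid at it.
tblWork.DatabaseName := 'Memory';  // tblWork is a TDBISAMTable
tblWork.TableName := 'WorkTable';
tblWork.Open;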

Roy Lambert [Team Elevate]
Thu, Jun 5 2008 4:34 AM

"John Hay"
Kerry

> 1. Switch to tables using filters, master-detail links and lookup fields
> 2. I can't remember if canned datasets are still editable in DBISAM; if so,
post the data to the underlying table, the result set, and if necessary the
grid (I do something similar in an app)
> 3. If you're concerned about an in-memory table blowing physical memory, a)
stop worrying, it's very unlikely with swap files being what they are, or b)
switch to writing a temporary table to disk. With either in-memory or disk
you can (effectively) edit the result set and so keep everything in sync.

Unless you need the query to be updated with changes which may have been
made by another user every time you process some records, I would definitely
vote for option 2. This should work well in both local and remote sessions.

John

Thu, Jun 5 2008 7:40 AM

Jan Ferguson

Data Software Solutions, Inc.

Team Elevate

Kerry,

Sorry...I didn't quite get from your post that you wanted/needed a live
dataset. (I should have known better than to read newsgroups just before
going to bed at a late hour!) :-)

I have rarely found a situation where I couldn't manipulate tables to
give me the result set I needed. As Roy pointed out in his first
suggestion, is this a possibility? How complex is your query, such that
tables could not be a solution to your live result issue? Can you post
the query so suggestions might be made on how to better assist you?

Certainly the DevEx grid issue will still be there upon initial
loading, just something to keep in mind.

--
Regards,
Jan Ferguson [Team Elevate]


Kerry Neighbour wrote:

<<<< The problem is not with your queries or with DBISAM. The problem is
<<<< with loading the records into the DevExpress grid. If you have grid
<<<< mode set to False in the DevEx grid, it will take time for the
<<<<grid to
<<
<<Well, the Grid Mode idea is a good one. But it is not my real
<<problem, I think. The real problem is that I cannot get the data in
<<the queries in Live mode. ie I cannot edit the data live. So
<<every time I make a change to the base tables, I have to refresh the
<<display via the close/open method. ie if I make a change to one
<<record, I have to reload all 10,000.
<<
<<Or am I missing something?
>>

Kerry
Thu, Jun 5 2008 8:57 AM

"Robert"

"Kerry Neighbour" <kerry@dojitraders.com> wrote in message
news:33AFA2EB-D921-4A17-9A0F-8FE814424E54@news.elevatesoft.com...
> Live mode. ie I cannot edit the data live. So every time I make a change to
> the base tables, I have to refresh the display via the close/open method.
> ie if I make a change to one record, I have to reload all 10,000.
>

Why? Why can't you change the query (or in-memory table) *and* the real
table AT THE SAME TIME? You can catch the changes to the query in AfterPost,
fire an SQL change to the disk table to reflect the changes, and move on. I
don't know your data, but I don't see why you would need to rebuild the
queries every time you make a change. A query is an editable dataset, and the
changes will be reflected in the grid.
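
Something like this, for example (untested; the component and field
names are invented for illustration):

procedure TForm1.Query1AfterPost(DataSet: TDataSet);
begin
  // Mirror the edit just posted to the canned result set into the
  // base table with a one-row UPDATE - no Close/Open of the big query.
  UpdQuery.SQL.Text :=
    'UPDATE basetable SET somefield = :somefield WHERE id = :id';
  UpdQuery.ParamByName('somefield').Value :=
    DataSet.FieldByName('somefield').Value;
  UpdQuery.ParamByName('id').Value := DataSet.FieldByName('id').Value;
  UpdQuery.ExecSQL;  // UpdQuery is a second TDBISAMQuery
end;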

Robert

Thu, Jun 5 2008 9:01 AM

"Robert"

"Kerry Neighbour" <kerry@dojitraders.com> wrote in message
news:97F9C2FB-0ADC-48BC-9242-AED89042A9EA@news.elevatesoft.com...
> Most users would not go over 5,000 - 10,000 records, but I guess they
> could. This seems a bit large for in-memory tables.
>

That's not true. In fact, a query is a dataset not that different from a
memory table. Other than some minor memory management issues, it should be no
different to have a dataset that is the result of a query, or the same
dataset in a memory table.

> In any case, while I can do the processing and viewing with in-memory
> tables, at some point I am going to have to put all the changes made in
> these memory tables back into the real tables.

Do the changes as you post the changes in the query. See my other posting.

Robert

Thu, Jun 5 2008 6:27 PM

Kerry Neighbour
> I have rarely found a situation where I couldn't manipulate tables to
> give me a result set I needed. As Roy pointed out in his first
> suggestion, is this a possibility? How complex is your query where
> tables could not be a solution to your live result issue? Can you post
> the query so suggestions might be made on how to better assist you?

The queries are not that complex.

Query 1  (first grid)
----------------------
SELECT box.ID,
box.BoxNumber,
box.statusid,
boxsizes.BoxDescription,
boxsizes.PercentageOccupation,
boxsizes.BoxesPerPosition,
100/boxsizes.BoxesPerPosition as PercentageOccupation,
boxstatus.Status,
boxesatlocation.ID,
offices.OfficeTitle
FROM
box
LEFT OUTER JOIN boxsizes ON (box.BoxSizeID = boxsizes.ID)
LEFT OUTER JOIN offices ON (box.OfficeID = offices.ID)
LEFT OUTER JOIN boxstatus ON (box.StatusID = boxstatus.ID)
LEFT OUTER JOIN boxesatlocation ON (box.ID = boxesatlocation.BoxID)
WHERE
(boxesatlocation.ID IS NULL)
 AND (box.StatusID <> 1) AND (box.StatusID <> 2)
 AND (box.OfficeID = 3)
ORDER BY boxnumber

This query generally takes about 3 seconds on a table of around 5,000
boxes. I guess I could make the 'boxesatlocation' my main table, then
make everything else lookups? That should make the resultset Live, I
presume.

The table numbers are roughly as follows:
box : 5000 records
boxsizes : 3
offices : 2
boxstatus : 4
boxesatlocation : 5000


Query 2  (second grid)
----------------------
SELECT
boxlocations.ID,
boxlocations.boxstorageid,
boxlocations.binposition,
boxstorage.officeid,
boxesatlocation.LocationID,
boxsizes.BoxesPerPosition,
boxstorage.StorageLocation,
sum(ifnull(boxsizes.BoxesPerPosition, 0, 100/
boxsizes.BoxesPerPosition)) AS PercentageOccupation
FROM
boxlocations
INNER JOIN boxstorage ON ( boxlocations.BoxStorageID=boxstorage.ID)
LEFT OUTER JOIN boxesatlocation ON (boxlocations.ID =
boxesatlocation.LocationID)
LEFT OUTER JOIN box ON (boxesatlocation.BoxID = box.ID)
inner join boxsizes ON (box.BoxSizeID = boxsizes.ID)
GROUP BY
boxlocations.boxstorageid
HAVING
boxstorage.OfficeID=3

This query takes around 7 seconds on table size of 10,000 records (in
boxlocations).

What my processing does is change "boxesatlocation.ID" in the first
query so that it is no longer NULL - so it drops from the grid. In the
second query/grid, I am updating 'PercentageOccupation', so that figure
just changes.

This query does not seem all that easy to convert to a live resultset as
it uses aggregates. In fact, it is the aggregate field that I am really
interested in.

What I am actually doing here is allocating an archive box to a storage
location on a shelf in a warehouse. The archive box can be a number of
sizes, and each location in the warehouse holds a certain multiple of
these fixed sizes. ie one location might hold 1, 2 or 4 boxes. So I need
to count up how many boxes are already stored there (ie the SUM() bit) and
then see how much room is left. This is why I have to re-run the query
after each box allocation - the SUM() will depend on the size of the
last box that I allocated.
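
One idea I am toying with is recomputing the SUM() for just the one
location that changed, rather than re-running the whole query, and then
poking the new figure into the matching grid row - something like this
(untested):

SELECT
boxlocations.ID,
sum(ifnull(boxsizes.BoxesPerPosition, 0, 100/
boxsizes.BoxesPerPosition)) AS PercentageOccupation
FROM
boxlocations
LEFT OUTER JOIN boxesatlocation ON (boxlocations.ID =
boxesatlocation.LocationID)
LEFT OUTER JOIN box ON (boxesatlocation.BoxID = box.ID)
inner join boxsizes ON (box.BoxSizeID = boxsizes.ID)
WHERE boxlocations.ID = :LocationID
GROUP BY boxlocations.ID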

Anyway, no need to get into too much detail - you guys have already
given me a few good ideas. I have tried using memory tables, but I am
not too sure if it is any faster. Loading the memory tables takes a lot
of extra time, and I am not sure if it is the way to go.
Thu, Jun 5 2008 6:29 PM

Kerry Neighbour

> Why? Why can't you change the query (or in-memory table) *and* the real
> table AT THE SAME TIME? You can catch the changes to the query in AfterPost,
> fire an SQL change to the disk table to reflect the changes, and move on. I
> don't know your data, but I don't see why you would need to rebuild the
> queries every time you make a change. A query is an editable dataset, and the
> changes will be reflected in the grid.

This is a good idea - and for some reason I never thought of it. I guess
I did not think of the query recordset as being editable. The edits do
not reach the tables behind it, but all I need to do is change the
display in the grids.
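
So, roughly, per processed record (untested; Query1 and Query2 are my
two canned queries, and NewPercentage is worked out beforehand):

// Base tables have already been updated at this point.
Query1.Delete;  // drop the processed box from the first grid's canned set

Query2.Edit;    // adjust the occupancy figure shown in the second grid
Query2.FieldByName('PercentageOccupation').AsFloat := NewPercentage;
Query2.Post;    // display-only change; nothing is written to disk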

Thanks!