Thread: station caching?
Tue, Feb 13 2018 7:28 PM

Greg Hallam

Microcalm Solutions Inc

I have a couple of applications running against the same database.  One application is putting items into inventory and the other potentially taking them out.  I have a situation where the second application doesn't see the inventory for seconds or sometimes minutes after the first one has finished the change.

I have recreated the problem replacing the first application with the EDB Manager.  I can manually enter the transaction and move to another record (posting it).  The second application doesn't see it for a few seconds.  The second application is using a TDataset.Locate to find the record.  How is this possible?  Where can I look for clues as to the cause?  My manual test seems to have a smaller window in time than the live application but why is there any delay?

I am using EDB Server V2.26 build 7 with a remote connection.
Wed, Feb 14 2018 1:28 AM

Roy Lambert

NLH Associates

Team Elevate

gbh100


What happens if you do a table.refresh immediately before the .Locate?
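
In code terms that amounts to something like this (an untested sketch; InvTable, 'ItemID' and NewItemID are placeholders):

{ Reload the client-side row buffers from the server, then search
  the refreshed data. InvTable and 'ItemID' are placeholders. }
InvTable.Refresh;
if InvTable.Locate('ItemID', NewItemID, []) then
  ShowMessage('Row is now visible to this session');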

Roy Lambert
Wed, Feb 14 2018 9:02 AM

Raul

Team Elevate

On 2/14/2018 1:28 AM, Roy Lambert wrote:
> What happens if you do a table.refresh immediately before the .Locate?
>

I'm with Roy. See also

https://www.elevatesoft.com/manual?action=viewtopic&id=edb2sql&topic=Change_Detection


Raul
Wed, Feb 14 2018 10:35 AM

Greg Hallam

Microcalm Solutions Inc

Roy Lambert wrote:

gbh100


What happens if you do a table.refresh immediately before the .Locate?

Roy Lambert


A table refresh solves the problem.  Is there a way to stop this local caching altogether?  There are some tables where I create a new record if a locate fails, so a false miss creates a problem or an error.  I get the performance value of caching, but I would like the server to do it in these cases with shared files.  It has always been my understanding that opening a file in shared mode meant that there would be no local caching or buffering; otherwise many multi-user systems would have problems.  Am I wrong or unreasonable here?
Thu, Feb 15 2018 4:56 AM

Roy Lambert

NLH Associates

Team Elevate

gbh100


Read the link Raul provided; it gives you some information.

Way back I had an application that used PageControls to display forms. To make sure that the same data was being looked at on each page, I introduced a flush to disk on page change. The performance hit was best described as "not pleasant".

There is a better way to handle things. A lot of the time the data will be sitting there ready and waiting, and you really don't want to incur the performance hit on everything just to cope with the few occasions when it's not there. I would recommend you either create a function, say ForceLocate, or subclass TEDBTable. Using the simpler example of a function (totally untested):

function ForceLocate(tbl: TEDBTable; fld: string; ID: Variant): Boolean;
begin
  { Try the (possibly cached) client-side view first }
  Result := tbl.Locate(fld, ID, []);
  if not Result then begin
    { On a miss, reload the buffers from the server and try once more }
    tbl.Refresh;
    Result := tbl.Locate(fld, ID, []);
  end;
end;
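
Used in place of a plain Locate, e.g. (again untested; the table, field, and variable names are placeholders):

{ Falls back to a server round-trip only when the cached view misses }
if not ForceLocate(InvTable, 'ItemID', NewItemID) then
  ShowMessage('Not found on the server either - safe to create it');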

Roy Lambert
Thu, Feb 15 2018 12:20 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com


<< I have a couple of applications running against the same database.  One application is putting items into inventory and the other potentially taking them out.  I have a situation where the second application doesn't see the inventory for seconds or sometimes minutes after the first one has finished the change.

I have recreated the problem replacing the first application with the EDB Manager.  I can manually enter the transaction and move to another record (posting it).  The second application doesn't see it for a few seconds.  The second application is using a TDataset.Locate to find the record.  How is this possible?  Where can I look for clues as to the cause?  My manual test seems to have a smaller window in time than the live application but why is there any delay? >>

The only real test for the existence of a row happens when the engine checks for it under lock coverage, such as when editing a row with pessimistic row locking enabled, or when posting a row with optimistic row locking enabled (and for constraint checks, etc.).

During any other period, what you see may or may not be entirely up-to-date, and the only way to get the most recent version of a row is via the Refresh method or by using strict change detection, which is usually overkill.  The key thing to remember, however, is that even using Refresh followed by a Locate could result in incorrect information if the system is busy enough and you're looking for a row that changes frequently.

What you want to do is wrap these types of updates into a procedure/function that performs the inventory operations on the EDB Server within a transaction.  This gives you two things:

1) The transaction is guaranteed to either commit or roll back (on an error).  With client-initiated transactions, there's always the risk that the remote session gets disconnected after starting the transaction and never issues a commit or rollback.

2) The inventory check is *guaranteed* to be correct and up-to-date, and can be done within the same block of code that also adjusts the inventory.
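
For illustration, calling such a server-side procedure from Delphi could look like the following sketch (AdjustInventory is a hypothetical procedure name; it would perform the inventory check and adjustment inside a transaction on the EDB Server):

{ AdjustInventory is hypothetical - it would perform the inventory
  check and adjustment inside a transaction on the EDB Server }
EDBQuery1.SQL.Text := 'CALL AdjustInventory(:ItemID, :Delta)';
EDBQuery1.ParamByName('ItemID').AsInteger := 1234;  // placeholder item
EDBQuery1.ParamByName('Delta').AsInteger := -1;     // remove one unit
EDBQuery1.ExecSQL;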

Tim Young
Elevate Software
www.elevatesoft.com
Thu, Feb 15 2018 1:41 PM

Greg Hallam

Microcalm Solutions Inc

Thanks Tim, the confirmation helps.

Would I be correct in extending your argument about transactions that, if I did my locate and subsequent updates inside a transaction, I may not have this issue?  Would the start of the transaction essentially flush any buffers in use?

Is it true that a query performed in a client/server setup would always get the most up-to-date information posted by other stations, even if the server buffers haven't been written to disk?

The application was originally developed with ADS and was just changed to EDB.  Very little code needed to change.  This is a significant difference in behavior that unfortunately may require a significantly different approach, especially for this particular migration (ADS -> EDB).

Overall I'm a little shaken by this.  I have over 30 years of development experience, most of it database-related and most of that in a client/server environment.  I have never once had to consider that a search on a shared file might not be evaluated against up-to-date information.
Fri, Feb 16 2018 4:18 PM

Greg Hallam

Microcalm Solutions Inc

After having had time to think on this further, I have less of an issue with local caching in shared files than with it being used for a locate in particular.  It seems reasonable to me that navigating through a file with next and prior would benefit greatly from buffering, without any real issue.  It seems just as reasonable that a locate or findkey would be served best by the server and not at the station: any buffering would be done at the server and would therefore be 100% accurate 100% of the time, without the performance hit or network overhead that next and prior would incur.

In my live example I have a web application where a session may sit idle for 5 or 10 minutes and then perform an operation involving a locate against data that is minutes out of date.  I have seen examples of 2 minutes but have no idea what the limit might be.  Is there any time consideration with local buffers?
Fri, Feb 16 2018 4:28 PM

Greg Hallam

Microcalm Solutions Inc

In reading some of the documentation on buffering, it seems unclear where the buffering is being done.  For example, the index buffers that I can set per table: are they for local buffering or for server buffering?  That distinction isn't made.  I have no issue with server buffering.  All of my examples use client/server, as that is pretty much all I use.

I have assumed that queries I run are executed at the server and would therefore be 100% accurate at the time of their execution.  Are all table commands local, or are some of them executed at the server?  For example, I am used to a filter being evaluated at the server in ADS and not locally.  What is the case with EDB?
Thu, Feb 22 2018 3:05 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

<< Would I be correct in extending your argument about transactions that, if I did my locate and subsequent updates inside a transaction, I may not have this issue?  Would the start of the transaction essentially flush any buffers in use? >>

Kind of - starting a transaction causes an implicit refresh, so you're guaranteed that the transaction will see the most recent version of all rows that you visit during the transaction.
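
In component terms, that pattern might look like this sketch (InvDB is an assumed TEDBDatabase; the table and field names are placeholders):

InvDB.StartTransaction;  // implicit refresh: open cursors now see latest rows
try
  if InvTable.Locate('ItemID', NewItemID, []) then
    begin
      InvTable.Edit;
      InvTable.FieldByName('OnHand').AsInteger :=
        InvTable.FieldByName('OnHand').AsInteger - 1;  // adjust inventory
      InvTable.Post;
    end;
  InvDB.Commit;
except
  InvDB.Rollback;
  raise;
end;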

<< Is it true that a query performed in a client/server setup would always get the most up-to-date information posted by other stations, even if the server buffers haven't been written to disk? >>

*Any* access in ElevateDB is subject to the same rules: if you want to ensure that you see the most recent version of the rows, you will need to either perform a manual refresh before the operation or turn on strict change detection for the session.  But, as I said before, this still only guarantees that you'll see the most up-to-date version as of the time of the refresh, and it's still possible for other users to perform updates to the rows after the refresh, thus immediately making your version outdated.

<< Overall I'm a little shaken by this.  I have over 30 years of development experience, most of it database-related and most of that in a client/server environment.  I have never once had to consider that a search on a shared file might not be evaluated against up-to-date information. >>

It might be that you simply never noticed, but the only way to guarantee that a read sees a consistent snapshot of a database is through an MVCC architecture, which ElevateDB does not use, or by using read locks to ensure that no other user can modify any rows while the operation is taking place.  However, both of these approaches come with various downsides (MVCC can result in rejected commits or aborted transactions when there are conflicts, and read locks can reduce concurrency pretty significantly if held for too long).

ElevateDB only guarantees *itself* a read-consistent view of a table by using covering read locks whenever it needs to read something (row, index page, or BLOB block) from a table on disk.  These ensure that what ElevateDB caches is consistent, and also results in the read locks being held for very short periods of time and managed entirely by ElevateDB.

Tim Young
Elevate Software
www.elevatesoft.com