Thread: Cursor, DataSet fluidity, preservation of records?
Messages 1 to 10 of 17 total
Tue, Mar 16 2010 5:10 AM

durumdara

Dear Programmers/Developers!

I have to find the best way to implement a capacity planner (production management) program.

I need to upgrade the older version and avoid the errors and problems of the previous one, which is based on DBISAM V3.30.

So I want to redesign the program.

The program shows a timeline and the machines, and in the grid you can drag and move items to plan the work in the factory.

But I "bump into" DBISAM/EDB specials.

I implemented the same thing at another company with Firebird.
Firebird has transaction-based storage internally, so when I finish editing I can save the items inside a transaction, and the transaction also lets me hold and preserve the records while I save them.

In DBISAM I ran into "cursor", DataSet fluidity, and record preservation problems.

Let's see how I plan to implement this:

I have a grid. When some of its rows/columns become visible, I check whether the data is already in memory.
If not, I load the records into Delphi objects; I keep a list for every column, and I can show them in the grid.
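
Something like this is what I have in mind for the loading (only a sketch: TPlanItem, the field names, and the index on (MachineCode, StartedAt) are placeholders, and I use the classic Contnrs.TObjectList so it also works on older Delphi):

uses
  Contnrs, dbisamtb;

type
  TPlanItem = class
    ID: Integer;
    StartedAt: TDateTime;    { operation start }
    ItemLength: TDateTime;   { operation length }
  end;

{ Load all items of one machine (one grid column) into a list of objects.
  The caller only does this when the column is not in memory yet. }
procedure LoadMachineItems(Table: TDBISAMTable; const MachineCode: string;
  Items: TObjectList);
var
  Item: TPlanItem;
begin
  Table.SetRange([MachineCode], [MachineCode]);
  try
    Table.First;
    while not Table.Eof do
    begin
      Item := TPlanItem.Create;
      Item.ID := Table.FieldByName('ID').AsInteger;
      Item.StartedAt := Table.FieldByName('StartedAt').AsDateTime;
      Item.ItemLength := Table.FieldByName('ItemLength').AsDateTime;
      Items.Add(Item);
      Table.Next;
    end;
  finally
    Table.CancelRange;
  end;
end;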

Ok.

Here is what happens when a new item arrives, or an item is dragged to another position.

1.)
First, if this is a new item, we need to create a new object for it.

2.)
Next we must find where it belongs in the DataSet, so we can collect all the records affected by this change:
Table.SetRange([MachineCode], [MachineCode]);   { assuming an index on (MachineCode, StartTime) }
Table.FindNearest([MachineCode, StartTime]);
NearestID := Table.FieldByName('ID').AsInteger;

3.)
We need to search for the prior records that are affected by this change.

procedure SearchForAffectedPrior(ThisID: Integer);
begin
  if ThisID <> 0 then Locate(ThisID) else Locate(NearestID);
  while (not BOF) and (Record.StartTime > ThisTime) do Prior;
  if HaveRecord then
  begin
    GetRecordTime;
    if ThisTime < RecordTime + RecordLen then
    begin
      MarkAffected(RecordID);
      SearchForAffectedPrior(RecordID);
    end;
  end;
end;

4.)
We need to search for the next records that are affected by this change (a fuller Delphi sketch of this search follows after step 7):

procedure SearchForAffectedNext(ThisID: Integer);
begin
  if ThisID <> 0 then Locate(ThisID) else Locate(NearestID);
  while (not EOF) and (Record.StartTime < ThisTime) do Next;
  if HaveRecord then
  begin
    GetRecordTime;
    if ThisTime + ThisLen > RecordTime then   { the next record starts before our end }
    begin
      MarkAffected(RecordID);
      SearchForAffectedNext(RecordID);
    end;
  end;
end;

5.)
After this we need to load these records as items, and delete the items that are no longer in the record list.

6.)
We take the affected elements and apply the changes to the dataset.

7.)
If something went wrong in this operation (some records were not found), we show a message to the user that the table has changed and should be refreshed, and we do not perform the operation (rollback).
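
To make steps 2.)-4.) concrete, here is a rough Delphi sketch of the search (only a sketch: the field names, the index on (MachineCode, StartedAt), and adding the TDateTime lengths directly are my assumptions, and I assume the moved item itself is not inside the range yet):

uses
  Classes, dbisamtb;

{ Collect the IDs of the records that the new/moved item pushes,
  walking backward and then forward from the nearest record. }
procedure CollectAffected(Table: TDBISAMTable; const MachineCode: string;
  ThisTime, ThisLen: TDateTime; Affected: TList);
var
  RecTime, RecLen, RunStart, RunEnd: TDateTime;
begin
  Table.SetRange([MachineCode], [MachineCode]);          { step 2 }
  try
    { step 3: prior records whose end still reaches into the moving block }
    Table.FindNearest([MachineCode, ThisTime]);
    RunStart := ThisTime;
    Table.Prior;
    while not Table.Bof do
    begin
      RecTime := Table.FieldByName('StartedAt').AsDateTime;
      RecLen  := Table.FieldByName('ItemLength').AsDateTime;
      if RecTime + RecLen <= RunStart then
        Break;                                           { border of the affected block }
      Affected.Add(Pointer(Table.FieldByName('ID').AsInteger));
      RunStart := RunStart - RecLen;                     { pushed backward, it ends at the old RunStart }
      Table.Prior;
    end;

    { step 4: following records that start before the running end }
    Table.FindNearest([MachineCode, ThisTime]);
    RunEnd := ThisTime + ThisLen;
    while not Table.Eof do
    begin
      RecTime := Table.FieldByName('StartedAt').AsDateTime;
      RecLen  := Table.FieldByName('ItemLength').AsDateTime;
      if RecTime >= RunEnd then
        Break;                                           { first gap stops the cascade }
      if RecTime >= ThisTime then                        { earlier starters were handled above }
      begin
        Affected.Add(Pointer(Table.FieldByName('ID').AsInteger));
        RunEnd := RunEnd + RecLen;                       { the pushed record will end here }
      end;
      Table.Next;
    end;
  finally
    Table.CancelRange;
  end;
end;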

For example (start time, length):

a.) 10:00, 30:00
b.) 10:30, 15:00
c.) 10:45, 15:00


New record dragged:
x.) 10:30, 30:00
Then a.) is not affected, only b.) and c.); I need to move them to their new positions:

x.) 10:30, 30:00
b.) 11:00, 15:00
c.) 11:15, 15:00
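
In memory the forward shift itself is simple; here is a small sketch that reproduces the numbers above (TPlanItem is the same placeholder object as in my earlier sketch, and Items is sorted by StartedAt):

uses
  Contnrs;

{ Push the overlapped items forward, one after the other. }
procedure ShiftForward(Items: TObjectList; NewStart, NewLen: TDateTime);
var
  i: Integer;
  RunEnd: TDateTime;
  Item: TPlanItem;
begin
  RunEnd := NewStart + NewLen;            { x.) ends at 11:00 }
  for i := 0 to Items.Count - 1 do
  begin
    Item := TPlanItem(Items[i]);
    if Item.StartedAt < NewStart then
      Continue;                           { a.) starts before x.), the backward pass handles it }
    if Item.StartedAt >= RunEnd then
      Break;                              { first gap stops the cascade }
    Item.StartedAt := RunEnd;             { b.) -> 11:00, then c.) -> 11:15 }
    RunEnd := Item.StartedAt + Item.ItemLength;
  end;
end;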

Ok.

The main problems are:

a.) speed:
I need to make this fast, and a query is too slow for it.

b.) fluidity:
The cursor is at position X.
When somebody deletes rows under us, the actual/prior/next records end up somewhere else - not in the position where we need to be.

For example: I move to record X with FindNearest. I search the priors, and then the "nexts".
But somebody moves record Y into the priors during this operation.
Then I am working with Y as a "next" item, although it is now a "prior".

c.) preservation:
I can use a transaction for the rewrites, but I cannot preserve the items during this operation (read the records, check them, mark them affected).
This operation is recursive, because I have to walk the prior/next lists until I reach the border of the affected range.
If somebody changes the dataset meanwhile, both the records and the lists get confused.

d.) new items arriving while I save changes:
I cannot rely only on the list in memory when I want to re-sort the items (on move),
because when somebody adds a new item to the list, I may not take it into account.

So:
I think I have described the main problem well enough.

I am searching for a solution that is safe enough and less "fluid" than a Table dataset.

Maybe I need to use a query, but a query does not preserve the records for me either; it only gives a non-fluid (static) dataset.
To preserve the records I need to lock them, but in DBISAM I cannot do it the way I can in Firebird,
with a dummy "UPDATE table SET field = field WHERE ..." that preserves the records in a transaction before I write the final values into them.

Please help me: what can I do?

Thanks:
  dd
Tue, Mar 16 2010 5:57 AM

Roy Lambert

NLH Associates

Team Elevate

durumdara


I've never used Firebird so have no idea what it can and can't do.

From your post I can't quite figure out what it is you want. I have two alternatives:

1.    Stop any other user altering the rows you're working on, or might be working on
2.    Extract specified rows from the database and work on them without changes being apparent to you.

If you want 1. then you need persistent row locks which don't exist in DBISAM (or ElevateDB yet), but it should be possible to "roll your own" with a small table holding the IDs of the rows being worked on, or an extra field in the table to indicate it's in use. If 2. then this can be achieved with a canned (or non-sensitive) query or by loading the desired rows into a memory table (with DBISAM it's a simple SELECT INTO statement). It's just a bit more effort returning it to the database after manipulation. You could also simply start a transaction when you start the process, but this would block other users from updating parts of the table you were not working on until it was released.
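
For 2. something along these lines would do (just a sketch - the table and field names are made up):

uses
  dbisamtb;

{ Pull a static snapshot of the rows to work on. RequestLive = False
  gives a canned (non-sensitive) result set, so other users' changes
  don't ripple into it while you work; you write the results back
  yourself afterwards, keyed on ID. }
procedure LoadSnapshot(Query: TDBISAMQuery; const MachineCode: string;
  FromTime, ToTime: TDateTime);
begin
  Query.RequestLive := False;
  Query.SQL.Text :=
    'SELECT ID, MachineCode, StartedAt, ItemLength FROM Operations ' +
    'WHERE MachineCode = :Machine AND StartedAt BETWEEN :FromT AND :ToT ' +
    'ORDER BY StartedAt';
  Query.ParamByName('Machine').AsString := MachineCode;
  Query.ParamByName('FromT').AsDateTime := FromTime;
  Query.ParamByName('ToT').AsDateTime := ToTime;
  Query.Open;
end;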

Roy Lambert
Tue, Mar 16 2010 7:38 AM

durumdara

Roy Lambert wrote:
<<< durumdara I've never used Firebird so have no idea what it can and can't do.>>>

In "FB world" all thing is in transaction.
If you read: you read through transaction. The isolation level set what you can read.

When you write with simple "update field=field" you can lock the rows.
Other users cannot modify these rows before transaction closed.
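
In Delphi with the IBX components it looks roughly like this (only to show what I mean - the table and field names are invented):

uses
  IBDatabase, IBQuery;

{ Lock the affected rows with a dummy update inside the transaction;
  other users cannot modify them until Commit or Rollback. }
procedure LockRowsInFirebird(Qry: TIBQuery; Tr: TIBTransaction;
  const MachineCode: string; FromTime, ToTime: TDateTime);
begin
  if not Tr.InTransaction then
    Tr.StartTransaction;
  Qry.SQL.Text :=
    'UPDATE OPERATIONS SET MACHINE_CODE = MACHINE_CODE ' +
    'WHERE MACHINE_CODE = :MC AND STARTED_AT BETWEEN :FT AND :TT';
  Qry.ParamByName('MC').AsString := MachineCode;
  Qry.ParamByName('FT').AsDateTime := FromTime;
  Qry.ParamByName('TT').AsDateTime := ToTime;
  Qry.ExecSQL;
  { ...read, check and write the final values, then Tr.Commit
    (or Tr.Rollback on error)... }
end;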

<<<From your post I can't quite figure out what it is you want. >>>

Let me describe it.
We have a Production Planner program. Each operation has StartedAt (TDateTime) and ItemLength (TDateTime) data, among others, but these two are the important ones for us.
When we move or create a new element in this "Diary", we need to reorganize the underlying data,
because the elements may overlap others, and we need to build an "Affected" list.
If element A affects B, then we must move element B.
But element B in its new position may affect C, and so on.
This is recursive.
The effects work both backward and forward.
If element A affects B backward, we must move element B backward...

So, the operation must cover these points (a rough sketch follows the list):

1.) We need to know which elements we have in the grid (locally).
2.) What the element change causes:
- which elements need to move
- where they need to move to
- which new elements need to be loaded from the Table
- which new elements were created by others
3.) We need to determine whether these elements have been changed in the Table by other users before we save.
4.) If not, we must lock all the affected elements and change them.
5.) Refresh the view.
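
A rough DBISAM sketch of points 3.)-5.) (only a sketch: the field names are placeholders, and I assume every affected TPlanItem - the in-memory object from my first post - keeps the originally read start time in OldStartedAt and the new position in StartedAt):

uses
  Contnrs, dbisamtb;

{ Verify that the affected rows still look like the copies we read,
  then write the new positions inside one transaction. }
function SaveAffected(DB: TDBISAMDatabase; Table: TDBISAMTable;
  Affected: TObjectList): Boolean;
var
  i: Integer;
  Item: TPlanItem;
begin
  Result := False;
  DB.StartTransaction;
  try
    for i := 0 to Affected.Count - 1 do
    begin
      Item := TPlanItem(Affected[i]);
      { 3.) has somebody changed or deleted the row since we read it? }
      if (not Table.Locate('ID', Item.ID, [])) or
         (Table.FieldByName('StartedAt').AsDateTime <> Item.OldStartedAt) then
      begin
        DB.Rollback;
        Exit;                    { warn the user: the table changed, please refresh }
      end;
      { 4.) write the new position }
      Table.Edit;
      Table.FieldByName('StartedAt').AsDateTime := Item.StartedAt;
      Table.Post;
    end;
    DB.Commit;
    Result := True;              { 5.) the caller refreshes the view }
  except
    DB.Rollback;
    raise;
  end;
end;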

A perfect solution does not exist: if somebody changes an element in the Table that affects our elements, while our elements do not affect that element AT THE MOMENT OF THE CHECK, we lose the correct order and elements may overlap.

But we can minimize the problem if we can put some kind of lock on the records: if some of them are locked, we know that another user is working on them, so we can warn our user.

The critical time is the time of the check, because if we determine the order and another user changes something at that moment, we may produce a wrong result...

<<<I have two alternatives:
1.    Stop any other user altering the rows you're working on, or might be working on>>>

Good idea!
For this I can use a table with this field:
[MachineID]

I have two DBISAM/EDB tables:
one for the source column, one for the destination column.
If they are the same, I use only the first.
When some data is created or dragged, I check the source column and get the machine code.
I create and post (or reuse) a row in T1, and call Edit on it.
This puts a lock on that row.
If the lock cannot be acquired, I retry periodically (every X msec), and if 1-2 minutes have elapsed, I raise an error and the operation fails.
I lock the second table (T2) the same way; it holds the destination column (MachineID).

If both locks are mine, I have full control over the Production table and I can manipulate it safely.
When I finish, or I get an error, I release the locks.

This technique can probably prevent other users' work from confusing mine, but it does not prevent them from working in other columns, or before/after my changes.
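
Roughly this is what I have in mind for the lock itself (a sketch: LOCKS is an invented one-row-per-machine table, the timeout and retry values are just examples, and I assume a failed Edit raises EDBISAMEngineError):

uses
  Windows, DB, dbisamtb;

{ Try to "lock" a machine by putting its row of the LOCKS table into
  Edit state; another client's Edit on the same row fails until we
  Cancel or Post. }
function AcquireMachineLock(LockTable: TDBISAMTable;
  const MachineID: string; TimeoutMSec: Integer): Boolean;
var
  Started: Cardinal;
begin
  Result := False;
  if not LockTable.Locate('MachineID', MachineID, []) then
  begin
    LockTable.Insert;                      { first use: create the lock row }
    LockTable.FieldByName('MachineID').AsString := MachineID;
    LockTable.Post;
    LockTable.Locate('MachineID', MachineID, []);
  end;
  Started := GetTickCount;
  repeat
    try
      LockTable.Edit;                      { this takes the row lock }
      Result := True;
    except
      on EDBISAMEngineError do
        Sleep(250);                        { somebody else holds it, retry }
    end;
  until Result or (GetTickCount - Started > Cardinal(TimeoutMSec));
end;

procedure ReleaseMachineLock(LockTable: TDBISAMTable);
begin
  if LockTable.State in [dsEdit, dsInsert] then
    LockTable.Cancel;                      { dropping the Edit releases the lock }
end;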

Thanks!!! I hope this lock will be fast enough for us.

dd
Tue, Mar 16 2010 9:17 AM

Roy Lambert

NLH Associates

Team Elevate

durumdara


I don't know how your system is organised but I'd be very tempted to only allow one planning session at a time. To me allowing multiple people to plan simultaneously is a nightmare scenario. Any new stuff would get dumped into a queue waiting to be processed after the current session is finalised.

Roy Lambert
Tue, Mar 16 2010 1:38 PM

Tim Young [Elevate Software]

Elevate Software, Inc.


Email timyoung@elevatesoft.com

Are you doing this in DBISAM, or EDB ?  EDB allows you to manually lock
rows, and these row locks persist until you manually unlock them (they
aren't affected by transactions):

http://www.elevatesoft.com/manual?action=viewmethod&id=edb2&product=d&version=7&comp=TEDBDataSet&method=LockCurrentRecord

--
Tim Young
Elevate Software
www.elevatesoft.com

Wed, Mar 17 2010 3:28 AM

durumdara

Roy Lambert wrote:

<<<I don't know how your system is organised but I'd be very tempted to only allow one planning session at a time. To me allowing multiple people to plan simultaneously is a nightmare scenario.>>>

I don't like it either, but it is working now (with some problems). So we cannot tell our customers that "it is impossible"... :)

My old project uses row locks and an "error showing" mechanism: an error occurs when some of the elements overlap. There is no way to make this completely safe.

<<< Any new stuff would get dumped into a queue waiting to be processed after the current session is finalised.>>>

OK, but who is the queue manager? DBISAM does not have a "queue manager", so we would have to implement this on the client side: the clients must check for errors and try to lock the resources.

I want to ask: what are the side effects of locks?

For example:
I want to implement the lock by inserting a unique record and then editing it.
This locks the record, because another Edit will not work on it.

But what happens when the client dies, because of a.) the program crashing, or b.) a random restart (blue screen)?
(The session management is set to "5 minutes" on client disconnection.)
How does DBISAM handle the "in edit" state? Can it detect the client disconnection and remove the lock?
Or do I have to wait for the 5 minutes?

And what happens when the server dies, or is temporarily down?
Does it remember this kind of lock?

I want to avoid being "locked out" of the system if possible, but I don't know how to implement this without "Edit".

If I mark the lock with an inserted record, the client may die and the record will remain.

I need something like a "Mutex" in Windows, which is linked to the creating process and removed when the process dies.

Thanks:
  dd
Wed, Mar 17 2010 3:32 AM

durumdara

Thanks Tim! :)

I vaguely remembered the kind of locking you mention, and it will be good for the EDB version of the program.

But first I need to make the "cross-DB" version, so that will be the next step.
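
If I remember the manual right, the EDB version will be roughly this (a sketch: LockCurrentRecord is from the link above, but UnlockCurrentRecord, the unit name and the behaviour when the row is already locked are from memory, so it must be checked against the manual):

uses
  edbcomps;

{ Lock one row manually, change it, then release the lock.
  These row locks survive transactions, as Tim wrote. }
procedure MoveRowLocked(Table: TEDBTable; NewStart: TDateTime);
begin
  Table.LockCurrentRecord;             { manual row lock }
  try
    Table.Edit;
    Table.FieldByName('StartedAt').AsDateTime := NewStart;
    Table.Post;
  finally
    Table.UnlockCurrentRecord;         { assumed name of the matching unlock }
  end;
end;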

Thanks for the reminder :) so that I don't forget to implement it... :)

dd

"Tim Young [Elevate Software]" wrote:

Are you doing this in DBISAM, or EDB ?  EDB allows you to manually lock
rows, and these row locks persist until you manually unlock them (they
aren't affected by transactions):

http://www.elevatesoft.com/manual?action=viewmethod&id=edb2&product=d&version=7&comp=TEDBDataSet&method=LockCurrentRecord

--
Tim Young
Elevate Software
www.elevatesoft.com
Wed, Mar 17 2010 4:41 AM

Robert Kaplan


<durumdara> wrote in message
news:B55BF9E1-F2EA-4C1F-BF30-588E40A01DC9@news.elevatesoft.com...
>
> I need something like a "Mutex" in Windows, which is linked to the creating process,
> and removed when the process dies.
>

Check semaphore locks.

Robert

Wed, Mar 17 2010 5:16 AM

Roy Lambert

NLH Associates

Team Elevate

Robert

>Check semaphore locks.

I thought about those, but (from memory) they're limited to 255 per table so I ignored them.
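
If anyone wants to try them anyway, the calls are roughly like this (again from memory, so check the manual for the exact LockSemaphore / UnlockSemaphore signatures and when the locks get released):

uses
  Windows, dbisamtb;

{ Use a table semaphore as a cheap application-level mutex.
  From memory LockSemaphore returns True when the semaphore was free,
  and the numbers are per table (1..255). }
function LockMachine(Table: TDBISAMTable; MachineNo: Integer;
  TimeoutMSec: Integer): Boolean;
var
  Started: Cardinal;
begin
  Started := GetTickCount;
  repeat
    Result := Table.LockSemaphore(MachineNo);
    if not Result then
      Sleep(250);
  until Result or (GetTickCount - Started > Cardinal(TimeoutMSec));
end;

procedure UnlockMachine(Table: TDBISAMTable; MachineNo: Integer);
begin
  Table.UnlockSemaphore(MachineNo);
end;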

Roy Lambert [Team Elevate]
Wed, Mar 17 2010 5:56 AM

Roy Lambert

NLH Associates

Team Elevate

Tim

>Are you doing this in DBISAM, or EDB ? EDB allows you to manually lock
>rows, and these row locks persist until you manually unlock them (they
>aren't affected by transactions):

For some reason I hadn't thought of these as row locks but they are.

I know they are released when the app using them closes or crashes and, as the manual says, when the dataset is closed, but what happens when, in a c/s environment, the session bombs? At what point are they released? When the session goes, after the dead session cleanup time, or something else entirely different?

Roy Lambert

