
Thread: Database Design
Thu, Oct 15 2009 12:03 PM

Randy
If this is beyond the scope of these forums, please feel free to ignore me. I just thought it would be nice to get some input before I commit
to a particular design.

We are an accounting office and I need to build a database to track agricultural commodity inventory for our clients. There is certain common data,
such as a master list of commodity names and a price list for each commodity by date. I want clients to log in and enter transactions, and I need my
staff to be able to access all client data.

The alternatives I am thinking about are:

    1) whether to have a database for the shared data and a separate database for each client

    2) or have one database with master read-only tables (we administer and update) and separate tables for each client's active commodities,
pricing and transactions (rough sketch below).
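
Roughly, I picture option 2 looking something like this. This is just a generic SQL sketch; the table and column names are placeholders, not a final design, and the exact EDB dialect may differ a little:

    -- Master, read-only tables that we administer and update
    CREATE TABLE Commodities (
      CommodityID INTEGER,
      Name VARCHAR(60),
      CONSTRAINT PK_Commodities PRIMARY KEY (CommodityID)
    );

    CREATE TABLE CommodityPrices (
      CommodityID INTEGER,
      PriceDate DATE,
      Price DECIMAL(12,4),
      CONSTRAINT PK_CommodityPrices PRIMARY KEY (CommodityID, PriceDate)
    );

    -- One set of tables per client, e.g. suffixed with a client code
    CREATE TABLE Transactions_ClientA (
      TransactionID INTEGER,
      CommodityID INTEGER,
      TransDate DATE,
      Quantity DECIMAL(14,3),
      CONSTRAINT PK_Transactions_ClientA PRIMARY KEY (TransactionID)
    );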

Any thoughts on simplicity, security, performance or out-of-the-box alternatives would be appreciated.

Thanks in advance.
Randy.
Thu, Oct 15 2009 2:12 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Randy,

<< 1) whether to have a database for the shared data and a separate database
for each client >>

<< 2) or have one database with master read-only tables (we administer and
update) and separate tables for each client's active commodities, pricing
and transactions. >>

How many clients do you expect to have?  And how much data do you expect an
average client to accumulate?

I usually try to get customers to lean towards 2), mainly because it then
allows for easy aggregation of data for your internal purposes (say, for
example, that you wanted to get a summary of all activity across all
clients, or something similar).

However, it really can depend upon how much data you'll be accumulating for
each client.  Sometimes it is better to go with 1) since it keeps the amount
of data dependent upon the size of the client, not the size of all of the
clients.
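
Just to illustrate, with everything in one database a cross-client summary stays a single query (generic SQL, using the placeholder table names from the sketch above; add a UNION ALL branch per client):

    SELECT 'Client A' AS Client, SUM(Quantity) AS TotalQuantity
    FROM Transactions_ClientA
    UNION ALL
    SELECT 'Client B', SUM(Quantity)
    FROM Transactions_ClientB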

--
Tim Young
Elevate Software
www.elevatesoft.com

Thu, Oct 15 2009 8:20 PM

Leslie
I have a similar situation and I was thinking about how to overcome the disadvantages of the data being spread across multiple databases. My
idea was to have each client database replicate all of its changes to the master database. Of course more disk space is used this way, but
otherwise we get the best of both approaches.

I was also thinking about how to use the built-in replication in EDB to replicate the data to the client's computer too. It looks like this:

MasterDB with all the data (master only + all data from ClientDB's)
      |
    repl.
      |
ClientDB with shared data on the company's server
      |
    repl.
      |
ClientDB with shared data on the client's computer

As far as I understand EDB replication, the data can be filtered when loading, but I need some way to filter the data before it is sent, because
the client DB often belongs to another company. So having a separate DB with the shared data for every business partner seems good, but there
is a potentially problematic point: how do I get the changes coming from a client into the middle database forwarded to the master, and the
master only?
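
(For reference, each hop would use the EDB replication statements roughly like this, as far as I understand them; the database and store names below are made up:

    -- on the sending side: publish the database and save its changes into a store
    PUBLISH DATABASE SharedDB;
    SAVE UPDATES FOR DATABASE SharedDB TO STORE OutgoingUpdates;

    -- on the receiving side: load the update files from the store they arrive in
    LOAD UPDATES FOR DATABASE SharedDB FROM STORE IncomingUpdates;

so the question is really about what can be filtered out between the SAVE and the LOAD.)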

Leslie
  
Thu, Oct 15 2009 8:26 PM

Leslie
In case it's not clear, the Master DB only receives changes; it does not publish any.
Fri, Oct 16 2009 12:21 PM

Randy


I am not certain of the total number of users. I will be designing it so that I could market it to other offices as well, but I would maintain the data, with
possible replication. So we could be looking at:

Master tables shared across all offices for item lists and pricing. There could be multiple lists for different provinces (states).

Offices A, B, C, ...
  Employee lists
  Client lists
  Client transactions

Clients
  Lists of their items and pricing
  Transaction lists


Minimum volume = A few hundred users with a few hundred transactions per year. (We need to maintain multi-year transaction history.)

Likely volume = A thousand users with a thousand transactions per year

Potential volume = 10K to 50K users with a thousand transactions per year (Tropical Island here I come :-})
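
For rough scale, if those transaction counts are per user, the potential case works out to something like:

    10,000 to 50,000 users x 1,000 transactions/year = 10 to 50 million rows per year

and multi-year history multiplies that again.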

?? Question - How does your recommendation change with these volumes, with scalability being an important factor? I am leaning towards a single database
for each office, and am still considering one for each client.

This is sounding similar to Leslie's project, but I would rather not get into replication until down the road, when I am much wiser and more
comfortable.

?? Question - Can replication be added reasonably simply to an existing database, or are there structures and considerations that need to be dealt
with from day one?
Fri, Oct 16 2009 3:12 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Leslie,

<< As far as I understand EDB replication, the data can be filtered when
loading, >>

No, not yet.

<< but I need some way to filter the data before it is sent, because the
client DB often belongs to another company. >>

I'm still working on this.  The internal published updates structures in the
.edbpbl file were not designed for selective removal, so I'm having a very
hard time coming up with a solution that doesn't fragment the file or cause
the size to grow too large.

--
Tim Young
Elevate Software
www.elevatesoft.com

Fri, Oct 16 2009 3:15 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Randy,

<< ?? Question - How does your recommendation change with these volumes, with
scalability being an important factor? I am leaning towards a single database for
each office, and am still considering one for each client. >>

One database for each office should work.  I was under the impression that
you were handling this all in one master office, which is why I questioned
the use of a single database due to the amount of data.

<< This is sounding similar to Leslie's Project but I would rather not get
into replication until down the road when I am much wiser and more
comfortable.

?? Question - Can replication be added reasonably simply to an existing
database, or are there structures and considerations that need to be dealt
with from day one? >>

Yes, you can add replication "after the fact".  Just contact us when you
want to start the replication, and I'll walk you through what needs to happen.

--
Tim Young
Elevate Software
www.elevatesoft.com

Fri, Oct 16 2009 6:26 PM

Leslie
Tim,

<< I'm still working on this.  The internal published updates structures in the
.edbpbl file were not designed for selective removal, so I'm having a very
hard time coming up with a solution that doesn't fragment the file or cause
the size to grow too large. >>

I suppose the actual writing to the change log file is triggered by committing a transaction. Is it possible to apply an SQL-based filter to the tables during
that procedure? If yes, we could have a set of filters with different change log files as targets. So the PUBLISH command could have an extra INTO STORE
parameter, and every table could have a FilterCondition parameter. This would mean we have more than one publish rule set, so every publication should
probably have a unique name for the intended recipients too.

PUBLISH DATABASE <DatabaseName> FOR <RecipientGroupName> INTO STORE <StoreName>
[TABLES <TableName> [WHERE <FilterCondition>] [, <TableName> [WHERE <FilterCondition>]]]



If this is possible, it could be a very flexible solution, but with a linearly growing performance cost: the more log files there are, the slower
committing a transaction becomes. So it would probably be better to save a complete change log first and process it in a lower-priority thread, where
the other replication targets could be created during a simulated LOAD UPDATES.

If RecipientGroupName comes into the picture as a new system object, it might be a logical extension to have a detail system table with the
URLs of the databases which should receive the updates. So in the end we could have an easily configurable target list, plus a new command like

SEND UPDATES TO <RecipientGroupName>

to trigger an update directly, and

SCHEDULE UPDATES FOR <RecipientGroupName> ...

All of this together could make for a very smooth replication framework.
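
Concretely, for one business partner it might look something like this (every name here is invented, and of course none of this syntax exists today):

    PUBLISH DATABASE SharedDB FOR PartnerA INTO STORE PartnerAStore
    TABLES Transactions WHERE ClientID = 'PARTNER_A',
           Prices WHERE Province = 'AB'

    SEND UPDATES TO PartnerA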



Oops, I did not realize when I actually fell asleep, but at least I am having a very nice dream.... Smile

Leslie
  


Fri, Oct 16 2009 6:30 PM

Leslie
Tim,

So many spelling mistakes again...  I only wish there were a way to edit a post after it has been posted.

Leslie  

Sun, Oct 18 2009 9:58 PM

Tony Pomfrett
Hi Tim,

Is it the case that when replicating updates from a main database to a group of client databases, each client will receive an update file containing all updates for all clients? And that each client will need to load this all-inclusive update file into its
database before somehow removing the updates that are not relevant to the client?

If I've understood this correctly, then the ability to create individual update files at the master database which are specifically tailored for each client would be very handy.