Thread: Transaction and Speed
Thu, Feb 26 2015 1:11 PM

Eduardo


Hi there.

I am now using transactions for my SQL updates with ElevateDB. I was not doing so in my first project version, which meant some database updates were not covered by a transaction.

Since I started using them, I have noticed an incredible reduction in speed.

My program is a task manager that keeps reading a log sent by the stores and applies it to the main database, mainly sales records.

Now the program is always behind the log, and once new stores open it will never catch up.

The server is a Windows 2008 R2 machine and should be fast enough for this. There is not much load on it; it is dedicated almost entirely to this work. The task manager and the ElevateDB server run on the same computer.

I am using it through the Aurelius ORM. I monitor the SQL commands it generates and they look normal, the same as I would write directly against the database.

I would expect some overhead with transactions in place, but this seems too much.

This database is shared, but most of the updates to it come from the stores, which connect every 10 minutes to append more records to the log table that this task manager consumes.

So ElevateDB should handle this easily, and I am not sure how to deal with it.

Any advice much appreciated.

Eduardo
Thu, Feb 26 2015 1:46 PM

Raul

Team Elevate

On 2/26/2015 1:11 PM, Eduardo wrote:
> Since I started using them, I have noticed an incredible reduction in speed.
> My program is a task manager that keeps reading a log sent by the stores and applies it to the main database, mainly sales records.
> Now the program is always behind the log, and once new stores open it will never catch up.
> The server is a Windows 2008 R2 machine and should be fast enough for this. There is not much load on it; it is dedicated almost entirely to this work. The task manager and the ElevateDB server run on the same computer.
> I am using it through the Aurelius ORM. I monitor the SQL commands it generates and they look normal, the same as I would write directly against the database.
> I would expect some overhead with transactions in place, but this seems too much.

The actual overhead of the transaction is fairly small, but you're likely
getting hit by something else: my initial guess is either locking or
flushing (or both).

During a transaction, a database-wide lock is obtained that prevents
anybody else from writing, so that could be one reason for this. You
could try a restricted transaction (one limited to specified tables
instead of the whole database).

The other thing to try is to disable the disk flush on every commit
(the COMMIT NO FLUSH SQL command, or Commit(False) in Delphi) and see if
it makes a difference.
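
Putting both suggestions together, a minimal Delphi sketch might look like the following. The component name, table name, and the batch routine are assumptions for illustration, and the exact StartTransaction signature for restricted transactions should be checked against the ElevateDB manual:

```delphi
// Restrict the transaction to the Log table only, so the rest of the
// shared database stays writable for the stores while we work.
EDBDatabase1.StartTransaction(['Log']);
try
  ApplyPendingLogRecords;      // hypothetical routine that runs the updates
  EDBDatabase1.Commit(False);  // False = skip the disk flush on this commit,
                               // equivalent to the COMMIT NO FLUSH SQL command
except
  EDBDatabase1.Rollback;       // undo the partial work on any failure
  raise;
end;
```

Skipping the flush trades a little durability (a crash can lose the most recent commits) for a potentially large speedup when committing frequently.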

Without any additional information on what the app does, it's hard to
give more specific advice.

Generally I would look at things like:
- make sure transactions are as short as possible
- use transactions only for cases of multiple updates - whatever that
means in business terms (single SQL statements already run in an
implicit transaction, so there is no need for one there)
- try to determine where the slowdown occurs - is it the transaction
itself, or is it the transaction's table lock making something else wait?

Raul
Fri, Feb 27 2015 3:13 AM

Roy Lambert

NLH Associates

Team Elevate

Eduardo


Raul has pretty much covered everything. Like him, my initial guess would be contention for locks: on the shared tables if you're using a restricted transaction, or on the database as a whole otherwise. Even if you use a restricted transaction, if the updated tables are shared between the stores and the task manager you'll still have problems. Effectively, every time a store puts in an update you're asking the task manager to stop working, even if only one table (e.g. the log) is shared and included in the restricted transaction.

What can be done about it depends on the architecture and workflow you're implementing. Keeping transactions short, and eliminating them where they're not necessary, will help, but may not resolve the issue as more stores come on line.

Raul touches on the issue of finding out where the slowdown occurs. I would make that priority 1. Personally, with something like this, I'd set up a testbed to simulate the maximum expected load, see what happens, and answer a few fundamental questions: does it work? If not, is the problem ElevateDB, Aurelius, or the concept?

Roy Lambert
Fri, Feb 27 2015 8:21 AM

Eduardo


Roy, Raul,

I have spent more time on this issue. The first thing I did was to remove transaction management from my code completely, and I could see it changed NOTHING about the speed.

So that uncovered the real problem.

Since I am using Aurelius, there is an object that keeps the entity object cache, and looking at the documentation I found that I have to keep this cache from growing too large, since it slows things down in BATCH processing. That is my case.

I then added flushing of this cache, restored the transaction handling as before, and the program is now working as expected.
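
For reference, the batch pattern I ended up with looks roughly like this. This is a hedged sketch: the routine, variable, and batch names are mine, and whether discarding the cache is best done by recreating the TObjectManager per batch (as here) or by another call should be checked against the TMS Aurelius documentation:

```delphi
// Process the log in fixed-size batches, discarding the Aurelius entity
// cache between batches so it cannot grow without bound.
while LogRecordsRemain do
begin
  // A fresh TObjectManager per batch keeps the entity cache small.
  Manager := TObjectManager.Create(Connection);
  try
    Database.StartTransaction;
    ProcessNextLogBatch(Manager, BatchSize);  // hypothetical batch routine
    Manager.Flush;                            // push pending changes to the DB
    Database.Commit;
  finally
    Manager.Free;  // releases the manager and, by default, its cached entities
  end;
end;
```

The key point is that the cache is bounded by the batch size rather than by the total number of log records processed.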

The work this program was taking an entire week to complete was done in less than 2 hours this morning.

So, problem found.

Thanks for the advice; it helped me take the proper steps to resolve this.

Eduardo