Thread: SQL tracing in DBISAM 3.30
Sun, Mar 26 2006 5:45 PM

Jaweed Saleem
Hi,

I realise that this functionality is already in DBISAM 4, but I was just
wondering if there was any way of finding out what SQL has been executed
behind the scenes in DBISAM 3.30 (e.g. opening a certain table is
slow, so I'd like to see what SQL DBISAM is executing when opening the
table, so that I can start performing optimizations with a query
component instead).

Hope that makes sense.

I didn't know where to post this (in the SQL newsgroup or the
client/server one), but I'm only concerned about speed when using this in a
client/server environment.

Thanks in advance
Mon, Mar 27 2006 2:55 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Jaweed,

<< I realise that this functionality is already in DBISAM 4 but I was just
wondering if there was any way of finding out what SQL has been executed
behind the scenes in DBISAM 3.30 (e.g. opening a certain table is slow, so
I'd like to see what Sql DBISAM is executing when opening the table, so that
I can start performing optimizations with a query component instead). >>

DBISAM doesn't execute SQL for opening tables or any navigational operation.
Those are all native binary operations with no interpretation.

<< I didn't know where to post this (in the SQL newsgroup or client server).
But I'm only concerned about speed when using this in a
client/server environment. >>

What you want to look into is the remote tracing.  You can use the
TDBISAMSession.RemoteTrace property to turn on remote tracing, and then use
the TDBISAMSession.OnRemoteTrace event handler to log the request/response
messages to and from the database server.  That will tell you what is
occurring when you open a table via a remote session, along with the times
required to execute the remote call.
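
To give a rough idea of the wiring (DBISAMSession1, LogMemo and the handler's
parameter list below are just placeholders of mine; let the IDE generate the
real OnRemoteTrace handler skeleton for you, or check the 3.30 help for the
exact event signature):

procedure TMainForm.SetUpTracing;
begin
  { Turn on remote tracing before the remote session connects }
  DBISAMSession1.RemoteTrace := True;
  DBISAMSession1.OnRemoteTrace := SessionRemoteTrace;
  DBISAMSession1.Active := True;
end;

{ NOTE: the parameter list here is only illustrative; use whatever the
  declared OnRemoteTrace event type actually expects }
procedure TMainForm.SessionRemoteTrace(Sender: TObject;
  const TraceText: string);
begin
  { Log each request/response message with a timestamp }
  LogMemo.Lines.Add(FormatDateTime('hh:nn:ss.zzz', Now) + ' ' + TraceText);
end;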

--
Tim Young
Elevate Software
www.elevatesoft.com

Sun, Apr 2 2006 8:06 PM

Jaweed Saleem
Tim Young [Elevate Software] wrote:
> Jaweed,
>
> << I realise that this functionality is already in DBISAM 4 but I was just
> wondering if there was any way of finding out what SQL has been executed
> behind the scenes in DBISAM 3.30 (e.g. opening a certain table is slow, so
> I'd like to see what Sql DBISAM is executing when opening the table, so that
> I can start performing optimizations with a query component instead). >>
>
> DBISAM doesn't execute SQL for opening tables or any navigational operation.
> Those are all native binary operations with no interpretation.
>
> << I didn't know where to post this (in the SQL newsgroup or client server).
> But I'm only concerned about speed when using this in a
> client/server environment. >>
>
> What you want to look into is the remote tracing.  You can use the
> TDBISAMSession.RemoteTrace property to turn on remote tracing, and then use
> the TDBISAMSession.OnRemoteTrace event handler to log the request/response
> messages to and from the database server.  That will tell you what is
> occurring when you open a table via a remote session, along with the times
> required to execute the remote call.
>

Hi Tim,

Thanks for your reply. Are there any guidelines on when to use SQL and when
to use table-based code? I'm not a big fan of proprietary access and
prefer to use SQL as much as possible, as I've found it to be more
scalable and to allow for more robust code.

Are there any situations where table-based code would be significantly
faster? I guess what I'm also asking here is, under what scenarios I
SHOULDN'T be using SQL.

Hope that makes sense.

Jaweed
Mon, Apr 3 2006 5:59 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Jaweed,

<< Thanks for your reply. Are there any guidelines when to use SQL and when
to use table based code? I'm not a big fan of proprietary access and prefer
to use SQL as much as possible, as I've found it to be more scalable and
allows for more robust code.

Are there any situations where table based code would be significantly
faster? I guess what I'm also asking here is, under what scenario's I
SHOULDN'T be using SQL. >>

Well, the differences between navigational and SQL methods are usually
minimal enough that using straight SQL shouldn't be an issue.  However,
there are some slight performance advantages to using FindKey/Ranges and
other direct, navigational, index-based access methods, so if performance is
absolutely, positively the most crucial item, then you will gain by using
them instead of SQL.  Filters are basically a wash with SQL, since both
operate in the same fashion in terms of limiting the results of a
filter/query.
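
To make the two styles concrete, a sketch along these lines (the table, index,
field and component names are all invented for the example):

{ Navigational access: direct index-based positioning }
CustomerTable.IndexName := 'ByCustNo';
if CustomerTable.FindKey([1234]) then
  ShowMessage(CustomerTable.FieldByName('CompanyName').AsString);

CustomerTable.SetRange([1000], [1999]);   { only CustNo 1000..1999 }
try
  CustomerTable.First;
  while not CustomerTable.Eof do
  begin
    { process the row }
    CustomerTable.Next;
  end;
finally
  CustomerTable.CancelRange;
end;

{ The SQL equivalent as a canned query }
CustomerQuery.SQL.Text :=
  'SELECT CustNo, CompanyName FROM customer WHERE CustNo BETWEEN 1000 AND 1999';
CustomerQuery.Open;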

A good rule of thumb with C/S access is this:

If you are performing any type of batch processing, use a server-side procedure
or SQL script; otherwise, the difference between the SQL and navigational
methods is negligible.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Apr 3 2006 8:06 PM

Jaweed Saleem
Tim Young [Elevate Software] wrote:
> Jaweed,
>
> << Thanks for your reply. Are there any guidelines when to use SQL and when
> to use table based code? I'm not a big fan of proprietary access and prefer
> to use SQL as much as possible, as I've found it to be more scalable and
> allows for more robust code.
>
>  Are there any situations where table based code would be significantly
> faster? I guess what I'm also asking here is, under what scenario's I
> SHOULDN'T be using SQL. >>
>
> Well, the differences between navigational and SQL methods are usually
> minimal enough that using straight SQL shouldn't be an issue.  However,
> there are some slight performance advantages to using FindKey/Ranges and
> other direct, navigational, index-based access methods, so if performance is
> absolutely, positively the most crucial item, then you will gain by using
> them instead of SQL.  Filters are basically a wash with SQL, since both
> operate in the same fashion in terms of limiting the results of a
> filter/query.
>
> A good rule of thumb is this with C/S access:
>
> If performing any type of batch processing, then use a server-side procedure
> or SQL script, otherwise, the difference is negligible between the SQL and
> navigational methods.
>

I just want to put a scenario to you (which comes up often enough in
programming with datasets).

Say you have a table with 20 fields and 300,000 records and you have to
loop through this table (after using an index) and use the values from
four fields to do some processing. Is this what you mean by batch
processing? I would normally use a query and limit it to "select field1,
field2, field3, field4 from mytable order by field1;" and optionally a
where clause. Would this be more efficient than opening a table, using
an index and a filter?

Are there any performance improvements if I use the above query without
an order by clause and make it request live (so that users can update
the records) instead of a table (which would bring down the metadata for
all 20 fields (I'm assuming))?

Thanks
Tue, Apr 4 2006 10:11 AM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Jaweed,

<< Say you have a table with 20 fields and 300,000 records and you have to
loop through this table (after using an index) and use the values from four
fields to do some processing. Is this what you mean by batch processing? >>

Yes.

<< I would normally use a query and limit it to "select field1, field2,
field3, field4 from mytable order by field1;" and optionally a where clause.
Would this be more efficient then opening a table, using an index and a
filter? >>

If the operation can be done in one SQL statement, then that is the best
solution since it is done entirely on the database server.  However, most
batch processes cannot be dealt with in one SELECT statement.  The other
option would be to wrap up the batch process into a server-side procedure.
Using server-side procedures, you can mix and match SQL and navigational
code as necessary to get the best processing time, and the whole thing is
done on the database server, so it is very quick.
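
For illustration, the two client-side shapes that leaves you with look roughly
like this (component, table and field names are invented for the example):

{ Best case: the whole batch can be expressed as one statement, so all of
  the work stays on the database server }
BatchQuery.SQL.Text :=
  'UPDATE mytable SET field4 = field2 * field3 WHERE field1 IS NOT NULL';
BatchQuery.ExecSQL;

{ Otherwise, a canned SELECT of just the columns you need keeps the result
  set as small as possible while you loop on the client }
BatchQuery.RequestLive := False;
BatchQuery.SQL.Text :=
  'SELECT field1, field2, field3, field4 FROM mytable ORDER BY field1';
BatchQuery.Open;
BatchQuery.DisableControls;
try
  while not BatchQuery.Eof do
  begin
    { use the four field values here }
    BatchQuery.Next;
  end;
finally
  BatchQuery.EnableControls;
  BatchQuery.Close;
end;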

<< Are there any performance improvements if I use the above query without
an order by clause and make it request live (so that users can update the
records) instead of a table (which would bring down the metadata for all 20
fields (i'm assuming))? >>

There is no difference between a table with a filter and a live query with a
WHERE clause.  If you plan on navigating the dataset and there are a lot of
records, then it will be slow via C/S regardless of whether you're
navigating a table, a live query result set, or a canned query result set.
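
In other words, as far as the engine is concerned these two are doing the same
amount of work (the names are again just placeholders):

{ Table with a filter }
CustomerTable.Filter := 'State = ''MA''';
CustomerTable.Filtered := True;
CustomerTable.Open;

{ Live query with the equivalent WHERE clause }
CustomerQuery.RequestLive := True;
CustomerQuery.SQL.Text := 'SELECT * FROM customer WHERE State = ''MA''';
CustomerQuery.Open;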

--
Tim Young
Elevate Software
www.elevatesoft.com

Tue, Apr 4 2006 7:47 PM

Jaweed Saleem
Tim Young [Elevate Software] wrote:
> Jaweed,
>
> << Say you have a table with 20 fields and 300,000 records and you have to
> loop through this table (after using an index) and use the values from four
> fields to do some processing. Is this what you mean by batch processing? >>
>
> Yes.
>
> << I would normally use a query and limit it to "select field1, field2,
> field3, field4 from mytable order by field1;" and optionally a where clause.
> Would this be more efficient then opening a table, using an index and a
> filter? >>
>
> If the operation can be done in one SQL statement, then that is the best
> solution since it is done entirely on the database server.  However, most
> batch processes cannot be dealt with in one SELECT statement.  The other
> option would be to wrap up the batch process into a server-side procedure.
> Using server-side procedures, you can mix and match SQL and navigational
> code as necessary to get the most optimal processing time, and the whole
> thing is done on the database server and is very quick.
>
> << Are there any performance improvements if I use the above query without
> an order by clause and make it request live (so that users can update the
> records) instead of a table (which would bring down the metadata for all 20
> fields (i'm assuming))? >>
>
> There is no difference between a table with a filter and a live query with a
> WHERE clause.  If you plan on navigating the dataset and there are a lot of
> records, then it will be slow via C/S regardless of whether you're
> navigating a table, a live query result set, or a canned query result set.
>

We are using version 3.30 at the moment, so server-side procedures aren't
available to us. With that last point about my SQL query
only bringing down four fields out of 20 and RequestLive set to True,
would it really give the same performance as a table that would bring down
the full 20 fields? I would have thought the query would be faster.
Wed, Apr 5 2006 6:12 AM

Chris Erdal
Jaweed Saleem <jaweedxp@hotmail.com> wrote in
news:064649C3-0038-413D-A339-4F6708555594@news.elevatesoft.com:

>> There is no difference between a table with a filter and a live query
>> with a WHERE clause.  If you plan on navigating the dataset and there
>> are a lot of records, then it will be slow via C/S regardless of
>> whether you're navigating a table, a live query result set, or a
>> canned query result set.
>>
>
> We are using version 3.30 at the moment so Server Side Procedures
> aren't available to us at the moment. With that last point about my
> sql query only bringing down four fields out of 20 and request live
> set to true, would it really be the same performance as a table that
> would bring down the full 20 fields? I would have thought the query
> would be faster.

If performance when navigating a selection from a dataset is critical, you
could perhaps transfer your selection to two identical memory tables on the
client, let the user update one of them, and then compare them to send back
the differences by SQL when they've finished.
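
The "send back the differences" step might look roughly like this; it assumes
the two client-side copies (EditedCopy and OriginalCopy) already exist and
share a unique CustNo key, and that UpdateQuery points at the real table on
the server. All of those names, and the single compared field, are just for
illustration.

{ Walk the edited copy, compare against the untouched copy, and post only
  the changed rows back with a parameterised UPDATE }
EditedCopy.First;
while not EditedCopy.Eof do
begin
  if OriginalCopy.Locate('CustNo', EditedCopy.FieldByName('CustNo').Value, []) and
     (OriginalCopy.FieldByName('CompanyName').AsString <>
      EditedCopy.FieldByName('CompanyName').AsString) then
  begin
    UpdateQuery.SQL.Text :=
      'UPDATE customer SET CompanyName = :CompanyName WHERE CustNo = :CustNo';
    UpdateQuery.ParamByName('CompanyName').AsString :=
      EditedCopy.FieldByName('CompanyName').AsString;
    UpdateQuery.ParamByName('CustNo').AsInteger :=
      EditedCopy.FieldByName('CustNo').AsInteger;
    UpdateQuery.ExecSQL;
  end;
  EditedCopy.Next;
end;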

--
Chris
Wed, Apr 5 2006 2:19 PM

Tim Young [Elevate Software]

Elevate Software, Inc.

Email timyoung@elevatesoft.com

Jaweed,

<< We are using version 3.30 at the moment so Server Side Procedures aren't
available to us at the moment. With that last point about my sql query only
bringing down four fields out of 20 and request live set to true, would it
really be the same performance as a table that would bring down the full 20
fields? I would have thought the query would be faster. >>

A live query always brings down the entire set of fields for the table.  The
reason for this has to do with live, calculated fields in the SQL SELECT
columns, such as this:

SELECT CustNo, CompanyName+' '+Address1
FROM customer
ORDER BY 1

With RequestLive=True, DBISAM needs the CompanyName and Address1 fields in
order to calculate the second column correctly.  They are hidden, but it
still needs them.
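
So if the users don't actually need to edit through that result set, leaving
it canned keeps the four-column SELECT a four-column transfer. Roughly
(ReportQuery, EditQuery and the field names are placeholders):

{ Canned (read-only) result set: only the four listed columns come back }
ReportQuery.RequestLive := False;
ReportQuery.SQL.Text :=
  'SELECT field1, field2, field3, field4 FROM mytable ORDER BY field1';
ReportQuery.Open;

{ Live result set: editable, but DBISAM brings down every column of
  mytable behind the scenes, as described above }
EditQuery.RequestLive := True;
EditQuery.SQL.Text := 'SELECT field1, field2, field3, field4 FROM mytable';
EditQuery.Open;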

--
Tim Young
Elevate Software
www.elevatesoft.com
