Thread: Altering a table problem
Sun, Feb 11 2007 12:41 PM

Sam
"Tim Young [Elevate Software]" <timyoung@elevatesoft.com> wrote:

<<< 1) So how do you rename a column and still keep its data? I made a typing
mistake in the name and had to drop the existing column and add a new
column. How do I preserve the data doing it this way? >>>

<Dropping and adding a column with the same name will preserve its data.
However, if you need to rename a column then there is no way currently for
EDB to know how to match up the data, so the column data is not transferred.>
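
A rough sketch of that drop-and-re-add in SQL (the exact EDB ALTER TABLE syntax may differ; the "cust"/"col1" names are just placeholders borrowed from later in this thread):

-- Drop the column and re-add it under the same name in one statement;
-- per the note above, EDB preserves the column's data in this case.
ALTER TABLE "cust"
DROP COLUMN "col1",
ADD COLUMN "col1" VARCHAR(127)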

So people don't ever rename columns? :-) That could be a problem, especially for people who
make typing mistakes like me.

<<< 2) Also, can I assume the EDB Mgr will eventually have filtering and
ranges added? >>>

<It already has filters, but no, it won't have ranges.>
I'd recommend putting in ranges. People with millions of rows of data don't want to use
filters. SQL is ok but not fast enough especially when sorting. Ranges are going to be 50x
faster than SQL.


<<< 3a) How do I create a live query? >>>

<In the EDB Manager?  If so, then make sure to check the Request Sensitive
check box.  In EDB, "live" query result sets are now referred to as
"sensitive" query result sets.>

"Sensitive"?  Is that another ANSI standard? I never heard of it before. "Live" query is
more obvious. I get the impression that a "sensitive query" might break down and cry if
get the syntax wrong. Wink

<<< 3b) Is there a "LIMIT n" or "Top n" syntax for the Select statement? I
looked in the help file and couldn't find any mention of it. >>>

<No.  We'll be implementing something like this later, but it will use the
standard windowing syntax in SQL 2003.>
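
For reference, the SQL 2003 windowing style mentioned above generally looks something like this sketch (EDB did not support it yet at this point, so the eventual syntax may differ):

-- "Top 10" expressed with a window function instead of LIMIT/TOP
SELECT *
FROM (SELECT t.*,
             ROW_NUMBER() OVER (ORDER BY col1) AS row_num
      FROM table1 t) numbered
WHERE row_num <= 10
ORDER BY row_num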

Ok

Sam
Mon, Feb 12 2007 7:05 AM

Tim Young [Elevate Software]
Elevate Software, Inc.
Email timyoung@elevatesoft.com

Sam,

<< So people don't ever rename columns? :-) That could be a problem,
especially for people who make typing mistakes like me. >>

There are reasons why columns can't be renamed in the SQL standard without a
drop and then re-add.  Mainly it has to do with column bindings.  We'll
probably eventually relax this rule because we aren't restricted by column
bindings in the database catalogs, but for now it is there.

<< I'd recommend putting in ranges. People with millions of rows of data
don't want to use filters. SQL is ok but not fast enough especially when
sorting. Ranges are going to be 50x faster than SQL. >>

An insensitive (canned) result set on a table with millions of rows will be
very fast if you're only selecting a few rows.  If you're selecting a lot of
rows, then switching to a sensitive result set will be very fast.

<< "Sensitive"?  Is that another ANSI standard? >>

It's in the SQL standard for SQL/PSM cursors and the CLI standard.

<< I never heard of it before. "Live" query is more obvious. I get the
impression that a "sensitive query" might break down and cry if you get the
syntax wrong. ;-) >>

It's possible - they *are* sensitive, after all. :-)

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Feb 12 2007 1:13 PM

Sam Karl
"Tim Young [Elevate Software]" <timyoung@elevatesoft.com> wrote:

<<An insensitive (canned) result set on a table with millions of rows will be
very fast if you're only selecting a few rows.  If you're selecting a lot of
rows, then switching to a sensitive result set will be very fast.>>

I haven't found that to be the case when the query has a sort.
For example:

select * from table1 where col1 like 'abc%' order by col1

takes 12.6 seconds. There are 1 million rows in table1 and col1 is VarChar(127) case
insensitive and is indexed. It returns 576 rows.
Is there a way to speed it up? A range would be just 10ms or less.

Sam
Mon, Feb 12 2007 2:53 PM

Sam Karl
Tim,
    Here is the SQL Execution Plan which might help:

Sam

================================================================================
SQL Query (Executed by ElevateDB 1.00 Build 8)

Note: The SQL  shown here is generated by ElevateDB and may not be exactly the
same as the SQL that was originally entered.  However, none of the differences
alter the execution results in any way.

================================================================================

SELECT ALL
"cust"."rcd_id" AS "rcd_id",
"cust"."cust_Title" AS "cust_Title",
"cust"."col1" AS "col1",
....
"cust"."Description" AS "Description"
FROM "cust"
WHERE "col1" = 'abcd' [Partial length match on 4 char(s)]
ORDER BY "cust"."col1"

Source Tables
-------------

cust: 1000000 rows

Result Set
----------

The result set was live
The result set consisted of zero or more rows
The result set was ordered using the index ix_cust

Filtering
---------

The following filter condition was applied to the cust table:

"col1" = 'abcd' [Partial length match on 4 char(s)] [Index scan: 279 keys,
77824 bytes estimated cost]

================================================================================
576 row(s) returned in 12.297 secs
================================================================================
Mon, Feb 12 2007 2:58 PM

"Jose Eduardo Helminsky"
Sam

> select * from table1 where col1 like 'abc%' order by col1
>
> takes 12.6 seconds. There are 1 million rows in table1 and col1 is
> VarChar(127) case
> insensitive and is indexed. It returns 576 rows.
> Is there a way to speed it up? A range would be just 10ms or less.

Have you tried col1>='abc' and col1<='abc'#255
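
Written out as a full statement, that suggestion turns the LIKE into a prefix range on the index; a common variant uses the next prefix as an exclusive upper bound (a sketch only, and as noted below the right upper bound depends on the collation):

-- Prefix range on the indexed column instead of LIKE 'abc%'
SELECT *
FROM table1
WHERE col1 >= 'abc'   -- lower bound of the prefix
  AND col1 < 'abd'    -- exclusive upper bound: the next prefix in sort order
ORDER BY col1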

Eduardo

Mon, Feb 12 2007 3:55 PM

Tim Young [Elevate Software]
Elevate Software, Inc.
Email timyoung@elevatesoft.com

Sam,

<< I haven't found that to be the case when the query has a sort. For
example:

select * from table1 where col1 like 'abc%' order by col1

takes 12.6 seconds. There are 1 million rows in table1 and col1 is
VarChar(127) case insensitive and is indexed. It returns 576 rows. >>

The problem is not the canned result set - the problem is that you're
performing a case-sensitive LIKE so that the case-insensitive index cannot
be used:

http://www.elevatesoft.com/dbisam4d5_optimizations.htm

If you change the query to this:

select * from table1 where UPPER(col1) like UPPER('abc%') order by col1

it should speed it up.  BTW, generating a query plan would tell you that the
WHERE clause is un-optimized.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Feb 12 2007 3:57 PM

Tim Young [Elevate Software]
Elevate Software, Inc.
Email timyoung@elevatesoft.com

Sam,

Forget my last response - I forgot I was in the EDB newsgroup and gave you a
DBISAM response. :-) Please see my other response for the correct
information.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Feb 12 2007 3:58 PM

Tim Young [Elevate Software]
Elevate Software, Inc.
Email timyoung@elevatesoft.com

Sam,

<< The result set was live >>

This is the issue that I referred to earlier.  You're selecting 576 rows out
of 1000000 rows, hence you should set RequestSensitive to False and have EDB
generate an insensitive result set cursor.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Feb 12 2007 3:58 PM

Tim Young [Elevate Software]
Elevate Software, Inc.
Email timyoung@elevatesoft.com

Eduardo,

<< Have you tried col1>='abc' and col1<='abc'#255 >>

That won't work for most non-US-English collations.

--
Tim Young
Elevate Software
www.elevatesoft.com

Mon, Feb 12 2007 4:39 PM

Sam Karl
"Tim Young [Elevate Software]" <timyoung@elevatesoft.com> wrote:

Sam,

Forget my last response - I forgot I was in the EDB newsgroup and gave you a
DBISAM response. :-) Please see my other response for the correct
information.

Tim,
     Yeah, I use multiple databases and I end up wearing so many different db hats that
it takes a while to get my brain in gear. You only have to deal with two. :-)
Yes, your suggestion to make it insensitive got it down to 100ms. Even if I drop the sort
and have it sensitive (live), it still takes 1100ms, or 10x slower than a static query. So
it looks like I need to use insensitive queries when extracting a few rows from a large
table. I guess there is some sort of contention between sorting and filtering on the same
column. That's why I think a Range would be faster. Anyways, I won't worry about it.

Sam