Sybase ASE
All versions
Last updated: 18 June 2006
ASE Quiz Questions: answers 2006
 
More ASE quiz questions: 2013 - 2012 - 2011 - 2010 - 2009 - 2008 - 2007 - 2006 - 2005 - 2004 - 2003 - 2002 - 2001
For a randomly selected quiz question, click here.


November 2006
In ASE, it's quite easy to write a custom string to the ASE errorlog from SQL. An application of this is the errorlog checking tool described here.
I have received numerous requests from RepServer DBAs to make this tool work for RepServer errorlogs as well. Which brings me to this month's question: how can you write an arbitrary string to the RepServer errorlog in a programmatically controlled way?

Answer:
It can be done, but the method depends on the RepServer version. First, RepServer 15.0 has a new command admin echo, "your-string", with_log which writes the string your-string to the RepServer errorlog:
1> admin echo, "Hello", with_log
2> go
The errorlog will then contain the following line:
I. 2006/11/25 13:49:13. User 'sa' writes: 'Hello'
NB: without the with_log option (which is the part that's new in RS 15.0), the string is echoed to the client only.

In pre-RS 15 things are a bit more tricky, since there is no supported command to write a particular string to the errorlog. However, it is still possible -- and although there's no risk involved, be warned in advance that this trick is unlikely to win any prizes for elegance.
In RS 12.6 or before, writing your own custom string to the RS errorlog in a controlled way requires the following steps:
  1. Create a connection to a non-existing database, for example, 'XXX.XXX'. In the create connection command, specify a login name that (i) is the unique placeholder string you want to write to the RS errorlog, and (ii) is guaranteed not to exist should 'XXX' happen to be the name of an existing dataserver, e.g.:
    create connection to XXX.XXX
    set error class to rs_sqlserver_error_class 
    set function string class to rs_sqlserver_function_class
    set username to ___my_unique_string___
    set password to doesntmatter
    with dsi_suspended
    
  2. When you want to write your placeholder string to the RS errorlog, run these commands:
    resume connection to XXX.XXX
    go
    suspend connection to XXX.XXX
    go
    
  3. Although the login into XXX.XXX will fail (that's the idea!), the name of the login -i.e. your placeholder string- is written to the RS errorlog as a result:
    I. 2006/11/25 13:53:11. Trying to connect to server 'XXX' 
    as user '___my_unique_string___' ......
    
  4. Note that it is not a good idea to use the dataserver name or database name as your placeholder string: when starting RepServer, the name of the connection will be written to the RS errorlog as part of a warning that the connection is down.
NB: the errorlog checking tool has been modified to work with RepServer 15.0. If you want to get it working with RS pre-15, you'll have to hack it around yourself a bit.


October 2006
As we all know, when writing a SQL statement it doesn't really matter in what order you list the tables in the from-clause; this has no impact on the query's result set.
However, there happens to be a situation where the order of the various parts in the from-clause of a select statement does actually make a difference for syntactical correctness. Do you know which?

Answer:
This particular case concerns the 'on'-clause in the ANSI-style join syntax. The on-clause is part of the from-clause, and its predicates define the join conditions between two joined tables. Before getting into the difficult part, here's an example of a three-table join in ANSI syntax:
-- define the tables
create table t1 ( a int, b int)
create table t2 ( c int, d int)
create table t3 ( e int, f int)
go

-- this query works fine
select *
from t1 
inner join t2 on t1.a = t2.c
inner join t3 on t2.d = t3.f
For reference, the classic T-SQL equivalent of this statement would be as follows:
select *
from t1, t2, t3
where t1.a = t2.c 
  and t2.d = t3.f
Now, the ANSI join syntax requires that the tables referred to in an on-clause have already appeared at that point in the from-clause, i.e. they must be part of the joined table expression built up so far. For example, if you swap 't2' and 't3' in the above query, you'll get error 11040. This is because the first join predicate refers to 't2.c', while table 't2' doesn't appear until the next 'inner join' operator:
-- this query runs into syntax error 11040
select *
from t1 
inner join t3 on t1.a = t2.c
inner join t2 on t2.d = t3.f
go

Msg 11040, Level 16, State 1
Server 'PROD', Line 3
Adaptive Server cannot perform the requested action 
because column 'c' is not within the scope of the 
joined table expression. Check your command for 
missing or incorrect database objects, variable 
names, and/or input data.
This behaviour may be somewhat surprising if you've never seen it before, since we've all been raised with the understanding that the order of the various parts of a SQL statement's from-clause doesn't make a difference. In the case of the ANSI syntax, however, it does, and there's no way around it. Fortunately ASE will let you know when you get it wrong...
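Note that this doesn't force one single fixed table order: any order works, as long as each on-clause only references tables that have already been joined at that point. For example, assuming the same tables as above, this reordering should be accepted just fine:
-- this ordering is also fine: each on-clause only references
-- tables that have already appeared
select *
from t2
inner join t3 on t2.d = t3.f
inner join t1 on t1.a = t2.c
go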


August 2006
We're always looking for ways to make queries run faster, or keep the response time below a certain maximum.
But what if you'd want to do the reverse, and actually want to make a query take at least some minimum amount of time to respond? How would you do that?
(NB: this is not as crazy an idea as it may sound -- see the answer).

Answer:
If you want to create some kind of delay when querying a table, you can do this by using a proxy table. The basic idea is this:
  • create a proxy table, mapping to the real table
  • use a dedicated login for the proxy table's CIS access (i.e. create an external login)
  • set up a login trigger for that login, which contains a waitfor delay statement
  • when querying the proxy table, the waitfor delay in the login trigger kicks in, thus causing a delay in query execution
The basic idea is quite simple, but there are some things to be aware of, so keep reading.
Before getting into the details, let's take a step back: why would you want to do a thing like this in the first place?
Well, it's not as outlandish as it may seem, and in fact I'm sure many readers will recognise the following situation. Imagine you have a new system, with still only a few users and little data: consequently, queries will be very fast. When the system grows to its expected size, volumes increase, and queries get slower. This is all natural from a DBA's perspective, but it may be something end-users will not like or understand (or both), so complaints may result ('but it was so fast at first and now it's slow...'). One could argue this is a typical case of mismanaged end-user expectations, and from that perspective, there would have been some sense in making sure those initial response times had not been as fast as they were; hence, some kind of query slowdown mechanism could have been helpful (I'll leave it to everyone's own imagination how to take this one step further so as to magically 'improve' performance later -- but for ethics' sake, I won't recommend doing that).

Anyway, against this background, let's look at how we can create such a query slowdown:
Let's first look at the case where a proxy table is mapped to the real table. You need a 'loopback' servername alias (if you use the MDA tables, you already have this):
sp_addserver loopback, null, @@servername
go

-- For some reason, you need to make sure the statement cache 
-- is disabled, or things won't work...
sp_configure 'statement cache', 0
go
Now, create the proxy table. Rename the real table (here, 'my_table'), and create a proxy table with the same name as the original real table:
use my_db
go
sp_rename my_table, my_table_renamed
go
create proxy_table my_table at 'loopback.my_db..my_table_renamed'
go
grant all on my_table to public
go
Now, define a dedicated login for all CIS access to 'loopback':
use master
go
sp_addlogin loopback_login, mysecretcode, my_db
go
sp_addexternlogin loopback, my_login, loopback_login, mysecretcode
-- check the docs for details about sp_addexternlogin !
go
use my_db
go
sp_adduser loopback_login
go
grant all on my_table_renamed to loopback_login
go
At this point, check that queries against the proxy table return the actual table's data.
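For example, a quick check (using the names from the setup above) would be:
1> select * from my_table
2> go
This should return the rows of my_table_renamed, now fetched via the 'loopback' CIS connection (and, once the login trigger below is in place, with the artificial delay).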
Finally, set up the login trigger:
use my_db
go
create proc my_logintrig
as
if suser_name() = 'loopback_login'
waitfor delay '00:00:10' -- perhaps a bit much...
go
grant all on my_logintrig to public
go
sp_modifylogin loopback_login, 'login script', my_logintrig
go
Now, do a 'select * from my_table' and see how it takes at least 10 seconds (adjust the timeout as needed).

There are some things to be aware of. For example, you'll see that a second access to my_table does not wait for 10 seconds, but returns immediately. This is because the CIS connection is maintained after the first access (until the parent session disconnects), so the login trigger doesn't kick in for subsequent accesses.
There are ways to work around this though. For example, in 12.5.3 ESD#3 or later, enabling traceflag 11231 disconnects the CIS connection after each access, so a new login is performed every time. Note that this makes all CIS operations slower server-wide (but that was our intention, remember?). However, if you're using MDA tables in pre-15.0 ESD#2, be aware that this has an impact on the way data is returned from the historical MDA tables like monSysSQLText since the connection's context is not maintained between subsequent MDA queries (in 15.0 ESD#2 and later, this problem doesn't apply anymore).
In 12.5.0.3 or later, you can also use Dave Wein's sp__idleReaper to automatically kill sessions (CIS sessions, in this case: you can check for those using 'loopback_login') that have been idle for more than, say, a few seconds (although there is a risk that a new query is just being submitted while you do that...).
Lastly, in 15.0.1 (expected September 2006), a new configuration parameter is available: 'cis idle connection timeout' defines the number of seconds after which an idle CIS connection is automatically closed.
Note that the last two options won't work when the table is continuously queried (i.e. more frequent than the idle timeout setting).
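For completeness, setting that 15.0.1 configuration parameter is just a regular sp_configure call; the value of 15 seconds below is only an illustration, pick whatever idle timeout suits you:
-- 15.0.1 and later: close idle CIS connections after 15 seconds
sp_configure 'cis idle connection timeout', 15
go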


Finally, an alternative approach is possible, where the proxy table is mapped to a stored procedure. This procedure performs the query against the real table, after doing a waitfor delay. This is simpler because no dedicated external login is needed here; however, it works for read-only access only. Another difference is that the full column list must be specified explicitly when creating the proxy table:
sp_dropexternlogin loopback, NULL
go

use my_db
go
create proc my_proc 
as 
waitfor delay '00:00:10'
select * from my_table_renamed
go
create existing table my_table2 (...all columns...) 
external procedure at 'loopback.my_db..my_proc'
go
grant all on my_proc to public
go
Now, you can select from my_table2.


Of course, when all your SQL is wrapped in stored procedures, you could simply build the delay into each stored procedure itself, without the need to mess around with login triggers.
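As a minimal sketch of that last idea (the procedure and table names below are made up):
create proc get_orders_slow
as
  -- artificial minimum response time before running the real query
  waitfor delay '00:00:05'
  select * from orders
go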


July 2006
Some time ago, I was installing an EBF in my ASE server. As you know, part of this procedure is to run the installmaster, installsecurity, installmontables (etc) scripts after booting the ASE server with the new EBF.
Since I was in a rush, I kicked off two parallel isql sessions, one doing installmaster and one doing installsecurity, since this seemed a quicker way of completing the job.
Later, various system stored procedures (like sp_poolconfig and sp_modifylogin, but many others too) produced permission errors when I ran them. Why was that?

Answer:
System stored procedures often update system tables. As you know, updating system tables manually requires the configuration parameter 'allow updates' to be enabled. However, the system stored procedures can also perform these updates when this option is disabled. The reason is that the permission to update system tables is 'frozen' into any stored procedure that is created while 'allow updates' is enabled.
To achieve this effect, the installmaster script (and others, like installsecurity) enables 'allow updates' at the start of the script, and disables it again at the end.
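The effect is easy to demonstrate with a simplified sketch along the same lines (the procedure below is made up, and its update is a deliberate no-op):
sp_configure 'allow updates', 1
go
create proc demo_sysupdate
as
  -- suid -12345 matches nothing, so this statement is harmless
  update master..syslogins set fullname = fullname where suid = -12345
go
sp_configure 'allow updates', 0
go
-- demo_sysupdate can still update system tables at run time, even though
-- 'allow updates' is now off, because it was on when the proc was created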

What happened in my case was this: I kicked off installmaster and then installsecurity, so that they ran in parallel. Since installsecurity is much shorter than installmaster, it finished first, and in doing so it disabled 'allow updates'. Now, since this is a server-wide setting, any stored procs that were created by the still-running installmaster from that point onwards did not have the permission to update system tables frozen into them... and this leads to run-time permission errors when executing such a procedure.
Therefore, even though this may not be explicitly stated anywhere, always run the install* scripts serially.

Lesson learned: When in a rush, doing things differently than usual is rarely a good idea...

June 2006
An insert trigger is created on a table as follows:
create trigger MyTrigger on MyTable for insert
as
  select * from inserted
Now, when inserting a row, the trigger returns an empty result set, suggesting that no data was found in the inserted table. However, the row itself appears to have been inserted OK:
1> insert MyTable (a, b) values (12345, 'inserting 12345')
2> go
 a           b
 ----------- --------------------

(0 rows affected)

(1 row affected)
1> select * from MyTable where a = 12345
2> go
 a           b
 ----------- --------------------
       12345 inserting 12345

(1 row affected)
How do you explain that while a row is inserted perfectly fine, the insert trigger does not see the inserted data, even though the trigger is fired as expected?

Answer:
The explanation is that the trigger was created on a proxy table: inserts and retrieval (and other DML) on a proxy table work fine, and from the outside there's nothing that suggests this isn't a normal table (as is indeed the purpose of a proxy table). Also, when doing DML, any corresponding triggers fire if they exist, but their inserted and deleted tables will always be empty, so they cannot perform any processing based on the inserted/updated/deleted rows.
The reason why inserted and deleted are empty is that these tables are dynamically constructed from the transaction log records generated for the DML statement causing the trigger to fire. For a proxy table, the actual inserts, and the generation of log records, take place in the remote server where the 'real' table resides. Therefore, in the database where the proxy table resides, there simply are no transaction log records for any DML against the proxy table. Consequently, the trigger on the proxy table won't be able to do any sensible processing at all. Only a trigger on the 'real' table in the remote server would be able to work as we'd expect from a trigger.
Altogether, it's actually quite normal that a trigger on a proxy table behaves this way... perhaps the only surprising part is that a trigger on a proxy table is fired at all, or indeed, can be created at all. I guess this has been allowed because a proxy table is supposed to behave as much like a normal table as possible...

Should you want to try this yourself, here's some DDL to get you started:
use master
go

-- assuming @@servername is defined!
sp_addserver loopback, null, @@servername 
go

use my_db
go
create table MyRealTable (a int, b varchar(20))
go
create proxy_table MyTable at 'loopback.my_db..MyRealTable'
go
create trigger MyTrigger on MyTable for insert
as
  select * from inserted
go
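For comparison, a trigger on the underlying real table does behave as you'd expect; a small sketch, following the same naming:
create trigger MyRealTrigger on MyRealTable for insert
as
  select * from inserted  -- this trigger does see the inserted rows
go

insert MyRealTable values (12345, 'inserting 12345')
go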



April 2006
A well-known way to determine the elapsed time for your queries is to run set statistics time on. This requires modifying the executed SQL script since set statistics time on is a SQL command that must be executed in order to have any effect.

How can you determine the elapsed time for the queries in your session, without involving any SQL commands at all?

Answer:

There is a little-known command-line flag for isql that does the job: by running isql -p, each executed batch gets timed by isql:
% isql -Usa -P -SSYB150 -p
1> sp_helpdb
2> go
[...output...]
(return status = 0)
Execution Time (ms.):   1000            Clock Time (ms.):   1000
The time measured here is the end-to-end elapsed time as seen from the client application (isql).
Note that the timing is based on the entire batch: isql starts timing when the batch is sent to the server, and stops the clock when the results have been received.
If you'd use set statistics time on instead, the elapsed time for each statement inside an executed stored procedure would be printed as well (which may or may not be what you're interested in). This is because set statistics time on causes the ASE server to send timestamps for each statement to the client, while isql -p is done purely by the client itself without any functional involvement of the ASE server.
When executing a batch multiple times (with 'go N'), 'Clock Time' accumulates the total elapsed time for all executions together, while 'Execution Time' is the time required for each individual batch (try 'go 100' yourself!).
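For reference, the server-side alternative mentioned above is simply enabled in the session itself (output abbreviated here):
1> set statistics time on
2> go
1> sp_helpdb
2> go
[...output, followed by the parse/compile and execution times that ASE reports for each statement...]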

Unfortunately, isql appears to round the elapsed time to multiples of 1000 milliseconds. For very short statements, the elapsed time appears to be rounded down and is shown as 0. As always, sqsh comes to the rescue: sqsh -p measures the time much more precisely:
% sqsh -Usa -P -SSYB150 -p
[...]
[101] SYB150.sa..master.1> sp_helpdb
[101] SYB150.sa..master.2> go
[...output...]
Clock Time (sec.): Total = 0.132  Avg = 0.132 (7.58 xacts per sec.)
Here, 'Total' is the cumulative amount of time for all batches when 'go N' is used, and 'Avg' the average time per batch.
Given the more accurate timing, this looks like yet another reason to use sqsh instead of isql...


March 2006
Some years ago, while in the middle of a troubleshooting operation involving a corrupt database, something happened that made my heart skip a beat. I had just run sp_helpdb, and the result looked something like this:
1> sp_helpdb
2> go
 name                     db_size       owner                    dbid
         created
         status
 ------------------------ ------------- ---------------------- ------
         ------------------
         ----------------------------------------------------
 PROD                         2500.0 MB sa                          9
         Mar 18, 2004
         abort tran on log full

 REPORTING                    4000.0 MB sa                         11
         Mar 22, 2004
         select into/bulkcopy/pllsort, abort tran on log full

 dbccdb                       1200.0 MB sa                      31516
         Jun 29, 2004
         trunc log on chkpt

(1 row affected)
(return status = 0)
1>
There were only 3 databases in my server! What had happened to my master database and other databases?? This corruption problem seemed to be getting out of hand...
Panicking, I ran sp_helpdb again, only to find those same three databases again. Getting down to basics, I did a select * from sysdatabases -- but still, it showed only those three databases. HELP!?!?

Any clues?

Answer:
With hindsight, this is one of those classic mistakes everyone has to make at least once...
It appears I had run set rowcount 3 earlier in that session, and naturally, this was having an effect on all statements, including those in sp_helpdb, as well as the direct select from sysdatabases. Great was the relief when, after running set rowcount 0, all databases showed up normally again...
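Should you want to relive the experience, it takes only a few lines:
1> set rowcount 3
2> go
1> select name from master..sysdatabases
2> go
-- only 3 rows come back, regardless of the real number of databases
1> set rowcount 0
2> go
-- back to normal: all rows are returned again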


January/February 2006
Let's assume you need to generate 100000 row inserts for testing the throughput of insert statements by ASE. In the spirit of the quiz question for May 2005, you're doing this as follows:
1> insert MyTable values (....)
2> go 100000
Is this a good approach for performance testing?

Answer:
Obviously, this approach works fine from a functional perspective: 100000 inserts are generated as desired. While this is efficient from a lazy DBA's point of view -it requires only a minimal amount of typing- it is not the right approach for performance testing. Why not?

When using the 'go N' subcommand of isql, the preceding command batch is sent to the ASE server N times. Each time, this requires network I/O (to send the query and return the result), and, inside ASE, parsing and compilation of the query. This overhead can be significant, and as a result, you may not be generating as much workload for the ASE server as you would perhaps have thought. To illustrate this, I ran two simple tests.

First, I'm using 'go 100000' as above. Note that the isql session runs on the same host as the ASE server to minimise the actual network traffic:
1> select convert(varchar, getdate(), 109)
2> go

 ------------------------------
 Feb 12 2006 10:27:53:596PM

(1 row affected)

1> insert MyTable values (1)
2> go 100000
(1 row affected)
100000 xacts:
1> select convert(varchar, getdate(), 109)
2> go

 ------------------------------
 Feb 12 2006 10:35:11:406PM

(1 row affected)
As you can see, inserting 100000 rows takes 7 minutes and 18 seconds, which corresponds to ~230 inserts/second. That may seem fast enough, but let's try a different approach too.

Here, I'm creating a stored procedure to perform the inserts in a loop:
1> create proc MyTable_insert @n int as
2> while @n > 0
3> begin
4>   insert MyTable values (1)
5>   select @n = @n - 1
6> end
7> go

1> select convert(varchar, getdate(), 109)
2> go

 ------------------------------
 Feb 12 2006 10:37:10:216PM

(1 row affected)

1> MyTable_insert 100000
2> go 
(return status = 0)
1> select convert(varchar, getdate(), 109)
2> go

 ------------------------------
 Feb 12 2006 10:37:33:680PM

(1 row affected)
In this case, it takes only 23 seconds to insert 100000 rows, corresponding to ~4300 inserts/second. That works out as about 18 times faster than the 'go 100000' approach. Since this stored procedure invocation requires only a single client-server round trip (as opposed to 100000 such round trips in the first test), this factor-of-18 difference is caused solely by the overhead mentioned before (network traffic, query parsing/compilation).

Of course, the figures you find on your own server may be quite different, but the difference should still be very clear.

Summarising, the conclusion should be that the overhead of the 'go 100000' approach (network traffic + query parsing, compilation, etc) is significant. For performance testing, this means that with 'go 100000', you're primarily testing how fast your isql client can send queries to the ASE server, rather than putting the maximum workload on ASE.

NB: the above tests were executed in a regular database; when running these tests in tempdb, the difference will likely be bigger, since ASE uses various tempdb optimizations -like lazy commit writes- that do not apply to processing in normal databases. The net effect would be that ASE has to spend less time on inserting each row than in a regular database, and therefore the difference between these two tests would be even bigger.
Because of these optimizations, running such performance tests in tempdb wouldn't be fair when doing comparative tests for a real-life application.




 
 This document is located at www.sypron.nl/quiz2006a.html