Sybase ASE
All versions
Last updated: 24 December 2009
ASE Quiz Questions: answers 2009
 


October 2009
Here's some unexpected ASE behaviour -for me, at least- that I ran into recently.
This is about the relatively little-known feature of 'resource limits', specifically about the 'elapsed_time' limit.
If you'd create an 'elapsed_time' resource limit of 10 minutes, the idea is that any T-SQL statement taking longer than 10 minutes to execute, will be aborted by the ASE Resource Governor (assuming you'd configure the limit for aborting).
While taking a fresh look at the 'elapsed_time' resource limit, what I ran into recently, by complete coincidence -or rather, through sloppy SQL coding on my part- was the following waitfor statement:
waitfor delay '01:00:00'  -- i.e. wait for 1 hour
When you execute this waitfor while the 10-minute resource limit mentioned above is set up, after how much time do you expect the resource limit to kick in?
  • (a) after 0 seconds
  • (b) after 10 minutes
  • (c) after 1 hour
What's your guess?

Answer:
Before getting to the answer, let's do a quick recap of that resource limits thing. This feature was originally introduced in ASE 11.5 in 1997, but remained largely unnoticed by most ASE users until version 12.5 when a new resource limit 'tempdb_space' was added, allowing a DBA to put a hard limit on the amount of space a session could allocate in tempdb.

Prior to ASE 12.5, the only available resource limit types were 'row_count', 'io_cost' and 'elapsed_time'. Unlike 'tempdb_space', none of these provided a working solution to an everyday problem, and this was perhaps the reason why resource limits didn't get very popular.
A quick overview of these limit types:
  • 'row_count' allows you to limit the number of rows in a result set being returned to the client application, thus trying to avoid hanging up server and client in case some monstrous unexpected Cartesian product occurs.
  • 'io_cost' aims to identify excessively costly, and therefore probably bad, query plans -- though this concept actually relies on a whole bunch of assumptions which may or may not be correct.
  • Finally, 'elapsed_time' lets you specify the maximum amount of wall-clock time a T-SQL statement (any T-SQL statement) may take to run.
Of these three limits, 'elapsed_time' was the one with the best chance of getting into many a DBA's toolbox. Unfortunately, the initial implementation left a few essential things to be desired, to the point that it was not very useful at all. The original problems were mainly that the time limit checking took place only after a statement had completed -- in other words, a statement could run to completion in 2 hours, only for the Resource Governor to discover after completion that the 'elapsed_time' limit of 10 minutes had been exceeded. In other words, a killer query wasn't exactly stopped in its tracks.
Also, I recall the 'elapsed_time' limit didn't apply to triggers, and there was a series of other smaller issues as well.

Recently it was pointed out to me (thank you, Steve Hookings!) that some of the issues with resource limits had quietly been resolved in ASE 12.5.3, which had somehow completely escaped my attention. In any case, in ASE 12.5.3 and later, the 'elapsed_time' limit is now acting as you had hoped and expected -- namely, stopping a statement the second it runs longer than the configured limit. The good news is that this resource limit has become much more practically usable now. The only thing I'm aware of that still doesn't work about the 'elapsed_time' limit is that it does not apply to triggers: a trigger will still execute in full before the time limit is checked. I guess that's not ideal, but it doesn't look like a big showstopper to me since it does work for all statements that are not executing within the scope of a trigger.
Anyway, having learnt about these very welcome fixes to the 'elapsed_time' limit, I decided to give it another chance. And I must say, it seems to work very well indeed. If only I had known this back in December 2004 when 12.5.3 was released...

I'm getting to the quiz answer in a second, but first, if you'd like to test this thing yourself, here's how to set things up.
First, you must enable the Resource Governor:
sp_configure 'allow resource limits', 1
go
-- now you must shutdown and reboot ASE for this setting to take effect...
Second, you must create a resource limit. In this example, I'm creating an 'elapsed_time' limit of 600 seconds which applies to all sessions by login 'robv':
sp_add_resource_limit 'robv', NULL, 'at all times', elapsed_time, 600, 2, 4, 6
go

-- display the current limits
sp_help_resource_limit @verbose=1
go
The call creating the resource limit above has three intriguing numeric parameters at the end (2, 4, 6). Suffice to say here that '4' means the session is killed when the limit is exceeded (alternatives include aborting the batch or the transaction (but not terminating the session), or simply issuing a warning). Check the ASE manuals if you're interested in the details...
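For illustration, here is a sketch of the same limit with a milder action, plus the cleanup call. The action-code meanings in the comment (1 = warn, 2 = abort the batch, 3 = abort the transaction, 4 = kill the session) and the sp_drop_resource_limit parameters are my reading of the documentation, so verify them against your ASE version's manuals:

```sql
-- same 600-second limit, but only aborting the offending batch
-- instead of killing the session (7th parameter: action code 2)
sp_add_resource_limit 'robv', NULL, 'at all times', elapsed_time, 600, 2, 2, 6
go

-- remove the limit again when done testing
sp_drop_resource_limit 'robv', NULL, 'at all times', elapsed_time
go
```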

So, after this lengthy intro, let's turn to the quiz question. What do you think happens when an 'elapsed_time' limit is configured and waitfor delay '01:00:00' is executed?
To my surprise, the correct answer turned out to be (a): the resource limit applies immediately when execution reaches the waitfor delay statement. This may look odd, but when you think about it, it's perhaps not so strange after all. I have been unable to get definitive confirmation, but I *think* the reasoning behind this behaviour, which in fact dates back to ASE 11.5, is as follows: when we execute waitfor delay, we know exactly how long we're supposed to wait (since the argument to waitfor delay specifies the wait interval). If that interval is longer than the configured 'elapsed_time' resource limit, then we know with certainty that this statement will exceed the configured limit at some point, and action will then have to be taken. Since we know that this will happen with certainty, why bother waiting for that moment? Let's take the action right now, and waste less of the world's time...

I guess this behaviour is quite OK. The only downside I'm seeing is when you actually expected the waitfor to really wait for as long as you asked it -- but then, you shouldn't have configured an 'elapsed_time' resource limit that was shorter than what your application would be doing!

I should point out immediately that the way waitfor delay is handled by the Resource Governor is a big exception: any other long-running statement would be killed around the 601st second of execution, as it should be (again, this does not apply to anything executed in a trigger).
Clearly, few people would be interested in using resource limits to tackle waitfor statements. In reality, you would rather want this to apply to runaway select statements for example.
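If you want to see this for yourself, a minimal repro sketch, assuming the 600-second limit for login 'robv' from the setup above is active and you are logged in as 'robv':

```sql
-- with the 600-second elapsed_time limit active, this should be
-- aborted right away: 3600 seconds > 600 seconds, so the Resource
-- Governor takes its action immediately instead of waiting
waitfor delay '01:00:00'
go

-- by contrast, a waitfor below the limit completes normally
waitfor delay '00:00:05'
go
```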

Anyway, the news of 'elapsed_time' having received a major fix may be a bit late, but I think it justifies looking at this feature again if you have any issues with long-running queries that strike your server out of the blue.
Just in case that is indeed your problem: when you enable traceflag 3662 (requires ASE 12.5.3 or later), then the SQL text of the offending batch is written to the ASE errorlog, so as to make it easier to figure out what happened.
How would you rate this ASE quiz question?
Great    OK    Average    Boring     


September 2009
When you want to get an idea of the locks being held in your ASE server, you run the system procedure sp_lock.
However, the results show only the object IDs for which locks are currently being held. Unless you're one of those guys who knows the object IDs of all your tables by heart (I'm definitely not in that league myself), you'd really like to see the table names reported instead.
What's the simplest way of achieving that?

Answer:
The simplest way of displaying the table names for which locks are currently being held is, in fact, to use sp_lock -- provided you're running ASE 15.
As of 15.0, sp_lock supports an additional parameter @verbose. When specifying a non-zero value for this parameter, sp_lock will print the table names instead of only their IDs.
@verbose is the third parameter, so you can call sp_lock in either of the following ways:
1> sp_lock @verbose=1
2> go

1> sp_lock NULL, NULL, 1
2> go
(the output is typically a little too wide to fit neatly on the page width I'm using for this WWW site, so I'm omitting it here)

Try it out!

In case you're not running ASE 15 yet, you'll have to code your own version of sp_lock -- or you can take the SQL code of the ASE 15 version of sp_lock and install it in 12.x, where it will run fine (to get a copy of that code, just download & install one of the free ASE versions; the installmaster script contains the source code).
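Alternatively, for a quick ad-hoc look in pre-15 versions, you can query master..syslocks directly and map the IDs to names with db_name() and object_name(). This is only a sketch -- column details may vary slightly between versions, so treat it as a starting point:

```sql
-- pre-ASE 15 sketch: show locks with database and table names instead of IDs
select l.spid,
       db_name(l.dbid)           as db_name,
       object_name(l.id, l.dbid) as table_name,
       l.type                    as lock_type
from   master..syslocks l
order  by l.spid
go
```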
How would you rate this ASE quiz question?
Great    OK    Average    Boring     


August 2009
You're an ASE DBA, and an opinionated (as well as upset) end-user calls you saying there must be an open transaction in the ASE server, blocking his application.
In a plain-vanilla ASE installation, what's the least number of keystrokes to find out if there is indeed an open transaction, and if so, how long it's been open?


Answer:
The quickest way to find out if there are any open transactions in your ASE server, as well as various details of such transactions, is to run the following SQL:
1> sp_transactions
2> go
The system stored proc sp_transactions will print out an overview of all transactions that are currently open in your ASE server. This overview includes the session owning each open transaction, as well as the start time of the transactions. The latter is especially relevant because it will tell you how long a transaction has been open: if that's been longer than a few minutes, it almost always means that something, somewhere is wrong (transaction log full? blocked by another transaction? someone forgot to do a commit?)
The classic way to get this information is to run select * from master..syslogshold, but that's more keystrokes -- and, if your keyboard skills are like mine (always typing too fast), more opportunities for syntax errors.
What neither will show you is whether there are blocking locks -- you need to run sp_lock for that. But you should run sp_transactions first: if there is no open transaction, then there won't be corresponding locks either and you can safely tell your opinionated end-user that his diagnosis stinks -- giving you some more time to look further to see if perhaps something else is wrong (hint: run sp_who to check for LOG SUSPEND sessions).

BTW, note that querying syslogshold also tells you whether a replication truncation point is active in a database, and sp_transactions will not -- however, a replication truncation point will never cause locks to be held that block other sessions.
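If you do go the syslogshold route, here's a sketch that also computes how long each transaction has been open, and skips replication truncation point rows (which, to my knowledge, are recorded with spid 0 -- verify against your version's system tables documentation):

```sql
-- open transactions with their age in seconds
select h.spid,
       db_name(h.dbid)                      as db_name,
       h.name                               as tran_name,
       h.starttime,
       datediff(ss, h.starttime, getdate()) as open_seconds
from   master..syslogshold h
where  h.spid != 0   -- 0 = replication truncation point, not a real tran
go
```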

So, by running sp_transactions, you'll get your information in the least number of keystrokes.

To pre-empt everyone responding that fewer keystrokes are possible: of course, nothing stops you from being even more economical with keystrokes by creating your own procedure sp_x that calls sp_transactions:
1> create proc sp_x 
2> as
3>   exec sp_transactions
4> go
Indeed, many DBAs have written their own procedures that provide a combination of what sp_who, sp_lock and sp_transactions do, thus giving a fuller picture in one glance, typically tailored to the specific situation of the DBA's system, and the DBA's preferences.
Also, if you're a 'sqsh' addict like me, you can create your own 'sqsh' command aliases which don't even require typing the 'go' command (see http://www.sypron.nl/sqsh for some examples).

But in a plain-vanilla ASE environment, sp_transactions is the shortest command you can get.

How would you rate this ASE quiz question?
Great    OK    Average    Boring     


July 2009
You're probably familiar with the special significance of the "sp_" prefix in stored procedure names: when executing such a procedure, if it does not exist in the session's current database, it will be searched for in the sybsystemprocs database (and, as a remnant of older days, if it doesn't exist there either, it will be searched for in the master database).

Can this be applied beyond stored procedures?


Answer:
Try the following:
1> use sybsystemprocs
2> go
1> create table sp_mytab (a char(20))
2> insert sp_mytab values ('from sybsystemprocs')
3> go

1> use master
2> go
1> create table sp_mytab (a char(20))
2> insert sp_mytab values ('from master')
3> go

1> use my_db
2> go
1> create table sp_mytab (a char(20))
2> insert sp_mytab values ('from my_db')
3> go
Now run: select * from sp_mytab in my_db and also in another database like tempdb. And also run this: select * from yet_another_db..sp_mytab. What do you get? (make sure the statement cache is off before you run these queries!)
What you'll be seeing is that the table being selected from will be found in the sybsystemprocs database if it doesn't exist in the current database; and if you'd drop sybsystemprocs..sp_mytab, the table would be found in the master database. In other words, the mechanism of resolving object names starting with "sp_" applies not only to stored procedures but also to tables. And indeed, to views and SQL functions as well.
However, with the exception of SQL functions (more on that later), it is not necessarily a good idea to rely on this behaviour.

First, note that the ASE documentation describes this behaviour only for stored procedures starting with "sp_", so formally speaking, using it for other types of object is unsupported (from a practical perspective I'd be pragmatic about SQL functions, see below).
Creating tables named "sp_something", and placing them in sybsystemprocs, could be 'handy' for cases where you'd like to make particular data accessible from within all databases. However, it is doubtful whether sybsystemprocs is the right place to put any application data: the proper way of doing this would be to put these objects in a normal user database (which is part of your backup and maintenance cycles), and create views to these tables in every other database that needs access. But if creating those views in every database is cumbersome, then using this "sp_" trick could make life a bit easier.
Note that there is a risk, however: if you create a table in an application database with the same name (e.g. sp_mytable) as one that already exists in sybsystemprocs, queries in that database will use the local table. The schemas of those two tables (column names & data types) had better be identical, since an application might end up selecting from a different table depending on the database context it happens to be in. If those tables have different schemas, the application could malfunction.
It is also possible to create views named "sp_myview" in sybsystemprocs. This is perhaps not such a useful thing to do since the table(s) which the view selects from must exist when the view is created.

As mentioned, the same mechanism applies to SQL functions (introduced in 15.0.2) whose name starts with "sp_". In this case, there is actually a potentially useful application, since you can place useful SQL functions in sybsystemprocs and call them from any database, much in the same spirit as system stored procedures. Good candidates are functions performing some utility task, like custom output formatting. Such SQL functions combine ideally with the "sp_" mechanism.
In order to distinguish such functions from stored procedures, I recommend using the "sp_f_" prefix.
Example:
1> use sybsystemprocs
2> go
1> create function sp_f_my_datefmt (@d datetime)
2> returns char(11)
3> as
4>   -- formats a date value as dd-mmm-yyyy
5>   return str_replace( convert(char(11), @d, 106), ' ', '-')
6> go
1> use my_db
2> go
1> select dbo.sp_f_my_datefmt(getdate())
2> go

 -----------
 12-Jul-2009

(1 row affected)
SQL functions can also select from tables. However, when using this "sp_" trick, the table being selected from must exist in the database from which the SQL function is called, which is unlikely to be the case for user tables (since the SQL function can be called from *any* database). Selecting from system tables should be no problem, since these always exist anyway (the same restriction applies to the "sp_" system stored procedures, which select either from system tables or from tables in the master database).

Please note that, formally, this trick for using a SQL function with "sp_..." in its name is not supported, since this is documented only for stored procedures. Nevertheless, it happens to work perfectly OK for SQL functions too, and I've been using it without any issues.
However, you should keep in mind that it is not impossible that this might actually change in some future ASE release, and an "sf_" prefix might then be introduced to have the same effect for SQL functions. Nothing is confirmed about whether and when that would happen, but if it does, SQL functions relying on the "sp_" prefix might need to be renamed to start with "sf_" instead. Until that time, however, "sp_" should work fine for functions.
(Update March 2013: I don't think "sf_" will be supported soon in ASE. Therefore I recommend using "sp_f_" for SQL functions)

The mechanism we're seeing in action with this "sp_" prefix has been part of ASE since the very beginning: it already worked this way when I first used Sybase SQL Server v.3, back in 1989. That may mean it won't likely be abandoned soon, though there is of course no guarantee.

Note that the "sp_" mechanism does not work for user-defined functions that are based on a Java class: these functions must always exist in the current database.
Also, it does not work for triggers, which is not a surprise since these are intimately tied to a particular table.

How would you rate this ASE quiz question?
Great    OK    Average    Boring     


June 2009
Version 15.0.2 of ASE comes with the new feature of 'application tracing'. Understanding this feature is a must for every DBA, because it provides a simple mechanism to figure out what your client applications are actually doing: apptracing lets you capture the SQL submitted to the ASE server by a specific client connection, and writes it into a file.
Basically, it works as follows. Say there's an application connected to your ASE server with session number (spid #) 54, and you want to see what SQL it is actually executing. You run the following SQL:
  set tracefile '/tmp/spid54.trace.out' for 54
  set show_sqltext on
... and from here on, any SQL executed by the application on session 54 will be captured into the file /tmp/spid54.trace.out. Session 54 itself will not be affected in any way -- so this is a nice way of snooping on other sessions (yes, you need sa_role for this).

However, there's a downside: you need to wait until the application has connected to ASE before you can start application tracing for the session with the SQL shown above. This means you'll miss any SQL executed immediately after the connection is made, and for applications that very quickly disconnect again, you won't be able to use apptracing at all.

How can you overcome this limitation?

Answer:
When looking at the documentation of the application tracing feature, it may not be immediately apparent that you can also enable tracing for each session itself, as it connects to the ASE server: this can be done by using a login trigger (see here for more details on login triggers). I have found this usage of the apptracing feature particularly useful, since it provides a quite complete picture of the application's behaviour.
The login trigger would look something like this:
create proc mylogintrig
as
  declare @dir varchar(200)
  set @dir = "/tmp/my_trace_dir/"

  -- pick up the program name
  declare @progname varchar(30)
  select @progname = program_name 
  from master..sysprocesses
  where spid = @@spid

  -- what's the time?
  declare @now varchar(30)
  set @now = convert(varchar, getdate(), 112) + '_' + 
             str_replace(convert(varchar, getdate(), 108), ':', NULL) 

  -- compose a unique file name...
  declare @fname varchar(150)
  set @fname = @dir + 'apptrace_' + @now + '_' + 
               suser_name() + '_' + @progname + 
               '_spid' + convert(varchar, @@spid) + '.txt'

  -- and use it...
  set tracefile @fname
  set show_sqltext on        -- to display SQL text
  set showplan on            -- to display query plan
  set statistics plancost on -- to display I/O counts etc.
go
Most of the SQL in this procedure is about generating a file name that is guaranteed to be unique -- this is essential because if the specified file name does indeed exist, the set tracefile command will fail with an error.
The trace files being generated (in directory /tmp/my_trace_dir/) would look something like this -- the login date and time, the login name, the program name and the spid number are all very clear:
apptrace_20090614_122901_batchuser_isql_spid33.txt
apptrace_20090614_124114_sa_sqsh_spid34.txt
...etc...
To configure the procedure shown above as a login trigger, you need to use either sp_modifylogin (for a specific login) or sp_logintrigger (for a login trigger applying to all logins). And don't forget to grant execute permission on the procedure, or the login trigger may not be executed! More details are here.
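As a sketch, configuring the procedure above for the hypothetical login 'robv', or globally for all logins, would look like this; double-check the exact option and parameter names against your ASE version's manuals:

```sql
-- attach the login trigger to one specific login...
exec sp_modifylogin robv, 'login script', mylogintrig
go

-- ...or set it as the global login trigger for all logins
exec sp_logintrigger 'set', mylogintrig
go

-- without execute permission the trigger may silently not run
grant execute on mylogintrig to public
go
```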

So, with this login trigger in place, you'll see a new trace file getting generated for every new connection into the ASE server. Since the login trigger is executed before the first SQL statement is sent to the server, you can be sure that all SQL statements in the session are captured in the trace file. Very useful indeed -- this trick with a login trigger should be in every DBA's toolbox.

But apart from this obvious benefit, I have found another interesting advantage of using this form of application tracing. Because a new trace file is generated for every time someone connects to the ASE server, you get a clear picture of who is logging in how often.
A few months ago I was involved in a performance tuning exercise for a commercial third-party application running on ASE (which shall remain anonymous here), and application tracing turned out to be a crucial tool to pinpoint the cause of a particular performance problem: over a period of about 45 minutes, we observed over 30,000 trace files being created -- meaning that the application had connected & disconnected to/from ASE equally many times.
It appeared that the client application was using a highly inefficient algorithm: in essence, it connected to ASE, retrieved one row (this could be seen from the contents of the individual trace files) and then disconnected. It did the same for the next row, and the next, and so on. Simply connecting once and retrieving all 30,000 rows would have been much faster, but that's not how the application was built.
The overhead of all those connect and disconnect actions was having a significant impact on the performance of that part of the application. With a simple test, we determined that, for this particular system, a connect from the application to ASE, followed immediately by a disconnect (i.e. without executing any SQL at all), took around 60 milliseconds. Doing that 30,000 times means the system was wasting 1800 seconds, or 30 minutes, on just connecting/disconnecting. Given that the total time for this part of the application was about 45 minutes, it was pretty clear where the opportunities for improving performance were.
It was very useful to obtain this information, but as often happens with performance issues, the challenge is to convince the other side of the table that the issue is really in their application, and not in ASE performing badly. Being able to show 30,000+ trace files with virtually identical contents provided undeniable and very convincing proof of what was going on, and this greatly simplified the discussion.

How would you rate this ASE quiz question?
Great    OK    Average    Boring     


May 2009
Back in November 2003, I posted a quiz question about finding the ASE configuration parameters that have been set to non-default values. Before proceeding, you may want to check out that question and the answer, which was indeed the best way of doing it in those days (the alternative was to go through the .cfg file and find those lines which were not set to 'DEFAULT').
Finding only the non-default config settings matters: these are typically a small subset of the 300+ configuration parameters, and as a DBA it is precisely the settings that deviate from the default that you want to know about.

Some 5.5 years later, there's now a much better way of figuring out those non-default configuration parameters. How?

Answer:
If you checked out that November 2003 question, you've seen that the solution is a join between the system tables syscurconfigs and sysconfigures. That was indeed a very useful query to have in your toolbox (in fact, one of the reasons for putting that quiz question online was that I could easily get hold of the query myself when working on customer assignments).
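For reference, that pre-15.0.2 query looked roughly like this (a sketch from memory; the exact column names may vary slightly between ASE versions):

```sql
-- Sketch of the classic 'non-default settings' query (pre-ASE 15.0.2).
-- Note: column names are reproduced from memory and may differ per version.
select name       = convert(varchar(35), b.name),
       config_val = a.value
from   master.dbo.syscurconfigs a,
       master.dbo.sysconfigures b
where  a.config = b.config      -- join on the config parameter number
and    a.value <> a.defvalue    -- keep only the non-default settings
```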

However, it would make sense to let sp_configure report these non-default settings itself, wouldn't it?
Fortunately, that's exactly what has happened in ASE 15.0.2 IR. As of that version, you can run sp_configure nondefault, which will list out, well, the configuration parameters set to non-default values:
   
1> sp_configure nondefault
2> go
Parameter Name               Default Memory Used Config Value Run Value Unit             Type
---------------------------- ------- ----------- ------------ --------- ---------------- -------
SQL batch capture            0       0           1            1         switch           dynamic
cis rpc handling             0       0           1            1         switch           dynamic
deadlock pipe active         0       0           1            1         switch           dynamic
deadlock pipe max messages   0       46          100          100       number           dynamic
enable file access           0       0           1            1         switch           dynamic
enable monitoring            0       0           1            1         switch           dynamic
enable rep agent threads     0       0           1            1         switch           dynamic
enable semantic partitioning 0       0           1            1         switch           dynamic
enable xml                   0       0           1            1         switch           dynamic
errorlog pipe active         0       0           1            1         switch           dynamic
errorlog pipe max messages   0       55          100          100       number           dynamic
identity reservation size    1       0           100          100       number           dynamic
max SQL text monitored       0       54          2048         2048      bytes            static
max memory                   49152   104000      52000        52000     memory pages(2k) dynamic
object lockwait timing       0       0           1            1         switch           dynamic
per object statistics active 0       0           1            1         switch           dynamic
procedure cache size         7000    31753       15000        15000     memory pages(2k) dynamic
process wait events          0       0           1            1         switch           dynamic
sql text pipe active         0       0           1            1         switch           dynamic
sql text pipe max messages   0       29          100          100       number           dynamic
statement cache size         0       211         100          100       memory pages(2k) dynamic
statement pipe active        0       0           1            1         switch           dynamic
statement pipe max messages  0       12          100          100       number           dynamic
statement statistics active  0       0           1            1         switch           dynamic
wait event timing            0       0           1            1         switch           dynamic

(1 row affected, return status = 0)

There could hardly be a more convenient way to get this information, and it removes a lot of unnecessary noise as well. So next time you contact Sybase tech support and they request the output of sp_configure, please just send the output of sp_configure nondefault (you *are* already running on 15.0.2, aren't you?) -- this will make things easier to handle for everyone.

If you were to dig into the SQL source code of sp_configure, you'd find a query similar to the one in the solution of the November 2003 question, and that's no coincidence (as a Sybase employee -which is my day job, as opposed to the nocturnal activities of writing books and editing this web site- sometimes you can help to push things into a useful direction...).



February 2009
When you're tuning individual queries for optimal performance, you're aiming at identifying the right combination of required indexes, statistics and SQL constructs so that the ASE query optimizer will generate the query plan that appears to execute fastest.
While that sounds pretty good in theory, in practice you can find yourself in the situation where simply forcing a particular index (with (index your_index_name) behind the table in the from-clause), or a join order (with set forceplan on), results in an obviously superior plan, yet you cannot get the darned optimizer to figure that out for itself (i.e. without forcing anything). Being pragmatic, it is often most cost-effective to simply use that forcing: performance problem solved, on to the next problem...
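For illustration, the two forcing constructs mentioned above look like this (the table and index names are made up for the example):

```sql
-- Forcing a specific index: the (index ...) hint follows the table name
-- in the from-clause ('orders' and 'ix_ord_cust' are hypothetical names):
select o.order_id, o.order_date
from   orders o (index ix_ord_cust)
where  o.cust_id = 42

-- Forcing the join order: with forceplan on, tables are joined in the
-- order in which they appear in the from-clause:
set forceplan on
select o.order_id, c.cust_name
from   orders o, customers c
where  o.cust_id = c.cust_id
set forceplan off
```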

While the short-term economics of this approach are undeniably favourable (is it really worth spending another day trying to find a solution *without* forcing that index?), the trouble with using query plan forcings is that they can turn against you later. The ASE query optimizer may well have been enhanced at some point, yielding a better plan for that query, if only you hadn't twisted the optimizer's arm by forcing part of the query plan.
This is the reason for the Sybase recommendation that all those forcings should be re-evaluated 'periodically', and in any case when upgrading to the next major ASE version (see here for example: 7th bullet from bottom). However, this recommendation is all too often ignored, since re-evaluating all those forcings is simply too much hassle.

Yet, the need to re-evaluate your forced indexes and join orders has in fact never been greater than when upgrading to ASE 15, where many classic do's and don'ts have changed, and indeed, those forcings may very well deny your queries the opportunity of running much faster with ASE 15's new algorithms like hash joins and hash-based sorting (to name just two out of many enhancements). This is where that short-term solution from a few years ago may be costing you potential performance gains in ASE 15.

The question: what is the fastest way to test the effect of removing all forcings from your SQL code in ASE 15?

Answer:
Re-evaluating all your decisions to override the optimizer with query plan forcings is, indeed, not the type of job most DBAs are waiting for. Assuming you actually know (or can find out) which forcings were applied to your applications' queries in the first place, you'd have to modify each query to remove the forcing, and then test whether the query runs better when you let the optimizer do its thing.
Still, there are no guarantees that removing the forcings will actually produce better query plans -- try justifying that effort to your management...

Sybase recognised that a more efficient approach to address this issue would be in everyone's interest, so ASE 15 comes with a new feature in this area: traceflags 15307 and 15308 (both were added in version 15.0.1).

Traceflag 15307, when enabled, has the effect of making the optimizer ignore all set forceplan statements server-wide, thus effectively removing the join order forcing. Similarly, traceflag 15308 ignores all index forcings, as well as all forcings for the degree of parallelism (and also for the prefetch I/O size and LRU/MRU strategy, but I reckon very few people have actually used these). Note that these traceflags have no effect on forcings through a (partial) abstract query plan (i.e. with a plan '(....)' clause at the end of the query) -- 15308 applies only to the forcings specified in the from-clause following the name of a table.

So, returning to our original problem of re-evaluating all query forcings -- things can be a lot simpler now: just run your application, then enable traceflags 15307 and 15308, and run your application again. You'll know quickly enough whether it's worth looking deeper into specific queries or whether things are best left as they were (in case you need to dig deeper, the trial-and-error process around finding the best plan starts again, but in ASE 15 you'll have a big head start compared with earlier ASE versions).
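In practice, such a test run could look like this (a sketch; the traceflags are enabled server-wide with dbcc traceon):

```sql
-- Baseline run first: the application executes with all forcings in place.
-- Then enable both traceflags so the optimizer ignores the forcings:
dbcc traceon(15307)   -- ignore 'set forceplan' join-order forcings
dbcc traceon(15308)   -- ignore index/parallelism/prefetch/LRU-MRU forcings

-- ...re-run the application and compare response times...

-- Switch back to the original behaviour afterwards:
dbcc traceoff(15307)
dbcc traceoff(15308)
```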

Please note that traceflags 15307 and 15308 are an all-or-nothing deal: they apply server-wide and cannot be limited to a specific user session or stored procedure.
Lastly, also note that the traceflags apply only when the optimizer is active -- this means that you should ensure that all query plans are recompiled when switching these traceflags on or off, or existing plans (in the procedure cache or statement cache) will simply be re-used without recompilation (a server-wide recompilation can be achieved by rebooting ASE, as well as by running sp_recompile on all your database tables).
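Running sp_recompile against all tables can be scripted with a simple cursor loop, along these lines (a sketch, to be run in each user database):

```sql
-- Flag all query plans referencing user tables for recompilation.
-- Sketch only; run this in every database of interest.
declare @tab varchar(255)

declare tab_cur cursor for
    select name from sysobjects where type = 'U'  -- user tables only

open tab_cur
fetch tab_cur into @tab
while @@sqlstatus = 0   -- 0 = fetch succeeded in ASE
begin
    exec sp_recompile @tab
    fetch tab_cur into @tab
end
close tab_cur
deallocate cursor tab_cur
```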




 
 This document is located at www.sypron.nl/quiz2009a.html