Sybase ASE
All versions
Last updated: 02 December 2007
ASE Quiz Questions 2007
 
This page contains some quiz-type ASE questions to test your ASE knowledge.
More ASE quiz questions: 2013 - 2012 - 2011 - 2010 - 2009 - 2008 - 2007 - 2006 - 2005 - 2004 - 2003 - 2002 - 2001
For a randomly selected quiz question, click here.


December 2007
Read the following, then answer 'true' or 'false' without thinking too long about it:

"Since truncate table is a minimally logged operation, it should not be used in a production database: its minimally logged nature may make it impossible to recover the database from transaction log dumps"

Go here to check your answer.

October 2007
Let's assume a client application is performing a 10-million-row BCP-in into your biggest, billion-row table every day. As a DBA, you want to monitor the progress of this BCP job, but without logging into the client system and looking at client app log files (or, for the sake of this question, let's assume that no such client-side BCP log files are accessible to you).
How can you keep track of the progress of the BCP-in job, using only ASE-side functionality?
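
For concreteness, one conceivable ASE-side ingredient -- a sketch only, with made-up database and table names, and not necessarily the answer the quiz has in mind -- would be to take periodic snapshots of the table's row count via the row_count() built-in, which is far cheaper than a full count(*) scan on a billion-row table:
-- cheap row-count snapshot of the target table (names are hypothetical)
select row_count(db_id('mydb'), object_id('mydb..bigtab'))
go
-- repeating this at intervals shows how many rows the BCP-in has added so far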

Go here for the answer.

September 2007
Some time ago, someone I met was trying to solve a problem by creating a view against an application table. I'll spare you the details except one: the view had to contain a column reflecting the session's ID number ('spid'). The obvious solution was to include the session-specific global variable @@spid in the view definition as follows:
create view v
as
  select spid=@@spid,
         my_col
  from my_table
Unfortunately, the obvious solution doesn't work here, since variables are not allowed in a view definition:
Msg 7351, Level 15, State 1
Server 'SYBASE', Line 3
Local or global variables not allowed in view definition
Despite this limitation, is it possible to still create a view with the desired functionality?

Go here for the answer.

July 2007
With some regularity, I get emails from Sybase users admitting they've been manually changing system tables -- and now their ASE server doesn't run as well as before. Would I perhaps have a suggestion to fix things?
Now, I don't blame anyone for playing with system tables and getting it wrong -- I've done it myself, and it can be most instructive. However, I'm glad I've always done such things on my own test server only -- unlike some of those customers.
For example, one customer asked me to request that Sybase implement a feature making it impossible to manually delete rows from system tables. When I pointed out that the config parameter 'allow updates to system tables' does exactly that, he explained this was not good enough for them: all their users logged in as 'sa' (thus having both sa_role and sso_role), 'allow updates to system tables' was always enabled, and apparently those users sometimes did some bad things. Obviously though, this customer's problem had to be solved at a different level than by adding some ASE security mechanism: there's only so much that product features can protect against.

Anyway, this month's quiz question looks at two things customers have asked me about:
- First, let's say someone deletes the 'guest' or 'dbo' user from the 'master' database
- Second, let's say all rows in master..syslogins are also deleted
In both cases, what would be the consequences and how would you repair the damage?
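
For illustration only (do not run this on a server you care about), the damage described above boils down to something like:
sp_configure 'allow updates to system tables', 1   -- requires sa_role and sso_role
go
delete master..sysusers where name = 'guest'       -- scenario 1 (similar for 'dbo')
delete master..syslogins                           -- scenario 2: wipe all login rows
go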

Go here for the answer.

May 2007
This question is about string manipulation. Not everyone may have an immediately pressing problem to solve with it, but the trick behind the solution is worth having seen. Anyway...

When you need to determine whether a string contains a particular substring, you can use the charindex() built-in function. However, let's say you need to determine whether a string contains that particular substring a specific number of times, for example, twice or thrice (I've been waiting for an opportunity to use that word...). Doing this with charindex() is possible but gets messy very quickly. Can this be done in a better way?
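
As a starting point -- a sketch of one possible direction, and not necessarily the trick the answer page has in mind -- the occurrences can be counted by removing the substring with str_replace() and comparing string lengths:
declare @s varchar(100), @sub varchar(10)
select @s = 'abcXXdefXXghi', @sub = 'XX'
-- str_replace() with NULL as the third argument removes all occurrences of @sub;
-- the length difference divided by the substring length is the number of occurrences (2 here)
select (char_length(@s) - char_length(str_replace(@s, @sub, null))) / char_length(@sub)
go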

Go here for the answer.

April 2007
Let's say you're BCPing a file into a table, but you're getting an error like the following:
% bcp mydb..mytab in data.bcp -Umylogin -Psecret -SSYBASE -c 

Starting copy...
1000 rows sent to SQL Server.
2000 rows sent to SQL Server.
3000 rows sent to SQL Server.
CSLIB Message:  - L0/O0/S0/N36/1/0:
cs_convert: cslib user api layer: common library error: The result is 
truncated because the conversion/operation resulted in overflow.
4000 rows sent to SQL Server.
5000 rows sent to SQL Server.
[...]
The question: how do you find out which input record in the file is causing this message?
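
One way to start narrowing it down -- a sketch based on the row counters printed around the message above, and not necessarily the intended answer -- is to re-copy just that slice of the file with bcp's -F (first row) and -L (last row) options, halving the range until the offending record is isolated:
% bcp mydb..mytab in data.bcp -Umylogin -Psecret -SSYBASE -c -F 3001 -L 4000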

Go here for the answer.

March 2007
Let's say you need to generate some test data. For example, you need 1 million rows of simulated trading data with a trading date spread evenly over, say, the last 5 years. How can you generate such data quickly?
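
A minimal sketch of one possible approach (table and column names are made up, and this is not necessarily the method the answer page describes): build a numbers table by repeated doubling with isql's 'go N' repeat count, then derive a trade date from the row number so the dates cycle evenly over roughly the last 5 years:
create table #nums (n numeric(10,0) identity, dummy char(1) null)
go
insert #nums (dummy) values (null)
go
insert #nums (dummy) select dummy from #nums    -- doubles the row count each time
go 20                                           -- 2^20 = 1,048,576 rows
select trade_date = dateadd(dd, -(convert(int, n) % 1825), getdate()),
       qty        = convert(int, n) % 1000
into   trades                                   -- requires the 'select into' db option
from   #nums
go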

Go here for the answer.

February 2007
As you no doubt know, statistics that accurately represent the actual data distribution in a column are key to ensuring good performance in ASE. The accuracy of a histogram (the form in which those statistics are stored) can often be improved by increasing the number of histogram 'steps' -- a 'step' being a data point in the histogram. The idea is simple: the more data points, the greater the likelihood that the histogram shows an accurate picture of the actual data.

For a pre-15.0.1 ESD#1 ASE server with default configuration settings, all histograms have 20 steps (or fewer). Let's assume we want to increase the number of steps to 500 for all currently histogrammed columns in a particular table. How do you achieve this?
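
The basic building block is the 'using N values' clause of update statistics, shown below for a single, made-up column; the harder part of the question is presumably covering exactly those columns that currently have a histogram:
-- request a 500-step histogram for one specific column (names are made up)
update statistics my_table (my_col) using 500 values
go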

Go here for the answer.

January 2007
In ASE 15, you can control the types of join the optimizer is allowed to choose. For example, when a query uses a hash join but you want to forbid hash joins (say, because you want to compare performance with a merge join or a nested-loop join), you can do so by first running set hash_join 0 (in 15.0.1 and later, you can also run set hash_join off): when the query is then re-run, it will no longer choose a hash join.
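
In concrete terms, the session-level switch described above looks like this:
set hash_join 0      -- pre-15.0.1 syntax
go
set hash_join off    -- allowed in 15.0.1 and later
go
-- re-running the query in this session should now avoid a hash join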

Recently I was doing exactly this while analysing query performance, but to my surprise, the query kept using a hash join. (NB: this was not a typing error -- set hash_join 0 had been executed correctly, in a separate batch, before resubmitting the query.) Also, when setting the optimization goal to allrows_mix, the query still used a hash join.
So although session-level settings forbidding hash joins were in effect, they appeared to have no effect: the query used a hash join anyway.
Why did this happen?

Go here for the answer.




 
 This document is located at www.sypron.nl/quiz2007.html