Sybase ASE
All versions
Last updated: 10 July 2005
ASE Quiz Questions: answers 2005
 

December 2005
A stored procedure p1 is reverse-engineered, for example with ddlgen, Sybase Central or sp_helptext. The stored proc is then dropped and the reverse-engineered SQL text is executed. What is the name of the object that has now been created?
(we're assuming the stored proc's SQL text was not encrypted with sp_hidetext)

Answer:
You would probably expect a procedure named p1 to be created, since that was the name of the object being reverse-engineered. However, surprising as it may sound, the created procedure may well be named differently.
Consider the following sequence:
create proc p2
as print 'Hello!'
go
sp_rename p2, p1
go
When procedure p1 is now reverse-engineered, we will get the SQL text with which p2 was originally created. When executing this SQL, we get a procedure named p2 -- and not p1 as may have been expected.
The technical background of this problem is that the SQL text with which a compiled object is created is stored in syscomments at object creation time. When the object is later renamed, this is not reflected in the saved SQL text.
Note that this problem may also occur for views, rules, defaults and triggers, since these object types all have their SQL text stored in syscomments.
Also note that tools like Sybase Central and ddlgen may add additional SQL statements to the reverse-engineered SQL text featuring the original object's name.

You may wonder whether this could ever be relevant. It certainly can be -- I stumbled across this one during an ASE migration/upgrade years ago, where the customer did not trust the accuracy of their stored-proc creation scripts, so reverse-engineering of the compiled objects was chosen instead. During tests, it turned out that this problem occurred for one particular procedure. Fortunately, this was discovered in time.

Is there any way to tell that you may be hitting this problem? Unfortunately, there is no reliable way to detect this situation beforehand, since you cannot tell from sysobjects alone that an object has been renamed. I can see two ways of approaching this problem. First, the difficult and clumsy one: try to apply some programmatic intelligence to the SQL text extracted from syscomments, to check whether the name in that text matches the actual object name -- but you'd need to take things like line breaks and comments into account...
The second option might be more practical: just go ahead and create the object from the reverse-engineered SQL. Then try to figure out the name of what you've just created by querying sysobjects for a very recently created object of a certain type, and check whether its name is what you expect. You may want to rename the original object first, to some name that is unlikely to ever be used, and only drop the original object when the recreated object appears to have the expected name (for interested readers, check out how this approach is used in my proc sp_recreate).
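To illustrate that second approach, here's a minimal sketch (the one-minute window is an arbitrary choice for this example, and the expected name 'p1' is assumed):

-- ...after executing the reverse-engineered SQL text:
select name, crdate
from sysobjects
where type = 'P'                           -- procedures only
and crdate >= dateadd(mi, -1, getdate())   -- created in the last minute
order by crdate desc

-- if the most recently created procedure is not named 'p1',
-- the original object was probably renamed after creation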


November 2005
ASE currently does not support timezone attributes in date/time datatypes. How can you still determine the timezone in which your ASE server's host is located?

Answer:
There are two ways to determine the timezone of the ASE host's system clock.

Option 1: Use xp_cmdshell to run OS commands (on Unix, run the 'date' command for example), and parse the results to pick up the time zone. This can be used in ASE 11.5 or later. Go here for examples of using xp_cmdshell with its results imported back into ASE.
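As a minimal example (assuming xp_cmdshell has been enabled; the exact output format depends on the OS):

exec xp_cmdshell 'date'
go

On most Unix systems, the output contains a timezone abbreviation (such as CET or EST) that can be parsed out.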

Option 2: ASE 12.5.3 and later support the built-in function getutcdate(). This function returns the date and time of the system clock expressed as UTC time (UTC = 'Universal Time Coordinated' = GMT = Greenwich Time = the international 'default' timezoneless time). In contrast, getdate() returns the system clock's time corrected for the local time zone.
Armed with this knowledge, you can use these functions to calculate the difference between the local time and GMT. Your first attempt might be a query like this:
select datediff (hh, getutcdate(), getdate())
While the principle behind this query is correct, there are actually two problems with it.

First, this query returns the time difference only as an integer multiple of hours. However, not all timezones have full-hour offsets to GMT. For example, the local time zone in India is defined as GMT + 05:30 hours, while folks in Kathmandu, Nepal, live on GMT + 05:45. In other words, the query above cannot be applied universally since it won't return the accurate offset for all time zones.
(for more information about timezones, see -for example- www.worldtimezone.com)

The second problem is that the above query implicitly assumes that getutcdate() and getdate() will always return the same number of milliseconds when they are called at the same time. Although that does seem to be the case most of the time, there is no guarantee that it always will be -- the two values may differ by a few milliseconds.
For example, in the GMT+1 timezone, imagine there's a 3 millisecond difference between both functions, with getutcdate() returning '10:22:46.016' and getdate() returning '11:22:46.013'. It is left as an exercise for the reader to confirm that the number of hours returned by the datediff() query above will be 0 instead of 1.

To work around these problems, the time difference should not be calculated in hours but in a smaller unit, such as minutes:
select datediff (mi, getutcdate(), getdate())
NB: don't use 'mm' for the datepart since that'll get you the difference in months (which will always be 0 anyway).
This query may still return 59 instead of 60 minutes due to the possible millisecond difference, but it will be clear what the time zone offset is.
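Since real-world timezone offsets are (virtually) always multiples of 15 minutes, you can round the minute difference to the nearest quarter hour to absorb such a stray minute. A minimal sketch:

declare @offset_min int
select @offset_min = datediff(mi, getutcdate(), getdate())
-- round to the nearest multiple of 15 minutes
select 'UTC offset in minutes' = convert(int, round(@offset_min/15.0, 0)) * 15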

(NB2: Time is a funny concept. When writing this question, I wondered what timezone the South Pole would be in. And how should one handle the international date line once you're down there? I'll let you know when I've figured it out...)


October 2005
How many rows may be retrieved by the select statement in each of these two SQL code samples?
-- sample 1
create table t1  (k int unique, ....)
insert t1 values (rand()*1000, ....)
insert t1 values (rand()*1000, ....)
insert t1 values (rand()*1000, ....)
[...1000 additional inserts...]
select * from t1 where k = rand()*1000

-- sample 2
create table t2  (k varchar(36) unique, ....)
insert t2 values (newid(), ....)
insert t2 values (newid(), ....)
insert t2 values (newid(), ....)
[...1000 additional inserts...]
select * from t2 where k = newid()

Answer:
First, let's look at what the two samples have in common. Both samples insert rows while attempting to supply a unique key value for each row; finally, one of the inserted rows is selected at random by supplying a random value as the search argument.

Sounds good, right? Well... let's look a bit closer. In sample 1, the inserted key value is an integer between 0 and 1000, based on rand() (which produces a random float value between 0 and 1). Since we've inserted 1000 rows, there's a reasonable chance that the search argument in the select statement matches the key value of one of the inserted rows (note that at most one row can be retrieved, since a unique constraint exists). Therefore, 0 or 1 rows may be retrieved.

Sample 2 looks essentially identical to sample 1, except that newid() is used instead of rand() to generate unique key values. However, the select statement in sample 2 will never retrieve a row. Why?
The reason is that newid() generates UUIDs (Universally Unique Identifiers), also known as GUIDs (Globally Unique Identifiers). As defined by the IETF, such a value is guaranteed to be different from every value generated before. In other words, values generated by newid() are eternally unique by definition.
As a result, the invocation of newid() in the select statement will produce a value that has never been generated before -- therefore, none of the rows in the table will match and 0 rows will always be returned.

So, while both rand() and newid() involve randomness, the essential difference is that rand() can produce duplicate values while newid() cannot.
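If you want to see rand()'s duplicates for yourself, here's a quick experiment (a sketch; it uses the isql 'go N' repeat feature discussed in the May 2005 question further down, and it deliberately omits the unique constraint so that duplicates can actually be stored):

create table #r (k int)
go
insert #r values (convert(int, rand()*1000))
go 1000
-- list the values that were generated more than once
select k, occurrences = count(*)
from #r
group by k
having count(*) > 1
go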

NB: newid() was introduced in ASE 12.5.0.3.


September 2005
Recently I had a discussion about the merits of holdlock (a.k.a. transaction isolation level 3 for a single select statement). Some legacy code had surfaced, basically looking like this:
select * 
from MyTable holdlock
where ...some-condition...
go
The question: does holdlock make sense here?

Answer:
The function of holdlock is to maintain all shared locks until the end of the transaction. This enforces ANSI transaction isolation level 3, which guarantees that when you run the same select statement twice, you'll get the same result set (i.e. another user won't be able to change or add any rows in the meantime).

So, for holdlock to make any sense, you'd need to use it inside a transaction, typically something like this:
begin tran

select * from MyTable holdlock where ...some-condition...

...other processing, modifying rows from the earlier result set...

commit
In the example that started off this quiz question, no transaction was present, so one might be tempted to conclude (as I did initially) that using holdlock here is simply pointless. Why? Because the whole point of isolation level 3 is to ensure that rows you have read in a statement will not be changed until further notice (i.e. until the end of the transaction).

OK, let's hold here for a second... There's actually more to it than you might think.
Who says there is no transaction involved? We've implicitly assumed that the code snippet was to be run in unchained transaction mode (the ASE default). If, on the other hand, this code were run in chained mode (the ANSI default), the picture suddenly changes: in that case, the select statement would start a new transaction if none was active, and then holdlock might make more sense. Also, the code could be run by isql -Y, which switches on chained mode for that session. So things may not be as simple as we thought...
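You can see the difference for yourself with a small sketch (MyTable and its column KeyCol are hypothetical here):

set chained on
go
select * from MyTable holdlock where KeyCol = 1
select @@trancount  -- returns 1: the select has implicitly opened a
                    -- transaction, so holdlock keeps its shared locks
                    -- until the commit below
commit
go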

I suppose my main point here is that there can be more to a simple piece of code than you'd expect: that piece of legacy code was in fact supposed to run in unchained mode, but I'd just like to show that it may be worth taking a step back and considering other aspects.

On a side note, I would recommend always being suspicious of holdlock clauses or the use of non-default transaction isolation levels: this is complex stuff that tends to be misunderstood and used either incorrectly or for the wrong reasons (holdlock often indicates a desperate attempt to solve problems resulting from bad database design -- albeit at the expense of concurrency). Also see the quiz question for April 2004.

July 2005
I always have trouble remembering the parameters for stored procedures like sp_addsegment, sp_addthreshold, sp_addalias etc. For example, sp_addsegment requires the segment name and database name as first and second parameters, respectively, while sp_addthreshold wants these parameters the other way around. What's the quickest way of finding out which parameters these procedures expect?

Answer:
Just run the stored proc without parameters, and ASE will print an error message telling you which mandatory parameters are expected. The parameter names usually indicate quite clearly what the parameter is for:
1> sp_addsegment
2> go
Msg 201, Level 16, State 2, Procedure 'sp_addsegment'
Procedure sp_addsegment expects parameter @segname, which was not supplied.
Msg 201, Level 16, State 2, Procedure 'sp_addsegment'
Procedure sp_addsegment expects parameter @dbname, which was not supplied.
Msg 201, Level 16, State 2, Procedure 'sp_addsegment'
Procedure sp_addsegment expects parameter @devname, which was not supplied.
(return status = -6)
1> sp_addthreshold
2> go
Msg 201, Level 16, State 2, Procedure 'sp_addthreshold'
Procedure sp_addthreshold expects parameter @dbname, which was not supplied.
Msg 201, Level 16, State 2, Procedure 'sp_addthreshold'
Procedure sp_addthreshold expects parameter @segname, which was not supplied.
Msg 201, Level 16, State 2, Procedure 'sp_addthreshold'
Procedure sp_addthreshold expects parameter @free_space, which was not supplied.
Msg 201, Level 16, State 2, Procedure 'sp_addthreshold'
Procedure sp_addthreshold expects parameter @proc_name, which was not supplied.
(return status = -6)
Note that only mandatory parameters are listed. These are typically the first parameters in the argument list, with optional parameters coming last.

Note that a different, more user-friendly, approach has been used in a number of new sp_* procedures that were introduced in recent years. Procedures like sp_tempdb (12.5.0.3) and sp_dbextend (12.5.1), which provide quite extensive functionality, will print a section with 'usage' information when called without parameters or with 'help' as the first parameter. The same approach has been followed for sp_encryption (12.5.3a), and sp_metrics (15.0).
As of 12.5.1, sp_monitor will also print usage information when called with 'help': for reasons of backward compatibility (sp_monitor already existed in version 3 of ASE!), the default behaviour without parameters -displaying some not-so-useful 'cpu busy'-type statistics- has not changed. BTW, the new version of sp_monitor -with parameters- is actually very useful, since it reports things like the top-10 I/O-intensive queries; it gets its information from the MDA tables.

As various readers pointed out, you may also run sp_help to find a procedure's parameters. That will work, but has a disadvantage: you must change to the sybsystemprocs database first -- either with use sybsystemprocs or by running sybsystemprocs..sp_help sp_addsegment. For some reason, I always seem to make a typing error in 'sybystemprocs' -- oops 'sybssytemprocs' (well, you know what I mean...), so I try to avoid this. Even without spelling errors, these are more keystrokes than just running the stored proc without parameters.

In case you have the sybsyntax database installed, you could run sp_syntax sp_xxx to get the parameters of sp_xxx. Unfortunately, sybsyntax is not maintained at the same level as the ASE documentation, so you may find that some stored procedures are not included.

Lastly -and as a shameless bit of self-promotion- make sure you have a copy of my handy ASE Quick Ref Guide, which describes the parameters of all system procedures -- and a lot more.

June 2005
This is a problem that took me quite some time to figure out at a customer site a while ago.

The customer had a trading system with two central tables in a Parent-Child relationship (let's call the tables 'Parent' and 'Child'; they're both row-locked). When a row was inserted into the Child, the values of some of the inserted columns had to be added to the Parent; this was done through an insert trigger on the Child.
Since multiple inserts of Child rows for the same Parent could happen at the same time, deadlocks occurred a bit too frequently. Changing the application was not an option, so a classic trick to force serialization was applied: the insert trigger first locks the Parent row with a dummy update and then performs the actual trigger work. The idea is that a second Child insert for that Parent row will hit the exclusive-row lock on the Parent row and be forced to wait until the first transaction completes.
Basically, the trigger code looked like this:
create trigger Child_insert_trigger on Child for insert 
as
begin
-- first, force serialization in case of multiple
-- Child inserts for the same parent
update Parent
set SomeColumn = P.SomeColumn
from Parent P, inserted i
where P.ParentKey = i.ParentKey

-- now do the real trigger work...
...
...
end
Admittedly, this won't win prizes for subtlety, but this method tends to work fine as far as forcing serialization is concerned. Sometimes this kind of (indeed somewhat heavy-handed) approach may be your only option to alleviate terrible deadlock situations.

Now, the problem: amazingly, even with this trigger-serialization code, deadlocks still kept occurring! These deadlocks always involved two sessions executing the Child_insert_trigger trying to lock the same Parent row to force serialization.
This did not seem logical: when looking at the locks taken out by the trigger (by putting a waitfor in the last part of the trigger, for example), we could clearly see the Parent row being locked exclusively before anything else happened.

So... the question is: how is it possible that deadlocks still kept occurring despite these serialization statements?

Here are two additional pieces of information.
First, assume no other activity takes place on these Parent and Child tables other than the inserts into Child.
Second, this is an example of the deadlock details:
Deadlock Id 128: Process (Familyid 0, Spid 59, Suid 3) was executing 
a UPDATE command in the trigger 'Child_insert_trigger'.
SQL Text: insert into Child (...)
Deadlock Id 128: Process (Familyid 0, Spid 48, Suid 3) was executing 
a UPDATE command in the trigger 'Child_insert_trigger'.
SQL Text: insert into Child (...)
Deadlock Id 128: Process (Familyid 0, Spid 48) was waiting for a 
'exclusive row' lock on row 8 page 1334285 of the 'Parent' table 
in database 4 but process (Familyid 0, Spid 59) already held a 
'shared row' lock on it.
Deadlock Id 128: Process (Familyid 0, Spid 59) was waiting for a 
'update row' lock on row 8 page 1334285 of the 'Parent' table in
database 4 but process (Familyid 0, Spid 48) already held a 
'update row' lock on it.

Deadlock Id 128: Process (Familyid 0, Spid 59) was chosen as the 
victim. End of deadlock information.


Answer:
The key to solving this problem lies in the types of locks mentioned in the deadlock info details. Since we're inserting a row and updating another row, and both tables are row-locked, exclusive-row and/or update-row locks on those tables would only be logical. However, the deadlock details also show a shared-row lock on Parent. The insert into Child and the update of Parent do not require shared-row locks. So where is this lock coming from?
The answer: there appears to be a foreign-key constraint between Child and Parent (i.e. alter table Child add constraint fkey foreign key (ParentKey) references Parent(ParentKey)).
When a row is inserted into Child, an implicit lookup is performed on Parent, which leads to a shared-row lock on the corresponding row in Parent. This shared-row lock is maintained until the end of the insert statement; since the trigger is executed as part of the insert, this means the lock is active during the trigger execution. This can lead to a deadlock when two sessions insert a row for the same Parent row concurrently.

Without the foreign-key constraint, the locks would be as follows, showing that serialization for the trigger will indeed occur:
Session 1: insert Child: exclusive-row lock on Child for the inserted row
Session 1: trigger update: update-row lock on Parent for the updated row
Session 2: insert Child: exclusive-row lock on Child for the inserted row
Session 2: trigger update: needs an update-row lock on Parent, but is blocked by the update-row (or exclusive-row) lock held by session 1
Session 1: trigger update: the update-row lock on Parent is upgraded to exclusive-row
Session 1: performs the other trigger actions and commits, releasing all locks (session 2 remains blocked meanwhile)
Session 2: the requested lock on the Parent row is now granted; session 2 proceeds


However, with the foreign-key constraint, the additional shared-row lock can spoil this scenario, potentially leading to the deadlock shown in the question:

Session 1: insert Child: shared-row lock on Parent for the row lookup due to the RI constraint, plus exclusive-row lock on Child for the inserted row
Session 1: trigger update: update-row lock on Parent for the updated row
Session 2: insert Child: shared-row lock on Parent for the row lookup due to the RI constraint, plus exclusive-row lock on Child for the inserted row
Session 1: trigger update: tries to upgrade the update-row lock on Parent to exclusive-row, but is blocked because session 2 now holds a shared-row lock on this row
Session 2: trigger update: needs an update-row lock on Parent, but is blocked by the update-row lock held by session 1
Both sessions are now blocked by each other: a deadlock has occurred -- ASE will abort one of the transactions.


Whether a deadlock will indeed occur according to this scenario depends partly on chance, since the timing of each of the above steps by the two sessions matters a lot: if the actions by both sessions overlap just a little bit differently, a deadlock may occur differently, or not at all. In contrast, without the shared-row lock due to the constraint, a deadlock will never occur, no matter how the actions of both sessions are timed.

It may seem a bit odd that an RI constraint can have such an effect on concurrency. The reason is simply that without the constraint, only update locks and exclusive locks are involved, and these are mutually exclusive. The shared lock from the constraint, however, will read through an update lock -- but the upgrade of that update lock to an exclusive lock will be blocked by the shared lock. As demonstrated, this can change the concurrency aspects quite a bit.

Note that it is actually quite reasonable to expect a foreign-key constraint to exist in this example: there is obviously a relation between Parent and Child, so it is logical to expect this relation to be guarded by a constraint. With hindsight, the attempt to force serialization by locking the Parent row was not the right approach for this particular problem. If serialization is really essential in this case, it might have been better to drop the foreign-key constraint and guard the relation by means of triggers.
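For example, a rough sketch of such a trigger-based guard (not production-ready: the error number and message are made up, guarding updates/deletes on Parent is omitted, and since ASE allows only one insert trigger per table, this check would in practice be merged into Child_insert_trigger):

create trigger Child_RI_trigger on Child for insert
as
begin
-- reject the insert if a ParentKey has no matching Parent row
if exists (select 1 from inserted i
           where not exists (select 1 from Parent P
                             where P.ParentKey = i.ParentKey))
begin
   raiserror 20001 "ParentKey does not exist in Parent"
   rollback transaction
end
end
go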

To see the details of the lock acquisition yourself, you can run the statements with traceflag 1217:
create table Child 
(ChildKey int, ParentKey int unique, 
SomeColumn int) 
lock datarows

create table Parent 
(ParentKey int unique, 
SomeColumn int) 
lock datarows

insert Parent values (200, 1)
insert Parent values (201, 2)
go

create trigger Child_insert_trigger on Child for insert 
as
begin
-- first, force serialization in case of multiple
-- Child inserts for the same parent
update Parent
set SomeColumn = P.SomeColumn
from Parent P, inserted i
where P.ParentKey = i.ParentKey

select "this is the trigger"
end
go

dbcc traceon(1217,3604)
go

insert Child values (1000, 200, 2)
go

[you may need to reconnect to ASE at this point since this is 
one of those traceflags that can sometimes get a client 
confused and the connection aborted]

alter table Child add constraint fkey 
foreign key (ParentKey) references Parent(ParentKey)
go

dbcc traceon(1217,3604)
go

insert Child values (1000, 201, 2)
go



May 2005
Suppose you need to generate 100000 rows of unique test data values for the following simple table:
create table test_tb (a int, b varchar(32))
The generated column values should all be unique (although the actual values themselves may be meaningless).

What is the easiest way to achieve this?

Answer:
The solution does not involve an ASE feature, but rather a feature of the default ASE client application isql. While everyone is used to typing 'go' to execute the preceding T-SQL command batch, surprisingly few ASE users know that the isql 'go' command can automatically be repeated a number of times. For example, 'go 123' causes the command batch to be sent to ASE 123 times.
With this isql feature (NB: sqsh supports this same feature), we can generate 100000 rows with only a little bit of typing:
insert test_tb values (rand()*100000000, newid())
go 100000
The values generated for both columns are quite likely to be unique: rand() delivers a float value between 0 and 1, so multiplying it by a large number will not quickly result in duplicates. However, there is always a possibility that duplicates occur -- see the solution below.
For the varchar column, newid() is very good at generating unique -though meaningless- strings of 32 characters. If your column is shorter than 32 characters, the resulting value may not always be unique. Note that newid() was introduced in ASE 12.5.0.3.

If uniqueness is really required for both columns, create a unique index on each column before inserting the rows.
In case you'd be inserting rows in larger batches -for example, with an insert...select statement from a different table- create the unique index with the ignore_dup_key clause, to avoid rolling back all inserted rows when just a single row is a duplicate (for more information about ignore_dup_key, see chapter 5 of my book 'Tips, Tricks and Recipes for Sybase ASE'; for more information about newid(), see chapter 8 of the same book).
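For example (a sketch; the index names are made up):

create unique index ix_a on test_tb(a) with ignore_dup_key
create unique index ix_b on test_tb(b) with ignore_dup_key
go

With these indexes in place, an insert of a duplicate key is cancelled with just a warning, while the remaining rows are inserted normally.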


April 2005
Suppose you need to retrieve a randomly selected row from a table (for example, for a workload generator). What is the simplest way to achieve this?

Answer:
In ASE 12.5.1+, the simplest way to pick a row at random is as follows:
set rowcount 1
select * from t order by newid()
go
Here, the rows in the table are sorted according to the value returned by the newid() function; newid() generates a unique and, from an ordering perspective, rather random value for each row in the result set. Due to the set rowcount 1, only the first row is returned.
Note that the absence of a where-clause means that this statement will do a table scan, so running this for large tables is not advisable.

In ASE versions before 12.5.1, a different method is needed, since newid() cannot be used as shown above (newid() is supported as of 12.5.0.3, but works differently than in 12.5.1+). Assuming the table contains a numeric column named my_col with varying values, the following approach may be possible:
set rowcount 1
declare @i int
select @i = rand()*1000*datepart(ms,getdate())
select * from t order by rand(@i*my_col)
go
Here, the trick is to include a column in the rand() argument in the order by clause, so that rand() is evaluated once for each row in the result set. If instead you'd use rand(@i) or just rand(), rand() would be evaluated only once, and all rows in the result set would get the same value. For this trick to work, my_col should have different values in each row: for two rows with the same value, rand() will generate identical results.
(For more background about these tricks with rand(), and about newid(), see chapter 8 of my book 'Tips, Tricks and Recipes for Sybase ASE')

There is yet another possible approach: if you happen to know the key values in your table are integers in a range with not too many missing values, you can do something like this:
declare @minkey int, @maxkey int, @rand_key int
select @minkey = min(key_col) from t
select @maxkey = max(key_col) from t
select @rand_key = 
     convert(int,(rand()*100000000)%(@maxkey - @minkey)) 
     + @minkey

set rowcount 1
select * from t where key_col >= @rand_key 
go
The advantage of this method is that it avoids a table scan if a suitable index exists. For the resulting row to be as random as possible, there shouldn't be big gaps in the range of key values. Hopefully, the min() and max() queries can be resolved quickly through an index.
Note that the query above may not be suitable for very large numbers of rows, since the modulo operator limits the expression to the range of the int datatype. If this is not enough, either rewrite the expression with numeric datatypes, or wait for ASE 15.0, which supports the 8-byte bigint datatype.
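One possible rewrite with numeric datatypes could look like this (a sketch: it sidesteps the modulo operator altogether by scaling rand() directly into the key range; the numeric(18,0) precision is an arbitrary choice):

declare @minkey numeric(18,0), @maxkey numeric(18,0),
        @rand_key numeric(18,0)
select @minkey = min(key_col) from t
select @maxkey = max(key_col) from t
-- scale rand() directly into the key range
select @rand_key = @minkey +
       convert(numeric(18,0), rand() * (@maxkey - @minkey))

set rowcount 1
select * from t where key_col >= @rand_key
go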

Update (May 2005):
Richard Crossley suggested the following alternative approach, which also works very well:
declare @num_rows int

select @num_rows = convert(int, 1000 * rand())

set rowcount @num_rows

select * into #t 
from spt_values 
order by number desc

set rowcount 1

select * from #t 
order by number asc

set rowcount 0

March 2005
Due to circumstances, I haven't been able to add a quiz question this month. Sorry!

February 2005
When setting up a warm standby with Replication Server, or when creating a database subscription in RepServer 12.6, the replicate database must be initialised (or, more formally, 'materialised'). This is typically achieved by creating the standby connection, or the database subscription, with the use dump marker clause, and then making a database dump of the primary database. This database dump is then loaded into the replicate database.
When the primary database is large (say, a few hundred gigabytes), it may take a while to transfer the database dump to the replicate side and to load it, simply because of its size. As a consequence, there can be a long period without an up-to-date standby.
The question: is it possible to perform the initialisation in such a way that the time without an up-to-date standby is reduced?

Answer:
If you can plan ahead, it is indeed possible to dramatically reduce the time needed to initialise the standby.
Let's first look at the standard sequence of actions in a bit more detail:

Step 1: Place a dump marker in the transaction log of the primary database (with create standby connection, or -in RS 12.6- create subscription, with the use dump marker clause). RepServer will start picking up replicated transactions from this point onwards.

Step 2: Make a database dump of the primary database; transfer the dump file to the replicate side, and load it.
Now, let's assume this takes a few hours for a large database. Meanwhile, RepServer is accumulating replicated transactions in the queues on its stable devices, so there may be a significant latency by the time the dump is finally loaded. This latency is what we'd like to reduce. A side-effect is that a lot of disk space may be needed in the stable queues to store all these transactions, adding a potential worry for the DBA.

Step 3: Online the replicate database (with online database) and resume the connection to the replicate database. RepServer can now finally start to apply the transactions that it has accumulated in its queues since the dump marker was set in step 1.

Now, the question is if we can find a sequence of actions that reduces the latency. You guessed it -- we can!
The trick is to make a transaction log dump (instead of a database dump) after using the use dump marker clause.

Step 1: Make a database dump of the primary database, and load it into the replicate database. If this takes a long time, also make a subsequent log dump of the primary database, and load that in the replicate database as well.

Step 2: Place a dump marker in the transaction log of the primary database (with create standby connection, or -in RS 12.6- create subscription, with the use dump marker clause). RepServer will start picking up replicated transactions from this point onwards.

Step 3: Make a log dump of the primary database; transfer the dump file to the replicate side, and load it.
Let's assume this log dump is not large compared with the database dump file in step 1 above. Therefore, transferring the file to the replicate side and loading it should not take too long. Meanwhile, RepServer is accumulating replicated transactions in the queues on its stable devices, but it won't be long before we start applying these to the replicate database.

Step 4: Online the replicate database (with online database) and resume the connection to the replicate database. RepServer only has to catch up the period corresponding to step 3 above, which should be short.
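In terms of commands, the revised sequence looks roughly like this (a sketch: the database name, dump file paths and RepServer connection name are made up, and the RepServer commands are abbreviated):

-- step 1: dump on the primary ASE, load on the replicate ASE:
dump database mydb to '/dumps/mydb.dmp'
load database mydb from '/dumps/mydb.dmp'

-- step 2: in RepServer, place the dump marker, e.g. with
-- 'create standby connection ... use dump marker'

-- step 3: dump the log on the primary, load it on the replicate:
dump tran mydb to '/dumps/mydb_log.dmp'
load tran mydb from '/dumps/mydb_log.dmp'

-- step 4: on the replicate ASE, then in RepServer:
online database mydb
-- resume connection to REP_DS.mydb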

Note that the crucial ingredient here is that you can indeed plan ahead: the total elapsed time for the entire procedure is not reduced, but only the time that you're without an up-to-date replicate database. This procedure makes sense if you know that you have to create a new standby database: you can start initialising it, and wait with dropping the current standby connection until the last moment.
For a large database, the difference in time (without an up-to-date standby) between the two methods can be significant, and may easily amount to a number of hours. Especially when the availability of an up-to-date replicate database is a business requirement, it can be important to leave this time as short as possible.
Also note that the underlying assumption is that transferring transactions through a log dump is faster than replicating them (which seems a fair assumption in most cases).

Interestingly, the RepServer documentation hardly mentions the possibility of using a log dump in combination with the dump marker (though it does not exclude it either). I believe there is actually just one place where the docs explicitly mention a log dump in this context; otherwise, the docs speak either of a database dump or just of a 'dump'. I suppose this is the reason why conventional DBA wisdom says that the initialisation sequence should always use a database dump...

(Thanks to Ger Timmens for providing input for this question)



January 2005
Does ASE give any clues about the exact OS (operating system) version that's running on your ASE host?

Answer:
The ASE version string indicates the OS for which your particular ASE version was compiled. The following comes from an ASE server running on Linux:
1> select @@version
2> go
-----------------------------------------------------------------
Adaptive Server Enterprise/12.5.3/EBF 12151/P/Linux Intel/
Enterprise Linux/ase125x/1883/32-bit/OPT/Thu Nov 11 17:33:16 2004
This information won't usually be a surprise: as a DBA, you tend to know on what kind of box your ASE server runs.

When it comes to the exact version of the OS, ASE does not traditionally provide any clues at all. However, this has changed recently: as of version 12.5.1 ESD#2, ASE now logs detailed information about the OS version in its errorlog during startup.
Here are two examples from Linux and Windows (it works similarly on all other platforms):
ASE booted on Linux release 2.4.7-10 version #1 Thu Sep 6 17:27:27
EDT 2001.

ASE booted on Windows 2000 build 2195 Service Pack 4.
This information is logged at the beginning of the startup messages, after the Sybase copyright notice. Look for the string 'ASE booted'.

No doubt this information is useful, but of course it isn't exactly earth-shattering. First, you still have to access the ASE errorlog, which typically requires logging in to the host first. Once you're there, running the Unix command uname -a will give you the same details (BTW, note that you may avoid logging in to the host by accessing the errorlog via a proxy table mapped to a file -- this requires the ASE_XFS license option).
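For what it's worth, a minimal sketch of that proxy-table trick (the errorlog path is made up, CIS must be enabled, and I'm assuming the file's contents map to a single varchar column named 'record'):

create proxy_table errorlog_px
external file at "/opt/sybase/ASE-12_5/install/errorlog"
go
select * from errorlog_px where record like '%ASE booted%'
go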

I suppose these OS version details are perhaps most useful when you need to determine the exact moment of an OS upgrade that occurred in the past, for example when trying to explain or troubleshoot problems or error messages that have 'suddenly' started to appear: it wouldn't be the first time a system manager upgrades a host OS without telling the DBA...
Also, Richard Crossley pointed out that this information is probably most useful for Sybase TechSupport who can tell from the errorlog what OS version they're dealing with (good point; I hadn't thought of that, actually).

