Sybase ASE
All versions
Last updated: 18 December 2011
ASE Quiz Questions: answers 2011
 
More ASE quiz questions: 2013 - 2012 - 2011 - 2010 - 2009 - 2008 - 2007 - 2006 - 2005 - 2004 - 2003 - 2002 - 2001
For a randomly selected quiz question, click here.


December 2011
As you no doubt know, running BCP-in to copy data from a file into a database table will not fire an insert trigger on that table.
This month's question: if I told you that BCP-out can in fact cause a trigger to fire, would you know how that could work? (assuming you'd believe me in the first place, of course).

Answer:
It is true that the act of copying rows into a table with BCP will not cause an insert trigger to fire.
No discussion about that.

However, ever since ASE 15.0.2, BCP supports the --initstring parameter, which can be used for more than just copying rows to or from a file.
This parameter can take a SQL statement as its value. That can be any SQL statement, so if it is an insert (or update or delete) operation, it can cause a corresponding insert (or update or delete) trigger on the affected table to fire.
It does not actually matter whether it is a BCP-in or BCP-out operation: in both cases, the SQL statement specified with --initstring is executed before BCP starts copying rows between the file and the database table (mentioning BCP-out was the tricky part of the question).

Here is an example. Before copying data into table my_db..my_table, we first remove all currently existing rows from the table (truncate table would be faster BTW, but I'm using delete here just for the example):
% bcp my_db..my_table in data_this_week.dat -Usa -Pmysecret -SSYBASE -c 
  --initstring "delete my_db..my_table"
Clearly, when a trigger is fired by an insert, update or delete statement in --initstring, that trigger will not see the data rows being copied into the table, but only the rows affected by the statement in --initstring.

The reason why I'm bringing this up is that you can do some useful things with that --initstring parameter (I admit, mentioning the trigger was just an excuse to come up with a question that would grab your attention -- but you're still reading, so it worked ;-).
Originally, --initstring was designed only for the command set replication off: the problem to solve was that sometimes you want to BCP into a replicated table but without replicating the rows being BCP'd in.
However, you can specify any SQL statement with --initstring. In fact, you can put multiple SQL statements in it, because it is submitted to the ASE server as a SQL batch.
Cleaning out a table you're going to copy into, as shown above, is one example. I've also seen a customer use it to create a view, and then BCP out from that same view (which you still have to drop afterwards since BCP does not yet have an --exitstring parameter: more on that below).
Obviously, you can do all this without --initstring as well, by running a script with the same SQL command before kicking off BCP. But with --initstring, you can simplify things by merging these two steps into one.
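For instance, here is a sketch of what combining multiple statements could look like, reusing the hypothetical table, data file and login details from the example above; the two statements (the 'set replication off' mentioned earlier, plus the delete) are simply concatenated into a single batch:
% bcp my_db..my_table in data_this_week.dat -Usa -Pmysecret -SSYBASE -c 
  --initstring "set replication off delete my_db..my_table"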

A word of warning though. The SQL statement is executed in the background. There is no output of any kind; no results are returned; no error messages are shown (you might find some in the ASE errorlog though, depending on the type of error). Therefore you should be careful not to run into error conditions, like running out of locks: this will make it more difficult for you to understand what's going on.

Lastly, it's actually not ASE 15.0.2 that you need, but Open Client version OCS 15.0 ESD#5, since the enhancement is in the BCP client utility only. This OCS version happens to be bundled with ASE 15.0.2, but you can also use this BCP version against older versions of ASE: BCP will simply log in to the ASE server to execute the --initstring SQL statement. This is no different from how any other client application connects to ASE.

In summary, --initstring is a useful enhancement to BCP.
A question I've been asked many times is whether there could be an --exitstring as well, to execute a SQL statement after completing the BCP operation. This would indeed be useful, but it has not yet been implemented. If you feel this would be handy for you, you can help by opening a case with Sybase TechSupport and requesting this enhancement. Ask your TechSupport contact person to link the case to CR 593795.


October 2011
I was recently contacted by an ASE user who had a requirement to store data in ASE after hashing it with SHA-256.
As of version 15.0.2, ASE provides the built-in functions hash() and hashbytes(), but these support only SHA and SHA-1 (as well as MD5), not SHA-256. For some reason, the other hashing algorithms were not acceptable in this case.
Eventually though, this ASE user found a way to use SHA-256 inside ASE.

How would you have solved this problem?

Answer:
Since ASE also supports user-defined SQL functions, you could consider implementing the SHA-256 algorithm yourself in Transact-SQL. I haven't tried this myself, but I don't immediately see a reason why this couldn't be done (you'd have to use a varchar(16384) variable as an array while calculating the algorithm). If anyone with enough spare time on his hands manages to get this working, please share the result.


The more practical solution, as indeed adopted by our ASE user, is to use Java, which supports various hashing algorithms.
For example, you could use the java.security.MessageDigest class ('message digest' is a more formal name for 'hashing'):
	import java.security.MessageDigest;

	String MyDataToHash = "Gotta keep this text safe! Better hash it...";

	MessageDigest D = MessageDigest.getInstance("SHA-256");
	D.reset();
	byte[] Hashed = D.digest(MyDataToHash.getBytes("UTF-8"));
You can code up a Java class with this functionality and load the class into ASE (see this page for more details on how to do that).
Assuming the class name is HashSHA256 with a method named CalcSHA256, you can then do this:
1> create function sha256(s varchar(255)) returns varchar(255)
2> language java parameter style java
3> external name 'HashSHA256.CalcSHA256'
4> go
... so that finally, you can call the Java class as if it was an ASE function:
1> select sha256("MyVeryS3cretP@ssword")
2> go
Presto!
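To actually store hashed data, as in the original requirement, you could then use the new sha256() function in an insert statement. A minimal sketch, assuming a purely hypothetical table 'my_users' with a varchar column for the hash:
1> create table my_users (name varchar(30), pwd_hash varchar(255))
2> go
1> insert my_users
2> select "scott", sha256("MyVeryS3cretP@ssword")
3> go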

Obviously, using this Java workaround will likely be significantly slower than the native ASE functions hash() and hashbytes(). But this is an interesting way of quickly borrowing some functionality already available elsewhere.

Should you require support for native SHA-256 hashing in ASE (i.e. as part of hash() or hashbytes()), then please open a TechSupport case with Sybase, and request that your case be linked to CR 551014.

(Thanks for inspiration to Frank Fraijo, whose systems, incidentally, are now enjoying the security of strong hashing).


August 2011
As you probably know, it is possible to protect an ASE role with a password: in order to activate the role you have to specify the password (i.e. set role supervisor_role with passwd MyS3cret on).
Recently, a security-conscious user asked whether you could also put passwords on system roles like sa_role and sso_role.

The answer is: yes, you can do that. However, it is a really bad idea. Why?

Answer:
It's a bad idea to put passwords on system roles like sa_role and sso_role because you run a serious risk of effectively locking the DBA out of ASE -- and if you get to that point, there is no solution to get back in.

Here's how that could happen.

As you know, the sa login by default has been granted the roles sa_role and sso_role, allowing it to do all those privileged DBA things.
When you put a password on those roles, it means that the sa login can still connect to ASE, but it will be an ordinary user with no privileges until it activates sa_role and sso_role (with the set role sa_role with passwd ... on command -- having to execute this every time after logging in is a headache, but that's the sort of hassle security may require).

The problem is that if you somehow lose knowledge of those passwords, sa cannot perform the DBA functions anymore since the sa user is cut off from the privileges that those roles bring. Even though all other operations not requiring such privileges can continue, sooner or later there will be a requirement to perform some DBA functions -- like running shutdown. However, this is now impossible since those system roles can no longer be activated.
That's a real problem, and it means the DBA has effectively locked himself out of ASE.

In the old days, there was actually a sneaky backdoor solution for this situation, as I described in my first ever presentation to the European ISUG conference in 1998 (download it here). The basic trick was that, even though the sa login didn't have any privileges, it was still the owner of the master database. For that reason, it could still update the system tables in master. This allowed you to manually patch the sysroles table, effectively wiping out the role passwords.
(you had to jump through some hoops to actually perform these updates; for full details, see that presentation I mentioned earlier)

Anyway, this workaround is no longer possible. Since ASE 12.0 or thereabouts, updating the system tables has been made dependent specifically on having activated system roles.
Since then, there is no backdoor, traceflag or trick to get your system role privileges back if you lose the passwords on these roles.
To the best of my knowledge, so far no customer has actually run into this problem. So don't be the first, and resist the temptation of putting passwords on sa_role and sso_role, no matter how security-conscious you (or your manager) may be. Should you decide to do it anyway, better put all safeguards in place (password copies in safe locations etc), and establish procedures to ensure those safeguards always remain in place, updated and accessible as needed.


You'll have to excuse me for deliberately not mentioning the precise SQL commands to put a password on these system roles. This is just to avoid someone accidentally 'trying it' in the wrong server, and getting into trouble later as described above. There's nothing secret about these commands, and they're easy to look up in the documentation, but you have been warned!

For those with active interest in the darker side of things, you may also want to know this is not the only method of locking yourself out of ASE. The answer to the quiz question for July 2007 also describes some scenarios to get yourself into deep trouble. But you have been warned again!


May 2011
As a DBA, sometimes you may want to know the schema of a #temp table that a particular session is currently using. Running sp_help would be the logical first thing to try, but that doesn't work for #temp tables.
It is possible to figure out the details of a #temp table by using the trick described here, which involves dumping tempdb and loading it into another database. However, this can take some time when the temporary database in question is big.

Is there a quicker way to get the schema of someone else's #temp table?

Answer:
There is in fact a very simple way to find the schema of a #temp table owned by a session other than your own.
This can be achieved with the ddlgen utility (more details on ddlgen are here).

Let's say the following table has been created:
create table #mytmp (
   a int,
   b datetime null,
   c varchar(30),
   d numeric identity )
go
Now, from a different session, you must figure out the internal name of the #temp table you're interested in (that's the situation you'd be in as a DBA interested in someone else's #temp table).
Assuming the temporary database involved is tempdb (for your own session, you can verify this with select db_name(@@tempdbid)), you must first find the internal name of the #temp table:
use tempdb  -- or whatever your session's temporary DB is...
go

select name from sysobjects where name like "#mytmp%" 
go

-- If multiple '#mytmp' tables exist, figure out which one you're interested in.
-- Go here to understand what that full internal name means...
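Note that when multiple temporary databases have been configured, the owning session may well be bound to a different temporary database than your own. If I recall correctly, the built-in function tempdb_id() takes a spid and returns the database ID of that session's temporary database, so a quick sketch like this (spid 42 is just an example) should point you at the right database:

select db_name(tempdb_id(42))
go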

Now that you have the internal name of the temporary table, specify it as an argument to ddlgen, along with the name of the temporary database in question.
Important: there must not be a space between -N and the table name, or an error will result:
% ddlgen -Usa -Pmysecret -SSYBASE155 -Dtempdb -TU -N#mytmp00000160005404477
[...]
set quoted_identifier on
go 
create table "#mytmp00000240019407352" (
        a           int                not null  ,
        b           datetime               null  ,
        c           varchar(30)        not null  ,
        d           numeric(18,0)      identity   
)
lock allpages
 on 'default'
go 
[...]
So there we have the table's schema!
Note that you cannot actually execute this particular piece of DDL generated by ddlgen: it is invalid syntax for a quoted identifier to start with a '#' character (but never mind about that, we were just looking for the schema).

Also note that when indexes have been defined for the #temp table, ddlgen also tries to reverse-engineer these, but unfortunately, it shows an empty string for the indexed columns (i.e. you cannot tell on which column(s) the index was created). This makes reverse-engineering indexes on #temp tables rather pointless.

I'm guessing that ddlgen was never really supposed to be able to reverse-engineer #temp tables, but that it just happens to work (apart from the indexes, that is) when you specify the full internal name.
It has been useful for me though, when trying to figure out what was going on deep inside an application.


April 2011
Let's say you need to do some OLAP-style processing in ASE, such as calculating the 6-month exponentially weighted moving average over some monthly sales data.
Since ASE does not provide OLAP functions, you'd have to manually write a workaround to implement this requirement with the existing ASE features. That's not impossible, but such workarounds quickly become complex, slow and clumsy.

Now, let's assume you also have Sybase IQ available (with all required licenses in place).
IQ 15.1 (or later) happens to provide the analytic function exp_weighted_avg() -- exactly what you need. You could of course copy the ASE data into IQ and then run that OLAP query in IQ instead of in ASE. But having two copies of the data is not convenient, and introduces new complexities like keeping both copies synchronised.

How can you implement the required OLAP functionality in ASE without storing any ASE data in IQ?

Answer:
It is actually surprisingly simple to use IQ's capabilities in ASE. This requires remote data access from ASE to IQ as well as the other way around (accessing data in remote servers, also known as 'CIS', is supported both in ASE and in IQ). Most of the stuff below is one-time setup & configuration to make the remote data access possible.

Before getting to the setup steps, this is the schema of the ASE table holding the actual data:
-- run this in ASE:
use my_db
go
create table my_table 
	( year int,
	  month varchar(30),
	  sales money )
go
Before configuring the ASE & IQ servers, let's assume the following:
  • our ASE server (named 'MY_ASE') runs on port 5001 on server 'myasehost'
  • we're connecting to 'MY_ASE' with login 'my_ase_login', with password 'my_ase_passwd' (without the quotes)
  • our IQ 15.1+ server (named 'MY_IQ') runs on port 2639 on server 'myiqhost'
  • we're connecting to 'MY_IQ' with login 'my_iq_login', with password 'my_iq_passwd' (without the quotes)
First, we create a proxy table in IQ that points to our ASE table my_db..my_table. This takes a couple of setup steps:
-- run these steps in IQ, as DBA:

-- create mapping to remote ASE server :
create server MY_ASE class 'asejdbc' using 'myasehost:5001';

-- create externlogin
create externlogin my_iq_login to MY_ASE 
remote login my_ase_login identified by my_ase_passwd;

-- test connection:
forward to MY_ASE {select @@servername, @@version};

-- create proxy table mapping to ASE table:
create existing table ase_proxy_tab at 'MY_ASE.my_db.dbo.my_table';

-- test proxy table (run as user 'my_iq_login'):
select * from ase_proxy_tab;
There are different ways to set up the proxy table, and you could potentially run into some issues while setting things up. I'll cover those at a later time in more detail (it would lead too far for this quiz question).

Now that we can access ASE data from IQ, the next step is to write an IQ stored procedure that calculates the moving average, accessing the ASE data through the IQ proxy table. This procedure will be kicked off from within ASE.
-- create this proc in IQ 15.1 or later:
create procedure iq_proc_for_ase (in @nr_months int)
as
begin
	-- IQ analytic functions don't run on proxy tables,
	-- so we need to copy the data
	select * into #asecopy from ase_proxy_tab

	select year, month, 
	moving_avg_6month = exp_weighted_avg(sales, @nr_months) 
	over (order by year, month(month)) 
	from #asecopy
	order by year, month(month)

	drop table #asecopy
end;

-- local test if this procedure works 
-- (NB: result set will be empty at this point!)
execute iq_proc_for_ase 6;
We've set up the IQ side now. The remaining part is to configure ASE so that we can kick off the stored procedure in IQ:
-- run these steps in ASE (run as 'sa'):

sp_configure 'enable cis', 1
go
-- if it was not already set to 1, shutdown & restart ASE

sp_configure 'cis rpc handling', 1 
go
-- now disconnect & reconnect the session

-- create mapping to remote IQ server :
sp_addserver MY_IQ, ASIQ, 'myiqhost:2639'
go

-- create externlogin:
sp_addexternlogin MY_IQ, my_ase_login, my_iq_login, my_iq_passwd
go

-- test the connection to IQ:
connect to MY_IQ
go
select @@version
go
disconnect
go

-- now test executing the IQ stored procedure (as user 'my_ase_login'):
exec MY_IQ...iq_proc_for_ase 6
go
The only thing missing now is some test data. This query will quickly generate some rows:
-- run this in ASE:
insert my_db..my_table
select year(m), datename(mm,m),
       sales=1000+n*10+rand()*10*n -- simulate varying sales numbers
from (select m=dateadd(mm, number, 'Jan 01, 2003'), 
             n=number from master..spt_values where type='P' and number < 100
     ) as x order by x.n
...and now run the IQ stored procedure:
-- run this in ASE:
-- 6 means: 6-month moving average
exec MY_IQ...iq_proc_for_ase 6
go
 year        month                          moving_avg_6month
 ----------- ------------------------------ --------------------
[...]
        2006 July                                    1632.958134
        2006 August                                  1667.054181
        2006 September                               1618.518615
        2006 October                                 1586.282182
        2006 November                                1643.579587
        2006 December                                1697.787791
        2007 January                                 1644.576679
        2007 February                                1729.625285
[...]
So, there we are: we have managed to extend ASE with functionality available in IQ. Mission accomplished!


Obviously, there may well be some performance impact due to using an ASE RPC to start the IQ stored procedure, which in turn uses CIS to access the actual ASE data. That would be the performance price to pay for bringing some IQ functionality within reach of ASE with minimal development effort.

I've created a separate page describing this trick: go here.


March 2011
When using a raw device for an ASE database, you have to make sure you're using the character device, not the block device. This is because data could potentially be lost in case of a sudden system failure when the block device is used.
Sometimes it happens that a DBA inadvertently picks the wrong device type. On some platforms, an error message will then be raised; on others, it will not.

How can you verify whether a particular raw device is a character device or a block device?

Answer:
Conveniently, ASE itself provides a simple way to determine the type of a disk device: as of version 15.0.2, ASE has a built-in function getdevicetype().

So how does getdevicetype() work? It's quite simple: you call it with a pathname, and the function returns a number indicating the type of the device:
1> select getdevicetype('/dev/raw/raw2')
2> go

 ------
      1

(1 row affected)
The return code can have the following values:
  • 1 : raw device/character device
  • 2 : block device
  • 3 : regular file
  • 0 : error (i.e. an invalid pathname was specified)
Note that the function also works fine with the pathname of a symbolic link that points to the raw device.
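As a small illustration, you could map the return code onto a description with a case expression; the pathname is just an example, and the descriptions simply follow the list above:
1> select case getdevicetype('/dev/raw/raw2')
2>          when 1 then 'raw/character device'
3>          when 2 then 'block device'
4>          when 3 then 'regular file'
5>          else 'error: invalid pathname'
6>        end
7> go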

You will not find getdevicetype() in the ASE documentation, so it is formally undocumented. However, it is used in the system stored procedure sp_helpdevice (and others, too). So you can use it too -- it works fine.

A convenient function, indeed.


February 2011
This month's quiz question is about the following valid SQL statement.
Even if you haven't been drinking, you probably get the feeling there's something odd about this piece of SQL code.
The question: what's odd about it, and why is this statement still valid?
1> select difference from space
2> where datalength between lockscheme and pagesize
3> group by right(stuff,left)
4> having ceiling(floor) < floor(ceiling)
5>    and night = day
6> go
 difference
 -----------------------------------
 The difference between *what* ?

(1 row affected)

Answer:
I admit, this case is a bit doctored, but I can explain everything, if you'd be so kind as to hang on...
Inspiration for this quiz question came not from the bottle, but from a syntax error I did not run into -- strange as that may sound.
It all started with a mis-typed table name that should have been 'dat', but ended up as 'day' (it can happen on a QWERTY keyboard). To my surprise, the table was still created and the table name was valid. The surprise lay in the fact that day() is a built-in function in ASE.
This made me realize that, although you cannot use reserved words as identifiers (unless you put them in quotes or brackets, also see ......), you can use names of built-in functions as identifiers.

Some time after this discovery, I wondered how far you could push this concept, and the select statement above emerged. The mix of function names in places where you'd normally expect columns, leads to a wacky-looking SQL statement. For me, at least.

Here's the table creation statement to make the above select work. A table named 'space', with columns like 'floor', 'ceiling', 'day' ...which are all names of ASE built-in functions (okay, 'night' is nothing special, it was added just to make the predicate 'night = day' possible):
1> create table space (difference varchar(35), datalength int,  
2> lockscheme int, pagesize int, left int, stuff int, floor int, 
3> ceiling int, night int, day int)
4> insert space values ('The difference between *what* ?', 1,1,1,1,1,2,3,1,1)
5> go
(1 row affected)
In fact, it is quite possible to take this concept further:
  • Names of 'set' options can be used as identifiers; this can be a perfectly valid SQL statement (a sketch of DDL that makes it run follows after this list):
    select showplan from forceplan where language = chained
  • Well-chosen names of objects, like tables, stored procedures, or user-defined SQL functions, combined with names of users or databases:
    set
    @this = the.[end](beautiful.friend())
    (in memory of Jim Morrison; '[end]' and 'friend' are UDFs, owned by users 'the' and 'beautiful', respectively)
  • Using quoted identifiers, or using brackets around identifiers (as in '[end]' above).
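As promised for the first bullet above, here is some hypothetical DDL that would make that 'set option' statement run; the table, column sizes and values are arbitrary, chosen only so the statement has something to select:
1> create table forceplan (showplan varchar(40), language varchar(20), chained varchar(20))
2> go
1> insert forceplan values ('nothing up my sleeve', 'moved', 'moved')
2> go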
All of this made me realise we're talking about something called 'code poetry' here. The concept isn't new: some programming languages lend themselves well to crafting lines that rhyme, or otherwise sound more like a work of literary art than computer code, yet still compile and run correctly. Perhaps the best-known example is Perl, where a kind of sub-culture art form developed out of this (for those for whom Perl itself wasn't sub-culture enough, I guess).
Due to its syntactic structure, Perl lends itself well to this sort of thing; google for "Perl Poetry" for some examples.
It's harder in SQL, but not impossible; some fine examples of SQL Poetry exist as well. This looks like fun (to me at least).

So, here's a challenge for you, esteemed reader: can you demonstrate your mastery of SQL and write some SQL Poetry? That is, SQL code that looks, well, unlike regular SQL code, but rather has a story line of its own, yet still runs correctly.
It doesn't *have* to rhyme -- anything goes as long as there is some elegance, surprise or any other quality you wouldn't normally associate with SQL statements. Admittedly, these are highly subjective criteria, but that's no reason not to try. The only restriction I'd suggest is that long_identifiers_with_lots_of_underscores, and "quoted identifiers consisting of entire sentences" should be used sparingly.
If you submit your creative work, I'll host it on a dedicated 'SQL Poetry' page (all credits remain with the author submitting the entry). When you submit something, please include the DDL to make the code run. It doesn't have to be running on ASE -- SQL Anywhere or IQ are fine too (but please indicate the required product + version).

I am awaiting your entries! (you can submit here).

Update: We have some submissions! Go here...


January 2011
Most ASE users are well aware that updating a row causes that row, or that row's data page, to be exclusively locked for the duration of the transaction. Such an exclusive lock blocks all access to that row from other sessions.
However, there are two exceptions to this rule.
The first is with "transaction isolation level 0": a select statement at level 0 will read rows being modified whose transaction is not yet committed (note that isolation level zero is not without risks -- but that's a topic for another time).

What is the other exception? (which, incidentally, is totally risk-free)

Answer:
The other exception is a little-known feature named 'pseudo column-level locking'. It has existed since ASE 11.9, when the datarows and datapages lock schemes were introduced.
These lock schemes were designed to provide better concurrency, primarily by avoiding locks on index rows (both datarows and datapages) and by allowing just one specific row to be locked (datarows) instead of the entire page (and thereby all rows on that page).
In addition, these lock schemes have some further concurrency enhancements that don't tend to get much attention -- such as pseudo column-level locking.

Pseudo column-level locking is active by default for tables using the datarows and datapages lock schemes. The basic concept is that ASE may still allow columns of a locked row to be accessed by select statements in other sessions, if ASE can determine that the result of the select will not be affected by the update that causes the row to be locked.
That was a long sentence -- read it again, and then check the example below.
Let's say we have this table:
create table MyTab (x int, y int, z int) lock datarows
go
-- quickly insert 10 rows of test data:
insert MyTab 
select id, id+10, id+100 
from sysobjects where id < 11
go
Now, user A updates a row, and leaves his transaction open, thus keeping the row locked:
begin tran
go
update MyTab 
set y = y*10
where x = 5
go
(1 row affected)
When user B runs the following statement, it will block because user A has the row locked (so far, there's nothing special):
select * from MyTab where x = 5
go
(session hangs due to blocking lock)
However, when user C runs this statement, it does not block -- and this may be a surprise (it was for me when I saw it first):
select x, z from MyTab where x = 5
go

 x           z
 ----------- -----------
           5         105

(1 row affected)
So what's the difference between the select by user B and the one by user C? Well, user B is selecting all columns, including column 'y' that is being updated -- clearly that column cannot be accessed since it is being modified in a yet-uncommitted transaction; and therefore user B is blocked.
However, user C is only selecting columns 'x' and 'z', and neither of these is being modified by user A's update. This is where ASE's pseudo column-level locking kicks in: whatever the final result of the update of column 'y' will be, it will not affect the values of columns 'x' and 'z' for that row; therefore, ASE allows the select to proceed. Presto!

There are some different flavours of pseudo column-level locking. The example above works only for a datarows lock scheme, but other variations also work with datapages locking (none of this works for allpages locking, should you wonder). Some additional restrictions apply, for example around the index being used for the select: if this index contains the column being updated, the select will still block.
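As an illustration of that index restriction, here is a sketch based on the example above (the index name is made up, and I have not verified this exact scenario myself, so treat it as an illustration of the documented rule rather than a tested case):
create index MyTab_xy on MyTab (x, y)
go
-- with user A's update transaction open again, user C's select of only
-- 'x' and 'z' may now block after all: if ASE locates the row via this
-- index, that index contains the updated column 'y'
select x, z from MyTab where x = 5
go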

For full details, check the ASE documentation. Just click here, and search for "pseudo column level locking". Happy reading!

Bottom line: datarows and datapages locking help to improve concurrency -- maybe in more ways than you thought.





 
 This document is located at www.sypron.nl/quiz2011a.html