Troubleshooting: sas.servers & hostname aliases

I ran into an interesting problem over the last few days with the SAS® Service Management Script for Linux and UNIX (sas.servers) in my SAS® 9.3 deployments on Linux. I had noticed that the sas.servers scripts for my Lev2 and Lev3 deployments were taking several minutes to start the SAS services. I know JBoss takes ages to start, but usually the SAS services start really quickly. I had assumed Remote Services was the culprit, but surprisingly it turned out that starting the SAS/CONNECT Job Spawner service was the problem. Even more surprisingly, the underlying cause turned out to be the use of the hostname in the log filename, where the logical hostname alias used during deployment was different to the physical hostname of the machine it was deployed on.

To start the troubleshooting process, I modified the sas.servers script to add some timing information to the console log messages it generated. Editing the sas.servers script is not necessarily the best option, but more on that later. The sas.servers script has a function called logmsg, which seems to have been very nicely provided to make customizing its logging easy. I wanted to make a simple change to add a time stamp so I could see where all the time was being spent. I changed the function from this (reformatted and comments omitted for brevity):

logmsg() {
    echo "$*"
}

… to this:

logmsg() {
    DT=`date +%Y-%m-%d:%H:%M:%S`
    echo "$DT: $*"
}

I stopped the services since they were already running:

root@sas93lnx01:~# cd /opt/ebiedieg/Lev3
root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers stop
2011-10-30:21:51:31: Stopping SAS servers

Then I checked they were all stopped:

root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers status
2011-10-30:21:51:59: SAS servers status:
2011-10-30:21:51:59: SAS Metadata Server 1 is NOT up
2011-10-30:21:51:59: SAS OLAP Server 1 is NOT up
2011-10-30:21:51:59: SAS Object Spawner 1 is NOT up
2011-10-30:21:51:59: SAS Share Server 1 is NOT up
2011-10-30:21:51:59: SAS CONNECT Spawner 1 is NOT up
2011-10-30:21:51:59: SAS Remote Services 1 is NOT up
2011-10-30:21:51:59: SAS Framework Data Server 1 is NOT up

Once stopped, I started them again so I could see where all the time was spent:

root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers start
2011-10-30:21:54:54: Starting SAS servers
2011-10-30:21:55:03: SAS Metadata Server 1 is UP
2011-10-30:21:55:07: SAS OLAP Server 1 is UP
2011-10-30:21:55:11: SAS Object Spawner 1 is UP
2011-10-30:21:55:11: SAS Share Server 1 is UP
2011-10-30:21:56:12: SAS CONNECT Spawner 1 is NOT up
2011-10-30:21:56:28: SAS Remote Services 1 is UP
2011-10-30:21:56:32: SAS Framework Data Server 1 is UP

Total time to start was 98 seconds. Not too bad really, but it should be quicker than that. Everything seemed to start within a few seconds of each other except for the SAS/CONNECT Job Spawner! It looked like I was wrong to assume it was Remote Services. The SAS/CONNECT Job Spawner appeared to take 60 seconds to apparently not start. It had started, though; this was confirmed by looking in the process list and the job spawner log file. It all seemed very odd, for two reasons: firstly, the job spawner is usually very fast to start, and secondly, 60 seconds is a bit of a convenient number; it sounded like a timeout.

The sas.servers script starts each SAS service in turn waiting for the service to become available before starting the next one. It sounded like sas.servers was waiting for some event from the job spawner and gave up after 60 seconds assuming it had not started. Time to dig into the sas.servers script a bit more.

The SAS/CONNECT job spawner is started by the sas.servers script in the start_connect_spawner function. It launches the spawner (ConnectSpawner.sh start) in the background and then calls the is_atypical_server_up function to wait for it to finish starting. Looking into the is_atypical_server_up function, I could see that it repeatedly sleeps and greps the job spawner log for some trigger text. Effectively:

grep "SAS Job Spawner for Open Systems" /opt/ebiedieg/Lev3/ConnectSpawner/Logs/ConnectSpawner_sas93lnx01.log

Now I could see the problem. It was scanning a non-existent log file, ConnectSpawner_sas93lnx01.log, when it should have been scanning the ConnectSpawner_sas93lnx03.log file, which was actually being written to by the job spawner. The log file name contained the wrong hostname. The name sas93lnx01 was the physical name of the machine, but sas93lnx03 was the host name alias I used when deploying the Lev3 environment. I prefer to use host name aliases in deployments because they make it easier to move deployments between physical machines and provide disaster recovery options. In this case the Lev3 environment was currently on the same machine as the Lev1 environment, but I knew one day I would move it onto another machine, so I had used a host name alias, sas93lnx03 (a DNS CNAME record), which I could easily redirect later to the appropriate physical machine name (a DNS A record).

There was the problem. The SAS/CONNECT job spawner was using the logical hostname alias from the deployment, whereas the sas.servers script was using the physical hostname (which it got from `uname -n`). With the wrong log file name it could never find the log file, would therefore never see the trigger text, and would always time out after 60 seconds.
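
For reference, running uname -n on that machine returns the physical name, just as the shell prompt suggests:

root@sas93lnx01:/opt/ebiedieg/Lev3# uname -n
sas93lnx01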

There were at least two ways I could fix this problem. The first was to modify the ConnectSpawner.sh script to use the physical host name, rather than the deployment host name alias, in the log file name. The second was to modify the sas.servers script to use the deployment hostname alias instead of the physical host name. The second was the option I chose. It was the slightly more difficult of the two, but I didn’t feel right telling the job spawner to log to a file with the wrong name :)

These were the changes I made to sas.servers to get it to work. Firstly, I added an extra line where the SHOSTNAME variable is set, changing this:

SHOSTNAME=`uname -n`

… to this:

SHOSTNAME=`uname -n`
SHOSTNAME_ALIAS="sas93lnx03"

Secondly, I modified the start_connect_spawner function to change the name of the log file (as checked by the is_atypical_server_up function) from this:

SLOGNAME="ConnectSpawner_${SHOSTNAME}"

… to this:

SLOGNAME="ConnectSpawner_${SHOSTNAME_ALIAS}"

As it happens, the same problem occurs with the deployment tester server component too, so I also edited the start_deployment_testsrv function, changing this:

SLOGNAME="DeploymentTesterServer_${SHOSTNAME}"

… to this:

SLOGNAME="DeploymentTesterServer_${SHOSTNAME_ALIAS}"

However, there is a potential problem with this method of editing sas.servers directly; I alluded to it earlier in the post. The sas.servers script is a generated script, and any edits made to it will get wiped out if/when it is regenerated at a later date. The SAS documentation for the sas.servers script cautions against editing it. The sas.servers script is created by the generate_boot_scripts.sh script, so the next time someone or something runs generate_boot_scripts.sh, the changes to sas.servers will be lost.

To protect against this, you could modify the templates that are used by generate_boot_scripts.sh. This is what I did. I made all the same changes described above in the single template /opt/ebiedieg/Lev3/Utilities/script_templates/sas.servers.mainlog so they would survive a regeneration. I wouldn’t necessarily recommend this though. It’s not a method documented by SAS Institute, and I suspect these templates will most likely get changed at some point during a SAS software upgrade or hotfix. However, it works for me for the time being. Looking back, it probably would have been better to have just changed the ConnectSpawner.sh script to use the “wrong” hostname ;)
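
If you do go down the template route and want to script the edit rather than make it by hand, something along these lines would apply the same three changes. This is a rough sketch of my own, assuming GNU sed and that the template contains the same lines as the generated script; back the template up and check the patterns actually match before relying on it.

# Rough sketch: apply the same edits to the sas.servers.mainlog template
TEMPLATE=/opt/ebiedieg/Lev3/Utilities/script_templates/sas.servers.mainlog
cp "$TEMPLATE" "$TEMPLATE.orig"
# Add the SHOSTNAME_ALIAS line immediately after the SHOSTNAME assignment
sed -i 's/^SHOSTNAME=`uname -n`/&\nSHOSTNAME_ALIAS="sas93lnx03"/' "$TEMPLATE"
# Switch the CONNECT spawner and deployment tester log names to the alias
sed -i 's/ConnectSpawner_${SHOSTNAME}/ConnectSpawner_${SHOSTNAME_ALIAS}/' "$TEMPLATE"
sed -i 's/DeploymentTesterServer_${SHOSTNAME}/DeploymentTesterServer_${SHOSTNAME_ALIAS}/' "$TEMPLATE"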

With those changes done, it was time to stop and start the servers to see if there was any improvement:

root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers start
2011-10-30:23:54:30: Starting SAS servers
2011-10-30:23:54:39: SAS Metadata Server 1 is UP
2011-10-30:23:54:43: SAS OLAP Server 1 is UP
2011-10-30:23:54:47: SAS Object Spawner 1 is UP
2011-10-30:23:54:47: SAS Share Server 1 is UP
2011-10-30:23:54:49: SAS CONNECT Spawner 1 is UP
2011-10-30:23:55:06: SAS Remote Services 1 is UP
2011-10-30:23:55:10: SAS Framework Data Server 1 is UP

This time it took only 40 seconds (less than half the original time), and the SAS/CONNECT Job Spawner was now reported as being up.

As it turns out, I noticed that several of the other SAS services (like the metadata server) use the physical host name in their log filenames. It all works fine, but I would have preferred they used the deployment hostname if it were possible. I briefly looked into ways of telling those servers to use the logical hostname alias instead but, since they use the SAS logging framework and the %S{hostname} conversion pattern, I couldn’t see any obvious way other than editing all of the config files and scripts post-deployment. If someone knows a good way of consistently and automatically using the deployment hostname (the one provided during deployment and potentially different from the physical hostname) during installation, then I’m all ears :)

New Admin Forum on SAS Communities

I read some great SAS news this weekend. I found out there’s a new admin-related forum on the SAS Communities site! If you head over to http://communities.sas.com/ and look in the list of Communities and Forums you’ll see a new header, Deployment & Administration of SAS Software, and a new forum called SAS Deployment:

In the announcement, Mark Schneider, R&D Director – Deployment at SAS Institute Inc., said the following:

“We’ve received quite a few requests for a SAS Community for SAS Administrators, and as an initial response, we’ve created this SAS Deployment Community.”

Thanks to Mark and the SAS Communities administrators for setting up this new forum.

When I read the announcement I also wondered whether the phrase “initial response”, and the fact that the new forum is listed under a heading of its own, could mean we might see additional forums in the future on other specific admin topics (like EG Admin, Web Admin, Security Admin etc.). What do you think? From my perspective I’m just pleased there’s an admin forum. If it sees so much activity that more specific ones get created down the track, that could be a bonus, but for now I’m just happy there’s a new home for admin discussions.

Hope to see you in the new SAS Communities admin forum. :)

SAS Management Console 9.3 Configuration Manager: Where did everything go?

When I first saw the Configuration Manager plug-in in SAS Management Console 9.3, I wondered where everything had gone! As you can see in the image below, the list only contained four items!

I was expecting to see a nice long list of items like there is in this screenshot from SAS 9.2 …

Surely something was wrong? I have to confess that when I saw this, the first thing I did was go looking through my SAS 9.3 install logs for errors. I couldn’t find any … ??? Puzzled, I went back to the SAS 9.3 Configuration Manager plug-in to take a look at what was there. As it turned out, I had been a bit too keen to go looking for errors, because when I expanded the third item, SAS Application Infrastructure, hey presto, there were all the missing items I had been looking for. Not missing, just rearranged.

Mystery solved, though I was left feeling a bit red-faced. I’m always telling people to check the logs, but maybe sometimes it’s worth clicking around a bit first ;)

SAS Deployment Manager 9.3

I noticed the user interface for the SAS® Deployment Manager 9.3 has changed slightly.

Here is a screenshot from my “old” SAS 9.2 M3 installation:

… and here’s the screenshot from my new SAS 9.3 installation:

I’m guessing that the new hierarchical folder structure is going to allow for more tools to be added.

Interestingly there’s a new Apply Hot Fixes item at the bottom of the list of tools. This new tool is also mentioned in the SAS 9.3 hotfixes area under Important Changes to Hot Fixes. Initially I wondered if this was going to replace the SAS92HFADD tool, but then I saw there’s a SAS 9.3 Hot Fix Analysis, Download and Deployment Tool (SAS93HFADD) that’s currently marked as Under Development. I really liked SAS92HFADD (I did a couple of posts about my experiences with it here and here), so I’ll be interested to see SAS93HFADD when it’s available.

The documentation for the updated SAS Deployment Manager is in the SAS Deployment Wizard and SAS Deployment Manager 9.3: User’s Guide which is available from a link at the bottom of the SAS 9.3 Install Center Documentation page. This is one of those documents that I don’t refer to often enough to always remember where to find it. I hope it makes it onto the very useful, all-in-one page SAS 9.3 Documentation by Title.

SAS Restricted Options on UNIX

In a tweet by Gordon Cox last month, I was reminded of the restricted options facility available with SAS® software on UNIX platforms. This is a capability whereby an administrator can set mandatory SAS system options at multiple levels of granularity: globally, per-group, and/or per-user. The reason for this post is that I don’t look at the documentation for this very often, and every time I do it takes me a while to track it down. I always think it’s going to be in the UNIX companion in the Base SAS area… but it’s not! That gets me every time. Instead it’s tucked away in the Configuration Guide for SAS 9.2 Foundation for UNIX Environments (PDF) in Chapter 2 – Restricted Options. You can find this document in the Install Center section of support.sas.com under SAS Installation Note 36467: Documentation for a SAS® 9.2 installation on UNIX.

The context of the tweet was that the restricted options facility is another mechanism whereby a default setting of the NOXCMD option for SAS platform servers could be overridden for a subset of trusted users or groups in a SAS platform installation. The NOXCMD option is discussed in an earlier post: NOXCMD: NO eXternal CoMmanDs!

A quick summary of restricted options:

  • SAS system options under UNIX set by an administrator that cannot be changed by a user
  • Processed in the order global, group, then user. The last instance of an option is the one that wins.
  • Global restrictions are read from the file !SASROOT/misc/rstropts/rsasv9.cfg
  • Group restrictions are read from the file !SASROOT/misc/rstropts/groups/<groupname>_rsasv9.cfg
  • User restrictions are read from the file !SASROOT/misc/rstropts/users/<userid>_rsasv9.cfg

On Linux (at least) I can use the command “id -gn <userid>” to find out the effective group name for a user, given their user id. For example, “id -gn sassrv” might generate “sas”.

In my SAS 9.2 installation on Linux, whilst everyone else is still constrained by the NOXCMD option, I can ensure that the SAS Enterprise Guide user Bob Baxter, who has a user id of bob, can still use operating system commands in the SAS programs he runs on the SASApp server, by creating the file /usr/local/SAS/SASFoundation/9.2/misc/rstropts/users/bob_rsasv9.cfg with the following contents:

-xcmd
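
For the record, here is roughly how that file gets created from the shell. The SASROOT path is as per my installation (adjust it for yours), and the group-level example at the end is purely hypothetical, using a made-up group name, to show where a per-group file would go instead.

# Per-user restricted options file allowing XCMD for bob
# (adjust SASROOT for your own installation)
SASROOT=/usr/local/SAS/SASFoundation/9.2
echo "-xcmd" > "$SASROOT/misc/rstropts/users/bob_rsasv9.cfg"

# Hypothetical group-level equivalent for a made-up "analysts" group
echo "-xcmd" > "$SASROOT/misc/rstropts/groups/analysts_rsasv9.cfg"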

Of course, this only applies to SAS processes launched and run as the requesting user. Whilst it can be used to override NOXCMD for specific users/groups using a standard workspace server, it can’t be used to distinguish between different users on the same stored process server, since all users will share SAS stored process server processes running under a shared identity (like sassrv). In that situation, directing the users to separate SAS application servers would be more appropriate. There is an example of this in Jim Fenton & Robert Ladd’s SAS Global Forum 2010 paper 311-2010: A Practical Approach to Securing a SAS® 9.2 Intelligence Platform Deployment.

Thanks to Gordon for reminding me about the restricted options facility.