SAS Download Manager in Console Mode

SAS® 9.4 M3 is now available (as discussed by Andy Ratcliffe in his recent NOTE: blog post), so I’m downloading a new SAS depot using the SAS Download Manager. I’m downloading it onto a Linux server that happens to have X11 available, but I’m choosing to use console (text) mode, rather than the default X11 windowed mode.

Using the SAS Download Manager in console mode is my preferred method because I find it more flexible. Why console mode? I know from experience that a SAS depot download is going to take a few hours and, if I run the SAS Download Manager in windowed mode, I risk losing the remote X connection if there are any network disruptions between me and the server. By running the SAS Download Manager in console mode, in combination with screen over an SSH session, I can easily disconnect and reconnect as required. I can disconnect (Control-A D) when I leave the office to go home, and reconnect (screen -r) when I get home to check on progress. I can also get screen to capture a log of the session in a text file (screen -L).
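
As a rough sketch of the workflow (the sasdepot session name is arbitrary, and the ~/SASDM path, setup.sh launcher name and -console option are placeholders here; check the SAS Download Manager documentation for the exact command for your release):

# start a named screen session with logging enabled (screen -L writes to screenlog.0)
screen -L -S sasdepot

# inside the screen session, launch the SAS Download Manager in console (text) mode
# (launcher name and console option are placeholders; see the SAS documentation)
cd ~/SASDM
./setup.sh -console

# detach with Control-A D; later, from a new SSH session, reattach with:
screen -r sasdepot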

Of course, I could have used a Windows or Mac version of the SAS Download Manager to download onto a laptop and then upload onto the server, but that ties the laptop to a single location for the duration of the download. Plus, if I’m not near the server at the time, I’d have to download and then upload nearly 50GB of SAS depot. If I’m traveling with my 4G mobile broadband connection, the server also has a much faster and much cheaper internet connection than I do! These are some of the reasons I prefer console mode.

To run SAS Download Manager in console mode … Continue reading “SAS Download Manager in Console Mode”

SAS 9.3 Framework Data Server (and the missing license)

Whilst troubleshooting why my SAS 9.3 Framework Data Server wasn’t starting, I discovered that, unlike Foundation SAS, where the license (a.k.a. SID or setinit) is applied once a year, the SAS Framework Data Server requires the presence of a setinit.sas file during every start up. Mine had been deleted. I’m not 100% sure how that came to be, but I suspect it was probably one of two things: … Continue reading “SAS 9.3 Framework Data Server (and the missing license)”

SAS Management Console over SSH

I was asked recently how to get SAS® Management Console to remotely access a SAS metadata server using SSH tunnels. In the absence of a VPN connection to your network, SSH can be an alternative for SAS Management Console access to a remote SAS metadata server.

I am a huge fan of SSH (Secure Shell). I have been using it several times a day for many years now. It’s great. If you haven’t heard of SSH before, take a look at the Wikipedia page for Secure Shell. Here’s a quote from the page that provides a nice intro/summary:

Secure Shell (SSH) is a network protocol for secure data communication, remote shell services or command execution and other secure network services between two networked computers that it connects via a secure channel over an insecure network.

Here are a couple of methods for using SAS Management Console over SSH:

1. Remote execution of SAS Management Console with X11 forwarding to the client

This method can be used when you are accessing a SAS platform installation on Linux or UNIX and have SSH client software and X server software on your remote workstation. Mac or Linux workstations are great for this since they usually have all the required software pre-installed. Windows can also be used if you obtain SSH client and X server software. I personally use PuTTY as an SSH client when I am working on Windows and would definitely recommend it. I have no specific recommendations for an X server on Windows since it’s been a long time since I’ve done X on Windows.

Assuming that, as a SAS platform administrator, you have remote SSH access to the SAS metadata server machine, you can use SSH from your workstation to execute SAS Management Console remotely and forward the X display back to your client workstation.

Here’s an example command to do this:

ssh -X sasmeta.example.com /opt/sas93/SASManagementConsole/9.3/sasmc

This command connects via SSH, with X11 forwarding, to the machine sasmeta.example.com and then executes SAS Management Console (/opt/sas93/SASManagementConsole/9.3/sasmc) on that remote machine, sending the X windows back to the client workstation.
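
If the -X (untrusted) forwarding fails with X authentication errors, OpenSSH also has a -Y (trusted X11 forwarding) option that can be worth trying. The same command again, with -Y and an illustrative user name added:

ssh -Y sasadm@sasmeta.example.com /opt/sas93/SASManagementConsole/9.3/sasmc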

2. Local execution of SAS Management Console with SSH tunneling

SSH also allows you to configure a tunnel: a local port on your workstation that forwards traffic to a designated server and port in the remote network. This tunnel can be used to make a remote service appear to be a local service. We can use it to make a remote SAS metadata server port appear to be on the local workstation, so that a local installation of SAS Management Console can connect to it as if it had a local metadata server.

Here’s an example command to do this:

ssh -L 8561:sasmeta.example.com:8561 sasmid.example.com

This command connects via SSH to the machine sasmid.example.com and establishes a local port (8561) that forwards traffic to the remote host/port sasmeta.example.com:8561, as reachable from sasmid.example.com. If you are not using public key authentication (recommended), you will be prompted for a user id and password for the server. Once the connection is active, a local SAS Management Console can be started and will be able to access the remote SAS metadata server using a connection profile that connects to a metadata server on host/port localhost:8561. All traffic to this local port will be sent to the remote metadata server over the SSH tunnel.
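
If you don’t actually need an interactive shell on sasmid.example.com, OpenSSH’s -N option sets up the tunnel without running a remote command. A minimal variant, with an illustrative user name:

ssh -N -L 8561:sasmeta.example.com:8561 youruser@sasmid.example.com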

Bear in mind that this only makes the metadata server port available on the local machine, so the local SAS Management Console instance can only access the metadata server. It doesn’t necessarily make the client fully functional, as it may require additional connections to other servers, e.g. access to a SAS Object Spawner for a SAS Workspace Server session or a connection to the SAS Content Server. You could look into forwarding other ports (see the sketch below) and you will also need to modify your local hosts file to redirect the remote host names found in metadata to the localhost interface. For anything more complex than pure metadata server access it would probably be easier and more robust to use a VPN connection (or remote access via something like X or RDP to a client workstation inside the remote network).
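
As a sketch of that approach (the sasapp.example.com host name and the ports here are illustrative only; check the ports actually defined in your metadata): add entries to the local hosts file so the real host names resolve to the loopback interface, then forward each required port in the one SSH session.

# in /etc/hosts on the workstation (or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1   sasmeta.example.com sasapp.example.com

# forward the metadata server port plus, say, an object spawner port on an application server
ssh -N -L 8561:sasmeta.example.com:8561 -L 8581:sasapp.example.com:8581 sasmid.example.com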

SAS & JBoss: Too Many Open Files

I’ve been seeing some ‘Too many open files’ exceptions in the SAS® mid-tier JBoss logs on my Ubuntu Linux server. I was surprised about this because I remember during installation I had followed the guidance in the SAS documentation and increased the nofile limit. It turns out that verifying the ulimit from a normal console/SSH login was not sufficient; I should have also verified it from an su-based login too.

These were the sorts of messages I was seeing in the JBoss logs:

2012-03-11 09:22:38,463 ERROR [org.apache.tomcat.util.net.JIoEndpoint] Socket accept failed
java.net.SocketException: Too many open files
...
2012-03-11 08:57:19,454 ERROR [org.apache.catalina.core.StandardContext] Error reading tld listeners java.io.FileNotFoundException: /opt/jboss-4.2.3.GA/server/SASServer3/work/jboss.web/localhost/SASBIDashboard/tldCache.ser (Too many open files)
...

There are some Pre-Installation Steps for JBoss for both SAS 9.3 and SAS 9.2 that should be followed during installation to avoid these errors. These were the instructions I had followed but, as you’ll see in a moment, they weren’t quite enough for this (unsupported) Ubuntu installation.

The instructions specify to edit the /etc/security/limits.conf file directly. Rather than editing this main config file, where the settings might get forgotten or lost during an upgrade, I placed the settings I required for my SAS installation in their own dedicated config file: /etc/security/limits.d/sas.conf

# Increase the open file descriptors limit from the default of 1024 to 30720 for JBoss running web apps for SAS 9.2/9.3
* - nofile 30720

I knew this had taken effect because I had logged in via SSH as the sas user (I run JBoss as the sas user) to verify it. Checking the ulimit I saw the following:

sas@server:~$ ulimit -Hn;ulimit -Sn
30720
30720

With nofile at 30720, how was it that I was still getting ‘Too many open files’ errors? After a quick session on Google I found a blog post suggesting the increased limits will only apply if the pam_limits PAM module is enabled.

Checking the /etc/pam.d/login file I could see the pam_limits line was already present and uncommented:
...
session required pam_limits.so
...

This made sense since the console/ssh login showed the expected numbers.

Google also led me to a Stack Overflow question (how do I set hard and soft file limits for a non-root user at boot?). The answer provided there indicated that, for su commands, you also need to verify that the pam_limits module is enabled in an additional su-specific PAM config file, which on my machine was /etc/pam.d/su. My JBoss init script runs as root during system startup but uses su to run JBoss as the sas user. Looking in /etc/pam.d/su I could see that the pam_limits line was commented out, so perhaps that was the issue.
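
In other words, the relevant line in /etc/pam.d/su needed to go from its shipped, commented-out form to an active one (the exact spacing will vary by distribution):

# before
# session    required   pam_limits.so

# after
session    required   pam_limits.so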

Before making the necessary changes, I verified the nofile ulimit for the sas user by running su as root:

root@server:~# su sas --login --command 'ulimit -Hn;ulimit -Sn'
1024
1024

Aha! It had the 1024 default rather than the increased value. It looked like this was indeed the problem. I uncommented the pam_limits line in /etc/pam.d/su and repeated the test:

root@server:~# su sas --login --command 'ulimit -Hn;ulimit -Sn'
30720
30720

It now shows the increased value as expected, so it looks like the problem’s fixed. I restarted JBoss and haven’t seen any ‘Too many open files‘ errors since.
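
One extra way to double-check the limit that actually applies to the running JBoss process (rather than to a login shell) is to look at /proc. A quick sketch, where the pgrep pattern depends on how your JBoss instance was started:

# find the JBoss java process and show its effective open files limit
grep 'Max open files' /proc/$(pgrep -f 'jboss' | head -n 1)/limits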

Troubleshooting: sas.servers & hostname aliases

SAS® Service Management Script for Linux and UNIX: sas.servers

I ran into an interesting problem over the last few days with my SAS® 9.3 deployments on Linux. I had noticed that the sas.servers scripts for my Lev2 and Lev3 deployments were taking several minutes to start the SAS services. I know JBoss takes ages to start, but usually the SAS services start really quickly. I had assumed Remote Services was the culprit but, surprisingly, it turned out that starting the SAS/CONNECT Job Spawner service was the problem. Even more surprisingly, the underlying cause turned out to be the use of the hostname in the log filename, where the logical hostname alias used during deployment was different to the physical hostname of the machine it was deployed on.

To start the troubleshooting process I modified the sas.servers script to add some timing information to the console log messages it generated. Editing the sas.servers script is not necessarily the best option but more on that later. The sas.servers script has a function called logmsg which seems to have been very nicely provided in order to customize its logging. I wanted to make a simple change to add a time stamp so I could see where all the time was being spent. I changed the function from this (reformatted and comments omitted for brevity):

logmsg() {
    echo "$*"
}

… to this:

logmsg() {
    DT=`date +%Y-%m-%d:%H:%M:%S`
    echo "$DT: $*"
}

I stopped the services since they were already running:

root@sas93lnx01:~# cd /opt/ebiedieg/Lev3
root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers stop
2011-10-30:21:51:31: Stopping SAS servers

Then checked they were all stopped:

root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers status
2011-10-30:21:51:59: SAS servers status:
2011-10-30:21:51:59: SAS Metadata Server 1 is NOT up
2011-10-30:21:51:59: SAS OLAP Server 1 is NOT up
2011-10-30:21:51:59: SAS Object Spawner 1 is NOT up
2011-10-30:21:51:59: SAS Share Server 1 is NOT up
2011-10-30:21:51:59: SAS CONNECT Spawner 1 is NOT up
2011-10-30:21:51:59: SAS Remote Services 1 is NOT up
2011-10-30:21:51:59: SAS Framework Data Server 1 is NOT up

Once stopped, I started them again so I could see where all the time was spent:

root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers start
2011-10-30:21:54:54: Starting SAS servers
2011-10-30:21:55:03: SAS Metadata Server 1 is UP
2011-10-30:21:55:07: SAS OLAP Server 1 is UP
2011-10-30:21:55:11: SAS Object Spawner 1 is UP
2011-10-30:21:55:11: SAS Share Server 1 is UP
2011-10-30:21:56:12: SAS CONNECT Spawner 1 is NOT up
2011-10-30:21:56:28: SAS Remote Services 1 is UP
2011-10-30:21:56:32: SAS Framework Data Server 1 is UP

Total time to start was 98 seconds. Not too bad really, but it should be quicker than that. Everything seemed to start within a few seconds of each other except for the SAS/CONNECT Job Spawner! It looked like I was wrong to assume it was Remote Services. The SAS/CONNECT Job Spawner appeared to be taking 60 seconds to apparently not start. It had started though; this was confirmed by looking in the process list and the job spawner log file. It all seemed very odd for two reasons: firstly, the job spawner is usually very fast to start; secondly, 60 seconds is a bit of a convenient number; it sounded like a timeout.

The sas.servers script starts each SAS service in turn waiting for the service to become available before starting the next one. It sounded like sas.servers was waiting for some event from the job spawner and gave up after 60 seconds assuming it had not started. Time to dig into the sas.servers script a bit more.

The SAS/CONNECT job spawner is started by the sas.servers script in the start_connect_spawner function, which spawns the spawner (ConnectSpawner.sh start) in the background and then calls the is_atypical_server_up function to wait for it to finish starting. The is_atypical_server_up function seems to repeatedly sleep and grep the job spawner log for some trigger text. Effectively:

grep "SAS Job Spawner for Open Systems" /opt/ebiedieg/Lev3/ConnectSpawner/Logs/ConnectSpawner_sas93lnx01.log

Now I could see the problem. It was scanning a non-existent log file, ConnectSpawner_sas93lnx01.log, when it should have been scanning the ConnectSpawner_sas93lnx03.log file which was actually being written to by the job spawner. The log file name contained the wrong hostname. The name sas93lnx01 was the physical name of the machine, but sas93lnx03 was the hostname alias I had used when deploying the Lev3 environment. I prefer to use hostname aliases in deployments for the benefits they provide in being able to easily move deployments between physical machines and provide disaster recovery options. In this case the Lev3 environment was currently on the same machine as the Lev1 environment, but I knew one day I would move it onto another machine so had used a hostname alias sas93lnx03 (a DNS CNAME record) which I could easily redirect later to the appropriate physical machine name (DNS A record).

There was the problem. The SAS/CONNECT job spawner was using the deployment/logical hostname alias whereas the sas.servers script was using the physical hostname (which it got from `uname -n`). With the wrong log file name it could never find the log file and therefore never see the trigger text and would always time out after 60 seconds.
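
That’s easy to confirm on the machine itself, since uname -n reports the kernel’s node name, which here is the physical sas93lnx01:

root@sas93lnx01:~# uname -n
sas93lnx01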

There were at least two ways I could fix this problem. The first was to modify the ConnectSpawner.sh script to use the physical host name, rather than deployment host name alias, in the log file name. The second was to modify the sas.servers script to use the deployment hostname alias instead of the physical host name. That was the option I chose. It was the slightly more difficult of the two but I didn’t feel right telling the job spawner to log to a file with the wrong name :)

These were the changes I made to sas.servers to get it to work. Firstly I added an extra line where the SHOSTNAME environment variable is derived, changing this:

SHOSTNAME=`uname -n`

… to this:

SHOSTNAME=`uname -n`
SHOSTNAME_ALIAS="sas93lnx03"

Secondly I modified the start_connect_spawner function to change the name of the log file (as checked by the is_atypical_server_up function) from this:

SLOGNAME="ConnectSpawner_${SHOSTNAME}"

… to this:

SLOGNAME="ConnectSpawner_${SHOSTNAME_ALIAS}"

As it happens, the same problem occurs with the deployment tester server component too so I also edited the start_deployment_testsrv function changing this:

SLOGNAME="DeploymentTesterServer_${SHOSTNAME}"

… to this:

SLOGNAME="DeploymentTesterServer_${SHOSTNAME_ALIAS}"

However, there is a potential problem with this method of editing sas.servers directly; I alluded to it earlier in the post. The sas.servers script is generated, and any edits made to it will get wiped out if/when it is regenerated at a later date; the SAS documentation cautions against editing it for this reason. The script is created by the generate_boot_scripts.sh script, so the next time someone or something runs generate_boot_scripts.sh the changes to sas.servers will be lost.

To protect against this you could modify the templates that are used by generate_boot_scripts.sh. This is what I did. I made all the same changes described above in the single template /opt/ebiedieg/Lev3/Utilities/script_templates/sas.servers.mainlog so they would survive a regeneration. I wouldn’t necessarily recommend this though. It’s not a method documented by SAS Institute and I suspect these templates will most likely get changed at some point during a SAS software upgrade or hotfix. However, it works for me for the time being. Looking back, it probably would have been better to have just changed the ConnectSpawner.sh script to use the ‘wrong’ hostname ;)

With those changes done it was time to stop and start the servers to see any improvements:

root@sas93lnx01:/opt/ebiedieg/Lev3# ./sas.servers start
2011-10-30:23:54:30: Starting SAS servers
2011-10-30:23:54:39: SAS Metadata Server 1 is UP
2011-10-30:23:54:43: SAS OLAP Server 1 is UP
2011-10-30:23:54:47: SAS Object Spawner 1 is UP
2011-10-30:23:54:47: SAS Share Server 1 is UP
2011-10-30:23:54:49: SAS CONNECT Spawner 1 is UP
2011-10-30:23:55:06: SAS Remote Services 1 is UP
2011-10-30:23:55:10: SAS Framework Data Server 1 is UP

This time it took only 40 seconds (less than half the original time) and the SAS/CONNECT Job Spawner is now reported as being up.

As it turns out, I noticed that several of the other SAS services (like the metadata server) use the physical hostname in their log filenames. It all works fine, but I would have preferred they used the deployment hostname if it were possible. I briefly looked into ways of telling those servers to use the logical hostname alias instead but, since they use the SAS logging framework and the %S{hostname} conversion pattern, I couldn’t see any obvious way other than editing all of the config files and scripts post deployment. If someone knows a good way of consistently and automatically using the deployment hostname (the one provided during deployment and potentially different from the physical hostname) during installation then I’m all ears :)
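
If you want to see just how widespread that pattern is in a deployment, a recursive grep across the configuration directory lists the files that would need post-deployment edits. A quick sketch against my Lev3 configuration directory:

# list the configuration files that reference the %S{hostname} conversion pattern
grep -rl '%S{hostname}' /opt/ebiedieg/Lev3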