December 29, 2000

A secure NFS environment?

A lot of organizations do not realise the danger of NFS until they have been broken into by hostile crackers. This article gives a short description of the most common NFS-related problems and ways to avoid them. Since I mostly use Solaris, I'll try to stick to Solaris examples in this paper.

Problems: Unauthenticated NFS mounts.

Many sysadmins, including me, have set up uncontrolled NFS shares on Solaris boxes. There are plenty of excuses for this; my favourite is that I was just testing, or that someone else asked me to do it. No matter what the excuse is, it is tough to recover, morally as well as technically, if such a share is ever misused in a hostile attack.

As a matter of policy, shares should be restricted to specific hosts, especially if they are read-write. No NFS mounts should be allowed from hosts that are accessible from the Internet, and critical write-enabled NFS mounts should be avoided in a non-secure zone.
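On Solaris the place to enforce this is /etc/dfs/dfstab. A minimal sketch of a restricted share (the host names and path are made up):

    # /etc/dfs/dfstab -- restrict who may mount, instead of sharing to the world:
    share -F nfs -o ro=build1.example.com:build2.example.com,rw=app1.example.com /export/data

Hosts not named in the ro= or rw= lists are refused the mount outright.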

Problems: home directories

It is popular to use NFS for home directories, especially in developer environments where no one likes to update profiles all over the network. Most of the environments I've worked in had NFS set up this way. In such a network, the NFS directories are only as secure as the weakest machine on the network. It is usually a good practice in such a scenario to avoid giving "root" access to NFS clients.
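On Solaris, root access over NFS is only granted to clients named in the root= option of the share; leave it out and root on a client gets mapped to the anonymous user (nobody). A sketch, with made-up host names:

    # /etc/dfs/dfstab -- no root= option, so root on every client is squashed to nobody:
    share -F nfs -o rw=dev1.example.com:dev2.example.com /export/home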

Even if you think you can recover damage to the NFS directories from backups, you will have a difficult time if the cracker misuses the "r" commands to reach other servers on the network. Even if a user has a different password on each and every system on the network, the NFS home directories can effectively give a cracker access to the entire network if he sets up a .rhosts file. I've noticed that killing inetd and setting up ssh makes some admins feel a little more secure. Unfortunately, ssh allows exactly the same kind of access that an "r" command does. The only difference is that the session is safe from corporate sniffers, which in other words makes it even more dangerous.
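To see why, here is roughly what a cracker can do once he has write access to a user's NFS home directory. This is only a sketch, with made-up host names and Solaris mount syntax:

    # On the cracker's box, mount the victim's home directory:
    mount -F nfs fileserver.example.com:/export/home/victim /mnt
    # The classic r-command backdoor: trust every user from every host.
    echo "+ +" >> /mnt/.rhosts
    # The ssh equivalent gives exactly the same reach:
    mkdir -p /mnt/.ssh
    cat ~/.ssh/identity.pub >> /mnt/.ssh/authorized_keys

Either entry is enough to log in to the victim's account on any machine that mounts the same home directory.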

Problems: Trusted servers on NFS ?

Personally, I think any machine on an NFS network should be considered highly exposed to attack. If you really want to build a secure trusted server for remote management, the first thing you should do is shut down inetd and NFS completely, for the same reasons I explained above.
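A sketch of what that means on Solaris (these are the script names as found on Solaris 2.6/7/8; check your own release before copying them blindly):

    # Stop the running services:
    /etc/init.d/nfs.server stop
    /etc/init.d/nfs.client stop
    # Keep them from coming back at the next boot:
    mv /etc/rc3.d/S15nfs.server /etc/rc3.d/.NO_S15nfs.server
    # inetd is started from /etc/rc2.d/S72inetsvc; disable it the same way,
    # or empty out /etc/inet/inetd.conf and restart inetd.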

Problems: Suid on NFS ?

Well, now that you know how insecure NFS is, it's logical to conclude that if a suid binary on an NFS share is tampered with from one machine, it affects every system that runs it. Hence, avoid suid over NFS if possible and keep suid binaries on local drives. Run away from your manager, and act as if you didn't hear it, if he proposes enabling suid on NFS.
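If the files have to stay on NFS for some reason, at least mount them nosuid on every client. A sketch in Solaris syntax (the server and mount point names are made up):

    # One-off mount:
    mount -F nfs -o nosuid,hard,intr fileserver.example.com:/export/home /home

    # Or permanently, as a line in /etc/vfstab
    # (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options):
    fileserver.example.com:/export/home  -  /home  nfs  -  yes  nosuid,hard,intr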

Problems: Don't forget automounts.

/etc/dfs/dfstab is not the only place you have to be careful; check your automount maps too. If you use NIS+, you can centrally push more secure configurations to all your NFS clients.
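For example, with indirect maps you can force nosuid on every automounted home directory in one place (the server name is made up):

    # /etc/auto_master (or the NIS+ auto_master table):
    /home    auto_home    -nosuid

    # /etc/auto_home (or the NIS+ auto_home table):
    *    fileserver.example.com:/export/home/&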

Problems: FQDN please...

I work in an environment which has multiple domains, with multiple search domains listed in /etc/resolv.conf. It is, hence, prudent to use only FQDNs (fully qualified domain names) in your share lists.
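For example, the short name in the first share below could match a host in any of the search domains, depending on the resolution order; the second leaves no room for ambiguity (host names are made up):

    # Ambiguous -- "build1" may resolve differently depending on the search order:
    share -F nfs -o rw=build1 /export/src
    # Unambiguous:
    share -F nfs -o rw=build1.eng.example.com /export/src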

Problems: netgroups

I've heard some horror stories about netgroups. The biggest, I think, is that Solaris exports the directory to the entire world if someone misspells a netgroup name. That's a real horror story.
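A sketch of how that can happen (the netgroup and host names are made up). Whether a bad netgroup name really opens the share to the world depends on your Solaris release, so test it on your own systems rather than taking my word for it:

    # /etc/netgroup (or the NIS/NIS+ netgroup map):
    trusted-clients (ws1.example.com,,) (ws2.example.com,,)

    # /etc/dfs/dfstab -- note the typo "trusted-cleints"; an access list that
    # matches nothing can end up restricting nothing:
    share -F nfs -o rw=trusted-cleints /export/home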

Other improvements: Secure RPC

Solaris supports Secure RPC, which can make NFS a little more secure. Linux supports it too (I think).
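On the Solaris side this is just another share option, though it only helps if the clients and users already have their Diffie-Hellman keys set up (keylogin, publickey map). A minimal sketch, with a made-up netgroup name:

    # /etc/dfs/dfstab -- require AUTH_DES (Diffie-Hellman) credentials for this share:
    share -F nfs -o sec=dh,rw=trusted-clients /export/home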

http://www.cco.caltech.edu/~refguide/sheets/nfs-security.html#intro

http://www.lanl.gov/projects/ia/stds/ia7a01.html

December 02, 2000

Problems in Load Balancing

With the expansion of the Internet, the user base of most sites is growing exponentially. However, the speed of the servers themselves is not growing fast enough. It is hence logical to conclude that these services have to be set up on multiple servers. Depending on what kind of service you are providing, this could be a trivial task.

Problem 1
However, some applications are very touchy about which server the client connects to after the first hit. It is possible that the service itself is not designed to let a user switch between two servers mid-session without interrupting the service. This requires some sort of session management to keep users stuck to one server after they log in.

Solution 1
There are three primary ways of getting this done. The first, and most common, is to set up the load balancer to balance on the source IP of the client. This makes sure that the client browser always goes to the same server for that particular session. This solution will work for most service providers.

Problem 2
However, if the client IP address is actually the IP of a proxy server, all the clients behind that proxy may end up on the same server, with the undesirable effect of overloading that particular server. An even worse scenario is when requests are fetched through distributed cache engines: the same client reaches the servers through different cache engines, and since each cache engine has its own IP, the requests can land on different servers, which could still break the application.

Solution 2
The second solution, which is more sane on this front, is the use of cookies. Most load balancers today are capable of understanding cookies set by the servers and redirecting the client to the right one. Most of them can also issue their own cookies to do the job.

Some good places to look for more information on cookies: http://www.cookiecentral.com/ , http://www.epic.org/privacy/internet/cookies/ , http://www.cis.ohio-state.edu/rfc/rfc2109.txt , http://developer.netscape.com/docs/manuals/communicator/jsguide4/cookies.htm

What is a cookie ?

A cookie is nothing but a small piece of text set on your browser by the server and sent back to the server every time your browser connects to it. I won't be telling you anything new when I say that this is a security/privacy problem, since the server can literally track you every time you log in and log out.
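In HTTP terms the exchange looks roughly like this; the cookie name, value and URLs are made up for illustration. The server's reply to the first request carries a Set-Cookie header:

    HTTP/1.0 200 OK
    Set-Cookie: SERVERID=web02; path=/
    Content-Type: text/html

Every later request from that browser sends it back:

    GET /account/home.html HTTP/1.0
    Host: www.example.com
    Cookie: SERVERID=web02

A cookie-aware load balancer simply reads SERVERID and pins the session to that server.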

Problem
There are a few problems with this implementation, however. I still can't get everything working yet; I don't know why, but here is a gist of the problems I've noticed so far.

The first, and probably the biggest, problem in implementing this solution is that many security-aware organizations and users are switching off cookies in their browsers. This will almost always break applications which are cookie-dependent. Sites like DoubleClick have contributed a lot to this problem, about which you can read more at http://www.epic.org/privacy/internet/cookies/

Make sure your web server is configured to expire your dynamic pages: http://www.mnot.net/cache_docs/#IMP-SERVER
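A sketch for Apache, assuming mod_expires and mod_headers are compiled in; the /app location is made up, so scope it to wherever your dynamic pages actually live:

    <Location /app>
        # Tell caches the page is stale as soon as it is served:
        ExpiresActive On
        ExpiresDefault "access plus 0 seconds"
        # Belt and braces for caches that prefer Cache-Control:
        Header set Cache-Control "no-cache, must-revalidate"
    </Location>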

Even ignoring the first problem, I still couldn't get the cookie working properly with some of the cache engines on the net. Some cache engines don't honour the Expires header at all, making it difficult for the server to force an expiry.

The third problem depends on which kind of load balancer you are running. Some load balancers, like Resonate, F5 and I think ArrowPoint too, work better when they themselves hand out the cookies. Most proxy implementations check for cookies when the browser sends a request with a cookie attached. However, the actual issuing of the cookie happens in the previous GET/POST request, when the server replies with a Set-Cookie. Resonate has a design issue due to which it can't handle this cookie (they call this problem "the first-hit bug"), and I've noticed similar problems with other load balancers too. The solution, of course, is to ignore the server-set cookies and use cookies set by the load balancer itself for load balancing.

Solution 3
The third solution to this entire problem is to tag the URL itself with an ID that changes with each session. Take this URL for example:
http://security.royans.net/test.html?THISISASESSIONID=1234

As you can see, even though this is a static HTML page, the URL has an ID attached to it which can be used when the client connects back. The "Referer" header always lists the URL that sent the browser to the new link, so a load balancer can use this value to track a user and keep him or her on the same server.
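For instance, a request for a link followed from that page might look like this (the /next.html path is made up), and the load balancer can read the ID out of either the query string or the Referer:

    GET /next.html?THISISASESSIONID=1234 HTTP/1.0
    Host: security.royans.net
    Referer: http://security.royans.net/test.html?THISISASESSIONID=1234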

Though the problem is simple, there are a lot of hurdles in implementing the right solution. The information I've gathered here is based on my own experience, and I'll be pleased to correct any factual errors in this document. Contributions of more information about load balancer implementations are always welcome.