December 29, 2000

A secure NFS environment?

Many organizations do not realize the danger of NFS until they have been broken into by hostile crackers. This article gives a short description of the most common NFS-related problems and ways to avoid them. Since I mostly use Solaris, I'll stick to Solaris examples in this paper.

Problems: Unauthenticated NFS mounts.

Many sysadmins, including me, have set up uncontrolled NFS shares on Solaris boxes. There are many possible excuses for this; my favorite is that I was just testing, or that someone else asked me to do it. No matter what the excuse is, it is tough to recover morally from a hostile attack if a share is ever misused.

As a matter of policy, shares should be restricted to specific hosts, especially if they are write-enabled. No NFS mounts should be allowed from hosts which are accessible from the Internet, and critical write-enabled NFS mounts should be avoided in any non-secure zone.
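On Solaris, this policy translates into explicit access lists in /etc/dfs/dfstab. A sketch of what restricted shares might look like (the hostnames and paths are hypothetical examples, not from the original article):

```shell
# /etc/dfs/dfstab -- share /export/data read-only, and only to two
# named internal hosts (hostnames here are examples)
share -F nfs -o ro=build1.corp.example.com:build2.corp.example.com /export/data

# a write-enabled share should name its clients explicitly as well,
# never default to "share -F nfs /export/scratch" (open to everyone)
share -F nfs -o rw=build1.corp.example.com /export/scratch
```

Running `share` with no options at all exports to the world, which is exactly the uncontrolled setup described above.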

Problems: home directories

It is popular to use NFS for home directories, especially in developer environments where no one likes to update profiles all over the network. Most of the environments I've worked with had NFS set up this way. In such a network, the NFS directories are only as secure as the weakest machine on the network. It is usually good practice in such a scenario to avoid granting "root" access to NFS clients.
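On Solaris, a client's root user is mapped to the unprivileged "nobody" user by default; real root access is only granted to hosts explicitly listed in a root= option. So the safest configuration is simply to leave root= out (hostnames below are hypothetical):

```shell
# /etc/dfs/dfstab
# BAD: root on adminhost gets real uid 0 on the share
share -F nfs -o rw=adminhost.corp.example.com,root=adminhost.corp.example.com /export/home

# BETTER: client root is squashed to "nobody" on every host
share -F nfs -o rw=adminhost.corp.example.com /export/home
```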

Even if you think you can recover damage to the NFS directories from backups, you will have a difficult time if a cracker misuses the "r" commands to reach other servers on the network. Even if a user has a different password on each and every system on the network, NFS home directories can effectively give a cracker access to the entire network if he sets up a .rhosts file. I've noticed that killing inetd and setting up ssh makes some admins feel a little more secure. Unfortunately, ssh allows exactly the same kind of trust-based access that the "r" commands do. The only difference is that the session is immune to sniffing by corporate sniffers, which in other words makes it more dangerous.
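To make the attack concrete, a sketch of the file fragments involved (hostnames and usernames are hypothetical). One writable NFS home directory is all it takes:

```shell
# one line appended to an NFS-mounted ~victim/.rhosts lets "attacker"
# on evilhost rsh/rlogin as victim, with no password, on every server
# that mounts this home directory:
evilhost.example.com attacker

# the ssh equivalent is one public key appended to the same user's
# NFS-mounted ~/.ssh/authorized_keys -- the trust file lives in the
# same shared home directory, so switching to ssh changes nothing:
ssh-rsa AAAA...attacker-public-key... attacker@evilhost
```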

Problems: Trusted servers on NFS ?

Personally I think any machine on an NFS network should be considered open to attack to the greatest degree. If you really want to build a secure trusted server for remote management, the first thing you should do is shut down inetd and NFS completely. This is for the same reasons as I explained above.

Problems: Suid on NFS ?

Well, now that you know how insecure NFS can be, it is logical to conclude that if a setuid binary on an NFS share is replaced on one machine, it affects every system which mounts it. Hence, avoid setuid binaries on NFS if possible and keep them on a local drive. Run away from your manager and try to act as if you didn't hear it if he proposes to enable setuid on NFS.
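On the client side this is enforceable with the nosuid mount option, which makes the kernel ignore setuid and setgid bits on the mounted filesystem. A hypothetical /etc/vfstab entry (server name and paths are examples):

```shell
# /etc/vfstab -- mount NFS home directories with nosuid so that
# setuid/setgid bits on the share are ignored locally
#device              device   mount   FS    fsck  mount    mount
#to mount            to fsck  point   type  pass  at boot  options
nfssrv:/export/home  -        /home   nfs   -     yes      nosuid,hard,intr
```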

Problems: Don't forget automounts.

/etc/dfs/dfstab is not the only place you have to be careful; check your automount maps too. If you use NIS+, you can centrally push more secure configurations to all your NFS clients.
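The automounter can carry the same restrictive mount options as vfstab. A hypothetical pair of map entries (server name is an example):

```shell
# /etc/auto_master -- hand /home over to the auto_home map,
# with nosuid applied to every mount the map produces
/home   auto_home   -nosuid

# /etc/auto_home -- per-user wildcard entry; "&" expands to the key
*       nfssrv.corp.example.com:/export/home/&
```

Pushed through NIS+, one corrected map fixes every client at once instead of requiring a visit to each machine.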

Problems: FQDN please...

I work in an environment which has multiple domains, with multiple search domains listed in /etc/resolv.conf. It would hence be prudent to use only FQDNs (fully qualified domain names).
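The danger with short names is that the resolver may match them in more than one search domain, so a share can end up trusting a different machine than you intended. A sketch (domains and hostnames are hypothetical):

```shell
# /etc/resolv.conf with multiple search domains
search  eng.example.com  corp.example.com

# "buildhost" could resolve to buildhost.eng.example.com OR
# buildhost.corp.example.com, depending on search order:
share -F nfs -o rw=buildhost /export/src                    # ambiguous
share -F nfs -o rw=buildhost.eng.example.com /export/src    # explicit
```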

Problems: netgroups

I've heard some horror stories about netgroups. The biggest, I think, is that Solaris exports the directory to the entire world if someone misspells a netgroup name. That's a real horror story.
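For reference, a netgroup is a named list of (host,user,domain) triples, defined in /etc/netgroup or served via NIS, and referenced by name in the share access list. A hypothetical example of the setup where one typo matters:

```shell
# /etc/netgroup -- "trusted-clients" is a hypothetical netgroup
trusted-clients (build1.corp.example.com,,) (build2.corp.example.com,,)

# /etc/dfs/dfstab -- the access list names the netgroup directly;
# if the name here is misspelled (e.g. "trusted-client"), it matches
# no defined group, and the horror story is that the failure mode can
# be an open export rather than a denied one -- so double-check it
share -F nfs -o rw=trusted-clients /export/src
```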

Other improvements: Secure RPC

Solaris supports Secure RPC, which can make NFS communication a little more secure. Linux supports it too, I think.
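On Solaris, Secure RPC is requested per share with the sec=dh (Diffie-Hellman) security mode; users and hosts then need DH keys established. A sketch (hostnames and paths are examples):

```shell
# /etc/dfs/dfstab -- require AUTH_DH (Secure RPC) credentials
share -F nfs -o sec=dh,rw=build1.corp.example.com /export/secure

# each user also needs a Diffie-Hellman key pair, roughly:
#   newkey -u username    # run by root, creates the user's key
#   keylogin              # run by the user, decrypts the key for use
```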

http://www.cco.caltech.edu/~refguide/sheets/nfs-security.html#intro

http://www.lanl.gov/projects/ia/stds/ia7a01.html