December 29, 2000

A secure NFS environment?

A lot of organizations do not realise the danger of NFS until they have been broken into by hostile crackers. This article gives a short description of the most common NFS-related problems and the means to avoid them. Since I mostly use Solaris, I'll try to stick to Solaris examples in this paper.

Problems: Unauthenticated NFS mounts.

Many sysadmins, including me, have set up uncontrolled NFS shares on Solaris boxes. There are plenty of excuses for this; my favourite is that I was just testing, or that someone else asked me to do it. No matter what the excuse is, it is tough to recover, morale-wise, from a hostile attack if such a share is ever misused.

As a matter of policy, shares should be restricted to specific hosts, especially if they are read-write. No NFS mounts should be allowed from hosts that are accessible from the Internet, and critical write-enabled NFS mounts should be avoided in a non-secure zone.
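For instance, a restricted entry in /etc/dfs/dfstab might look something like this (the host names and paths here are just placeholders):

# /etc/dfs/dfstab -- read-only, limited to two named clients
share -F nfs -o ro=build1.eng.example.com:build2.eng.example.com /export/docs
# read-write only where it is really needed
share -F nfs -o rw=build1.eng.example.com /export/scratch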

Problems: home directories

It is popular to use NFS for home directories, especially in developer environments where no one likes to update profiles all over the network. Most of the environments I've worked in had NFS set up this way. In such a network, the NFS directories are only as secure as the weakest machine on the network. It is usually good practice in such a scenario to avoid giving "root" access over NFS.
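On Solaris, simply not listing any host in a root= option keeps a client's root mapped to the unprivileged anonymous user. A sketch of such a home-directory share (host names made up):

# /etc/dfs/dfstab -- no root= option, so root on the clients is squashed to nobody
share -F nfs -o rw=dev1.eng.example.com:dev2.eng.example.com /export/home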

Even if you think you can recover damage to the NFS directories from backups, you will have a difficult time if the cracker misuses the "r" commands and reaches other servers on the network. Even if a user has a different password on each and every system on the network, the NFS home directories can effectively give a cracker access to the entire network once he sets up a .rhosts file. I've noticed that killing inetd and setting up ssh makes some admins feel a little more secure. Unfortunately, ssh allows exactly the same kind of access that an "r" command does: instead of editing .rhosts, the attacker simply drops a key into ~/.ssh/authorized_keys in the same exported home directory. The only difference is that the session is now safe from corporate sniffers, which, in other words, makes it more dangerous.
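To see why, consider what a cracker on any compromised NFS client could do to a victim's exported home directory (illustrative commands only, with made-up names):

# with the victim's home directory mounted read-write on the compromised client
echo "evil.example.com attacker" >> /home/victim/.rhosts        # opens up rsh/rlogin
cat attacker_key.pub >> /home/victim/.ssh/authorized_keys       # opens up ssh just the same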

Problems: Trusted servers on NFS ?

Personally, I think any machine on an NFS network should be considered wide open to attack. If you really want to build a secure, trusted server for remote management, the first thing you should do is shut down inetd and NFS completely, for the same reasons explained above.

Problems: Suid on NFS ?

Now that you know how insecure NFS is, it is logical to conclude that if a suid binary on an NFS share is modified on one machine, it affects every system that runs it. Hence, avoid suid over NFS if possible and keep suid binaries on local drives. Run away from your manager and pretend you didn't hear it if he proposes enabling suid over NFS.
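On the client side, mounting with nosuid at least stops setuid bits from being honoured on NFS file systems; something along these lines (server name and paths are placeholders):

# one-off mount
mount -F nfs -o ro,nosuid tools.eng.example.com:/export/tools /opt/tools
# or the equivalent /etc/vfstab entry
tools.eng.example.com:/export/tools  -  /opt/tools  nfs  -  yes  ro,nosuid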

Problems: Don't forget automounts.

/etc/dfs/dfstab is not the only place you have to be careful; check your automount maps as well. If you use NIS+, you can centrally push more secure configurations to all your NFS clients.
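A small example of tightening the automounter maps themselves (a hypothetical auto_home setup; pushing these via NIS+ works the same way):

# /etc/auto_master -- apply nosuid to everything mounted under /home
/home    auto_home    -nosuid
# /etc/auto_home -- wildcard map pointing at the home-directory server
*    homesrv.eng.example.com:/export/home/&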

Problems: FQDN please...

I work in an environment which has multiple domains, with multiple search domains listed in /etc/resolv.conf. It is, hence, prudent to use only FQDNs (fully qualified domain names) in your share and mount options.
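In other words, prefer the first form below over the second (hypothetical host again):

share -F nfs -o rw=build1.eng.example.com /export/src     # unambiguous
share -F nfs -o rw=build1 /export/src                     # depends on the resolver's search list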

Problems: netgroups

I've heard some horror stories about netgroups. The biggest, I think, is that Solaris exports the directory to the entire world if someone misspells a netgroup name. That's a real horror story.
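The failure mode is easy to sketch: one typo in the access list and the restriction no longer matches anything you intended (hypothetical names):

# /etc/netgroup (or the NIS netgroup map)
trusted    (dev1.eng.example.com,,) (dev2.eng.example.com,,)
# /etc/dfs/dfstab -- note the misspelt netgroup name
share -F nfs -o rw=trutsed /export/home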

Other improvements: Secure RPC

Solaris supports Secure RPC, which can make NFS a little more secure. Linux supports it too (I think).
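With Secure RPC the share might look something like this (just a sketch; every host and user involved needs Diffie-Hellman keys set up first with newkey and keylogin):

# /etc/dfs/dfstab -- require AUTH_DH authenticated requests
share -F nfs -o sec=dh,rw=dev1.eng.example.com /export/home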

http://www.cco.caltech.edu/~refguide/sheets/nfs-security.html#intro

http://www.lanl.gov/projects/ia/stds/ia7a01.html

December 02, 2000

Problems in Load Balancing

With the expansion of the Internet, the user base of most sites is growing exponentially. However, the speed of the servers themselves is not growing fast enough, so it is logical to conclude that these services have to be set up on multiple servers. Depending on what kind of service you are providing, this could be a trivial task.

Problem 1
However, some applications are very touchy about which server the client connects to after the first hit. It is possible that the service itself is not designed to let a user switch between servers without interrupting the session. This requires some sort of session management to make users stick to one server after they log in.

Solution 1
There are three primary ways of getting this done. The first and foremost is to set up the load balancer to balance on the source IP of the client. This makes sure that the client browser always goes to the same server for that particular session. This solution will work for most service providers.
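A minimal sketch of what the load balancer does internally, assuming nothing more than a hash of the source address over the server pool (Perl; the server names are made up):

#!/usr/bin/perl
# source-IP persistence sketch: the same client IP always maps to the same back-end server
use strict;

my @servers = ("web1.example.com", "web2.example.com", "web3.example.com");

sub pick_server {
    my ($client_ip) = @_;
    my $hash = 0;
    $hash = ($hash * 31 + $_) % 65521 for split /\./, $client_ip;   # fold the four octets
    return $servers[$hash % @servers];                              # stable choice per IP
}

print pick_server("10.1.2.3"), "\n";    # prints the same server name every time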

Problem 2
However, if the client IP address is that of a proxy server, all the clients behind that IP might end up on the same server, which has the undesirable effect of overloading that particular server. An even worse scenario is when requests pass through distributed cache engines, so that requests from the same client arrive via different cache engines; since each cache engine has its own IP, those requests can land on different servers, which could still break the application.

Solution 2
The second, and saner, solution on this front is the use of cookies. Most load balancers today are capable of understanding cookies set by the servers and of directing the client to the right one. Most of them can also issue their own cookies to do the job.

Some good places to look for more information on cookies: http://www.cookiecentral.com/ , http://www.epic.org/privacy/internet/cookies/ , http://www.cis.ohio-state.edu/rfc/rfc2109.txt , http://developer.netscape.com/docs/manuals/communicator/jsguide4/cookies.htm

What is a cookie ?

A cookie is nothing but a small piece of text set in your browser by the server and sent back to that server every time your browser connects to it. I won't be telling you anything new when I say that this is a security/privacy problem, since the server can literally track you every time you log in and log out.
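Roughly, the exchange looks like this (host name and cookie value are made up; the load balancer just reads the cookie coming back and keeps sending that browser to the same server):

# first response from the server (or from the load balancer itself)
HTTP/1.1 200 OK
Set-Cookie: SERVERID=web2; path=/

# every later request from that browser carries it back
GET /account/home HTTP/1.1
Host: shop.example.com
Cookie: SERVERID=web2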

Problem
There are a few problems with this implementation, however. I still can't get everything working; I don't know why yet, but here is a gist of the problems I've noticed so far.

The first and probably the biggest problem in implementing this solution is that many security-aware organizations and users are switching off cookies in their browsers, which will almost always break applications that depend on them. Sites like DoubleClick have a lot to do with this trend, about which you can read more at http://www.epic.org/privacy/internet/cookies/

Make sure your web server is configured to expire your dynamic pages: http://www.mnot.net/cache_docs/#IMP-SERVER
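With Apache, for example, something along these lines should do it (assuming mod_expires and mod_headers are available; the file pattern is just a guess at where your dynamic pages live):

# httpd.conf sketch: mark dynamic output as immediately stale
<FilesMatch "\.(cgi|pl|php)$">
    ExpiresActive On
    ExpiresDefault "access plus 0 seconds"
    Header set Cache-Control "no-cache, must-revalidate"
</FilesMatch>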

Even ignoring the first problem, I still couldn't get cookies working properly with some of the cache engines on the net. Some cache engines don't honour the expiry headers at all, making it difficult for the server to force an expiry.

The third problem depends on which kind of load balancer you are running. Some load balancers, like Resonate, F5 and, I think, ArrowPoint too, work better when they hand out the cookies themselves. Most implementations check for persistence when the browser sends back a request with a cookie attached; however, the actual issuing of the cookie happens in the previous GET/POST request, when the server replies with a Set-Cookie header. Resonate has a design issue due to which it cannot handle that first cookie (they call this "the first hit bug"), and I've noticed similar problems with other load balancers too. The solution, of course, is to ignore the server-set cookies and use cookies set by the load balancer itself for load balancing.

Solution 3
The third solution to this entire problem is to tag the URL itself with an ID which changes with each session. Take this URL for example:
http://security.royans.net/test.html?THISISASESSIONID=1234

As you can see, even though this is a static HTML page, the URL has an ID attached to it which can be used when the client connects back. The "Referer" header always lists the last URL which sent the client browser to the new link, so either the request URL or the Referer can be used by a load balancer to track a user and keep him/her on the same server.
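The load balancer (or a quick test script) only has to pull that ID back out of the request URL or the Referer value; a Perl fragment along these lines (the parameter name matches the example above):

#!/usr/bin/perl
# extract the session ID from a request URL or a Referer value
use strict;

sub session_id {
    my ($url) = @_;
    return $url =~ /[?&]THISISASESSIONID=(\w+)/ ? $1 : undef;
}

print session_id("http://security.royans.net/test.html?THISISASESSIONID=1234"), "\n";   # prints 1234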

Though the problem sounds simple, there are lots of hurdles in implementing the right solution. The information gathered here is based on my own experience, and I'll be pleased to correct any factual errors in this document. Contributions of more information about load balancer implementations are always welcome.

June 06, 2000

Feb 2000 Attack: DDoS Attack - Analysis

BOOKMARK: http://staff.washington.edu/dittrich/misc/ddos/
BOOKMARK: Max Vision Network Security & Penetration Testing website
BOOKMARK: MIXTER Security

May 15, 2000
    FAQ DoS FAQ This FAQ covers denial of service attacks (DoS) in great depth, and has links to software that can be used to execute DoS attacks. Also: DDoS Research

May 15, 2000
    PAPER On Magic, IRC wars, and DDoS The recent attacks against major Internet sites are "magical" in the same fashion. The public doesn't know how the hacks are done, and imagines all sorts of things. It is much simpler than that.

May 15, 2000
    DISCUSSION(Political) THE WAR ON HACKERS This last week we have been inundated with press about distributed denial-of-service (DDoS) attacks against the portal and shopping sites at Yahoo, Amazon, eBay, CNN.com, Buy.com, ZDNet, E*Trade, and Excite.com. Is it just me, or does anyone else notice the dire message exposed by these news stories?

May 15, 2000
    ARTICLE Network Services in an Uncooperative Internet By hacking your TCP stack, you could theoretically do a number of things to increase your TCP transmission performance. Essentially, you could just "turn down" or completely eliminate the limits that TCP was designed to introduce. Your hacked stack would transmit as fast as it can and everyone else's TCP stack will kindly get out of the way in the face of so much usage.

May 13, 2000

HEADLINES NEWS.COM:HEADLINES
HEADLINES WIRED.COM:IS/IT Infostructure

May 2, 2000

    PAPER Analysis of mstream, a DDoS tool This is an analysis of "mstream", a distributed denial of service (DDoS) attack tool, based on the source code of "stream2.c", a classic point-to-point DoS attack tool

May 1, 2000
    PAPER Source code to mstream, a DDoS tool It's been alleged that this source code, once compiled, was used by persons unknown in the distributed denial of service (DDoS) attacks earlier this year.

Apr 26, 2000
    PAPER A study of TCP and the DDOS problem By exploiting features inherent to TCP protocol remote attackers can perform denial of service attacks on a wide array of target operating systems. The attack is most efficient against HTTP servers.

Mar 17, 2000
    LINK 138238 Misconfigured Networks!! This site polls to find out how many misconfigured networks are out there waiting to be used to originate DDoS attacks. I hope the people who manage these networks understand what they are doing and correct it immediately.

Feb 20, 2000
    INFO: More on DDoS: Excerpted from the upcoming book Uberhacker (happyhacker.org)

February 13, 2000

DNS Information hiding

One of the funniest ways of using DNS is to hide information in it. DNS, as the name implies, is meant for distributing domain information; however, some people who think differently had other ideas about it. I used the idea to hide one of my Perl programs in a DNS server I have access to. Execute the following line as a single command and wait for the outcome.
dig @beta.royans.net beta.royans.net axfr | grep '^host' | sort | cut -b8-39 | perl -e 'while(<>){print pack("H32",$_)}' | gzip -qd

How does the real DNS zone look?
It's pretty dirty :) but have a look anyway.
dig @ns1.granitecanyon.com royans.dhs.org axfr

; <<>> DiG 8.2 <<>> @ns1.granitecanyon.com royans.dhs.org axfr
; (1 server found)
$ORIGIN royans.dhs.org.
@ 12H IN SOA ns1.granitecanyon.com.
rkt.pobox.com. (
153313462 ; serial
6H ; refresh
3H ; retry
1W ; expiry
12H ) ; minimum

12H IN NS ns1.granitecanyon.com.
12H IN NS ns2.granitecanyon.com.
12H IN A 209.163.245.213
12H IN RP rkt.pobox.com. @
host12.41b30512676bb8956b2a0cfcc273aae9 12H IN A 192.168.5.12
host26.64709224e45be244e57c6e5a3a0cf1d2 12H IN A 192.168.5.26
host50.1ab62f7d0cffa7c12ecfc28fdebeb4e9 12H IN A 192.168.5.50
host34.fef349259c2cf8175f7d02ed763cc06d 12H IN A 192.168.5.34
host18.9f9995ca9abe2a4384afda280a97e94b 12H IN A 192.168.5.18
localhost 12H IN A 127.0.0.1
host39.5390401b3c346be12e4bb85c4be3e1be 12H IN A 192.168.5.39
host09.215866b91460d84bc544c64054cb19d3 12H IN A 192.168.5.9
host06.e1a7a3cae8a31917478ae9d2f30ee1be 12H IN A 192.168.5.6
host13.820ad00b7ba6e44cfee96772f9ee843f 12H IN A 192.168.5.13
host36.de25f5463337214c325647c1f4f1f889 12H IN A 192.168.5.36
host15.eeaa1f9cd8027e539d15709c2489e755 12H IN A 192.168.5.15
host46.f06d0763793bf8ae83b1de1d7ce9e02d 12H IN A 192.168.5.46
host32.7fdec7f163f2046f6f4e2a97585b737a 12H IN A 192.168.5.32
host03.5bc89644724b8ac6a485b502a181d0b4 12H IN A 192.168.5.3
host48.7cb27d925b664cc04fdbfb818985de08 12H IN A 192.168.5.48
host31.6b87472e3bf2c73e69683139bf9c5e8f 12H IN A 192.168.5.31
host14.5aaa82c22bfabb9a927e927c87344642 12H IN A 192.168.5.14
host08.a52aa96560255041cbb561707f718364 12H IN A 192.168.5.8
host49.5c7f1da7152148c95b8ac0711c6c3c56 12H IN A 192.168.5.49
host37.7c98638c6c548802e3ca2441de2e4649 12H IN A 192.168.5.37
host04.0af1e31345959bb8c46a6a1bdba1ab42 12H IN A 192.168.5.4
host45.660b236b079f77309aefe09b0ec66a76 12H IN A 192.168.5.45
host23.4e6da68682ad3ed41ed50e1bd59b8df3 12H IN A 192.168.5.23
host19.ba4a6d5aa426cd43cc08de8603247c28 12H IN A 192.168.5.19
host35.53c24ea5eba2add3722c79086e5946f1 12H IN A 192.168.5.35
host43.864feec535e304ff1ef085b5013e041c 12H IN A 192.168.5.43
host21.e4ec7518325b24a8177035a579ae8741 12H IN A 192.168.5.21
host11.a6e16afc00253799cb6c1258cb5fa16d 12H IN A 192.168.5.11
host33.ea6d3a034451df28cd859d47fee7aaff 12H IN A 192.168.5.33
host44.6e39b17ed7c2f67db84662a8b7290ffe 12H IN A 192.168.5.44
host29.dc4f37380753cd80ab6341970c6a6fb9 12H IN A 192.168.5.29
host47.bd79642368471f7b07b577b0654dec00 12H IN A 192.168.5.47
host10.a6ffafe08c71f10c9a958c1a96432572 12H IN A 192.168.5.10
host17.4f31e76515a52aa32ac640637b5c3308 12H IN A 192.168.5.17
host42.b736c62ffae157370c74c42f5a366296 12H IN A 192.168.5.42
host28.49e8ce486f40265fb282658b269c9ced 12H IN A 192.168.5.28
host16.d8c798d934bda5abeb9bd326be93686b 12H IN A 192.168.5.16
host07.e006329933c075a5b9b54c003540c1b2 12H IN A 192.168.5.7
host22.cde76e83b3b151bca91d6b83e7cacdc4 12H IN A 192.168.5.22
host02.652e706c006553ef4fdb3010fd4cfe8a 12H IN A 192.168.5.2
host25.d85173f641ac37722817dc46ce897402 12H IN A 192.168.5.25
host24.9ef3b225f95c81d212af6d32c1d87712 12H IN A 192.168.5.24
host38.6d316a17149be29f8b04957e6e221228 12H IN A 192.168.5.38
host51.a4cd9ef4e1bba48bf17dfe054c0e35b9 12H IN A 192.168.5.51
host01.1f8b080897c3cf380003616e616c697a 12H IN A 192.168.5.1
host30.86c0f58f4ea6e073bc8e08075f486367 12H IN A 192.168.5.30
host40.398e2caa393e8bc7b0710849d858844f 12H IN A 192.168.5.40
host41.a4792d08cb4a67cce1cc585cb666b843 12H IN A 192.168.5.41
host27.c4a205a24e11497c0ed11c7b8c6b28b1 12H IN A 192.168.5.27
host52.d7050000 12H IN A 127.0.0.52
host05.fff79d9392d24daa62dff3bbf7eeceee 12H IN A 192.168.5.5
host20.0d04b32ba91760849438f37ebf0f9e17 12H IN A 192.168.5.20
@ 12H IN SOA ns1.granitecanyon.com.
rkt.pobox.com. (
153313462 ; serial
6H ; refresh
3H ; retry
1W ; expiry
12H ) ; minimum

;; Received 59 answers (59 records).
;; FROM: torque to SERVER: 205.166.226.38
;; WHEN: Sat Jun 10 23:03:12 2000

How to do it

There are two different sources I have for this. I wrote the first one on a cold winter day when I had nothing to do, and the other I received in my mail to show me what a pathetic Perl coder I am :) A rough sketch of the encoding side follows the links below.
My Code
A better code by Ramki(at)vtc.taos.com
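Here is a minimal sketch of what the encoding side might look like. This is not either of the scripts above, just something that produces records the dig pipeline can put back together; the 192.168.5.x addresses simply mirror the example zone.

#!/usr/bin/perl
# encode.pl -- sketch: split a gzipped file into 16-byte chunks and print them
# as A records whose host names carry the data in hex. Two-digit numbering keeps
# the lexical sort on the decode side in order (fine for up to 99 chunks).
use strict;

my $file = shift or die "usage: encode.pl file.gz\n";    # gzip the payload first
open(my $in, "<", $file) or die "cannot open $file: $!";
binmode $in;

my ($n, $chunk) = (0, "");
while (read($in, $chunk, 16)) {
    $n++;
    my $hex = unpack("H*", $chunk);                      # 32 hex chars per full chunk
    printf "host%02d.%s 12H IN A 192.168.5.%d\n", $n, $hex, $n;
}
close $in;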

January 01, 2000

Linux in 2000

The other day, someone asked me what my worst nightmare was. I told him, "Linux world domination." Surprising as it may seem coming from a Linux advocate, the fact is that this issue is being debated in Linux circles the world over. In the absence of competition, Linux may not have much to look forward to.

However, since that will take a long time to happen, let us look back at how Linux got where it is today and where it is headed in the near future.

Battling Since Birth

Linux was born in the midst of the Minix generation of Intel 286s. Minix, which did not allow free distribution of code, was the de facto OS for university curricula back then (1991). After winning over Minix, Linux fought the software crunch of the early 90s and integrated many common Unix tools ported over in the first quarter of the decade. And just before it stood against Microsoft in the late 90s to fight for a piece of the desktop segment, Linux fought hard to get X Window applications ported to it. And today, as I look back at the decade gone by, Linux is fighting its toughest battle yet: the people. In short, Linux can succeed only as much as people can accept it.

Two years ago I wasn't so sure if Linux would ever gain commercial acceptance. Linux was always developed by and for a niche group of developers, and commercialization was the last thing on their mind. From the day it was created by Linus Torvalds, it was intended for education and research purposes only, with its source code free for all to see. Until the end of the millennium. The first signs of acceptance became visible in 1998, seven years after its birth, when the corporate world stepped in. Though still not considered a desktop environment, it was put to test by the likes of Sun and SGI for stability issues and came through with flying colors. The rest, as they say, is history. The Internet, as we know it today, was soon running on various versions of Linux. Apart from companies like Red Hat who centered their business around Linux, others like Netscape, Oracle, Creative, Corel, and Novell also stepped in. 1999 saw an end to the lack of commercial support, which had long plagued Linux as an acceptance criterion for larger corporations.

1999 was also the year when the Linux community made a unified effort to educate the press about how money could be made from a free OS. The press had been largely ignorant of the growth of Linux, because it was always looked at as a free OS, which did not have any business viability. The most notable development on this front was the emergence of Linux advocacy guidelines, which are today being followed by most organizations that support Linux.

The year would be particularly remembered for the Microsoft versus DOJ case, which boosted Linux in its own way. With a view to showing that it was not a monopoly, Microsoft conducted a business analysis of Linux behind its own closed doors. Now known as the Halloween I and II documents (www.opensource.org/halloween/), the revelations of these analyses were a huge draw for the press. The documents sparked a full-blown debate, so much so that Linux came to be talked about by the masses as well. It was apparent after that debate that Linux was in fact being considered a direct opponent to Microsoft's Windows NT. IBM, SGI, Sun, and SCO were the next to join the Linux bandwagon. Red Hat's successful IPO in August '99 redefined the business viability of Linux.

While things went quite well for Linux, it did have its share of ups and downs. The initial success of Linux blinded many Linux supporters to the OS's shortcomings. The comparative tests of Windows and Linux conducted by PC Week magazine and Mindcraft, an independent organization, produced results that the Linux community didn't want to hear. Notwithstanding the doubts raised over the authenticity of these tests, it became quite apparent in successive tests that Linux did need optimization of a few core components, which were way too slow to stand up to competition from Windows NT. Linus Torvalds himself was not pleased with the results, and since then a lot of work has gone into optimizing the code.

Linux (Em)bedded

Another area where optimized code could help Linux is that of Linux-embedded devices. Over the past few months, global focus has been steadily shifting from "one computer in every home" to "computers on the go." Cellular phones, MP3 players, and other handheld devices that talk to web servers already exist, as do handheld devices that organize your daily to-do lists. What we haven't seen too much of yet are watches, refrigerators, and microwave ovens with OSes built into them. Some of these are already happening, but some are waiting for the right OS. Not only is the open source mechanism of Linux ideal for manufacturers to use as a building block for such intelligent devices; starting work on a new OS from scratch would also be economically disastrous. Cobalt, in 1997, was one of the first companies to start using Linux for its appliances (cache engines and web servers). A few others have since come up; some of them have also developed routers based on Linux. I'm sure the new millennium will see a lot more of these Linux-based devices. 3Com's Palm and Microsoft's Windows CE have shown that there is a growing market for handheld devices. A number of Linux ports to smaller chips are in progress, among them uCsimm, ARM, and the Palm.

Penguins and Icebergs

Some of the most mind-blowing graphics in the movie Titanic were rendered using a collection of Alpha processor-based machines running Linux. Linux was used in a research project at NASA to create such parallel-processing supercomputers. Now code-named Beowulf Linux, the system has the capability to harness the processing power of multiple machines. C-DAC's PARAM supercomputer, which used its own OS and was developed from scratch, is to this day too expensive for most Indian universities to afford. With the Linux Beowulf cluster ready for market adoption, research centers in India could set up much more powerful supercomputers with relatively low investment. It won't be long before a couple of old Pentium or 486 boxes are put together to make a Bollywood version of Titanic.

Moving from the server to the desktop was perhaps the biggest challenge in Linux's history. The desktop is the playground that determines the success of an OS. The user friendliness and intuitiveness of the Linux GUI was one of the most ignored issues for a long time. The lack of an open version of Motif, which was used on the Sun platform, was one of the reasons behind the slow start of Linux in the desktop arena. It was not until 1997 that anyone came up with an alternative, better window manager. KDE and Gnome, the two most popular environments today for Linux newbies, have gone a long way in enhancing Linux's sheen. Companies like Dell, Compaq, SGI, and Gateway, which would not have associated with Linux three years back, are now supporting Linux by selling machines with Linux preinstalled alongside similar Windows offers. Some hardware manufacturers are afraid of jumping onto the Linux bandwagon due to certain licensing restrictions in Linux, but that too is slowly becoming a thing of the past. DVD support is an excellent example of hardware support going wrong: the licensing specifications for DVDs forbid open source completely, and this has left Linux users high and dry. Creative Technologies was one of the first big organizations to actively promote open source by opening up its specifications.

Linux Everywhere

But licensing is not the reason why Unix failed in the 90s. It failed because it could not stand united against Microsoft products. Though some analysts see a similar future for Linux, GNU licensing does address some of these issues. Organizations like the FSF, which backs Linux with legal support, are recognized for their contribution to popularizing Linux. Linus Torvalds, who holds the trademark "Linux," is still the principal maintainer of the Linux kernel around which all Linux apps are built. For each critical development tool in a Linux distribution, a committee works out how to get things done. Though it might be a bit slow, this democratic setup ensures that Linux doesn't go the way Unix did.

With Linux stocks (of companies like Red Hat) touching the sky, it's anybody's guess what Linux would be like in the new millennium. Red Hat today has enough market capitalization to buy a few companies; Cygnus, a well-known development tool builder, was one of the first companies it acquired. Sun Microsystems has been busy pulling in companies like Star Division, the maker of StarOffice. Corel Corp. has launched its own version of Linux, as you can see from the PC World CD this month. At present there are about twenty different versions of Linux with different configurations and for different environments, all running on the same kernel and common tools. It's difficult to predict the impact of Windows 2000 on the success of Linux, or whether my wristwatch will one day run Linux. But one thing is sure to happen: we will see some radical changes in the way software is written, shared, and maintained around the world.

A version of this article was contributed to the PC World January 2000 issue.