March 17, 2006

Could the Google and Sun rumor be about Java?

If you have been following the writings of Daniel M Harrison, you would notice how strong his convictions are on this topic. Daniel strongly believes that Google is buying Sun, and any reader who doesn't understand how Google and Sun operate could easily be swayed into believing it. But not me.

The fact that neither Google nor Sun has publicly denied these new rumors means that something might be cooking. But Google buying Sun doesn't sound very interesting.

  • Sun has a large pool of talent who know how to create fault-tolerant, high-performance parallel computing infrastructure.

  • Google has a large pool of talent who have perfected the art of distributed computing on cheap hardware clusters built with free tools and operating systems.

  • Sun is a hardware, software and services company

  • Google makes its revenue from advertisements

  • If Google buys Sun, it would be forced to use Sun technology. Microsoft had a similarly hard time switching Hotmail from FreeBSD to Microsoft-based solutions.

  • The cost for Google to switch to Sun-based hardware and software, and the time spent doing it, could be quite significant.

  • A lot of goodwill for Google stems from the fact that Google is open-source friendly. Even though Sun has made attempts to open up its operating system, the perception is not the same. Google might face some negative publicity if it doesn't take immediate damage-control initiatives after a buyout.

If there is any truth to these rumors, it's more likely that it's about Java than anything else.

  • Google already has an agreement with Sun over cross-distribution of Java and Google Desktop.

  • Based on what I know, it's more likely that Google would buy out Sun's Java technology than buy the whole company.

  • Java is one platform which is truly write-once-run-everywhere. Nothing else comes close to this reality.

  • Google Desktop has made significant inroads into the desktop world running Microsoft operating systems, but it lacks a critical foothold in the non-Microsoft world. This could change if Google switched to Java as the application platform for all of its client-side applications.

  • If Google plans to quickly build applications like GDrive and integrate them with other applications running on the local operating system, it would need a more universal platform. Java, though slow, is still faster than JavaScript and has more access to the operating system for such tasks.

  • With better control over how Java develops, Google could use its strong technical background to speed it up and customize it for its own applications, much the way Microsoft is using .NET to spread its own platform.

  • This may or may not be a good thing for Java, but it will definitely be an awesome add-on for Google.

March 09, 2006

Skype PBX is here: Good or bad?

Recently I wrote about Skype invading the cellphone market. While that might be a few years away, something more interesting might happen much earlier.

A few companies at CeBIT are showing off Skype-to-PBX gateways [ Vosky, Spintronics, Zipcom ]. Imagine how easy it would be to communicate between two branches using VoIP protocols, but without the expense of costly VoIP hardware.

I think this is a bag of good and bad news.

The good news is that Skype will break down the artificial communication barrier between people and companies that live in different parts of the world. Until recently we assumed that it's OK to charge more to talk with someone very far away, almost as if we assume that fares, like travel fares, are directly proportional to distance. With "national plans" going into effect, most voice carriers provided a means for us to communicate with anyone in the country for the same fare. Unfortunately no such plan exists internationally because, unlike in the US, voice carriers don't have agreements with all the countries in the world. The Internet, by design, broke down such barriers very early in its evolution. I'm very excited that Skype is leading the way in making voice communication cheaper, which will go a long way toward moving us to a truly global economy.

Skype is a wonderful product: it's free to use and has allowed other products to be built around it using its API. Its growth is almost viral in nature. The bad news, however, is that we might be seeing the birth of another monopoly which is building its business around security through obscurity. I recommend reading a very fascinating presentation, "Silver Needle in the Skype", by Philippe Biondi and Fabrice Desclaux. They talk about how hard Skype has been trying to keep its protocol closed. Even its installation binaries are rigged with obfuscated code and anti-debugging/anti-reverse-engineering mechanisms.

Skype is opening up holes in the network faster than most of us realize. What if someone finds a hole in the Skype software or protocol after it becomes a critical part of the global communication infrastructure? Are we setting ourselves up for a global catastrophe?

Even though I personally like Skype, security through obscurity should be discouraged, and I'll try my best to look for alternatives unless Skype opens up its protocol further.

March 08, 2006

Microsoft Ultra-Mobile PC (umpc/Origami)

Update: Let the hype come to an end. Here is the real thing in flesh and blood.

Microsoft Ultra-Mobile PC

In preparation for the release sometime tomorrow, there is a file on the Microsoft servers with the name Origamai_Otto_Berkes_2006. Not sure if it's available from outside, but here are the important details of the Origami project which we have all been waiting for.

  • List price: USD $599.99

  • Resolution: 800x480 (native); can go up to 800x600

  • Battery life: doesn't seem dramatically different from other tablets

  • Low-powered; cannot play Halo 2

  • USB keyboard optional

  • 40GB drive

  • Bluetooth

  • 802.11 (WiFi)

source: c9, CeBIT

March 06, 2006

Don't mess with my packets

We had some emergency network maintenance done over the weekend, which went well except that I started noticing I couldn't "cat" a file on a server for some reason. Every time I logged in to the box everything would go fine, until I tried to cat a large log file, which would freeze my terminal. I tried fsck (like chkdsk), reboots, and everything else I could think of, without any success. Regardless of what I did, my console would freeze as soon as I tried to cat this log file.

My first impression was that the network died. Then, when I was able to get back in, I thought maybe the file was corrupted, or even worse, that we got hacked and "cat" itself was corrupted. To make sure I was not hacked, I tried to "cat /etc/passwd", and that worked fine. Then I tried to cat a different file in the logs directory and found that it froze too. I figured that something was wrong with the box, gave up on it for the night, and decided to worry about it on Monday morning. Which was today.
I go in to work this morning and find a whole bunch of users complaining that they can't reach any webserver behind a particular loadbalancer in this part of the DMZ. So now I have a network modification, a bad unix file system, and a loadbalancer (with a few webservers behind it) all malfunctioning at the same time. With adrenaline kicking in, blood pressure rising, and two cups of coffee, I figured there had to be something common between all of these.
After a little bit of investigation I found out that none of the users on my network were able to get to any of the servers in the target network over the web. And though ssh was working fine, we couldn't "cat" any large file on any of the servers in that network. Weird.
I recalled a previous incident where some packets were not getting through a firewall, which made the ssh session freeze. If every server on the same network had the same problem, it had to be a problem with one of the routers or firewalls in between. So I did the next logical thing, which was to set up tcpdump on both sides of the communication. This would allow me to sniff traffic at the moment the "freeze" happened.
Sure enough, I see a whole bunch of packets going by, until I do a "cat logfile". That's when hell freezes over.
11:07:10.955656 server1.634 > server2.22: . ack 4046 win 24840  (DF) [tos 0x10]
11:07:10.958896 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.959221 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.959252 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:10.959538 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.959573 server2.22 > server1.634: . 6498:7878(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:10.962011 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.962040 server2.22 > server1.634: . 7878:9258(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:11.443579 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:12.433550 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:14.413493 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:18.373444 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:26.303489 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:42.172971 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]

In the sniff above, "server2" is the server which was freezing and "server1" was my desktop, from where I was logging in. The interesting thing about the capture I did on my desktop was that it accounted for all the packets you see here except the last few, the ones with the "ack 1607" string in them. For those who don't understand tcpdump, this is a capture of repeating packets which are not getting acknowledged by the other end.
So now we knew for sure that it had to be a routing or firewalling glitch of some kind. But that still didn't explain why the packets kept repeating. On a hunch I looked at the firewall logs to see if there was anything there about why it was dropping my packets; maybe it thought that all of these servers were attacking it or something. It didn't reveal anything of that sort.

Mar 6 11:09:56 [] Mar 06 2006 13:05:24: %PIX-4-106023: Deny icmp src inside:router1 dst vlan server2.22 (type 3, code 4) by access-group "inside"

But what I did see is that, once in a while, there is a weird log entry from the PIX (Cisco firewall) complaining about an ICMP packet being dropped due to an ACL restriction. ICMP is a great protocol, and almost every kid in the world knows how to use ping to find out if a remote host is alive. What it's also used for is error reporting and traceroute. In our network we had ICMP enabled in such a way that errors being reported to the admin network are allowed through. And since there are too many ways ICMP errors going into a DMZ could be abused, they are generally blocked by edge routers or firewalls. So the ACL which dropped the packet wasn't that surprising. But what the heck is "type 3, code 4"?

Type 3, code 4, according to RFC 792, means "The datagram is too big. Packet fragmentation is required but the DF bit in the IP header is set." Fragmentation is the process of breaking large packets down into smaller ones so that they can travel through network media with different packet-size limits. Finally, we knew why the packets were getting dropped: for some reason the "DF" flag was being set on the packets. DF (Don't Fragment) is a bit inside the IP header which tells all intermediary devices never to fragment that particular IP packet.
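
The DF bit is easy to see in the raw packet bytes. As a minimal sketch (the header below is hand-built for illustration, not one of the captured packets), the flag lives in the 16-bit flags/fragment-offset field at bytes 6-7 of the IPv4 header:

```python
import struct

def df_flag_set(ip_header):
    """Return True if the DF (Don't Fragment) bit is set in an IPv4 header.

    The 3 flag bits share bytes 6-7 of the header with the 13-bit
    fragment offset; DF is bit 0x4000 of that 16-bit field.
    """
    flags_frag = struct.unpack("!H", ip_header[6:8])[0]
    return bool(flags_frag & 0x4000)

# Hand-built 20-byte header for illustration:
# TOS 0x10 as in the dump above, total length 1400, DF set, TTL 64, TCP.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0x10, 1400,     # version/IHL, TOS, total length
                     0x1234, 0x4000,       # ID, flags + fragment offset (DF)
                     64, 6, 0,             # TTL, protocol=TCP, checksum (dummy)
                     b"\x0a\x00\x00\x01",  # src 10.0.0.1
                     b"\x0a\x00\x00\x02")  # dst 10.0.0.2
print(df_flag_set(header))  # the DF bit is set, so this prints True
```

Any router that has to squeeze a 1400-byte packet like this through a smaller-MTU link must either fragment it (forbidden here by DF) or drop it and send back exactly the ICMP type 3, code 4 error the PIX was eating.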

Based on the PIX logs, it seems router1 dropped the packet and generated a "type 3, code 4" error indicating why it did so. Under normal circumstances any sniffer would have noticed an ICMP error packet coming back to server2. But since this was in a DMZ, and since inbound ICMP errors were getting dropped, there was no way to know why some of these packets weren't getting through.
The solution to this problem, it turned out, was to force the DF flag to be cleared, which then resolved all the connectivity problems. We also found out that all of our problems started sometime after the maintenance window during which some key networking devices were reconfigured.

March 05, 2006

Two-way Two-factor SecurID

A lot of companies are moving toward two-factor authentication, which is great because it tries to reduce the risk of weak authentication credentials. What it doesn't do, unfortunately, is reduce phishing risk, which will become the next big problem after spam. I wrote a few words on detecting phishing attacks a few days ago; this is a continuation of that discussion.

"Passmark" and similar authentication mechanisms are among the best solutions in use today. Unfortunately, Passmark is one of those mechanisms which are built to be broken. The strength of the mechanism, in this case, depends on the number of images in the Passmark database, which according to the website currently stands at 50,000.

50,000 variations might be all right for now, but we would be short-sighted if we stopped there. One serious drawback of this mechanism is that if an attacker guesses the user's logon name, or captures it some other way, Passmark authentication effectively reduces to one-way password authentication.

For example, if an attacker wants to steal a victim's session and has somehow guessed the user's logon name, all they have to do under the Passmark mechanism is go to the real website once with that logon name and extract the image it shows. Once this is done, since the image never changes, the attacker can prompt the victim with the cached image whenever the user logs on.
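
A toy sketch of that reduction (every name here is hypothetical and merely stands in for whatever PassMark actually stores server-side):

```python
# Toy model of a static "site image" scheme; names are illustrative only.
ENROLLED_IMAGES = {"alice": "green_teapot.png"}  # image picked once at enrollment

def image_for_user(logon_name):
    # The site shows the same image for a given logon name on every visit,
    # before any password is entered -- so knowing the logon name is enough.
    return ENROLLED_IMAGES.get(logon_name, "generic.png")

# Attacker: query the real site once with the victim's logon name...
harvested = image_for_user("alice")
# ...then replay the cached image from the phishing page on every later login.
print(harvested == image_for_user("alice"))  # True: the image never changes
```

Because the lookup is a pure function of the logon name, one query gives the attacker everything the "second factor" was supposed to provide.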

I think the day is not far off when companies like RSA will come out with a two-way authentication mechanism where the token provided by the server keeps changing. RSA already makes an excellent two-factor one-way authenticator which changes based on time. They could easily extend it into "two-way two-factor" authentication. If such a mechanism existed, then even if the attacker knew the victim's logon name, he or she would have to go to the real bank every time the user logs on, just to get the latest SecurID token the user would look for. It's just a matter of time after that before someone at the actual website figures out the phishing activity.
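
As a sketch of what such a server-side changing token might look like (the shared secret, the 60-second interval, and the HOTP-style truncation below are my own assumptions, not RSA's actual algorithm):

```python
import hashlib
import hmac
import struct
import time

# Hypothetical sketch: a token the *site* displays that changes every minute,
# so a phishing page showing a stale cached token is immediately suspicious.

SECRET = b"per-user-shared-secret"
INTERVAL = 60  # seconds each displayed token stays valid

def site_token(secret, now=None):
    """Derive a 6-digit token from the current time window."""
    window = int((time.time() if now is None else now) // INTERVAL)
    digest = hmac.new(secret, struct.pack("!Q", window), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation offset
    code = struct.unpack("!I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%06d" % (code % 1000000)

# The site would display this next to the password box; the user compares it
# against their fob before typing anything.
print(site_token(SECRET, now=0), site_token(SECRET, now=INTERVAL))
```

Within one window the token is stable, and the next window yields a fresh one; an attacker would have to contact the real site on every single victim login to stay current.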

Before I end today's rant, I'd like to admit that it's entirely possible someone has already done this and I just haven't seen it yet. If so, I hope it gets deployed fast.

March 01, 2006

Detecting Phishing sites

Wikipedia: [ "phishing is a form of criminal activity utilizing social engineering to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business in an apparently official electronic communication, such as an email or an instant message. The term phishing arises from the use of increasingly sophisticated lures to "fish" for users' financial information and passwords." ]

According to there were 5,490 more phishing sites reported in December 2005 than a year earlier. And if you run a business which involves any kind of monetary (or identity) transactions, it's just a matter of time before you become a victim.
A lot of companies today are working together to solve this problem, which is at least as hard, if not harder, than shutting down email spam. The underlying reason phishing is still a good business model is that users aren't technical enough to identify a phishing attack. As an example, one of the most common misconceptions among users is that a secure website (running over SSL) with a valid SSL certificate is completely trustworthy. Unfortunately, most users don't know that getting a certificate is almost as simple as buying apples from a store. An SSL certificate does help in encrypting traffic to a target server, but it can't tell you that you are going to instead of

Help is on the way, though. Some organizations are working on building visual tools (or plugins) for browsers which can intelligently identify a possible phishing attack and visually alert the user. IE7, which is still in beta, apparently will have this built into the browser itself. The sad part, however, is that most of these mechanisms still depend on the user to download and install them, which may not happen overnight.

Most organizations which deal with sensitive data are aware of the phishing problem and have a very reactive security team to identify and shut down such websites. A few even take the time to train their users. But the lag between a phishing website going operational and the point when it is shut down is still significantly long.

The technology behind a detection engine for phishing attacks is not very different from that of a spam filter. Both rely on some kind of signature and have their share of false positives and false negatives. And just as spammers have managed to get through our spam filters, it's just a matter of time before phishing attacks become more sophisticated.
Websites which have a more urgent need for anti-phishing intelligence, and can't wait for IE7 to be deployed everywhere, are resorting to other interesting ideas. One which I have personally witnessed and appreciate is something called PassMark, which uses a two-way authentication mechanism instead of the standard two-factor authentication. In two-way authentication, the server authenticates to the user before the user authenticates to the server. One example is where you select an image from a random set of images on the banking site; before you enter your password to log in, the banking website shows you the image you had previously selected. Since it's easier to notice a change in an image than to detect a minor variation in a URL, this mechanism works well even with technically challenged users.

For a phishing website to be set up, the attacker has to mirror the real website. This is another area where security experts can set up their alerting agents. Unlike most search engines, attackers who want to mirror websites might download pages and images using automated tools which behave differently than a human-operated browser would. Detecting such patterns might give the website sufficient heads-up to analyze and disable the offending website before the attack is launched.

But if the attacker succeeds in copying the content, setting up a replica, and moving to a different subnet, it might be hard to track down and shut off such websites. To maximize the number of victims caught in the trap (and thus profit), phishing websites try their best to minimize service disruption for the users; some will transfer users to the actual website almost as soon as the username and password are typed. One way to detect such attacks is to analyze apache/www logs looking for referrer URLs. Since referrer URLs are reported by the browser for every HTTP request, there is a good chance that the phishing site will leave a trace of its existence. If the server-side application detects referrer URLs coming from an unauthorized site, it can proactively warn the user and shut the session down.
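
As a sketch of that server-side check (the log-line shape, the login path, and the ALLOWED host set are assumptions about a typical Apache combined-log deployment, not any specific product):

```python
import re

# Flag login requests whose HTTP referrer is not one of our own hostnames.
ALLOWED = {"www.example-bank.com", "example-bank.com", "-"}  # "-" = no referrer

LOG_LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+)[^"]*" \d+ \S+ "(?P<referer>[^"]*)"')

def suspicious_referers(lines, login_path="/login"):
    """Yield referrer URLs of login requests that came from foreign sites."""
    for line in lines:
        m = LOG_LINE.search(line)
        if not m or not m.group("path").startswith(login_path):
            continue
        ref = m.group("referer")
        # For absolute URLs, the hostname is the third "/"-separated piece.
        host = ref.split("/")[2] if ref.startswith("http") else ref
        if host not in ALLOWED:
            yield ref

sample = [
    '1.2.3.4 - - [01/Mar/2006:10:00:00] "POST /login HTTP/1.1" 200 512 '
    '"http://evil-phish.example/fake-login"',
    '5.6.7.8 - - [01/Mar/2006:10:00:05] "POST /login HTTP/1.1" 200 512 '
    '"http://www.example-bank.com/"',
]
print(list(suspicious_referers(sample)))  # only the phishing referrer survives
```

A real deployment would run this continuously against the live log stream and kill the session (or force re-authentication) whenever a foreign referrer shows up on the login path.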

As I said before, there are a lot of companies trying to solve this problem. I heard about one at CodeCon 2006 and found them fascinating. I hope we have some of these implemented very soon, so that we IT folks can stop worrying about training users and get down to doing some real work.

Security Podcasts for iTunes

Hackaday has a great blog entry listing all the nice security podcasts out there. Here are direct iTunes links to all the podcasts, with a few more I googled.