Detecting Phishing Sites
Wikipedia: "Phishing is a form of criminal activity utilizing social engineering to fraudulently acquire sensitive information, such as passwords and credit card details, by masquerading as a trustworthy person or business in an apparently official electronic communication, such as an email or an instant message. The term phishing arises from the use of increasingly sophisticated lures to 'fish' for users' financial information and passwords."
According to Anti-Phishing.org, there were 5,490 more phishing sites reported in December 2005 than a year earlier. And if you run a business which involves any kind of monetary (or identity) transactions, it's just a matter of time before you become a victim.
A lot of companies today are working together to solve this problem, which is at least as hard as, if not harder than, stopping email spam. The underlying reason why phishing is still a good business model is that users aren't technical enough to identify a phishing attack. As an example, one of the most common misconceptions among users is that a secure website (running over SSL) with a valid SSL certificate is completely trustworthy. Unfortunately, most users don't know that getting a certificate is almost as simple as buying apples from a store. An SSL certificate does help encrypt traffic to a target server, but it can't tell you that you are going to https://www.ebey.com instead of https://www.ebay.com.
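One simple way to catch lookalike domains like the ebey/ebay example above is to measure how few character edits separate a requested domain from the real one. The sketch below is purely illustrative (the two-edit threshold and function names are my own assumptions, not any product's actual logic):

```python
# Illustrative sketch: flag domains suspiciously close to a brand's real
# domain. The 2-edit threshold is an assumption for demonstration.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(domain: str, real: str, max_edits: int = 2) -> bool:
    """True if `domain` is a near-miss of `real` but not `real` itself."""
    return domain != real and edit_distance(domain, real) <= max_edits

print(looks_like("ebey.com", "ebay.com"))   # near-miss -> True
print(looks_like("ebay.com", "ebay.com"))   # the real site -> False
```

Real detectors would also account for homoglyphs and subdomain tricks (e.g. ebay.com.evil.example), which plain edit distance misses.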
Help is on the way, though. Some organizations are working on building visual tools (or plugins) for browsers which can intelligently identify a possible phishing attack and visually alert the user. IE7, which is still in beta, will apparently have this tool built into the browser itself. The sad part, however, is that most of these mechanisms still depend on the user to download and install them, which won't happen overnight.
Most organizations which deal with sensitive data are aware of the phishing problem and have a very reactive security team to identify and shut down such websites. A few even take the time to train their users. Still, the lag between a phishing website going operational and the point when it's shut down remains significantly long.
The technology behind such a detection engine for phishing attacks is not very different from that of a spam filter. Both rely on some kind of signature, and both have their share of false positives and false negatives. And just as spammers have been managing to get through our spam filters, it's just a matter of time before phishing attacks become more sophisticated.
Websites which have a more urgent need for anti-phishing intelligence, and can't wait for IE7 to be deployed everywhere, are resorting to other interesting ideas. One which I have personally witnessed and appreciate is something called PassMark, which uses a two-way authentication mechanism instead of the standard two-factor authentication. In two-way authentication, the server authenticates to the user before the user authenticates to the server. One example is where you select an image from a random set of images on the banking site. Before you enter your password to log in, the banking website will show you the image you had previously selected. Since it's easier to notice a changed image than to detect a minor variation in a URL, this mechanism works well even with technically-challenged users.
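The two-step flow can be sketched as follows. To be clear, this is a hypothetical illustration of the general idea, not PassMark's actual protocol; the user table, image names, and function names are all made up:

```python
# Hypothetical sketch of a two-way (mutual) authentication flow:
# server shows the user's enrolled image BEFORE asking for a password.
# All names and data here are illustrative assumptions.

users = {"alice": {"image": "lighthouse.png", "password": "s3cret"}}

def site_image(username):
    """Step 1: the server authenticates to the user by returning the
    image chosen at enrollment -- a phishing replica won't know it."""
    rec = users.get(username)
    return rec["image"] if rec else None

def login(username, password):
    """Step 2: only after the user recognizes their image do they
    submit the password, which the server checks as usual."""
    rec = users.get(username)
    return bool(rec) and rec["password"] == password

# A user on the genuine site first sees their enrolled image...
print(site_image("alice"))          # -> lighthouse.png
# ...and only then submits the password.
print(login("alice", "s3cret"))     # -> True
```

One design wrinkle this sketch glosses over: step 1 reveals whether a username exists, so real deployments pair the image with a device cookie or show a decoy image for unknown users.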
For an attacker to set up a phishing website, the attacker has to mirror the real website. This is another area where security experts can set up their alerting agents. Unlike most search engines, attackers who want to mirror websites might download pages and images using automated tools which behave differently than a human-operated browser would. Detecting such patterns might give the website sufficient heads-up to analyze and disable the offending site before the attack is launched.
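One crude heuristic along these lines: humans rarely fetch HTML pages faster than a few per second, while a mirroring tool will. The sketch below flags such clients from simplified log records; the record shape, threshold, and IPs are assumptions for illustration, not a production detector:

```python
# Hedged sketch: spot possible automated mirroring in access logs by
# counting HTML page fetches per client per second. Log format and
# the 5-pages-per-second threshold are illustrative assumptions.

from collections import defaultdict

def flag_mirroring(requests, max_pages_per_sec=5):
    """requests: iterable of (client_ip, timestamp_sec, path, user_agent).
    Returns IPs that pull pages faster than a human could browse."""
    per_ip_sec = defaultdict(int)
    for ip, ts, path, ua in requests:
        if path.endswith((".html", "/")):   # count page fetches only
            per_ip_sec[(ip, int(ts))] += 1
    return sorted({ip for (ip, _), n in per_ip_sec.items()
                   if n > max_pages_per_sec})

log = ([("10.0.0.9", 100, f"/page{i}.html", "wget/1.10") for i in range(20)]
       + [("192.168.1.5", 100, "/index.html", "Mozilla/5.0")])
print(flag_mirroring(log))   # -> ['10.0.0.9']
```

Other signals a real system might combine with this: a User-Agent string from a known mirroring tool, pages fetched without their images and stylesheets, and requests arriving in strict link order.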
But if the attacker succeeds in copying the content, setting up a replica, and moving it to a different subnet, it can be hard to track down and shut down such websites. In order to maximize the number of victims caught in the trap (and thus profit), phishing websites try their best to minimize service disruption for the users. Some will transfer users to the actual website almost as soon as the username and password are typed. One way to detect such attacks is to analyze the Apache/web-server logs for Referer URLs. Since the Referer header is reported by the browser for most HTTP requests, there is a good chance the phishing site will leave a trace of its existence. If the server-side application detects Referer URLs coming from an unauthorized site, it could proactively warn the user and shut the session down.
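A minimal version of that Referer check might look like this. The allowed-host list and domain names are hypothetical, and in practice this would hook into the web framework's request object rather than take a raw string:

```python
# Minimal sketch of the Referer-header check described above.
# ALLOWED_HOSTS and the domains are hypothetical examples.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example-bank.com", "example-bank.com"}

def referer_suspicious(referer_header):
    """Return True if a login POST arrived with a Referer pointing at a
    host we don't control -- a hint the form was served elsewhere."""
    if not referer_header:
        return False          # many browsers/proxies omit Referer entirely
    host = urlparse(referer_header).hostname or ""
    return host not in ALLOWED_HOSTS

print(referer_suspicious("http://www.example-bank.com/login"))    # -> False
print(referer_suspicious("http://www.ebey-bank.example/login"))   # -> True
```

Note the caveat in the code: Referer is frequently missing or stripped, so its absence proves nothing; this check can only raise a flag when the header is present and wrong.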
As I said before, there are a lot of companies trying to solve this problem. I heard about one at CodeCon 2006 and found them fascinating. I hope some of these ideas are implemented very soon, so that we IT folks can stop worrying about training the users and get down to doing some real work.