December 12, 2006

Google Web Toolkit now completely open source

Google announced that as of today its GWT (Google Web Toolkit) is available under the Apache 2.0 license. GWT, in case you didn't know, is a Java-to-JavaScript compiler that can churn out intelligent, cross-browser-compatible JavaScript from pure Java code. Switching from JavaScript to GWT has a bit of a learning curve, but it's entirely possible that it will overtake most, if not all, of the other JavaScript toolkits out there.

My biggest pain point while working with other JavaScript-based toolkits was catching compile-time errors and debugging runtime errors. And in the long run, I expect browser-compatibility issues to take up significant development time just to keep up with the various browsers out there. GWT, like most other toolkits, keeps its core code separate from the user's JavaScript, and thus can take away some of the long-term maintenance burden. And since GWT is backed by Google, there is a better chance that this one will stay in the market much longer than the others.

While most other toolkits expect you to write at least a little JavaScript, GWT doesn't expect you to write any. In fact, GWT can generate a complete HTML interface with buttons and input boxes using standard GWT widgets. And if you write your backend code in Java, you will love the fact that GWT code compiles along with the rest of your code, in the background, in your favourite Java editor, and stays as structured as the rest of your Java code base.

December 05, 2006

"101 Top 10 lists", because I hate "Top 10" lists

Most of the "Top 10 lists" come under the catagory of things which can't be believed or ignored. To begin with, these are personal, regional, professional or age/gender based recommendations which can and do change from one set of people to another. And the fact that "Top 10" means there are more contending for the higher spot, these lists get outdated almost as soon as they make it to a web page.

Now here is a compilation of 101 "Top 10" lists which caught my attention. Notice that the list is in alphabetical order and that the author doesn't claim it is the "Top 101" either.

I'm not surprised that there are so many "Top 10" lists, but I am a little curious about who will come up with the "Top 1001" next.

December 03, 2006

Zamzar - Free online file conversion

This is a cool idea for a website: convert almost any popular file format into another. The name Zamzar probably means all the other words in the dictionary were taken, but what it does is truly remarkable, not because it is difficult to do, but because no one thought of it before.

November 30, 2006

Design to fail

Last night I went to an SDForum talk by two eBay architects, Randy Shoup and Dan Pritchett, on how they built, scaled and run their operation. The talk didn't cover anything substantially different from what I've heard before, but it was still impressive because they apply this common thinking to an operation that runs over 15,000 servers at any given time. [ Slides ]
Here are a few interesting phrases I took away from the talk.

  • Scale out, not up: Scaling up is not only expensive, it also becomes impossible beyond a certain technical limit. Scaling out, however, is cheaper and more practical.

  • Design to fail: Every QA team I know runs a whole batch of tests to make sure all components work as they should. Rarely have I seen a team that also tests whether the service stays up when certain parts of the application fail.

  • If you can't split it, you can't scale it: eBay realized early on that anything which cannot be split into smaller components cannot be scaled. A good example of such an operation is a "join" across multiple tables in a database. Relying on the database to do joins across a large set of tables means you can never partition those tables into different databases. And if you can't split it, you can't scale it.

  • Virtualize components: If a component can be virtualized, with an abstraction layer taking care of the virtual components, then the rest of the application need not worry about the actual server names, database names, table names, etc. The operations team can then move components around to suit scalability needs.

November 27, 2006

The Java+linux OS

This will be an interesting trend to follow. This Linux+Perl distribution is made up of just the Linux kernel and Perl binaries; the rest of the tools are all written as Perl scripts. Miguel de Icaza, the creator of Mono, is looking for folks to do the same with Mono.

I think it's a great experiment and will help validate Mono as a practical alternative to other frameworks/languages on Linux. But what would be even cooler (for me at least) is if someone could create a true object-oriented shell experience like Microsoft's PowerShell/Monad. In case you didn't know, PowerShell/Monad is Microsoft's new shell built on the .NET framework; it will probably replace cmd sometime in the future.
That being said, it doesn't really have to be Mono. Java is a perfect candidate as well. There was a project for a Java-based shell which I don't think is active anymore... maybe someone can revive it.

Can it be done?

November 23, 2006

JSON: Breaking the same-server-policy Ajax barrier

The same-origin policy prevents a document or script loaded from one origin from getting or setting properties of a document (or making XMLHttpRequest calls) from a different origin. The policy dates back to Netscape Navigator 2.0. It is a very important security restriction which stops rogue third-party JavaScript from getting information out of your authenticated banking session.

Unfortunately, this also almost completely shuts down any possibility of data sharing between multiple servers. Note the use of the word "almost", because JSON is the new saviour of the web 2.0 world. JSON, or JavaScript Object Notation, is nothing but a simple data-interchange format which can be easily consumed by JavaScript applications. What's different here is that unlike XMLHttpRequest, which can send back answers in any format the JavaScript application wants, this technique requires the answers to be in JSON format, which is basically a subset of the JavaScript programming language, or to be more specific, Standard ECMA-262.

For those who are curious how this works and don't have time to read the complete documentation: the loophole is that a JavaScript application can still ask the browser to load additional scripts from third-party websites. So if you are running an application on www.royans.net and you have some data on data.royans.net, you can load that data into your application as long as you masquerade the information as a JavaScript file.
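Here is a minimal sketch of the trick. The data.royans.net URL, the feed.js name and the callback parameter are made up for illustration; a real service documents its own conventions.

```javascript
// JSONP-style sketch: load cross-domain data by injecting a <script> tag.
// URL and callback name below are hypothetical.
function handleData(data) {
  // The third-party response calls this function with a JSON object.
  alert("Got " + data.items.length + " items from data.royans.net");
}

function loadCrossDomainData() {
  var script = document.createElement("script");
  // The response is expected to look like: handleData({"items":[...]});
  script.src = "http://data.royans.net/feed.js?callback=handleData";
  document.getElementsByTagName("head")[0].appendChild(script);
}

loadCrossDomainData();
```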

That's it, there is no rocket science here... but it does feel like it the first time you come across it. It certainly did to me.

While you are at it, look up JSONP (JSON with padding) too. Google is one company which I know has been using such mechanisms for a long time, and it recently came out with more vocal support for this open data-interchange format.
Oh, and before you go hacking on your code, one thing to watch out for: avoid exposing private/privileged information via this JSON mechanism, because it is open to XSS (cross-site scripting) attacks.

Ajax/Web debugging with Firebug

I've been using Firefox for a long time, and have always had the Web Developer plugin by my side for those miserable days. It is a tool that can save your ass when you really need to understand what the heck your Ajax code is up to.

A couple of days ago I came across another such tool called Firebug. All I have to say is that I was completely blown away by its intuitive debugging style. Cleaning up my messy Ajax-generated code would have been a lot worse if this tool weren't around.
Here is a quick feature list

* JavaScript debugger for stepping through code one line at a time
* Status bar icon shows you when there is an error in a web page
* A console that shows errors from JavaScript and CSS
* Log messages from JavaScript in your web page to the console (bye bye "alert debugging"; see the snippet below)
* A JavaScript command line (no more "javascript:" in the URL bar)
* Spy on XMLHttpRequest traffic
* Inspect HTML source, computed style, events, layout and the DOM
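For example, instead of sprinkling alert() calls everywhere you can log straight to Firebug's console. The function below is only an illustration; it falls back to alert() when no console is present.

```javascript
// Logs Ajax response details to the Firebug console when available.
function debugResponse(xhr) {
  if (window.console && console.log) {
    console.log("Ajax response status:", xhr.status);
    console.log("Response body:", xhr.responseText);
  } else {
    alert("status=" + xhr.status);  // fallback to old-style alert debugging
  }
}
```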


Thanksgiving updates

November 19, 2006

Faking a Virtual Machine

One of the more popular trends in recent years is the move by malicious-code analysts towards virtual machines for testing and reverse-engineering malicious code. And, surprisingly, virus/worm writers have been adding mechanisms to their code to detect such environments.

I came across a particular piece of software called Themida which does exactly that. Lenny Zeltser reports about it on the SANS site. What's interesting is that this kind of detection is now part of commercial packers around the world.
The question I have is this: how long will it take for someone to come up with a VMware/virtual-machine faker which I can run on my perfectly non-virtual desktop/laptop/server to make malware believe it is running inside a virtual machine?

If that can stop even a small percentage of fresh 0-day worms/viruses, it would be worth the effort. Wouldn't it?

November 18, 2006

The RAJAX framework (Reverse AJAX)

The use of XMLHttpRequest to talk to the server without refreshing the browser is one of the more common ways of differentiating an Ajax application from a more traditional approach. But while the rest of the world was still learning Ajax, some smart developers figured out the next step and created something called "Reverse AJAX", or as I call it, "RAJAX".

Traditional client-server applications (not over the web), built on raw TCP and UDP, didn't originally have to worry about firewalls, NATs and PATs. Such applications could initiate connections either way (from client to server, or from server to client). HTTP, which was built on top of TCP/IP, was designed specifically for web browsing, where it is always the client asking for information and the server replying.

By moving traditional client-server applications to web applications, users did solve a lot of firewall/NAT/PAT issues, but they gave up a lot in usability and speed. AJAX solves part of the problem by reducing the amount of communication between the client and the server, but it still doesn't openly allow something servers could do in the old client-server model: initiate a connection back to the client.

RAJAX is an approach where carefully arranged AJAX calls between the client and server bridge this gap and give both sides the ability to ask and answer requests. An excellent example of a RAJAX application is a webified chat client. Google Talk, for example, doesn't just open a connection when the user types a message... it also keeps a connection open to the server so the server can push messages to the user in case one of his/her contacts wants to initiate a chat. Another example, provided by one of the reference links below, is allowing multiple AJAX-based document editors to modify the same document.
So, in short, the client always keeps an HTTP request pending with the server, and the server responds to that request only when it has a message for the client that the client didn't explicitly ask for.
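Here is a minimal long-polling sketch of that idea. The /poll URL and the plain-text message format are made up for illustration; a real framework adds timeouts, reconnection back-off and error handling, and older IE needs the ActiveXObject fallback for XMLHttpRequest.

```javascript
// Long-polling sketch: the browser keeps one request pending so the
// server can "push" a message whenever it has one. URL is hypothetical.
function poll() {
  var xhr = new XMLHttpRequest();  // older IE: new ActiveXObject("Microsoft.XMLHTTP")
  xhr.open("GET", "/poll?rand=" + Math.random(), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 200 && xhr.responseText) {
        showMessage(xhr.responseText);  // server-initiated message
      }
      poll();  // immediately re-open the request and wait again
    }
  };
  xhr.send(null);
}

function showMessage(text) {
  document.getElementById("chat").innerHTML += "<div>" + text + "</div>";
}

poll();
```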

References

November 15, 2006

Sitemaps now supported by Microsoft and Yahoo.

Google started it, but Sitemaps have since been adopted by most of the large search engines out there. If you own a website with a lot of static content, you should probably look into creating and updating a sitemap on a regular basis.

A sitemap is basically an XML file which describes the contents and change frequency of a site. If you have pages hidden deep inside your website which were not getting indexed before, a sitemap is an excellent way of advertising those pages to the search engine.
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.
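For reference, a minimal sitemap file looks something like this (the URL and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/deep/hidden/page.html</loc>
    <lastmod>2006-11-15</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>
```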

Powershell/Monad Version 1.0 is finally out

More than two years ago I wrote about a neat little Microsoft project called Monad which caught my eye. The project boasted of doing something I'd never seen anyone else do before: an object-oriented shell interface.

One example I use to explain it: unlike the Unix flavour of "ps", where you choose which fields to list using optional command-line parameters, in Monad you take the output of "ps" (a.k.a. get-process) and manipulate the objects it returns, printing any format you want by inspecting the object. All Unix admins know how to use "cut", "grep" and "awk" for different reasons, but in a true Monad environment, where every command you type is a commandlet, you won't have to use the traditional string-based tools anymore.

What's interesting is that unlike in Unix and other shells, you can pipe the output of the ps command in Monad and throw it onto an XLS sheet with a pie chart attached. Neat!

Microsoft has finally released the official 1.0 version of this product (just in time for the Vista release), and it is now called PowerShell. I installed it on my XP box, but it supports other flavours of Windows as well. Watch this blog for more on PowerShell, because I'm definitely going to use it.

References

November 14, 2006

Comprehensive security report on Mac

I knew that the Mac is one of the more secure operating systems around, but what surprised me is that someone took the trouble to write a comprehensive 29-page PDF report about it.
"The research report looks at significant OS X threats including local, remote and kernel vulnerabilities and discusses overall system design weaknesses that contribute to insecurities on the Mac platform. The document also reviews the current state of malicious code, discussing the presence of several viruses and worms and the existence of three known rootkits for OS X."

November 12, 2006

Microsoft will probably start selling/distributing linux soon

Anyone can tell you an interesting story, but when it comes to Microsoft and Novell's recent deal, Linux enthusiasts around the world have more than a couple up their sleeves.

Microsoft has a long history of killing competition. It started with Novell's server market, it tried the same with Java, and today it is going after the anti-virus vendors. It succeeded against Netscape, gained significant ground against Sony's PlayStation, and killed a thousand other products that I can't name because I forgot about them after Microsoft obliterated them from the market. If any of you are Xbox lovers, I don't have to tell you that in the console wars Microsoft has been losing money on every Xbox it sells. The Zune (its answer to the iPod) is said to follow a similar strategy. In short, Microsoft has a huge bank balance and can pump in money until the competition goes bankrupt.

Given all that, it's no surprise that after this announcement the Linux world is almost up in arms against Novell for giving in for a few pieces of silver. I, on the other hand, have a different perspective on it.

  • Microsoft isn't interested in suing anyone (anytime soon, at least) because of its Vista launch schedule and the tricky negotiations going on in Europe.

  • SCO has already tried the same FUD that Microsoft is now accused of spreading. In fact, if you remember, Microsoft "licensed" SCO Unix in a similar deal, which indirectly funded SCO's battle against IBM/Linux.

  • Most of the visible products Microsoft has gone after so far were in markets where Microsoft didn't really have a foothold. Linux is one of the very few products which started as a competitor to Microsoft and has gradually increased in popularity over the years. [Firefox/Mozilla is the other one I admire.]

  • The other interesting point is that unlike most commercial vendors who got nailed by Microsoft's pump-and-dump strategy, Linux is not a commercial entity that can go bankrupt. Microsoft can kill Novell, but it will be very hard for it to kill the whole Linux movement.


My personal analysis is that Microsoft is afraid.

  • It is so afraid of losing this battle that in its moment of desperation it is ready to do anything short of launching a Microsoft-branded Linux distribution.

  • The financial deal Microsoft and Novell signed gives a few hints of where this might be heading.

  • To begin with, it's clear both of them want to integrate each other's OS using each other's technology to provide a better virtualization experience.

  • It's also clear that though Novell might use significant portions of proprietary Microsoft technology (for example, for authentication, authorization and accounting), Microsoft will mostly be using GNU code, to which Novell doesn't have any rights anyway.

  • So why is Microsoft paying Novell?

  • And what's the deal with the 240 million dollars for Linux subscription licenses? What is Microsoft going to do with that many copies of a Linux distribution?

  • Oh wait, could they embed it into your Microsoft operating system? Have you ever thought about which distribution of Linux you would use if the Microsoft OS copy you already have came with a Linux distribution pre-bundled?

  • Novell also mentions that it will pay Microsoft a minimum amount in licensing fees, which can increase depending on its own sales. So maybe it will sell Windows as well... who knows. But it will be selling something with at least some Microsoft code in it.

  • Finally, in my personal opinion (with no understanding of the financial details), it almost looks like Microsoft has bought a share of Novell and wants a piece of the action every year.

  • Maybe Microsoft is going to announce something even more significant which will dramatically increase Novell's sales. Maybe Novell is an investment after all... not just a pump-and-dump target.


My thought process finally took me to the one place I didn't want to go... the thought that Microsoft will soon bundle SUSE Linux with one of its own products.

Coming back to the discussion of whether we should abandon SUSE or not, I personally think it doesn't matter as long as Microsoft is not trying to kill it. Stop acting like a five-year-old who doesn't like the big guys. If anything, you should be excited about more commercial support behind your favourite OS. And if they really do bundle SUSE with every desktop/server OS, that's exactly what I wanted when I joined the revolution: Linux on every desktop...
I have said this before, and I'll keep saying it: I'm not opposed to a Microsoft Linux as long as others can innovate and keep Microsoft on its toes.

Offline storage in Ajax applications?

I've been out of the blogging world, working on an Ajax application which has been eating into what little free time I have.


I mentioned Laszlo some time back and explained how it is jumping into the Ajax world from being a pure Flash-based application server. The Ajax application I was working on, however, started in plain Ajax before it got involved with Dojo. Dojo is not the only JavaScript library out there, but it certainly is one of the better ones. I played around with a few others, including Yahoo's JavaScript library, Google Web Toolkit and Sajax, before I chose Dojo. Requiring no server-side code was one of the reasons, but its popularity was the main one.


When I started, Dojo had version 0.3 out, which already had a lot of important features like the back-button fix and keyboard event handlers, both of which I use heavily in my application. As of today, 0.4 has been released, which has, among other things, APIs to draw 2D graphics. But what really surprised me today was reading that one of the most important things which wasn't possible to do with JavaScript is now not only possible but also supported by Dojo.


Interestingly, offline storage in browsers has always been there in the form of the web cache. I also know there are some Flash-based applications which can persist data on the client's desktop... but until I saw the Dojo Storage documentation it never occurred to me that an Ajax-based application could so easily use this capability to do something which should have been there to begin with.


Dojo doesn't just have APIs to programmatically recall that stored data and browse its contents; it also lets you interact with and modify it (a small sketch follows below). Here are some references to this interesting concept:
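I haven't double-checked the exact API yet, so treat the module path and method names below (dojo.require of dojo.storage, dojo.storage.put/get, and the result handler) as assumptions from memory of the Dojo Storage docs rather than gospel; verify against the 0.4 documentation before copying it.

```javascript
// Rough sketch of persisting data with Dojo Storage (Dojo 0.4 era).
// Module path and method names are assumptions; verify against the docs.
dojo.require("dojo.storage.*");

function saveDraft(text) {
  dojo.storage.put("draft", text, function (status) {
    // status is expected to indicate whether the write succeeded
    alert("save status: " + status);
  });
}

function loadDraft() {
  var draft = dojo.storage.get("draft");
  return draft || "";
}
```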


October 08, 2006

Color Palette Generator, Dodgeit and more

One of the most common problems in the web-design world is selecting a color palette to design your site with. Not only do you have to worry about the colors mixing well with each other, the palette sometimes also has to work with other images on the page. Color Palette Generator is the first online service I've seen which lets you do this.

The other common problem on the internet is spam. If all you want is that free cellphone or iPod, for which you need to create 37-odd accounts on different websites, you can do it with a throwaway email address from dodgeit.com. You register with a dodgeit.com address which you can make up on the fly... and then sign in with that address (no password required) to retrieve the email.

Some news, and other links to various items...

Last month we were gifted with a wonderful little baby boy, and I have since been missing in action. He is busy growing, eating, sleeping (and you know what else), so forgive me for being a little out of tune. Anyway, here are a few quick links to get back into the blogging world.

  • This site did something I didn't know was possible before: it shows how to do an image rollover without using multiple image sources.

  • This site has a wonderful regex cheatsheet for all of you Perl (and other language) hackers. It's something which will soon go up on the board in my cube.

  • And finally, this site gives a short introduction to RJS (Ruby JavaScript), which I didn't even know existed a few hours ago.

September 02, 2006

Google image labeler

Not sure how many of you have tried this new Google service. It looks like a game, smells like a game and acts like a game, but it's really just a simple image-tagging system which uses your brain instead of image recognition. Traditionally Google has looked at the text inside the "a href" to figure out what an image is about. So the fact that Google has come down to doing this could mean one of a few things:

  • It has come across a bunch of images without any tags or metadata and is looking for ways to index them.

  • It is testing ways to get human brains to do the job of cheap computers by offering interesting incentives (no different from how we train monkeys and pigeons to do a particular task): you get them to play a game and reward them with a cookie at the end.

  • Or, as I suspect, Google is testing image-recognition software and needs human input to validate the results.


Either way, this is a very cool idea and I'm pretty sure everyone else will be doing the same thing in no time.

August 17, 2006

Writely is taking new accounts

Writely, the company Google bought a few months ago, was closed to new accounts. It seems they have finally opened up again. But instead of using Google accounts, it still asks users to create a new one. They did mention that they will eventually integrate with Google's sign-in.

August 16, 2006

Dzone: Digg for developers

I found a new site called Dzone today. Unlike Digg, it focuses on programming, coding tools, processes and practices. The feature which makes this site stand out among the other 100 Digg replicas is its ability to take "webshots" of the URL being linked, which are shown as thumbnails.

Dzone fills a void in a developer's life which sites like Digg and Slashdot can't fill because of their unfocused news items. Lately Digg has been trying hard to develop more focused pages, but it's nowhere close to what developers are currently looking for.

Flashy Speed test



There are tons of speed-testing tools out there, but here is one you might not have seen before: speedtest.net. What's cool about this site is that it lets you test your bandwidth against multiple servers in the US and Europe instead of just one.


August 15, 2006

250 Web 2.0 APIs

The site ProgrammableWeb has a cloud of 250 APIs. If you are into mashups, here are a few more APIs to play with.

Create your own Web logo

Found a cool site today called msig.info which lets you create logos for your own site in a jiffy. Check out the new Techhawking logo.

Google Talk

Google has released a newer version of Google Talk. This one lets you leave voicemails.
File and photo sharing in Google Talk works like you'd expect: Simple, fast, and fun. Simplicity means that you can drag and drop one or more files directly onto a chat window. As soon as your friend clicks 'Accept', the bits will start flowing. When the transfer completes, the recipient can open the file or find it on disk with a single click.

File transfer is fast. Google Talk makes a direct connection to your friend's computer whenever possible, enabling the fastest speed available. And even if your super-secure firewall won't allow a direct connection, we'll still get it there at a decent speed, because we're nice like that.

Photo sharing is fun! When you drop up to 10 photos on Google Talk, smaller previews automatically appear right inside the chat window, so you can chat about them right away. The previews adjust to the size of your chat window, so just enlarge the window when you want to see more detail. To view the images at full size, or to save them for later, click the 'Download Originals' link.

August 13, 2006

Is Microsoft afraid?

Microsoft came out with Microsoft Live Writer today. What surprised me is that it is one of the first Microsoft tools I can think of in recent years which has support for non-Microsoft products.

Remember the good old days of Novell, when Microsoft came out with a file server which could talk to Novell servers? And what about Services for Unix, or the Microsoft Java VM?

I know everyone is excited about Microsoft doing this, but I, being me, am skeptical about the true intentions behind it. In fact, most of the time when Microsoft releases a product supporting non-Microsoft products, it is because it is afraid of losing market share to a competitor. So the real question is: which of the other blogging tools or services out there is Microsoft really afraid of? Blogspot, MySpace and services like WordPress and TypePad are significant competitors to MSN Spaces. Microsoft Live Writer is not very different from any other free Microsoft product in the sense that it is designed to do one thing: convert.

That being said, I'm glad Microsoft has jumped into this market. I can see the overall blogging experience improving across the board. Oh, and BTW, I posted this entry using Microsoft Live Writer.

August 09, 2006

Detecting browser bandwidth (in perl)

If your website serves file downloads measured in megabytes, they can take several minutes to download from far-away places. Detecting the user's bandwidth and predicting how long the download might take can become essential to helping your customers understand why it is taking so long. Detecting a client's bandwidth could be as simple as timing the download of a single file, but there are a few problems with this.

To begin with, most browsers open multiple download connections to the same destination (IE uses 2, Firefox uses 4). This is not a problem in itself, but it is good to know. Then there is TCP start/stop overhead, whose impact can be minimized by using large files and enabling keepalives. The biggest problem, however, is the caching intelligence inside the browser, which can trick the detection logic into thinking it has super-fast network connectivity. The same issue can also confuse measurements for multiple browsers sitting behind a caching proxy server.

The solutions to all of these problems are relatively simple. First, use multiple file downloads to make use of all the browser's connections to the server. Enable keepalives on the server to minimize TCP setup overhead. Use relatively large files for sampling, and finally append random numbers as URL parameters (e.g. "?1234567") to force the browser and any proxies to discard previously cached versions of the file.
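My script does this with Perl on the server side, but here is a rough browser-side sketch of the same timing idea; the sample URL and file size are placeholders, and a real probe should average several downloads.

```javascript
// Rough client-side bandwidth probe. SAMPLE_URL and SAMPLE_BYTES are
// placeholders; point them at a real static file of known size.
var SAMPLE_URL = "/samples/100kb.bin";
var SAMPLE_BYTES = 100 * 1024;

function measureBandwidth(callback) {
  var start = new Date().getTime();
  var xhr = new XMLHttpRequest();
  // Random parameter defeats browser and proxy caches.
  xhr.open("GET", SAMPLE_URL + "?" + Math.random(), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      var seconds = (new Date().getTime() - start) / 1000;
      var kbps = (SAMPLE_BYTES * 8 / 1024) / seconds;
      callback(Math.round(kbps));
    }
  };
  xhr.send(null);
}

measureBandwidth(function (kbps) {
  alert("Approximate bandwidth: " + kbps + " kbps");
});
```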

August 08, 2006

The Blue Pill - 100% undetectable malware

At CodeCon 2006, seven months ago, I first heard about the existence of virtual-machine-based rootkits. I've also been reading about hypervisor technology and about products like Xen which are trying to build better virtual-machine engines. AMD and Intel now officially have hooks in the processor itself to support this. Unlike traditional virtual machines, which "emulate" all the processing inside another OS, with this new technology each OS can live alongside the others, talking directly to the processor.
But what took me by surprise is that in this short time there is already a new technique called the "Blue Pill", demonstrated and discussed in the underground world, which makes use of the virtualization features of these processors to build 100% undetectable malware.

Here is an extract from the author's description of Blue Pill:
All the current rootkits and backdoors, which I am aware of, are based on a concept. For example: FU was based on an idea of unlinking EPROCESS blocks from the kernel list of active processes, Shadow Walker was based on a concept of hooking the page fault handler and marking some pages as invalid, deepdoor on changing some fields in NDIS data structure, etc... Once you know the concept you can (at least theoretically) detect the given rootkit.

Now, imagine a malware (e.g. a network backdoor, keylogger, etc...) whose capabilities to remain undetectable do not rely on obscurity of the concept. Malware, which could not be detected even though its algorithm (concept) is publicly known. Let's go further and imagine that even its code could be made public, but still there would be no way for detecting that this creature is running on our machines...

References

August 06, 2006

VMware for Mac is finally out!

Virtualization for Mac OS X

Boot Camp is nice, but virtualization is better. This is what almost everyone in the Mac user community has been waiting for.
Parallels has already been selling a virtualization product for Intel-based Macs for the last few months and has a head start over VMware here. But VMware's large user base in the Windows and Linux communities could disturb Parallels' lead in this market segment almost overnight.
VMware had been the de facto standard in PC virtualization for years until Microsoft came along. Recently it came out with a free version of its product, VMware Player, which can "play" virtual machines created by its non-free products. While it's possible that VMware may not release VMware Player for free in the Mac world, it might price itself low enough to compete with Parallels.

VMware's latest move kind of confirms what Parallels has been betting on all this while: that the Mac running on Intel will lead more Windows users to buy and experiment with Apple products. In fact, Steve Jobs has a lot to gloat about during tomorrow's keynote address, since this move by VMware wouldn't have been possible without the switch from PowerPC to Intel.

August 05, 2006

Helping people Bookmark

If you run a blog or a website, chances are you want to make it easier for people to bookmark it. Here is a nice little page with a list of APIs to help you generate those bookmarking links for your site.

Switching to an online News reader

Flock has a great news engine, but over the last few months I realized that unless someone comes out with something equivalent to Google Sync for it, I don't think it's going to work for me. I have two laptops and a desktop to work with and find it difficult to manage and read the daily news items across them. I did try using the Google Sync Firefox plugin hack to sync Flock, but I couldn't get the news to sync up. I hope someone comes out with that plugin.

So after giving up on Flock I turned to online news readers. The one I'd heard a lot about was Bloglines. To begin with, I think there are a lot of improvements they could make to the UI; it was a serious turn-off for me. Then there was the non-Ajax full-page refresh, which was another big usability bottleneck. It's hard to understand why they haven't switched to Ajax for most of the server interactions.
Maybe I am dumb, or maybe I just got used to Flock, but I couldn't figure out how to create folders and subfolders for the blogs I want to read. Managing 200 blogs without subfolders gets a little tough. Bloglines has a few interesting features, like creating your own blogs, blogrolls, etc., which are nice, but they are not for me.

While Bloglines did solve part of my problem, I didn't stop looking until I found Rojo. Rojo is easy to use, Ajax-based, and supports subfolders. One feature I still miss from Flock is the ability to mark individual items as "read" or "unread". Again, I might be dumb, but I can't find this feature in Rojo. They do have a way to flag a news item, though, which is very close to what I want.

BTW, there were two other news readers I considered but didn't investigate deeply enough. I didn't like Google Reader's complicated interface, and I didn't want to start using My Yahoo after being burnt by their mail service some time back.

Predictions for WWDC 2006

While we are at it, here are my guesses at what's going to go down at WWDC 2006.

August 04, 2006

Linux initrd (initial RAM disk) overview

The initrd is one of those things in Linux which most of us take for granted. Here is a very interesting writeup on how it really works: "The Linux® initial RAM disk (initrd) is a temporary root file system that is mounted during system boot to support the two-state boot process. The initrd contains various executables and drivers that permit the real root file system to be mounted, after which the initrd RAM disk is unmounted and its memory freed. In many embedded Linux systems, the initrd is the final root file system. This article explores the initial RAM disk for Linux 2.6, including its creation and use in the Linux kernel."

August 03, 2006

Flagthis Service

We all have bookmarks and have grown, over time, to love and hate them. The problem is that after a while the collection gets just too big and becomes difficult to manage. I've been working on a tool called flagthis which attempts to remove that problem. Traditionally I used a notepad to jot down browsed links so I could pick and choose what's important later, before adding it to my bookmarks list. Unfortunately notepads are not very easy to maintain across multiple computers... and neither are bookmarks.

Flagthis lets you create an account without any username, password or email address, which can then be used to help you manage your collection of browsed links. I don't want to say much about it yet, but do feel free to check it out here: http://www.flagthis.com/

August 02, 2006

Javascript and firefox extensions

I have been hacking around with Firefox extensions and realized that window.close() doesn't work on Firefox tabs. Apparently there is a hack available for this. I was also surprised that there is actually a wizard available for creating Firefox extensions. I don't think such a thing exists for IE, but if you know of one, please let me know.

July 30, 2006

Hybrid drives

Hybrid cars solved the problems associated with both electric cars and fuel-guzzling engines. By bringing the two technologies together, hybrid cars can run on gasoline and still save fuel by switching to an electric motor when possible.

A similar problem in the computing industry is pushing storage manufacturers to work on a new kind of storage device called a hybrid drive. This device combines the technology behind a regular disk-based drive with the flash memory found in the USB drives on your keychain. The combination provides high-speed data access and cheap per-byte pricing in the same storage device.

The concept isn't new; if you have worked with storage devices you will remember that most high-end RAID devices already have an internal cache which does something similar. In fact most operating systems, including Windows, Linux and Solaris, have a built-in file cache too. But most of these don't use non-volatile solid-state (flash) memory, so the cache is lost every time the operating system restarts. A solid-state cache inside the hard drive can not only survive reboots (if non-volatile memory is used), it can also reduce the dependency on third-party caching software and hardware, which can introduce their own set of problems.

One thing to note is that though overall I/O speed will improve, the solid-state storage within HDDs will probably never completely replace the in-memory (RAM) cache.

Though the technology behind this has existed for a while in a few very expensive implementations, it's only now, thanks to dropping solid-state prices, that we have a real chance of seeing it in action inside our home computers.
References

July 28, 2006

Sysadmin Day

Pat yourselves on your back for fixing all those servers,
- doing backup,recovery and user creation.
Pat yourselves for saying no to root and yes to sudo,
- for writing ACLs and scripting voodoo...

Pat again for waking at 2am
- just to put your cellphone on charge.
..for dealing with people
- who wanted everything a day past

Pat again for reading 650 mails a day.
- for blocking SYNFIN floods on ur network
..for carrying those secure-ids
- even while you are not at work.

When you are done patting... please stop by a bar
- pick your pagers and throw away..
'cause you all need a break once in a while
- at least on the freaking System Admin Day !!

July 24, 2006

Nutch Distributed file system

Nutch is a very interesting Java-based crawler and search engine built on the Lucene project. The part which captivated me, however, was the component called the Nutch Distributed File System, which was built to support Nutch's quest to index all the pages on the internet.

July 19, 2006

Over 250 Google Wi-Fi access points in Mountain View

Google's plan to give out Wi-Fi access to everyone in Mountain View is old news; it has just started rolling out. Here is a plot of all the access points in Mountain View. Based on my initial analysis there are about 269 access points all over the city.

Google Sitemaps and DMOZ inaccuracies

If you run a website, you might have heard of Google Sitemaps and DMOZ already. What you probably didn't know is that Google Sitemaps can now learn from DMOZ if your site is listed there.

The problem Google and other search engines face is that though they can crawl your site, they don't really know how to describe it to a search-engine user. Apart from looking at your description meta tag, they also look at various other sources of information, including the DMOZ database, to find the best way to describe your site. Though in most cases databases like DMOZ describe a website accurately, that's not always the case, and letting search bots like Google's know about it using a meta tag can be very helpful.
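If I remember correctly, the tag in question is the NOODP robots meta tag, which tells crawlers not to use the Open Directory (DMOZ) description for your page:

```html
<!-- Ask crawlers not to use the DMOZ/ODP description for this page -->
<meta name="robots" content="noodp">
```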

July 16, 2006

Plotting Hosts/IP Addresses on the google map

I have set up a new IP-address mapping tool on HuntIP today which allows anyone to plot multiple IP addresses on a map. Here is the quick API for this map.

API

  • Method: POST/GET

  • Parameter: ips (comma-delimited list of IP addresses or hostnames, for example 10.10.10.1,10.10.10.2,10.10.10.3)

  • Parameter: ips (you can add a comment for each IP by using ":" as a delimiter, for example www.hotmail.com:hotmail server,www.google.com:Google servers,202.54.15.1:VLSNL server in india)

  • Parameter: showinput (1 = default, 0 = don't show the input box, 2 = don't show the menus either)

  • Restrictions: Maximum of 100 IP addresses at any given time.


Notes

  • Accuracy: The version of the MaxMind database I'm using gives accuracy of around 20 miles.


Examples

July 09, 2006

Internet Health monitoring Reports

I was looking for worldwide internet health statistics and found some interesting links.

General Connectivity Reports

BGP and DNS Reports


Where is my root DNS server?

I'm sure you have heard that there are 13 root DNS servers in the world. The root hints file provided by InterNIC/IANA (http://www.internic.net/zones/named.root) should confirm that. So how do these 13 servers brave a DDoS attack?

Apparently 6 of the 13 root servers are mirrored using anycast routing to load-balance across multiple servers; the F root server alone has about 37 mirrors around the world. Anycast routing is implemented with BGP by simultaneously announcing the same destination IP range from many different places on the internet. So even though an IP might be registered to a location here in the US, if someone also announces a route to the same IP block from Tokyo, hosts in or around that region will pick the cheapest route to reach a DNS server. DDoS attacks against the root DNS servers have happened in the past and will continue to happen in the future; anycast routing is probably why these "13" DNS servers are still alive today.

The next question some might ask is why we can't have more than 13 IP addresses for the root servers... or why we can't just have a larger root hints file. The answer is simple: for DNS to work over UDP (which is stateless), there is a recommended upper limit of 512 bytes on the size of a DNS packet. TCP, which is much more expensive because of its overhead, is the recommended protocol for queries/replies beyond that packet size. The root server operators understand this very well (who would know better?) and decided to restrict the total number of servers to 13, so that the complete list of IPs fits comfortably inside a 512-byte UDP response.

Here is a plot of the 13 registered root servers on a global map. A complete list of root servers is available at http://www.root-servers.org/.

July 07, 2006

How many root DNS servers do we have?

Haven't you heard that we have 13 root DNS servers in the world? This map on huntip.com was created from the root hints file provided by InterNIC/IANA (http://www.iana.org/popular.htm, http://www.internic.net/zones/named.root), which lists the 13 IP addresses. The part I found out later is that 6 of these IP addresses use anycast addressing (different from multicast, broadcast and unicast).


Anycast routing is implemented with BGP by simultaneously announcing the same destination IP range from many different places on the internet. So even though an IP might be registered to a location here in the US, if someone also announces a route to the same IP block from Tokyo, hosts in or around that region will pick the cheapest route to reach a DNS server. The F root server alone has about 37 mirrors around the world. So we are fairly well protected against DoS attacks.

Some might ask why we can't have more than 13 IP addresses for the root servers. For DNS to work over UDP (which is stateless), there is a recommended upper limit of 512 bytes on the size of a DNS packet; TCP is the recommended protocol for queries/replies beyond that size. The root server operators understand this very well (who would know better?) and decided to restrict the total number of servers to 13, so that the complete list of IPs fits inside a 512-byte UDP packet if required.

A complete list of root servers is available at http://www.root-servers.org/. The plotted coordinates for the non-anycast IP addresses are accurate to within 50 miles of the actual server.

July 04, 2006

HuntIP.com goes live

HuntIP is a collection of sysadmin tools and links which can help in investigating network, DNS and email problems.
HuntIP.com

June 29, 2006

Disaster Recovery process: Insurance policy for IT disasters

In a bizarre twist of reality, a company that was standing one day is packing up and folding three days later. CouchSurfing faced what they called a perfect storm, which could have happened to anyone. My sympathies go out to them, and especially to their IT team, who must have gone through a lot before they were all asked to leave. Multiple failures happening at the same time are not as rare as your IT team might have you believe; it has happened and will keep happening. Unfortunately it is disasters like these that make people realize the importance of backup procedures and disaster-recovery plans. It reminds me of September 11, 2001 and Katrina (New Orleans), which in their own weird ways contributed a lot towards improvements in IT disaster-recovery processes. Backup and disaster-recovery teams are some of the unsung heroes who never seem to get recognized for how they help a business get back into action after a disaster on this scale. Investing in backup processes is like an insurance policy with a never-ending monthly bill, but it gets you back on your feet if disaster strikes.

June 28, 2006

Google checkout and SSO

Google Checkout is out, and as expected it's so lean and mean that I couldn't figure out at first whether it was actually a new Google component. With Froogle already in place, Google Checkout can cash in on the goodwill people have for the Froogle service. I think this news is a big one for businesses, but it probably isn't as significant for the end user.

Remember Microsoft Passport? Now think Google single sign-on. I noticed a story about it being released and then pulled yesterday for some unknown reason. Personally I've always supported federated authentication systems, because they can reduce security problems by reducing the number of passwords one needs to remember. However, using a third-party single sign-on over which we have no control is like the government trying to control/monitor our income. That being said, I'm still ready to subject myself to Google's single sign-on if it reduces security risks.

June 25, 2006

OpenLaszlo Legals: Breaking the flash barrier

In the past, though I loved the idea behind Laszlo, it was hard for me to come up with a reason to force my users to use Flash. That was before Ajax gained popularity. With RIAs (rich internet applications) invading the market, I had been pondering for a few months about re-investigating Laszlo to see where it stands.

Today, however, I got a very pleasant surprise when OpenLaszlo announced the availability of the "OpenLaszlo Legals" extension, which allows OpenLaszlo to generate runtimes for different target browsers using JScript, ActionScript or JavaScript instead of just Flash.



I can see Laszlo getting a lot of positive feedback over the next few days. This is probably the best move they could have made. I wish them all the best.

Notes: WikiMapia, Digg, IPv6, flock and Google Sync.

WikiMapia

  • This is the first time I have stumbled upon WikiMapia, which looks like a wiki of maps. Very interesting and creative idea. WikiMapia uses the Google Maps API and allows users to mark places and add text to locations around the world.

  • It's like a large world map with people scribbling all over it. Google recently updated its global map database to include some very high-resolution satellite images around the world, which makes WikiMapia an even more interesting new service to look out for.


Digg

  • Digg has been around for just over a year and has already surpassed Slashdot in traffic volume. The Digg 3.0 release party demoed some really interesting new tools which are set to come out soon after the 3.0 release on Monday. The one tool which already exists is Digg Spy.


IPv6

  • The US government has plans to enable IPv6 on its backbone routers by 2008.

  • Comcast is probably the first large organization to have already started deploying IPv6. Here are some interesting presentation slides from one of their talks.

  • I looked up ARIN and noticed that Google, Microsoft and Cisco each have a /32 assigned to them, which is a significant allotment. Even though ARIN policy more or less states that a /32 allotment requires the recipient to act as an ISP and assign at least 200 blocks to smaller ISPs or organizations within 5 years, I don't think this is enforced. Cisco, for example, has had its IPv6 block since 2000 and is well past the 5-year limit.

  • While digging around IPv6, I also found out that although IPv6 is being deployed, multihoming is not yet standardized.


Flock

  • If you like Firefox you'll like Flock too. Just as the web is slowly moving towards web 2.0, Flock is kind of an extension of the Firefox experience which gives you a "web 2.0 rich" experience.

  • Features like social tagging, blogging and photo sharing are built into the browser. But what I liked best in Flock is its implementation of the RSS news reader.

  • Flock beta 1 was released on June 13th.


Google Sync

  • Google Sync is a Firefox plugin which claims to synchronize your browser settings with your Gmail account so that you can carry them with you when you switch desktops.

  • Unfortunately, though Flock is based on Firefox, it's not supported, which is a shame because I primarily use Flock. However, there is a hacked version of Google Sync which will work for Flock here.

  • BTW, I think Google Sync is far from mature, because over the weekend it managed to lock up my Firefox browser on Windows XP, and even a reboot doesn't bring it back.

June 24, 2006

Top Ten ways to speed up your website

Over the last few years as a web admin, I've realized that knowing HTML and JavaScript alone is not enough to build a fast website. To make a site faster one needs to understand real-world problems like network latency and packet loss, which are usually ignored by most web administrators. Here are 10 things you should investigate before you call your website perfect. Some of them are minor configuration changes; others might require time and resources to implement.

  1. HTTP keepalives: If HTTP keepalives are not turned on, you can get 30% to 50% improvements just by turning them on. Keepalives allow multiple HTTP requests to go over the same TCP connection. Since there is a performance penalty for setting up new TCP connections, keepalives help most websites. (See the Apache sketch after this list.)

  2. Compression: Enabling compression can dramatically speed up sites which transfer large web objects. Compression doesn't help much on a site with lots of images, but it can do wonders on most text/HTML-based websites. Almost all web servers that do compression automatically detect browser compatibility before compressing the HTTP response. Most browsers released since 1999 that support HTTP 1.1 also support compression by default. In real life, however, I've noticed some plugins can create problems; an excellent example is Adobe's PDF plugin, which inconsistently failed to open some PDFs on our website when compression was enabled. In Apache it's easy to define which objects should not be compressed, so setting up workarounds is simple too.

  3. Number of objects: Reduce the number of objects per page. Most browsers won't download more than 2 objects at a time from the same host (per RFC 2616). This may not seem like a big deal, but if you are managing a website with an international audience, network latency can dramatically slow down page load time. The other day I checked Google's search page and noticed that it had only one image file in addition to the HTML page; that's an amazingly lean website. In real life all sites can't be like that, but using image maps with JavaScript to simulate buttons can do wonders. Merging HTML, JavaScript and CSS files is another common way of reducing objects. Most modern sites avoid using images for buttons entirely and stick to buttons made of HTML/CSS/JavaScript.

  4. Multiple servers: If you can't reduce the number of objects, try distributing your content over multiple servers. Since the browser's connection limit is per server, it will happily open additional connections when some objects come from a different server. For example, what happens if an HTML page with 4 JPEG images serves 2 of them from server1.domain.com and 2 from server2.domain.com instead of putting all of them on one server? In most browsers you will notice roughly a 2x speed improvement. Firefox and IE can both be tweaked to increase the connection limit, but you can't ask every visitor to do that.

  5. AJAX: Using AJAX won't always speed up your website, but having JavaScript respond to a user's click immediately can make it feel very responsive. More interactive sites are using AJAX technologies today than ever before. In some cases, sites using Java and Flash have moved to AJAX to do the same work in fewer bytes.

  6. Caching: Setting an Expires HTTP header on objects tells browsers to cache them for a predefined duration. If your site doesn't change very often, or if certain pages or objects change less frequently, adjust the expiry header associated with those file types to say so. Browsers visiting your site should see speed improvements almost immediately. I've seen sites with more than 50 image objects in a single HTML page do amazingly well thanks to browser caching.

  7. Static objects on a fast web server: Web application servers are almost always proxied behind a web server. While application servers do a good job of providing dynamic content, they are not best suited to serving static objects. In most cases you will see significant speed improvements if you offload static content to the web server, which can do the same job more efficiently. Adding more application servers behind a load balancer can do the trick too. While on the topic, remember that the language you choose to serve your application can make or break your business: prototyping can be done in almost any language, but heavily used websites should investigate the performance, productivity and security gains/losses of moving to other platforms/languages like Java/.NET/C/C++.

  8. TCP initial window size: The default initial TCP window sizes on most operating systems are conservative and can cause download/upload speed problems. TCP starts with a small window and tries to find an optimal size over time. Since the initial value is low and HTTP connections don't last very long, raising the initial value can dramatically speed up transmission to remote, high-latency networks.

  9. Global load balancing: If you have already invested in some kind of simple load-balancing technology and are still having performance problems, start investigating global load balancing, which lets you deploy multiple servers around the world and use intelligent load-balancing devices to route client traffic to the closest web server. If your organization can't afford to set up multiple sites around the world, investigate global caching services like Akamai.

  10. Web server log analysis: Make it a habit to analyse your web server logs on a regular basis to look for errors and bottlenecks. You would be surprised how much you can learn about your own site from its logs. One of the first things I look for is which objects are requested the most or consume the most bandwidth; compression and expiry headers can both help there. I regularly look for 404s and 500s to spot missing pages or application errors. Understanding where your customers are coming from (by country) and what times they tend to visit can help you understand latency or packet-loss problems. I use AWStats for my log analysis.
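For the Apache users out there, here is a rough sketch of what the keepalive, compression and expiry settings from items 1, 2 and 6 might look like. It assumes mod_deflate and mod_expires are loaded, and the exact types and durations are placeholders to adjust for your own site.

```apache
# Keepalives: reuse TCP connections for multiple requests (item 1)
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

# Compression via mod_deflate: compress text/HTML, skip already-compressed types (item 2)
AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|pdf)$ no-gzip

# Expiry headers via mod_expires: let browsers cache static objects (item 6)
ExpiresActive On
ExpiresByType image/gif  "access plus 7 days"
ExpiresByType image/jpeg "access plus 7 days"
ExpiresByType text/css   "access plus 1 day"
```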


References:

[p.s: This site royans.net unfortunately is not physically maintained by me, so I have limited control to make changes on it.]

June 19, 2006

Why GoogleTalk is not about Instant Messaging.

The two big names in the messaging industry came out with two major upgrades today. Yahoo announced "Yahoo Messenger 8.0" for Windows platform and MSN released their Windows Live Messenger. While both MSN and Yahoo are offering some form of VoIP support, the big thing for Yahoo was the opening up of the APIs for its messenger and the discussion happening is around its Yahoo! Messenger On-the-Road offering which seems to be some kind of a paid service which will grant you access to more than 30000 wifi spots around the world. On MSN side the big thing is the announcement that Philips is now making Voip handsets with embedded Windows Live Messenger in it. This trend of moving VoIP software to handheld devices is not new, but with Microsoft jumping into the market, it not very surprising why Skype is giving away free minutes.

Which brings this discussion to the third player in this market, Google. While MSN and yahoo are desperately trying attach the kitchen sink to their IM client, Google seems to be less interested in developing standalone "Google Talk" clients and is more interested in gathering generating grass root support with least bottlenecks for the end user. For coming late to the party, thats not too much to ask for.

However what we all miss to see in this picture is that in the IM world, MSN and Yahoo are not very far from what centralized networks like AOL and Compuserv looked like before they hooked up to the internet. Isn't it a shame that you as a user of MSN also have to create a Yahoo, GoogleTalk, ICQ and AOL account just to talk to all of your friends ? And while you can sign up with just one ISP to visit all the websites on the internet is it really necessary to sign up with 10 different service providers just to exchange instant messages with your friends ? After all how different is instant messages from regular email messages ?

When Google decided to use an open protocol called Jabber which has close to 100 different client implementations, they did two things which was not very apparent outright. First they bought themselves a huge developer base which have been screaming about Jabber as an alternative to proprietary protocols. Second they have now forced MSN and Yahoo to acknowledge that inter-IM communication is eventually possible.
Infact, Jabber protocol, unlike other instant messaging protocols was designed ground up like SMTP protocol to be decentralized, flexible and diverse. Its so much alike like SMTP, that from a birds eye view Jabber could look like SMTP in the way it works.

GoogleTalk in short is what Internet was to AOL the reason why Google doesn't care about GoogleTalk client is because Jabber like SMTP can be routed, archived and searched for targeted advertisements.

//p.s. In the current design GoogleTalk is not routable(s2s)... but that hopefully would be fixed soon.

June 17, 2006

Sun AMD V20z hardware problems

Sun Microsystems was one of the first big companies to come up with 64Bit AMD V20Z servers which quickly replaced our ancient Sparc servers. Compared to the old E220s and E420s, AMD servers were about 3 to 5 times faster depending on what we wanted it to do.

The first round of V20z's we deployed saved us a lot of rack space, but the heating and power requirements were little higher than expected. Though the v20z's did reduce the footprint on the racks, the heat generated forced us to leave room on the top of the servers where the ventilation holes were placed. For all practical reasons, we couldn't use it as one U system.

We ordered a second round of V20Z's a few months back and though we were prepared for the extra rack space, we stumbled upon a whole new problem this time. We noticed that some of these servers were randomly rebooting, especially at times of high activity. We were using a mirror image of the Suse distribution which we installed on the first set of servers which rules out any change in the software/os side. Whats funny is that some of these servers were so predictable faulty that a simple "tar -xvzf filename.tgz" would kill it. Putting the boot drive from the faulty server in a perfectly working server confirmed that it wasn't the OS or Harddisk which was faulty, but the server hardware itself.

These problems have been going on for over atleast a couple of months and we have opened up a case with sun for few weeks now. Among the things we have done to fix this includes updating different firmwares in various V20z components, play around with the memory modules add more space for ventilation and we even checked the voltage regulator to see if its defective. These servers are brand new and of the 30 or so which we bought we can consistently reproduce this problem on 6 of them. Infact we had the sun engineer (2 of them) come on site and see it for themselves and yet its hard for them to agree that they need to replace the server.

So the question is, how long does it take for someone to admit a mistake and give us a replacement ? Does Sun realize that while they request us to upgrade firmwares on our servers and do other time delaying steps, 20% of these servers can't be used at all ? Do they understand that if we just wanted to keep them unused, we would probably not have bought it in the first place ?

Our company has tried to escalate this problem with Sun so many times, and the guy on the other end just refuses to sign off on the replacements.

Which leads me to the next question, how many other servers are there which have this problem ? If you have this problem, could you please reply to this blog, or let me know by email ? If 20% of the servers sold to us were badly defective, there has to be others out there who are having the same problem.

We have spent between 300 to 600 man hours trying to debug this problem and setting up  workarounds instead of resolve this issue. Posting of this blog online is not just an act of desperation on my part, but is also a message for Sun Microsystems to let them know that they are not the only server vendor out there.

March 17, 2006

Could the Google and Sun rumor be about Java ?

If you have been following writings from Daniel M Harrison you would notice how strong his convictions are on this topic. Daniel strongly believes that Google is buying Sun. And any reader who doesn't understand how Google and Sun operate can easily be swayed to believe this. But not me.

The fact that Google or Sun haven't publicly denied these new rumors, means that something might be cooking. But Google buying Sun doesn't sound very interesting.

  • Sun has a large pool of talent who know how to create fault tolerant, high performance parallel processing computing infrastructure.

  • Google has a large pool of talent who have perfected the art of distributed computing using cheap hardware clusters using free tools and operating systems

  • Sun is a hardware, software and services company

  • Google makes its revenue from advertisements

  • If Google buys Sun, it would be forced to use Sun technology. Microsoft had a hard time switching Hotmail.com from FreeBSD to Microsoft based solutions.

  • The change for Google to switch to Sun based hardware and software and the time spent to do it could be quite significant.

  • A lot of goodwill for Google stems from the fact that Google is Open source friendly. Even though Sun has made attempts to open its Operating System, the perception is not the same. Google might have to face some negative publicity if they don't take immediate damage control initiatives after a buy out.


If there is any truth to these rumors, its more likely that its about Java than anything else.

  • Google already has an agreement with Sun over cross distribution of Java and Google desktop.

  • Based on what I know, its more likely that Google might buy out Sun's Java technology than buying the whole company itself.

  • Java is one platform which is truely write-once-run-everywhere. Nothing else comes closer to this reality.

  • Google desktop has made significant inroads into desktop world running Microsoft OS. But lacks critical foothold in non-Microsoft world. This could change if it switches to Java as the application platform for all of its client side applications

  • If Google plans to quickly build applications like GDrive and integrate writely.com with other applications running on the local operating system, it would need a more universal platform. Java, though slow, is still faster than javascript and has more access to the operating system to do such tasks.

  • With better control over how Java develops, Google could use its strong technical background to speed it up and customize it for its own applications. The way Microsoft is trying to use .NET to spread its word.

  • This may or may not be a good thing for Java. But will definitely be a awesome add on for Google.

March 09, 2006

Skype PBX is here : Good or bad ?

Recently I wrote about skype invading the cellphone market. While this might be a few years away, something more interesting might happen much earlier.

A few companies at CEBit are showing off Skype to PBX gateways. [ Vosky , Spintronics , Zipcom ] Imagine how easy it would be communicate between two branches using VOIP protocols but without the expense of costly VOIP hardware.

I think this is a bag of good and bad news.

The good news is that skype will break down the artificial communication barrier between people and companies which live in different parts of the world. Up until recently we assumed that its ok to charge more if you want to talk with someone very far away. Its almost like we assume that travel fares are directly proposional to the distance. With the "national plan" going into effect most voice carriers provided a means for us to communicate with anyone in the country for the same fare. Unfortunately such a plan doesn't exist internationally because unlike in US, voice carriers here don't have agreements with all the countries in the world. Internet as per design broke down such barriers very early in its evolution. I'm very excited that skype is leading the way in making voice comm cheaper, which will go a long way in moving us towards a truely global economy.

Skype is a wonderful product, its free to use, has allowed other products to be built around it using its API. Its growth might almost be viral in nature. The bad news, however, is that we might be seeing a birth of another monopoly which is building its business around security through obscurity. I recommend reading a very fascinating presentation "Silver Needle in the Skype" by two gentlemen Philippe and Fabrice. They talk about how hard skype has been trying to keep its protocol closed. Even its installation binaries are rigged with obsfucated code and anti-debugging/anti-reverse_engineering mechanisms.

Skype is openning up holes in the network faster than most of us realize. What if someone finds a hole in skype software or protocol after it becomes a critical part of global communication infrastructure ? Are we setting up ourselves for a global catastrophe ?

Even though I personally like Skype, security through obscurity should be discouraged and I'll try my best to look for alternatives unless skype opens up the protocol further.

March 08, 2006

Microsoft Ultra-Mobile PC (umpc/Origami)

Update : Let the hype come to an end. Here is the the real thing in flesh and blood

Microsoft Ultra-Mobile PC

In preparation of the release sometime tommorow, there is a file on the microsoft servers with the name Origamai_Otto_Berkes_2006. Not sure if its available from outside, but here are the important details of Origami project which we have all been waiting for.

  • List Price $USD 599.99

  • Resolution 800x480 (native). Can go upto 800x600

  • Battery life: Doesn't seem anything dramatically different from other tablets

  • Low powered. Cannot play Halo 2

  • USB Keyboard optional.

  • 40GB Drive

  • Bluetooth

  • 802.11 (wifi)


source:c9 CeBit M

March 06, 2006

Dont mess with my packets

We had some emergency network maintenance done over the weekend which went well except that I started noticing that I couldn't "cat" a file on a server for some reason. Every time I login to the box everything would go fine, until I tried to cat a large log file which would freeze my terminal. I tried fsck (like chkdsk), reboots and everything else I could think off without any success. Regardless of what I did, my console would freeze as soon as I tried to cat this log file.

My first impression was that the network died, then when I was able to get back in I thought may be the file was corrupted, or even worse, that we got hacked and "cat" itself was corrupted. To make sure I was not hacked, I tried to "cat /etc/paswd". And that worked fine. Then I tried to cat a different file in the logs directory and found that to freeze too. I figured that something is wrong with the box and gave up on it for the night, and decided to worry about it on Monday morning. Which was today.I go in to work this morning, and find a whole bunch of users complaining that they can't go to any webserver on a particular loadbalancer in a this part of the DMZ. So, now I have a network modification, a bad unix file system and a loadbalancer (with few webservers behind it) all malfunctioning at the same time. With adrenalin kicked in, blood pressure rising, and 2 cups of coffee, I figured that there had to be something common between all of these.
After a little bit of investigation I found out that none of the users in my network are able to get to any of the servers in the target network using web. And though ssh is working fine, we couldn't "cat" any large file on any of the servers in that network. Weird.
I tried to recollect a previous incident where some packets were not getting through a firewall which made the ssh session freeze. If every server on the same network has the same problem, it had to be a problem with one of the routers or firewall in between. So I did the next logical thing, which was to setup tcpdump on both sides of communication. This would allow me to sniff traffic at the moment the "freeze" happens.
Sure enough I see a whole bunch of packets going by, until I do a "cat logfile". Thats when hell freezes over.
11:07:10.955656 server1.634 > server2.22: . ack 4046 win 24840  (DF) [tos 0x10]
11:07:10.958896 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.959221 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.959252 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:10.959538 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.959573 server2.22 > server1.634: . 6498:7878(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:10.962011 server1.634 > server2.22: . ack 4046 win 24840 (DF) [tos 0x10]
11:07:10.962040 server2.22 > server1.634: . 7878:9258(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:11.443579 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:12.433550 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:14.413493 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:18.373444 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:26.303489 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]
11:07:42.172971 server2.22 > server1.634: . 4046:5426(1380) ack 1607 win 24840 (DF) [tos 0x10]

In the sniff above "server2" is the server which is freezing and server1 was my desktop from where I was logging into. The interesting thing about the capture I did on my desktop was, that it accounted for all the packets which you see here except the last few packets which have the "ack 1607" string in them. For those who don't understand tcpdump, this is a capture of repeating packets which are not getting acknowledged by the other end.
So now we knew for sure that it has to be a routing or firewalling glitch of some kind. But it still didn't explain why it was repeating. On a hunch I looked at the firewall logs to see if there is anything there about why its dropping my packets. May be it thinks that all of these servers are attacking it or something. It didn't revile anything.

Mar 6 11:09:56 [10.1.10.5.2.2] Mar 06 2006 13:05:24: %PIX-4-106023: Deny icmp src inside:router1 dst vlan server2.22 (type 3, code 4) by access-group "inside"

But what I did see, is that once in a while, there is a weird log entry from the PIX (cisco firewall) complaining about an ICMP packet being dropped due to an ACL restriction. ICMP is a great protocol and almost every kid in the world knows how to use ping to find if a remote host is alive. What its also used for is error reporting and tracerouting. In our network we had ICMP enabled in such a way that errors being reported to the admin network are allowed to go through. And since there are too many reasons why errors should be going into a DMZ, they are generally blocked by edge-routers or firewalls. So the ACL which dropped the packet wasn't that surprising. But what the heck is "type 3, code 4" ?

Type 3, Code 4 according to RFCs is "The datagram is too big. Packet fragmentation is required but the DF bit in the IP header is set." Fragmentation is the process of breaking down of large packets into smaller packets so that it can travel through network media which have different packet size limits. Finally, we know the reason why the packets were getting dropped. Apparently for some reason "DF" flag was getting set on the packets. DF (Dont Fragment) flag is a bit inside IP header which tells all intermediary devices not to ever "fragment" that particular IP packet.

Based on the PIX logs, it seems router1 dropped the packet and generated a "type 3, code 4" error indicating the reasons why it dropped. Under normal scenarios any sniffer would have noticed an ICMP error packet coming back to server2. But since this was in a DMZ, and since inbound ICMP errors are getting dropped there was no way to know the reasons why some of these packets were going through.
The solution to this problem, apparently was to force the DF flag to be removed which then resolved all the connectivity problems. We also found out that all of our problems started sometime after the maintenance window during which some key networking devices were reconfigured.

March 05, 2006

Two-way Two-factor SecureID

A lot of companies are moving towards two factor authentication which is a great because it tries to reduce the risk of weak authentication credentials. What it doesn't do, unfortunately, is reduce phishing risk, which will become the next big problem after spamming. I wrote a few words on detecting phishing attacks a few days ago. This is the continuation of the same discussion.

"Passmark" and similar authentication mechanisms are one of the best current solutions in use today. Unfortunately, Passmark is one of those mechanisms which are built to be broken. The strength of this authentication mechanism, in this care, depends on the number of images in the Passmark database which according to the website is currently at 50000.

50000 variations might be alright for now, but we would be short-sighted if we stop at this. One of the serious drawbacks of this mechanism is that if the user guesses the users logon name, or captures that information in some other way, Passmark authentication effectively reduces to a one-way password authentication.

For example, if an attacker wants to steal a victims session and has somehow guessed the users logon name, all they have to do under the "passmark mechanism" is to go to the real website once with the users logon name and extract the image shown by the real website. Once this is done, since the image doesn't change at all, ever, the attacker can prompt the victim with the cached image whenever the user logs on.

I think the day is not very far when companies like RSA will come out with two-way authentication mechanism where the token provided by the server keeps on changing. RSA already makes excellent two-factor one-way authentication, which changes based on time. They can easily extend it by doing a "two-way two-factor" authentication. If such a two-factor two-way authentication existed, even if the attacker knew victims logon name, he/she would have to go the real bank every time the user logs on, to get the latest SecureID token which the user could look for. Its just a mater of time after that for someone at the actual website to figure out phishing activity.

Before I end today's rant, I'd like to admit that its totally possible that someone has already done this, and that I've just not seen it yet. If so, I hope it gets deployed fast.