Putting Chrome OS behind SSL-based webfilters

Educational institutions, particularly K-12 schools, are stuck between a rock and a hard place. They are always in search of ways to open up newer technologies to students, but don’t want to give up their ability to manage and filter what students can see or do. As a father of two, I approve of that.

While Chrome OS takes security very seriously and tries very hard to discourage “man in the middle” attacks, recent versions do provide an industry-tested feature that allows educators to filter web content for students. To understand how it works, I’ll first explain how Chrome OS works internally.

Chrome OS devices, as most of you already know, have two distinct components. The Chrome browser is what provides most of the UI, but deep inside there is also an operating system built on top of Linux. Among the things that OS is responsible for, auto-updates and security are two of the most important.

The web filtering feature which Chrome OS provides for our enterprise and school users allows all “user session” traffic from the browser to be intercepted, but doesn’t allow any of the system requests to be intercepted in the same way.

Network setup

To get a chromebook to work correctly in an environment with a webfilter, it’s important to let the webfilter know which hosts the chromebook will connect to that won’t tolerate SSL inspection. Google has published a set of domain names here which can be used for this purpose.

Note that whitelisting by IP addresses (netblocks) is not good enough. The IP addresses mapped to these hosts keep changing, and the only reliable way to whitelist them is by whitelisting the domain names as-is. Most webfilters (including some transparent webfilters) support this; if you are not sure, contact your proxy/webfilter provider to understand how to do it.
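For example, on a Squid-based filter doing SSL interception (ssl_bump), splicing by server name is one way to exempt such hosts. This is a sketch, not from the original post: the syntax assumes Squid 3.5+, and the two domains shown are only illustrative, so use Google’s full published list.

```
# squid.conf fragment: do not intercept TLS for Chrome OS system domains
acl google_noinspect ssl::server_name .gstatic.com .googleapis.com
ssl_bump splice google_noinspect
ssl_bump bump all
```

Other proxies have equivalent “bypass by hostname” mechanisms; the point is that the exemption must key off the domain name, not the IP.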

Quick test

Once the network is set up, import your custom root CA cert into the browser using the certificate manager under “Authorities” and make sure you enable “Trust this certificate for identifying websites.” Then go to any website which you think should be intercepted and check whether the browser throws any error. Even if it doesn’t, inspect the certificate details and confirm that the certificate was signed by your webfilter.

Broader test

Once the tests confirm that everything is working as expected, it’s time to do a broader test using the management console. To prepare for this test, I would recommend picking a small set of users who are OK with a brief interruption (in case something goes wrong) and are willing to provide you with detailed feedback to help you debug issues.

In your admin panel, go to Chrome’s “Advanced settings” section and then “Networks”. Pick the OU where all of your test users are and then click on the “Manage Certificates” button in the top right corner.

Upload your certificate and check the box for “Use the certificate as an HTTPS certificate authority”. 

The example on the right shows my setup, where I’m using Zscaler’s cloud-based webfilter.

The final test to make sure this is good would be to move these users to a network with no direct network access. Have them be forced to go through the proxy/webfilter and see if anything breaks.

Let this configuration sit for a few days or weeks and collect feedback on whether users noticed any other side effects. For example, make sure devices are getting updates (which is critical) and that new users can be added at any time.

Complete the transition

Once everything has been tested, apply the certificate to more and more OUs until the transition is complete.


There are a few caveats you should watch out for:
  1. Even though this policy is being applied as a user policy, it will only work on devices which are enrolled to the same domain. This is one of the most common reasons for the feature not working.  This also means that if the device was unenrolled, it may cause network connectivity failures.
  2. Since this is a user policy, other users using the same device will not get this feature automatically. Each user has to be moved into an OU where this certificate is installed.

More info

Chrome device fleet reporting using APIs

Chrome devices have been a huge success in places like schools, where students and teachers want the mobility and price point of a tablet but the usability of a laptop. Like everything else, Google focuses on scale, and part of the Chrome device offering to enterprises and schools is the set of tools around the devices to manage the fleet.
Chrome device management can be centralized, which allows admins not only to manage apps on the devices but also to push complex network and system settings with a few keystrokes. And that works the same way for a customer with 10 devices and one with 20,000.

Recently Google released a new batch of APIs called the Admin SDK, which includes some new APIs to discover and manage devices in the fleet. Using these APIs, admins can not only get a list of the active devices they have, they can also find out if devices are having update issues. As an example of how to use this API, I’ve published a sample script called “ChromeOSDeviceStats” which I wrote for my own domain. It can be used to quickly get high-level stats of the devices in a domain: when they were enrolled, what version they are on, and which orgs they are part of.

The script could be further adapted to automatically send email reports like the one below when certain conditions are met. For example, if a school is interested in knowing about all recent enrollments, it could enhance this script to send that report.

Total devices : 31
Total active : 31
Total in Dev mode : 1
Total devices enrolled in last 2 days : 0
Total devices enrolled in last 7 days : 1
Total devices enrolled in last 1 month : 2
Total devices enrolled in last 1 year : 6
Distribution of devices across models: 
                          Alex : 5 
                          Google_Stumpy : 2 
                          Google_Link : 6 
                          Google_Snow : 1 
                          Google_Lumpy : 8 
Distribution of devices across versions: 
                          24 : 1 
                          25 : 5 
                          26 : 2 
                          27 : 5 
                          21 : 6 
                          23 : 3 
Distribution of devices across orgUnits: 
                          /Eng : 3 
                          /hack : 1 
                          / : 31 
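A report like the one above can also be approximated without any API code: export the device list from the admin console and aggregate it with a few lines of shell. This is only a sketch; the CSV columns (serial, model, version, orgUnit) are hypothetical, so adjust the column index to whatever your actual export contains.

```shell
# Build a tiny sample export (stand-in for a real device-list CSV)
cat > /tmp/devices.csv <<'EOF'
serial,model,version,orgUnit
FX1,Google_Link,27,/Eng
FX2,Google_Lumpy,26,/
FX3,Google_Link,27,/hack
EOF

# Count devices per Chrome OS version (column 3)
awk -F, 'NR > 1 { count[$3]++ } END { for (v in count) print v, ":", count[v] }' /tmp/devices.csv | sort
```

The same awk pattern works for the model and orgUnit distributions; just change the column number.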

Openvpn in EC2 for Chromebooks : Part 2

[ Update: take a look at this write up as well ]

ChromeOS has a minimalistic design that does a fairly good job of hiding the complex internals of the operating system. But deep inside it still runs Linux and has a full-blown openvpn client. In this post I’ll show you how to use the “ONC” (Open Network Configuration) format to configure the OpenVPN client inside ChromeOS. This file is very similar to an .ovpn file.
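For context, a minimal ONC payload for an OpenVPN connection looks roughly like the sketch below. Field names follow the ONC specification as I understand it; the GUID, host, and certificate reference are placeholders, and the scripts described in this post generate a complete version for you.

```
{
  "Type": "UnencryptedConfiguration",
  "NetworkConfigurations": [{
    "GUID": "{a-unique-guid}",
    "Name": "My EC2 VPN",
    "Type": "VPN",
    "VPN": {
      "Type": "OpenVPN",
      "Host": "your-ec2-hostname.compute-1.amazonaws.com",
      "OpenVPN": {
        "Port": 1194,
        "Proto": "udp",
        "ServerCARef": "{guid-of-ca-cert}"
      }
    }
  }]
}
```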

The core scripts which did the cert conversion and created the sample ONC file were contributed by Ralph, who read the ONC documentation by himself. If there is a bleeding-edge chromebook user, he is the best example I can think of.

These scripts are now published on github, and here is a step-by-step guide on how to use them.

Step 1: Launch a new amazon instance ( Based on Amazon Linux AMI )

  • Pick defaults for everything ( Use t1.micro for the cheapest instance )
  • If you don’t already have a keypair, create one yourself and upload your public key to amazon

Step 2: Update security group you used. Allow UDP:1194 incoming.

  • Open 1194 UDP incoming 
  • Open 22 TCP incoming

Step 3: Find the IP address of the new instance.

  • Find the “Public DNS” address. This is the address we will SSH to.

Step 4: SSH into the server

  • If you only have a chromebook, use this extension to initiate ssh
  • Upload both private and public key to this extension.
  • Fill in the hostname and username (ec2-user)

Step 5: Run the quick setup script

curl https://raw.github.com/royans/ec2_chromeos_openvpn/master/quicksetup.sh > quicksetup.sh; 
chmod +x quicksetup.sh;
sudo ./quicksetup.sh email_address@blogofy.com
  • These three lines download the setup script and launch it.
  • Remember to put in your email address instead of the one listed here
  • This step may take two to three minutes before it prompts you for anything

Step 6: Select default values

  • When it does prompt you, just choose the defaults
  • If it asks you y/n questions… just select ‘y’
  • It will ask you for a password a few times… just press enter (which sets no password on the keys)
  • Remember this is a proof of concept; you should customize it before using it for real stuff

Step 7: Wait for the email. Download the ONC file attached.

Step 8: Upload ONC file into chromebook

  • Make sure it says “ONC file successfully parsed” after the import.
  • [Advanced users: /var/log/ui/ui.LATEST will have parse errors if you want to investigate a failed import]

Step 9: Try to connect to the openvpn server

  • Just click on “Connect”

Step 10: Connected

  • At this point you should see a solid (not blinking) VPN badge on the wifi icon.

Step 11: Verify that your traffic is being routed through amazon

  • A quick check is to visit a what’s-my-IP service and confirm that it reports your EC2 instance’s public address.

Patches: Pull or Push?

Most people prefer to disagree with the masses on whether they like their eggs sunny side up or scrambled. The way you get your patches is no different. If you ask an IT administrator (the person applying patches in most corporate organizations), they will tell you horror stories of how patches can go wrong and will happily give you examples of why every patch needs to be individually tested before deploying.
But my dad, for example, doesn’t care about patches, and while he won’t go out of his way to install a patch, he may be OK with patches being pushed to him automatically.

This debate reminds me of another interesting debate in the Web-Operations world about “continuous deployment”. In that case the question was whether applications should be deployed in scheduled releases (for example, every quarter) or released as things get developed and pushed.

If you think about this a little more, it becomes clear that the developers who build the patches are the ones who first need to be ready for “continuous deployment”. Confidence in every patch is directly proportional to the robustness of the test infrastructure… the unit tests and integration tests associated with it. For products where there are test gaps, it makes sense for IT administrators to constantly monitor and test every single patch before it’s deployed.

So that brings us back to “Pull or Push”. Because of the increase in the number of recent attacks, I am now very conscious about what products I use on a daily basis and try very hard to pick those which can auto-update without nagging me. They are usually the ones with test infrastructure robust enough to allow “continuous deployment”… which in turn means they usually have better test coverage of their products and can patch something bad (like a 0-day) very quickly and with confidence.

I do understand that mistakes in ‘push’ based patches can be very expensive, but push is still more secure for end users when it comes to privacy and security.

The reason I was thinking about this is that Adobe released a patch today… hope you noticed. I just wish it would auto-update the devices where I have it installed… it’s easy to forget to update devices, and in general not doing silent auto-updates makes me worried that they are not super confident about their test infrastructure.

Capturing wifi traffic of one station from another

This is more of an embarrassing tale than a real how-to document. But I found this interesting enough that I don’t mind sharing it.

A couple of weeks ago I was tasked with capturing wifi traffic from a device which didn’t have any capture software built in, and I wondered how one would do it.

I have used sniffing tools on my Mac to passively sniff activity on access points around me. Because I’ve always tested such tools in places with dozens of access points and multiple saturated channels, I always assumed that wifi stations (laptops) frequently switch channels. I also assumed that APs (access points) which are set up to select channels automatically are designed to switch channels anytime they find a better (less noisy) frequency to provide service on.

And because of those incorrect assumptions, I concluded that sniffing another wifi station would be difficult, because it would be impossible to dynamically change the channel of a second wifi station to follow the first one and correctly sniff all the packets.

After a short discussion with a colleague, I found out that most wifi stations don’t really switch APs unless the signal-to-noise ratio gets too bad, and most APs never change channels once they are fully initialized.

In the end, to sniff one device, all I had to do was keep the second device close to the first one and make sure the second one joined the same channel as the first. For my tests I used open wifi APs, which were easier to capture/decode. At this point, if your hardware is capable of promiscuous mode and you have the right capture software, you should be able to put in a filter with the MAC address of the device you want to capture and initiate the process.

Chrome Frame – How to add command line parameters

Chrome frame intentionally does its work without getting in the way of the user. This sometimes makes things harder to debug. For example, how can one debug an issue if chrome frame doesn’t even launch? Apparently there is a flag for that, but you have to know how to enable it. Here are the steps.

  1. Make sure chrome frame is installed.
  2. We can enable startup flags for dumping debug logs using a policy called AdditionalLaunchParameters
  3. If this is just for one desktop, I recommend doing a registry edit (it can be pushed via GPO as well)
  4. Add a REG_SZ value “AdditionalLaunchParameters” under “Software\Policies\Google\Chrome” with the data “--enable-logging --v=1” (also documented here and mentioned here)  [ Attachment 1 ]
  5. Next, kill the IE browser and make sure chrome is also dead by checking taskmgr
  6. Restart IE and go to “gcf:about:version” and confirm that the parameters you added show up next to “Command Line:”. If this doesn’t work… skip this step and go to the next step anyway.
  7. Under your “Application Data” folder (it’s inside “Documents and Settings”), search for a file named chrome_debug.log [ It’s usually in “Application Data\Google\Chrome Frame\User Data\IEXPLORE” ]
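The registry change from step 4 can also be captured as a .reg file for distribution. The key path below is my reconstruction based on Chrome’s policy templates, so double-check it against the policy documentation for your version.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Policies\Google\Chrome]
"AdditionalLaunchParameters"="--enable-logging --v=1"
```

Double-clicking the file (or importing it with regedit) applies the value machine-wide.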

Chromebooks with Openvpn on EC2

Chromebooks are perfect companions for travel. They are light, secure, and one generally doesn’t have to worry about data theft in case they lose the device. But surfing from hotels and coffee shops is another story. While most sites are on SSL, there are enough websites which are not… and even the ones which support SSL sometimes forget to use it for sensitive data. Which is why extensions like “HTTPS Everywhere” are highly recommended.

If I could, I’d pay a few cents for an extra level of privacy when using these public wifi networks. In this post I’ll document how you can quickly set up an openvpn server on an EC2 instance to do exactly this for your chromebook.


For this setup you will need:

  1. A working EC2 account
  2. A working key-pair (required to ssh into the EC2 instance)
  3. Chromebook with R23 or later

Step 1 – Launch an Amazon Linux AMI ( I used 32-bit for my setup… it’s the cheapest ). Pick all the default options and pay attention to which “Security Group” you select. It will most probably be called “default”

Step 2 – Edit the security group used by the instance and make sure 1194 udp is added to “inbound” port list.

Step 3 – SSH into the EC2 instance using your key ( you could also use this extension if you have the ‘identity’ file instead of the .pem )

ssh -i my_key.pem ec2-user@ec2-75-101-188-186.compute-1.amazonaws.com

Step 4 – Add a user, set password and update the server

sudo bash 

useradd temp 

echo 'my_password' | passwd temp --stdin

yum -y update

Step 5 – Install/start openvpn server with basic options

# Install  

yum -y install openvpn

yum -y install mailx  

# Create fresh keys

mkdir -p /etc/openvpn/easy-rsa/

cp /usr/share/openvpn/easy-rsa/2.0/* /etc/openvpn/easy-rsa/

cd /etc/openvpn/easy-rsa/

source vars
export KEY_CITY="test-city"
export KEY_ORG="Example Company"
export KEY_EMAIL="royans@example.com"
export KEY_CN=changemenow
export KEY_NAME=changemenow
export KEY_OU=changemenow
./pkitool --initca
./pkitool --server server
./build-key-pkcs12 hostname
# Send a copy of ca cert by mail
mail -s ca.cert -a /etc/openvpn/easy-rsa/keys/ca.crt royans@example.com <<EOF
This is the cert file for this setup. Install this in the authorities tab in the chrome os device from where vpn needs to be initiated.
EOF
# Create a server.conf file
cat > /etc/openvpn/server.conf <<EOF
port 1194
proto udp
dev tun
ca /etc/openvpn/easy-rsa/keys/ca.crt
cert /etc/openvpn/easy-rsa/keys/server.crt
key /etc/openvpn/easy-rsa/keys/server.key
# This file should be kept secret
dh /etc/openvpn/easy-rsa/keys/dh1024.pem
ifconfig-pool-persist ipp.txt
keepalive 10 120
status openvpn-status.log
verb 6
plugin /usr/lib/openvpn/plugin/lib/openvpn-auth-pam.so login
# DNS servers pushed to clients (Google Public DNS shown as an example)
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
EOF
#Start the service
/etc/init.d/openvpn start

Step 6 – Setup basic source nat

echo 1 > /proc/sys/net/ipv4/ip_forward

IP=`ifconfig eth0 | grep 'inet addr' | awk '{print $2}' | cut -d':' -f2`

iptables -F; iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to $IP
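Note that writing into /proc only enables IP forwarding until the next reboot. To make it persistent you can use the standard Linux sysctl mechanism (not part of the original script):

```
# /etc/sysctl.conf -- persist IP forwarding across reboots
net.ipv4.ip_forward = 1
```

After editing the file, load it with `sudo sysctl -p`.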

Step 7 – Install cert

  • In step 5 we sent ca.crt to the user’s email address. The user should download that cert and install it under the authorities tab of the chromebook [ chrome://chrome/settings/certificates#cert ]
  • There is a known issue that new certs of this type don’t take effect until the user logs out and logs in again
  • After logging in again, add the new vpn
    • chrome://chrome/settings/
    • Click on “Add connection”
    • Click on “Add private network”
    • In “Server hostname:” put in the external IP address or the name of your EC2 instance. For example ec2-54-245-135-132.compute-1.amazonaws.com
    • In “Server CA certificate” you should see a new certificate called “changemenow”
    • For Username/Password use the credentials you set up in step 4

Step 8 – Done

  • You should be able to test by pinging www.google.com from the crosh terminal

Certificate based authentication

  • If your goal is to set up certificate based authentication, you will have to do a few extra steps
    • Along with installing ca.crt in the authorities tab, you will also have to install the hostname.p12 certificate you created in “Step 5” onto your chromebook ( look in the keys directory )
    • In the “server.conf” file, make two changes involving the lines below. Comment out the first one, which disables PAM based authentication; make sure the second one is absent, since leaving it out is what enforces certificate based authentication.
      • plugin /usr/lib/openvpn/plugin/lib/openvpn-auth-pam.so login
      • client-cert-not-required
    • When you specify the client configuration, make sure you specify both “Server CA certificate” and “User certificate”.
    • Type something into the username field to keep the UI satisfied… if you haven’t enabled PAM based authentication, it will have no effect on the login process.
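Concretely, the relevant part of server.conf for certificate-based authentication would end up looking like this sketch (matching the sample config from Step 5):

```
# PAM username/password auth disabled:
#plugin /usr/lib/openvpn/plugin/lib/openvpn-auth-pam.so login

# and no "client-cert-not-required" line anywhere, so clients must
# present a certificate signed by the CA in ca.crt
```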

Next steps

  • Note that this is the simplest Openvpn setup. There are many ways you can improve on it, which I highly recommend if you plan to keep this instance running for more than a few minutes or hours.
    • You could use client certificates instead of just username/password
    • You should consider tightening firewall rules on the server
    • If openvpn is running as root on the server, please switch it to something less privileged like ‘nobody’.
  • You should replace the passwords and names I used as examples above.
  • It should be trivial for someone to write a script to do this… if you do, please let me know and I’ll gladly link it from here
  • While EC2 instances cost only a few cents per hour… please do remember to shut down when you are done, or you will get an unexpected bill at the end of the month.

How software defined radios (SDRs) will change security

Locks were considered very secure until the first lock pickers got their hands on them. Phone systems were secure until John Draper discovered a use for the toy whistle in a Captain Crunch cereal box. In fact, even the creators of the internet didn’t think much about security when it was initially designed. It’s the commoditization of technology which sometimes brings about the worst of all security bugs. And I believe the next round of changes is coming very soon.

Until very recently, radios were built for a purpose and rarely did more than what they were supposed to do. Think of them like the early computers which took up a whole room and could do only one type of job per computer. Today’s computers can do all kinds of stuff and, unlike the older versions, don’t need to be rewired physically to take on a new job. Everything is done using software.

Wikipedia does a good job of defining what this is.

A software-defined radio system, or SDR, is a radio communication system where components that have been typically implemented in hardware (e.g. mixers, filters, amplifiers, modulators/demodulators, detectors, etc.) are instead implemented by means of software on a personal computer or embedded system. While the concept of SDR is not new, the rapidly evolving capabilities of digital electronics render practical many processes which used to be only theoretically possible.

A group of individuals figured out that some TV tuner cards can not only be reprogrammed to listen to a wider range of frequencies but can be driven entirely by software, which makes them look like an all-purpose radio receiver. Interestingly, that USB tuner costs only about USD 20.

PaulDotCom mentioned SDRs in one of their talks as well, but went further and pointed out that SDRs could also be used to send signals, which makes them significantly more dangerous. One of the worst examples given was that an SDR could be reprogrammed to generate fake transponder signals. They pointed out that modern aircraft do listen for transponder signals from other nearby aircraft, and some are programmed to take automatic, sudden evasive measures when they detect another aircraft close by.

The point is not that terrorists can attack airplanes this way… they could do that today by buying and reprogramming a real transponder. The point is that this technology will become so cheap that anyone will be able to do it with just a computer and a simple SDR transmitter.

I’m not really sure how good transponders are with respect to security… maybe there is a good, secure way of authenticating the transmitter, in which case all is good. But if that’s not happening today, it will change at some point, when this technology becomes as easy to disrupt as DNS is today.

Chrome: Fully sandboxed flash engine protects users

The truth is that not everyone gets Chrome updates as soon as they are released. And as is usually the case, a lot of holes get discovered only after they are exploited in the field. Google has finally announced a fully sandboxed flash engine which prevents malicious code running within the flash component from fully exploiting the system. It should keep you safe from unexpected security threats until an update arrives.

Google says sandboxing is now available for Flash “with this release” of Chrome. The most recent version, Chrome 23, arrived last week, which is when the four-year-old browser received its usual dose of security fixes (14 in total), as well as a new version of Adobe Flash.

Yet the company today wanted to underline that Chrome’s built-in Flash Player on Mac now uses a new plug-in architecture which runs Flash inside a sandbox that’s as strong as Chrome’s native sandbox, and “much more robust than anything else available.” This is great news for Mac users since Flash is so very widely used, and thus is a huge target for cybercriminals pushing malware.

Malware writers love exploiting Flash for the same reasons they love Java: it’s a cross-platform plugin. Such an attack vector allows them to target more than one operating system, more than one browser, and thus more than one type of user. What Google is doing here is minimizing the chances that its users, namely those using Chrome, will get infected by such threats.

Top security threats from Oracle, Adobe and Apple

Kaspersky Labs came out with its Q3 report and, not surprisingly, Oracle and Adobe have some of the worst holes impacting the largest number of users. What I was more surprised about was that Apple made it to that list even though Microsoft didn’t explicitly get named. The map below shows the % of users infected.

Also found it interesting that iTunes has a lot of holes. Who would have thunk it.

IT Threat Evolution: Q3 2012 – Securelist