Happy New Year, Happy Freebie, and Happy Giveaway!

Happy New Year, bloggy friends!  Not sure what your 2012 was like, but mine was full of surprises.  Some fantastic ones, some rough ones, some interesting ones, and some incredibly fun ones!   More than anything, I find myself THANKFUL and BLESSED for my family, friends, faith, job, health, and surprises...it keeps life interesting. 



Happy Freebie!  Here's a New Year's Resolution FREEBIE for your students!  Teach them what a resolution is, challenge them to set a goal for 2013 and think about what steps it will take to get there, as well as celebrate their successes for 2012!  Click the image to download.  





Happy Giveaway!  I'm so excited to start 2013 off with 600+ followers on my blog.  What better way to celebrate than to give a few things away?!  If you're like me, shopping on TpT is one of your favorite hobbies!  Maybe the $30.00 will help you grab some great products for the New Year!  In addition to that, I'm throwing in 6 items of your choice from my TpT or TN store...anything goes!  :-)   There are LOTS of ways to enter and lots of chances to win!  Good luck!
Bananas for New Year's Fun,

Certificate pinning in Android 4.2

A lot has happened in the Android world since our last post, with new devices being announced and going on and off sale. Most importantly, however, Android 4.2 has been released and made its way to AOSP. It's an evolutionary upgrade, bringing various improvements and some new user and developer features. This time around, security-related enhancements made it into the what's new list, and there are quite a lot of them. The most widely publicized one has been, as expected, the one users may actually see -- application verification. It recently got an in-depth analysis, so in this post we will look into something less visible, but nevertheless quite important -- certificate pinning.

PKI's trust problems and proposed solutions

In the highly unlikely case that you haven't heard about it, the trustworthiness of the existing public CA model has been severely compromised in the past couple of years. It has been suspect for a while, but recent high-profile CA security breaches have brought this problem into the spotlight. Attackers managed to issue certificates for a wide range of sites, including Windows Update servers and Gmail. Not all of those were used (or at least not detected) in real attacks, but the incidents showed just how much of current Internet technology depends on certificates. Fraudulent ones can be used for anything from installing malware to spying on Internet communication, all while fooling users into thinking that they are using a secure channel or installing a trusted executable. And better security for CA's is not really a solution: major CA's have willingly issued hundreds of certificates for unqualified names such as localhost, webmail and exchange (here is a breakdown, by number of issued certificates). These could enable eavesdropping on internal corporate traffic by using the certificates for a man-in-the-middle (MITM) attack against any internal host accessed using an unqualified name. And of course there is also the matter of compelled certificate creation, where a government agency could compel a CA to issue a false certificate to be used for intercepting secure traffic (and all this may be perfectly legal). 

Clearly the current PKI system, which is largely based on a pre-selected set of trusted CA's (trust anchors), is problematic, but what are some of the actual problems? There are different takes on this one, but for starters, there are too many public CA's. As this map by the EFF's SSL Observatory project shows, there are more than 650 public CA's trusted by major browsers. Recent Android versions ship with over one hundred (140 for 4.2) trusted CA certificates, and until ICS the only way to remove a trusted certificate was a vendor-initiated OS OTA. Additionally, there is generally no technical restriction on what certificates CA's can issue: as the Comodo and DigiNotar attacks have shown, anyone can issue a certificate for *.google.com (name constraints don't apply to root CA's and don't really work for a public CA). Furthermore, since CA's don't publicize what certificates they have issued, there is no way for site operators (in this case Google) to know when someone issues a new, possibly fraudulent, certificate for one of their sites and take appropriate action (the Certificate Transparency standard aims to address this). In short, with the current system, if any of the built-in trust anchors is compromised, an attacker could issue a certificate for any site, and neither the users accessing it, nor the owner of the site would notice. So what are some of the proposed solutions? 

Proposed solutions range from the radical (scrap the whole PKI idea altogether and replace it with something new and better, with DNSSEC a usual favourite), through the moderate (use the current infrastructure but do not implicitly trust CA's), to the evolutionary (maintain compatibility with the current system, but extend it in ways that limit the damage of a CA compromise). DNSSEC is still not universally deployed, although the key TLD domains have already been signed. Additionally, it is inherently hierarchical and actually more rigid than PKI, so it doesn't really fit the bill too well. Other even remotely viable solutions have yet to emerge, so we can safely say that the radical path is currently out of the picture. Moving towards the moderate side, some people suggest the SSH model, in which no sites or CA's are initially trusted, and users decide what sites to trust on first access. Unlike with SSH, however, the number of sites you access directly or indirectly (via CDN's, embedded content, etc.) is virtually unlimited, and user-managed trust is quite unrealistic. In a similar vein, but much more practical, is Moxie Marlinspike's (of sslstrip and CloudCracker fame) Convergence. It is based on the idea of trust agility, a concept he introduced in his SSL And The Future Of Authenticity talk (and related blog post). It both abolishes the browser (or OS) pre-selected trust anchor set, and recognizes that users cannot possibly independently make trust decisions about all the sites they visit. Trust decisions are delegated to a set of notaries that can vouch for a site by basically confirming that the certificate you receive from a site is one they have seen before. If multiple notaries point out the same certificate as correct, users can be reasonably sure that it is genuine and therefore trustworthy. Convergence is not a formal standard, but it was released as actual working code, including a Firefox plugin (client) and server-side notary software. While this system is promising, the number of available notaries is currently limited, and Google has publicly stated that it won't add it to Chrome; it cannot currently be implemented as an extension either (Chrome lacks the necessary API's to let plugins override the default certificate validation module).

That leads us to the current evolutionary solutions, which have been deployed to a fairly large user base, mostly courtesy of the Chrome browser. One is certificate blacklisting, which is more of a band-aid solution: in addition to removing compromised CA certificates from the trust anchor set with a browser update, it also explicitly refuses to trust their public keys, in order to cover the case where they are manually added to the trust store again. Chrome added blacklisting around the time Comodo was compromised, and Android has had this feature since the original Jelly Bean release (4.1). The next one, certificate pinning (more accurately, public key pinning), takes the converse approach: it whitelists the keys that are trusted to sign certificates for a particular site. Let's look at it in a bit more detail.

Certificate pinning

Pinning was introduced in Google Chrome 13 in order to limit the CA's that can issue certificates for Google properties. It actually helped discover the MITM attack against Gmail, which resulted from the DigiNotar breach. It is implemented by maintaining a list of public keys that are trusted to issue certificates for a particular DNS name. The list is consulted when validating the certificate chain for a host, and if the chain doesn't include at least one of the whitelisted keys, validation fails. In practice the browser keeps a list of SHA1 hashes of the SubjectPublicKeyInfo (SPKI) field of trusted certificates. Pinning the public keys instead of the actual certificates allows for updating host certificates without breaking validation and requiring pinning information update. You can find the current Chrome list here.
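To get a feel for what is being pinned, here is how such a hash could be computed from a certificate in Java (a sketch: Chrome's actual list stores the digests Base64-encoded, with an algorithm prefix such as 'sha1/'):

import java.security.MessageDigest;
import java.security.cert.X509Certificate;

// The SPKI 'pin' is just a digest over the DER-encoded SubjectPublicKeyInfo
// field, which getEncoded() on the certificate's public key returns directly.
static byte[] getSpkiHash(X509Certificate cert) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    return md.digest(cert.getPublicKey().getEncoded());
}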

As you can see, the list now pins non-Google sites as well, such as twitter.com and lookout.com, and is rather large. Including more sites will only make it larger, and it is quite obvious that hard-coding pins doesn't really scale. A couple of new Internet standards have been proposed to help solve this scalability problem: Public Key Pinning Extension for HTTP (PKPE) by Google and Trust Assertions for Certificate Keys (TACK) by Moxie Marlinspike. The first one is simpler and proposes a new HTTP header (Public-Key-Pins, PKP) that holds pinning information, including public key hashes, pin lifetime and whether to apply pinning to subdomains of the current host. Pinning information (or simply 'pins') is cached by the browser and used when making trust decisions until it expires. Pins are required to be delivered over a secure (TLS) connection, and the first connection that includes a PKP header is implicitly trusted (or optionally validated against pins built into the client). The protocol also supports an endpoint to report failed validations to, via the report-uri directive, and allows for a non-enforcing mode (specified with the Public-Key-Pins-Report-Only header), where validation failures are reported, but connections are still allowed. This makes it possible to notify host administrators about possible MITM attacks against their sites, so that they can take appropriate action. The TACK proposal, on the other hand, is somewhat more complex and defines a new TLS extension (TACK) that carries pinning information signed with a dedicated 'TACK key'. TLS connections to a pinned hostname require the server to present a 'tack' containing the pinned key and a corresponding signature over the TLS server's public key. Thus both pinning information exchange and validation are carried out at the TLS layer. In contrast, PKPE uses the HTTP layer (over TLS) to send pinning information to clients, but also requires validation to be performed at the TLS layer, dropping the connection if validation against the pins fails. Now that we have an idea how pinning works, let's see how it's implemented on Android.

Certificate pinning in Android

As mentioned at the beginning of the post, pinning is one of the many security enhancements introduced in Android 4.2. The OS doesn't come with any built-in pins, but instead reads them from a file in the /data/misc/keychain directory (where user-added certificates and blacklists are stored). The file is called, you guessed it, simply pins and is in the following format: hostname=enforcing|SPKI SHA512 hash, SPKI SHA512 hash,.... Here enforcing is either true or false and is followed by a list of SPKI hashes (SHA512) separated by commas. Note that there is no validity period, so pins are valid until deleted. The file is used not only by the browser, but system-wide, by virtue of pinning being integrated in libcore. In practice this means that the default (and only) system X509TrustManager implementation (TrustManagerImpl) consults the pin list when validating certificate chains. However, there is a twist: the standard checkServerTrusted() method doesn't consult the pin list. Thus any legacy libraries that don't know about certificate pinning will continue to function exactly as before, regardless of the contents of the pin list. This has probably been done for compatibility reasons, and is something to be aware of: running on 4.2 doesn't necessarily mean that you get the benefit of system-level certificate pins. The pinning functionality is exposed to third-party libraries or SDK apps via the new X509TrustManagerExtensions SDK class. It has a single method, List<X509Certificate> checkServerTrusted(X509Certificate[] chain, String authType, String host), that returns a validated chain on success or throws a CertificateException if validation fails. Note the last parameter, host. This is what the underlying implementation (TrustManagerImpl) uses to search the pin list for matching pins. If one is found, the public keys in the chain being validated are checked against the hashes in the pin entry for that host. If none of them matches, validation fails and you get a CertificateException. So what part of the system uses the new pinning functionality then? The default SSL engine (JSSE provider), namely its client handshake (ClientHandshakeImpl) and SSL socket (OpenSSLSocketImpl) implementations. They check their underlying X509TrustManager, and if it supports pinning, perform additional validation against the pin list. If validation fails, the connection won't be established, thus implementing pin validation at the TLS layer as required by the standards discussed in the previous section. We now know what the pin list is and who uses it, so let's find out how it is created and maintained.
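For example, a third-party HTTP stack running on 4.2 could opt in to pin checking with something along these lines (a sketch: the 'RSA' authType and the method name are placeholders, and error handling is omitted):

// 'chain' is the certificate chain received from the server,
// 'host' the DNS name we connected to
static List<X509Certificate> checkWithPins(X509Certificate[] chain, String host)
        throws Exception {
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(
            TrustManagerFactory.getDefaultAlgorithm());
    tmf.init((KeyStore) null); // use the system trust store
    X509TrustManager tm = (X509TrustManager) tmf.getTrustManagers()[0];

    // wrap the default trust manager to get pin-aware validation (API level 17)
    X509TrustManagerExtensions tmx = new X509TrustManagerExtensions(tm);
    // throws a CertificateException if chain validation fails,
    // or if the chain doesn't match the pins recorded for this host
    return tmx.checkServerTrusted(chain, "RSA", host);
}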

First off, at the time of this writing, Google-managed (on Nexus devices) JB 4.2 installations have an empty pin list (i.e., the pins file doesn't exist). Thus certificate pinning on Android has not been widely deployed yet. Eventually it will be, but the current state of affairs makes it easier to play with, because restoring to factory state requires simply deleting the pins file and associated metadata (root access required). As you might expect, the pins file is not written directly by the OS. Updating it is triggered by a broadcast (android.intent.action.UPDATE_PINS) that contains the new pins in its extras. The extras contain the path to the new pins file, its new version (stored in /data/misc/keychain/metadata/version), a hash of the current pins and a SHA512withRSA signature over all the above. The receiver of the broadcast (CertPinInstallReceiver) will then verify the version, hash and signature, and if they are valid, atomically replace the current pins file with the new content (the same procedure is used for updating the premium SMS numbers list). Signing the new pins ensures that they can only be updated by whoever controls the private signing key. The corresponding public key used for validation is stored as a system secure setting under the "config_update_certificate" key (usually in the secure table of the /data/data/com.android.providers.settings/databases/settings.db database). Just like the pins file, this value currently doesn't exist, so it's relatively safe to install your own key in order to test how pinning works. Restoring to factory state requires deleting the corresponding row from the secure table. This basically covers the current pinning implementation in Android; it's now time to actually try it out.

Using certificate pinning

To begin with, if you are considering using pinning in an Android app, you don't need the latest and greatest OS version. If you are connecting to a server that uses a self-signed or a private CA-issued certificate, chances are you are already using pinning. Unlike a browser, your Android app doesn't need to connect to practically every possible host on the Internet, but only to a limited number of servers that you know and have control over (limited control in the case of hosted services). Thus you know in advance who issued your certificates and only need to trust their key(s) in order to establish a secure connection to your server(s). If you are initializing a TrustManagerFactory with your own keystore file that contains the issuing certificate(s) of your server's SSL certificate, you are already using pinning: since you don't trust any of the built-in trust anchors (CA certificates), if any of those get compromised your app won't be affected (unless it also talks to affected public servers as well). If you, for some reason, need to use the default trust anchors as well, you can define pins for your keys and validate them after the default system validation succeeds. For more thoughts on this and some sample code (it doesn't support ICS and later, but there is a pull request with the required changes), refer to this post by Moxie Marlinspike. Update: Moxie has repackaged his sample pinning code in an easy-to-use standalone library. Update 2: His version uses a static, app-specific trust store. Here's a fork that uses the system trust store, both on pre-ICS (cacerts.bks) and post-ICS (AndroidCAStore) devices.
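In code, this kind of 'pinning' boils down to initializing your SSL socket factory with your own trust store, roughly like this (a sketch: R.raw.mytruststore and the password are placeholders for a BKS keystore you bundle with the app, containing only your issuing certificate(s)):

KeyStore trustStore = KeyStore.getInstance("BKS");
InputStream in = context.getResources().openRawResource(R.raw.mytruststore);
try {
    trustStore.load(in, "truststorepass".toCharArray());
} finally {
    in.close();
}

TrustManagerFactory tmf = TrustManagerFactory.getInstance(
        TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);

SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, tmf.getTrustManagers(), null);

// Sockets from this factory trust only certificates issued by our own CA(s):
// a compromised public CA cannot MITM this connection.
HttpsURLConnection conn = (HttpsURLConnection)
        new URL("https://myserver.example.com").openConnection();
conn.setSSLSocketFactory(sslContext.getSocketFactory());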

Before we (finally!) start using pinning in 4.2 a word of warning: using the sample code presented below both requires root access and modifies core system files. It does have some limited safety checks, but it might break your system. If you decide to run it, make sure you have a full system backup and proceed with caution.

As we have seen, pins are stored in a simple text file, so we could just write one up and place it in the required location. It would be picked up and used by the system TrustManager, but that is not much fun and is not how the system actually works. We will go through the 'proper' channel instead, by creating and sending a correctly signed update broadcast. To do this, we first need to create and install a signing key. The sample app has one embedded, so you can just use that, or generate and load a new one using OpenSSL (convert it to PKCS#8 format to include in Java code). To install the key we need the WRITE_SECURE_SETTINGS permission, which is only granted to system apps, so we must either sign our test app with the platform key (on a self-built ROM) or copy it to /system/app (on a rooted phone with stock firmware). Once this is done, we can install the key by updating the "config_update_certificate" secure setting:

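// Requires the WRITE_SECURE_SETTINGS permission (see above); the value
// appears to be the Base64-encoded, DER-encoded X.509 certificate.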
Settings.Secure.putString(ctx.getContentResolver(), "config_update_certificate", 
"MIICqDCCAZAC...");

If this is successful, we then proceed to constructing our update request. This requires reading the current pin list version (from /data/misc/keychain/metadata/version) and the current pins file content. Initially both should be empty, so we can just start off with 0 and an empty string. We can then create our pins file, concatenate it with the above and sign the whole thing before sending the UPDATE_PINS broadcast. For updates, things are a bit more tricky, since the metadata/version file's permissions don't allow reading by a third-party app. We work around this by launching a root shell to get the file contents with cat, so don't be alarmed if you get a 'Grant root?' popup from SuperSU or its brethren. Hashing and signing are pretty straightforward, but creating the new pins file merits some explanation.
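For reference, a createSignature() helper along these lines should work (a sketch: the exact byte layout the receiver verifies must match what CertPinInstallReceiver expects, so check the AOSP unit test code; privateKey is the key pair matching the certificate installed above):

// Sign the new pins content, its version and the hash of the current
// pins file with SHA512withRSA, mirroring what the receiver verifies.
static String createSignature(String content, String version, String requiredHash)
        throws GeneralSecurityException {
    Signature sig = Signature.getInstance("SHA512withRSA");
    sig.initSign(privateKey);
    sig.update(content.getBytes());
    sig.update(version.getBytes());
    sig.update(requiredHash.getBytes());
    return Base64.encodeToString(sig.sign(), Base64.NO_WRAP);
}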

To make it easier to test, we create (or append to) the pins file by connecting to the URL specified in the app and pinning the public keys in the host's certificate chain (we'll use www.google.com in this example, but any host accessible over HTTPS should do). Note that we don't actually pin the host's SSL certificate: this is to allow for the case where the host key is lost or compromised and a new certificate is issued to the host. This is introduced in the PKPE draft as a necessary security trade-off to allow for host certificate updates. Also note that in the case of one (or more) intermediate CA certificates, we pin both the issuing certificate's key(s) and the root certificate's key. This is to allow for testing more variations, but is not something you would want to do in practice: for a connection to be considered valid, only one of the keys in the pin entry needs to be in the host's certificate chain. If that is the root certificate's key, connections to hosts with certificates issued by a compromised intermediate CA will be allowed (think hacked root CA reseller). And above all, getting and creating pins based on certificates you receive from a host on the Internet is obviously pointless if you are already the target of a MITM attack. For the purposes of this test, we assume that this is not the case. Once we have all the data, we fire the update intent, and if it checks out, the pins file will be updated (watch the logcat output to confirm). The code will look something like this (largely based on the pinning unit test code in AOSP). With that, it is time to test if pinning actually works.

URL url = new URL("https://www.google.com");
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
conn.setRequestMethod("GET");
conn.connect();

// getServerCertificates() returns Certificate[], so cast per element;
// chain[0] is the host's own certificate, chain[1] its issuing CA's
Certificate[] chain = conn.getServerCertificates();
X509Certificate cert = (X509Certificate) chain[1];
String pinEntry = String.format("%s=true|%s", url.getHost(), getFingerprint(cert));
String contentPath = makeTemporaryContentFile(pinEntry);
String version = getNextVersion("/data/misc/keychain/metadata/version");
String currentHash = getHash("/data/misc/keychain/pins");
// sign the new pins content, its version and the current list's hash
String signature = createSignature(pinEntry, version, currentHash);

Intent i = new Intent();
i.setAction("android.intent.action.UPDATE_PINS");
i.putExtra("CONTENT_PATH", contentPath);
i.putExtra("VERSION", version);
i.putExtra("REQUIRED_HASH", currentHash);
i.putExtra("SIGNATURE", signature);
sendBroadcast(i);

We have now pinned www.google.com, but how do we test whether the connection will actually fail? There are multiple ways to do this, but to make things a bit more realistic we will launch a MITM attack of sorts by using an SSL proxy. We will use the Burp proxy, which works by generating a new temporary (ephemeral) certificate on the fly for each host you connect to (if you prefer a terminal-based solution, try mitmproxy). If you install Burp's root certificate in Android's trust store and are not using pinning, browsers and other HTTP clients have no way of distinguishing the ephemeral certificate Burp generates from the real one and will happily allow the connection. This allows Burp to decrypt the secure channel on the fly and enables you to view and manipulate traffic as you wish (strictly for research purposes, of course). Refer to the Getting Started page for help with setting up Burp. Once we have Burp all set up, we need to configure Android to use it. While Android does support HTTP proxies, those are generally only used by the built-in browser, and it is not guaranteed that HTTP libraries will use the proxy settings as well. Since Android is, after all, Linux, we can easily take care of this by setting up a 'transparent' proxy that redirects all HTTP traffic to our chosen host using iptables. If you are not comfortable with iptables syntax or simply prefer an easy-to-use GUI, there's an app for that as well: ProxyDroid. After setting up ProxyDroid to forward packets to our Burp instance, we should have all Android traffic flowing through our proxy. Open a couple of pages in the browser to confirm before proceeding further (make sure Burp's 'Intercept' button is off if traffic seems stuck).

Finally, time to connect! The sample app allows you to test the connection with both of Android's HTTP libraries (HttpURLConnection and Apache HttpClient); just press the corresponding 'Check w/ ...' button. Since validation is done at the TLS layer, the connection shouldn't be allowed and you should see something like this (the error message may say 'No peer certificates' for HttpClient; this is due to the way it handles validation errors):



If you instead see a message starting with 'X509TrustManagerExtensions verify result: Error verifying chain...', the connection did go through, but our additional validation using the X509TrustManagerExtensions class detected the changed certificate and failed. This shouldn't happen, right? It does though, because HTTP clients cache connections (SSLSocket instances, each of which holds an X509TrustManager instance that only reads pins when created). The easiest way to make sure pins are picked up is to reboot the phone after you pin your test host. If you try connecting with the Android browser after rebooting (not Chrome!), you will be greeted with this message:


As you can see, the certificate for www.google.com is issued by our Burp CA, but it might as well be from DigiNotar: if the proper public keys are pinned, Android should detect the fraudulent host certificate and show a warning. This works because the Android browser uses the system trust store and pins via the default TrustManager, even though it doesn't use JSSE SSL sockets. Connecting with Chrome, on the other hand, works fine even though it does have built-in pins for Google sites: Chrome allows manually installed trust anchors to override system pins, so that tools such as Burp or Fiddler continue to work (or pinning is not yet enabled on Android, which is somewhat unlikely).


So there you have it: pinning on Android works. If you look at the sample code, you will see that we have created enforcing pins, and that is why we get connection errors when connecting through the proxy. If you set the enforcing parameter to false instead, connections will be allowed, but chains that fail validation will still be recorded to the system dropbox (/data/system/dropbox) in cert_pin_failure@timestamp.txt files, one for each validation failure.

Summary

Android 4.2 adds certificate pinning by keeping a pin list with an entry for each pinned DNS name. Pin entries include a host name, an enforcing parameter and a list of SPKI SHA512 hashes of the keys that are allowed to sign a certificate for that host. The pin list is updated by sending a broadcast with signed update data. Applications using the default HTTP libraries get the benefit of system-level pinning automatically, or can explicitly check a certificate chain against the pin list by using the X509TrustManagerExtensions SDK class. Currently the pin list is empty, but the functionality is available now, and once pins for major sites are deployed, this will add another layer of defense against MITM attacks that follow a CA compromise.

Is Big Data the Way to Ensure Compliance?

Big data – according to Wikipedia – is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools. Yawning yet? The truth is big data is exciting. And the excitement lies in the additional information that can be pulled from these data sets and what you do with them.


In the past it has been prohibitively expensive, thanks to the cost of the servers needed to store and drill down into the data to make it useful. But as technology gets cheaper, the opportunities available from big data are becoming more accessible to everyone.  And when it comes to buying business travel, big data holds a lot of potential for corporates.

Big data allows travel managers to understand travellers better, track them and push information out to them. It's this last element that is particularly useful for travel managers, because it allows them to push information to travellers that enables and encourages them to stick to policy and budget while away from the office.  

For example, managers can use big data to tell junior staff, who may well feel apprehensive about taking public transport abroad, not to get a cab when they arrive at the airport, but to go to bus transfer stop A or train station B. They can give them directions, explain where to go to buy a ticket, ask them to use the NFC company credit card to pick up the ticket, tell them to get on the train at time x to destination y, remind them when to get off, point out that they can walk to the hotel in five minutes rather than take an overpriced, unnecessary cab, and suggest where to go for dinner that's close to the hotel, within policy and within budget... the possibilities are pretty endless.  

And, as we know in this day and age of the rogue traveller, anything that helps travel managers know more about their travellers and use that information in a way that increases compliance has got to be a good thing.

David Chapple is event director of the Business Travel Show. Talk to him about Big Data on Twitter @btshowlondon or directly at david@businesstravelshow.com.

Holiday E Book is here!

YAY! If you weren't already in the holiday mood with the growing number of Christmas lights appearing, trees going up, and Black Friday and Cyber Monday shoppers, then maybe this will do it for you. :-) The Holiday E-book on TpT is ready! I should say E-BOOKS, because there are 3 this year! There's a 1-2 version, a 3-6 version, and a 7-12 version to suit your needs. The e-books are each filled with 50, yes 50, holiday FREEBIES from some of TpT's finest!  The e-books can be downloaded at the following TpT stores...
1-2 Holiday Freebies
3-6 Holiday Freebies
7-12 Holiday Freebies
I was SO HAPPY to be able to contribute TWO FREEBIES to one of the e-books this year!  I hope you can use them and ENJOY this holiday season.  :-)
Bananas for the HOLIDAY SEASON!  *Maybe, just maybe I'll get my own tree and lights up this weekend!  ;-)
CYBER Mon. & Tues. SALE

HAPPY THANKSGIVING blog friends!  Hoping this season finds you thankful for so many things.  My list is so long I don't even know where to begin.  The 4 F's are at the top of my list:  Faith, Family, Friends, and Freedom!  The 3 T's are right up there close to the top of my list, too...Teaching, Tea (love my tea!), and TpT!  





TpT is showing its gratitude for teachers and the hard work they put in daily by running a HUGE CYBER DAY SALE!  Join me and other sellers on Monday, Nov. 26th and Tuesday, Nov. 27th @ TpT for 28% off all your favorite items!  Be sure to use the promo code CMT12 when checking out to get the additional % off your purchase! 

Stock up on your favorite units, activities, games and more at huge savings!  I'm looking forward to having my WHOLE store on sale!   Here's a look at some of the newest items you can find...    *Click any of the pics for direct links!
Blessings to you and yours this Thanksgiving!




Bananas for some time off work to spend with family,



Other fabulous sellers who are participating in the TpT sale below!




GUEST BLOG: Who are your business travel villains?

Without a doubt our award for Business Travel Villain goes to roaming charges! No one likes them and yet we all get them when travelling abroad and there is very little we can do about it – or is there? Here at RoamingExpert.com we have put together a guide with some handy hints and tips on how to banish that villain or at least reduce its impact on your budget.

Look at where you roam to
Where is it that you mainly travel to? Is it outside of Europe, within Europe or a combination of both? Each network has slightly different rates depending on where you go.

Do you use voice, data or both?
What is your main concern when roaming? Is it your voice calls, your data usage or both? Again, different networks have different charges for voice and data, and some are more competitive than others, so it is important to know how you need to use your mobile when you are roaming.

What handset do you use?
We would always recommend using a BlackBerry, as they compress data more efficiently than other handsets, so data charges will always be lower.

What are your roaming rates like?
Are your current roaming rates competitive, or are they actually overinflated? What are you being charged to visit the countries you do? Roaming rates vary greatly from provider to provider; some are vastly overinflated.

Do you know how your network bills?
The networks bill completely differently: some calls may be covered by inclusive minutes and some may not.  Also, some networks charge per minute and some per second; this can make a big difference to your final bill.

Renewing your mobile contract may not be at the top of your list of priorities, but if you want to avoid a major travel villain in today's new, global and technologically focused world, it could well pay to reassess your contract. Finding out a little more about it and taking some top tips on how best to deal with the situation could well pay off, and we are here to help too, of course!  

This blog was written by Kate Staley, marketing manager with Roaming Expert. You can contact her at kate.staley@roamingexpert.com and find out more about the company at www.roamingexpert.com.

GUEST BLOG: SMM: So much talk, so little action

Everywhere we look someone is discussing meetings: meetings management, strategic meetings management, small meetings management – but this is not a new subject.  ‘SMM’ has been the ‘next big thing’ for years.  So why, after all this time, is there still so much talk and so little action?


Admittedly there are plenty of reasons offered up:

  • No clear internal owner to champion the initiative and drive success.
  • The absence of a solution that is easy to deploy, supports broad-based adoption AND integrates with the rest of your hotel management strategy.
  • The other perennial favorite, "we don't really have that many meetings", is also all too often cited.

These and many other factors contribute to an ‘Ostrich Mentality’: I can’t see it, therefore it isn’t there.

But it IS there and the cost implications of ignoring this category can be huge.  Admittedly, you don’t know what you don’t know, but from our own experience the total ‘hospitality’ spend for the majority of organisations is split across 3 areas – 40% on Transient Accommodation, 40% on Meetings and 20% on ad hoc Projects.  If your focus is purely on your Transient Accommodation, you are managing less than 50% of your total hotel spend.

So – how difficult can this really be?

You need to understand what your requirement is, so start small.  Begin by identifying your target audience.  Head for the departments most likely to generate meetings bookings – sales/marketing, training, the executive floor – and ask for volunteers for a ‘super-user’ test team.

At this stage, you want to garner support and not give the impression that you are trying to take away the responsibility for these meetings.  All you want is to build the environment where everyone creates their meetings requests (RFPs) in one place.   From there you will quickly gather the data you need to put the ‘strategic’ into strategic meetings management.

Aim to start with the small, frequently booked meetings. The large conferences and major company events may be the more visible piece, but (a) it's always much tougher to persuade these bookers to give up any aspect of control or management of these bookings and (b) the chances are that you spend much more on smaller meetings.  You are offering meeting planners a solution that will take the 'grunt work' out of the process without stripping them of ultimate control, and the easiest meetings to target for this are the smaller ones, which are usually just a time-consuming chore for those that have to book them.

Ask a few basic questions:
  • Approximately how often do they book?
  • What's the average size of the group and the length of the meeting?
  • What are the most common features they always need (refreshments, break-out rooms, AV equipment etc)?
  • What type of venue do they typically look for (hotels, internal meeting space, other venues)?
  • Where do they go to find these venues and what influences their choice?

The answers will help you build the most useful information and features into your RFP tool so that the first time a meeting planner looks for a venue for a meeting, they find what they expect to see and the basic information is pre-loaded to make it the simpler process that you promised!

So all you now need is a system that all your meeting planners can access with nothing more than a user ID and password, without any additional implementation, and with just an hour or so of training.
  
And if they like it, if the system gives them access to the properties they need, helps them create an RFP with all their requirements in 4 easy to follow steps, presents the responses in a format they can share with their colleagues, gives them full budgeting and reporting capabilities and allows them to complete the process in a fraction of the time, then you’ve got your ‘champions’ within the organisation and selling the benefits to others will be so much easier.

And finally, the single most important element to consider.  The first step on the road to success is always the hardest. As you ponder this article, think about this.  What stops you from being the internal champion that gets the ball rolling at your company?  Just because you don’t have it in your job description, does that mean you can’t own it?  At a time when companies are in a constant process of evaluating people, processes and performance, perhaps the greatest thing you can do for your company and yourself is to take that critical first step.

This blog has been written by Jean Squires, director of business development EMEA, Lanyon. To find out more about how the Lanyon Meetings RFP Tool can help please contact Jean at jean.squires@lanyon.com 

GUEST BLOG: A Vision for the Future of Business Travel

Business travel is faster and less isolating than it used to be but, until recently, no-one has been looking out for the business traveller. Now, the landscape is changing and Concur is proud to be a leader in that evolution.

Travel used to be lonely. Gradually, over the past 10 years, and rapidly in the last two, the Internet and mobile devices have changed business travel for the better.  Our smartphones now fulfil a dual role of travel buddy and personal assistant. But we still have a steep hill to climb to improve business travel even more.


The current reality:


Self-serve online travel booking enables the business traveller to book travel at the click of a button, but, because no-one is looking at the experience through the eyes of the business traveller, the whole experience is very disjointed and often frustrating.  For those that travel across Europe, there's the added complication of different languages, currencies, procedures and cultures to overcome.


A vision for the future of business travel:

Concur has spent considerable time thinking about what would make 'The Perfect Trip'.  Imagine a journey with no paper or queues, a journey where your smartphone knows your preferences and has pre-booked every step of the journey to seamlessly blend into the next. We have this perfect trip firmly in mind, and we're figuring out how to make it happen.


How Concur is working to improve business travel:

Making the perfect trip a reality will mean partnership and collaboration between travel suppliers across the entire eco-system for the good of the business traveller. This is a huge challenge, particularly in Europe, where suppliers' regional coverage varies so dramatically. Fortunately, the landscape is already changing and Concur is proud to be a leader in that evolution.


What happens next?

By sharing ideas on how business travel can be improved through technological integration and the fostering of closer networks, we can help shape the future of business travel together. So, whether you are a travel supplier, travel app developer, card provider, in the travel business or a business traveller, we'd love to hear your ideas and opinions.


How do you think business travel could be better?

What are your favourite travel apps? What innovations would make the most impact on travel in Europe?  These are the kinds of questions that will influence the development of a bigger, better travel and expenses management network across Europe. We have made a start by setting up a Facebook community for those that want to contribute or stay in touch with developments.  Join us: like us or post your recommendation in the comments below.



Written by Lisa Hutt

Lisa heads marketing for Concur Technologies in EMEA and is responsible for growth in the region through brand awareness, pipeline generation and customer success programs. Lisa is an advocate of the latest marketing innovations, global collaboration, sales alignment, lead-to-revenue and influencer marketing. Lisa’s background in marketing communications was developed with IT and online organisations such as Salesforce.com, Dell, Intel, Epson, Sybase and Monster.

Single sign-on to Google sites using AccountManager

In the first part of this series, we presented how the standard Android online account management framework works and explored how Google account authentication and authorization modules are implemented on Android. In this article we will see how to use the Google credentials stored on the device to log in to Google Web sites automatically. Note that this is different from using public Google API's, which generally only requires putting an authentication token (and possibly an API key) in a request header, and is quite well supported by the Google APIs Client Library. First, some words on what motivated this whole exercise (may include some ranting; feel free to skip to the next section).

Android developer console API: DIY

If you have ever published an application on the Android Market (now the Google Play Store), you are familiar with the Android developer console. Besides letting you publish and update your apps, it also shows the number of total and active installs (notoriously broken and not to be taken too seriously, though it's been getting better lately), ratings and comments. Depending on how excited about the whole app publishing business you are, you might want to check it quite often to see how your app is doing, or maybe you just like hitting F5. Most people don't, however, so pretty much every developer at some point comes up with the heretic idea that there must be a better way: you should be able to check your app's statistics on your Android device (obviously!), you should get notified about changes automatically and maybe even be able to easily see at a glance whether today's numbers are better than yesterday's. Writing such a tool should be fairly easy, so you start looking for an API. If your search ends up empty, it's not your search engine's fault: there is none! So before you start scraping those pretty Web pages with your favourite P-language, you check if someone has done this before -- you might get a few hits, and if you are lucky, even find the Android app.

Originally developed by Timelappse, and now open source, Andlytics does all the things mentioned above, and more (and if you need yet another feature, consider contributing). So how does it manage to do all of this without an API? Through blood, sweat and a lot of protocol reverse engineering (read: guessing). You see, the current developer console is built on GWT, which used to be Google's webstack-du-jour a few years back. GWT essentially consists of RPC endpoints at the server, called by a JavaScript client running in the browser. The serialization protocol in between is a custom one, and the specification is purposefully not publicly available (apparently, to allow for easier changes!?!). It has two main features: you need to know exactly what the transferred objects look like to be able to make any sense of it, and it was obviously designed by someone who used to write compilers for a living before they got into Web development ('string table' ring a bell?). Given the above, Andlytics was quite an accomplishment. Additionally, the developer console changing its protocol every other week and adding new features from time to time didn't really make it any easier to maintain. Eventually, the original developer had a bit too much GWT on his plate, and was kind enough to open source it, so others could share the pain.

But there is a bright side to all this: Developer Console v2. It was announced at this year's Google I/O to much applause, but was only made universally available a couple of weeks ago (sound familiar?). It is a work in progress, but is showing promise. And the best part: it uses perfectly readable (if a bit heavy on null's) JSON to transport data! Naturally, there was much rejoicing at the Andlytics Github project. It was unanimously decided that the sooner we obliterate all traces of GWT, the better, and that the next version should use the v2 console 'API'. Deciphering the protocol didn't take long, but it turned out that while all you needed to log in to the v1 console was a ClientLogin token (see the next section for an explanation) straight out of Android's AccountManager, the new one was not so forgiving and the login flow was somewhat more complex. Asking the user for their password and using it to log in was obviously doable, but no one would like that, so we needed to figure out how to log in using the Google credentials already cached on the device. The Android browser and Chrome are able to automatically log you in to the developer console without requiring your password, so it was clearly possible. The process is not really documented though, and that prompted this (maybe a bit too wide-cast) investigation. Which finally leads us to the topic of this post: to show how to use cached Google account credentials for single sign-on. Let's first see what standard ways are available to authenticate to Google's public services and API's.

Google services authentication and authorization

The official place to start when selecting an auth mechanism is the Google Accounts Authentication and Authorization page. It lists quite a few protocols, some open and some proprietary. If you research further you will find that currently all but OAuth 2.0 and OpenID are considered deprecated, and using the proprietary ones is not recommended. However, a lot of services are still using older, proprietary protocols, so we will look into some of those as well. Most protocols also have two variations: one for Web applications and one for so-called 'installed applications'. Web applications run in a browser, and are expected to be able to take advantage of all standard browser features: rich UI, free-form user interaction, a cookie store and the ability to follow redirects. Installed applications, on the other hand, don't have a native way to preserve session information, and may not have the full Web capabilities of a browser. Android native applications (mostly) fall in the 'installed applications' category, so let's see what protocols are available for them.

ClientLogin

The oldest, and until recently most widely used, authorization protocol for installed applications is ClientLogin. It assumes the application has access to the user's account name and password and lets you get an authorization token for a particular service, which can be saved and used for accessing that service on behalf of the user. Services are identified by proprietary service names, for example 'cl' for Google Calendar and 'ah' for Google App Engine. A (non-exhaustive) list of supported service names can be found in the Google Data API reference. Here are a few Android-specific ones, not listed in the reference: 'ac2dm', 'android', 'androidsecure', 'androiddeveloper', 'androidmarket' and 'youngandroid' (probably for the discontinued App Inventor). The token can be fairly long-lived (up to two weeks), but cannot be refreshed, and the application needs to obtain a new token when it expires. Additionally, there is no way to validate the token short of accessing the associated service: if you get an OK HTTP status (200), it is still valid; if 403 is returned, you need to consult the additional error code and retry or get a new token. Another limitation is that ClientLogin tokens don't offer fine-grained access to a service's resources: access is all or nothing, you cannot specify read-only access or access to a particular resource only. The biggest drawback for use in mobile apps, though, is that ClientLogin requires access to the actual user password. Therefore, if you don't want to force users to enter it each time a new token is required, it needs to be saved on the device, which poses various problems. As we saw in the previous post, in Android this is handled by GLS and the associated online service by storing an encrypted password or a master token on the device. Getting a token is as simple as calling the appropriate AccountManager method, which either returns a cached token or issues an API request to fetch a fresh one. Despite its many limitations, the protocol is easy to understand and straightforward to implement, so it has been widely used. It has been officially deprecated since April 2012 though, and apps using it are encouraged to migrate to OAuth 2.0, but this hasn't quite happened yet. 
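On Android, obtaining such a token doesn't involve handling the password at all; a sketch (the 'androiddeveloper' service name is one of those listed above, and error handling is omitted):

AccountManager am = AccountManager.get(activity);
Account account = am.getAccountsByType("com.google")[0];

// Returns a cached token or transparently fetches a fresh one,
// prompting the user for permission if our app hasn't been authorized yet.
am.getAuthToken(account, "androiddeveloper", null, activity,
        new AccountManagerCallback<Bundle>() {
            public void run(AccountManagerFuture<Bundle> future) {
                try {
                    String token = future.getResult()
                            .getString(AccountManager.KEY_AUTHTOKEN);
                    // used in an 'Authorization: GoogleLogin auth=<token>' header
                } catch (Exception e) {
                    // authentication failed or was cancelled
                }
            }
        }, null);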

OAuth 2.0

No one likes OAuth 1.0 (except Twitter) and AuthSub is not quite suited for native applications, so we will only look at the currently recommended OAuth 2.0 protocol. OAuth 2.0 has been in the works for quite some time, but it only recently became an official Internet standard. It defines different authorization 'flows', aimed at different use cases, but we will not try to present all of them here. If you are unfamiliar with the protocol, refer to one of the multiple posts that aim to explain it at a higher level, or just read the RFC if you need the details. And, of course, you can watch this for a slightly different point of view. We will only discuss how OAuth 2.0 relates to native mobile applications.

The OAuth 2.0 specification defines 4 basic flows for getting an authorization token for a resource. The two that don't require the client (in our scenario an Android app) to directly handle user credentials (Google account user name and password), namely the authorization code grant flow and the implicit grant flow, both have a common step that needs user interaction. They both require the authorization server (Google's) to authenticate the resource owner (the user of our Android app) and establish whether they grant or deny the access request for the specified scope (e.g., read-only access to profile information). In a typical Web application that runs in a browser, this is very straightforward to do: the user is redirected to an authentication page, then to an access grant page that basically says 'Do you allow app X to access data Y and Z?', and if they agree, another redirect, which includes an authorization token, takes them back to the original application. The browser simply needs to pass on the token in the next request to gain access to the target resource. Here's an official Google example that uses the implicit flow: follow this link and grant access as requested to let the demo Web app display your Google profile information. With a native app things are not that simple. It can either:
  • use the system browser to handle the permission grant step, which would typically involve the following steps:
    • launch the system browser and hope that the user will finish the authentication and permission grant process
    • detect success or failure and extract the authorization token from the browser on success (from the window title, redirect URL or the cookie store)
    • ensure that after granting access, the user ends up back in your app
    • finally, save the token locally and use it to issue the intended Web API request
  • embed a WebView or a similar control in the apps's UI. Getting a token would generally involve these steps:
    • in the app's UI, instruct the user what to do and load the login/authorization page
    • register for a 'page loaded' callback, and check for the final success URL each time it's called
    • when found, extract the token from the redirect URL or the WebView's cookie jar and save it locally
    • finally use the token to send the intended API request
Neither is ideal, both are confusing to the user, and to implement the first one on Android you might even have to (temporarily) start a Web server (redirect_uri is set to http://localhost in the API console, so you can't just use a custom scheme). The second one is generally preferable, if not pretty: here's a (somewhat outdated) overview of what needs to be done and a more recent example with full source code. This integration complexity and UI impedance mismatch are the problems that OAuth 2.0 support (initially via the AccountManager, and more recently via Google Play Services) aims to solve. When using either of those, user authentication is implemented transparently by passing the saved master token (or encrypted password) to the server-side component, and instead of a WebView with a permission grant page, you get the Android-native access grant dialog. If you approve, a second request is sent to convey this, and the returned access token is directly delivered to the requesting app. This is essentially the same flow as for Web applications, but it has the advantages that it doesn't require context switching from native to browser and back, and it is much more user friendly. Of course, it only works for Google accounts, so if you wanted to write, say, a Facebook client, you would still have to use a WebView to process the access permission grant and get an authorization token.
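In practice, the AccountManager route looks almost identical to the ClientLogin example above (reusing its am and account variables), except that the token type is the requested scope with an 'oauth2:' prefix (a sketch; the profile scope is just an example):

String scope = "oauth2:https://www.googleapis.com/auth/userinfo.profile";
am.getAuthToken(account, scope, null, activity,
        new AccountManagerCallback<Bundle>() {
            public void run(AccountManagerFuture<Bundle> future) {
                try {
                    String accessToken = future.getResult()
                            .getString(AccountManager.KEY_AUTHTOKEN);
                    // used in an 'Authorization: Bearer <accessToken>' header
                } catch (Exception e) {
                    // access denied, cancelled or network error
                }
            }
        }, null);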

Now that we have an idea what authentication methods are available, let's see if we can use them to access an online Google service that doesn't have a dedicated API.

Google Web properties single sign-on

Being able to access multiple related, but separate, services without needing to authenticate to each one individually is generally referred to as single sign-on (SSO). There are multiple standard ways to accomplish this for different contexts, ranging from Kerberos to SAML-based solutions. We will use the term here in a narrower meaning: being able to use different Google services (Web sites or API's) after having authenticated to only one of them (including the Android login service). If you have a fairly fast Internet connection, you might not even notice it, but after you log in to, say, Gmail, clicking on YouTube links will take you to a completely different domain, and yet you will be able to comment on that neat cat video without having to log in again. If you have a somewhat slower connection and a wide display though, you may notice that there is a lot of redirecting and long parameter passing, with the occasional progress bar, going on. What happens behind the scenes is that your current session cookies and authentication tokens are being exchanged for yet other tokens and more cookies, to let you seamlessly log in to that other site. If you are curious, you can observe the flow with Chrome's built-in developer tools (or similar plugins for other browsers), or check out our sample. All of those requests and responses essentially form a proprietary SSO protocol (Google's), which is not really publicly documented anywhere, and, of course, is likely to change fairly often as Google rolls out upgrades to their services. With that said, there is a distinct pattern, and on a higher level you only have two main cases. We are deliberately ignoring the persistent cookie ('Stay signed in') scenario for simplicity's sake.
  • Case 1: you haven't authenticated to any of the Google properties. If you access, for example, mail.google.com in that state, you will get a login screen originating at https://accounts.google.com/ServiceLogin, with parameters specifying the service you are trying to access ('mail' for Gmail) and where to send you after you are authenticated. After you enter your credentials, you will generally get redirected a few times around accounts.google.com, which will set a few session cookies, common (Domain=.google.com) for all services (always SID and LSID, plus a few more). The last redirect will be to the originally requested service and will include an authentication token in the redirected location (usually specified with the auth parameter, e.g.: https://mail.google.com/mail/?auth=DQAAA...). The target service will validate the token and set a few more service-specific session cookies, restricted by domain and path, and with the Secure and HttpOnly flags set. From there, it might take a couple more redirects before you finally land at an actual content page.
  • Case 2: you have already authenticated to at least one service (Gmail in our example). In this state, if you open, say, Calendar, you will go through https://accounts.google.com/ServiceLogin again, but this time the login screen won't be shown. The accounts service will modify your SID and LSID cookies, maybe set a few new ones and finally redirect you to the original service, adding an authentication token to the redirect location. From there the process is similar: one or more service-specific cookies will be set and you will finally be redirected to the target content.
Those flows obviously work well for browser-based logins, but since we are trying to do this from an Android app, without requiring user credentials or showing WebView's, we have a different scenario. We can easily get a ClientLogin or an OAuth 2.0 token from the AccountManager, but since we are not performing an actual Web login, we have no cookies to present. The question becomes: is there a way to log in with a standard token alone? Since tokens can be used with the data APIs (where available) of each service, they obviously contain enough information to authenticate us and grant access to the service's resources. What we need is a Web endpoint that will take our token and, in exchange, give us a set of cookies we can use to access the corresponding Web site. Clues and traces of such a service are scattered around the Internet, mostly in the code of unofficial Google client libraries and applications. Once we know it is definitely possible, the next problem becomes getting it to work with Android's AccountManager.

Logging in using AccountManager

The only real documentation we could find, besides code comments and READMEs of the unofficial Google client applications mentioned above, is a short Chromium OS design document. It tells us that the standard (at the time) login API for installed applications, ClientLogin, alone is not enough to accomplish Web SSO, and outlines a three step process that lets us exchange ClientLogin tokens for session cookies valid for a particular service:
  1. Get a ClientLogin token (this we can do via the AccountManager)
  2. Pass it to https://www.google.com/accounts/IssueAuthToken, to get a one-time use, short-lived token that will authenticate the user to any service (the so-called 'ubertoken')
  3. Finally, pass the ubertoken to https://www.google.com/accounts/TokenAuth, to exchange it for the full set of browser cookies we need to do SSO
This outlines the process, but is a little light on the details. Fortunately, those can be found in the Chromium OS source code, as well as a few other projects. After a fair bit of digging, here's what we uncovered:
    1. To get the mythical ubertoken, you need to pass the SID and LSID cookies to the IssueAuthToken endpoint like this:
      https://www.google.com/accounts/IssueAuthToken?service=gaia&Session=false&SID=sid&LSID=lsid
    2. The response will give you the ubertoken, which you pass to the TokenAuth endpoint along with the URL of the service you want to use:
      https://www.google.com/accounts/TokenAuth?source=myapp&auth=ubertoken&continue=service-URL
    3. If the token checks out OK, the response will give you a URL to load. If your HTTP client is set up to follow redirects automatically, once you load it, the needed cookies will be set automatically (just as in a browser), and you will finally land on the target site. As long as you keep the same session (which usually means the same HTTP client instance), you will be able to issue multiple requests without needing to go through the authentication flow again.
What remains to be seen is whether we can implement this on Android. As usual, it turns out that there is more than one way to do it:

The hard way

The straightforward way would be to simply implement the flow outlined above using your favourite HTTP client library. We chose Apache HttpClient, which supports session cookies and multiple requests using a single instance out of the box. The first step calls for the SID and LSID cookies though, not an authentication token: we need cookies to get a token, in order to get more cookies. Since Android's AccountManager can only give us authentication tokens, and not cookies, this might seem like a hopeless catch-22 situation. However, while browsing the authtokens table of the system's accounts database earlier, we happened to notice that it actually had a bunch of tokens with type SID and LSID. Our next step is, of course, to try to request those tokens via the AccountManager interface, and this happens to work as expected:

    // note: getResult() blocks, so these calls must not run on the main thread
    String sid = am.getAuthToken(account, "SID", null, activity, null, null)
            .getResult().getString(AccountManager.KEY_AUTHTOKEN);
    String lsid = am.getAuthToken(account, "LSID", null, activity, null, null)
            .getResult().getString(AccountManager.KEY_AUTHTOKEN);

Having gotten those, the rest is just a matter of issuing two HTTP requests (error handling omitted for brevity):

    String TARGET_URL = "https://play.google.com/apps/publish/v2/";
    Uri ISSUE_AUTH_TOKEN_URL = Uri.parse(
            "https://www.google.com/accounts/IssueAuthToken?service=gaia&Session=false");
    Uri TOKEN_AUTH_URL = Uri.parse("https://www.google.com/accounts/TokenAuth");

    // DefaultHttpClient keeps session cookies across requests
    DefaultHttpClient httpClient = new DefaultHttpClient();

    // step 1: exchange the SID/LSID 'tokens' for the short-lived ubertoken
    String url = ISSUE_AUTH_TOKEN_URL.buildUpon()
            .appendQueryParameter("SID", sid)
            .appendQueryParameter("LSID", lsid)
            .build().toString();
    HttpPost getUberToken = new HttpPost(url);
    HttpResponse response = httpClient.execute(getUberToken);
    String uberToken = EntityUtils.toString(response.getEntity(), "UTF-8");

    // step 2: exchange the ubertoken for the target service's session cookies
    String getCookiesUrl = TOKEN_AUTH_URL.buildUpon()
            .appendQueryParameter("source", "android-browser")
            .appendQueryParameter("auth", uberToken)
            .appendQueryParameter("continue", TARGET_URL)
            .build().toString();
    HttpGet getCookies = new HttpGet(getCookiesUrl);
    response = httpClient.execute(getCookies);

    CookieStore cookieStore = httpClient.getCookieStore();
    // check for the service-specific session cookie ('AD' for the Developer
    // Console); findCookie() is a trivial helper that searches by name
    String adCookie = findCookie(cookieStore.getCookies(), "AD");
    // fail if not found, otherwise get the page content
    String responseStr = EntityUtils.toString(response.getEntity(), "UTF-8");

This lets us authenticate to the Android Developer Console (version 2) site without requiring user credentials, and from here we can easily proceed to parse the result and use it in a native app (warning: work in progress!). The downside is that for this to work, the user has to grant access twice, for two cryptic-looking token types (SID and LSID).

Of course, after writing all of this, it turns out that the stock Android browser already has code that does it, which we could have used, or at least referenced, from the very beginning. Better yet, this find leads us to an even easier way to accomplish our task.

The easy way

The easy way is found right next to the Browser class referenced above, in the DeviceAccountLogin class, so we can't really take any credit for this. It is hardly anything new, but some Googling suggests that it is neither widely known nor used much. You might have noticed that the Android browser is able to silently log you in to Gmail and friends when you use the mobile site. This is implemented via the 'magic' token type 'weblogin:'. If you use it along with the service name and URL of the site you want to access, it will perform all of the steps listed above automatically and, instead of a token, give you a full URL you can load to get automatically logged in to your target service. This magic URL is in the format shown below, and includes both the ubertoken and the URL of the target site, as well as the service name (this example is for the Android Developer Console; the line is broken for readability):

    https://accounts.google.com/MergeSession?args=service%3Dandroiddeveloper%26continue
    %3Dhttps://play.google.com/apps/publish/v2/&uberauth=APh...&source=AndroidWebLogin

Here's how to get the MergeSession URL:

    String tokenType = "weblogin:service=androiddeveloper&"
            + "continue=https://play.google.com/apps/publish/v2/";
    String loginUrl = accountManager.getAuthToken(account, tokenType, false, null, null)
            .getResult().getString(AccountManager.KEY_AUTHTOKEN);
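
Loading that URL with a cookie-aware HTTP client should then be all it takes to end up with an authenticated session. Here's a minimal sketch, assuming the same Apache HttpClient setup as in the 'hard way' example above (the variable names are ours):

    DefaultHttpClient httpClient = new DefaultHttpClient();
    // loading the MergeSession URL follows the redirects and picks up
    // the session cookies along the way, just like a browser would
    HttpResponse response = httpClient.execute(new HttpGet(loginUrl));
    String pageHtml = EntityUtils.toString(response.getEntity(), "UTF-8");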

This is again for the Developer Console, but works for any Google site, including Gmail, Calendar and even the account management page. The only problem you might have is finding the service name, which is hardly obvious in some cases (e.g., 'grandcentral' for Google Voice and 'lh2' for Picasa).
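
For example, based on the 'mail' service name we saw in Case 1 above, the token type for the Gmail site would presumably look something like this (unverified, shown for illustration only):

    String gmailTokenType = "weblogin:service=mail&continue=https://mail.google.com/mail/";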

It takes only a single HTTP request from Android to get the final URL, which tells us that the token issuing flow is implemented on the server side. This means that you can also use the Google Play Services client library to issue a weblogin: 'token' (see screenshot below, and note that, unlike for OAuth 2.0 scopes, it shows the 'raw' token type). It probably goes without saying, but it also means that if you happen to come across someone's accounts.db file, all it takes to log in to their Google account(s) is two HTTPS requests: one to get the MergeSession URL, and one to log in to their accounts page. If you are thinking 'This doesn't affect me, I use Google two-factor authentication (2FA)!', you should know that in this case 2FA doesn't really help. Why? Because since Android doesn't support 2FA, to register an account with the AccountManager you need to use an application-specific password (Update: on ICS and later, GLS will actually show a WebView and let you authenticate using your password and OTP. However, the OTP is not required once you get the master token). And once you have entered one, any tokens issued based on it will just work (until you revoke it), without requiring an additional code. So if you value your account, keep your master tokens close and revoke them as soon as you suspect that your phone might be lost or stolen. Better yet, consider a solution that lets you wipe it remotely (which might not work after you revoke the tokens, so be sure to check how it works before you actually need it).


As we mentioned above, this is all ClientLogin-based, which is officially deprecated and might be going away soon (EOL scheduled for April 2013). But some of the Android Google data sync feeds still depend on ClientLogin, so if you use it you will probably be OK for a while. Additionally, since the weblogin: implementation is server-based, it might be updated to conform with the latest (OAuth 2.0-based?) infrastructure without changing the client-side interface. In any case, watch the Android Browser and Chromium code to keep up to date.

Summary

Google offers multiple online services, some with both a traditional browser-based interface and a developer-oriented API. Consequently, there are multiple ways to authenticate to those, ranging from form-based username and password login to authentication API's such as ClientLogin and OAuth 2.0. It is relatively straightforward to get an authentication token for services with a public API on Android, either using Android's native AccountManager interface or the newer Google Play Services extension. Getting the required session cookies to log in automatically to the Web sites of services that do not offer an API is, however, neither obvious nor documented. Fortunately, it is possible, and very easy to do, if you combine the special 'weblogin:' token type with the service name and the URL of the site you want to use. The best available documentation about this is the Android Browser source code, which uses the same techniques to automatically log you in to Google sites using the account(s) already registered on your device.

Moral of the story: interoperability is so much easier when you control all parties involved.