Your Phone Can Be a Very Scary Place

Mobile apps are changing our social, cultural, and economic landscapes – and, with the many opportunities and perks that these changes promise, comes an equally impressive collection of risks and potential exploits.

This post is probably way overdue – it’s an update (a supplement, really) to an article I wrote for The ISSA Journal on Assessing and Managing Security Risks Unique to Java and .NET way back in ’09. The article laid out the reverse engineering and tampering risks stemming from the use of managed code (Java and .NET). The technical issues were really secondary – what needed to be emphasized was the importance of having a consistent and rational framework for assessing the materiality (relative danger) of those risks (piracy, IP theft, data engineering…).

In other words, the simple fact that it’s easy to reverse engineer and tamper with a piece of managed code does not automatically lead to the conclusion that a development team should take steps to prevent that from happening. The degree of danger (risk) should be the only motivation (justification) to invest in preventative or detective measures; and, by implication, risk mitigation investments should be in proportion to those risks (low risk, low investment).

Here’s a graphic I used in ’09 to show the progression from managed apps (.NET and Java) to the risks that stem naturally from their use.

[Figure: the progression from managed apps (.NET and Java) to the risks that stem from their use]

Managed code risks in the mobile world
Of course, managed code is also playing a central role in the rise of mobile computing and the ubiquitous “app marketplace” – e.g. Android and, to a lesser degree, Windows Phone and Windows RT – and, as one might predict, these apps are introducing their own unique cross-section of potential risks and exploits.

Here is an updated “hierarchy of risks” for today’s mobile world:

[Figure: an updated hierarchy of risks for today’s mobile world]

I’ve highlighted the risks that have either evolved or emerged within the mobile ecosystem – and these are probably best illustrated with real-world incidents and trends:

[Figure: the SwiftKey keylogger incident]

Earlier this year, a mobile development company documented how to turn one of the most popular paid Android apps (SwiftKey Keyboard) into a keylogger (something that captures everything you type and sends it somewhere else).

This one little example touches on every risk highlighted above:

  • IP theft (this is a paid app that can now be sideloaded for free)
  • Content theft (branding, documentation, etc. are stolen)
  • Counterfeiting (it is not a REAL SwiftKey instance – it’s a fake, more than just a cracked copy)
  • Service theft (if the SwiftKey app makes any web service calls that the true developers must pay for, then these users are driving up cloud expenses – and if any of them write in for support, then human resources are being burned too)
  • Data loss and privacy violations (obviously there is no “opt-in” to the keylogging, and the passwords, etc. that are sent are clearly private data)
  • Piracy (users should be paying the licensing fee normally charged)
  • Malware (the keylogging is the malware in this case)

In this scenario, the “victim” would have needed to go looking for “free versions” of the app outside the sanctioned marketplace – but that’s not always the case.

[Figure: counterfeit apps found inside curated marketplaces]

Symantec recently reported finding counterfeit apps inside the Amazon Appstore (and Amazon has one of the most rigorous curating and analysis check-in processes). I have had my own content stripped and look-alike apps published across marketplaces too – see my earlier posts Hoisted by my own petard: or why my app is number two (for now) and Ryan is Lying – well, actually stealing, cheating and lying – again.

Now, these anecdotes are all too real, but they are by no means unique. Trend Micro found that 1 in every 10 Android apps is malicious and that 22% of apps inappropriately leaked user data – that is crazy!

For a good overview of Android threats, check out this free paper by Trend Micro, Android Under Siege: Popularity Comes at a Price.

To obfuscate (or not)?
As I’ve already written, you shouldn’t do anything simply to make reverse engineering and tampering more difficult – you should only take action if the associated risks are significant enough to you and the “steps” in question would reduce those risks to an acceptable level (your “appetite for risk”).

…but, seriously, who cares what I think? What do the owners of these platforms have to say?

Android’s documentation “highly recommends” obfuscating all code and emphasizes this in a number of specific areas – for example, “At a minimum, we recommend that you run an obfuscation tool” when developing billing logic. It even goes so far as to bundle an open source obfuscator, ProGuard, with the SDK – where, again, the documentation “highly recommends” that all Android apps be obfuscated.
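For reference, here is a minimal sketch of what wiring up ProGuard looks like with the Ant-based Android tooling of this era – the keep rules shown are illustrative and will vary with your project and build setup:

    # project.properties – uncomment this line to run ProGuard on release builds
    proguard.config=${sdk.dir}/tools/proguard/proguard-android.txt:proguard-project.txt

    # proguard-project.txt – keep Android entry points; obfuscate everything else
    -keep public class * extends android.app.Activity
    -keep public class * extends android.app.Service
    -keepclassmembers class * implements android.os.Parcelable {
        public static final android.os.Parcelable$Creator *;
    }

The stock proguard-android.txt covers the common Android-specific exclusions; the project file is where you add keep rules for anything your app reaches via reflection.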

Microsoft likewise recommends that all modern apps be obfuscated (see the Windows Phone policy) and offers a “community edition” obfuscator (our own Dotfuscator CE) as part of Visual Studio.

Tamper detection, exception monitoring, and usage profiling
Obfuscation “prevents” reverse engineering and tampering, but it does not actively detect when attackers are successful (and, with enough skill and time, every attacker can eventually succeed). Nor does obfuscation defend against attacks in progress or include a notification mechanism – that’s what tamper defense, exception monitoring, and usage profiling are for. If you care enough to prevent an attack, chances are you care enough to detect when one is underway or has succeeded.
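To make the distinction concrete, here is a minimal, hypothetical Java sketch of the detection side on Android: a signing-certificate self-check (a crude form of tamper detection, since a repackaged app must be re-signed with a different certificate) and an uncaught-exception hook (the simplest form of exception monitoring). EXPECTED_SIGNATURE_HASH and reportHome() are placeholders of my own – commercial tooling does considerably more, and does it in ways that are themselves hardened:

    import java.security.MessageDigest;
    import android.content.Context;
    import android.content.pm.PackageInfo;
    import android.content.pm.PackageManager;
    import android.content.pm.Signature;

    public final class IntegrityChecks {

        // Placeholder: hash of the legitimate signing certificate, computed at build time.
        private static final String EXPECTED_SIGNATURE_HASH = "3A:F1:...";

        // Crude tamper detection: a repackaged (re-signed) APK carries a different certificate.
        public static boolean isRepackaged(Context context) {
            try {
                PackageInfo info = context.getPackageManager().getPackageInfo(
                        context.getPackageName(), PackageManager.GET_SIGNATURES);
                for (Signature sig : info.signatures) {
                    MessageDigest md = MessageDigest.getInstance("SHA-1");
                    String hash = toHex(md.digest(sig.toByteArray()));
                    if (!EXPECTED_SIGNATURE_HASH.equals(hash)) {
                        return true; // signature mismatch: assume tampering
                    }
                }
                return false;
            } catch (Exception e) {
                return true; // fail closed: treat errors as suspicious
            }
        }

        // Simplest possible exception monitoring: report crashes before the process dies.
        public static void installExceptionMonitor() {
            final Thread.UncaughtExceptionHandler previous =
                    Thread.getDefaultUncaughtExceptionHandler();
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                @Override
                public void uncaughtException(Thread t, Throwable e) {
                    reportHome(e);                        // hypothetical reporting hook
                    if (previous != null) {
                        previous.uncaughtException(t, e); // preserve default crash behavior
                    }
                }
            });
        }

        private static void reportHome(Throwable e) { /* send to your reporting endpoint */ }

        private static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02X:", b));
            return sb.substring(0, sb.length() - 1);
        }
    }

Note that a naive check like this is itself an obvious target – an attacker can simply patch it out – which is exactly why detection logic should be obfuscated and, ideally, layered.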

Application Hardening Options (representative – not exhaustive)
If you decide that you agree with Android’s and Microsoft’s recommendations to obfuscate, then you have to decide which technology is most appropriate for your needs – again, a subjective process to be sure, but hopefully the following table can serve as a comparative reference.

[Table: a representative comparison of application hardening options]