When it comes to application risk management, you can’t do it alone

I’m often asked to estimate how many developers are required to obfuscate and harden an application against reverse engineering and tampering – and when people say “required,” what they usually mean is the bare minimum number of developers that must be licensed to use our software.

Of course it’s important to get the number of licensed users just right; if the count is too high, you’re wasting money – but if it’s too low, you’re either not going to be efficient or effective, or, worse still, you’ve painted yourself into a corner where you’re forced to violate a license agreement to do your job.

Yet, as important as this question may be, it’s not the first question that needs answering.

The number of staff required to effectively manage application risk is not the same as the number of concurrent users required to run our (or any) software at a given point in time. …and if you’re not planning on effectively managing application risk, why bother licensing software in the first place?

How many people are required to run PreEmptive’s application hardening products on a given build of a particular application? Actually, none at all. Both Dotfuscator (for .NET) and DashO (for Java) can be fully integrated into your automated build and (continuous) deployment processes – lights-out and hands-free.
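To make the “lights-out” idea concrete, here is a minimal sketch of a hardening step wired into an automated build as an unattended process invocation. The tool name, flags, and file paths (harden-tool, harden.cfg, build/app.jar) are hypothetical placeholders – real products such as Dotfuscator and DashO ship their own CLIs and build plugins – the point is simply that the step runs hands-free and fails the build loudly on error.

    import java.util.List;

    public class HardenStep {
        public static void main(String[] args) throws Exception {
            // Hypothetical command line; substitute your hardening tool's actual CLI.
            List<String> cmd = List.of(
                "harden-tool", "--config", "harden.cfg",
                "--in", "build/app.jar", "--out", "dist/app.jar");

            // Inherit stdout/stderr so the build log captures the tool's output.
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            int exit = p.waitFor();

            if (exit != 0) {
                // Fail the build loudly; a silent failure is the riskiest outcome.
                System.err.println("Hardening step failed with exit code " + exit);
                System.exit(exit);
            }
        }
    }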

Now, the answer will be different when the question is rephrased as “how many people does it take to effectively protect your application assets against reverse engineering and tampering?” The answer is “it depends,” but it cannot be fewer than two. Here’s why…

  • Application risk management is made up of one (or more) controls (processes, not programs). These controls must first be defined, then implemented, then applied consistently over time, and, lastly, monitored to ensure effective use.
  • Application hardening (obfuscation and tamper defense injection) is just such a control – a control that is embedded into a larger DevOps framework – and a control that is often the final step in a deployment process (followed only by digital signing).

Now, in order to be truly effective, application hardening cannot create more risk than it mitigates – the cure cannot be worse than the disease.

What risks can come from a poorly managed application hardening control (process)?

If an application hardening task fails and goes undetected,

  • the application may be distributed unprotected into production, and the risk of reverse engineering and tampering goes entirely unmanaged, or
  • the application may be shipped in a damaged state, causing runtime failures in production.

If an application hardening task failure is detected, but the root cause cannot be quickly identified and fixed, then the application can’t be shipped; deadlines are missed and the software can’t be used.
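One lightweight way to catch a silent failure before it ships is to add an automated check after the hardening step that inspects the output artifact. The sketch below is a minimal example under assumed names: it assumes obfuscation renames internal classes, and treats the survival of a known internal class name (the hypothetical com.example.internal.LicenseValidator) in the shipped JAR as evidence that the hardening step did not actually run.

    import java.util.jar.JarFile;

    public class VerifyHardened {
        public static void main(String[] args) throws Exception {
            try (JarFile jar = new JarFile("dist/app.jar")) {
                // Hypothetical internal class we expect obfuscation to have renamed.
                boolean unprotected =
                    jar.getEntry("com/example/internal/LicenseValidator.class") != null;
                if (unprotected) {
                    System.err.println("Output JAR appears unobfuscated; failing the build.");
                    System.exit(1);
                }
            }
            System.out.println("Hardening check passed.");
        }
    }

A check like this doesn’t replace a person monitoring the control, but it turns the worst case – shipping unprotected – into a visible, fail-fast build error.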

So, what’s the minimum number of people required to protect an application against reverse engineering and tampering?

You’ll need (at least) one person to define and implement the application hardening control.

…and you’ll need (at least) one person to manage the hardening control itself (monitor each hardening run, detect any build issues, and resolve those issues in a timely fashion).

Could one individual design, implement, and manage an application hardening control? Yes, one person can certainly do all three.

However, if the software being protected is released with any frequency or urgency, one individual cannot guarantee that they will be available to manage that control on every given day at every given time – they simply must have a backup, a “co-pilot.”

No organization should implement an application hardening control that depends on one individual – there must be at least two people trained (and authorized) to run, administer, and configure your application hardening software and processes. The penalty for an unexpected shipping delay, for shipping damaged code, or for releasing an unprotected application into “the wild” is typically so severe that, even though the likelihood of such an event on any given day may seem remote, it cannot be ignored.

This is nothing new in risk management – every commercial plane flies with a co-pilot for this very reason, and aircraft manufacturers do not build planes without a co-pilot’s seat. It would be cheaper to build and fly planes that accommodate only one pilot – and it wouldn’t be an issue for most flights – but to ignore the risk that a single pilot brings would be more than irresponsible; it would be unethical.

Are there other considerations that might increase the need for additional people and processes? Of course – but these are tied to development methodologies, architecture choices, testing and audit requirements of the development organization, etc. These are not universal requirements. 

If reverse engineering and/or application tampering pose intellectual property, privacy, compliance, piracy, or other material risks, those risks need to be managed accordingly – with a resilient and well-defined process. Or, to put it another way: when it comes to application risk management, you can’t do it alone.

Why are people who need people the luckiest people in the world? Because they have a backup to protect their applications against unplanned delays, reverse engineering and tampering!