I’m often asked to estimate how many developers are required to obfuscate and harden an application against reverse engineering and tampering. When people say “required,” what they usually mean is: what is the bare minimum number of developers that need to be licensed to use our software?
Of course, it’s essential to get the number of licensed users just right. If the count is too high, you’re wasting money; if it’s too low, you’re working inefficiently at best, or, worse still, you’ve painted yourself into a corner where you’re forced to violate a license agreement just to do your job.
Yet, as important as this question may be, it’s not the first question that needs answering.
The number of staff required to effectively manage application risk is not the same as the number of concurrent users required to run our (or any) software at a given time. And if you’re not planning on effectively managing application risk, why bother licensing software in the first place?
How many people are required to run PreEmptive’s application hardening products on a given build of a particular application? Actually, none at all. Both Dotfuscator for .NET and DashO for Java can be fully integrated into your automated build and (continuous) deployment processes: lights out and hands free.
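As a sketch of what “lights out” integration can look like, here is a minimal unattended build step in Python. The `harden-tool` command name is a placeholder standing in for whatever CLI your hardening product actually provides; it is not real Dotfuscator or DashO syntax. The key design point is that the step fails closed: any error aborts the pipeline rather than letting an unhardened artifact slip through.

```python
import subprocess
import sys

def run_hardening_step(cmd: list[str]) -> None:
    """Run a hardening command as an unattended build step.

    Fail closed: any nonzero exit aborts the whole pipeline,
    because silently shipping an unhardened artifact is the one
    outcome an automated process must rule out.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the tool's error output, then stop the build.
        print(result.stderr, file=sys.stderr)
        raise SystemExit(result.returncode)

# Placeholder invocation -- "harden-tool" is illustrative only:
# run_hardening_step(["harden-tool", "--in", "app.dll",
#                     "--out", "hardened/app.dll"])
```

A step like this runs after compilation and before packaging; the pipeline’s own failure notifications then do the alerting.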
The answer will be different when the question is rephrased as “how many people does it take to effectively protect your application assets against reverse engineering and tampering?” The answer is “it depends,” but it cannot be less than two. Here’s why…
To be genuinely effective, application hardening cannot create more risk than it mitigates—the cure cannot be worse than the disease.
What risks can come from a poorly managed application hardening control (process)?
If an application hardening task fails and goes undetected, you can ship damaged code or release an entirely unprotected application into “the wild” without ever knowing it.
If a hardening task failure is detected but its root cause cannot be quickly identified and fixed, the application can’t be shipped, deadlines are missed, and users go without the software.
So, what’s the minimum number of people required to protect an application against reverse engineering and tampering?
You’ll need (at least) one person to define and implement the application hardening control.
…and you’ll need (at least) one person to manage the hardening control itself (monitor each time the application is hardened, detect any build issues, and resolve those issues should they arise in a timely fashion).
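To make the “detect any build issues” half of that job concrete, here is one possible post-build sanity check. This is a heuristic sketch, not a feature of any particular product: it verifies that class renaming actually occurred in a Java output jar, so a silent hardening failure can’t pass unnoticed. The short-name cutoff and the 0.2 ratio are illustrative assumptions, not standards.

```python
import zipfile

def looks_hardened(jar_path: str, max_plain_ratio: float = 0.2) -> bool:
    """Heuristic post-build check that obfuscation actually ran.

    Renamed classes in a hardened jar tend to have very short
    simple names (a.class, b.class, ...). If too many entries
    still carry long, readable names, flag the build for review.
    The length cutoff and ratio here are illustrative only.
    """
    with zipfile.ZipFile(jar_path) as jar:
        classes = [n for n in jar.namelist() if n.endswith(".class")]
    if not classes:
        return False  # nothing to check: treat as a failed build
    plain = sum(
        1
        for n in classes
        # Simple name = path tail with the ".class" suffix removed.
        if len(n.rsplit("/", 1)[-1][: -len(".class")]) > 3
    )
    return plain / len(classes) <= max_plain_ratio
```

A gate like this would run immediately after the hardening step, and a failure alerts whoever is on duty rather than letting the release proceed, which is exactly why more than one trained person needs to be watching.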
Could one individual design, implement, and manage an application hardening control? Certainly; one person can perform all three tasks.
However, if the software being protected is released frequently or urgently, no single individual can guarantee they will be available to manage that control on every day and at every hour. They simply must have a backup: a “co-pilot.”
No organization should implement an application hardening control dependent on one individual. At least two individuals must be trained (and authorized) to run, administer, and configure your application hardening software and processes. The penalty for unexpected shipping delays, shipping damaged code, or releasing an unprotected application asset into “the wild” is typically so severe that even though the likelihood of such an event occurring on any given day may seem remote, it cannot be ignored.
This is nothing new in risk management. Every commercial plane flies with a co-pilot for this very reason, and aircraft manufacturers do not build airliners without a co-pilot’s seat. It would be cheaper to build and fly planes that accommodate only one pilot, and for most flights it wouldn’t be an issue. But to ignore the risk that a single pilot brings would be more than irresponsible; it would be unethical.
Are there other considerations that might require additional people and processes? Of course. These are tied to the development organization’s methodologies, architecture choices, testing practices, and audit requirements, and they are not universal.
If reverse engineering and/or application tampering pose intellectual property, privacy, compliance, piracy, or other material risks, they must be managed accordingly, through a resilient and well-defined process. In other words, when it comes to application risk management, you can’t do it alone.