
Trusted Computing: Panacea or Magical Thinking?

Can you tell the difference? Exception or the norm?

Of course, everyone is “for security” in principle. Each organization must answer the hard question for itself: “How much is enough?” Over-engineering is, by definition, excessive, and over-engineering application security can be devastating: overly complex algorithms, architectures, and processes can compromise user experience, degrade performance, and slow development velocity. On the other hand, punishment is swift for organizations that cut corners and fail to effectively secure their applications, their data, and, most importantly, their users and business stakeholders. Finding and maintaining that balance can be time-consuming and, because you can never be sure you’ve gotten it exactly right, it can also be a thankless job.

Given all of this, you can almost forgive development organizations when they are seduced into the magical thinking of “trusted computing.” Do not conflate magical trusted computing with Trusted Execution Environments (TEEs) and their components and derivatives. The latter define a runtime in which applications can be securely executed; the “magical” variety offers a haven where bad (or clumsy) actors simply do not exist, a utopian dream-place where only good actors have access.

This slippery slope begins innocently enough with a well-intentioned desire to avoid over-engineering security controls, focusing protection primarily on untrusted environments. To avoid appearing tone-deaf to the underlying business objectives, analysts, architects, and development organizations often frame application security requirements within a worldview that treats untrusted systems as the exception rather than the rule.

Inside (not outside): one of the most magical lands of all

In Verizon’s 2019 Data Breach Investigations Report, nearly 40% of breaches were attributed to internal threat actors – and, given how reluctant organizations are to report such incidents, the real figure is probably higher.

The threat posed by malicious insiders has historically been among the most difficult security challenges to address. Insiders typically attack using authorized access, and their behavior can be very difficult to distinguish from normal activity. And this doesn’t even account for scenarios where external attackers fool non-malicious or unwitting insiders into acting on their behalf.


Further, organizations suffering insider attacks have always been reluctant to share data about those attacks publicly. While numerous regulations impose disclosure requirements for data loss (the GDPR being among the most draconian), there are no such obligations tied narrowly to application exploits unless they lead, as they often do, to subsequent data loss. Intellectual Property loss does not fall under that rubric.

Who are the typical threat actors?

The following potential threat actor personas are divided into “insiders” and “outsiders,” and depending on the specific business and applications, this list may be shorter or longer.

Insiders             Outsiders
Employees            Professional Hackers
Contractors          Competitors
Vendors              Organized Crime
Business Partners    Non-professional hackers
                     Hacktivists
                     Nation state intelligence/military
                     Malware authors

The central point here is that, even with limited public data, there is simply no evidence to suggest that any organization has effectively established and maintained an application haven that excludes threat actors (ironically, perhaps this is why there is so much interest in commercial TEEs).

How much application security is enough?

Returning to the central theme: how can an organization most effectively and efficiently find the balance between security and productivity? Trust must be viewed as a continuum, not a binary state. One end of this continuum might include running applications inside Trusted Execution Environments; however, that is feasible for only a narrow slice of today’s application deployment scenarios.

Looking at the most closely related (and, in fact, inseparable) domain of information security, there is certainly a paradox (if not an outright contradiction) between the guidance applied to sensitive data versus sensitive code. There is near-universal agreement that, at a minimum, sensitive information at rest must be encrypted at all times and in all systems. HIPAA, FISMA, GDPR, and 23 NYCRR 500 are just a tiny sampling of the growing body of information security requirements that mandate encryption of sensitive data (PII, etc.).
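To make the data-side requirement concrete, here is a minimal sketch of field-level encryption at rest using Python’s third-party cryptography package (Fernet). The record layout and inline key generation are illustrative assumptions only; a production system would source its keys from a KMS or HSM and manage rotation and access separately.

    # Minimal sketch: encrypt a PII field before it is persisted "at rest".
    # Requires the third-party package: pip install cryptography
    from cryptography.fernet import Fernet

    # Assumption for illustration: in production, the key comes from a KMS/HSM,
    # never generated inline next to the data it protects.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = {"user_id": 42, "ssn": "123-45-6789"}  # hypothetical PII record

    # Encrypt the sensitive field before writing it to disk or a database.
    record["ssn"] = cipher.encrypt(record["ssn"].encode()).decode()

    # Decrypt only when an authorized code path actually needs the plaintext.
    plaintext_ssn = cipher.decrypt(record["ssn"].encode()).decode()

The paradox is that while this kind of control is mandated for data, there is rarely any comparably explicit guidance for the code that handles it.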

If you were looking for more evidence that there is no haven “inside” an organization, ask yourself why PII needs to be encrypted when it is safely at rest inside a well-run, secure organization.

If an application accesses sensitive data, or itself embodies sensitive material such as Intellectual Property, one would logically conclude that the controls placed on the application should be commensurate with those in place for the associated data.

Application Flow

As with traditional information, mapping the lifecycle of an application is a fundamental step in measuring the potential for vulnerability exploitation (which, in some percentage of those cases, leads to an actual loss of some sort).

If computing trust is a continuum, where do your applications fall? 

Who uses your applications, and under what conditions?

While this does not paint the entire picture, counting the users (within a fixed timeframe) who fall into each of these cells and assigning an appropriate multiplier for your scenarios offers a perspective on the likelihood of an incident occurring; a minimal scoring sketch follows the table below. 10,000 unverified users accessing an application on unmanaged devices across multiple countries should be far more concerning than a single, non-privileged employee running on an entirely managed platform.

Conditions (columns): Privileged | Non-privileged | Managed network | Managed device | Un-managed (by you) network | Un-managed (by you) device | Geographical distribution
User types (rows):
Employee
Contractor
Partner
Client
Unverified
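As one way to turn this worksheet into a number, the sketch below multiplies user counts per cell by per-condition weights to produce a rough exposure score. The weights, counts, and category names are illustrative assumptions, not calibrated values; the point is only that unverified users on unmanaged platforms dominate the total.

    # Rough exposure score: user counts per cell weighted by how risky the
    # condition is. All weights and counts below are illustrative assumptions.
    CONDITION_WEIGHTS = {
        "managed_device": 1.0,
        "unmanaged_device": 5.0,
        "managed_network": 1.0,
        "unmanaged_network": 3.0,
        "privileged_user": 4.0,
        "unverified_user": 10.0,
        "multi_country": 2.0,
    }

    def exposure_score(user_counts: dict) -> float:
        """Sum of (users in a cell) x (weight of that condition)."""
        return sum(CONDITION_WEIGHTS[cell] * count
                   for cell, count in user_counts.items())

    # 10,000 unverified users on unmanaged devices across multiple countries...
    risky = exposure_score({"unverified_user": 10_000,
                            "unmanaged_device": 10_000,
                            "multi_country": 10_000})

    # ...versus a single non-privileged employee on a fully managed platform.
    benign = exposure_score({"managed_device": 1, "managed_network": 1})

    print(f"risky={risky:,.0f}  benign={benign:,.0f}")  # risky dwarfs benign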

Maximizing trust while minimizing risk

Key points to keep in mind when setting out on an application security/risk management journey:

There is no magical “happy place” where protecting your software’s confidentiality, maintaining its integrity, and controlling access to it no longer need tending. Certainly, there is a spectrum of application scenarios where the likelihood of a vulnerability exploit and the materiality of the resulting primary and secondary losses vary widely; still, none of them lets you take off your thinking cap and ignore what is now a fundamental pillar of every application development project.

Application security cannot be managed as a silo. Whatever the strategy, it should be consistent with corresponding information security policies and practices. This consistency should include a review of any regulatory or statutory information privacy and security obligations that your organization may be subject to.

Risk can be shared but not transferred (from a technological perspective). Cloud providers, third-party platforms, networks, and devices can simplify (or complicate) your obligations but can never relieve you of them.

This is a journey, like every other security and risk management endeavor. No matter how appropriate your assumptions may be when you make them, you must revisit them periodically. How often? Probably about as often as you revisit your information security and privacy policies.


Try a Free Trial of PreEmptive Today!