
Building Integrated Quality Operations in the AI Era: From Scanning to Resilience


AppSec maturity has traditionally meant “more scanning, earlier.” However, in the age of AI, vulnerability scanning alone is no longer enough. AI tools are accelerating both software delivery and reverse engineering—enabling teams to write, test, and deploy code at record speed, but also empowering attackers to exploit that code faster than ever.

What does this mean for DevSecOps teams who are facing the demands of faster releases in a rapidly evolving threat landscape? The secret lies in building integrated quality operations. By bringing code scanning, quality signals, and runtime protection together, DevSecOps teams will be ready to turn security from a gate into a system.

We recently explored this topic in a webinar with Sembi DevSecOps experts, Solution Architect John Brawner and Product Manager Luka Oniani. Here’s a recap and the key takeaways from the session—and don’t miss the on-demand webinar embedded at the end to catch the full conversation!

TL;DR

  • Scanning is necessary, but no longer sufficient. AI accelerates delivery and attacker capabilities, especially reverse engineering of shipped binaries.
  • Attackers increasingly exploit exposure, not just defects. A clean SAST/SCA report can’t address risks rooted in readable code and accessible business logic.
  • Integrated quality operations close the gap. Unify testing signals, security findings, and automated runtime protection (including CI/CD-enforced obfuscation) so security becomes a system, not a gate.

The Limits of Traditional AppSec Scanning

Traditional AppSec tools assume that security risk lives in code defects. Find the bugs, fix the bugs, ship the code, and you’re all good, right? But modern attacks increasingly exploit design exposure, not just implementation errors, and that’s a gap that no scanner can close.

Reverse engineering is easier than ever

Consider the reality of how compiled code is distributed today. Containerized, desktop, edge, and mobile applications all ship binaries that can be decompiled. AI-powered analysis tools are making reverse engineering faster and more accessible than ever; even a non-expert can now use tools that identify patterns and suggest attack vectors with remarkable accuracy.

AI-generated code compounds this problem. It tends to follow predictable patterns and may be structurally inconsistent with surrounding architecture, making it easier for attackers to identify, analyze, and exploit. What was once a slow, expert-only process is becoming increasingly automated and accessible.

AppSec scanners can’t see business context or runtime exposure

Perhaps most importantly, AppSec scanners have no visibility into business context or enterprise risk. A clean scan result is not a security guarantee—it’s a narrow signal about a narrow set of known defects. As Solution Architect John Brawner put it, even when all your tests pass, bugs will exist. The goal is to contain what you can’t catch before it reaches the customer.

“When a threat actor reverse-engineers your mobile banking app to understand how it validates transactions, they’re not exploiting a CVE—they’re exploiting the fundamental accessibility of compiled code. No amount of SAST scanning will flag that risk because there’s nothing wrong with the code itself. The vulnerability is in its readability and fundamental structure.”

The question isn’t whether your code could be reverse engineered. It’s whether you’ve made that process trivially easy or meaningfully difficult.

Obfuscation as a DevSecOps Capability

Given how accessible reverse engineering has become, obfuscation can no longer be treated as an optional last step or a mobile-only concern. It needs to be a first-class DevSecOps capability: planned in, not bolted on.

The benefits of modern obfuscation

Modern obfuscation goes far beyond simple name mangling:

  • Control flow obfuscation restructures code logic to resist automated analysis
  • String encryption prevents attackers from searching for sensitive API endpoints or cryptographic keys
  • Anti-tampering checks detect when code has been modified and respond accordingly
  • Runtime integrity verification ensures the application behaves as intended, even on compromised devices

Together, these techniques raise the cost and complexity of reverse engineering significantly.
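To make the string-encryption technique concrete, here is a minimal sketch of the build-time/runtime split it relies on. Everything in it is invented for illustration: the XOR-plus-base64 scheme stands in for a real cipher, and the function names and key are hypothetical, not taken from any PreEmptive product.

```python
import base64

KEY = 0x5A  # illustrative key; a real obfuscator derives per-build keys


def encrypt_literal(text: str) -> str:
    """Build-time step: the obfuscator replaces each string literal
    in the source with an opaque encrypted blob."""
    return base64.b64encode(bytes(b ^ KEY for b in text.encode())).decode()


def runtime_decrypt(blob: str) -> str:
    """Runtime step: a small injected helper decodes the blob on demand,
    so the plaintext never appears in the shipped artifact."""
    return bytes(b ^ KEY for b in base64.b64decode(blob)).decode()


# Build time: the endpoint literal is replaced by an opaque blob...
blob = encrypt_literal("https://api.bank.example/validate")

# ...so an attacker grepping the binary for URLs or key material finds nothing.
# Runtime: the application recovers the string only when it is needed.
endpoint = runtime_decrypt(blob)
assert endpoint == "https://api.bank.example/validate"
```

The point of the sketch is the shape, not the cipher: sensitive literals exist only transiently in memory, which defeats the simple "search the binary for API endpoints" workflow the bullet above describes.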

Yet most organizations still treat obfuscation reactively: applied manually, late in the release cycle, or not at all. The reasons are familiar: limited awareness of the threat, a team culture oriented around CVE counts and vulnerability metrics, and the perception that obfuscation is a niche concern for mobile developers.

Automate obfuscation in CI/CD

What changes when obfuscation is automated and enforced in CI/CD? It transitions from a late-stage protection wrapper to an intrinsic part of the security architecture. Protection becomes consistent, auditable, and impossible to accidentally skip. And crucially, it becomes something that every build is tested against, not something applied after testing is complete.

For teams making the case to leadership, it helps to reframe the value proposition. Vulnerability counts speak to the probability of defects reaching customers, and while reducing those vulnerabilities is certainly a priority, it’s no longer sufficient on its own. Obfuscation protects the actual value in the product: the IP, the business logic, and the process acumen that customers are paying for. As AI lowers the cost of reverse engineering, obfuscation raises the opportunity cost for attackers. That’s a business argument as much as a technical one.

Watch the on-demand webinar to hear John Brawner and Luka Oniani explain how integrated quality operations unify scanning, quality signals, and automated protection.

Integrated Quality Operations: Unify Testing, Scanning, and Protection

In a typical enterprise software organization, QA, security, and application protection each optimize for different metrics, use different tools, and have limited visibility into how their work affects the others. The result is fragmented coverage, process inefficiency, and security gaps that fall through the cracks between teams—all compounding into organizational risk. 

Breaking Down the Silos

Building integrated quality operations means closing those gaps. It requires treating software quality, security posture, and runtime resilience as interdependent outcomes rather than separate checkboxes, and designing the toolchain and processes to reflect that. In practice, that looks like several concrete changes:

Unified visibility across disciplines

Create consolidated dashboards that show testing coverage, security findings, and protection status in a single view. When a product leader asks “are we ready to ship?” they should be looking at an integrated quality score, not synthesizing separate reports from three different teams.

Shared context between tools

Your testing platform should know which code paths handle sensitive data. Your security scanner should know which vulnerabilities are mitigated by runtime protections. Your obfuscation engine should know which code paths are covered by automated tests. This shared context enables smarter prioritization: a high-severity vulnerability in well-protected code may be less urgent than a medium-severity vulnerability in unprotected authentication logic.
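That kind of context-aware prioritization can be sketched as a toy scoring function. The field names, weights, and multipliers below are invented for illustration; a real platform would derive this context from integrated tooling rather than hand-set constants.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    cve_severity: float   # 0-10 CVSS-style score from the scanner
    protected: bool       # runtime protection (e.g., obfuscation) applied
    sensitive_path: bool  # touches auth, payments, or other sensitive logic


def priority(f: Finding) -> float:
    """Blend scanner severity with protection status and business context.
    The weights are illustrative, not a standard formula."""
    score = f.cve_severity
    if f.protected:
        score *= 0.5   # mitigated by runtime protection -> less urgent
    if f.sensitive_path:
        score *= 1.8   # sensitive business logic -> more urgent
    return round(score, 1)


# A high-severity finding in well-protected code...
protected_high = Finding(cve_severity=8.0, protected=True, sensitive_path=False)
# ...can rank below a medium-severity finding in unprotected auth logic.
unprotected_medium = Finding(cve_severity=5.0, protected=False, sensitive_path=True)

assert priority(unprotected_medium) > priority(protected_high)
```

The exact numbers don’t matter; what matters is that severity alone no longer determines the queue once protection status and business context feed into the same score.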

Shift-left protection planning

Rather than treating obfuscation as a final build step, protection requirements should be defined during design. Architects identify code that will handle sensitive operations and specify protection levels alongside functional requirements before the code is written, not after.

Automated enforcement in CI/CD

Build pipelines should include gates that verify protection coverage alongside test results and security scan outcomes. A build that passes tests but lacks required obfuscation should fail as definitively as a build with functional defects.
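A minimal sketch of such a gate, assuming the pipeline can already report test results, scan findings, and protection status as simple inputs (all names and thresholds here are hypothetical):

```python
def quality_gate(tests_passed: bool,
                 scan_critical_findings: int,
                 obfuscation_applied: bool) -> list[str]:
    """Evaluate a build against all three gates and return the reasons
    it should fail. An empty list means the build may ship."""
    failures = []
    if not tests_passed:
        failures.append("functional tests failed")
    if scan_critical_findings > 0:
        failures.append(f"{scan_critical_findings} critical scan findings")
    if not obfuscation_applied:
        failures.append("required obfuscation missing from artifact")
    return failures


# A build that passes tests but skipped protection fails the pipeline
# just as definitively as one with functional defects.
failures = quality_gate(tests_passed=True,
                        scan_critical_findings=0,
                        obfuscation_applied=False)
if failures:
    print("BUILD FAILED:", "; ".join(failures))
    # in a real pipeline, a nonzero exit code here would block the release
```

In practice the same logic lives in a CI step rather than a script, but the design point holds: missing protection is a gate failure, not a warning.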

Cross-functional ownership

Domain expertise still matters; this isn’t about collapsing QA, security, and protection into a single team. But it does mean shared OKRs, joint planning sessions, and clear accountability for integrated outcomes rather than siloed metrics.

For teams that can’t protect everything at once, the starting point is straightforward:

  1. Apply risk management principles. Begin with your highest-value assets: critical business logic, high-value transactions, and sensitive IP.
  2. Assess practical risk, not just theoretical exposure.
  3. Coordinate across teams so that when a known gap exists, someone is watching it while it gets patched.

How AI Is Reshaping the DevSecOps Threat Model

AI has turbocharged both sides of the security equation, and the asymmetry is worth understanding clearly. On the development side, tools like GitHub Copilot, Cursor, and LLM-based assistants have dramatically accelerated software delivery. On the attacker side, those same capabilities are being actively weaponized for reverse engineering, vulnerability discovery, and exploit development.

The delivery-security imbalance

During the webinar, John highlighted a structural imbalance that AI has widened: in most organizations, the ratio of software engineers to AppSec engineers runs at roughly 60-to-1. In large enterprises, it can reach 400-to-1. AI-assisted code generation has accelerated delivery without a corresponding uplift on the security side. As Luka Oniani noted plainly:

“Right now as we speak, somebody’s weaponizing AI against you.”

This changes what “good enough” obfuscation looks like. Because AI excels at pattern recognition, effective obfuscation is fundamentally about destroying the patterns that decompilers and AI analysis tools rely on. Obfuscation makes code unreadable and structurally unpredictable—which is precisely what breaks AI-assisted reverse engineering. The goal is not to make analysis impossible, but to make it expensive enough that attackers move on.

The cycle of constant evolution

The broader implication is that application protection is not a static problem with a permanent solution. It’s an ongoing practice, requiring the same continuous improvement mindset applied to every other aspect of DevSecOps. The obfuscation strategy that was sufficient in 2024 won’t be sufficient in 2027—and that’s exactly as we would expect.

Looking ahead two to three years, John sees AppSec platforms evolving beyond vulnerability count metrics toward a more holistic, systems-level view. The question won’t be “How many CVEs did we find?” but rather “How defensible is this code, how risk-aware is our delivery process, and how faithfully does what ships reflect what we designed?” 

Protection will be automated into every build. Teams that want to be considered mature won’t treat security as a gate at the end of the pipeline: they’ll have made it a structural property of every artifact that leaves the build system.

The Bottom Line: Resilience Requires More Than Scanning

For decades, software organizations have treated testing, security, and application protection as separate disciplines, each with its own tools, teams, budgets, and reporting lines. That organizational model made sense when software moved slowly, threats evolved predictably, and the client-side attack surface was limited to desktop applications. None of those conditions hold today.

Organizations navigating the current environment successfully aren’t just investing more in each discipline independently. They’re fundamentally rethinking how testing, security, and application protection work together. They’re building integrated quality operations and treating software quality, security posture, and runtime resilience as interdependent outcomes rather than separate checkboxes.

When asked if he could give teams one piece of advice to improve security this quarter without slowing delivery, Luka left the audience with this quote from security researcher Dan Kaminsky:

“Stop relying solely on ‘finding issues’ and start assuming your code will be studied, copied, and abused after release. Then automate protection accordingly.”

The AI era has changed the rules for application security. The teams that adapt won’t be the ones who scan harder; they’ll be the ones who build security and protection into the fabric of how they deliver software.

Watch the on-demand webinar to hear John Brawner and Luka Oniani break down what integrated quality operations looks like in practice—and how to start building it into your delivery process today.
