Security & Governance

Detection Engineers Move To Agentic Workflows As AI Attackers Cut Recon To Minutes

AI Data Press - News Team
|
May 5, 2026

Danny Zendejas, a senior security engineer, explains why narrow, single-purpose AI agents outperform general-purpose ones in detection and response, and why organizations need agentic workflows of their own to keep pace with AI-assisted attackers.

Credit: AI Data Press News

From a detection standpoint, an automated attack can show up as a few minutes of activity, whereas a human hands-on-keyboard attack takes longer.

Danny Zendejas

Senior Security Engineer
Security infrastructure is hitting a fork. Teams that bolt AI onto existing detection and response systems gain speed in the short term but inherit architectural limits that compound over time. Teams that rebuild from the ground up get cleaner systems but absorb significant cost and disruption. That tension now defines how security engineering organizations are approaching everything from CI/CD pipelines to incident response.

Danny Zendejas is a Senior Security Engineer who spent four years at Pinterest building detection signals, leading incident response across production and corporate infrastructure, and managing data retention projects handling terabytes of daily ingested data. His background spans detection engineering, threat hunting, endpoint and cloud security, and SOAR automation.

"Anything that's AI-assisted from the attacker standpoint is going to be able to scan much faster. Repositories, file systems, multiple scripts. From a detection standpoint, an automated attack shows up as a few minutes of activity, whereas a human hands-on-keyboard attack takes longer," says Zendejas.

From SOAR to agentic response

The shift Zendejas describes is already underway. SOAR automation, the dominant workflow for the past several years, is giving way to agentic architectures that handle more of the triage and response chain before a human touches the case.

The change is tangible in code review workflows. Pull requests that previously went through linters and syntax checks now pass through AI-assisted vulnerability scans and design evaluations first. "An engineer comes in and does the final pass of whether this was the right thinking from a design standpoint," Zendejas says. "A lot of the checks have already been done."

For these workflows to function, Zendejas emphasizes that each company needs its own contextual engine. "Every company has its own different threat model," he says. "According to what your threats are, you have your context for most important assets, high-risk accounts. And then from there, you have agentic workflows to combat incoming attacks."

The Unix philosophy for security AI

When it comes to how agents should be designed, Zendejas draws on a model from systems engineering. Narrow tools that do one thing well outperform broad agents that try to handle everything.

"In Linux and Unix, a lot of tools only do one thing, but they do it really well," Zendejas says. "Taking that same mentality when it comes to AI agents, having an agent that only reviews this specific kind of output or only works with this third-party tool, minimizes what it needs to do and reduces the chances of hallucination." He has not seen a single-agent-for-everything approach work in practice.

When agents are given focused documentation for their specific scope, "they tend to do pretty well," Zendejas says. The pattern reinforces the broader point: scoping reduces error, and scope creep is where agents fail.
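The single-purpose pattern Zendejas describes can be sketched as a simple dispatcher that routes each artifact type to exactly one narrowly scoped agent and refuses anything outside its registry. This is an illustrative sketch, not Zendejas's implementation; the artifact types and handler behavior are hypothetical stand-ins for real agents.

```python
from typing import Callable

# Hypothetical single-purpose "agents": each handles exactly one artifact type.
# In a real system these would call scoped models with focused documentation.
AGENTS: dict[str, Callable[[str], str]] = {
    "pull_request_diff": lambda text: f"reviewed diff ({len(text)} chars) for vulnerability patterns",
    "edr_alert": lambda text: f"triaged EDR alert: {text[:40]}",
}

def dispatch(artifact_type: str, payload: str) -> str:
    """Route work to the one agent scoped for it.

    Out-of-scope input is rejected outright rather than handed to a
    general-purpose agent that might hallucinate an answer.
    """
    agent = AGENTS.get(artifact_type)
    if agent is None:
        raise ValueError(f"no agent scoped for {artifact_type!r}")
    return agent(payload)
```

The design choice mirrors the Unix point: rejection at the boundary is what keeps each agent's error surface small.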

Supply chain and identity are the top risks

Zendejas identifies supply chain attacks as one of the most persistent and difficult threats facing security teams today.

"A legitimate package gets compromised, or a version goes up by one and that version is the malicious one," Zendejas explains. "A 1.6 version is fine, and then the 1.7 is the one that's malicious for a day or so before it gets taken down." Some teams put validation checks in front of their CI/CD pipeline. Pinning dependencies helps, but does not fully solve the problem.

On the identity side, non-human identity risk continues to grow. Zendejas points to scenarios where a third-party provider gets compromised through an OAuth grant. "An identity tool doesn't even have to be compromised, just an OAuth grant," he says. "That third party gets compromised, and from that OAuth grant is where a company gets impacted."

Shadow AI replaces shadow IT

Zendejas frames shadow AI as the new version of shadow IT, with employees spinning up unapproved AI tools without security visibility. His recommendation: if a company has enterprise agreements with approved providers, allow-list those and build detections for everything else.

On logging, Zendejas says prompt-level capture is cost-prohibitive for most organizations. A more sustainable approach tracks user-agent strings and DNS requests to model endpoints, giving teams visibility into which tools are in use without trying to log every interaction.
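The allow-list-plus-detection approach can be sketched as a check over DNS query logs: flag queries to known AI-model endpoints that are not on the enterprise-approved list. The domain sets below are illustrative placeholders; a real deployment would populate them from vendor contracts and threat-intelligence feeds.

```python
# Enterprise-approved providers (hypothetical example allow-list).
APPROVED_AI_DOMAINS = {"api.openai.com"}

# Known AI-model endpoints to watch for (illustrative, not exhaustive).
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(dns_queries: list[str]) -> list[str]:
    """Return queried AI-model domains that are not on the approved allow-list.

    Unknown domains are ignored; only recognized AI endpoints outside the
    allow-list generate a shadow-AI detection.
    """
    return sorted({
        q for q in dns_queries
        if q in KNOWN_AI_DOMAINS and q not in APPROVED_AI_DOMAINS
    })
```

The same logic applies to user-agent strings in proxy logs: match against known AI-client signatures, then subtract the approved set.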

His closing advice reflects the throughline of the entire conversation. "People will have to decide what they want to focus on," Zendejas says. "Whether it's getting really good contextual agents or stopping shadow AI. There are so many new things all the time that we get shiny object syndrome. But if companies focus on a few things and do them really well, it'll work out in their favor."