A financial services company called us with a problem that nobody likes to talk about: their own people stole from them.
Multiple employees — across different roles and access levels — had been quietly exfiltrating data. Customer information, intellectual property, the works. They were planning to leave and start a competing business. They all made similar moves within tight timeframes, then resigned at the same time. Not exactly Ocean’s Eleven — my kids do a better job hiding evidence when cookies go missing from the jar. By the time leadership realized what had happened, the group was already gone.
The company had a SIEM. They had logs. But log enrichment and normalization hadn’t been a priority — and why would it be, until you need it? There was no common schema, no easy way to correlate activity across sources. Incident response meant logging into individual vendor consoles and manually hunting through raw logs. It was slow, painful, and expensive.
That’s when they called us.
The 90-Day Plan
We scoped a 90-day engagement with a clear objective: figure out what happened, then make sure nothing like it could go undetected again.
Days 1–30: Log Review and Gap Analysis
The first month was forensic. We dug into every log source that mattered:
- Email (Google Workspace) — who sent what to whom, and when
- Identity Provider — login patterns, role changes, access grants
- VPN — connection times, locations, session durations
- Internal tooling — application-level activity logs
- System logs — endpoint and server-level events
The biggest pain point was immediately obvious. The data existed, but it was scattered across vendor-specific formats with no common schema. To reconstruct the timeline of the breach, we had to go back into raw Google Workspace logs and manually find activity that should have been searchable in seconds. Vendor tooling gave us the data eventually — but “eventually” isn’t good enough during an active investigation.
Days 30–60: Enrichment, Normalization, and OCSF
This is where the real engineering started. We took every critical data source and mapped it to the Open Cybersecurity Schema Framework (OCSF). OCSF gives you a common language for security events — so a login from your IDP, a VPN connection, and an email send all share a consistent structure. Fields have the same names. Timestamps are in the same format. User identifiers resolve the same way.
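To make that concrete, here’s a minimal sketch of what one of those mappings might look like: a raw identity-provider login record normalized into an OCSF Authentication event. The raw field names and the helper itself are illustrative, not the client’s actual pipeline; the OCSF attribute names follow the published Authentication class.

```python
from datetime import datetime, timezone

def normalize_idp_login(raw: dict) -> dict:
    """Map a raw (hypothetical) IDP login record to an OCSF Authentication event."""
    ts = datetime.fromisoformat(raw["timestamp"]).astimezone(timezone.utc)
    return {
        # OCSF: category 3 (Identity & Access Management), class 3002 (Authentication),
        # activity 1 (Logon); type_uid is class_uid * 100 + activity_id.
        "category_uid": 3,
        "class_uid": 3002,
        "activity_id": 1,
        "type_uid": 3002 * 100 + 1,
        "time": int(ts.timestamp() * 1000),  # epoch milliseconds, UTC
        "actor": {"user": {"name": raw["username"], "uid": raw["user_id"]}},
        "src_endpoint": {"ip": raw["source_ip"]},
        "status_id": 1 if raw["result"] == "SUCCESS" else 2,  # 1 = Success, 2 = Failure
        "metadata": {"product": {"name": raw["idp_name"]}, "version": "1.1.0"},
    }

# Hypothetical raw record, roughly the shape an IDP export might give you.
raw_login = {
    "timestamp": "2024-03-12T02:17:45+00:00",
    "username": "jsmith",
    "user_id": "00u1abcd",
    "source_ip": "203.0.113.7",
    "result": "SUCCESS",
    "idp_name": "ExampleIDP",
}

print(normalize_idp_login(raw_login))
```

Do that once per source and the VPN, email, and application events all end up speaking the same dialect.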
This is the step most organizations skip, and it’s the reason incident response takes days instead of hours. When your data is normalized, correlation is trivial. When it’s not, you’re manually translating between five different vendor schemas while the clock ticks.
Once the data was enriched and structured, we forwarded everything into Datadog. We chose Datadog because it’s fast to stand up, the log ingestion pipeline is straightforward, and — critically — we already had our monitors and dashboards defined as Terraform modules. Infrastructure as code meant we could deploy the entire detection stack repeatably and hand it off cleanly.
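As a rough illustration of the forwarding step, here is a sketch that ships a batch of normalized events to Datadog over the v2 HTTP log intake endpoint. The source, service, and tag names are placeholders, and the snippet stands in for whatever forwarder actually sits in the pipeline.

```python
import json
import os

import requests  # third-party: pip install requests

DD_INTAKE_URL = "https://http-intake.logs.datadoghq.com/api/v2/logs"

def ship_to_datadog(events: list[dict]) -> None:
    """Send a batch of OCSF-normalized events to Datadog Logs."""
    payload = [
        {
            "ddsource": "ocsf-pipeline",   # placeholder source name
            "service": "security-logs",    # placeholder service name
            "ddtags": f"class_uid:{event['class_uid']}",
            "message": json.dumps(event),  # the full OCSF event rides along as the log body
        }
        for event in events
    ]
    resp = requests.post(
        DD_INTAKE_URL,
        headers={"DD-API-KEY": os.environ["DD_API_KEY"], "Content-Type": "application/json"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()

# Example: forward a single normalized login event.
ship_to_datadog([{
    "class_uid": 3002,
    "activity_id": 1,
    "time": 1710209865000,
    "actor": {"user": {"name": "jsmith"}},
}])
```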
Days 60–90: Agentic AI-Powered Threat Detection
Here’s where it gets interesting.
With clean, normalized logs flowing into Datadog, we built an AI-powered threat detection workflow using Claude Code. Not a chatbot. Not a dashboard widget. An actual analyst-in-a-box.
Here’s how it works: the client’s security team clones the repository, fires up a Claude Code session, and talks to it like a human analyst.
“Find all activity from jsmith in the last 24 hours.”
Behind the scenes, a carefully crafted CLAUDE.md file tells Claude everything it needs to know — what log indexes exist, what fields are available, which APIs and methods it can call, and which credentials to use. Claude understands the relationships between data sources because we’ve documented them. It knows that a user ID in the IDP logs maps to the same user in the email logs and the VPN logs. It can “join” across sources in ways that would take a human analyst significant time to do manually.
The prompts stay human and analyst-friendly. The complexity lives in the configuration, not the conversation. This means the client’s team doesn’t need to memorize Datadog query syntax or know which index holds VPN logs versus email logs. They just ask questions and get answers.
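To show what “the complexity lives in the configuration” looks like in practice, here is a sketch of the kind of call Claude might construct from that prompt: one search against Datadog’s v2 logs search endpoint, filtered on the normalized user field, so a single query spans IDP logins, VPN sessions, and email sends. The index name and field path are illustrative.

```python
import os

import requests  # third-party: pip install requests

DD_SEARCH_URL = "https://api.datadoghq.com/api/v2/logs/events/search"

def recent_activity_for_user(username: str, hours: int = 24) -> list[dict]:
    """Pull every normalized event for one user, across all sources, in one query."""
    body = {
        "filter": {
            # Because every source is OCSF-normalized, one field matches logins,
            # VPN sessions, and email sends alike.
            "query": f"@actor.user.name:{username}",
            "from": f"now-{hours}h",
            "to": "now",
            "indexes": ["security-logs"],  # illustrative index name
        },
        "sort": "timestamp",
        "page": {"limit": 1000},
    }
    resp = requests.post(
        DD_SEARCH_URL,
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
            "Content-Type": "application/json",
        },
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for event in recent_activity_for_user("jsmith"):
    attrs = event["attributes"]
    print(attrs["timestamp"], attrs.get("service"))
```

The analyst never sees any of this; Claude assembles it from what CLAUDE.md says about the indexes and fields.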
We also configured agentic monitoring patterns — things like these (one of them is sketched in code after the list):
- Large downloads during off-hours — if you’re downloading the entire customer database at 2 AM, you’d better be on the incident response team
- Coordinated similar actions across users — multiple people performing the same unusual activity in a short window
- Access pattern anomalies — users accessing resources outside their normal scope
- Exfiltration signals — email forwards to personal accounts, large file attachments, unusual cloud storage activity
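Here is the promised sketch of one of them, the coordinated-similar-actions pattern, written against the normalized events rather than any particular vendor API. The thresholds, window, and field names are illustrative rather than production values.

```python
from collections import defaultdict

def coordinated_actions(events: list[dict],
                        window_ms: int = 30 * 60 * 1000,
                        min_users: int = 3) -> list[dict]:
    """Flag any action performed by at least min_users distinct users within one window."""
    by_action = defaultdict(list)
    for event in events:
        by_action[event["action"]].append(event)

    alerts = []
    for action, evts in by_action.items():
        evts.sort(key=lambda e: e["time"])  # time is epoch milliseconds
        start = 0
        for end in range(len(evts)):
            # Slide the window so it never spans more than window_ms.
            while evts[end]["time"] - evts[start]["time"] > window_ms:
                start += 1
            users = {e["user"] for e in evts[start:end + 1]}
            if len(users) >= min_users:
                alerts.append({"action": action, "users": sorted(users),
                               "window_start": evts[start]["time"]})
                break  # one alert per action is enough for this sketch
    return alerts

# Toy data: three users bulk-export the customer list within twenty minutes.
sample = [
    {"user": "jsmith", "action": "export_customer_list", "time": 1_710_000_000_000},
    {"user": "adoe",   "action": "export_customer_list", "time": 1_710_000_600_000},
    {"user": "bkhan",  "action": "export_customer_list", "time": 1_710_001_200_000},
    {"user": "jsmith", "action": "read_dashboard",       "time": 1_710_000_100_000},
]
print(coordinated_actions(sample))
```

The other patterns follow the same shape: a filter over normalized fields, a threshold, and a time window.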
These are the exact patterns that, in hindsight, would have flagged the original breach. Now they’re caught in near real-time.
The Handoff
On day 90, we didn’t just hand over a slide deck. The client’s team deployed every system themselves that day. They had:
- Full runbooks for every detection and response workflow
- Documentation covering the log pipeline, OCSF mappings, and Datadog configuration
- Training on how to use the Claude Code analyst workflow
- Terraform modules to redeploy or modify the entire stack
They’re self-sufficient. That was always the goal.
Why This Matters
Insider threats are the ones nobody wants to think about. External attackers get the headlines, but insiders already have the keys. They know your systems, they know your data, and they know your blind spots.
Most organizations don’t discover insider threats through their security tooling. They discover them after the damage is done — when the employees quit, when the competitor launches, when the customer data shows up somewhere it shouldn’t.
The fix isn’t just better tools. It’s better data. Normalized, enriched, queryable data that lets you ask questions and get answers fast — whether the one asking is a human analyst or an AI one.
If your organization needs help building a threat detection pipeline — or if you’re dealing with an incident right now and your logs aren’t giving you answers — get in touch. This is what we do.