Hunting Lazarus, Part 5: Eleven Hours on His Disk
Forensic examination of an active Lazarus Group operator machine: a target list of nearly 17,000 developers, six drained wallets, and a plaintext file containing his own keys.
Insights, research findings, and security best practices from the front lines of cybersecurity.
Subscribe via RSSForensic examination of an active Lazarus Group operator machine: a target list of nearly 17,000 developers, six drained wallets, and a plaintext file containing his own keys.
We burned through our Claude MAX weekly quota two days before renewal. So we gave Codex a try. Here's what happened.
You're paying $200/month for unlimited AI assistance. Except it's not unlimited, the limits keep changing without notice, and nobody can tell you how close you are to hitting them.
It has been only days since we published Part III—where we asked whether we were hunting Lazarus or walking into a honeypot. We did not expect to be back this soon. But what we found makes everything before it look like a prologue.
We discovered a second malware family, mapped approximately 20 ghost servers with consistent configurations, attempted to exploit the C2 infrastructure—and ended up questioning whether we were hunting them, or they were hunting us.
The attackers couldn't keep their Pastebin accounts online. So they moved their payload delivery to infrastructure that can't be taken down.
We found North Korean malware in a client's Upwork project. Then we spent five days mapping the attackers' infrastructure.
For most of software engineering history, the hardest skill was translating intent into correct syntax. Syntax mastery became a proxy for competence itself. Large language models quietly break that assumption - and force the industry to confront what engineering skill has always actually been.
Most organizations testing their AI systems are doing it wrong. This five-level maturity framework provides structure for understanding where you are, what capabilities you need next, and how much it will cost to get there.
Web3 AI agents control millions in crypto assets with irreversible transaction finality. Traditional prompt injection barely scratches the attack surface. This guide introduces context manipulation - a comprehensive offensive methodology targeting the memory, oracles, and input channels that autonomous agents trust.
The protocol that's becoming the standard interface between AI and enterprise systems has security gaps most organizations haven't yet learned to see.
Preliminary forensic analysis of the Nov 3, 2025 Balancer V2 exploit with on-chain verification and a per-chain verification plan. Official disclosure pending.
Red Asgard's threat-intel provides multi-source threat aggregation, CVE integration, and automated risk assessment for security operations teams.
Red Asgard releases quantum-shield, implementing NIST-standardized post-quantum algorithms (Kyber, Dilithium) for hybrid quantum-resistant encryption and signatures.
Introducing path-security, Red Asgard's comprehensive Rust library defending against 85+ path traversal attack vectors including unicode, encoding, and exotic bypasses.
Red Asgard releases module-registry, a powerful Rust crate enabling compile-time discovery and runtime instantiation of plugins with type safety guarantees.
Red Asgard releases blockchain-runtime, a powerful Rust crate for dynamic blockchain analysis, testing, and simulation across multiple chains without vendor lock-in.
Red Asgard releases llm-security, an open-source Rust crate providing defense-in-depth protection against prompt injection, jailbreaks, and LLM manipulation attacks.
Introducing our new security research blog where we share insights, vulnerabilities, and best practices from the front lines of cybersecurity.