[Email] [Google Scholar]
My research bridges the beautiful, mathematically rigorous world of CS Theory with the messy dark art of implementing high-performance systems at scale. My work demonstrates that provable guarantees (correctness, security, performance bounds) can be achieved without sacrificing practical efficiency, often yielding performance improvements of multiple orders of magnitude. Recently, I have been focused on ML performance: optimizing both ML inference and training, and training and using Agentic AI to solve research-level systems problems. I am also interested in Adversarial AI. I built a zero-trust gateway for AI infrastructure, and I am developing security solutions for AI/ML vulnerabilities.
Research Background: I obtained an M.S. in Computer Science from Harvard with a 4.0 GPA. At Harvard, I was advised by Dr. Minlan Yu. I have a second M.S. in Cybersecurity from the City College of New York with a 4.0 GPA, where I was advised by Dr. Allison Bishop. I was also a visiting student at MIT, where I completed 6.829 with an A grade. I am currently a Visiting Researcher at Dr. Kinan Albab's SPACE Lab at Boston University.
Industry Background: I am currently a Member of Technical Staff at a seed-stage MIT startup focusing on ML for Systems. In the past, I have held senior-level engineering roles and research roles at multiple tech companies, including Stripe, Google, and Bloomberg. I am also the co-founder and CTO of Dorcha, a company specializing in AI security and privacy.
Teaching Background: I designed assignments for Allison Bishop’s graduate-level Adversarial AI course at the City College of New York. In Fall 2025, I was an adjunct professor at Adrian College, teaching an advanced undergraduate course on Systems + ML. I was a Teaching Fellow at Harvard for CS 145: Networking at Scale.
- Adversary Resilient Learned Bloom Filters - AsiaCrypt 2025
- Cheetah: Accelerating Database Queries with Switch Pruning - SIGMOD 2020
- Borg: the Next Generation - EuroSys 2020
- SwitchV: Automated SDN Validation - SIGCOMM 2022
- cISP: A Speed-of-Light Internet Provider - NSDI 2022
- A Cloud-based Content Gathering Network - HotCloud 2017
- All Proof of Work but No Proof of Play - CFAIL 2025
- Whistledown: Combining User-Level Privacy with Conversational Coherence in LLMs
- Adversarially Robust Bloom Filters: Privacy, Reductions, and Open Problems
- An Introduction to Protein Cryptography
- LSM Trees in Adversarial Environments
- Random Number Generation from Pulsars
- Orla: A dead-simple Unix tool for lightweight open-source local agents.
- PRP-LBF & Cuckoo-LBF: World's first ML-augmented Bloom filters with provable security guarantees. Extends Google Research's learned Bloom filters (Kraska et al.) with adversarial robustness.
- Pulsar-RNG: Cryptographic TRNG using pulsar data from NASA/ESA as an entropy source. A public alternative to Cloudflare's lava lamp generators.
- Lilypond: Cryptographic speedrun verification exploring proofs of execution and tamper resistance. High-performance C implementation.
- libprobability: Tiny, self-contained, high-performance C library for probabilistic data structures.
- Google P4 PDPI: Core contributor to Google's P4 representation library. Developed fuzzing modules and runtime verification for switch infrastructure.
- Cheetah: P4 library that optimizes queries using programmable switches. Works with the Barefoot Tofino switch (which Intel has sadly discontinued).
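For readers unfamiliar with the learned Bloom filters that PRP-LBF and Cuckoo-LBF build on, here is a minimal sketch of the classic construction (Kraska et al.): a learned model screens queries, and a backup Bloom filter stores the keys the model misclassifies, so the structure never produces false negatives. This is an illustrative toy, not the PRP-LBF construction; the class names, parameters, and stand-in "model" are all hypothetical.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _indexes(self, item):
        # Derive k indexes by salting a SHA-256 hash with the slot number.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

class LearnedBloomFilter:
    """Learned Bloom filter: a model screens queries; a backup Bloom
    filter holds keys the model would wrongly reject, eliminating
    false negatives by construction."""
    def __init__(self, model, keys, threshold=0.5):
        self.model, self.threshold = model, threshold
        self.backup = BloomFilter()
        for key in keys:
            if model(key) < threshold:  # model misses this key
                self.backup.add(key)

    def __contains__(self, item):
        return self.model(item) >= self.threshold or item in self.backup

# Toy stand-in for a learned model: scores keys ending in a digit highly.
model = lambda s: 0.9 if s[-1].isdigit() else 0.1
keys = ["user1", "user2", "alpha"]  # "alpha" falls through to the backup filter
lbf = LearnedBloomFilter(model, keys)
assert all(k in lbf for k in keys)  # no false negatives, by construction
```

The adversarial-robustness question the projects above address is that the model path can be probed: an attacker who finds inputs the model accepts gets guaranteed false positives, which the PRP-based constructions defend against.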
- Languages: Go, Python, C, C++, Rust, P4, Java, JavaScript
- Machine Learning: MCP Design, PyTorch, Scikit-Learn, Nvidia Triton, vLLM, ollama, Ray
- Security: Zero Trust, Cryptography, Differential Privacy, Endpoint Security
- Systems: Kafka, Spark, Kubernetes, Borg, MySQL, BigTable, HBase
- Cloud: GCP, AWS, Azure, Terraform, Ansible