About Me

I'm an AI research engineer interested in safety, security, and privacy. I work at Meta on Superintelligence Labs' Trust and Safety team, building evals, monitors, and guardrails for our AI systems across a variety of risks. I've also helped incubate the use of AI and ML in the Privacy and Security orgs.

I've been involved in several open-source projects, including CyberSecEval, PromptGuard, LlamaFirewall, LlamaGuard, and the Llama Stack.

Before GenAI, I worked on traditional ML, particularly large-scale recommendation systems. I built the first version of Instagram's feed recommendations ranking model and also worked on Facebook's search engine.

Updates

Aug 2025
Speaking Engagement
Spoke at Black Hat Arsenal and TDI in Las Vegas.
Jun 2025
Update
Took on a new role at Meta: I joined Superintelligence Labs as an AI research engineer working on safety evaluations and monitoring of our AI systems.
Apr 2025
Product Launch
We released two new PromptGuard models — Llama Prompt Guard 2 86M and Llama Prompt Guard 2 22M. We believe these to be the strongest jailbreak detection models available.
Apr 2025
New Paper
LlamaFirewall: a framework for deploying our AI agent defenses, such as PromptGuard and Alignment Checks. (Also see the framework website and the GitHub repo.)
Feb 2025
Speaking Engagement
Hosted a workshop and demo session at AI Security Forum in Paris.
Aug 2024
Speaking Engagement
Spoke at the DEF CON AI Village — "Evaluations and Guardrails against Prompt Injection attacks on LLM-powered applications."
Aug 2024
Speaking Engagement
Hosted a workshop at the AI Security Forum.
Jul 2024
New Paper
The Llama 3 Herd of Models. Our team worked on cybersecurity risk measurements of the models and on the system-level safety models.
Jul 2024
Product Launch
Launched PromptGuard and LlamaGuard 3, part of Meta's system-level safety stack for Llama.