I build AI systems that are worth trusting — fraud detection, model interpretability, and verifiable machine intelligence. If your AI can't explain itself, it's a liability.
Real-time fraud detection system built around an IsolationForest anomaly model hitting 88% precision, backed by SHAP-based interpretability so every flagged transaction ships with a human-readable explanation. Built in 36 hours at Cal Hacks 12.0, it won the BrightData sponsorship track and is now in active product development with the BrightData team.
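The core pattern looks roughly like this — a minimal sketch, not the production system. The feature names and the injected anomalies are illustrative, and a simple z-score attribution stands in for the SHAP values the real system computes:

```python
# Sketch of the detection pattern: an IsolationForest flags anomalous
# transactions, and a per-feature attribution explains each flag.
# (The real system uses SHAP for the attribution step.)
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy transactions: columns are [amount, hour_of_day, merchant_risk]
X = rng.normal(size=(500, 3))
X[:5, 0] += 8.0  # inject a few transactions with abnormally large amounts

model = IsolationForest(random_state=0).fit(X)
flags = model.predict(X)  # -1 = flagged as anomalous, 1 = normal

# Stand-in for SHAP: rank features by deviation from the population mean
# so every flag comes with a "why".
names = ["amount", "hour_of_day", "merchant_risk"]
for i in np.where(flags == -1)[0][:3]:
    z = np.abs((X[i] - X.mean(axis=0)) / X.std(axis=0))
    print(f"txn {i}: flagged, driven mainly by {names[int(z.argmax())]}")
```

The interpretability layer is the point: a bare anomaly score is not actionable, but "flagged because the amount is eight standard deviations above normal" is.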
Multi-agent RL environment where 7 LLM agents play Diplomacy — negotiating, deceiving, and betraying — while a separately trained overseer model learns to infer each agent's true strategic intent from behavioral signals alone. The overseer never sees private messages; it reads actions, conflict patterns, and communication metadata to predict what each agent is actually planning. Trained via GRPO with an LLM judge as the reward signal. The headline result: a 52.1% judge agreement rate from a 0% baseline — detecting betrayal before it happens in adversarial AI systems.
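The reward signal reduces to a simple quantity: how often the overseer's inferred intent matches the judge's label. A hypothetical sketch — the overseer heuristic, the feature names, and the stubbed judge are all illustrative, not the trained models:

```python
# Hypothetical sketch of the GRPO reward: the overseer predicts each agent's
# intent from behavioral signals only, and the reward is its agreement rate
# with an LLM judge. The judge is stubbed here; in the real system it is an
# LLM scoring the overseer's prediction against the game transcript.
INTENTS = ["ally", "stall", "betray"]

def overseer_predict(features: dict) -> str:
    """Toy overseer: infer intent from action and conflict signals alone."""
    if features["attacks_on_ally"] > 0:
        return "betray"
    if features["support_orders"] > features["hold_orders"]:
        return "ally"
    return "stall"

def judge_label(features: dict) -> str:
    """Stub LLM judge: here it just returns the episode's known intent."""
    return features["true_intent"]

def agreement_reward(episodes: list[dict]) -> float:
    """Fraction of episodes where overseer and judge agree (the reward)."""
    hits = sum(overseer_predict(e) == judge_label(e) for e in episodes)
    return hits / len(episodes)

episodes = [
    {"attacks_on_ally": 1, "support_orders": 0, "hold_orders": 2, "true_intent": "betray"},
    {"attacks_on_ally": 0, "support_orders": 3, "hold_orders": 1, "true_intent": "ally"},
    {"attacks_on_ally": 0, "support_orders": 0, "hold_orders": 2, "true_intent": "betray"},
]
print(agreement_reward(episodes))  # 2 of 3 toy episodes agree
```

GRPO then pushes the overseer's policy toward predictions that raise this agreement rate — which is exactly the 0% → 52.1% climb reported above.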
Multimodal video understanding pipeline that uses TwelveLabs Pegasus and Marengo models to transform long-form educational content into structured, digestible learning modules. Async processing via Django, Celery, and Redis keeps it fast at scale. Selected to present at a TwelveLabs developer webinar, an opportunity secured through cold outreach.
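The pipeline shape, sketched under stated assumptions: a long video is chunked into segments, each segment is processed by a worker, and the results are stitched into modules. A thread pool stands in for the Celery workers here, and the summarizer is a stub in place of the Pegasus/Marengo calls:

```python
# Sketch of the pipeline shape only. In production these would be Celery
# tasks dispatched over Redis, and summarize_segment would call the
# TwelveLabs models; both are stubbed here so the sketch is self-contained.
from concurrent.futures import ThreadPoolExecutor

def split_into_segments(duration_s: int, segment_s: int = 600) -> list[tuple[int, int]]:
    """Chunk a long video into fixed-length (start, end) windows in seconds."""
    return [(t, min(t + segment_s, duration_s)) for t in range(0, duration_s, segment_s)]

def summarize_segment(segment: tuple[int, int]) -> dict:
    """Stub for the model call that turns one segment into a learning module."""
    start, end = segment
    return {"start": start, "end": end, "summary": f"module covering {start}-{end}s"}

def build_modules(duration_s: int) -> list[dict]:
    segments = split_into_segments(duration_s)
    with ThreadPoolExecutor() as pool:  # Celery workers in the real pipeline
        return list(pool.map(summarize_segment, segments))

modules = build_modules(duration_s=1500)  # a 25-minute lecture -> 3 modules
print(len(modules))  # 3
```

Fanning segments out to independent workers is what keeps latency flat as videos get longer: a two-hour lecture costs roughly one segment's processing time, not twelve.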
Computer vision model for identifying AI-generated and manipulated media using a CNN-based architecture trained on real vs. synthetic image datasets. Designed as a modular, plug-in component for authenticity verification pipelines, with a focus on generalizing across generation methods rather than overfitting to a single GAN or diffusion model.
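A minimal sketch of that kind of classifier — the layer sizes are illustrative assumptions, not the trained architecture. The adaptive pooling is what makes it a drop-in component: it accepts any input resolution:

```python
# Assumed architecture sketch, not the trained model: a small CNN binary
# classifier for real vs. synthetic images.
import torch
import torch.nn as nn

class AuthenticityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 so any input size works
        )
        self.head = nn.Linear(32, 1)  # one logit: higher = more likely synthetic

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = AuthenticityCNN()
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 RGB images
print(logits.shape)  # torch.Size([4, 1])
```

Generalizing across generation methods is the hard part: training on images from a single GAN teaches the model that GAN's fingerprint, so the dataset has to mix generators and augmentations that destroy method-specific artifacts.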
Working with Prof. Pardos on the OATutor project. Building NLP pipelines that automatically detect factual inconsistencies and quality issues in AI-generated tutoring content — making AI outputs reliable at scale.
oatutor.io ↗

Fraud, deepfakes, misinformation, LLM outputs — the tools exist. What's missing is accountability. Every project is a step toward closing that gap: systems that don't just perform, but explain and justify themselves.
Founded and lead a nonprofit that collects and donates shoes to underserved communities. Built the organization from scratch — logistics, outreach, volunteer coordination — and scaled it across the Bay Area.
Working on corporate sponsorship outreach for UC Berkeley's largest computer science undergraduate association. Connecting industry partners with one of the top CS programs in the country.
Open to SWE and ML internship roles for Summer 2026. Always down for a coffee chat or to hear what you're building.