Software & Machine Learning Engineer · UC Berkeley · CDSS '28

RONOK
TANVIR

I build AI systems that are worth trusting — fraud detection, model interpretability, and verifiable machine intelligence. If your AI can't explain itself, it's a liability.

Ronok Tanvir
Currently
UC Berkeley · Class of 2028
Computer Science + Data Science · CDSS
Focus
AI Trust & Safety Interpretability Computer Vision NLP Full-Stack
Target roles
SWE · ML Engineering
Summer 2026 Internship
Based in
Berkeley, CA
Software & ML Engineering · AI Fraud Detection · Deepfake Detection · LLM Reliability Research · Model Interpretability · Computer Vision · Cal Hacks 12.0 — 1st Place · OpenEnv Hackathon — 1st Place · TwelveLabs Webinar · BrightData Backed
01

Selected Work

Guardian
AI Safety

Real-time fraud detection system built around an IsolationForest model hitting 88% precision, backed by SHAP interpretability so every flagged transaction ships with a human-readable explanation. Built in 36 hours at Cal Hacks 12.0, it secured the BrightData sponsorship track and is now in active product development with the BrightData team.

React · Flask · SHAP · BrightData · Claude API · Fish.audio
🏆 Cal Hacks 12.0
1st Place · $2,000
Devpost ↗
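Not Guardian's production code, but a minimal sketch of the detection half of the idea: an IsolationForest scores transactions by how easily they isolate from the bulk of the data (the synthetic features here — amount, hour, merchant risk — are illustrative assumptions; the SHAP explanation layer is omitted).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(500, 3))
fraud = rng.normal(loc=[900, 3, 0.9], scale=[100, 1, 0.05], size=(10, 3))
X = np.vstack([normal, fraud])

# Fit an unsupervised anomaly detector; contamination sets the expected
# fraction of outliers, which controls the flagging threshold.
clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = clf.predict(X)          # -1 = flagged as anomalous, 1 = normal
scores = clf.score_samples(X)   # lower score = more anomalous
```

In the full system, each flagged row would then be passed to a SHAP explainer so the alert names which features drove the decision.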
Diplomacy Overseer
AI Interpretability

Multi-agent RL environment in which seven LLM agents play Diplomacy — negotiating, deceiving, and betraying — while a separately trained overseer model learns to infer each agent's true strategic intent from behavioral signals alone. The overseer never sees private messages; it reads actions, conflict patterns, and communication metadata to predict what each agent is actually planning. Trained via GRPO with an LLM judge as the reward signal. Headline result: 52.1% judge agreement, up from a 0% baseline — a step toward detecting betrayal before it happens in adversarial AI systems.

GRPO · Qwen2.5 · HuggingFace TRL · Claude API · FastAPI · React
🏆 OpenEnv Hackathon
1st Place · $10,000
Fleet AI Scalable Oversight
GitHub ↗
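A sketch of GRPO's core trick, not the project's training loop: instead of learning a value model, advantages are computed relative to a group of samples for the same prompt — here, judge rewards for several overseer predictions (the reward values are made up for illustration).

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each sample's reward against the
    mean and std of its own sampling group (no learned critic)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Judge rewards for 4 sampled overseer predictions on one game state:
# two judged correct (1.0), two judged wrong (0.0).
adv = group_relative_advantages([0.0, 1.0, 1.0, 0.0])
```

Correct predictions get positive advantage and wrong ones negative, which is what steers the policy update in trainers like HuggingFace TRL's GRPO implementation.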
T-ETHER
Video Intelligence

Multimodal video understanding pipeline that uses TwelveLabs' Pegasus and Marengo models to transform long-form educational content into structured, digestible learning modules. Async processing via Django, Celery, and Redis keeps it fast at scale. Selected to present at a TwelveLabs developer webinar, a slot secured through cold outreach.

TwelveLabs API · React · Multimodal ML · Python
TwelveLabs Webinar Pick
SB Hacks 2026
Devpost ↗
Deepfake Detector
Media Integrity

Computer vision model for identifying AI-generated and manipulated media using a CNN-based architecture trained on real vs. synthetic image datasets. Designed as a modular, plug-in component for authenticity verification pipelines, with a focus on generalizing across generation methods rather than overfitting to a single GAN or diffusion model.

PyTorch · CNN · OpenCV · Python
UC Berkeley
@ Data Science Society
Slides ↗
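A toy version of the detector's shape, not the trained model: a small PyTorch CNN that maps an RGB image to a single real-vs-synthetic logit (layer sizes are illustrative assumptions).

```python
import torch
import torch.nn as nn

class DeepfakeCNN(nn.Module):
    """Minimal binary classifier sketch: image -> authenticity logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # pool to 32 features per image
        )
        self.head = nn.Linear(32, 1)  # logit > 0 -> predicted synthetic

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DeepfakeCNN()
logits = model(torch.randn(4, 3, 224, 224))  # batch of 4 fake inputs
```

Adaptive pooling keeps the head input-size-agnostic, which suits the "plug-in component" framing: the same module works regardless of the source pipeline's image resolution.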
Team Bears — Masaki, Ronok, Aryan at OpenEnv Hackathon
Ronok and Aryan at CalHacks 12.0 sign
CalHacks 12.0 · SF
Free Red Bulls at CalHacks
The Grind · Free Red Bulls at CalHacks
Hackathon venue with rows of tables and chairs
The Grind · Hackathon Floor
02

Research

01
Undergraduate Research · UC Berkeley
Automated Error Detection in LLM-Generated Educational Content

Working with Prof. Pardos on the OATutor project. Building NLP pipelines that automatically detect factual inconsistencies and quality issues in AI-generated tutoring content — making AI outputs reliable at scale.

oatutor.io ↗
02
The thread across all of it
When should you actually trust an AI?

Fraud, deepfakes, misinformation, LLM outputs — the tools exist. What's missing is accountability. Every project is a step toward closing that gap: systems that don't just perform, but explain and justify themselves.

03

Beyond Code

01
President & Co-Founder
A Step Forward

Founded and lead a nonprofit that collects and donates shoes to underserved communities. Built the organization from scratch — logistics, outreach, volunteer coordination — and scaled it across the Bay Area.

5,700+ pairs donated
02
Industry Committee · UC Berkeley
CSUA

Working on corporate sponsorship outreach for UC Berkeley's largest computer science undergraduate association. Connecting industry partners with one of the top CS programs in the country.

04

Coursework

Structure & Interpretation of Computer Programs
CS 61A
Data Structures & Algorithms
CS 61B
Principles & Techniques of Data Science
Data 100
Linear Algebra
Math 54
Discrete Mathematics & Probability Theory
CS 70
Intro to Full-Stack Development
Full-Stack Decal
05

Stack

ML / AI
  • PyTorch
  • TensorFlow / Keras
  • HuggingFace
  • HuggingFace TRL
  • GRPO / RL Fine-tuning
  • LangChain
  • scikit-learn
  • SHAP / Interpretability
  • OpenCV
  • FAISS
  • numpy / pandas
Languages
  • Python
  • Java
  • C++
  • JavaScript / TypeScript
  • SQL
  • HTML / CSS
  • Bash
  • Scheme
Frontend / Backend
  • React / Next.js
  • Flask
  • Django
  • FastAPI
  • Node.js
  • REST APIs
  • asyncio
  • Redis / Celery
  • PostgreSQL / MongoDB
  • Docker
Tools & APIs
  • Claude / Anthropic API
  • Qwen / Open Source LLMs
  • OpenEnv
  • TwelveLabs
  • BrightData
  • Deepgram
  • Git / GitHub
Let's build
something
that matters.

Open to SWE and ML internship roles for Summer 2026. Always down for a coffee chat or to hear what you're building.

Send a Message