Research Engineer, Judgment Systems
Variance
Location: San Francisco
Employment Type: Full-time
Location Type: On-site
Department: Engineering
Compensation: $250K – $400K base salary, plus equity
Role
At Variance, we are teaching machines to make the hardest judgment calls at scale. We build AI agents for the high-precision gray areas of stopping fraud, scams, and abuse. This isn't another sales tool or customer service system. We're solving real problems in investigations and fraud prevention to protect innocent people from harm.
We’re a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments.
We’re looking for a Research Engineer to help push that frontier forward. You’ll design evals, study failures, build new research loops, and turn research ideas into production capabilities.
This role sits at the intersection of research and engineering: part model builder, part experimentalist, part systems engineer.
You’re a fit if you:
- Care deeply about protecting people from fraud, scams, and abuse
- Have strong opinions about model quality, evaluation, and experimental rigor
- Want to work on core model and agent behavior
- Are excited to train, fine-tune, and improve models for hard real-world judgment tasks
- Think in tight research loops: hypothesis, experiment, evaluation, failure analysis, iteration
- Thrive in ambiguous, fast-moving environments where the path is not obvious and the feedback loop is short
- Are motivated by the challenge of making AI systems work in adversarial, regulated, and high-consequence settings
- Want to help define what trustworthy AI means in real-world use cases
What you’ll do
- Train, fine-tune, and improve models for fraud, scams, abuse, and other high-stakes judgment workflows
- Own research threads focused on improving agent capability, reliability, and decision quality
- Build proprietary benchmarks, datasets, and evals that reflect real customer workflows, regulatory constraints, and failure modes
- Design and run experiments across post-training, retrieval, tool use, planning, memory, and long-horizon agent behavior
- Study where models break, why they break, and how to make them more robust
- Prototype new training strategies, agent architectures, and evaluation methods, then turn the best ideas into production systems
- Work closely with founders and engineering to translate research advances into deployed product capabilities
- Push the boundary of what AI agents can do in regulated industries
What success looks like
- Our models get materially better at making hard judgment calls in production
- Our models are trusted at scale
- We develop evals and training loops that compound over time
- We understand failure modes more clearly and improve system behavior faster
- New research ideas turn into real product capabilities quickly
Preferred background
- Experience training, fine-tuning, or evaluating modern ML systems
- Strong programming skills and comfort working in research-heavy codebases
- Familiarity with LLMs, agent systems, post-training, reinforcement learning, retrieval, or adjacent areas
- Ability to design clean experiments and draw reliable conclusions from noisy results
- Strong engineering judgment and a bias toward building
- Interest in fraud, risk, trust and safety, compliance, or other regulated and adversarial domains
Our culture
We believe in ownership, urgency, and craft. We enjoy spirited debate, wild ideas, and building things we’re proud of. We’re fully in-person in San Francisco.
What we offer
- Competitive salary and meaningful equity
- Platinum-level medical, dental, and vision insurance
- Unlimited PTO, sick leave, and parental leave
- Up to $100 per month in reimbursement for personal health and wellness expenses
- 401(k) plan