Principal ML Engineer - AI Safety & Evaluation
We're looking for a Principal Engineer to lead the technical strategy and architecture for protecting foundation models against misuse such as jailbreaks, prompt injection, toxic outputs, and custom policy violations. In this role, you'll apply your expertise in scalable systems design, applied machine learning, and model-level defenses to build core infrastructure that ensures AI systems behave safely and responsibly in production.

You'll set technical direction and drive architectural decisions across a broad surface area of AI safety systems: designing safety interventions, integrating evaluation workflows, and developing models and tooling that detect and prevent harmful or non-compliant behavior. This role is ideal for someone who wants to work at the intersection of model behavior, product safety, and systems engineering.
What You'll Do
Architect and lead the development of model-level defenses against jailbreaks, prompt injection, and custom policy violations
Define and drive evaluation strategies, including adversarial testing and stress-testing pipelines, to identify safety weaknesses before deployment
Set technical direction for scalable mitigation techniques such as safety-focused fine-tuning, prompt shielding, and post-processing methods to reduce harmful or non-compliant outputs
Collaborate with red teamers and researchers to convert emerging threats into measurable evaluations and system-level safeguards
Scale and improve human-in-the-loop pipelines for detecting toxic, biased, or non-compliant outputs
Stay up to date with LLM safety research, jailbreak tactics, and adversarial trends, and apply insights to real-world defenses
What We're Looking For
7+ years of experience in applied machine learning, AI infrastructure, or safety-critical systems, with 3+ years in a senior or staff-level technical leadership role
Deep understanding of transformer-based architectures and experience building or evaluating safety interventions for LLMs
Proven expertise in analyzing and addressing adversarial behaviors, edge-case failures, and misuse scenarios
Demonstrated ability to guide long-term technical strategy, influence organizational direction, and mentor cross-functional teams
Strong written and verbal communication skills, with experience influencing technical direction at the org or platform level
Bachelor's, Master's, or PhD in Computer Science, Machine Learning, or a related field
Nice to Have
Experience applying techniques such as reinforcement learning from human feedback (RLHF), adversarial training, or safety fine-tuning at scale
Hands-on work designing prompt-level defenses, content filtering systems, or mechanisms to prevent jailbreaks and policy violations
Contributions to AI safety research, industry standards, or open-source tools related to model robustness, alignment, or evaluation
Familiarity with model governance frameworks, including safety policies, model cards, red teaming protocols, or risk classification methodologies
A10 Networks is an equal opportunity employer and a VEVRAA federal subcontractor. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. A10 also complies with all applicable state and local laws governing nondiscrimination in employment.
Compensation: Up to $215K USD
Location: San Jose, CA, United States
Category: Computer and Mathematical Occupations