Principal Engineer, Inference

CoreWeave is the AI Hyperscaler™, delivering a cloud platform of cutting-edge services powering the next wave of AI. Our technology provides enterprises and leading AI labs with the most performant, efficient, and resilient solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe. CoreWeave was ranked as one of the TIME100 most influential companies of 2024.
As the leader in the industry, we thrive in an environment where adaptability and resilience are key. Our culture offers career-defining opportunities for those who excel amid change and challenge. If you're someone who thrives in a dynamic environment, enjoys solving complex problems, and is eager to make a significant impact, CoreWeave is the place for you. Join us, and be part of a team solving some of the most exciting challenges in the industry.
CoreWeave powers the creation and delivery of the intelligence that drives innovation.
What You'll Do:
We're seeking a Principal Engineer to serve as the hands-on technical leader for our next-generation Inference Platform. As a senior individual contributor, you will architect and build the fastest, most cost-effective, and most reliable GPU inference services in the industry. You'll prototype new capabilities, drive engineering standards, and work shoulder-to-shoulder with engineering, product, orchestration, and hardware teams to make CoreWeave the best place on earth to serve frontier models in production.
About the role:
Technical Vision & Strategy
- Define the technical roadmap for ultra-low-latency, high-throughput inference.
- Evaluate and influence adoption of runtimes and frameworks (Triton, vLLM, TensorRT-LLM, Ray Serve, TorchServe) and guide build-vs-buy decisions.
Platform Architecture
- Design Kubernetes-native control-plane components that deploy, autoscale, and monitor fleets of model-server pods spanning thousands of GPUs.
- Implement advanced optimizations - micro-batching, speculative decoding, KV-cache reuse, early-exit heuristics, and tensor/stream-parallel inference - to squeeze every microsecond out of large-model serving (see the micro-batching sketch after this section).
- Build intelligent request routing and adaptive scheduling to maximize GPU utilization while guaranteeing strict P99 latency SLAs.
Operational Excellence
- Create real-time observability, live debugging hooks, and automated rollback/traffic-shift for model versioning.
- Develop cost-per-token and cost-per-request analytics so customers can instantly select the ideal hardware tier (see the cost sketch after this section).
Hands-on Development
- Write production code, reference implementations, and performance benchmarks across gRPC/HTTP, CUDA Graphs, and NCCL/SHARP fast paths.
- Lead deep-dive investigations into network, PCIe, NVLink, and memory-bandwidth bottlenecks.
Mentorship & Collaboration
- Coach engineers on large-scale inference best practices and performance profiling.
- Partner with lighthouse customers to launch and optimize mission-critical, real-time AI applications.
Who You Are:
- 10+ years building distributed systems or HPC/cloud services, with 4+ years focused on real-time ML inference or other latency-critical data planes.
- Demonstrated expertise in micro-batch schedulers, GPU resource isolation, KV caching, speculative decoding, and mixed-precision (BF16/FP8) inference.
- Deep knowledge of PyTorch or TensorFlow serving internals, CUDA kernels, NCCL/SHARP, RDMA, NUMA, and GPU interconnect topologies.
- Proven track record of driving sub-50 ms global P99 latencies and optimizing cost-per-token / cost-per-request on multi-node GPU clusters.
- Fluency with Kubernetes (or Slurm/Ray) at production scale, plus CI/CD, service meshes, and observability stacks (Prometheus, Grafana, OpenTelemetry).
- Excellent communicator who influences architecture across teams and presents complex trade-offs to executives and customers.
- Bachelor's or Master's in CS, EE, or a related field (or equivalent practical experience).
Preferred:
- Code contributions to open-source inference frameworks (vLLM, Triton, Ray Serve, TensorRT-LLM, TorchServe).
- Experience operating multi-region inference fleets or streaming-token services at a hyperscaler or AI research lab.
- Publications/talks on latency optimization, token streaming, or advanced model-server architectures.
Wondering if you're a good fit? We believe in investing in our people, and we value candidates who can bring their own diversified experiences to our teams - even if you aren't a 100% skill or experience match.
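To give a flavor of the micro-batching work mentioned under Platform Architecture, here is a minimal sketch (hypothetical names and limits, not CoreWeave's actual implementation): a dynamic micro-batcher collects requests until the batch is full or a latency deadline expires, then runs one forward pass for the whole batch.

# Minimal dynamic micro-batching sketch. MAX_BATCH and MAX_WAIT_MS are
# assumed, illustrative values; run_batch is a caller-supplied stand-in
# for a real model-server forward pass.
import queue, threading, time

MAX_BATCH = 8       # assumed batch-size cap
MAX_WAIT_MS = 2.0   # assumed batching deadline; bounds added tail latency

class MicroBatcher:
    def __init__(self, run_batch):
        self.run_batch = run_batch          # runs one forward pass on a batch
        self.requests = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, prompt):
        # Each request carries an Event so the caller can block on its result.
        done = threading.Event()
        slot = {"prompt": prompt, "done": done, "result": None}
        self.requests.put(slot)
        done.wait()
        return slot["result"]

    def _loop(self):
        while True:
            batch = [self.requests.get()]   # block for the first request
            deadline = time.monotonic() + MAX_WAIT_MS / 1000
            # Fill the batch until it is full or the deadline passes.
            while len(batch) < MAX_BATCH:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            outputs = self.run_batch([s["prompt"] for s in batch])
            for slot, out in zip(batch, outputs):
                slot["result"] = out
                slot["done"].set()

if __name__ == "__main__":
    # Stand-in "model" for demonstration: uppercases each prompt.
    batcher = MicroBatcher(lambda prompts: [p.upper() for p in prompts])
    print(batcher.submit("hello"))

The deadline is the central trade-off: a larger MAX_WAIT_MS raises GPU utilization but adds directly to tail latency, which is why production batchers tune it against the P99 SLA.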
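Likewise, the cost-per-token analytics mentioned under Operational Excellence reduce, at their simplest, to amortizing node cost over tokens actually served. A back-of-the-envelope sketch with illustrative prices only:

# Illustrative cost-per-token calculation (hypothetical prices, not
# CoreWeave rates): amortize the hourly price of a GPU node over the
# tokens it served, so hardware tiers can be compared directly.
def cost_per_token(node_price_per_hour: float,
                   tokens_served: int,
                   window_hours: float) -> float:
    """Dollars per generated token over a measurement window."""
    return (node_price_per_hour * window_hours) / max(tokens_served, 1)

# Example: a hypothetical $30/hr node serving 90M tokens in one hour
# works out to about $0.33 per million tokens.
print(cost_per_token(30.0, 90_000_000, 1.0) * 1_000_000)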
Why CoreWeave?
You will be the technical spearhead of an industry-defining inference platform, enabling world-class researchers and engineers to deploy generative AI, real-time personalization, and multimodal applications at scale. If shaving milliseconds off tail latency and inventing new techniques for billion-parameter model serving excites you, we'd love to chat.
The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).
What We Offer
The range we've posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can reflect a variety of factors, including qualifications, experience, interview performance, and location.
In addition to a competitive salary, we offer a variety of benefits to support your needs, including:
- Medical, dental, and vision insurance - 100% paid for by CoreWeave
- Company-paid Life Insurance
- Voluntary supplemental life insurance
- Short and long-term disability insurance
- Flexible Spending Account
- Health Savings Account
- Tuition Reimbursement
- Ability to participate in Employee Stock Purchase Program (ESPP)
- Mental Wellness Benefits through Spring Health
- Family-Forming support provided by Carrot
- Paid Parental Leave
- Flexible, full-service childcare support with Kinside
- 401(k) with a generous employer match
- Flexible PTO
- Catered lunch each day in our office and data center locations
- A casual work environment
- A work culture focused on innovative disruption
Our Workplace
While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.
California Consumer Privacy Act - California applicants only
CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.
As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.
Export Control Compliance
This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.
Location:
Bellevue, WA, United States
Category:
Architecture And Engineering Occupations
