Research Engineer - Multimodal Language Models


Luma's mission is to build multimodal AI to expand human imagination and capabilities.

We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

We are looking for engineers with significant experience solving hard problems in PyTorch, multimodal data, and distributed systems. You will work as part of a team to build cutting-edge multimodal language models end to end, with a strong emphasis on audio and visual data. Your contributions will be pivotal in shaping research projects and product roadmaps.

Responsibilities



Experience



Compensation



Your application is reviewed by real people.
Location: Palo Alto
