“I can succeed as a Data Engineer III at Capital Group.”
As a Data Engineer III with an agile mindset, you will contribute to the design, implementation, and delivery of large-scale, critical, and complex data architecture, storage, and pipelines. You will build enterprise distributed data processing systems and data lakes, optimizing compute and storage on cloud platforms.
You will live by the motto: “you ship it, you own it,” by providing and receiving constructive code reviews and taking ownership of outcomes. Your data insights will drive business decisions that improve the lives of millions of people, every single day.
You will also play a key role in enabling AI-powered agents and copilots that enhance productivity and decision-making for a wide range of users—including engineers, product managers, investment analysts, portfolio managers, and other investment group stakeholders. You will partner with cross-functional teams to build the data infrastructure and services that power these intelligent assistants.
“I am the person Capital Group is looking for.”
You have a bachelor’s degree in Computer Science, Engineering, or a related technical field.
You have 5+ years of experience with agile software development and disciplined engineering practices, with a focus on quality, test-driven development, controlled and automated builds and releases, and code management.
You have experience working on a cloud platform like Microsoft Azure, AWS, or GCP (AWS preferred).
You are comfortable being hands-on, building and maintaining scalable data integration (ETL) pipelines using SQL, DBT, EMR (or Databricks), Python, and Spark. You consistently automate testing for these pipelines.
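For illustration only, here is a minimal sketch of the kind of PySpark ETL step and automated test this describes; the paths, column names, and cleaning rules are hypothetical placeholders, not Capital Group systems.

```python
# Hypothetical sketch of a small PySpark ETL step plus an automated test.
# Table paths, schemas, and business rules are illustrative only.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def clean_positions(df: DataFrame) -> DataFrame:
    """Drop malformed rows and standardize types for a positions feed."""
    return (
        df.filter(F.col("account_id").isNotNull())
          .withColumn("trade_date", F.to_date("trade_date"))
          .withColumn("market_value", F.col("market_value").cast("double"))
    )


def run_etl(spark: SparkSession, source_path: str, target_path: str) -> None:
    """Read raw Parquet, apply cleaning rules, write a curated table."""
    raw = spark.read.parquet(source_path)
    clean_positions(raw).write.mode("overwrite").parquet(target_path)


def test_clean_positions_drops_null_accounts():
    """Automated pipeline test: null account_ids are removed, types are cast."""
    spark = SparkSession.builder.master("local[1]").appName("etl-test").getOrCreate()
    raw = spark.createDataFrame(
        [("A1", "2024-01-02", "100.5"), (None, "2024-01-02", "50.0")],
        ["account_id", "trade_date", "market_value"],
    )
    out = clean_positions(raw)
    assert out.count() == 1
    assert dict(out.dtypes)["market_value"] == "double"
```

In practice, a test like this would run under a test runner such as pytest as part of the automated build.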
You understand the big data ecosystem, distributed data processing (Hadoop, Spark, Hive), data formats like Parquet/Delta/Iceberg, and orchestration tools like Airflow. You can navigate open-source frameworks in search of innovative solutions.
You can write complex SQL queries across large datasets and have experience working with columnar databases (e.g., Redshift, Snowflake).
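As a hedged example of the analytical SQL this involves, the sketch below runs a window-function query through Spark SQL; the same SQL pattern applies on columnar warehouses like Redshift or Snowflake. The `positions` table and its columns are assumed for illustration.

```python
# Hypothetical analytical query with window functions, expressed via Spark SQL.
# Assumes a registered table or view named `positions`.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ranked-positions").getOrCreate()

ranked = spark.sql("""
    WITH daily AS (
        SELECT
            account_id,
            trade_date,
            SUM(market_value) AS total_value
        FROM positions
        GROUP BY account_id, trade_date
    )
    SELECT
        account_id,
        trade_date,
        total_value,
        RANK() OVER (PARTITION BY trade_date ORDER BY total_value DESC) AS value_rank,
        total_value - LAG(total_value) OVER (
            PARTITION BY account_id ORDER BY trade_date
        ) AS day_over_day_change
    FROM daily
""")
ranked.show()
```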
You are passionate about building AI-powered tools and agents that help users work more efficiently and make better decisions.
You are adept at advanced prompt engineering and GenAI techniques, with experience integrating data systems with AI services such as natural language interfaces, retrieval-augmented generation (RAG), and vector search to support intelligent assistants and copilots.
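A minimal sketch of the RAG pattern this refers to appears below: embed a user question, retrieve the most similar dataset descriptions by cosine similarity, and build a grounded prompt. The `embed` and `call_llm` names are placeholders for whatever embedding and LLM services a team actually uses.

```python
# Hypothetical retrieval-augmented generation (RAG) sketch.
# `embed` stands in for a real embedding service; documents are illustrative.
import numpy as np


def embed(text: str) -> np.ndarray:
    """Placeholder embedding function (a real one would call an embedding API)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)


def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs most similar to the query by cosine similarity."""
    q = embed(query)
    scored = []
    for doc in docs:
        d = embed(doc)
        score = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]


docs = [
    "positions table: daily account holdings with market values",
    "benchmarks table: index returns by date",
    "trades table: executed orders with timestamps",
]
question = "Which table has daily account market values?"
context = "\n".join(top_k(question, docs))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# call_llm(prompt) would go to the team's chosen LLM service.
print(prompt)
```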
You have experience designing scalable, reliable ML systems and are proficient in deploying models to production environments, with an understanding of latency and scalability concerns.
You have experience with data modeling and can tailor models to business problems.
You have strong technical translation and enablement skills. You articulate clear connections between customer needs and the actions you take.
Southern California Base Salary Range: $136,858-$218,973