Data Engineer

  • Closed
  • US Company | Medium
  • LATAM (100% remote)
  • 5+ years
  • Long-term (40h)
  • Advertising Services
  • Full Remote

Required skills

  • Python
  • SQL

Requirements

Must-haves

  • 5+ years of data engineering experience
  • Experience using Python to build ETL/ELT pipelines and related data engineering components
  • Proficiency with SQL, Python and other data-focused languages
  • Ability to design scalable solutions, evaluate emerging data technologies and anticipate new trends to address complex challenges
  • Strong communication skills in both spoken and written English

Nice-to-haves

  • Startup experience
  • Familiarity with Snowflake
  • Familiarity with AWS
  • Experience with dbt, Dagster, Apache Iceberg or Infrastructure as Code
  • Knowledge of scalable data lake and streaming patterns
  • Bachelor's Degree in Computer Engineering, Computer Science, or equivalent

What you will work on

  • Contribute to data engineering initiatives within an experienced data organization
  • Explore and analyze connected datasets to identify patterns and develop high-quality data models
  • Partner with the data engineering group to refine and optimize transformation workflows
  • Design, build and operationalize large-scale data solutions using AWS services and third-party technologies (Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue, Snowflake)
  • Build production data pipelines covering ingestion through consumption using SQL and Python
  • Implement data engineering, ingestion and curation functions on AWS using native or custom tooling
  • Lead proofs of concept and guide the transition of validated solutions into scalable production environments across engineering, deployment and commercialization
  • Collaborate with analytics teams using Looker, QuickSight and Q to provide clean and reliable datasets