Senior Data Engineer

  • Closed
  • US Company | Small
  • LATAM (100% remote)
  • 6+ years
  • Short-term (40h)
  • Analytics / Data
  • Full Remote

Required skills

  • AWS
  • Azure
  • Python
  • ETL
  • SQL

Requirements

Must-haves

  • 6+ years of data engineering experience
  • Proficiency in Python for data processing (Pandas, PySpark)
  • Proficiency with AWS (S3, Glue, Redshift, Lambda)
  • Expertise with SQL and relational databases (SQL Server)
  • Experience building ETL processes handling millions of records
  • Deep knowledge of data architecture, performance tuning, and governance practices
  • Strong communication skills in both spoken and written English

Nice-to-haves

  • Startup experience
  • Experience with Azure (Data Factory, Synapse, Blob Storage)
  • Experience in enterprise-scale environments with diverse data sources
  • Familiarity with big data tools (Spark) and distributed processing
  • Bachelor's Degree in Computer Engineering, Computer Science, or equivalent

What you will work on

This is a full-time role (40 hours/week) for a 7-12 month contract.

  • Design, develop, and maintain scalable ETL pipelines for processing large datasets
  • Build data infrastructure on AWS and Azure, ensuring performance, scalability, and cost-effectiveness
  • Develop and optimize data models and schemas for analytics and reporting
  • Collaborate with analysts and stakeholders to understand data needs and deliver solutions
  • Ensure data quality, integrity, and security throughout the lifecycle
  • Monitor pipeline performance, troubleshoot bottlenecks, and resolve data issues
  • Document processes and contribute to continuous improvement efforts across the team