Requirements
Proficiency in building and managing ETL pipelines using AWS (see the sketch after this list).
Expertise in dbt (data build tool) for data transformation.
Solid experience with CI/CD pipelines, particularly GitLab and DataOps.
Strong knowledge of Bash, Docker, and Python for process automation and simplification.
Familiarity with JIRA for project management.
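For illustration, here is a minimal sketch of the kind of Python-automated ETL step this role involves, assuming CSV data staged in S3 and accessed via boto3; the bucket names, object keys, and field names are hypothetical:

```python
# Minimal ETL sketch: extract a CSV from S3, clean it, load it back.
# Assumes AWS credentials are already configured; bucket names, keys,
# and field names ("customer_id", "email") are hypothetical.
import csv
import io

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

def extract(bucket: str, key: str) -> list[dict]:
    """Download a CSV object from S3 and parse it into rows."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(body)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows without a customer_id and normalize email addresses."""
    return [
        {**row, "email": row.get("email", "").strip().lower()}
        for row in rows
        if row.get("customer_id")
    ]

def load(rows: list[dict], bucket: str, key: str) -> None:
    """Serialize the cleaned rows to CSV and upload them to the target bucket."""
    if not rows:
        return
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8"))

if __name__ == "__main__":
    raw = extract("raw-data-bucket", "exports/customers.csv")
    load(transform(raw), "curated-data-bucket", "customers/customers_clean.csv")
```

In practice a step like this would run as a scheduled, containerized job (for example on AWS Glue, Lambda, or a Dockerized task) rather than by hand.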
Offer description
We are looking for an experienced Data Engineer to join our team. The role focuses on building and maintaining data pipelines and analytical models to meet both current and future business needs across multiple functions. You will play a key part in designing, automating, and optimizing data infrastructure and analytics processes.
Your responsibilities
Build and maintain an optimal data pipeline architecture.
Assemble large, complex datasets that meet both functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual tasks, optimizing data delivery, and redesigning infrastructure for greater scalability.
Build infrastructure for the optimal extraction, transformation, and loading (ETL) of data from a variety of sources using SQL and AWS Big Data technologies.
Develop CI/CD pipelines (GitLab, DataOps).
Model data transformations using dbt, including custom scripts and complex transformation logic, as sketched below.
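To illustrate how dbt fits into a GitLab CI/CD pipeline, here is a minimal Python entry point a CI job might call; the "ci" target name is hypothetical, and the sketch assumes the dbt CLI is installed in the job's Docker image:

```python
# Minimal CI entry point: install dbt packages, then build and test all
# models in one pass. Assumes the dbt CLI is on PATH inside the CI job's
# Docker image; the "ci" target name is hypothetical.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Echo a command, run it, and fail the CI job on a non-zero exit."""
    print("+", " ".join(cmd), flush=True)
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)

if __name__ == "__main__":
    run(["dbt", "deps"])                     # install package dependencies
    run(["dbt", "build", "--target", "ci"])  # run models and tests together
```

A GitLab job would invoke a script like this from .gitlab-ci.yml, giving the pipeline a single, testable build step.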