DCG
As a recruitment company, DCG understands that every business is powered by experienced professionals. Our management style and partnership approach enable us to meet your needs and provide continuous support. Due to our ongoing growth and the large number of recruitment projects we undertake for our partners, we are currently looking for:

Senior Data Engineer

Responsibilities:
- Design new solutions and propose improvements to existing data platforms, both in response to business requests (functional changes) and technology needs (architectural changes)
- Develop data platforms and ETL/ELT processes: provide technical support and participate actively in platform development
- Build and optimize ETL/ELT processes responsible for processing large data sets
- Implement processes that ensure optimal data processing, applying data engineering best practices
- Standardize and streamline technical processes: implement and optimize code, test, and documentation management standards
- Select and configure tools and development environments that support data engineering processes, maintain code quality, and facilitate code scaling
- Ensure standards compliance and code review: apply existing platform development standards, initiate new guidelines where improvements are needed, and monitor the quality of delivered solutions through regular code reviews
- Work hands-on as a Data Engineer and Data Analyst to maintain a high level of technical sophistication, understand current challenges, and drive improvements based on actual technical needs
- Mentor the team, providing subject-matter support in solution design, code standardization, process optimization, and best-practice implementation

Requirements:
- Minimum 5 years of experience in designing and building Business Intelligence, ETL/ELT, Data Warehouse, Data Lake, Data Lakehouse, Big Data, and OLAP solutions
- Practical knowledge of various relational (e.g., SQL Server, Oracle, Redshift, PostgreSQL, Teradata) and non-relational database engines (e.g., MongoDB, Cosmos DB, DynamoDB, Neo4j, HBase, Redis, InfluxDB)
- Strong proficiency in SQL and Python (minimum 5 years of experience)
- Familiarity with data engineering and orchestration tools, particularly Spark/Databricks (including structured streaming, DLT, etc.), Hadoop/CDP, Azure/Fabric Data Factory, Apache Flink, Apache Kafka, Apache Airflow, dbt, and Debezium
- Understanding of data governance, data quality, and batch/streaming data processing challenges
- Knowledge of architectural patterns in data, including Data Mesh, Data Vault, Dimensional Modeling, Medallion Architecture, and Lambda/Kappa Architectures
- Proficiency with git repositories (Bitbucket, GitHub, GitLab)
- Experience with data services on the Azure and/or AWS platforms
- Flexibility, self-reliance, and efficiency, with a strong sense of responsibility for assigned tasks
- Practical knowledge of English at a minimum B2 level (C1+ preferred)

Offer:
- Private medical care
- Co-financing for the sports card
- Training and learning opportunities
- Language course co-financing
| Published | 3 days ago |
| Expires | in 27 days |
| Contract type | B2B |
| Work mode | Remote |
| Source |