
Senior Data Engineer (Python & Databricks)
Cyclad
Hexjobs Insights
Senior Data Engineer role at Cyclad focusing on SQL to Databricks migration. Responsibilities include transforming SQL logic to Python/PySpark and maintaining data models.
Benefits
- Remote working model
- Dynamic and innovation-driven engineering environment
- Full-time job agreement based on B2B
- Private medical care with dental care (covering 70% of costs)
- Multisport card (also for an accompanying person)
- Life insurance
In Cyclad we work with top international IT companies to boost their potential in delivering outstanding, cutting-edge technologies that shape the world of the future. We are seeking an experienced Senior Data Engineer with Python and Databricks. This role supports a large-scale transformation from SQL Server–based systems to a Databricks / Delta Lake platform. The focus is on enterprise-grade data engineering and software development, not analytics or reporting. The project is a SQL2Databricks migration involving 3,500-4,000 SQL databases (2 TB) and replicating data in different shapes/schemas to Databricks.

Project information:
- Type of project: IT Services
- Office location: Poland
- Work model: remote from Poland
- Budget: 140-160 PLN net/h (B2B)
- Project length: until the end of 2026, with a possible extension
- Only candidates with citizenship in the European Union and residence in Poland
- Start date: ASAP

Project scope:
- Support a large-scale transformation from SQL Server–based systems to a Databricks / Delta Lake platform
- Transform complex, business-critical SQL logic (stored procedures) into clean, maintainable, and scalable Python / PySpark code
- Redesign and implement this logic in Python / PySpark within Databricks
- Contribute to a large, long-running data engineering codebase used by multiple teams
- Develop production-grade transformation code (packages, modules, reusable components)
- Design and evolve data models within a Medallion Architecture (Bronze / Silver / Gold) across multiple data layers
- Ensure software engineering quality, reusability, and long-term maintainability
- Apply software engineering best practices (clean code, OOP, modularization, refactoring)
- Work with very large data volumes and highly parallel, event-driven transformations
- Actively participate in code reviews and technical design discussions
- Support orchestration workflows (e.g., Azure Data Factory)

Competence demands:
- Very strong Python and PySpark skills; proven experience with Databricks and Delta Lake
- Experience working in large, shared codebases (beyond notebooks)
- Strong SQL skills, especially reading and understanding complex logic
- Solid object-oriented programming experience and clean code principles
- Strong data modelling background (transactional and analytical)
- Experience in redesigning models during platform migrations
- Familiarity with layered data architectures (Bronze / Silver / Gold)
- Very good English skills

Nice to have:
- Azure Data Factory (orchestration)
- Azure DevOps, Git, CI/CD pipelines
- Power BI or analytics tooling
- Infrastructure / DevOps knowledge (not mandatory)
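To illustrate the kind of work described above, here is a minimal, hypothetical sketch of how one stored procedure might be repackaged as maintainable PySpark code: the business rule (column renaming) is pulled into a pure, unit-testable function, while the Spark-dependent part mirrors the T-SQL `ROW_NUMBER() OVER (PARTITION BY ...)` deduplication pattern. All table, column, and procedure names (`usp_RefreshOrders`, `order_id`, `updated_at`) are invented for the example and are not from the posting.

```python
# Illustrative sketch only: packaging a SQL-to-PySpark rewrite so the business
# rules stay unit-testable. All names below are hypothetical.

def legacy_to_silver_columns(columns):
    """Map legacy SQL Server column names to snake_case silver-layer names.

    Pure function with no Spark dependency, so it can be unit-tested
    without a cluster.
    """
    return {c: c.strip().lower().replace(" ", "_") for c in columns}


def build_silver_orders(bronze_df):
    """Rewrite of a hypothetical 'usp_RefreshOrders' stored procedure:
    rename columns and keep the latest row per order, producing a
    DataFrame ready to write to a silver Delta table.
    """
    # Imported locally so the pure helper above stays importable without Spark.
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    renamed = bronze_df
    for old, new in legacy_to_silver_columns(bronze_df.columns).items():
        renamed = renamed.withColumnRenamed(old, new)

    # Deduplicate: latest record per order_id, mirroring the T-SQL
    # ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ... DESC) pattern.
    w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
    return (renamed
            .withColumn("_rn", F.row_number().over(w))
            .filter(F.col("_rn") == 1)
            .drop("_rn"))
```

In a Databricks / Delta Lake setting, the result would typically be persisted with something like `df.write.format("delta").mode("overwrite").saveAsTable("silver.orders")`; splitting pure logic from Spark plumbing in this way is one common approach to the "beyond notebooks" requirement.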
| Published | 16 days ago |
| Expires | in 2 months |