Apply now

Senior Data Engineer

Eucloid Data Solutions

Warsaw, rondo Ignacego Daszyńskiego 2b
18 000 - 20 000 PLN
On-site
Permanent employment
SQL
ETL
Python
Data Engineering
Apache Spark
Databricks
Delta Lake
Amazon AWS
Apache Airflow
data governance
Full-time

Role: Senior Data Engineer
Duration: 12 months, with the possibility of extension
Location: Warsaw, Poland
Visa/Work Permit: Candidates from countries outside Poland must have their own arrangements

About Eucloid
At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries, from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we are pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise.

What You'll Do
- Design, build, and optimize scalable data pipelines supporting enterprise banking and financial services use cases
- Develop and maintain Databricks-based data solutions using Apache Spark and Delta Lake
- Build, monitor, and troubleshoot ETL/ELT workflows, including failure handling and recovery mechanisms
- Perform performance tuning, capacity planning, and reliability testing for production data pipelines
- Collaborate with solution architects, analysts, and cross-functional engineering teams to deliver end-to-end data solutions
- Investigate data quality issues, identify root causes, and implement long-term fixes
- Create and maintain technical documentation for data pipelines and platform components
- Ensure all data solutions follow cloud-first, security-aware, and governance-aligned principles
- Contribute to the migration of legacy data warehouse platforms (on-prem or cloud DWH) to Databricks-based lakehouse architectures, ensuring data consistency, reliability, and minimal business disruption
- Design and operate data pipelines aligned to non-functional requirements (NFRs), including high availability (HA), disaster recovery (DR), and defined RTO/RPO objectives

What Makes You a Fit

Academic Background
- Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related discipline

Technical Expertise
- 3–5 years of hands-on experience in data engineering roles
- Strong proficiency in SQL for analytical queries and data transformations
- Advanced experience with Python and Apache Spark
- Hands-on experience with the Databricks Lakehouse architecture, including Apache Spark and Delta Lake
- Experience working with at least one major cloud platform (AWS, Azure, or GCP)
- Solid understanding of distributed systems and large-scale data processing architectures
- Familiarity with modern data stack tools such as Airflow, dbt, Terraform, or similar orchestration and transformation tools
- Familiarity with Databricks Unity Catalog for data governance, access control, and lineage management
- Awareness of Databricks cost governance and optimisation practices in large-scale, multi-workspace environments

Domain & Delivery Experience
- Experience delivering data platforms in regulated BFSI / banking environments, supporting requirements such as BCBS 239, GDPR, and internal data governance standards
- Exposure to regulated data platforms, including data quality, access controls, and audit requirements
- Exposure to risk, finance, AML, or regulatory reporting data domains within financial services
- Understanding of data lineage, auditability, reconciliation, and data quality controls required for banking-grade data platforms
- Exposure to operating data pipelines with strict SLAs and enterprise reliability expectations

Additional Skills
- Strong debugging, troubleshooting, and problem-solving skills in complex, production-grade data environments
- Ability to work independently in complex, multi-team environments
- Experience contributing to architecture governance, technical standards, and design reviews
- Ability to mentor junior engineers and provide technical guidance within delivery teams

Nice to Have
- Exposure to Data Mesh or domain-oriented data platform designs
- Experience supporting AI/ML or advanced analytics use cases on top of data platforms
- Prior exposure to UK or European banking environments
- Experience working on long-running, multi-year data transformation programs

Engagement Details
- Employment Type: Full-time, fixed-term (12 months)
- Location: Poland
- Start Date: Early February 2026
- Extension: Possible based on performance and program continuity

About Our Leadership
- Anuj Gupta – Former Amazon leader with over 22 years of experience in building and managing large engineering teams (B.Tech, IIT Delhi; MBA, ISB Hyderabad)
- Raghvendra Kushwah – Business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow)

Equal Opportunity Statement
Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment.

Submit your resume to [email protected] with the subject line "Application: Senior Data Engineer Poland".

Published: a day ago
Expires: in 28 days
Contract type: Permanent employment
Work mode: On-site