
Data Engineer with Databricks & PySpark (Senior / Mid)

Capgemini Polska Sp. z o.o.

Gdańsk +5 more locations
Permanent employment

Must have

  • Databricks

  • PySpark

  • English

Nice to have

  • CI/CD

  • Terraform

  • DevOps

  • SQL

Requirements description

  • You have hands-on experience in data engineering and are comfortable working independently on moderately complex tasks.
  • You’ve worked with Databricks and PySpark in real-world projects.
  • You’re proficient in Python for data transformation and automation.
  • You’ve used at least one cloud platform (AWS, Azure, or GCP) in a production environment.
  • You communicate well in English.

Offer description

Join our Insights & Data team – 400+ experts building scalable, cloud-native data solutions across AWS, Azure, and GCP. You’ll work with Databricks, PySpark, and modern DevOps tools to design and maintain robust data pipelines.

🔹 Real impact, real data

🔹 Full SDLC ownership

🔹 Agile, collaborative environment

If you’re experienced in data engineering, fluent in Python, and ready to grow in a cloud-first team – we’d love to meet you!

WHAT YOU’LL LOVE ABOUT WORKING HERE

  • Access to over 70 training tracks with certification opportunities (e.g., GenAI, Architects, Google) on our NEXT platform. Dive into a world of knowledge with free access to the Education First language-learning platform, plus Pluralsight, TED Talks, Coursera, and Udemy Business materials and trainings.

  • Enjoy a hybrid working model that fits your life: after completing onboarding, combine work from a modern office with ergonomic work from home, thanks to a home office package (including laptop, monitor, and chair). Ask your recruiter about the details.

  • Practical benefits: private Medicover medical care with optional add-on packages (e.g., dental, senior care, oncology) on preferential terms, life insurance, and 40+ options on our NAIS benefit platform, including Netflix, Spotify, and Multisport.

Your responsibilities

  1. Develop and maintain data processing pipelines using Databricks and PySpark.
  2. Collaborate with senior engineers and architects to implement scalable data solutions.
  3. Work with cloud-native tools to ingest, transform, and store large datasets.
  4. Ensure data quality, consistency, and security in cloud environments.
  5. Participate in code reviews and contribute to continuous improvement initiatives.
Views: 5
Published: 18 days ago
Expires: in 24 days
Contract type: Permanent employment
