SoftServe
WE ARE
SoftServe is a global digital solutions company headquartered in Austin, Texas, and founded in 1993. Our associates work on 2,000+ projects with clients across North America, EMEA, and LATAM. We are about people who create bold things, make a difference, have fun, and love their work.
Big Data and Analytics is the data consulting and engineering branch of our Center of Excellence. Hundreds of data engineers and architects build end-to-end data and analytics solutions, from strategy and technical design through proof of concept to full-scale implementation. Our customers come from the healthcare, finance, manufacturing, retail, and energy domains.
We hold top-level partnership statuses with all the major cloud providers and collaborate with many technology partners, including AWS, GCP, Microsoft, Databricks, Snowflake, Confluent, and others.

IF YOU ARE
- An architect specializing in data pipeline creation
- Experienced with both batch and streaming data processing
- Skilled in developing scalable data solutions on AWS services and building efficient data pipelines
- Familiar with Python, Scala, or Java, and with SQL for data manipulation and querying
- Knowledgeable in big data technologies such as Apache Spark, Databricks, or Flink for advanced data processing
- Unrivalled in orchestration tools like Apache Airflow or Amazon Managed Workflows for Apache Airflow (MWAA) for scheduling workflows
- Accustomed to data streaming platforms such as Apache Kafka, Amazon Managed Streaming for Apache Kafka (MSK), or Kinesis for real-time data processing
- Confident in Kafka, streaming concepts, the Avro file format, SQL, GitHub, and Snowflake
- Adept at utilizing data warehouses such as Amazon Redshift or Snowflake for storage and analytics
- Proficient in translating customer requirements into development tasks, estimating timelines, and guiding teams toward project completion

AND YOU WANT TO
- Own source-to-target mappings, drive discovery of new data sources, and coordinate with business stakeholders on data ingestion
- Lead a team of data engineers
- Assist the data engineering team with clear implementation scope definitions
- Participate in architecture decision-making and scale the data platform by planning new data ingestion pipelines and maintaining the data model
- Drive requirements gathering with business and technical stakeholders and assist with data modeling for systems using both Snowflake and Kafka
- Create and maintain documentation for the data schema and data model
- Be involved in the full project lifecycle, from initial design and proofs of concept (PoCs) to minimum viable product (MVP) development and full-scale implementation
- Investigate new technologies, build internal prototypes, and share knowledge with the SoftServe Big Data Community
| Published | 13 days ago |
| Expires | in 17 days |
| Contract type | B2B |
| Work mode | Remote |