We are looking for a skilled MLOps Engineer with a strong background in cloud infrastructure, automation, and CI/CD, and hands-on experience deploying machine learning models. You will work at the intersection of data science and engineering, helping to bring ML models to production and ensuring their scalability, reliability, and maintainability in cloud-native environments.
This role is ideal for someone who thrives in modern, cloud-based ecosystems, understands ML lifecycle challenges, and is passionate about automating infrastructure and model deployment pipelines.
Your responsibilities
Design, implement, and maintain CI/CD pipelines for both application and ML workflows (GitLab CI, Argo CD)
Develop infrastructure as code using tools like Terraform, Ansible, and CloudFormation
Automate and manage cloud-native environments (AWS, GCP, Azure) and Kubernetes clusters (EKS, GKE, AKS)
Deploy and monitor machine learning models in production using MLflow, Kubeflow, or SageMaker (see the sketch after this list)
Collaborate with data scientists and software engineers to operationalize ML solutions
Ensure observability with modern monitoring, logging, and alerting tools
Troubleshoot infrastructure and deployment issues in distributed environments
Contribute to DevOps best practices and Agile workflows across teams
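For a concrete flavour of the model-deployment work above, here is a minimal, hypothetical sketch of tracking and registering a model with MLflow; the tracking URI, experiment name, and model name are illustrative placeholders rather than part of any specific stack.

    # Minimal MLflow tracking/registration sketch; all names and URIs are hypothetical.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # hypothetical tracking server
    mlflow.set_experiment("demo-classifier")

    X, y = load_iris(return_X_y=True)

    with mlflow.start_run():
        model = RandomForestClassifier(n_estimators=100)
        model.fit(X, y)
        mlflow.log_param("n_estimators", 100)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        # Registering the model makes it visible in the MLflow Model Registry,
        # from which it can be promoted towards production.
        mlflow.sklearn.log_model(
            model,
            artifact_path="model",
            registered_model_name="demo-classifier",
        )

In practice, a model registered this way would then be promoted through staging and production stages via the registry or an automated CI/CD pipeline.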
Our requirements
4+ years of experience with Python programming and Bash scripting
Hands-on experience with Docker and Kubernetes
Proven ability to design and implement CI/CD pipelines (e.g. GitLab CI, Argo CD)
Strong background in infrastructure automation using tools like Ansible, Terraform, and AWS CloudFormation
Proficiency with cloud platforms: AWS, GCP, Azure
Experience with managed Kubernetes services (EKS, GKE, AKS)
Familiarity with monitoring, logging, and alerting tools (see the monitoring sketch after this list)
1+ year of experience in Machine Learning domains (e.g. Computer Vision, NLP, Predictive Modelling)
Experience in productionizing ML solutions
Hands-on experience with MLOps tools such as MLflow, Kubeflow, or Amazon SageMaker
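As an illustration of the observability expectations above, the sketch below exposes a simple inference-latency gauge from a Python service using the prometheus_client library; Prometheus-style metrics are an assumption here (the posting does not name a specific tool), and the metric name and fake inference call are purely hypothetical.

    # Minimal Prometheus metrics sketch; metric name and workload are hypothetical.
    import random
    import time

    from prometheus_client import Gauge, start_http_server

    INFERENCE_LATENCY = Gauge(
        "model_inference_latency_seconds",
        "Latency of the most recent model inference call",
    )

    def fake_inference() -> float:
        """Stand-in for a real model call; sleeps for a random short interval."""
        duration = random.uniform(0.01, 0.1)
        time.sleep(duration)
        return duration

    if __name__ == "__main__":
        start_http_server(8000)  # metrics are then scrapeable at http://localhost:8000/metrics
        while True:
            INFERENCE_LATENCY.set(fake_inference())
            time.sleep(1)

A gauge like this would typically be scraped by a monitoring stack and wired to dashboards and alerts.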