
RL Environments Engineer - Low-Level Engineering & Kernel Inference Optimization
Preference Model via XOR Inc.
Hexjobs Insights
Remote role for low-level engineers focused on creating RL environments for LLMs. Requires strong Python skills and an understanding of memory hierarchies, threading models, and performance optimization. Hourly rate: $90-$125 USD.
About the company

XOR is hiring exclusively on behalf of our partner, Preference Model.

Preference Model is building the next generation of training data to power the future of AI. Today's models are powerful but fail to reach their potential across diverse use cases because many of the tasks we want these models to perform fall outside their training data distribution. Preference Model creates reinforcement learning environments that encapsulate real-world use cases, enabling AI systems to practice, adapt, and learn from feedback grounded in reality. We seek to bring the real world into distribution for the models.

Our founding team has prior experience on Anthropic's data team, building the data infrastructure, tokenizers, and datasets behind the Claude models. We are partnering with leading AI labs to push AI closer to achieving its transformative potential. The company has closed a large Seed round from Tier-1 VCs in Silicon Valley and is working with top AI labs, which inform our priorities and timelines.

XOR runs the end-to-end hiring process for this role (screening, take-home, and coordination with the Preference Model team). Please apply through this posting to be considered.

Brief description of the role

We're hiring Low-Level Engineers to design and build RL environments that teach LLMs kernel development, hardware optimization, and systems programming.
The goal is to create realistic feedback loops in which models learn to write high-performance code across GPU and CPU architectures.

This is a remote contractor role; at least 4 hours of overlap with PST and advanced English (C1/C2) are required.

Requirements

Minimal qualifications:
- Strong Python (engineering-quality, not notebook-only)
- A production mindset (debugging, reliability, iteration speed)
- A clear understanding of LLMs and their current limitations
- Ability to meet throughput expectations and respond quickly to feedback

You may be a good fit if one or more of the following applies:
- Deep understanding of memory hierarchies (registers, L1/L2/shared memory, HBM, system RAM) and their performance implications
- Threading models, synchronization primitives, and concurrent programming (warps, thread blocks, barriers, atomics)
- Cache coherence, memory access patterns, coalescing, and bank conflicts
- JIT compilation frameworks (e.g., Triton, JAX/XLA, TorchInductor, Numba)
- AOT compilation and optimization passes (LLVM, MLIR, TVM)
- Compiler and kernel frameworks such as CUTLASS, BitBLAS, or JAX/Pallas
- Modern C++, including templates, concurrency, and build systems
- Assembly-level programming and low-level optimization across GPU and CPU architectures (e.g., x86, ARM, NVIDIA Hopper, NVIDIA Blackwell)
- Debugging and optimizing GPU kernels using CUDA and/or HIP/ROCm
- Developing PyTorch custom operators, backend extensions, or dispatcher integrations (e.g., ATen, TorchScript, or custom backends)
- Customizing, extending, or optimizing inference engines, including distributed inference workflows
- GPU communication libraries and collectives, such as NVIDIA NCCL, AMD RCCL, MPI, or UCX
- Mixed-precision and low-precision kernels (e.g., FP16, BF16, FP8, INT8), including numerical stability and performance trade-offs

Compensation
- Hourly contractor rate: $90-$125 USD/hour (depending on expertise level and the quality of the take-home assignment)
- Monthly performance bonuses
- 40 hours per week; fully remote independent contractor role

Process

1) Apply via the job board
Please submit your CV and add a short note on which track fits you best.

2) Short take-home assignment (form)
After you apply, XOR will share a short take-home, delivered as a form with a small task. The Preference Model technical team will review your submission. In parallel, you can schedule a short call with XOR to learn more about the role and the company and to ask questions.

3) Team-lead interview
If the take-home looks strong, we will schedule a technical interview with the Preference Model team.

4) Second take-home assignment (coding task)
The final decision is made after the second take-home assignment.

Note on take-home compensation: time spent on the take-homes can be compensated if you receive an offer.
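To make the "memory access patterns, coalescing" qualification concrete, here is a small illustrative sketch (not part of the posting; the function name and the simplified line-size model are our own): it counts how many distinct cache lines, or GPU memory transactions, a strided access pattern touches, which is the basic cost model behind coalescing.

```python
def cache_lines_touched(n_elems, stride, elem_size=4, line_size=128):
    """Count distinct cache lines (or, on a GPU, memory transactions)
    touched by n_elems accesses with the given element stride.
    Simplified model: each access i reads elem_size bytes at byte
    offset i * stride * elem_size; lines are line_size bytes."""
    return len({(i * stride * elem_size) // line_size for i in range(n_elems)})

# 32 contiguous float32 loads fit in a single 128-byte line (coalesced):
print(cache_lines_touched(32, 1))   # 1
# The same 32 loads with a stride of 32 elements hit 32 separate lines:
print(cache_lines_touched(32, 32))  # 32
```

The same arithmetic explains why a warp reading column-wise from a row-major matrix issues up to one transaction per thread instead of one per warp.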
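The "barriers, atomics" bullet has a CPU-side analogue that can be sketched in the posting's required language, Python. In this illustrative example (names like `worker` are our own), a `threading.Barrier` plays the role of a block-wide `__syncthreads()` and a `Lock`-guarded counter stands in for an atomic add:

```python
import threading

NUM_THREADS = 8
counter = 0
counter_lock = threading.Lock()           # emulates an atomic read-modify-write
barrier = threading.Barrier(NUM_THREADS)  # emulates a block-wide sync point
results = [0] * NUM_THREADS

def worker(tid: int) -> None:
    global counter
    # Phase 1: every thread contributes to a shared accumulator.
    with counter_lock:                    # without the lock, updates can race
        counter += tid
    # Phase 2 must not begin until every contribution has landed.
    barrier.wait()
    # Phase 2: every thread reads the fully reduced value.
    results[tid] = counter

threads = [threading.Thread(target=worker, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All threads observe the same total: 0 + 1 + ... + 7 = 28.
print(results)  # [28, 28, 28, 28, 28, 28, 28, 28]
```

Dropping the barrier makes phase 2 read a partially reduced counter, the same bug class a missing `__syncthreads()` causes in a shared-memory reduction kernel.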
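Finally, the numerical-stability trade-offs in the mixed-precision bullet can be demonstrated even in float64: accumulation order alone can lose small addends. This sketch (our own illustration, using Neumaier's variant of Kahan compensated summation) shows a naive reduction dropping a term that compensation recovers:

```python
def neumaier_sum(values):
    """Neumaier's improved Kahan summation: carries a running
    compensation term so low-order bits lost to rounding in each
    addition are accumulated separately and restored at the end."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in values:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x   # low-order bits of x were lost
        else:
            comp += (x - t) + total   # low-order bits of total were lost
        total = t
    return total + comp

data = [1e16, 1.0, -1e16]
print(sum(data))           # 0.0 -- naive left-to-right sum loses the 1.0
print(neumaier_sum(data))  # 1.0 -- compensation recovers it
```

The effect is far more pronounced in FP16/BF16 accumulators, which is why reduction kernels typically accumulate in a wider type or use compensated or tree-shaped summation.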
| Published | 2 days ago |
| Expires | in 3 months |
| Contract type | B2B, permanent position |