HireSleek

Software Engineer, Technical Lead, Inference

Mistral

About Mistral

At Mistral AI, we believe in the power of AI to simplify tasks, save time and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life. We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise needs, whether on-premises or in cloud environments. Our offerings include le Chat, the AI assistant for life and work.

We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed across France, the USA, the UK, Germany and Singapore. We are creative, low-ego and team-spirited.

Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.

Job Summary

As the Technical Lead for the Inference team, you will drive the architecture and optimization of our inference backbone, ensuring high performance, scalability, and efficiency in a dynamic environment.

Key Responsibilities

  • Architect and optimize the inference stack for high-volume, low-latency, and high-availability environments.
  • Lead the acquisition and automation of benchmarks at both micro and macro scales.
  • Introduce new techniques and tools to improve performance, latency, throughput, and efficiency in our model inference stack.
  • Build tools to identify bottlenecks and sources of instability, and design solutions to address them.
  • Collaborate with machine learning researchers, engineers, and product managers to bring cutting-edge technologies into production.
  • Optimize code and infrastructure to maximize hardware utilization and efficiency.
  • Mentor and guide team members, fostering a culture of collaboration, innovation, and continuous learning.

Requirements

  • Extensive experience in C++ and Python, with a strong focus on backend development and performance optimization.
  • Deep understanding of modern ML architectures and experience with performance optimization for inference.
  • Proven track record with large-scale distributed systems, particularly performance-critical ones.
  • Familiarity with PyTorch, TensorRT, CUDA, NCCL.
  • Strong grasp of infrastructure, continuous integration, and continuous delivery (CI/CD) principles.
  • Ability to lead and mentor team members, driving projects from concept to implementation.
  • Results-oriented mindset with a bias towards flexibility and impact.
  • Passion for staying ahead of emerging technologies and applying them to AI-driven solutions.
  • Humble attitude, eager to learn.

To apply for this job, please visit jobs.lever.co.