Job description:
Role Summary
We are seeking a pragmatic Senior Machine Learning Engineer to accelerate our MLOps roadmap. Your primary mission will be to own the design, implementation, and success of our V1 LLM Evaluation Platform, a critical system that will serve as the quality gate for all our AI features. You will be a key builder on a new initiative, working alongside dedicated Data Engineering and DevOps experts to deliver a tangible, high-impact platform. This role is for a hands-on engineer who thrives on building robust systems that provide leverage.
Sign-on Bonus: Eligible for candidates who are currently employed elsewhere and able to join GFT within 30 days of offer acceptance.
Key Responsibilities
- Build the V1 Evaluation Platform: Own the end-to-end design and implementation of the core backend systems for our new LLM Evaluation Platform, leveraging Arize Phoenix as the foundational framework for traces, evaluations, and experiments.
- Implement Production Observability: Architect and implement the observability backbone for our AI services, integrating Phoenix with OpenTelemetry to create a centralized system for logging, tracing, and evaluating LLM behavior in production.
- Standardize LLM Deployment Pipeline: Design and implement the CI/CD framework for versioning, testing, and deploying prompt-based logic and LLM configurations, ensuring reproducible and auditable deployments across all AI features.
- Deliver Pragmatic Solutions: Consistently make pragmatic technical decisions that prioritize business value and speed of delivery, in line with our early-stage startup environment.
- Cross-functional Collaboration: Work closely with our Data Science team to understand their workflow and ensure the platform you build meets their core needs for experiment tracking and validation.
- Establish Core Patterns: Help establish and document the initial technical patterns for MLOps and model evaluation that will serve as the foundation for future development.
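To illustrate the kind of pragmatic building block the deployment-pipeline responsibility calls for: reproducible, auditable deployments of prompt-based logic generally require a deterministic version id for each prompt configuration. The sketch below is illustrative only (it is not part of the role, and the config fields shown are hypothetical); it derives a content-addressed version id from a canonical JSON serialization, so identical configs always map to the same id.

```python
import hashlib
import json


def prompt_version(config: dict) -> str:
    """Derive a deterministic, content-addressed version id for a prompt config.

    Serializing with sorted keys and fixed separators makes the hash
    independent of dict ordering, so the same config always yields the
    same id -- the property needed for reproducible, auditable deploys.
    """
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]


# Hypothetical example configuration.
config = {
    "template": "Summarize the following text:\n{input}",
    "model": "example-model",
    "temperature": 0.0,
}

print(prompt_version(config))  # same config always prints the same id
```

In a CI/CD pipeline, an id like this could tag the deployed artifact and be logged alongside evaluation runs, so any production trace can be linked back to the exact prompt configuration that produced it.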
Required Qualifications
- 5+ years of professional software engineering experience, with a strong focus on backend or platform systems.
- Expertise in Python: You have a proven track record of building robust, testable, and maintainable production systems.
- Deep MLOps/LLMOps Experience: You have hands-on experience with the unique challenges of productionizing and evaluating modern ML/LLM systems, ideally with exposure to evaluation frameworks like Arize Phoenix, LangSmith, or similar platforms.
- Observability Expertise: Practical experience with modern observability frameworks, especially OpenTelemetry, is essential.
- Pragmatic Problem-Solving: A demonstrated ability to choose the right solution for the problem at hand, avoiding over-engineering while ensuring robustness.
Preferred Qualifications
- Arize Phoenix Experience: Hands-on experience with Phoenix for LLM observability, including setting up custom evaluators, managing datasets, and implementing trace-based debugging.
- AWS Experience: Familiarity with core AWS services used in a platform context (Kubernetes/EKS, RDS, S3, IAM).
- Experience in a Startup Environment: Comfortable with ambiguity and a fast-paced setting.
(Note: Due to the high volume of applications we receive, we are unable to respond to every candidate individually. If you have not received a response from GFT regarding your application within 10 business days, please assume that we have decided to proceed with other candidates. We truly appreciate your interest in GFT and thank you for your understanding.)