Yuma AI

The AI Support Agent for Ecommerce

Senior DevOps / Infrastructure & AI LLM Systems Engineer (Hybrid/Barcelona)

$70K - $95K • 0.03% - 0.10% • Barcelona, Catalonia, ES / Remote (ES; FR; GB; PL; DE; PT; IT; NL; BE; AU; IL; DK; EE)
Job type
Full-time
Role
Engineering, DevOps
Experience
6+ years
Visa
US citizen/visa only
Skills
Kubernetes, Redis, Google Cloud, Docker, PostgreSQL, Microsoft Azure, Amazon Web Services (AWS)

About the role

About Yuma AI:

Yuma is building a next-generation orchestration platform that deploys autonomous AI agents dedicated to customer support in e-commerce.
We already support 150+ paying merchants and help them automate up to 80% of their customer tickets, freeing human agents for higher-value conversations.

Founded by Guillaume Luccisano (3rd-time YC founder) at the end of 2022, Yuma is one of the leaders in the AI Support Automation space with 10+ consecutive months of double-digit growth.

We are now a team of 25 passionate and ambitious people based across Paris, Barcelona, and Boston, and we have grown revenue 5x in 2025. We aim to repeat this trajectory in 2026. The scale-up phase is fully underway. 🚀

About the role:

This is a foundational role. You will be our first dedicated DevOps/Infrastructure Engineer and will take full ownership of everything related to cloud infrastructure, deployments, reliability, and scaling.

Our engineering team is made up of 7 people, operates with intensity, and moves fast. Rapid iteration is one of our biggest advantages. Over the past two years, we’ve built a tremendous amount, but the surface area ahead of us is even larger as we scale usage, models, and automation. You will play a key role in keeping our platform fast, reliable, and ahead of the curve.

This role goes beyond DevOps. You will also contribute at the LLM layer: running evaluations, experimenting with models, improving latency, optimizing costs, and helping shape how our AI systems operate at scale. If you enjoy working at the intersection of infrastructure, backend systems, and AI, this is exactly the kind of role where you’ll thrive.

What You Will Own:

Infrastructure & Platform:

  • All cloud infrastructure across AWS, GCP, and Azure.
  • Kubernetes cluster management, scaling, upgrades, and security.
  • CI/CD pipelines (GitHub Actions) and deployment systems.
  • Observability, monitoring, logging, alerting, and reliability practices.
  • Incident response, on-call rotation, and uptime improvements.
  • Cost optimization and infra-level performance tuning.
  • Security best practices, IAM, secrets, policies, and overall infra hygiene.

Backend & Data Systems:

  • High-scale PostgreSQL (large DB, indexes, performance tuning).
  • Redis and Sidekiq pipelines, queue scaling, job parallelization.
  • API performance and throughput.

AI / LLM Systems:

  • Manage and optimize LLM deployments across cloud providers.
  • Improve latency, reliability, and cost through routing and system architecture.
  • Help build and maintain eval pipelines and A/B tests.
  • Contribute directly at the app level (prompts, agents, routing).
  • Support or prototype self-hosted model experiments (optional but valuable).

The Ideal You:

You have 8+ years of experience in DevOps / infrastructure roles, ideally in fast-paced SaaS or startup environments. You’ve scaled production systems before and know how systems behave under real load.

You’re equally comfortable deep in Kubernetes or writing Ruby/Python for a quick script, tool, or LLM eval. You care about reliability, speed, and pragmatism. You enjoy working on AI systems and have hands-on experience with LLM-powered applications.

Your toolkit includes:

  • Kubernetes, Docker
  • AWS, Azure, GCP (strong in at least 2)
  • GitHub Actions CI/CD
  • PostgreSQL, Redis, Sidekiq
  • LLM APIs (OpenAI, Azure, Anthropic; self-hosted a plus)
  • Terraform or similar IaC
  • Strong coding ability to contribute across the stack

The Alternative You:

If you're earlier in your career but have strong infrastructure experience and clear upside, and you can reasonably grow into the full scope within 2 to 3 years, feel free to reach out. Raw talent is welcome, but depth of experience scaling systems is a big plus here.

Why Yuma?

  • High impact with ownership from day one: join a small, international engineering team where every feature you ship and every solution you design is directly visible in production.
  • Competitive compensation based on experience, plus stock options.
  • Fast growth = fast learning curve: in this hybrid engineering role, you’ll quickly gain exposure to AI, product iteration, customer workflows, and cross-functional problem-solving.
  • Work closely with founders and product/engineering leadership: your ideas and your ownership will directly influence the roadmap.
  • A culture of ownership, transparency, and continuous improvement: we move fast, iterate constantly, and empower people to grow.
  • Flexibility: fully remote in Europe, with a preference for the Barcelona office (the Boston office is also an option).

Our Culture

Please, if you are considering applying, first read our culture page: https://www.notion.so/yuma-ai/Yuma-s-Culture-5b0e15f1334242ce8a62daab9f2038a1?pvs=4

About Yuma AI

Yuma is developing the most advanced AI agent orchestration platform, dedicated to customer support and ecommerce. We are automating the heavy burden that large Shopify merchants face in their day-to-day activities. This market is vast and homogeneous, and every single merchant needs Yuma. We have a head start in this space, but we will not rest until we are the clear winner in the e-commerce customer support AI market.

Yuma was launched in early 2023 by a third-time YC founder and is already serving hundreds of paying customers.

Yuma AI
Founded: 2023
Batch: W23
Team Size: 26
Status: Active
Location: Boston
Founders
Guillaume Luccisano
Founder