Senior DevOps Engineer
About Us
At Turquoise, we’re making healthcare pricing simpler, more transparent, and more affordable for everyone. We’ve already launched our consumer-facing website that lets anyone search and compare hospital insurance rates – something never before possible in the USA. We’re also building solutions that help insurance companies and hospitals negotiate better prices with a clearer understanding of market conditions, powered by petabyte-scale datasets.
We’re a Series B-stage startup backed by top VCs. More importantly, we’re a multi-talented group of folks with a big passion for improving healthcare. We’re eager to find ambitious yet well-rounded teammates to join us on this mission.
Our product is used by hospitals, health insurance companies, and other organizations that pay for healthcare in the US.
Our immediate goal is to maintain development velocity while improving the reliability and efficiency of our infrastructure.
The Role
We’re looking for a Senior DevOps Engineer who thrives in an environment where ownership, curiosity, and initiative are valued as much as technical skill. You’ll work alongside talented DevOps, data, and software engineers to evolve and scale an infrastructure that already delivers petabytes of data to our users.
Our mission is to build an internal platform that empowers developers by taking the pain out of deployment, monitoring, and infrastructure management. We’re not just keeping systems running — we’re making them better, smarter, and more developer-friendly.
You’ll succeed in this role if you’re the kind of engineer who spots opportunities to automate, improve, and simplify before being asked. Whether it’s refining architecture or eliminating toil through automation, you’re motivated to make things better — and you know how to navigate ambiguity to get it done.
We expect our senior engineers to own their work from start to finish. That means scoping out what needs to be done, asking for clarity when needed, and collaborating openly when dependencies arise. If you prefer waiting for detailed instructions or can’t get started without a perfectly groomed backlog, this probably isn’t the right fit.
If you’re excited to build infrastructure that supports meaningful, real-world healthcare outcomes — and you like working with a team that trusts you to figure things out — we’d love to meet you.
Key responsibilities
- Build and scale meaningful infrastructure — Design, deploy, and maintain secure, scalable, and highly observable infrastructure that supports petabyte-scale data.
- Own our container orchestration — Take the lead on managing Amazon EKS, including autoscaling, upgrades, and networking — ensuring our systems stay fast, flexible, and reliable.
- Drive GitOps excellence — Implement and maintain GitOps workflows with ArgoCD to ensure our infrastructure and applications stay consistent, automated, and easy to manage.
- Level up observability — Design and run modern monitoring and logging systems using tools like CloudWatch, DataDog, and Sentry to give teams deep visibility into system performance.
- Engineer CI/CD workflows — Use GitHub to streamline collaboration through pull requests, automation, and CI/CD integrations — enabling fast, safe shipping.
- Shape the system’s future — Collaborate on architectural decisions, build for high availability and scalability, and contribute to disaster recovery planning — with a voice that matters.
Requirements
- You have 5+ years of hands-on experience in DevOps, SRE, or Cloud Infrastructure roles, with a track record of building and supporting production systems.
- You’re confident working with core AWS services like VPC, IAM, EKS, and RDS, and know your way around cloud networking and security best practices.
- You’re fluent in Terraform, CloudFormation, or Crossplane, and believe infrastructure should be versioned, repeatable, and easy to reason about.
- You’ve worked with ArgoCD or Flux, Helm, and understand GitOps as more than just a buzzword — you’ve used it to keep infrastructure and apps in sync.
- You use GitHub (and GitHub Actions) not just for source control, but as a key part of your automation and deployment pipelines.
- You’ve run Kubernetes clusters in production, understand its strengths and quirks, and know how to keep it running smoothly.
- You’re familiar with tools like Grafana, CloudWatch, DataDog, or Sentry, and know how to set up meaningful metrics, logs, and alerts.
- You write solid Python scripts to glue systems together, automate infrastructure tasks, or handle custom workflows.
- You’re comfortable working independently in a remote setup, asking questions when needed, and keeping momentum without being micromanaged.
Benefits
- Work From Home – Work from wherever you’re most productive — home, coworking space, or somewhere in between.
- Unlimited PTO – Take the time you need to rest, recharge, or handle life — we trust you to manage your schedule.
- Stock Option Plan – Join early and share in the long-term success of the company through our stock option plan.
- Invest in yourself – Get $1,200 annually for learning and development to grow your skills and career.
While we are a start-up with a lot of work to do, we also value work-life balance. Our goal is to provide a challenging work environment where you learn new things every day while still keeping normal hours.