The Farmer's Dog Innovation & Technology Culture
How does innovation show up in your company culture?
Innovation at The Farmer’s Dog starts with a low barrier to putting ideas on the table. If you have something to propose, the path is simple: write it up and share it. We have a public engineering channel where design docs and proposals flow constantly. The goal is fast calibration: getting feedback from people you might not naturally talk to, so decisions don’t happen in silos.
What I’m most proud of is that technical alignment doesn’t depend on hierarchy. With a clear, shared vision, our teams align pragmatically and refine ideas in the open.
On the AI front, we’re intentional. We use AI for code generation, automated code review and design briefs. We’ve also enabled our product and design teams to generate prototypes with AI tools, creating a tighter partnership between engineering and the rest of the org. The expectation isn’t just “use AI.” The goal is for people to understand its capabilities and make informed decisions about where it adds value. That intentionality is what makes it stick.
What’s one recent innovation that improved user or employee experience?
We’ve invested in developer experience this year and it shows. Our CI/CD pipeline supports 300 to 400 deploys a week, with more than 100 on a busy day. That speed isn’t chaos — it’s the result of a reliable, well-managed system that lets engineers ship quickly and confidently.
We’ve also overhauled onboarding. Getting a new engineer’s local environment up and running used to take eight separate setup commands and a lot of documentation-hunting. We collapsed that down to a single command. We streamlined the supporting docs and built tooling that pulls directly from our internal knowledge base to get new hires productive faster, including onboarding to our AI workflows.
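The single-command pattern described above can be sketched as a small script that chains the old setup steps and stops at the first failure. This is a minimal, hypothetical sketch; the step names and commands are invented for illustration and don't reflect The Farmer's Dog's actual tooling.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a one-command onboarding bootstrap.

Each entry in STEPS stands in for one of the separate setup commands
this kind of script collapses; the names are illustrative only.
"""
import subprocess
import sys

# Ordered setup steps the single command replaces (placeholders).
STEPS = [
    ("install dependencies", ["echo", "installing dependencies"]),
    ("write local config", ["echo", "writing .env"]),
    ("start local services", ["echo", "starting services"]),
]


def bootstrap() -> int:
    """Run each step in order, stopping at the first failure."""
    for name, cmd in STEPS:
        print(f"==> {name}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"step failed: {name}", file=sys.stderr)
            return result.returncode
    print("local environment ready")
    return 0


if __name__ == "__main__":
    sys.exit(bootstrap())
```

The value of the pattern is less the script itself than the fact that failures surface in one place with a clear step name, instead of a new hire hunting through docs to figure out which of eight commands went wrong.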
We believe that when engineers spend less time fighting tooling and more time building, the customer benefits downstream. Fast, reliable deploys mean we can iterate quickly and ship fixes the same day we find them.
How do you balance experimentation with stability?
We set ourselves up to move quickly, which means investing just as much in guardrails as we do in velocity. Our releases ship with monitoring that alerts our on-call engineers directly. Major changes go through release documents, ship behind feature flags and often run in shadow mode so we can validate consistency before customers see anything. If something goes wrong, we can roll back fast.
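The flag-plus-shadow pattern described above can be sketched roughly as follows. The flag store, flag name, and pricing functions here are invented for illustration and are not The Farmer's Dog's internal APIs; the point is the three-state flag, where "shadow" runs both code paths and reports divergence while customers still get the old result.

```python
"""Minimal sketch of a feature flag with a shadow mode, assuming a
simple in-memory flag store. All names are illustrative."""
import logging

logger = logging.getLogger("shadow")

# Flag states: "off" serves the old path, "shadow" runs both paths and
# compares them, "on" serves the new path.
FLAGS = {"new-pricing": "shadow"}


def old_price(order_total: float) -> float:
    return round(order_total, 2)


def new_price(order_total: float) -> float:
    return round(order_total * 0.98, 2)  # hypothetical new logic


def price(order_total: float) -> float:
    state = FLAGS.get("new-pricing", "off")
    if state == "on":
        return new_price(order_total)
    if state == "shadow":
        # Run the new path alongside the old one and log any divergence,
        # but still return the old result to the caller.
        legacy, candidate = old_price(order_total), new_price(order_total)
        if legacy != candidate:
            logger.info("shadow mismatch: %s vs %s", legacy, candidate)
        return legacy
    return old_price(order_total)
```

In shadow mode, mismatches surface in monitoring while customers see only the validated path; flipping the flag to "on", or rolling back to "off", requires no deploy.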
After incidents, we run reviews to understand what happened. The goal is to close the loop every time, turning what we learn from launches, bugs and code review into process improvements. We apply the same thinking to our AI workflows, rolling learnings back into our agents and tooling to catch similar issues going forward.
The piece that ties it together is making sure our people are growing alongside the tools. As we adopt AI across engineering, we want engineers actively engaging with their work, understanding the changes being made and using AI as a way to learn, not just to produce output. Experimentation works when the team is equipped to learn from what it ships.
