This article was written by our CTO & Co-founder, Michael Monin.

<aside> ℹ️

Why this matters for candidates

We're not a prototype. We run one of the most active AI companionship platforms in the world, on a lean, fast-moving Engineering team. If you care about scale, ownership, and shipping, read on.

</aside>


Scaling AI companionship — under the hood of a 10B request monolith

💎 Why We Use Rails

Garry Tan recently noted that the world is "sleeping" on the combination of Ruby on Rails and AI-augmented coding. Because Rails is built on "Convention Over Configuration," AI models like Claude can act autonomously within the codebase, resulting in massive productivity gains.
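To make "Convention Over Configuration" concrete: Rails derives names (tables, controllers, routes) from class names instead of requiring explicit wiring, which is exactly what makes the codebase predictable for an AI agent. A toy sketch of the naming convention, in plain Ruby (illustrative only, not EverAI code):

```ruby
# Rails never asks you where Post's data lives or which controller
# serves it -- both are inferred from the class name by convention.
class Post; end

def table_name_for(klass)
  klass.name.downcase + "s"    # Post  -> "posts" table
end

def controller_for(klass)
  klass.name + "sController"   # Post  -> PostsController
end

puts table_name_for(Post)   # => posts
puts controller_for(Post)   # => PostsController
```

Because every Rails app follows the same inference rules, a model like Claude can guess where a file belongs, and be right, without reading a config.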


At EverAI, we’ve turned that "unlock" into a reality at massive scale. We aren't just building an app; we are running a global infrastructure that handles:

[Chart: users active over time]

📊 The raw numbers

10–12 billion HTTP requests in a typical 30-day window. Peaks push well north of 400M/day. 80M+ visits, 41M+ uniques, and we're serving a full petabyte of data (mostly images/videos — they're heavy).

[Chart: total requests, last 30 days]
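A quick back-of-envelope check on what those figures mean per second (assuming evenly spread traffic, which real traffic never is):

```ruby
# Sanity math on the stats above: 10B requests/month is a lot of
# sustained throughput even before accounting for peaks.
monthly  = 10_000_000_000        # lower bound from the 30-day window
per_day  = monthly / 30          # ~333M requests/day
avg_rps  = per_day / 86_400      # ~3,858 requests/second on average
peak_rps = 400_000_000 / 86_400  # ~4,629 req/s if a 400M day were flat

puts avg_rps   # => 3858
```

In practice peaks concentrate in a few hours, so the instantaneous request rate the monolith absorbs is well above that flat average.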

⚒️ Keeping the queues happy

Everything non-trivial runs async: image generation, messages, notifications, ML inference, analytics events, and more. All of it flows through Sidekiq.
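The Sidekiq pattern in a nutshell: the caller serializes the job as data and pushes it onto a queue, and a separate worker process picks it up later. A toy in-process stand-in (in production the queue is Redis and the worker is a Sidekiq process; `NotificationJob` is a hypothetical example, not EverAI code):

```ruby
require "json"

# In-memory stand-in for the Redis queue Sidekiq actually uses.
QUEUE = []

class NotificationJob
  # Mirrors Sidekiq's perform_async: serialize and enqueue, return fast.
  def self.perform_async(user_id)
    QUEUE << JSON.generate({ "class" => name, "args" => [user_id] })
  end

  # The slow work happens here, later, in a worker -- never in the
  # request cycle.
  def perform(user_id)
    "notified user #{user_id}"
  end
end

NotificationJob.perform_async(42)   # web request returns immediately

# Worker loop: pop, deserialize, instantiate the job class, perform.
results = []
until QUEUE.empty?
  payload = JSON.parse(QUEUE.shift)
  job = Object.const_get(payload["class"]).new
  results << job.perform(*payload["args"])
end

puts results.first   # => notified user 42
```

The point of the indirection is that the web tier only ever pays the cost of a queue push; retries, backoff, and throughput tuning all live on the worker side.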