Overview 🚀
- Safety and ethics are at the core of our product and mission. We recognize our unique responsibility as an adults-only AI platform, and we take a safety-by-design approach, embedding safety into all aspects of our operations and every stage of the development lifecycle.
- We know that you’re reading this because safety is important to you, too. This document aims to strike a balance between providing meaningful information about our moderation controls and not compromising security by giving bad actors information on how to evade them. If you have specific questions about safety that aren’t addressed here, we’re happy to answer them during your interviews.
Technical and organizational measures
Technical measures
- Users cannot upload or import external visual media to the platform. This prevents, for example, the creation of non-consensual intimate imagery and “nudification.”
- All text and images on the platform are subject to moderation. We combine layered safeguards, continuous monitoring, and dedicated oversight to maintain safety and integrity.
- Through ongoing collaboration between our Trust & Safety and AI teams, we continually refine and improve our proprietary moderation LLM (EverGuard), the top adult moderation tool worldwide.
- Because our moderation technology is proprietary and security-sensitive, we can’t share detailed implementation or infrastructure information publicly. You are invited to ask more specific questions during any interviews, and your interviewers will be happy to share more detailed information.
Organizational measures
Ensuring safety and ethics in our products is not a one-and-done or “check the box” exercise. It requires a culture of compliance, buy-in from top-level management, and an organizational commitment to safety.
The following gives an overview of how we’ve built a culture of compliance at EverAI:
- We have a zero-tolerance policy for Child Sexual Abuse Material (CSAM) and are committed to fully complying with valid requests from law enforcement. In addition to our other content moderation and removal controls, human moderators review flagged accounts and permanently block users who attempt to create CSAM. We are a registered reporter with the National Center for Missing and Exploited Children (NCMEC) and report known CSAM to NCMEC (or to other local authorities, as appropriate).
- We have an in-house Ethics and Compliance Officer (ECO) who owns safety and compliance issues. This is not simply a title given to an existing employee; we recruited specifically for the position. Our ECO works with all teams on an ongoing basis to address compliance and safety topics that arise in their work, for example by reviewing new features and characters before they are published on the platform.
- All EverAI team members receive safety and compliance training as part of onboarding. Additional company-wide and department-specific training on safety and broader compliance topics (e.g., data protection) is provided on an ongoing basis.