Content moderation

Depicta operates within Greek and EU law. Every generation request passes through a content moderation system that evaluates prompts and images before anything is produced.

An automated classifier reviews each request — both text prompts and uploaded images — and blocks content that would not be lawful to produce. This happens before the image is generated, so blocked requests are not charged.

Uploaded images used in edit, combine, or reference operations go through the same moderation process.
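Because moderation runs before generation, a client can tell a blocked request apart from an ordinary failure and skip retries. A minimal sketch of that check, assuming a hypothetical JSON error body with a `moderation_blocked` code (the actual Depicta API response shape is not documented here):

```python
import json

# Hypothetical error payload a blocked request might return.
# The field names and the "moderation_blocked" code are assumptions
# for illustration; consult the API reference for the real shape.
response_body = json.loads(
    '{"error": {"code": "moderation_blocked",'
    ' "message": "Prompt rejected by content moderation."}}'
)

def was_blocked(body: dict) -> bool:
    """Return True when the request was stopped by moderation
    (and therefore, per the docs above, not charged)."""
    error = body.get("error") or {}
    return error.get("code") == "moderation_blocked"

if was_blocked(response_body):
    # Do not retry: the prompt itself was rejected, not the service.
    print("Blocked before generation; no charge incurred.")
```

A blocked prompt will be blocked again on retry, so the useful client behavior is to surface the message to the user rather than resubmit.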

Regardless of whether content moderation allows or blocks a request, you are responsible for the images you create and how you use them. An image that is lawful to generate may still be unlawful depending on its usage — for example, using a generated likeness of a real person in a misleading context. Content moderation cannot assess intent or downstream use.

The moderation system has two modes:

  • Normal (default) — Brand-safe filter. Suitable for most use cases — professional, commercial, and personal projects.
  • Liberal — Extended filter for artistic freedom. Allows a broader range of creative expression, always within the boundaries of the law.

You can set the strictness mode in three ways:

  • Account default — change your default mode in Settings. Applies to all requests unless overridden.
  • Per request (CLI) — depicta image "prompt" --strictness liberal
  • Per request (API) — include "strictness": "liberal" in the request body.
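The per-request API option can be sketched as follows. Only the `"strictness": "liberal"` field is documented above; the prompt field name and anything else in the payload are illustrative assumptions:

```python
import json

# Documented: "strictness" accepts "normal" (the default) or "liberal".
# The "prompt" field name is an assumption for this sketch.
payload = {
    "prompt": "an olive grove at dusk, oil painting style",
    "strictness": "liberal",  # overrides the account default for this request only
}

body = json.dumps(payload)
print(body)
```

Omitting `strictness` from the body leaves the account default from Settings in effect.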

The following categories are actively detected and blocked by the automated classifier. For the complete policy — including additional prohibited uses that are enforced through account review rather than automated detection — see the Acceptable Use Policy.

The following violations result in immediate account termination and referral to law enforcement:

  • Child sexual abuse material (CSAM) — including fictional, cartoon, or AI-generated depictions. Criminal offense in all EU Member States.
  • Non-consensual intimate imagery — sexually explicit or degrading content depicting real, identifiable individuals without consent, including AI-generated deepfakes.

The following violations result in content rejection and graduated enforcement:

  • Sexually explicit content — explicit sexual acts and exposed genitalia in sexual context (artistic nudity, classical art, and non-sexual nudity are allowed)
  • Violence — content that incites real-world violence against specific individuals or groups, or gratuitous graphic torture with no artistic purpose (fictional violence, action, horror, and historical depictions are allowed)
  • Hate speech — content that dehumanizes individuals or groups based on protected characteristics, including Holocaust denial and genocide denial (criminal offenses under EU and Greek law)
  • Self-harm and suicide promotion — content that promotes or glamorizes self-harm (mental health awareness, prevention campaigns, and recovery narratives are allowed)
  • Harassment and threats — content targeting specific individuals with intimidation or stalking imagery
  • Unauthorized depictions of real people — photorealistic generation of identifiable living persons without consent (caricature and clearly artistic depictions of public figures are allowed)
  • Illegal activity — actionable instructions for creating weapons (WMD, explosives, CBRN), synthesizing illegal drugs, terrorism, or forging documents