This Acceptable Use Policy ("AUP") defines the categories of content that may not be generated, requested, or distributed using GPT-Image-2.0 Playground ("the Service"). It is part of, and incorporated by reference into, our Terms of Service. Violations may result in immediate account termination, forfeiture of unused credits, and — where required by law — reporting to the appropriate authority.
We enforce this policy through (a) automated prompt screening before every image or video generation, (b) the upstream model provider's safety filters, and (c) ongoing review of flagged accounts. Screening is performed by an independent third-party moderation service mandated by our payment processor; see the Prompt Screening section of the Terms of Service for details.
You may not use the Service to generate, request, or attempt to generate any of the following. Accounts that submit prompts in these categories — even rejected prompts — may be suspended and, where required, reported.
- Child Sexual Abuse Material (CSAM) of any kind, including depictions that are stylised, illustrated, AI-generated, or framed as fictional.
- Sexualisation of minors in any form.
- Sexual or intimate depictions of real, identifiable people without their explicit consent, including but not limited to deepfake nudes and "undress" filters.
- "Revenge" imagery: intimate images created or shared without the subject's consent.
- Imagery providing operational instructions for weapons of mass destruction, explosives, dangerous drugs, malware, or any other means of physical or digital attack.
- Content intended to incite, plan, or facilitate violence against specific individuals, groups, or institutions.
- Glorification of, recruitment for, or fundraising for terrorist or violent-extremist groups designated by the United States, the European Union, or the United Nations.
The Service is general-purpose and does not host adult content. Pornographic, hardcore, fetish, or otherwise sexually explicit imagery, even of consenting adults, falls outside our supported scope. If you need an adult-content platform, use one designed and licensed for that purpose.
You may also not use the Service to generate any of the following without an explicit, narrow, and lawful purpose. The Service is not configured for these use cases, and accounts that pursue them may be suspended.
- Public figures, celebrities, politicians, or private individuals depicted in sexual, violent, defamatory, or false-context scenarios.
- Imagery of any real person presented as authentic news, evidence, or factual record when it is not.
- Impersonation or identity-theft material (forged documents, IDs, or signatures of specific people).
- Content that demeans, threatens, or incites hatred against individuals or groups based on race, ethnicity, national origin, religion, gender, gender identity, sexual orientation, disability, age, or any other protected characteristic.
- Targeted harassment of identifiable individuals.
- Imagery designed to deceive viewers about real-world events (fabricated screenshots, fake news photographs, doctored evidence).
- Election-related disinformation about candidates, voting procedures, or results.
- Fraud, scam, phishing, or social-engineering material.
- Gore, mutilation, or torture imagery intended to shock or distress.
- Content that glorifies, celebrates, or provides operational details of real violent acts.
- Instructions for drug synthesis, weapons manufacture, or any other activity unlawful in the United States, the European Union, or your local jurisdiction.
- Imagery for tax evasion, document forgery, or other illegal financial activity.
- Direct reproduction of copyrighted characters, trademarked logos, watermarked photographs, or other protected content beyond what nominative or fair-use doctrine clearly permits.
- Imagery designed to dilute a third-party trademark or pass off goods as another's.
If you encounter generated content that you believe violates this AUP — whether on this Service or elsewhere — please contact us at hello@gptimage2-0.com. Reports involving CSAM are forwarded to the National Center for Missing and Exploited Children (NCMEC) at report.cybertip.org or the equivalent authority in your jurisdiction.
- Automated prompt screening runs on every image and video generation. Prompts denied or flagged by the moderation service are blocked, and the request returns an error.
- We may review individual accounts based on automated risk signals, payment processor escalations, or user reports.
- Confirmed violations result in account termination. Credits associated with terminated accounts are forfeited and not refundable.
- We cooperate with law enforcement and our payment processor's compliance team where required by law or contract.
We may update this AUP as the content policies of our payment processor or upstream model providers evolve, or as applicable law changes. Continued use of the Service after changes constitutes acceptance.
Last updated: 2026-04-25