gpt-image-2 Image Generation

OpenAI-compatible gpt-image-2 text-to-image API: drop in by switching base_url, multi-region endpoints, unified billing with your QCode key

QCode.cc exposes a fully OpenAI-compatible gpt-image-2 text-to-image API. gpt-image-2 is OpenAI's latest text-to-image model (released April 2026) and currently has the strongest in-image text rendering of any public model — it can reliably render English and Chinese characters inside generated images.

What QCode.cc adds:

  • Drop-in for the OpenAI SDK — only the base_url changes; request and response shapes are 100% identical
  • Multi-region access: HK / Japan / US / EU / Shenzhen direct — pick the entry closest to your network
  • Single API key: reuse your existing QCode.cc cr_ key — same quota account as Claude Code, Codex, and Gemini CLI
  • Unified usage view: image calls and chat calls are merged in your dashboard, plus a dedicated self-service usage page

Common use cases: poster generation, illustrations, product imagery, UI mockups, social-media assets.


Quick start

A few lines of Python for your first image:

from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://api.qcode.cc/qcode-img/v1",
    api_key="cr_YOUR_QCODE_API_KEY",
    timeout=180.0,  # single image takes 30-90s; set at least 180s
)

result = client.images.generate(
    model="gpt-image-2",
    prompt="A cyberpunk Tokyo street at night, neon reflecting in rain puddles",
    size="1024x1024",
    quality="low",
    n=1,
)

with open("output.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))

⚠️ Always set timeout ≥ 180s explicitly: the OpenAI SDK's default timeout is too short. gpt-image-2 is a reasoning-based model and takes much longer than traditional text-to-image (see Generation latency).


Endpoints

gpt-image-2 is available on every QCode.cc access domain — append /qcode-img/v1 to the host:

| Your location | Recommended base_url | Protocol | Notes |
|---|---|---|---|
| Mainland China | http://103.236.53.153/qcode-img/v1 | HTTP | Shenzhen direct, no 100s CDN limit; required for medium / high quality |
| Mainland China (HTTPS only) | https://api.qcode.cc/qcode-img/v1 | HTTPS | Goes through global CDN; medium / high may hit 524 (see CDN 100s limit) |
| HK / SE Asia | https://asia.qcode.cc/qcode-img/v1 | HTTPS | Hong Kong node |
| Europe | https://eu.qcode.cc/qcode-img/v1 | HTTPS | Frankfurt node |
| North America | https://us.qcode.cc/qcode-img/v1 | HTTPS | Los Angeles node |

All entries route to the same billing system; usage and quota are unified. For the full domain reference, see Endpoints and API Paths.

qcode-img is the path prefix dedicated to image generation, parallel to /api (Anthropic), /openai/v1 (OpenAI), and /gemini (Google) used by other protocols.
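The base_url table above can be captured in a small helper. The region keys below are informal labels for this sketch, not official identifiers:

```python
# All regional entries share the same /qcode-img/v1 prefix and billing
# system; only the host differs. Region keys here are informal labels.
BASE_URLS = {
    "cn-http": "http://103.236.53.153/qcode-img/v1",
    "global": "https://api.qcode.cc/qcode-img/v1",
    "asia": "https://asia.qcode.cc/qcode-img/v1",
    "eu": "https://eu.qcode.cc/qcode-img/v1",
    "us": "https://us.qcode.cc/qcode-img/v1",
}

def generations_url(region: str) -> str:
    """Full endpoint URL for POST /images/generations in a region."""
    return BASE_URLS[region].rstrip("/") + "/images/generations"
```

The same dictionary values can be passed as `base_url` to the OpenAI SDK client directly.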


API Reference

Endpoint

POST {base_url}/images/generations

Headers

| Header | Required | Value |
|---|---|---|
| Authorization | Yes | Bearer cr_xxxxxxxxxxxxxxxx (your QCode.cc API key) |
| Content-Type | Yes | application/json |

Request body

{
  "model": "gpt-image-2",
  "prompt": "A small ceramic vase with sunflower, photorealistic",
  "size": "1024x1024",
  "quality": "low",
  "n": 1
}

| Field | Type | Required | Default | Values |
|---|---|---|---|---|
| model | string | Yes | | Fixed: gpt-image-2 |
| prompt | string | Yes | | Image description, multi-language (English / Chinese / etc.) |
| size | string | No | 1024x1024 | 1024x1024 (square) / 1024x1536 (portrait) / 1536x1024 (landscape) |
| quality | string | No | medium | low / medium / high |
| n | integer | No | 1 | Images per call (1 – 4) |

Response

{
  "created": 1777135432,
  "data": [
    {
      "b64_json": "iVBORw0KGgo...(base64 PNG, 1-3 MB)",
      "revised_prompt": "A small ceramic vase with sunflower..."
    }
  ]
}
  • b64_json: base64-encoded PNG. Render directly with <img src="data:image/png;base64,...">
  • revised_prompt: model's polished version of your prompt (optional to display)

Error response

Errors follow the standard OpenAI error schema:

{
  "error": {
    "type": "rate_limit_error",
    "code": "image_daily_limit",
    "message": "Daily image generation count limit reached..."
  }
}

| HTTP | code | Meaning |
|---|---|---|
| 401 | invalid_api_key | API key invalid or disabled |
| 401 | key_expired | API key expired |
| 422 | unsupported_size | size not supported (only the three above) |
| 429 | crs_daily_exhausted | Account daily budget reached |
| 429 | crs_total_exhausted | Account total budget reached |
| 429 | image_daily_limit | Per-key daily 100-image cap reached (default; can be raised on request) |
| 429 | concurrency_exhausted | Per-key concurrency cap of 2 reached (default; can be raised) |
| 503 | service_overloaded | Service-wide overload, retry shortly |
| 503 | image_provider_unavailable | Upstream temporarily unavailable, retry shortly |
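One practical way to consume this table is a small classifier that separates transient errors (worth retrying with backoff) from permanent ones. The sketch below is based only on the codes listed above; adjust the set to your own tolerance:

```python
# Transient 429 codes from the error table: concurrency pressure clears
# quickly, while exhausted daily/total budgets will not.
RETRYABLE_CODES = {
    "concurrency_exhausted",
    "service_overloaded",
    "image_provider_unavailable",
}

def should_retry(status: int, error_code: str) -> bool:
    """Return True for errors that a short backoff-and-retry can fix."""
    if status == 503:
        return True  # both 503 codes are "retry shortly"
    return status == 429 and error_code in RETRYABLE_CODES
```

Budget and daily-cap errors (crs_daily_exhausted, image_daily_limit) deliberately return False: retrying them only burns requests until the quota resets.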

Code samples

from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://api.qcode.cc/qcode-img/v1",
    api_key="cr_YOUR_QCODE_API_KEY",
    timeout=180.0,
)

result = client.images.generate(
    model="gpt-image-2",
    prompt="A cyberpunk Tokyo street at night, neon reflecting in rain puddles",
    size="1024x1024",
    quality="low",
    n=1,
)

img_bytes = base64.b64decode(result.data[0].b64_json)
with open("output.png", "wb") as f:
    f.write(img_bytes)
print("Saved output.png")

curl

curl https://api.qcode.cc/qcode-img/v1/images/generations \
  -H "Authorization: Bearer cr_YOUR_QCODE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2",
    "prompt": "A cyberpunk Tokyo street at night",
    "size": "1024x1024",
    "quality": "low",
    "n": 1
  }' \
  | jq -r ".data[0].b64_json" | base64 -d > output.png

JavaScript / Node.js / browser

const r = await fetch("https://api.qcode.cc/qcode-img/v1/images/generations", {
  method: "POST",
  headers: {
    "Authorization": "Bearer cr_YOUR_QCODE_API_KEY",
    "Content-Type":  "application/json",
  },
  body: JSON.stringify({
    model: "gpt-image-2",
    prompt: "A cyberpunk Tokyo street at night",
    size: "1024x1024",
    quality: "low",
    n: 1,
  }),
});
const json = await r.json();
const dataUrl = "data:image/png;base64," + json.data[0].b64_json;
document.querySelector("img").src = dataUrl;

Limits and quotas

Default limits

| Dimension | Default | Notes |
|---|---|---|
| Daily images | 100 / day / key | Resets at midnight Beijing time |
| Concurrency | 2 in flight | Exceeding returns 429 concurrency_exhausted |
| Account budget | Shared with your QCode.cc dailyCostLimit / totalCostLimit | Exceeding returns 429 crs_daily_exhausted |

Defaults are sufficient for the vast majority of users. Contact support to raise them.
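Under the default cap of 2 in-flight requests, a client-side semaphore keeps multi-threaded code clear of 429 concurrency_exhausted. A minimal sketch; `generate_fn` stands in for whatever request wrapper you use (a hypothetical name):

```python
import threading

# Default per-key concurrency limit from the table above.
MAX_IN_FLIGHT = 2
_slots = threading.Semaphore(MAX_IN_FLIGHT)

def generate_with_limit(generate_fn, *args, **kwargs):
    """Run generate_fn while holding one of the two concurrency slots;
    extra callers block here instead of getting a 429 from the server."""
    with _slots:
        return generate_fn(*args, **kwargs)
```

Blocking locally is usually cheaper than catching the 429 and retrying, since image calls are long-lived anyway.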

Usage queries

  • Customer dashboard: image calls are merged with chat calls (last-used time, daily cost, total cost, model breakdown)
  • Self-service page: https://api.qcode.cc/qcode-img/usage — paste your API key to see 30-day usage, detailed call list, and ECharts trend graph (your API key is stored only in your browser, never uploaded)

Billing

Per-image pricing

| size | low | medium | high |
|---|---|---|---|
| 1024×1024 | $0.08 (floor) | $0.08 (floor) | $0.211 |
| 1024×1536 | $0.08 (floor) | $0.08 (floor) | $0.165 |
| 1536×1024 | $0.08 (floor) | $0.08 (floor) | $0.165 |
| 2048×2048 | $0.08 (floor) | $0.08 (floor) | $0.285 |

$0.08 per-call floor

  • If actual cost < $0.08, you are billed $0.08 (low / medium quality typically hit this floor)
  • If actual cost ≥ $0.08, you are billed the actual amount (not raised)

Multiple images

With n > 1, cost scales linearly. Example: n=2 + 1024×1024 high = 2 × $0.211 = $0.422.
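The floor rule plus linear scaling can be sketched as a small estimator. Prices mirror the table above and may change; treating the floor as per-image (so it scales with n) is an assumption consistent with the n=2 example, not a documented guarantee:

```python
# High-quality per-image prices from the pricing table (USD).
HIGH_PRICE = {
    "1024x1024": 0.211,
    "1024x1536": 0.165,
    "1536x1024": 0.165,
    "2048x2048": 0.285,
}
FLOOR = 0.08  # low / medium quality typically bill at this floor

def estimate_cost(size: str, quality: str, n: int = 1) -> float:
    """Estimated USD charge for one call generating n images."""
    per_image = HIGH_PRICE[size] if quality == "high" else FLOOR
    return round(max(per_image, FLOOR) * n, 3)
```

For example, `estimate_cost("1024x1024", "high", 2)` reproduces the $0.422 figure above.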

Failed requests not billed

Any 4xx / 5xx failure is not billed. Client-disconnect (connection closed mid-request) is also not billed.

Currency

Pricing is in USD; final settlement follows the currency policy of your main QCode.cc account (CNY / USD).


Generation latency and timeout

gpt-image-2 is a reasoning-based model — significantly slower than traditional text-to-image (DALL·E 3 / SDXL):

| quality | Typical | Complex prompt p99 |
|---|---|---|
| low | 20 – 35 s | ~50 s |
| medium | 50 – 90 s | ~120 s |
| high | 70 – 120 s | ~150 s |

Practical guidance:

  • The OpenAI Python SDK has a short default timeout — always set timeout=180.0 or higher
  • Browser fetch has no default timeout, but if you use an AbortController, give it at least 180 s
  • Mainland China users on medium / high should use the Shenzhen direct entry, otherwise the CDN 524 issue below will hit you

CDN 100s hard limit (524 errors)

HTTPS requests through api.qcode.cc / asia.qcode.cc / eu.qcode.cc / us.qcode.cc are fronted by a global CDN (Cloudflare). The CDN returns a 524 error if a single request waits more than 100 seconds for the origin to respond.

| quality | Safe within the CDN's 100s limit? |
|---|---|
| low | ✅ Safe (< 35 s) |
| medium | ⚠️ Occasionally hits the cap |
| high | ❌ Frequently 524 |

Workarounds (recommended for medium / high):

  1. Use the Shenzhen HTTP entry http://103.236.53.153/qcode-img/v1 directly — no CDN, no 100s limit
  2. Or accept occasional 524 and add client-side retry
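Workaround 2 can be sketched as a small retry wrapper. `call_fn` is any callable that performs the request; the 524 is assumed to surface as an exception carrying a `status_code` attribute (adapt the check to how your HTTP client actually reports it):

```python
import time

def generate_with_524_retry(call_fn, attempts: int = 3, backoff: float = 2.0):
    """Retry call_fn on CDN 524 cutoffs, re-raising anything else.

    Backoff grows linearly between attempts; the last failure propagates.
    """
    for attempt in range(attempts):
        try:
            return call_fn()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status != 524 or attempt == attempts - 1:
                raise
            time.sleep(backoff * (attempt + 1))
```

Note the retried request starts generation over from scratch, so for high quality the Shenzhen direct entry (workaround 1) is usually the better choice.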

Prompt tips

  • Multi-language: write your prompt in English, Chinese, or mixed — all work
  • Be specific: scene, composition, lighting, style, lens / focal length / camera angle, etc.
  • Avoid brand names / public figures: the model may refuse or return blurry results (OpenAI content policy)
  • Text rendering: gpt-image-2 excels at rendering text inside images — embed English / Chinese titles, short phrases, or poster text directly in the prompt; no special syntax required

Sample prompt:

A vintage poster in Bauhaus style, bold black text "MORNING COFFEE" centered,
warm orange and cream color palette, geometric shapes, slightly textured paper background

Differences from the OpenAI API

| Aspect | OpenAI | QCode.cc |
|---|---|---|
| SDK compatibility | (reference) | ✅ 100%: just change base_url |
| Pricing | Token-based | Tiered, $0.08 / image floor (see above) |
| /v1/images/edits (image editing) | ✅ Supported | ⏳ Not yet |
| stream + partial_images (incremental) | ✅ Supported | ⏳ Not yet |
| /v1/images/generations (main endpoint) | ✅ Supported | ✅ Supported |

Online Playground

https://api.qcode.cc/qcode-img/ — try in the browser:

  • Paste API key + prompt, generate immediately
  • Bilingual UI (EN / ZH)
  • Defaults to low quality (avoids CDN 524)
  • Built-in API reference (curl / Python / JavaScript tabs, parameter table, error codes)
  • One-click PNG download
