Codex Complete Tutorial
From Installation to Mastery: A Complete Usage Guide for OpenAI Codex CLI with QCode.cc
Codex Complete Tutorial¶
This complete Codex CLI tutorial for Chinese developers takes you from zero through installing, configuring, and using OpenAI Codex CLI, with a low-cost, low-latency AI programming experience through QCode.cc. Whether you're new to AI programming tools or already use Claude Code and want to try something different, this tutorial is for you.
1. Codex Introduction¶
What is OpenAI Codex CLI?¶
Codex CLI is an open-source command-line AI programming assistant (Apache 2.0 license) from OpenAI, written in Rust and run directly in the terminal. It can:
- Read and understand your code repository
- Edit files and generate new code
- Execute commands (such as running tests, installing dependencies)
- Iterate autonomously until the task is complete
The core philosophy of Codex is Autonomous Agent: you describe the task, Codex completes it autonomously in a sandbox, and you review the results at the end. This complements Claude Code's interactive dialogue style.
Development History of Codex¶
The name "Codex" has gone through multiple evolutions in OpenAI's product lineup:
- 2021: The earliest Codex was a code fine-tuned version of GPT-3, powering GitHub Copilot
- 2024: OpenAI reactivated the Codex brand, launching a cloud-based asynchronous AI programming agent
- 2025-2026: Codex CLI developed into a mature local command-line tool, rewritten in Rust, supporting advanced features such as MCP, Skills, and Multi-Agent
Current Codex is a multi-interface product: a CLI command-line tool (the focus of this article), a macOS desktop app, IDE plugins, and cloud agents integrated into ChatGPT. The version used through QCode.cc is the CLI version.
Core Differences Between Codex and Claude Code¶
| Dimension | Codex CLI | Claude Code |
|---|---|---|
| Execution Style | Autonomous execution, delivers results upon completion | Interactive dialogue, step-by-step confirmation |
| Open Source | Fully open source (Apache 2.0) | Not open source |
| Language | Rust (fast startup, low resource usage) | TypeScript |
| Sandbox Security | Built-in Landlock/seccomp sandbox | Permission prompt confirmation |
| Instruction File | AGENTS.md | CLAUDE.md |
| Cloud Agent | Supported (built into ChatGPT) | Not supported |
In simple terms: Codex excels at "hands-off tasks" (give a clear requirement, let it run through), Claude Code excels at "pair programming" (discuss and modify while exploring, suitable for exploratory tasks). Using both together yields the best results.
Why Use Codex Through QCode.cc?¶
Codex CLI requires OpenAI API Key or ChatGPT subscription by default, but there are two problems in mainland China:
- Network unreachable: OpenAI API cannot be accessed directly
- High cost: Official GPT-5.3-Codex token pricing is not cheap
Through QCode.cc, you can:
- Low-latency access through Asia-Pacific nodes, with no VPN or self-built proxy required
- Cost reductions of up to 80%, a significant saving compared with official pricing
- Claude Code and Codex share plan quota, one plan works for both tools
- Multiple nodes available (Asia-Pacific primary node, Hong Kong, Shenzhen), ensuring connection stability
2. Installing Codex CLI¶
System Requirements¶
Before installing, please confirm your environment meets the following conditions:
- Operating System: macOS 12+, Ubuntu 20.04+, Windows 10+ (WSL2 recommended)
- Node.js: v22 LTS or higher (required for npm installation)
- Git: 2.x or higher (Codex needs Git to understand the code repository)
- Disk Space: Approximately 200MB (including npm dependencies)
Method 1: npm Installation (Recommended)¶
This is the most common installation method, applicable to all operating systems:
npm install -g @openai/codex
Tip: If you encounter permission issues, macOS/Linux users can prepend `sudo`, or use nvm to manage Node.js and avoid permission issues altogether.
Chinese users: if npm downloads are slow, you can use the npmmirror (Taobao) registry:
npm install -g @openai/codex --registry=https://registry.npmmirror.com
Method 2: Homebrew Installation (macOS)¶
macOS users can also install via Homebrew:
brew install openai-codex
Homebrew's advantage is automatic dependency management and updates.
Method 3: Direct Binary Download (Advanced)¶
Download pre-compiled binaries for your platform from the GitHub Releases page and place them in the PATH directory. This method doesn't depend on Node.js.
# Example: Download and install Linux x64 version
wget https://github.com/openai/codex/releases/latest/download/codex-linux-x64
chmod +x codex-linux-x64
sudo mv codex-linux-x64 /usr/local/bin/codex
Verify Installation¶
codex --version
If you see version number output (such as 0.114.0), the installation is successful.
Current latest version: v0.114.0 (2026-03-11), supporting new features such as Skills system, Hooks engine, and MCP protocol.
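In setup scripts it helps to verify the binary is actually on PATH before relying on it. This is a minimal sketch; the install hint in the message simply repeats the npm command from above:

```shell
# Check whether codex is on PATH before using it; print an install hint otherwise
if command -v codex >/dev/null 2>&1; then
  codex --version
else
  echo "codex not found -- install it with: npm install -g @openai/codex"
fi
```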
Configure Shell Auto-Completion (Optional)¶
Codex supports shell auto-completion; press Tab when entering commands for suggestions:
# Zsh users
echo 'eval "$(codex completion zsh)"' >> ~/.zshrc
source ~/.zshrc
# Bash users
echo 'eval "$(codex completion bash)"' >> ~/.bashrc
source ~/.bashrc
If Zsh prompts `command not found: compdef`, add `autoload -Uz compinit && compinit` before the `eval` line.
3. QCode.cc Configuration¶
Codex CLI requires configuring two files to connect to QCode.cc service:
- `~/.codex/config.toml` - Server endpoint and model configuration
- `~/.codex/auth.json` - API key authentication
Step 1: Create Configuration Directory¶
Windows (PowerShell):
mkdir $HOME\.codex
macOS:
mkdir -p ~/.codex
Linux:
mkdir -p ~/.codex
Step 2: Create config.toml¶
Write the following content to ~/.codex/config.toml:
model_provider = "crs"
model = "gpt-5.3-codex-spark"
model_reasoning_effort = "high"
disable_response_storage = true
preferred_auth_method = "apikey"
[model_providers.crs]
name = "crs"
base_url = "https://asia.qcode.cc/openai"
wire_api = "responses"
requires_openai_auth = true
env_key = "CRS_OAI_KEY"
config.toml Field Details:
| Field | Description |
|---|---|
| `model_provider` | Model provider name; set to the custom `crs` |
| `model` | Default model; `gpt-5.3-codex-spark` recommended |
| `model_reasoning_effort` | Reasoning effort: `low`, `medium`, or `high`. Higher is more accurate but slower |
| `disable_response_storage` | Disable OpenAI storing conversation content (privacy protection) |
| `preferred_auth_method` | Authentication method; set to `apikey` to use an API key |
| `base_url` | QCode.cc Asia-Pacific node address |
| `wire_api` | API protocol type; Codex uses `responses` |
| `requires_openai_auth` | Send the OpenAI-format authentication header |
| `env_key` | Name of the environment variable Codex reads the API key from |
Step 3: Create auth.json¶
Write the following content to ~/.codex/auth.json:
{
"OPENAI_API_KEY": "cr_xxxxxxxxxx"
}
Replace `cr_xxxxxxxxxx` with your QCode.cc API key. The key starts with `cr_`.
auth.json Notes:
- This file provides the API key to Codex, equivalent to setting the `OPENAI_API_KEY` environment variable
- File permissions should be `600` (readable/writable only by the owner): `chmod 600 ~/.codex/auth.json`
- If both `auth.json` and the environment variable exist, `auth.json` takes priority
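The steps above can be applied in one go. This is a minimal sketch; `cr_xxxxxxxxxx` is a placeholder for your real key:

```shell
# Create auth.json with a placeholder key and restrict it to the owner
mkdir -p ~/.codex
cat > ~/.codex/auth.json <<'EOF'
{
  "OPENAI_API_KEY": "cr_xxxxxxxxxx"
}
EOF
chmod 600 ~/.codex/auth.json
ls -l ~/.codex/auth.json
```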
Step 4: Set Environment Variables (Optional Alternative)¶
If you prefer providing the key via environment variable (instead of auth.json), you can set CRS_OAI_KEY:
Windows (PowerShell):
# Temporary setting (current session)
$env:CRS_OAI_KEY = "cr_xxxxxxxxxx"
# Permanent setting (write to user environment variable)
[System.Environment]::SetEnvironmentVariable("CRS_OAI_KEY", "cr_xxxxxxxxxx", [System.EnvironmentVariableTarget]::User)
macOS:
# Temporary setting
export CRS_OAI_KEY="cr_xxxxxxxxxx"
# Permanent setting
echo 'export CRS_OAI_KEY="cr_xxxxxxxxxx"' >> ~/.zshrc
source ~/.zshrc
Linux:
# Temporary setting
export CRS_OAI_KEY="cr_xxxxxxxxxx"
# Permanent setting (Bash)
echo 'export CRS_OAI_KEY="cr_xxxxxxxxxx"' >> ~/.bashrc
source ~/.bashrc
# Permanent setting (Zsh)
echo 'export CRS_OAI_KEY="cr_xxxxxxxxxx"' >> ~/.zshrc
source ~/.zshrc
When using environment variables, set OPENAI_API_KEY in auth.json to null:
{
"OPENAI_API_KEY": null
}
Available Models¶
The following Codex/GPT models are available through QCode.cc:
| Model | Description | Recommended Scenario |
|---|---|---|
| `gpt-5.4` | Latest generation GPT, comprehensive upgrade | Daily development (recommended) |
| `gpt-5.4-pro` (gpt-5.4 Pro) | 5.4 Pro version, enhanced reasoning | Complex architecture and reasoning |
| `gpt-5.4-codex` | 5.4 Codex version, code specialist | Code-intensive tasks |
| `gpt-5.3-codex-spark` | 5.3 lightweight version, fast | Cost-performance priority |
| `gpt-5.3-codex` | 5.3 Codex standard version | Stable output |
All models share QCode.cc plan quota with Claude Code. Switching models doesn't require additional payment.
4. Basic Usage Tutorial¶
4.1 Launch Codex¶
Open a terminal, change to your project directory, then run:
cd /path/to/your/project
codex
Codex will launch a terminal interactive interface (TUI), where you can enter natural language instructions. The interface consists of:
- Top status bar: Shows current model, approval mode, sandbox status
- Main area: AI's replies and operation logs
- Bottom input box: Where you enter instructions
You can also give tasks directly on the command line (non-interactive mode), suitable for script invocation:
# Interactive launch
codex
# Non-interactive mode: execute single task then exit
codex "Read this project's structure and give me an overview"
# Task with image
codex -i screenshot.png "Fix the UI issue shown in the screenshot"
# Specify model
codex -m gpt-5.1-codex-max "Refactor the error handling in the authentication module"
4.2 First Task: Let Codex Write a Function¶
Let's start with a simple example. Run Codex in your project directory and enter:
Write a Python function that accepts a list of strings and returns the longest one. If there are multiple strings with the same length, return the first one. Save to utils.py.
Codex will execute the following steps:
1. Plan: Analyze your requirements and formulate an implementation plan
2. Generate code: Create `utils.py` and write the function
3. Request confirmation: In the default mode, Codex shows the pending file modifications and waits for your confirmation
You'll see a prompt like this:
Codex wants to create file: utils.py
βββββββββββββββββββββββββββββββββββββ
+ def find_longest(strings: list[str]) -> str:
+ """Return the longest string in the list, or the first one if there are multiple."""
+ if not strings:
+ raise ValueError("List cannot be empty")
+ return max(strings, key=len)
Accept? [y/n]
Enter y to confirm, and Codex will write the code to the file.
Next, you can continue to give more instructions, and Codex will maintain context within the same session:
Write a unit test for this function using pytest
Codex will automatically read the utils.py you just created, then generate the corresponding test file.
4.3 Understanding Codex's Sandbox Execution Mode¶
This is one of Codex's most important security features. Codex executes commands in a sandbox, with three security levels:
| Sandbox Mode | File Read | File Write | Command Execute | Network Access |
|---|---|---|---|---|
| `read-only` | Allowed | Requires confirmation | Requires confirmation | Requires confirmation |
| `workspace-write` (default) | Allowed | Allowed within workspace | Allowed within workspace | Denied by default |
| `danger-full-access` | Allowed | All allowed | All allowed | Allowed |
The default workspace-write mode is the best choice for daily development: Codex can freely read/write files and run commands within the project directory, but cannot access files or network outside the project.
If your task requires network access (such as npm install), you can temporarily enable network access:
codex -c 'sandbox_workspace_write.network_access=true' "Install dependencies and run tests"
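If a project needs network access regularly, the same setting can be made persistent instead of passing `-c` every time. This TOML fragment for `~/.codex/config.toml` is a sketch assuming the dotted `-c` key maps to the equivalent table in the config file:

```toml
# Allow network access inside the workspace-write sandbox (persistent form of the -c override)
[sandbox_workspace_write]
network_access = true
```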
4.4 Review and Accept Codex's Changes¶
Codex's file modifications follow an Approval Policy. By default:
- File editing: Shows diff and waits for your confirmation
- Shell commands: Shows command content and waits for your confirmation
When Codex proposes modifications, you can:
- Accept (y): Apply the modification
- Reject (n): Skip this modification
- View details: Carefully review the diff before deciding
Tip: Use the `/diff` slash command at any time to view all modifications applied in the current session.
4.5 Common Interaction Tips¶
File reference: Enter @ followed by filename, Codex will automatically read that file's content:
Review @src/app.py and optimize error handling
Execute Shell commands: Start with ! to run commands directly, output will be passed to Codex:
!cat error.log
Analyze the error log above and find the root cause
Append instructions: When Codex is running, press Enter to insert new instructions, press Tab to queue the next round of instructions.
Backtrack editing: When the input box is empty, press Esc twice to return to the previous message and modify/resend. Continue pressing Esc to backtrack to earlier messages, then press Enter to fork a new dialogue line from that point.
Pipe input: You can pipe output from other commands to Codex for analysis:
# Analyze recent git changes
git diff HEAD~3 | codex "Review these changes and identify potential issues"
# Analyze error logs
cat /var/log/app/error.log | codex "Analyze the root causes of these errors"
# Review PR
gh pr diff 42 | codex "Review this PR's code quality and security"
Keyboard shortcuts:
| Shortcut | Function |
|---|---|
| `Tab` | Auto-complete file path (use with `@`) |
| `Enter` | Insert a new instruction while Codex is running |
| `Tab` | Queue the next round of instructions while Codex is running |
| `Esc` x 2 | Backtrack to a previous message for editing |
| `Ctrl+C` | Cancel the current operation |
Slash commands:
| Command | Description |
|---|---|
| `/help` | Show help |
| `/mode` | Switch approval mode |
| `/diff` | View all changes |
| `/mcp` | View connected MCP servers |
| `/status` | Show current session status |
| `/compact` | Compact conversation history to save tokens |
| `/permissions` | View and modify permission settings |
| `/review` | Code review |
5. Advanced Configuration¶
5.1 Custom Instruction Files (AGENTS.md)¶
Codex supports AGENTS.md files to provide AI with project context and work specifications, functioning similarly to Claude Code's CLAUDE.md.
Project-level instructions: Create AGENTS.md in the project root directory:
# AGENTS.md
## Project Description
This is a FastAPI backend project using PostgreSQL database.
## Code Standards
- All functions must have type annotations
- New API endpoints require corresponding tests
- Run `make lint` to check code style before committing
## Test Commands
- Unit tests: `pytest tests/unit/`
- Integration tests: `pytest tests/integration/`
- Code check: `make lint`
Global instructions: Create global default rules in ~/.codex/AGENTS.md; all projects will inherit them:
# Global Instructions
- Always communicate in English
- Code comments use English
- Prefer functional programming style
- Generated code must include error handling
Subdirectory override: Creating AGENTS.override.md in a specific directory can override parent rules:
# services/payments/AGENTS.override.md
- All changes in this directory must be written to audit log
- Amount calculations use Decimal type, no floating point numbers
Codex searches for instruction files in the following order: AGENTS.override.md > AGENTS.md > configured fallback file. The combined total size limit defaults to 32KB, adjustable via project_doc_max_bytes.
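Since the combined instruction files are capped at 32KB by default, a quick byte count tells you whether you are near the limit. The sketch below builds a throwaway demo tree so the command has something to measure; the directory name and file contents are placeholders:

```shell
# Build a demo project with instruction files, then total their size in bytes
mkdir -p demo-project/services/payments
echo "# AGENTS.md -- project rules"        > demo-project/AGENTS.md
echo "# AGENTS.override.md -- local rules" > demo-project/services/payments/AGENTS.override.md
# Compare this total against the 32KB (32768-byte) default limit
find demo-project -name 'AGENTS*.md' -print0 | xargs -0 cat | wc -c
```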
5.2 Adjusting Approval Mode¶
Codex has three approval modes suitable for different usage scenarios:
Suggest Mode (Most Secure)¶
All operations require manual confirmation, including file editing and command execution. Suitable for learning phase or reviewing sensitive code.
codex --approval-mode suggest
Auto-Edit Mode (Recommended for Daily Use)¶
File editing executes automatically, command execution still requires confirmation. Good balance between efficiency and security.
codex --approval-mode auto-edit
Full-Auto Mode (Fully Autonomous)¶
All operations execute automatically, no confirmation required. Only recommended in isolated environments (such as Docker containers, CI/CD).
codex --full-auto
# Equivalent to: --approval-mode full-auto --sandbox workspace-write
Security tip: `--full-auto` still retains sandbox protection (restricted to the workspace). If you need completely unrestricted access, use `--dangerously-bypass-approvals-and-sandbox`, but this is strongly discouraged outside isolated environments.
Set default mode in config.toml:
# Recommended for personal development
approval_policy = "on-request"
sandbox_mode = "workspace-write"
Switch mode during session: Use /mode command to switch without restarting:
/mode suggest # Switch to suggest mode
/mode auto-edit # Switch to auto-edit mode
/mode full-auto # Switch to full-auto mode
Recommended Configuration by Scenario¶
| Scenario | Approval Mode | Sandbox Mode |
|---|---|---|
| Personal daily development | `auto-edit` | `workspace-write` |
| Team shared environment | `suggest` | `workspace-write` |
| CI/CD pipeline | `full-auto` | `workspace-write` |
| Learning and experimentation | `suggest` | `workspace-write` |
| One-time script tasks | `full-auto` | `danger-full-access` |
5.3 Configure MCP Servers¶
Codex supports Model Context Protocol (MCP), which can connect external tools to extend capabilities.
Add MCP server via command line:
codex mcp add my-server -- npx -y @some/mcp-server --config /path/to/config.json
Configure via config.toml:
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
[mcp_servers.github]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-github"]
env = { GITHUB_TOKEN = "ghp_your_token" }
After configuration, restart Codex and use /mcp command to view connected servers. MCP tools will automatically appear in Codex's available tools list alongside built-in tools.
Make Codex itself an MCP server: Codex can also run in reverse as an MCP server, called by other AI Agents. This is very useful when building multi-agent systems.
5.4 Configure Profile (Multi-Environment Management)¶
If you use different configurations across different projects (e.g., one API Key for work projects, another for personal projects), you can use the Profile feature:
# ~/.codex/config.toml
# Default configuration
model_provider = "crs"
model = "gpt-5.3-codex-spark"
[model_providers.crs]
name = "crs"
base_url = "https://asia.qcode.cc/openai"
wire_api = "responses"
requires_openai_auth = true
env_key = "CRS_OAI_KEY"
# Work project Profile
[profiles.work]
model = "gpt-5.1-codex-max"
model_reasoning_effort = "high"
# Personal project Profile (cost saving)
[profiles.personal]
model = "gpt-5.2-codex"
model_reasoning_effort = "medium"
Launch with specified profile:
codex --profile work "Refactor authentication module"
codex --profile personal "Write a small script"
5.5 Non-Interactive Mode (Scripts and Automation)¶
Codex can not only be used interactively but also run as a non-interactive tool in scripts and CI/CD pipelines. Simply pass the prompt parameter:
# Basic usage: execute task then exit
codex "Add installation instructions to README.md"
# Full-Auto + non-interactive: fully autonomous execution
codex --full-auto "Run test suite, fix all failing tests"
# Output transcript to file (for auditing)
codex --full-auto --transcript output.jsonl "Refactor error handling module"
Using Codex in CI/CD:
# GitHub Actions example
- name: Auto-fix lint errors
run: |
npx @openai/codex --full-auto "Run eslint --fix to fix all lint errors, then commit the fixes"
env:
CRS_OAI_KEY: ${{ secrets.QCODE_API_KEY }}
Codex SDK: If you need to invoke Codex in your own programs, you can use the official SDK for programmatic invocation, embedding Codex into your own development tools or workflows.
5.6 Configuration Priority¶
When multiple configuration sources conflict, Codex resolves them in the following priority order (highest to lowest):
1. Command-line arguments (`--model`, `-c`, etc.)
2. Profile values (the profile specified by `--profile <name>`)
3. Project configuration (`.codex/config.toml`, from the project root down to the current directory; the closest file takes precedence)
4. User configuration (`~/.codex/config.toml`)
5. System configuration (`/etc/codex/config.toml`, Unix systems)
6. Built-in defaults
Understanding this priority helps you precisely control behavior at different levels. For example, set general defaults in ~/.codex/config.toml, override specific settings in the project's .codex/config.toml, then use command line arguments for one-time adjustments.
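The "first defined value wins" behavior can be sketched with plain shell parameter expansion. The variable names and values here are purely illustrative (this is not how Codex is implemented), but the fall-through mirrors the priority list above:

```shell
# Illustrative precedence chain: the first non-empty source supplies the model
cli_model=""                          # 1. command-line --model (unset here)
profile_model=""                      # 2. --profile value (unset here)
project_model="gpt-5.3-codex-spark"   # 3. project .codex/config.toml
user_model="gpt-5.4"                  # 4. user ~/.codex/config.toml
model="${cli_model:-${profile_model:-${project_model:-${user_model:-builtin-default}}}}"
echo "$model"   # prints gpt-5.3-codex-spark: the project-level value wins
```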
6. Claude Code vs Codex Comparison¶
If you're already using Claude Code, the following comparison can help you understand the positioning differences between the two tools:
| Dimension | Claude Code | Codex CLI |
|---|---|---|
| Execution Mode | Interactive, developer in the loop | Autonomous, task-driven |
| Interaction Style | Dialogue-style, step-by-step discussion and confirmation | Give task, then autonomous completion |
| Core Model | Claude Opus 4.6 / Sonnet 4.6 | GPT-5.3-Codex-Spark |
| Context Window | 200K standard / 1M Beta | 400K |
| Sandbox Security | Permission prompt confirmation | Built-in Landlock/seccomp sandbox |
| Instruction File | CLAUDE.md | AGENTS.md |
| MCP Support | Full support | Full support |
| Git Integration | `/commit`, `/review`, and other commands | Built-in Git awareness |
| Multi-Agent | Agent Teams (Research Preview) | Multi-Agent parallel (Subagents) |
| Non-Interactive Mode | `claude -p "task"` | `codex "task"` |
| Open Source | Not open source | Open source (Apache 2.0) |
| Desktop App | None (terminal only) | macOS desktop app |
| QCode.cc Quota | Shared plan quota | Shared plan quota |
Which One is More Suitable for You?¶
Choose Claude Code when:
- Exploratory debugging, need to see and adjust direction along the way
- Complex code refactoring, need real-time discussion of approaches
- Architecture analysis and understanding of large codebases
- Need rich IDE tool integrations (LSP, browser, search, etc.)
- Need ultra-long context (1M token Beta) to handle large amounts of code
Choose Codex when:
- Feature development with clear requirements ("implement XX interface")
- Batch file processing and code migration
- Automation tasks in CI/CD pipelines
- Need parallel processing of multiple independent tasks
- Prefer open source tools, need auditing and customization
- Use Docker sandbox for security isolation
Best practice: Use both together β use Claude Code for planning and exploration, use Codex for execution and batch operations. Both tools share QCode.cc plan quota, switching cost is zero.
7. Practical Examples¶
The following demonstrates Codex usage through several real-world scenarios. Each example includes specific commands and expected effects.
Example 1: Understanding a New Project¶
When you take over an unfamiliar codebase:
cd /path/to/new/project
codex
In the interactive interface:
What does this project do? Please analyze the directory structure, main modules, tech stack,
and give a concise architecture diagram (using ASCII art).
Codex will scan project files, analyze dependency files like package.json, requirements.txt, go.mod, read key entry files, then provide a comprehensive project overview.
Example 2: Code Review¶
codex "Review all changes from the most recent git commit in the src/ directory. Focus on:
1. Potential bugs (null pointers, boundary conditions)
2. Security risks (SQL injection, XSS, hardcoded keys)
3. Performance issues (N+1 queries, unnecessary loops)
Provide specific code locations and fix suggestions."
Example 3: Batch Refactoring¶
codex --full-auto "Replace all Python file print() calls with the logging module.
Specific requirements:
1. Import logging at the top of each file
2. Create logger = logging.getLogger(__name__)
3. Replace print() with logger.info()
4. Keep original formatted strings
5. After replacement, run pytest to ensure nothing is broken"
Codex will process files one by one, maintaining code consistency, and finally run tests for verification.
Example 4: Write Complete Test Suite¶
codex "Write complete unit tests for src/services/user_service.py. Requirements:
1. Use pytest + pytest-mock
2. Cover all public methods
3. Include both happy path and exception path tests
4. Mock external dependencies (database, HTTP requests)
5. Save test file to tests/unit/test_user_service.py
6. Run tests to confirm all pass"
Example 5: Autonomous Test Fixing¶
Classic use case for Full-Auto mode β let Codex autonomously fix failing tests:
codex --full-auto "Run all tests. If any fail:
1. Analyze the failure reasons
2. Fix the code (not the tests)
3. Re-run tests
4. Repeat above steps until all tests pass
Finally, provide a fix summary."
Example 6: Implement UI from Design Mockup¶
codex -i design.png "Implement this page using React + Tailwind CSS based on this design mockup.
Requirements:
1. Responsive layout (mobile support)
2. Pixel-perfect recreation of the design mockup
3. Reasonable component splitting
4. Add basic interaction states (hover, focus)"
Example 7: Database Migration¶
codex "I need to add an avatar_url field (varchar 500, nullable) to the users table.
Please:
1. Create Alembic migration script
2. Update SQLAlchemy model
3. Update related Pydantic schema
4. Update CRUD operation functions
5. Add corresponding API endpoints (GET/PUT)
6. Run migration and confirm success"
Example 8: Auto-Generate Changelog in CI/CD¶
codex --full-auto "Analyze all git commits from the last release tag to now,
categorize them according to conventional commits specification,
and generate CHANGELOG.md update content.
Include: new features, bug fixes, breaking changes, other improvements."
8. FAQ¶
Configuration File Not Found¶
Problem: Codex reports that the configuration file cannot be found or loaded
Solution:
- Check that the configuration directory exists: `ls ~/.codex/`
- Confirm that both `config.toml` and `auth.json` exist
- Check that `config.toml` uses valid TOML syntax (common errors: missing quotes, typos)
- Use `codex --config-dump` to view the configuration actually loaded
API Key Authentication Failed¶
Problem: You get 401 Unauthorized or an "API Key invalid" error
Solution:
- Confirm the API key format is correct (it starts with `cr_`)
- Check that the key in `auth.json` is complete (no extra spaces or line breaks)
- If using an environment variable, confirm the variable name is `CRS_OAI_KEY` (matching `env_key` in `config.toml`)
- Log in to the QCode.cc console to confirm the key's status and remaining quota
Network Connection Issues¶
Problem: Cannot connect to QCode.cc service, timeout or connection refused
Solution:
- Check that the network is reachable: `curl -I https://asia.qcode.cc`
- Verify the `base_url` configuration is correct (it must be `https://asia.qcode.cc/openai`)
- Try the backup nodes:
  - Hong Kong node: `http://103.218.243.5/openai`
  - Shenzhen node: `http://103.236.53.153/openai`
- If using a company proxy/VPN, confirm the proxy settings don't block HTTPS requests
Model Selection Advice¶
Problem: Unsure which model to choose
Advice:
| Your Need | Recommended Model | Reason |
|---|---|---|
| Daily development | `gpt-5.3-codex-spark` | Fast, good cost-performance |
| Complex reasoning | `gpt-5.1-codex-max` | Stronger reasoning ability |
| Simple tasks | `gpt-5.2-2025-12-11` | Lower quota consumption |
| Most stable output | `gpt-5.2-codex` | Fully validated |
After setting a default model in config.toml, you can still switch temporarily:
codex -m gpt-5.1-codex-max "Analyze this complex concurrency bug"
Sandbox Restrictions Causing Command Failures¶
Problem: Commands Codex attempts to execute are rejected by the sandbox
Solution:
- If it's a network operation (like `npm install`), temporarily enable network access: `codex -c 'sandbox_workspace_write.network_access=true' "Install dependencies"`
- If you need to write files outside the project directory, temporarily expand the writable scope: `codex --sandbox danger-full-access "Save output to /tmp/result.txt"`
- Use `/permissions` during the session to view and adjust current permissions
Cost Explanation¶
Problem: How are Codex and Claude Code costs calculated?
Explanation:
- Codex and Claude Code share QCode.cc plan quota
- The same plan can be used by both tools simultaneously
- Costs are calculated based on actual token consumption, not by tool
- In Full-Auto mode, Codex iterates autonomously for multiple rounds, token consumption for single tasks may be higher, but saves developer's interaction time
- It's recommended to use the `/cost` command to view token usage for the current session, or check overall quota usage in the QCode.cc console
Can AGENTS.md and CLAUDE.md Coexist?¶
Yes. If your project is used by both Codex and Claude Code:
- Codex reads only `AGENTS.md` and ignores `CLAUDE.md`
- Claude Code reads only `CLAUDE.md` and ignores `AGENTS.md`
- They don't interfere with each other; you can maintain separate instruction files for each tool
- It's recommended to keep core specifications consistent across both files (test commands, code style, etc.)
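If you want the shared parts of the two files to stay literally identical, one low-tech approach is generating both from a single hand-maintained source. `shared-rules.md` is a hypothetical filename, not a convention either tool recognizes:

```shell
# Keep AGENTS.md and CLAUDE.md in sync from one source file (shared-rules.md is hypothetical)
printf '## Test Commands\n- Unit tests: pytest tests/unit/\n' > shared-rules.md
cp shared-rules.md AGENTS.md
cp shared-rules.md CLAUDE.md
diff -q AGENTS.md CLAUDE.md && echo "instruction files in sync"
```

Tool-specific rules can then live in a separate section appended to each file after the copy.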
9. QCode.cc Advantages¶
| Advantage | Description |
|---|---|
| 80% Cost Savings | Significantly lower than official pricing, same quota for less money |
| Shared Quota | Claude Code and Codex share plan quota, one plan for both tools |
| 99.9% Availability | Enterprise-grade stable service, multiple nodes as mutual backup |
| Asia-Pacific Optimized | Deployed in Asia-Pacific region, low latency for mainland China access |
| No VPN Required | No proxy or VPN configuration needed, out of the box |
| Multiple Nodes | Asia-Pacific primary node + Hong Kong + Shenzhen, smart routing ensures connection |
10. Related Documentation¶
- Environment Variables Configuration β Environment variable settings for Claude Code
- Quick Start β Claude Code quick start guide
- Aider Integration β Configuration for another open-source AI programming assistant
- CLI Tips β Advanced command-line usage for Claude Code
- Workflow Tips β Workflow suggestions for improving AI programming efficiency