## Config files & related artifacts

| File / Artifact | Location | Purpose |
|---|---|---|
| `~/.codex/config.toml` | Home directory, hidden `.codex` folder | Main user config. Stores preferences (model, reasoning effort, approval mode, MCP servers). ([GitHub](https://github.com/openai/codex)) |
| `~/.codex/auth.json` | `~/.codex/` in the home directory | Holds login/auth credentials when signing in with a ChatGPT account. ([OpenAI Developers](https://developers.openai.com/codex/cli/)) |
| `~/.codex/instructions.md` or `~/.codex/AGENTS.md` | Home `.codex` folder, or the project root (depending on scope) | Human-readable context / instructions / project-style guidance: code style, testing, how agents should behave. ([GitHub](https://github.com/openai/codex)) |
| Project-level config (if any) | Possibly `./.codex/config.toml`, or via local docs | If present, overrides or augments the user config for a specific project. Not clearly documented; inferred from how `instructions.md` / `AGENTS.md` are discovered. ([DataCamp](https://www.datacamp.com/tutorial/openai-codex)) |

---

## Precedence / override order (inferred)

Here is the likely order Codex CLI uses when deciding which setting to apply. Lower numbers are lower precedence; higher numbers win when there's a conflict.

1. Built-in defaults (hardcoded in the Codex binary)
2. User config file (`~/.codex/config.toml`)
3. Auth settings (`~/.codex/auth.json`) for login/auth behavior
4. Environment variables / CLI flags (e.g. `--model`, `--provider`, etc.)
5. Local project instructions/docs (`AGENTS.md` / `instructions.md`), which layer behavioral guidance on top rather than competing with config values

- CLI flags override settings in `config.toml`. ([OpenAI Developers](https://developers.openai.com/codex/cli/))
- Environment variables are supported: the provider endpoint or API key can be set via env vars, and `config.toml` can reference env keys. ([APIpie.ai](https://apipie.ai/docs/Integrations/Coding/Codex-CLI))
- AGENTS / instructions docs guide behavior rather than set strict config parameters; they layer over the config. ([DataCamp](https://www.datacamp.com/tutorial/openai-codex))

```
Built-in defaults (hardcoded in binary)
│
├── User config
│   └── ~/.codex/config.toml
│
├── Auth settings
│   └── ~/.codex/auth.json
│
├── Project instructions / guidance
│   ├── ./.codex/config.toml (if supported)
│   ├── ./.codex/instructions.md
│   └── ./.codex/AGENTS.md
│
├── Environment variables
│   └── e.g. OPENAI_API_KEY=...
│
└── Command-line arguments
    └── codex --model gpt-4.1
```

- `config.toml` holds preferences (model, reasoning effort, provider, MCP servers).
- `auth.json` is created by `codex login`.
- `instructions.md` / `AGENTS.md` don't override config values; they inject behavioral guidance into sessions.
- Env vars and CLI args always override config files.
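
To make the file roles above concrete, here is a minimal sketch of what `~/.codex/config.toml` can contain. The key names (`model`, `model_reasoning_effort`, `approval_policy`, `model_providers.*.env_key`, `mcp_servers.*`) follow the config documentation in the openai/codex repo, but treat them as assumptions and verify against the docs for your installed version; the provider and server entries are made up for illustration.

```toml
# ~/.codex/config.toml (illustrative sketch only; verify key names against
# the openai/codex config docs for your version)

model = "gpt-5"                    # default model for new sessions
model_reasoning_effort = "medium"  # e.g. low | medium | high
approval_policy = "on-request"     # when Codex asks before running commands

# A provider entry can reference its API key indirectly via an env var;
# this is the sense in which config.toml "refers to env keys" rather than
# storing secrets. The provider itself is hypothetical.
[model_providers.example]
name = "Example provider"
base_url = "https://api.example.com/v1"
env_key = "EXAMPLE_API_KEY"        # read from the environment at runtime

# MCP servers are declared as TOML tables; Codex launches them as subprocesses.
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
```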
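
The precedence order is easiest to see with a couple of invocations: the config file supplies the default, while a flag or per-run override wins for that single run. The flags shown (`--model`, `-c key=value`) and the `codex exec` subcommand reflect the Codex CLI help at the time of writing; confirm with `codex --help` for your version.

```sh
# config.toml sets a default, e.g.  model = "gpt-5"

# A CLI flag overrides the config file for this invocation only:
codex exec --model o3 "summarize the open TODOs in this repo"

# A generic per-run override of any config.toml key (if your version supports -c):
codex exec -c model_reasoning_effort="high" "review the diff on this branch"

# API-key auth via an environment variable instead of a stored auth.json
# (assumed; some versions expect `codex login --api-key` instead):
OPENAI_API_KEY="sk-..." codex exec "explain the build system"
```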