Run Codex from a GitHub Actions workflow while keeping tight control over the privileges available to Codex. This action handles installing the Codex CLI and configuring it with a secure proxy to the Responses API.
Users must provide an API key for their chosen provider (for example, `OPENAI_API_KEY`, or `AZURE_OPENAI_API_KEY` if using Azure for OpenAI models) as a GitHub Actions secret to use this action.
This fork builds on openai/codex-action so you can drop it into existing workflows while picking up a few opinionated improvements:
- ChatGPT subscription auth support — provide `codex-auth-json-b64`, and the action writes your exported `auth.json` into `CODEX_HOME` so Codex can run without an API key. See `docs/subscription-auth.md` for the full walkthrough plus the `examples/code-review-subscription.yml` workflow.
- Secret-scoped environment passthrough — the `pass-through-env` input acts as an allowlist so you can safely forward only the env vars Codex needs (for example `GH_TOKEN` or release credentials) instead of exposing the entire runner environment. See `docs/pass-through-env.md` for setup details and security tips.
- Extra guardrails and docs for teams — opt-in actor allowlists (`allow-users`, `allow-bots`), stricter sandbox defaults (`drop-sudo`), and expanded security guidance help you keep subscription credentials and API keys locked down while still enjoying full Codex automation.
If you only need the upstream behavior, you can continue using openai/codex-action@v1; otherwise grab this fork when those enhancements matter.
While Codex cloud offers a powerful code review tool that you can use today, here is an example of how you can build your own code review workflow with openai/codex-action if you want to have more control over the experience.
In the following example, we define a workflow, triggered whenever a user creates a pull request, that:

- Creates a shallow clone of the repo.
- Ensures the `base` and `head` refs for the PR are available locally.
- Runs Codex with a `prompt` that includes the details specific to the PR.
- Takes the output from Codex and posts it as a comment on the PR.
See security.md for tips on using openai/codex-action securely.
```yaml
name: Perform a code review when a pull request is created.

on:
  pull_request:
    types: [opened]

jobs:
  codex:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    outputs:
      final_message: ${{ steps.run_codex.outputs.final-message }}
    steps:
      - uses: actions/checkout@v5
        with:
          # Explicitly check out the PR's merge commit.
          ref: refs/pull/${{ github.event.pull_request.number }}/merge
      - name: Pre-fetch base and head refs for the PR
        run: |
          git fetch --no-tags origin \
            ${{ github.event.pull_request.base.ref }} \
            +refs/pull/${{ github.event.pull_request.number }}/head
      # If you want Codex to build and run code, install any dependencies that
      # need to be downloaded before the "Run Codex" step because Codex's
      # default sandbox disables network access.
      - name: Run Codex
        id: run_codex
        uses: openai/codex-action@v1
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          prompt: |
            This is PR #${{ github.event.pull_request.number }} for ${{ github.repository }}.
            Review ONLY the changes introduced by the PR, so consider:

            git log --oneline ${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}

            Suggest any improvements, potential bugs, or issues.
            Be concise and specific in your feedback.

            Pull request title and body:

            ----

            ${{ github.event.pull_request.title }}

            ${{ github.event.pull_request.body }}

  post_feedback:
    runs-on: ubuntu-latest
    needs: codex
    if: needs.codex.outputs.final_message != ''
    permissions:
      issues: write
      pull-requests: write
    steps:
      - name: Report Codex feedback
        uses: actions/github-script@v7
        env:
          CODEX_FINAL_MESSAGE: ${{ needs.codex.outputs.final_message }}
        with:
          github-token: ${{ github.token }}
          script: |
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: process.env.CODEX_FINAL_MESSAGE,
            });
```

For a ChatGPT subscription auth variant, see `examples/code-review-subscription.yml`.
| Name | Description | Default |
|---|---|---|
| `openai-api-key` | Secret used to start the Responses API proxy when you are using OpenAI (default). Store it in secrets. | `""` |
| `responses-api-endpoint` | Optional Responses API endpoint override, e.g. `https://example.openai.azure.com/openai/v1/responses`. Leave empty to use the proxy's default. | `""` |
| `codex-auth-json-b64` | Base64-encoded contents of `auth.json` for Codex CLI (ChatGPT subscription auth). The action decodes and writes it to `CODEX_HOME/auth.json`. | `""` |
| `prompt` | Inline prompt text. Provide this or `prompt-file`. | `""` |
| `prompt-file` | Path (relative to the repository root) of a file that contains the prompt. Provide this or `prompt`. | `""` |
| `output-file` | File where the final Codex message is written. Leave empty to skip writing a file. | `""` |
| `working-directory` | Directory passed to `codex exec --cd`. Defaults to the repository root. | `""` |
| `sandbox` | Sandbox mode for Codex. One of `workspace-write` (default), `read-only`, or `danger-full-access`. | `workspace-write` |
| `codex-version` | Version of `@openai/codex` to install. Set to `""` to install the latest available version. | `0.104.0` |
| `codex-args` | Extra arguments forwarded to `codex exec`. Accepts JSON arrays (`["--flag", "value"]`) or shell-style strings. | `""` |
| `pass-through-env` | Optional newline- or comma-separated list of environment variable names forwarded to Codex. Only include the specific secrets Codex must read. | `""` |
| `output-schema` | Inline schema contents written to a temp file and passed to `codex exec --output-schema`. Mutually exclusive with `output-schema-file`. | `""` |
| `output-schema-file` | Schema file forwarded to `codex exec --output-schema`. Leave empty to skip passing the option. | `""` |
| `model` | Model the agent should use. Defaults to `gpt-5.3-codex`; set `model: ""` to let Codex pick its default. | `gpt-5.3-codex` |
| `effort` | Reasoning effort the agent should use. Leave empty to let Codex pick its default. | `""` |
| `codex-home` | Directory to use as the Codex CLI home (config/cache). Uses the CLI default when empty. | `""` |
| `safety-strategy` | Controls how the action restricts Codex privileges. See Safety strategy. | `drop-sudo` |
| `codex-user` | Username to run Codex as when `safety-strategy` is `unprivileged-user`. | `""` |
| `allow-users` | List of GitHub usernames who can trigger the action in addition to those who have write access to the repo. | `""` |
| `allow-bots` | Allow runs triggered by GitHub Apps/bot accounts to bypass the write-access check. | `false` |
| `capture-json-events` | Capture `codex exec --json` output and parse metadata (session ID + usage). | `false` |
| `json-events-file` | Optional path to write raw JSONL events when JSON capture is enabled. | `""` |
| `write-step-summary` | Write run metadata and a final-message preview to the GitHub Step Summary. | `true` |
| `trigger-phrase` | Optional phrase that must appear in issue/PR/comment text for the action to proceed. | `""` |
| `label-trigger` | Optional issue/PR label name that triggers execution. | `""` |
| `assignee-trigger` | Optional issue/PR assignee username that triggers execution. | `""` |
| `track-progress` | Create/update a progress comment on issue/PR events while Codex runs. | `false` |
| `use-sticky-comment` | When tracking progress, reuse one marker-based comment instead of creating new comments. | `false` |
| `sanitize-github-context` | Sanitize untrusted GitHub payload text before deriving prompts from trigger-driven events. | `true` |
The `safety-strategy` input determines how much access Codex receives on the runner. Choosing the right option is critical, especially when sensitive secrets (like your OpenAI API key) are present.
See Protecting your `OPENAI_API_KEY` on the Security page for important details on this topic.
- `drop-sudo` (default) — On Linux and macOS runners, the action revokes the default user's `sudo` membership before invoking Codex. Codex then runs as that user without superuser privileges. This change lasts for the rest of the job, so subsequent steps cannot rely on `sudo`. This is usually the safest choice on GitHub-hosted runners.
- `unprivileged-user` — Runs Codex as the user provided via `codex-user`. Use this if you manage your own runner with a pre-created unprivileged account. Ensure the user can read the repository checkout and any files Codex needs. See `unprivileged-user.yml` for an example of how to configure such an account on `ubuntu-latest`.
- `read-only` — Executes Codex in a read-only sandbox. Codex can view files but cannot mutate the filesystem or access the network directly. The OpenAI API key still flows through the proxy, so Codex could read it if it can reach process memory.
- `unsafe` — No privilege reduction. Codex runs as the default `runner` user (which typically has `sudo`). Only use this when you fully trust the prompt. On Windows runners this is the only supported choice, and the action will fail if another option is provided.
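As a sketch, wiring up `unprivileged-user` on a self-hosted runner might look like the following; the `codex-runner` account name is an assumption, and you would create that account however your runner images are provisioned:

```yaml
- name: Run Codex without runner privileges
  uses: openai/codex-action@v1
  with:
    openai-api-key: ${{ secrets.OPENAI_API_KEY }}
    safety-strategy: unprivileged-user
    codex-user: codex-runner   # hypothetical pre-created unprivileged account
    prompt: "Summarize the open TODOs in this repository."
```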
- Windows: GitHub-hosted Windows runners lack a supported sandbox. Set `safety-strategy: unsafe`. The action validates this and exits early otherwise.
- Linux/macOS: All options for `safety-strategy` are supported. Again, if you pick `drop-sudo`, remember that later steps in your job that rely on `sudo` will fail. If you do need to run code that requires `sudo` after `openai/codex-action` has run, one option is to pipe the output of `openai/codex-action` to a fresh job on a new host and continue your workflow from there.
| Name | Description |
|---|---|
| `final-message` | Final message returned by `codex exec`. |
| `structured-output` | Stringified JSON when `output-schema` is used and Codex returns valid JSON in the final message. |
| `usage-json` | Stringified token usage extracted from JSON events (`input_tokens`, `cached_input_tokens`, `output_tokens`). |
| `execution-file` | Path to the raw JSONL event log when `capture-json-events` is enabled. |
| `session-id` | Session/thread ID extracted from JSON events (diagnostic only). |
| `conclusion` | Codex run result (`success` or `failure`). |
| `triggered` | Whether trigger conditions matched and the action proceeded. |
| `tracking-comment-id` | Comment ID used for progress tracking when `track-progress` is enabled. |
As we saw in the example above, we took the `final-message` output of the `run_codex` step and made it an output of the `codex` job in the workflow:

```yaml
jobs:
  codex:
    # ...
    outputs:
      final_message: ${{ steps.run_codex.outputs.final-message }}
```

If Codex needs access to workflow secrets (for example `GH_TOKEN` to push tags or `SENTRY_AUTH_TOKEN` for release uploads), explicitly list those variable names in the `pass-through-env` input and set the actual values via the workflow `env` block. The input accepts either newline-separated or comma-separated names:
```yaml
- uses: openai/codex-action@v1
  with:
    pass-through-env: |
      GH_TOKEN
      SENTRY_AUTH_TOKEN
  env:
    GH_TOKEN: ${{ secrets.GH_TOKEN }}
    SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
```

Forwarding env vars is opt-in so you can keep the rest of the GitHub Actions environment hidden from Codex. Whatever you expose here becomes visible to Codex and any commands it runs; combining this feature with `sandbox: danger-full-access` or `safety-strategy: unsafe` increases the risk of token exfiltration, so prefer scoped credentials and the stricter sandbox modes.

See `examples/pass-through-env.yml` for a full workflow, and `docs/pass-through-env.md` for a deeper walkthrough that covers rotation and troubleshooting.
You can gate execution on GitHub event payload data by setting one or more of: `trigger-phrase`, `label-trigger`, or `assignee-trigger`.

- If no trigger inputs are configured, behavior is unchanged (the action proceeds).
- If trigger inputs are configured and none match, the action no-ops cleanly with output `triggered=false`.
- If trigger inputs are configured and a match occurs, the action can derive a prompt from the event payload when `prompt`/`prompt-file` are not provided.

The `sanitize-github-context` input is `true` by default to strip hidden markup and zero-width characters before deriving prompt text.
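For example, a comment-driven gate might look like the following sketch; the phrase itself is arbitrary, and on non-matching events the step simply exits with `triggered=false`:

```yaml
- uses: openai/codex-action@v1
  with:
    openai-api-key: ${{ secrets.OPENAI_API_KEY }}
    trigger-phrase: "@codex review"   # arbitrary phrase; only matching events proceed
```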
See examples/triggered-progress-review.yml
for an end-to-end trigger workflow.
Enable `capture-json-events: true` when you want machine-readable execution metadata from `codex exec --json`. This powers outputs like `session-id`, `usage-json`, and `execution-file`.

You can control where the raw JSONL goes with `json-events-file`; otherwise the action writes to a temporary file and exposes its path via `execution-file`.
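Assuming those inputs, a capture-enabled step might be configured like this (the file path and follow-up step are illustrative):

```yaml
- uses: openai/codex-action@v1
  id: run_codex
  with:
    openai-api-key: ${{ secrets.OPENAI_API_KEY }}
    prompt: "Review the latest changes."
    capture-json-events: true
    json-events-file: codex-events.jsonl  # illustrative; omit to use a temp file
- name: Show token usage
  run: echo '${{ steps.run_codex.outputs.usage-json }}'
```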
`write-step-summary` defaults to `true` and appends run metadata plus a concise final-message preview to the GitHub Step Summary.

Set `track-progress: true` on issue/PR events to create/update a progress comment while Codex runs. Add `use-sticky-comment: true` to reuse one marker-based comment across runs and reduce comment noise.
- Run this action after `actions/checkout@v5` so Codex has access to your repository contents.
- To use a non-default Responses endpoint (for example Azure OpenAI), set `responses-api-endpoint` to the provider's URL while keeping `openai-api-key` populated; the proxy will still send `Authorization: Bearer <key>` upstream.
- If you want Codex to have access to a narrow set of privileged functionality, consider running a local MCP server that can perform these actions and configure Codex to use it.
- If you need more control over the CLI invocation, pass flags through `codex-args` or create a `config.toml` in `codex-home`.
- Once `openai/codex-action` has run once with `openai-api-key`, you can also call `codex` from subsequent scripts in your job. (You can omit `prompt` and `prompt-file` from the action in this case.)
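To illustrate the last tip, a later step could invoke the CLI directly once the action has configured it; this sketch assumes `codex exec` accepts the prompt as a positional argument:

```yaml
- name: Set up Codex
  uses: openai/codex-action@v1
  with:
    openai-api-key: ${{ secrets.OPENAI_API_KEY }}
    # prompt/prompt-file omitted: this step only installs and configures Codex.
- name: Call the CLI from a script
  run: codex exec "List the TODO comments in this repository."
```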
To configure the Action to use OpenAI models hosted on Azure, pay close attention to the following:
- The `responses-api-endpoint` must be set to the full URL (including any required query parameters) that Codex will `POST` to for a Responses API request. For Azure, this might look like `https://YOUR_PROJECT_NAME.openai.azure.com/openai/v1/responses`. Note that unlike when customizing a model provider in Codex, you must include the `v1/responses` suffix in the URL yourself, if appropriate.
- The `openai-api-key` input must be a valid key that can be used with the `Authorization: Bearer <KEY>` header when making a `POST` request to your Responses API endpoint. (This is also true for the value of the `env_key` when setting a custom provider using the Codex CLI.)
Ultimately, your configured Action might look something like the following:
```yaml
- name: Start Codex proxy
  uses: openai/codex-action@v1
  with:
    openai-api-key: ${{ secrets.AZURE_OPENAI_API_KEY }}
    responses-api-endpoint: "https://bolinfest-7804-resource.cognitiveservices.azure.com/openai/v1/responses"
    prompt: "Debug all the things."
```

If you already have a Codex login on a developer machine, you can export your CLI credentials and provide them to this action via a base64-encoded `auth.json`:
1. On a trusted machine where `codex` is logged in, find `auth.json` under `~/.codex/auth.json`.

2. Base64-encode it and save it into a GitHub secret.

   Linux (GNU base64):

   ```shell
   base64 -w0 ~/.codex/auth.json
   ```

   macOS (BSD base64):

   ```shell
   base64 -i ~/.codex/auth.json | tr -d '\n'
   ```

3. In your workflow, pass the secret to the action:

   ```yaml
   - uses: openai/codex-action@v1
     with:
       codex-auth-json-b64: ${{ secrets.CODEX_AUTH_JSON_B64 }}
       prompt: |
         Hello from subscription auth.
   ```

Notes:
- Do not provide both `openai-api-key` and `codex-auth-json-b64` unless you specifically want to use the Responses API proxy; if both are present, the proxy configuration takes precedence.
- `auth.json` is sensitive. This action writes it with file mode `0600`. Prefer `safety-strategy: drop-sudo` or `unprivileged-user` to limit risk.
For an end-to-end walkthrough of exporting, encoding, storing, and rotating these credentials, see docs/subscription-auth.md.
See the CHANGELOG for details.
This project is licensed under the Apache License 2.0.