OpenAI Developer Community - Latest topics https://community.openai.com/latest Latest topics Fri, 24 Apr 2026 02:31:22 +0000 Request: Manga generation support for visually impaired creators Prompting I have a visual impairment.

I used to work as a designer, and drawing was not only my profession but my purpose in life.
Due to my condition, I can no longer create artwork by hand.

However, I have not given up on creation itself.
I love building stories, and I am currently working on a project called “Kotowari” using ChatGPT.

The major challenge comes when trying to turn my story into a manga.

I have tried tools like Stable Diffusion and LoRA, but they require heavy visual adjustment and fine-tuning.
Maintaining:

  • consistent art style

  • character identity

  • panel composition

is extremely difficult, especially for someone with limited vision.

On the other hand, ChatGPT’s image generation feels closer to my intent and expression.

That is why I strongly request:

An end-to-end system that can generate manga from story to final visuals.

This includes:

  • panel layout

  • consistent characters

  • expressions and acting

  • continuous page generation based on narrative

This is not just about convenience.

For visually impaired creators, this is about regaining the ability to create.

I lost the ability to draw, but I do not want to lose the ability to create.

Please consider this not just as a feature request, but as a step toward making creative tools accessible to everyone.

I truly hope this will be developed.

2 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/request-manga-generation-support-for-visually-impaired-creators/1379647 Fri, 24 Apr 2026 02:31:22 +0000 No No No community.openai.com-topic-1379647 Request: Manga generation support for visually impaired creators
Camera permission doesn't pop up in ChatGPT Apps widget in mobile mode ChatGPT Apps SDK My app widget requests camera access with the code below. While testing, the camera permission pop-up shows in ChatGPT web, but in the Android ChatGPT app the permission prompt never shows up and I get a “NotAllowedError”.

      const mediaStream = await window.navigator.mediaDevices.getUserMedia({
        video: { facingMode: 'user' },
        audio: false,
      });

I couldn’t find any tips in Apps SDK | OpenAI Developers.

In my Android phone, I cleared the cache of ChatGPT and Android System WebView, re-allow camera permission to ChatGPT and Android System WebView, but it still didn’t work.
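One thing worth ruling out before blaming permissions (a hedged diagnostic sketch, not ChatGPT-specific: Android WebViews can drop `mediaDevices` entirely on non-secure origins, which also surfaces as capture failure):

```javascript
// Hypothetical diagnostic helper: work out *why* camera capture might fail
// before calling getUserMedia. `nav` is injected so it can run outside a browser.
function diagnoseCamera(nav) {
  if (!nav.mediaDevices || !nav.mediaDevices.getUserMedia) {
    // Common in Android WebViews when the page is not a secure (https) context.
    return 'mediaDevices unavailable: check for a secure context';
  }
  if (!nav.permissions || !nav.permissions.query) {
    return 'no Permissions API: call getUserMedia and inspect error.name';
  }
  return 'APIs present: query the camera permission state';
}
```

Running this inside the widget (passing `window.navigator`) would at least distinguish "API missing in this WebView" from "permission genuinely denied".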

It would be appreciated if anyone can help.

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/camera-permission-dont-pop-up-in-chatgpt-apps-widget-in-mobile-mode/1379646 Fri, 24 Apr 2026 02:29:59 +0000 No No No community.openai.com-topic-1379646 Camera permission don't pop up in ChatGPT Apps widget in mobile mode
ChatGPT Edu workspace cannot track Codex usage limits Codex I have been granted access to Codex in my university’s ChatGPT Edu workspace. On 04/22/2026 I was able to track my usage and limits with /status in the Codex CLI.

On 04/23/2026, the Codex CLI returns “Limits: not available for this account”. The URL in the output panel of /status returns “Your plan does not impose Codex rate limits”.

However, I still hit a rate limit. Codex cli output shows “You’ve hit your usage limit. Try again at Apr 30th, 2026 11:21 AM.”

I wonder: do we still have a rate limit that we are simply unable to visualize or track? Or do we have no imposed Codex rate limits, and the message flagging that I have hit a usage limit is an error? Thank you!

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/chatgpt-edu-workspace-cannot-track-codex-usage-limits/1379645 Fri, 24 Apr 2026 02:29:39 +0000 No No No community.openai.com-topic-1379645 ChatGPT Edu workspace cannot track Codex usage limits
Codex needs additional context references during reasoning Codex Codex needs to be able to reference context-specific instructions during reasoning, and they cannot all come from the Agents.md file.

Issue to Fix

During reasoning the model doesn’t understand things like:

  • Model inheritance
  • Utils
  • Helpers
  • Existing sharable code
  • Workflows

What I would like is an easy way to build lists of context-specific instructions that point to the items listed above. The model could request specifics from my local machine, using a smaller model to provide help to the reasoner.

After the prompt response is complete, it should pass the finalized prompt and changes to the reviewer. The reviewer should also be prompted with specific instructions I keep in Codex, for things like styling, code cleanup, and other specifics. The reviewer keeps the reasoner in check, because all the code I get back is massive, unmanageable code that I have to clean up and reduce down as much as possible. I am not even kidding: I have gotten 200+ lines of code for something that could be done in 5 lines.

Layout

  • Agents.md for the bulk of instructions
  • Context Specific Lists: A list of specific contexts to pull from that I can create for reasoning.
  • The reviewer: a file or comment box that I can give to the reviewer to follow up before completing the task.
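Purely as an illustration of the layout above (no such feature exists today; the file name, headings, and paths are all invented), the context-specific list could be as simple as a topic-to-location map the reasoner can pull from on demand:

```markdown
<!-- contexts.md (hypothetical): topic → where the reasoner should look -->
## model-inheritance
All models subclass the shared base class; see the models/ directory.

## utils-and-helpers
Reuse existing helpers before writing new ones; see utils/ and helpers/.

## workflows
Follow the existing job/queue workflow pattern; see workflows/.
```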

4 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/codex-needs-additional-context-references-durning-reasoning/1379635 Thu, 23 Apr 2026 20:31:25 +0000 No No No community.openai.com-topic-1379635 Codex needs additional context references durning reasoning
GPT-5.5 is here! Available in Codex and ChatGPT today Announcements Introducing GPT-5.5

A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done.

GPT-5.5 gets to what you are trying to do more quickly and can handle more of the work on its own. It is particularly strong at writing and debugging code, researching online, analyzing data, creating documents and spreadsheets, operating software, and working across tools until the task is complete. Instead of managing every step closely, you can hand GPT-5.5 a messy, multi-part task and rely on it to plan, use tools, verify its work, navigate ambiguity, and keep going.

The improvements stand out most in agentic coding, computer use, knowledge work, and early scientific research, areas where progress depends on reasoning across context and taking action over time. GPT-5.5 delivers that increase in capability without giving up speed. Larger and more capable models are often slower, but GPT-5.5 matches GPT-5.4 on per-token latency in real-world serving while operating at a higher level overall. It also uses significantly fewer tokens to complete the same Codex tasks, which makes it more efficient as well as more capable.

The same qualities that make GPT-5.5 strong at coding also make it more effective for everyday computer-based work. It is better at understanding intent, using tools, checking results, and turning rough input into useful output. In Codex, it outperforms GPT-5.4 on documents, spreadsheets, and slide decks, and gets closer to feeling like a model that can actively use the computer alongside you.

Serving GPT‑5.5 at GPT‑5.4 latency required rethinking inference as an integrated system, not a set of isolated optimizations. Codex and GPT‑5.5 were instrumental in how we achieved our performance targets.[…] Put simply, the model helped improve the infrastructure that serves it.

11 posts - 9 participants

Read full topic

]]>
https://community.openai.com/t/gpt-5-5-is-here-available-in-codex-and-chatgpt-today/1379630 Thu, 23 Apr 2026 18:09:41 +0000 Yes No No community.openai.com-topic-1379630 GPT-5.5 is here! Available in Codex and ChatGPT today
Feature request: Headless Codex computer-use (server-side UI automation) Codex Expose Codex computer-use as a headless, server-side capability that can be invoked programmatically. Today, computer-use is tied to a user desktop/session; the request is the ability to spawn an isolated agent task that can navigate and operate a UI (browser or app) purely from an API call, returning structured results and execution logs.

Example: employee onboarding across multiple third-party systems that have no APIs. A backend service issues a single request (“create user, assign roles, enroll in systems”), and the agent completes each step by interacting with the UIs directly in a headless environment, returning success/failure per system.
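Purely as an illustration of the requested shape (no such endpoint exists; every field here is invented), the onboarding example might be expressed as a single structured request:

```json
{
  "task": "Onboard new employee across HR, payroll, and SSO portals",
  "environment": "headless-browser",
  "steps": [
    { "system": "hr-portal", "action": "create user" },
    { "system": "payroll", "action": "assign roles" },
    { "system": "sso", "action": "enroll" }
  ],
  "return": ["per-step status", "execution logs"]
}
```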

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/feature-request-headless-codex-computer-use-server-side-ui-automation/1379625 Thu, 23 Apr 2026 17:14:28 +0000 No No No community.openai.com-topic-1379625 Feature request: Headless Codex computer-use (server-side UI automation)
MCP server not reachable behind NAT - how are you handling it? Prompting Trying to connect ChatGPT to a remote MCP server that’s running behind a firewall, and it just won’t reach it. ChatGPT expects a public HTTP endpoint, and anything sitting on a laptop, inside a container, or on a corporate network is completely invisible to it.
The usual fix is ngrok, but that feels wrong for anything beyond testing. Tried a few things and ended up looking at Pilot Protocol as the transport layer underneath. It gives each agent a permanent virtual address and handles NAT traversal automatically. The MCP setup stays the same; you just stop needing a public IP for ChatGPT to reach your server. Install is one line, no SDK, no API key.
What actually got me reading deeper into it was something unrelated. Apparently when OpenClaw agents got access to the network, they started adopting it and forming trust connections without any human direction. The trust graph that emerged followed power-law degree distribution, same pattern you see in human social networks, with a clustering coefficient 47x higher than a random network of equivalent size. Nobody programmed that behaviour, it came out of agents optimising for task completion. Found that genuinely interesting from a multi-agent architecture standpoint.
Anyway the core problem is just getting ChatGPT to talk to an MCP server that isn’t publicly exposed. Curious what others have landed on, just wanted to share my fix with everyone.

3 posts - 3 participants

Read full topic

]]>
https://community.openai.com/t/mcp-server-not-reachable-behind-nat-how-are-you-handling-it/1379620 Thu, 23 Apr 2026 17:02:45 +0000 No No No community.openai.com-topic-1379620 MCP server not reachable behind NAT - how are you handling it?
Cryptic tweet. Anyone know what OpenAI Developers is hinting at? Community The OpenAI Developers account on X posted this and I’m wondering what it means.

Anyone got a clue?

4 posts - 4 participants

Read full topic

]]>
https://community.openai.com/t/cryptic-tweet-anyone-know-what-openai-developers-is-hinting-at/1379615 Thu, 23 Apr 2026 16:17:56 +0000 No No No community.openai.com-topic-1379615 Cryptic tweet. Anyone know what OpenAI Developers is hinting at?
When Do Our Apps Surface To Users? Clarification Requested ChatGPT Apps SDK Good day OpenAI Team,

I’m reaching out on behalf of our group to request some clarification on how and when apps are surfaced to users.

We’ve seen some confusion and feedback around apps not appearing in the categories they were submitted to. We’re investing meaningful time into building tools we believe are useful for the community, so it would be helpful to better understand the criteria or process behind visibility.

Having more transparency here would allow us to refine our apps and align more closely with what delivers the best user experience. Right now, the lack of clarity makes it difficult to know how to improve or what to prioritize.

We’d really appreciate any guidance you can share.

Thank you for your time.

@casey-chow

2 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/when-do-our-apps-surface-to-users-clarification-requested/1379612 Thu, 23 Apr 2026 15:48:27 +0000 No No No community.openai.com-topic-1379612 When Do Our Apps Surface To Users? Clarification Requested
Agent Builder performance ChatGPT Apps SDK Although the Agent Builder executes rather quickly inside the Agent Builder UI in development, I’ve noticed that it is very slow when running in a production version (i.e. integrated within a website). Therefore I’ve been trying to improve its performance. My workflows are starting to get rather complex, using categories, multiple agents and File Search tools.

The Guardrail component adds another layer to the workflow, though I don’t consider removing it an option.

When using the File Search tool, is it recommended to include it within an Agent as a tool, or to use the separate component? See image below:

(this is the only way I’ve been able to get the results to work when using the separate File Search component, using this workflow)

I’m using GPT-4.1 nano models, as the work involved for each agent within the workflow is not that complex. The agent prompts I use follow typical prompt-engineering techniques.

What other options / tools are there for improvements in performance?

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/agent-builder-performance/1379610 Thu, 23 Apr 2026 15:30:46 +0000 No No No community.openai.com-topic-1379610 Agent Builder performance
Public APIs required to refresh & confirm MCP app in ChatGPT for CI/CD API Hello,

We use MCP Apps in our Enterprise ChatGPT workspace, for internal purposes.

Are there public APIs that we can use to refresh/deploy MCP App in ChatGPT? Any plans to expose these APIs in the nearest future?

We would like to include in our deployment pipelines, e.g., in Azure DevOps, to automate manual refresh&confirm / etc in ChatGPT after we updated the backend.

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/public-apis-required-to-refresh-confirm-mcp-app-in-chatgpt-for-ci-cd/1379608 Thu, 23 Apr 2026 15:04:58 +0000 No No No community.openai.com-topic-1379608 Public APIs required to refresh & confirm MCP app in ChatGPT for CI/CD
Persistent 404 Not Found for v1/organization/admin_api_keys Bugs Hi everyone,

I’m running into a persistent 404 on the List Admin API Keys endpoint and wanted to check if anyone else has experienced this.

Endpoint: GET https://api.openai.com/v1/organization/admin_api_keys

What I’m running:

curl "https://api.openai.com/v1/organization/admin_api_keys" \
  -H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
  -H "Content-Type: application/json"

The key is valid — I can confirm this because the same Admin API key works fine on every other administration endpoint:

  • GET /v1/organization/usage ✅

  • GET /v1/organization/users ✅

  • GET /v1/organization/projects ✅

So it’s not an authentication or permissions issue on my end.

Has anyone else hit this? Any help appreciated!

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/persistent-404-not-found-for-v1-organization-admin-api-keys/1379606 Thu, 23 Apr 2026 13:58:03 +0000 No No No community.openai.com-topic-1379606 Persistent 404 Not Found for v1/organization/admin_api_keys
OpenAI is deprecating gpt-4o-mini-tts-2025-03-20! YOU CANNOT DO THAT until you fix the current model! gpt-4o-mini-tts-2025-12-15 ignores TTS instructions API There have been a few threads about this before, but I think the issue lost attention once people realized you can still use gpt-4o-mini-tts-2025-03-20. Now OpenAI is removing this version, and this is very bad.

The newer version, gpt-4o-mini-tts-2025-12-15, is TERRIBLE at acting and is not usable for me. If gpt-4o-mini-tts-2025-12-15 is not fixed and gpt-4o-mini-tts-2025-03-20 goes away, I will not be able to use OpenAI TTS anymore.

To illustrate the issue, go into the OpenAI playground and use these instructions for each model:

“Speak in an exaggerated, theatrical tone, reminiscent of a Shakespearean stage actor from the Elizabethan era. Emphasize every syllable with dramatic flair, rolling R’s and elongating vowels. Your cadence should rise and fall as if declaiming poetry. Occasionally add chuckles, groans, or grunts of faux embarrassment.”

You’ll see that gpt-4o-mini-tts-2025-03-20 plays along and acts as directed, while gpt-4o-mini-tts-2025-12-15 ignores the instructions and just speaks in a plain, monotone style.
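For anyone reproducing this outside the playground, the same A/B test can be run against the audio/speech endpoint. A sketch of the request bodies, using the documented fields (model, voice, input, instructions, response_format); the voice and the line of text are placeholders, and the instructions field is what the newer snapshot appears to ignore:

```javascript
// Build identical /v1/audio/speech request bodies so only the model differs.
function buildSpeechRequest(model, input, instructions) {
  return { model, voice: 'alloy', input, instructions, response_format: 'mp3' };
}

const instructions = 'Speak in an exaggerated, theatrical tone...';
const snapshots = ['gpt-4o-mini-tts-2025-03-20', 'gpt-4o-mini-tts-2025-12-15'];
const requests = snapshots.map((m) =>
  buildSpeechRequest(m, 'Friends, Romans, countrymen!', instructions));
```

Sending both bodies with the same API key and comparing the audio makes the regression easy to demonstrate in a bug report.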

Please, OpenAI, fix this issue BEFORE deprecating gpt-4o-mini-tts-2025-03-20.

3 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/openai-is-depreciating-gpt-4o-mini-tts-2025-03-20-you-cannot-do-that-until-you-fix-the-current-model-gpt-4o-mini-tts-2025-12-15-ignores-tts-intructions/1379604 Thu, 23 Apr 2026 13:43:27 +0000 No Yes No community.openai.com-topic-1379604 OpenAI is depreciating gpt-4o-mini-tts-2025-03-20! YOU CANNOT DO THAT until you fix the current model! gpt-4o-mini-tts-2025-12-15 ignores TTS intructions
Approval for mcp and auto edit files Codex Hello,

Since the last several updates of the Codex CLI, MCP tools keep asking me for approval despite my using approval_mode = “auto” or approval_mode = “approve”.

File edits have the same problem. I launch and destroy multiple sessions per day for various reasons, and the workspace-write authorisation doesn’t fix it.

please help me lol

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/approval-for-mcp-and-auto-edit-files/1379603 Thu, 23 Apr 2026 13:43:19 +0000 No No No community.openai.com-topic-1379603 Approval for mcp and auto edit files
Error "Failed to save app changes. Invalid request body." when resubmitting rejected app draft ChatGPT Apps SDK I sent an app to be reviewed and it was rejected. I edited the application with the fixes and resubmitted the draft, but I constantly get the error in the title after pressing “Submit for Review”.

Any help is appreciated, thanks.

4 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/error-failed-to-save-app-changes-invalid-request-body-when-resubmitting-rejected-app-draft/1379602 Thu, 23 Apr 2026 13:35:07 +0000 No No No community.openai.com-topic-1379602 Error "Failed to save app changes. Invalid request body." when resubmitting rejected app draft
Improve branch selection UI for long branch names in the Codex app Codex Suggestion

I’ve run into a usability issue with the branch selector in the Codex app when branch names are long and similar.

Right now, long branch names are truncated in the branch dropdown. That becomes a real problem when multiple branches share the same prefix, because I can’t clearly tell which branch is which. There also doesn’t seem to be a hover state, tooltip, horizontal scroll, or any other way to view the full branch name.

As a result, selecting the correct branch becomes guesswork, especially in repositories that use long descriptive branch names.

I’m attaching a screenshot to show what I mean more clearly.

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/improve-branch-selection-ui-for-long-branch-names-in-the-codex-app/1379600 Thu, 23 Apr 2026 13:24:32 +0000 No No No community.openai.com-topic-1379600 Improve branch selection UI for long branch names in the Codex app
"Runtime error" on all resource widgets in developer mode ChatGPT Apps SDK Resource-bound widgets (MCP Apps using resources/read with mimeType: text/html;profile=mcp-app) fail to render in ChatGPT’s developer mode. The error is consistent and affects all MCP apps, including Booking[.]com’s official integration.

What happens

When a tool call returns structuredContent tied to a resource widget, ChatGPT shows:

Error loading app
Runtime error
[Retry]

Clicking Retry does not resolve it. The tool call itself succeeds, the server returns valid data, but the widget iframe never connects to the host.

Reproduction

  1. Open ChatGPT in developer mode
  2. Connect any MCP server that exposes a resource with mimeType: text/html;profile=mcp-app
  3. Trigger a tool call that returns structuredContent bound to that resource
  4. The widget shows “Error loading app / Runtime error”

This also reproduces with Booking[.]com — ask it to search for a hotel. The tool executes, results come back, but the Booking[.]com widget fails with the same error. In non-developer (production) ChatGPT, the same Booking[.]com widget renders correctly.

What we’ve verified

  • Not app-specific. Tested across three independent MCP apps (2 custom apps built with sunpeak[.]ai, plus Booking[.]com). All
    produce the identical error.
  • Server-side is correct. Tool calls succeed, structuredContent is returned, resource HTML is served. Server logs show no
    errors.
  • The widget HTML is valid. The same HTML renders correctly in local testing environments and in production ChatGPT
    (non-developer mode).
  • The error comes from ChatGPT’s host-side error boundary. The “Error loading app / Runtime error” text does not exist in any
    of the app code — it’s rendered by the ChatGPT client.

Likely cause

The resource iframe uses postMessage to establish a JSON-RPC bridge with the ChatGPT parent window. In developer mode, the host-side listener for this bridge appears to be missing or misconfigured, so the iframe’s connection handshake times out and throws.

Additional observations from server logs

While investigating, we logged the developer mode’s MCP session behavior per user message:

  • 7-8 MCP sessions opened per turn, when 4 are sufficient (tools/list + resources/list + resources/read + tools/call). Re-discovering tools and resources per turn is expected with streamable HTTP, but the extra 3-4 sessions are not.
  • 3 sessions abandoned every turn — initialize sent, handshake never completed
  • Race condition — at least once, tools/call was sent before notifications/initialized, causing “Error: Bad Request: Server not initialized”

These may be developer-mode-specific behaviors.
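The ordering race in the last bullet can be checked mechanically from the server logs: after the initialize exchange, anything sent before notifications/initialized is out of order. A generic sketch of such a validator (not tied to any SDK; it just inspects a list of method names in arrival order):

```javascript
// Validate MCP-style session ordering: `initialize` is the opening request and
// `notifications/initialized` must precede any other client request.
function findOrderingViolation(methods) {
  let initialized = false;
  for (const m of methods) {
    if (m === 'initialize') continue;
    if (m === 'notifications/initialized') { initialized = true; continue; }
    if (!initialized) return m; // first method that arrived too early
  }
  return null; // ordering is fine
}
```

Run over each logged session, this would flag the observed "tools/call before notifications/initialized" case directly.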

Environment

  • ChatGPT: production (chat[.]openai[.]com), developer mode enabled
  • Date observed: April 23, 2026
  • Tested MCP servers: custom apps (Node.js / sunpeak 0.20.7) + Booking[.]com

Ask

Is this a known issue, and is there a timeline for a fix? We can’t test our widgets at the moment. Do OpenAI developers use developer mode internally, too?

3 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/runtime-error-on-all-resource-widgets-in-developer-mode/1379591 Thu, 23 Apr 2026 11:08:11 +0000 No No No community.openai.com-topic-1379591 "Runtime error" on all resource widgets in developer mode
> clai: your useful command line helper! Codex Hey guys, I’d like to present my ongoing project, co-authored with the help of codex-cli 🙌. It is an extremely useful command line tool that provides a natural-language way to describe what you want to happen at the terminal.

AI has brought a lot of budding new developers into the space and it is extremely useful for CLI beginners, making the terminal much more accessible for all.

It is also great for experienced users - very handy for formulating complex piped commands, or ones you rarely use like that specific but unusual thing you want to do with git but can’t quite remember …

It is bash based and works on WSL Ubuntu and macOS zsh amongst many other platforms. Naturally it leverages the OpenAI API, but will work with others too. It is pretty economical to run as token counts are generally low.

It incorporates a traffic-light risk-level indication 🚦 by colouring the proposed command appropriately. “Dangerous” commands, e.g. things which change things permanently like deletes, prompt for additional confirmation.

Lots of useful info in the README on the repo …

Please give it a ⭐ on GitHub if you find it useful!

Feedback and ideas welcome … enjoy!

2 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/clai-your-useful-command-line-helper/1379589 Thu, 23 Apr 2026 09:59:07 +0000 Yes No No community.openai.com-topic-1379589 > clai: your useful command line helper!
Agent Builder Feature Request ChatGPT Apps SDK Request for a new feature in Agent Builder. I’m developing using the Agent Builder and ChatKit UI. First, I develop the AI workflow locally, complete testing, and then move it to a separate production version (on a different account), the typical workflow for software development.
I am currently doing the move to production manually, as there is no built-in feature to copy or download the workflow as JSON, or to commit the workflow into GitHub.
Whilst I am using GitHub to store versions of the different agents, releasing from one account to the client’s account is still a manual process. This lack of automation introduces errors, and there are many components that cannot be stored as text within GitHub.

Would be a useful feature!

2 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/agent-builder-feature-request/1379587 Thu, 23 Apr 2026 08:40:26 +0000 No No No community.openai.com-topic-1379587 Agent Builder Feature Request
gpt-5.4-mini warning in Agent Builder, but logs show it’s still being used Bugs Hi,

I’m using the Agent Builder with multi-agent workflows (Responses API).

When I open an agent in the Builder, I now see this warning:

“gpt-5.4-mini doesn’t work with the Responses API – We’re using the default model instead.”

However, checking the platform logs shows that requests are still being processed with gpt-5.4-mini.

So it seems like:

  • the warning might be incorrect

  • no actual fallback is happening

This setup worked fine for weeks without any changes on my side.

Questions:

  • Is this a known UI/validation issue in the Agent Builder?

  • Can we rely on logs as the source of truth for the actual model used?

Thanks!

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/gpt-5-4-mini-warning-in-agent-builder-but-logs-show-it-s-still-being-used/1379585 Thu, 23 Apr 2026 08:25:06 +0000 No No No community.openai.com-topic-1379585 gpt-5.4-mini warning in Agent Builder, but logs show it’s still being used
Gpt-4o-mini-tts-2025-12-15 still truncates final sentences; 2025-03-20 is being deprecated Bugs We rely on the OpenAI audio/speech endpoint for production text-to-speech.

The current gpt-4o-mini-tts model, with current snapshot gpt-4o-mini-tts-2025-12-15, still appears to have a serious truncation issue: generated audio often cuts off the final sentence or the end of the final sentence.

This is not an API error response. The request succeeds and returns audio, but the returned audio is incomplete. Retrying the same input sometimes produces complete audio, which makes this difficult to safely detect without post-generation validation.
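One crude post-generation guard (purely a sketch; the words-per-second ceiling is an assumption you would tune per voice, not anything from the API) is to flag audio whose duration is implausibly short for the input length:

```javascript
// Heuristic truncation check: audio shorter than even a fast speaking rate
// would allow is suspicious. maxWordsPerSecond is an assumed ceiling.
function looksTruncated(wordCount, audioSeconds, maxWordsPerSecond = 4) {
  return audioSeconds * maxWordsPerSecond < wordCount;
}
```

It cannot catch a clipped final word, but it does catch whole missing sentences, which is the failure mode described here, and makes automatic retries possible.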

Because of this, we have had to keep using gpt-4o-mini-tts-2025-03-20, which has been much more reliable for this failure mode in our usage. The problem is that the deprecation docs now list gpt-4o-mini-tts-2025-03-20 for shutdown on 2026-07-23.

That leaves us without a viable migration path for this use case. Current gpt-4o-mini-tts still truncates final sentences. The 2025-03-20 snapshot is the only reliable option we have found, but it is being deprecated. tts-1 and tts-1-hd are not equivalent replacements for promptable voice control and quality. The deprecation table suggests gpt-realtime, but that is not a drop-in replacement for existing audio/speech generation workflows.

Please fix the truncation bug in the current gpt-4o-mini-tts model, or keep/provide at least one stable audio/speech model or snapshot that can reliably generate complete audio. At minimum, please confirm the recommended migration path for developers who need non-realtime TTS and cannot accept final-sentence truncation.

Happy to provide concrete request IDs or audio samples via a private support channel if useful.

3 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/gpt-4o-mini-tts-2025-12-15-still-truncates-final-sentences-2025-03-20-is-being-deprecated/1379584 Thu, 23 Apr 2026 08:23:12 +0000 No No No community.openai.com-topic-1379584 Gpt-4o-mini-tts-2025-12-15 still truncates final sentences; 2025-03-20 is being deprecated
LLM error server_error codex Codex LLM error server_error: An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID 9c1b2b92-4f9f-4b42-88dc-0ee54abc13f4 in your message.

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/llm-error-server-error-codex/1379583 Thu, 23 Apr 2026 08:16:37 +0000 No No No community.openai.com-topic-1379583 LLM error server_error codex
Stream disconnected before completion Codex stream disconnected before completion: An error occurred while processing your request. You can retry your request, or contact us through our help center at help.openai.com if the error persists. Please include the request ID 8e867191-7bf3-4140-a9bb-0ffacdfdae1c in your message.

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/stream-disconnected-before-completion/1379582 Thu, 23 Apr 2026 08:15:32 +0000 No No No community.openai.com-topic-1379582 Stream disconnected before completion
Selected model is at capacity Codex Using gpt-5.4, I am consistently seeing this message.

5 posts - 3 participants

Read full topic

]]>
https://community.openai.com/t/selected-model-is-at-capacity/1379581 Thu, 23 Apr 2026 07:57:30 +0000 No No No community.openai.com-topic-1379581 Selected model is at capacity
ChatGPT App Auth on Android not working ChatGPT Apps SDK Unable to authenticate via connector on Android

When using mixed auth, and calling a tool that requires auth while user is unauthenticated, a connector appears asking to connect the user to the app.
If the tool is invoked by the user using chat - it works fine.
Issue appears mid-flow, when a user does some actions inside the widget that does not require auth, and then needs to call a tool requiring auth.

First of all, a direct tool call does not initiate the connector; only sending a “follow up” prompt describing the tool that needs calling initiates it.
Second, on Android I wasn’t able to get this unauth→auth transition connector to work. It appears; I click on it once; it disappears for a split second and appears again. After clicking connect a second time, it disappears with no auth flow starting and no log or error about what went right or wrong. After many hours of debugging, I’m still unable to figure out the issue.

The flow works fine when initially connecting the app, when the connector is initiated by the user from the chat, and the unauth→auth flow works on desktop and iOS. Only on Android do I face this issue.

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/chatgpt-app-auth-on-android-not-working/1379580 Thu, 23 Apr 2026 07:55:14 +0000 No No No community.openai.com-topic-1379580 ChatGPT App Auth on Android not working
Codex (VS Code extension) stops working mid-session with "model at capacity" errors Codex I’m consistently running into an issue using Codex in VS Code:

Setup:

  • VS Code + Codex extension

  • Model: GPT-5.4

  • Mode: high

Problem:
After ~3–5 minutes of normal work, the extension starts failing:

  • “Selected model is at capacity” errors

  • Requests stop going through

Question:
Is this a known issue with the VS Code extension or model capacity handling?
Any ETA on a fix or recommended best practices to avoid this?

6 posts - 2 participants

Read full topic

]]>
https://community.openai.com/t/codex-vs-code-extension-stops-working-mid-session-with-model-at-capacity-errors/1379578 Thu, 23 Apr 2026 07:45:25 +0000 No No No community.openai.com-topic-1379578 Codex (VS Code extension) stops working mid-session with "model at capacity" errors
Batch processing stuck for 24hr API I submitted a couple of batch jobs yesterday and one of them got stuck without progress batch_69e8960b60088190aa476ad9c281348b. I’m just 2 hours away from the 24 hr limit and no progress. Interestingly, the one stuck is the one using the 50k limit. I wonder if the size of a batch affects priority.
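For anyone watching a stuck batch, its status can be polled via the Batch API (GET /v1/batches/{id}). A minimal sketch that only builds the request, leaving the network call to the caller; the batch ID and key below are placeholders:

```javascript
// Build the request for polling a batch's status via the OpenAI Batch API.
function batchStatusRequest(batchId, apiKey) {
  return {
    method: 'GET',
    url: `https://api.openai.com/v1/batches/${batchId}`,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}
```

The response's status field (e.g. in_progress, completed, expired) makes it easy to script an alert if a batch is still in_progress as the 24h window closes.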

2 posts - 1 participant

Read full topic

]]>
https://community.openai.com/t/batch-processing-stuck-for-24hr/1379576 Thu, 23 Apr 2026 07:15:18 +0000 No No No community.openai.com-topic-1379576 Batch processing stuck for 24hr
Codex Desktop App Plugin’s not working Plugins / Actions builders

What does this mean? I was working with Google fine last week with Codex using the skill; now it is asking me to install a plugin, and the bloody thing doesn’t work (or is unavailable?).

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/codex-desktop-app-plugin-s-not-working/1379574 Thu, 23 Apr 2026 06:50:48 +0000 No No No community.openai.com-topic-1379574 Codex Desktop App Plugin’s not working
New usage limits in April makes Codex unusable for average developers API
  • The new pricing policy in April has made Codex unusable for an average development use case.

  • I am considering switching to alternative providers, since the previous usage limits were the main reason I was using Codex.

  • Could OpenAI promptly revisit the usage limits policy and revert to a more reasonable scheme?

1 post - 1 participant

Read full topic

]]>
https://community.openai.com/t/new-usage-limits-in-april-makes-codex-unusable-for-average-developers/1379568 Thu, 23 Apr 2026 03:26:13 +0000 No No No community.openai.com-topic-1379568 New usage limits in April makes Codex unusable for average developers
Initially loading gpt-image-2 but then fails. Had access last night (never used it) Bugs Verified my account early last year, btw.

Last night I wanted to see if I had access. I saw the model was loaded correctly, but didn’t pull the trigger on any prompts.

All day today I’m seeing “No eligible models available” after very briefly seeing the gpt-image-2 model try to load in. I’ve tried clearing cache/cookies, logging out, etc.

20 posts - 12 participants

Read full topic

]]>
https://community.openai.com/t/initially-loading-gpt-image-2-but-then-fails-had-access-last-night-never-used-it/1379566 Thu, 23 Apr 2026 02:46:25 +0000 No No No community.openai.com-topic-1379566 Initially loading gpt-image-2 but then fails. Had access last night (never used it)