Yeah this makes sense, and I can see why it’s frustrating. The part where it keeps asking for more proof without actually pointing to anything missing is the real problem.
If you’ve already validated the sources, it shouldn’t keep circling back like that; it should just move forward.
Which model have you been using when you notice this?
]]>On the OpenAI Models page, the latest Realtime models are showing up as deprecated. At the same time, these same Realtime models are missing from the OpenAI API Platform, specifically under Audio → Prompts.
What’s confusing is that the only Realtime models available for configuration in the API Platform are the legacy gpt-4o Realtime models, which themselves are scheduled for deprecation.
This makes it seem possible that:
- the intent was to deprecate the legacy Realtime models, but
- the newest Realtime models were deprecated or removed by mistake instead.
Can someone from OpenAI please confirm whether this is:
- an intentional deprecation,
- a temporary UI/platform issue, or
- an accidental regression?
This impacts the ability to configure Realtime audio workflows using the latest models, so clarification would be greatly appreciated.
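In the meantime, one way to check what’s actually visible to your key is the Models endpoint; a minimal sketch assuming the official `openai` Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# List every model visible to this API key and keep the realtime ones.
realtime_models = [m.id for m in client.models.list() if "realtime" in m.id]
print("\n".join(sorted(realtime_models)))
```

If the newest Realtime models are missing from that list as well, that would point to more than a UI issue.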
Thanks!
Models Page
API Platform
]]>My app was among the first 200 approved. It has never appeared in the directory, but until recently users could at least find my app (NA Drink Finder) by searching for it by name. Now it doesn’t surface in search at all, meaning anyone without a direct link has no way to discover it.
This breaks basic promotion. I can’t tell users on social media to “search NA Drink Finder in ChatGPT” because the search returns nothing. An approved app that can’t be found by its exact name isn’t really launched.
A few direct questions:
Transparency here would go a long way. Right now the marketplace feels closed off to indie developers, which I don’t think is the intent.
]]>The remaining issue is correlation: the Twilio Call SIDs are not directly present in the OpenAI-side logs we can search, so we need one OpenAI-side identifier from the incoming SIP webhook to trace a specific call end-to-end.
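As a stopgap while waiting on support, you could log your own correlation record at webhook time; a rough sketch assuming a Flask endpoint, with the payload field names being assumptions rather than the documented schema:

```python
import json, logging
from flask import Flask, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)

@app.post("/openai-sip-webhook")
def openai_sip_webhook():
    event = request.get_json(force=True)
    # Field name below is an assumption -- adapt it to the actual webhook payload.
    call_id = event.get("data", {}).get("call_id")
    # Log the OpenAI-side call ID with the full payload so it can later be
    # matched against the Twilio Call SID for the same call window.
    logging.info("openai_call_id=%s payload=%s", call_id, json.dumps(event))
    return "", 200
```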
Please reach out to our support (via help.openai.com) and share the following for one or two affected examples:
I am still a little concerned about the fact that Plus accounts are not documented as having access to this feature.
]]>

"On Plus (personal workspace), the Developer Mode + full MCP app management UI (including the “Refresh” actions button) isn’t supported, so that control can disappear even if you still have an old deep link.
To get the Refresh flow back, you need a Business / Enterprise / Edu workspace with developer mode enabled, then use Workspace settings → Apps → (your app) → Action control → Refresh (Business note: published apps can’t be updated in-place—changes require recreate/republish)."
The documentation for that is here:
https://help.openai.com/en/articles/12584461-developer-mode-and-mcp-apps-in-chatgpt-beta
]]>Welcome to the dev community!
Weekly quota isn’t based purely on message count, and it’s not just raw tokens either; it works more like a blended usage system.
Each message does involve tokens, but different models and features can have different usage impacts. So a request to a more advanced model or certain tools may count more toward your quota than a simpler one, even if the message length is similar.
That’s why, in practice, things like model choice, response length, and feature usage can all affect how quickly you reach your limits.
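To make the “blended” idea concrete, here’s a purely illustrative sketch; every weight below is invented and does not reflect OpenAI’s actual accounting:

```python
# Illustrative only: hypothetical per-request "quota cost" model.
# Real weights and factors are not public; these numbers are invented.
MODEL_WEIGHT = {"advanced-model": 5.0, "standard-model": 1.0}
TOOL_WEIGHT = {"web_search": 1.5, "none": 1.0}

def quota_cost(model: str, tokens: int, tool: str = "none") -> float:
    """Blend model choice, token volume, and tool usage into one cost."""
    return tokens * MODEL_WEIGHT[model] * TOOL_WEIGHT[tool]

# Two messages of similar length can consume very different amounts of quota:
print(quota_cost("standard-model", 500))                # 500.0
print(quota_cost("advanced-model", 500, "web_search"))  # 3750.0
```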
~Smith
]]>the ability to work with longer-form videos derived from multiple sequential images
Your ideas are outstanding. However, generating motion (e.g. fluid animation) would, no doubt, require massive amounts of expensive compute, which was why Sora was scuttled.
]]>Dear Team,
I would like to congratulate your team on the recent improvements in image processing and generation. The progress of this functionality has been remarkably meaningful and opens up a wide range of practical possibilities.
Building on this advancement, I would like to suggest a potential expansion: the ability to work with longer-form videos derived from multiple sequential images.
For example, it would be extraordinarily compelling to allow users to submit a large sequence of images—such as comic book or manga pages containing more than 100 panels—and enable the AI to:
Such a feature could represent a major breakthrough for content creators, artists, and even media companies, significantly reducing the effort required to transform visual stories into fully realized animations.
I believe that, given the capabilities already in place, this would be a natural and highly impactful next step.
Thank you for your attention and for the excellent work that has been carried out.
]]>Edited to add: I already tried the standard troubleshooting steps… No browser plugins are installed, I cleared cache, I logged out and back in, and I logged in via a browser I’ve never used ChatGPT in before. Still can’t see my connected apps.
]]>The shift is real. The newer realtime model treats instructions more like guidance than strict rules, so things that used to act like hard constraints now get interpreted more loosely.
For example:
Older model:
When you can’t clearly hear the user, don’t proceed. If there’s background noise or you only caught part of the sentence, pause and ask them politely to repeat themselves in their preferred language, and keep the conversation in the same language.
Newer model:
Only respond to clear audio or text.
If audio is unclear/partial/noisy/silent, ask for clarification in {preferred_language}.
Continue in the same language as the user if intelligible.
The newer version works better because it’s broken into short, explicit rules instead of one long instruction. That structure tends to stick more reliably with 1.5.
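If it helps, here’s a minimal sketch of sending those short rules as session-level instructions over the Realtime API WebSocket; the model name and the `websockets` dependency are assumptions, so adapt to your setup:

```python
import asyncio, json, os
import websockets  # pip install websockets

# Short, explicit rules, as in the "newer model" example above.
INSTRUCTIONS = (
    "Only respond to clear audio or text.\n"
    "If audio is unclear/partial/noisy/silent, ask for clarification "
    "in the user's preferred language.\n"
    "Continue in the same language as the user if intelligible."
)

async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-realtime"  # model name assumed
    headers = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}
    # Note: older websockets versions take `extra_headers` instead.
    async with websockets.connect(url, additional_headers=headers) as ws:
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"instructions": INSTRUCTIONS},
        }))

asyncio.run(main())
```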
Also, you might find the “Using realtime models” prompting guide helpful since it’s aligned with how the newer model behaves.
Curious if tightening the prompt like this improves things on your side.
-Mark G.
]]>I am able to create new apps, but that is not sustainable long-term.
This seems to be happening to a coworker as well on a totally different account, so it does not appear to be a local issue.
]]>Here’s DALL·E 3, same prompt:
]]>It is not entirely clear where the noise comes from. It could be a watermark that is mixed in to recognize AI images. That was an idea from @_j and makes sense to me. It could be part of a deepfake detection method, which unfortunately becomes more important the better the generators get.
But it could also be an artifact of the detail-adding methods, or a problem in the math and method of the generator itself. I have experimented with such methods myself: you can make an image more detailed by specifically modifying the latent space during generation, or by giving the image a starting direction by influencing the initial noise, so the start is no longer completely random.
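To illustrate the “starting direction” idea, here’s a minimal sketch using the open-source diffusers library; this is my own experiment-style illustration, not a claim about how OpenAI’s generator works:

```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion 1.x checkpoint works here; the id is an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Instead of fully random initial noise, seed the latents deterministically,
# giving the generation a fixed "starting direction".
generator = torch.Generator("cuda").manual_seed(1234)
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 64, 64),  # 64x64 latents -> 512x512 image
    generator=generator, device="cuda", dtype=torch.float16,
)

image = pipe("clouds over a structured surface", latents=latents).images[0]
image.save("seeded_start.png")
```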
Since the new generator is a hybrid method that, as far as I know, is not publicly explained, it is difficult to say where the pattern comes from. But it is clearly amplified by the fact that the generator reuses image data. The patterns are already visible to me in the very first image, and they are disturbing, especially on structured surfaces or clouds. I do not know exactly what my brain is doing, but it feels uncomfortable, just like the flickering confetti noise that DALL·E 3 generated. It is similar to the warbling distortion artifacts in heavily compressed MP3 files. Newer audio compression methods are better: their information loss has less acoustic structure. Visually, the brain is stressed in the same way as by those acoustic distortions, even before the pattern becomes destructive.
No unique pictures anymore in the same session: a related problem is that it is now not possible to make multiple different images with different seeds using the same prompt. Each time, the session must be renewed for a new picture.
It would be an easy fix for OpenAI: just do not reuse image data for the next images, and two problems are fixed at once: no noise amplification and no identical images in the same session anymore. Re-editing does not really work this way anyway; for a re-edit, the edit page should be used, including re-prompting or prompt-based corrections.
(For testing, I usually make 10 pictures of the same prompt, which means I now have to restart the page for every new generation. Very tedious. I think many customers make more than one image with the same prompt if they like the outcome.)
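One workaround for batch testing: the Images API can return several independent generations in one call, without the web UI’s session reuse; a minimal sketch assuming the `openai` Python SDK (model name assumed):

```python
from openai import OpenAI

client = OpenAI()

# Request several independent generations of the same prompt in one call,
# instead of re-rolling inside a single chat session.
result = client.images.generate(
    model="gpt-image-1",  # model name assumed
    prompt="clouds over a mountain lake at sunset",
    n=4,
)
for i, img in enumerate(result.data):
    print(i, img.url or "(base64 payload)")
```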
These are 5 images in a row, generated with the same prompt:
Each generation reinforces the pattern.
And all images are the same.
Here every prompt was different; this is the 5th generation:
Same effect: it keeps data from the previous pictures.
The destruction is even worse, because the image data differs at each step.
Short answer: yes, Pro accounts are expected to have access to Developer mode. When it’s missing, it’s usually not your plan; it’s something more subtle.
What’s been happening for a few folks is the UI just doesn’t show up because of session or browser quirks. Not ideal, but it happens.
A few quick things that have worked:
Sometimes that alone makes the toggle appear.
If it’s still not there after that, it’s likely tied to an account flag or rollout edge case. At that point, reaching out to [email protected] is the fastest way to get eyes on your specific account.
-Mark G.
]]>Highlights:
The first post collects all the findings and will be updated from time to time; when bugs are fixed, their entries on the first page will be deleted.
Important:
Bugs:
Noise amplification: The generator keeps some data from previously generated images and reuses it for the next ones. This causes a very bad bug: it amplifies noise patterns very quickly, and after just 3-5 pictures the images are destroyed.
At the moment, the only workaround is to restart the session by reloading the web page.
No unique pictures anymore in the same session: a related problem is that it is no longer possible to make multiple different images with different seeds using the same prompt. Each time, the session must be renewed for a new picture.
From what we’ve seen, this isn’t a hard technical blocker as much as a missing product layer. The pieces exist, but the “handoff” experience between ChatGPT and Codex hasn’t been fully stitched together yet. Same story with feedback: it works today, but it’s not obvious or trackable, which makes it feel like it disappears.
Your idea around explicit handoff objects (instead of full chat mirroring) lines up with how a lot of folks are thinking about it. Way more practical and controllable.
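Purely as an illustration of the shape people have been floating (every field name here is hypothetical), a handoff object might look something like this:

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """Hypothetical explicit handoff from a ChatGPT thread to Codex."""
    source_conversation_id: str  # where the discussion happened
    goal: str                    # one-line statement of intent
    constraints: list[str] = field(default_factory=list)       # agreed limits
    context_snippets: list[str] = field(default_factory=list)  # curated, not mirrored
    acceptance_criteria: list[str] = field(default_factory=list)

handoff = Handoff(
    source_conversation_id="conv_123",
    goal="Refactor the auth middleware to support API keys",
    constraints=["No breaking changes to the session cookie flow"],
    acceptance_criteria=["Existing tests pass", "New key path covered by tests"],
)
```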
I’ve flagged this and shared it with the team for proper logging so it doesn’t get lost. The feedback angle especially comes up a lot, so you’re adding weight to something that’s already on the radar.
-Mark G.
]]>Seen it a couple of times today.
]]>Clearly, that’s striking the dome, and NASA is fully exposed…
Ohio “St. Patrick’s Day” Blast on Tuesday, March 17, 2026, at 9:00 AM EST. A 7-ton asteroid (around 6 feet wide) entered the atmosphere at 45,000 mph. It fragmented 30 miles above Valley City, Ohio. Impact: The explosion unleashed energy equal to 250 tons of TNT, creating a sonic boom that shook houses in NEO and PN.
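For what it’s worth, the quoted yield roughly survives a back-of-the-envelope kinetic-energy check on those numbers:

```python
# Sanity-check the quoted 250-ton-of-TNT figure with KE = 1/2 * m * v^2.
mass_kg = 7 * 907.185           # 7 short tons in kilograms
speed_ms = 45_000 * 0.44704     # 45,000 mph in meters per second
ke_joules = 0.5 * mass_kg * speed_ms ** 2
tnt_tons = ke_joules / 4.184e9  # 1 ton of TNT ~= 4.184e9 joules
print(f"{tnt_tons:.0f} tons of TNT")  # ~307 -- same ballpark as the quoted 250
```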
Way off on the map here, I think?
ETA:
Um, maybe the guy taking the photo was right under it and looked up.
Close, but he was DIRECTLY under the explosion, oddly.
Too worried about keeping the buildings in, I think, heh.
]]>Really appreciate you taking the time to test this so thoroughly; that side-by-side comparison is super helpful. It does look like this might be something specific to the account rather than anything on your end.
Since you’ve already ruled out the usual things, the next best step would be to open a support ticket so the team can take a closer look. If you haven’t already, it’d be great to include:
That should give support enough to check if it’s a rollout difference or something that just needs a refresh on the account.
~Smith
]]>For me, my usage is mostly in mathematics, and I have noticed misunderstandings and hallucinations becoming much more common. This is concerning because the Pro subscription is costly and I am currently not getting performance that justifies the cost.
Hopefully this gets resolved soon.
]]>Short answer: Intel Macs can run ChatGPT in the browser, but older hardware + a modern web app = lag after a few prompts.
Most common causes:
What’s helped others:
If one browser felt slightly better, worth tweaking that one further.
-Mark G.
]]>This is an interesting idea, appreciate you laying it out so clearly.
Totally see what you’re getting at here. The gap between “generate something” and actually composing with control is a real one, especially for more serious workflows. The hybrid approach you’re describing where the human leads and the AI refines makes a lot of sense.
I’ll make sure this gets captured and passed along to the product team as feedback.
~Smith
]]>^.^
I think we’re all happy to see this little ‘thought.’
I went ahead and ungated my subject field… ran Sam’s well-written API creation through a bunch of lanes it shouldn’t be in…
I’m actually surprised how well my children’s book lanes handle the thing…
I hope you don’t mind; you brought the best stress test so far.
You saved me from releasing a version of my prompting assistant with this model that would have had an embarrassingly low amount of user input available.
Please don’t hesitate to explore with us whenever you get a moment to, or need to vent through the art.
]]>It sounds like this may be more about which login path is being triggered rather than the account itself.
In some cases, certain login methods (like entering an email first or using SSO) can automatically route you through a specific authentication flow that won’t complete successfully depending on how the account is set up. That can make it look like the account is blocked, when it’s really just the wrong login path being used.
If your account is linked to a social login (like Google), it’s worth trying that route directly instead of entering your email or going through any SSO option.
A few things that can help:
If that works, it would suggest the account itself is still accessible and it’s just a routing issue during sign-in. Hope this helps narrow it down a bit.
~Smith
]]>What’s happening is approval and directory placement are two separate things. An app can be approved and still not show in the directory right away. The directory is more selective, and most apps don’t get that placement at launch.
Here’s what we know so far: apps that show strong real-world use and good user satisfaction tend to get broader distribution over time. That can include directory placement or even being suggested proactively. There isn’t a way to request that, though; it’s all signal-driven.
So if it’s only showing via search right now, that’s not a bug. It just means it hasn’t hit those signals yet.
What’s been working for others is driving real usage, gathering feedback, and iterating. As that builds, visibility can improve.
-Mark G.
]]>No, wait, that’s me.
Anyways…
Images can use web search too, now apparently.
I’m trying to add an app back to ChatGPT to test the app integration. Looks like there is a bug in app creation, which includes:
Hope these are only transient bugs. Let me know if you need more information.
Thanks,
Vuong
]]>