Hey! I'm Nick, and I work on Integrity at OpenAI. These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.
A big reason we invest in this is because we want to keep free and logged-out access available for more users. My team’s goal is to help make sure the limited GPU resources are going to real users.
We also keep a very close eye on the user impact. We monitor things like page load time, time to first token and payload size, with a focus on reducing the overhead of these protections. For the majority of people, the impact is negligible, and only a very small percentage may see a slight delay from extra checks. We also continuously evaluate precision so we can minimize false positives while still making abuse meaningfully harder.
lxgryesterday at 8:57 PM
It's absurd how unusable Cloudflare is making the web when using a browser or IP address they consider "suspicious". I've lately been drowning in captchas for the crime of using Firefox. All in the interest of "bot protection", of course.
simonwyesterday at 8:43 PM
Presumably this is all because OpenAI offers free ChatGPT to logged-out users and doesn't want that being abused as a free API endpoint.
i18nagentaitoday at 7:02 PM
The irony of a company that sells DDoS protection making the browsing experience worse for legitimate users. The real issue is that Cloudflare's bot detection runs JavaScript that introspects the page state — which means any site using Cloudflare is implicitly giving Cloudflare access to read the DOM of the protected application. That's a much bigger concern than the typing delay.
petcatyesterday at 8:40 PM
> These properties only exist if the ChatGPT React application has fully rendered and hydrated. A headless browser that loads the HTML but doesn't execute the JavaScript bundle won't have them. A bot framework that stubs out browser APIs but doesn't actually run React won't have them.
> This is bot detection at the application layer, not the browser layer.
I kind of just assumed that all sophisticated bot-detectors and adblock-detectors do this? Is there something revealing about the finding that ChatGPT/CloudFlare's bot detector triggers on "javascript didn't execute"?
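It's a reasonable assumption; the slightly interesting part is that the signal lives in application state rather than generic browser fingerprints. A toy sketch of that idea (every field name here is invented for illustration, not anything ChatGPT actually sends):

```python
# Toy application-layer bot check: the client reports runtime properties
# that only exist after the SPA has actually rendered and hydrated.
# All field names below are made up, purely to show the shape of the check.

def looks_hydrated(report: dict) -> bool:
    """Return True if the report suggests the React app really ran,
    not just that the HTML document was fetched."""
    required = ("react_root_present", "hydration_ms", "event_listener_count")
    if not all(k in report for k in required):
        return False  # a plain HTTP fetcher never produces these fields
    # A real browser takes nonzero time to hydrate and attaches listeners.
    return report["hydration_ms"] > 0 and report["event_listener_count"] > 0

# A scraper that never executed the bundle sends an empty report:
print(looks_hydrated({}))
print(looks_hydrated({"react_root_present": True,
                      "hydration_ms": 180,
                      "event_listener_count": 42}))
```

The "revealing" part, if any, is just that the check is free for the site (the properties already exist) and costly for the bot (it must run the full bundle).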
Chance-Deviceyesterday at 8:53 PM
Perhaps the author should have made it clearer why we should care about any of this. OpenAI wants you to use their real React app. That’s… ok? I skimmed the article looking for the punchline and there doesn’t seem to be one.
londons_exploreyesterday at 8:55 PM
I just don't understand why bot owners can't just run a complete windows 11 VM running Google Chrome complete with graphics acceleration.
You can probably run 50 of those simultaneously if you use memory page deduplication, and with a decent CPU+GPU you ought to be able to render 50 pages a second. That's 1 cent per thousand page loads on AWS. Damn cheap.
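The arithmetic roughly holds up; a quick sanity check, with the instance price as an assumed round number (real AWS GPU pricing varies):

```python
# Back-of-envelope cost per page load for running full browser VMs.
# Both inputs are assumptions: the instance price is a guessed round
# figure, the throughput is the parent comment's estimate.
instance_cost_per_hour = 2.00   # USD/hour, assumed GPU-capable instance
pages_per_second = 50           # from the comment's estimate

pages_per_hour = pages_per_second * 3600            # 180,000 pages/hour
cost_per_page = instance_cost_per_hour / pages_per_hour
cost_per_thousand_cents = cost_per_page * 1000 * 100

print(f"{cost_per_thousand_cents:.2f} cents per 1,000 page loads")
```

So even at $2/hour you land near a cent per thousand, which is why detection has to look at what the browser does, not just whether one exists.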
technionyesterday at 10:52 PM
To prompt a discussion that's purely technical: I'm interested in how this was done.
Specifically, Turnstile as far as I'm aware doesn't do anything specifically configurable or site specific. It works on sites that don't run React, and the cookie OpenAI-Sentinel-Turnstile-Token is not a CF cookie.
Did OpenAI somehow do something on their own API that uses data from Turnstile?
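For reference, the standard Turnstile flow leaves room for exactly that: the widget yields a token in the page, the site's own backend verifies it against Cloudflare's siteverify endpoint, and the site can then mint its own cookie from the result, which would explain a cookie name that isn't Cloudflare's. A sketch of the server-side half (network call omitted; the secret and token values are placeholders):

```python
# Standard Turnstile server-side verification (a sketch, not OpenAI's
# code): the backend POSTs the client-supplied token to Cloudflare's
# siteverify endpoint and makes its own allow/deny decision.

SITEVERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def build_siteverify_request(secret: str, token: str, remote_ip: str = ""):
    """Build the form payload for the siteverify endpoint.
    You'd POST this with any HTTP client and read `success` from the
    JSON response."""
    payload = {"secret": secret, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    return SITEVERIFY_URL, payload

url, payload = build_siteverify_request("0x_placeholder_secret",
                                        "tok_from_widget",
                                        "203.0.113.7")
print(url)
print(sorted(payload))
```

Whatever OpenAI does with the result afterward (their own signed cookie, their own scoring) is on their side, not Cloudflare's.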
ripbozoyesterday at 8:41 PM
and chatgpt was then used to write this article. at least try to clean it up a bit
tommodevtoday at 5:27 AM
Ah, this explains ChatGPT (and probably Copilot) performance behind corporate firewalls such as Zscaler.
Between the network latency and low-end machines, there is an enormous lag between ChatGPT's response and being able to reply, especially when editing a canvas.
I've been sitting there for up to a minute plus waiting to be able to use the canvas controls or highlight text after an update.
croemertoday at 12:20 PM
When using ChatGPT Android app with some NextDNS block lists, I get an error modal in app saying "security misconfiguration blah blah".
Clearly I'm blocking some tracker and it's upset about that. I allowlisted a sentry subdomain and since then got no more complaints.
bredrenyesterday at 11:28 PM
On a related note, ChatGPT.com changed how it handles large text pastes this past week.
It now behaves like Claude, attaching the paste as a file for upload rather than inlining it.
This changed the page UX somewhat and reduces the browser tab's resource cost somewhat.
At some point (maybe still true), very long conversations would freeze or crash ChatGPT pages.
TimLelandtoday at 2:22 PM
It seems they fixed the biggest issue I've had, where you start typing and then it erases the content once the page fully loads.
NSPG911yesterday at 11:16 PM
I was using KeepChatGPT[1] for a while back in 2023-2024, in the pre-Gemini-in-Google era, and I was fascinated by how it was able to mask being a user without needing any API or help from the end user. I stopped using it after 2024 because 1) Gemini and 2) it breaks quite a lot. I did, however, like how you had the option to push the AI panel to the right; if only Google would consider doing the same.
Does anyone know how this is integrated on the Cloudflare side and across the app? Is this beyond standard turnstile? Is this custom/enterprise functionality? Something else?
edg5000today at 4:36 PM
The chat client has serious performance issues on lower end systems. Now I see why!
dsparkmantoday at 12:07 PM
That explains why ChatGPT has been running like shit all weekend. In the desktop app on Mac, it could not even complete a response. On the web, it would hang before you could input anything.
toshtoday at 3:01 AM
It used to be possible to type immediately while the page is loading and have all key presses end up in the input field.
Why run this check before user can type?
Why not run it later like before the message gets sent to the server?
tripdoutyesterday at 8:39 PM
AI-written article?
pautassotoday at 9:36 AM
AI goes to great lengths to ensure it's talking with humans.
Why would two AI bots want to chat with each other?
jtbaylytoday at 12:15 AM
Others here are asking if this is the cause of slow performance in a long chat.
But it seems clear to me that this is why I can't start typing right away when I first load the page and click to focus in the text field.
darepublicyesterday at 9:53 PM
I imagine to stop web automation from getting free API like use of the model
CorneredCoroneryesterday at 9:57 PM
> A headless browser that loads the HTML but doesn't execute the JavaScript bundle won't have them.
this is meaningless, btw. A browser, headless or not, does execute JavaScript.
self-portraittoday at 7:03 AM
A/B testing /dev/ kit that tokenizes four permutations of language
lightedmantoday at 12:58 PM
Preventing me from typing until you SCAN MY SYSTEM?
Fine, by extension, you agree I can scan all of your systems for whatever I desire. This works both ways.
j45today at 4:03 PM
This is a lot of fingerprinting.
AndreyK1984today at 9:30 AM
CamuFox will fix it easy peasy.
refulgentisyesterday at 9:10 PM
If you have AI write a blog post for ya, when you think it's set, check word count (can c+p to google docs if AI can't pull it off with built in tools), and ask it to identify repetitions if it's over 1000.
Also, you can have it spotcheck colors: light orange on light background is unreadable, ask it to find the L*[1] of colors and dark/lighten as necessary if gap < 40 (that's minimum gap for yuge header text on background, 50 for text on background, these have gap of 25)
I haven't tried this yet, but maybe have it count words-per-header too. It's got 11 headers for 1000 words currently, which makes reading feel really staccato, and you gotta evaluate "is this a real transition or a vibetransition"
[1] L* as in L*a*b*, not L in Oklab
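If you'd rather not trust the AI's arithmetic, the L* check is short to compute yourself; a minimal sketch (sRGB to CIE L*, D65 white; the 40-point-gap threshold is just my rule of thumb, not a standard):

```python
def srgb_to_lstar(r: int, g: int, b: int) -> float:
    """CIE L* lightness (0-100) of an 8-bit sRGB color."""
    def linearize(c8: int) -> float:
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    # Relative luminance Y with sRGB / Rec. 709 primaries
    y = 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
    # CIE Y -> L* (D65 reference white, Yn = 1)
    return 116 * y ** (1 / 3) - 16 if y > 216 / 24389 else (24389 / 27) * y

def lightness_gap(fg, bg) -> float:
    return abs(srgb_to_lstar(*fg) - srgb_to_lstar(*bg))

# Light orange on near-white: the gap lands well under 40.
print(round(lightness_gap((255, 180, 120), (255, 250, 245)), 1))
```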
tristortoday at 2:42 PM
This explains some of the weird performance behavior I've seen in the last 24 hours with ChatGPT, sometimes lagging my entire browser while typing. Note, I'm a paying user with a Teams account, so it's kind of annoying that this is being applied to logged in paying users as well. I might have to vibe-code my own chat webUI using the APIs.
aucisson_masquetoday at 10:39 AM
Mistral chat is also free to use without account and doesn't do that.
arcfourtoday at 1:28 AM
> They exist only if the request passed through Cloudflare's network. A bot making direct requests to the origin server or running behind a non-Cloudflare proxy will produce missing or inconsistent values.
...I don't think that's possible even if you are a bot? I would be very surprised if OAI had their origin exposed to the internet. What is a "non-Cloudflare proxy"? Is this AI slop?
It's likely just looking at the CF properties as part of a bot scoring metric (e.g. many users from this ASN or that geoip to this specific city exhibit abusive patterns).
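That reading fits the usual pattern: score how consistent the Cloudflare-injected request properties are rather than treat any one of them as a hard gate. A toy version (the header names are real CF-injected ones; the scoring weights and threshold logic are invented):

```python
import re

# Sketch of a consistency score over Cloudflare-injected headers.
# Requests that genuinely traversed Cloudflare carry CF-Ray,
# CF-Connecting-IP, and CF-IPCountry; a request that took another
# path, or a proxy that strips/forges them, shows gaps.

def cf_consistency_score(headers: dict) -> int:
    score = 0
    ray = headers.get("CF-Ray", "")
    # Real CF-Ray values look like "8f1a2b3c4d5e6f70-SJC"
    if re.fullmatch(r"[0-9a-f]{16}-[A-Z]{3}", ray):
        score += 1
    if headers.get("CF-Connecting-IP"):
        score += 1
    if headers.get("CF-IPCountry", "XX") != "XX":
        score += 1
    return score  # 0 = very likely never touched Cloudflare

print(cf_consistency_score({}))  # bot hitting some other path
print(cf_consistency_score({"CF-Ray": "8f1a2b3c4d5e6f70-SJC",
                            "CF-Connecting-IP": "203.0.113.7",
                            "CF-IPCountry": "DE"}))
```

That also squares with "missing or inconsistent values" being a signal fed into a larger score rather than an origin actually being reachable.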
apsurdyesterday at 10:59 PM
Haven't read yet but instantly matched with my experience of the chat being unusable at times. The latency and glitch-like feel is unbearable.
seker18today at 10:50 AM
How can I access a cell phone
aslihanayesterday at 8:58 PM
I mean, I can easily see them behaving defensively to avoid being abused. But on an MBP with an M5 here, my ChatGPT tab always gets stuck when I hit some prompt.
Really, really bad user experience; wondering when they will abandon this approach.
gobdovanyesterday at 8:54 PM
Imagine if they'd put as much effort into making a decent frontend experience.
heliumterayesterday at 8:58 PM
I am shocked that OpenAI collects data about its users before users have the opportunity to send the same data to OpenAI's servers!
EGregyesterday at 8:58 PM
Why does ChatGPT slow down so much when the conversations get long, while Claude does compaction?
My best guess: ChatGPT is running something in your browser to try to determine the best things to send down to the model API, when it should have been running quantized models on its own servers.
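For what it's worth, "compaction" is conceptually simple: when the history exceeds a budget, collapse the oldest turns into a summary and keep the recent turns verbatim. A toy illustration (Anthropic's actual mechanism isn't public; the summary here is just a stub where a real system would put an LLM-written recap):

```python
# Toy conversation compaction: keep the most recent turns that fit a
# budget, replace everything older with a single summary placeholder.

def compact(history: list[str], budget_chars: int) -> list[str]:
    total = sum(len(m) for m in history)
    if total <= budget_chars:
        return history
    kept, size = [], 0
    # Walk backwards, keeping the most recent turns that still fit.
    for msg in reversed(history):
        if size + len(msg) > budget_chars:
            break
        kept.append(msg)
        size += len(msg)
    dropped = len(history) - len(kept)
    return [f"[summary of {dropped} earlier messages]"] + list(reversed(kept))

chat = ["hello" * 10, "long question" * 10, "answer" * 10, "follow-up"]
print(compact(chat, budget_chars=80))
```

The point is that compaction keeps per-turn work bounded on the server side, whereas a browser-side approach grows with the page.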
themafiayesterday at 9:12 PM
My theory is that "AI" doesn't really have any long term paying customers and the majority of the "users" are people who have cooked up some clever hack to effectively siphon computing power from these providers in an effort to crank out the lowest effort ad supported slop imaginable.
Every provider seems to have been plagued by these freeloaders to such an extent that they've had to develop extreme and onerous countermeasures just to avoid losing their shirts.
What's the word? Schadenfreude?
Josephjackjrob1today at 10:47 AM
Cloudflare will not be around for long; it's a shame, as it is the GOAT lol
yapyapyesterday at 11:24 PM
wow, OpenAI sure doesn't like bots for a company enabling the botification of the world wide web
avazhiyesterday at 9:51 PM
Another AI-slop article.
Sick.
pencilcodeyesterday at 9:28 PM
AI-slop analysis finding that CF detects non-JavaScript-capable browsers, with no punchline
blinkbatyesterday at 9:01 PM
Ok... so... ?
beeringyesterday at 8:37 PM
So are you able to get free inference now that you decrypted this?
dgb23today at 11:48 AM
Why are companies like OpenAI and others that are all-in on LLMs still using ReactJS, Python and so on?
These programming languages and frameworks were made for developer convenience and got wide adoption, because it makes on-boarding easier.
This obviously comes at a cost in performance and complexity, and it introduces a liability into a system, because these are dependencies that come with a whole bunch of assumptions about how they are used.