OpenAI introduces ChatGPT Pro $100 tier with 5X usage limits for Codex compared to Plus

OpenAI is making moves to court more developers and vibe coders (people who build software using AI models and natural language prompts) away from rivals like Anthropic.
Today, the firm arguably most synonymous with the generative AI boom announced a new mid-range subscription tier: a $100 monthly ChatGPT Pro plan. It joins the free, Go ($8 monthly), Plus ($20 monthly) and existing Pro ($200 monthly) plans for individuals using ChatGPT and related OpenAI products.
OpenAI also currently offers Edu, Business ($25 per user monthly, formerly known as Team) and Enterprise (variably priced) plans for organizations.
Why offer a $100 monthly ChatGPT Pro plan?
The big selling point, according to OpenAI, is that the new plan offers five times the Codex usage limits of the existing $20 monthly Plus plan, which seems fair given the math ($20 x 5 = $100). Codex is the company's agentic vibe coding application and harness (the name is shared by both, as well as by a lineup of coding-specific language models).
As OpenAI co-founder and CEO Sam Altman wrote in a post on X: "It is very nice to see Codex getting so much love. We are launching a $100 ChatGPT Pro tier by very popular demand."
However, alongside this, OpenAI's official company account on X noted that "we’re rebalancing Codex usage in [ChatGPT] Plus to support more sessions throughout the week, rather than longer sessions in a single day."
That sounds a lot like OpenAI is simultaneously reducing how much ChatGPT Plus users can use its Codex harness and application per day.
What are the new usage limits for the new $100 ChatGPT Pro plan vs. the $20 Plus?
So, what are the current limits on the $20 Plus plan? The new Pro plan gives you 5X greater usage than...what, exactly?
Turns out, this is trickier than you'd think to calculate, because it actually varies depending on which underlying AI model you are using to power the Codex application or harness, and whether you are working on code stored in the cloud or locally on your machine or servers.
OpenAI's Developer website notes that for individual users, usage is categorized by "Local Messages" (tasks run on the user's machine) and "Cloud Tasks" (tasks run on OpenAI's infrastructure), both of which share a five-hour rolling window. Currently, it actually shows the $100 Pro plan giving 10X as many messages as the $20 Plus plan (see below)!
ChatGPT Plus ($20/month)
GPT-5.4: 33–168 local messages every 5 hours.
GPT-5.4-mini: 110–560 local messages every 5 hours.
GPT-5.3-Codex: 45–225 local messages and 10–60 cloud tasks every 5 hours.
Code Reviews: 10–25 pull requests per week
ChatGPT Pro 5X ($100/month)
GPT-5.4: 330–1,680 local messages every 5 hours.
GPT-5.4-mini: 1,100–5,600 local messages every 5 hours.
GPT-5.3-Codex: 450–2,250 local messages and 100–600 cloud tasks every 5 hours.
Code Reviews: 100–250 pull requests per week
ChatGPT Pro 20X ($200/month)
GPT-5.4: 660–3,360 local messages every 5 hours.
GPT-5.4-mini: 2,200–11,200 local messages every 5 hours.
GPT-5.3-Codex: 900–4,500 local messages and 200–1,200 cloud tasks every 5 hours.
Code Reviews: 200–500 pull requests per week
Exclusive Access: Includes GPT-5.3-Codex-Spark (research preview), which has its own dynamic usage limit.
And as OpenAI's Help documentation states:
"The number of Codex messages you can send within these limits varies based on the size and complexity of your coding tasks, and where you execute tasks. Small scripts or simple functions may only consume a fraction of your allowance, while larger codebases, long running tasks, or extended sessions that require Codex to hold more context will use significantly more per message."
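The gap between the advertised "5X" and the posted figures is easy to check with quick arithmetic. A minimal sketch, using only the low end of each published range quoted above:

```python
# Low end of each per-5-hour range from OpenAI's published limits (quoted above).
plus = {
    "GPT-5.4 local": 33,
    "GPT-5.4-mini local": 110,
    "GPT-5.3-Codex local": 45,
    "GPT-5.3-Codex cloud": 10,
}
pro_100 = {
    "GPT-5.4 local": 330,
    "GPT-5.4-mini local": 1100,
    "GPT-5.3-Codex local": 450,
    "GPT-5.3-Codex cloud": 100,
}

# Ratio of the $100 Pro limit to the $20 Plus limit for each category.
multipliers = {k: pro_100[k] / plus[k] for k in plus}
for name, mult in multipliers.items():
    print(f"{name}: {mult:.0f}x the Plus limit")
# Every category works out to 10x, not the advertised 5x.
```

The same 10X ratio holds at the high end of each range (e.g. 1,680 / 168 = 10), so the published tables consistently show double the headline multiplier.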
The larger strategic implications and context
OpenAI’s sudden move toward the $100 price point and expanded agentic capacity comes amid the unprecedented financial ascent of its chief rival, Anthropic.
Just days ago, Anthropic revealed its annualized run-rate revenue (ARR) has topped $30 billion, surpassing OpenAI's last reported ARR of approximately $24–$25 billion.
This growth has been fueled by the massive adoption of Claude Code and Claude Cowork, products that have set the benchmark for enterprise-grade autonomous coding.
The competitive friction intensified on April 4, 2026, when Anthropic officially blocked Claude subscriptions from being used to provide the intelligence for third-party agentic AI harnesses like OpenClaw.
To be clear, Anthropic's Claude models themselves can still be used with OpenClaw; users must now simply pay for access through Anthropic's application programming interface (API) or extra usage credits, rather than as part of the monthly Claude subscription tiers. Some have likened those tiers to an "all-you-can-eat" buffet, and the economics became challenging for Anthropic when power users and third-party harnesses like OpenClaw consumed more in tokens than the $20 or $200 that users spend on the plans each month.
OpenClaw’s creator, Peter Steinberger, was notably hired by OpenAI in February 2026 to lead its personal agent strategy. Since joining, he has actively spoken out against Anthropic's limitations, noting that OpenAI's Codex and models generally don't carry the same restrictions Anthropic is now imposing.
By hiring Steinberger and subsequently launching a Pro tier that provides the high-volume capacity Anthropic recently restricted, OpenAI is effectively courting the displaced OpenClaw community to reclaim the professional developer market.