
OpenAI has released a number of improvements to its flagship model, GPT-5, after receiving extensive user feedback.
In a recent post on X, CEO Sam Altman revealed the updates, which aim to revolutionise the AI space by emphasising advanced capabilities and user-centric design.
The main feature of the update is the addition of three modes: Auto, Fast, and Thinking.
With these three modes, users can customise their interactions with GPT-5 to meet their unique requirements, be they quick answers or well-considered insights.
Moreover, the Thinking mode in particular has been strengthened, with higher rate limits permitting up to 3,000 messages per week.
This growth demonstrates OpenAI's dedication to meeting the rising need for sophisticated AI features.
Paid subscribers will also get a new "Show additional models" toggle in settings, enabling smooth switching between models such as o3, 4.1, and GPT-5 Thinking mini.
However, because of its high GPU requirements, GPT-4.5 will only be available to Pro users.
Users can access the best tools for their needs with this tiered approach, all while preserving peak performance.
Following user feedback, OpenAI is also refining GPT-5's personality, aiming for a balance between warmth and usefulness while avoiding the traits of earlier iterations that some users found unduly bothersome.
Altman outlined the long-term goal of providing per-user customisation of model personality in order to further improve the user experience.
In addition, Altman drew attention to a noteworthy trend in the use of reasoning models: both free and paid users are relying on them much more frequently. Daily usage of reasoning models has risen dramatically, from less than 1% to 7% among free users and from 7% to 24% among Plus users. "I anticipate that the use of reasoning will significantly increase over time, so rate limit increases are important," he noted.
This growth underscores how AI interactions are evolving and the rising demand for models that can accurately answer complicated queries.