ChatGPT 3.5 is Discontinued in Favor of GPT-4o mini

OpenAI today officially discontinued ChatGPT 3.5 in favor of its new cost-efficient AI model, GPT-4o mini. The mini model is more accessible and affordable than before, catering to small businesses, individual developers, and end users with quick tasks.
OpenAI is lowering the cost of AI, and GPT-4o mini's pricing structure shows it. At 15 cents per million input tokens and 60 cents per million output tokens, the mini model is more than 60% cheaper than GPT-3.5 Turbo.
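To put those rates in perspective, here is a quick back-of-the-envelope sketch in Python using the per-token prices quoted above; the token counts in the example are made up purely for illustration.

```python
# Back-of-the-envelope cost of a single GPT-4o mini request, using the
# rates above: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost for one request at GPT-4o mini rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt with a 500-token reply costs $0.0006.
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000600
```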

OpenAI claims that even though it's cheaper, GPT-4o mini doesn't cut corners on performance. It outperforms GPT-3.5 Turbo and other small models across a range of academic benchmarks:
- Textual Intelligence: With an 82% score on the MMLU benchmark, GPT-4o mini demonstrates superior reasoning capabilities compared to competitors like Gemini Flash (77.9%) and Claude Haiku (73.8%).
- Mathematical Reasoning: On the MGSM benchmark, GPT-4o mini scored an impressive 87.0%, leaving Gemini Flash (75.5%) and Claude Haiku (71.7%) in the dust.
- Coding Proficiency: With an 87.2% score on HumanEval, GPT-4o mini proves its mettle in coding tasks, outperforming both Gemini Flash (71.5%) and Claude Haiku (75.9%).
- Multimodal Reasoning: The model shows strong performance on the MMMU benchmark, scoring 59.4% compared to Gemini Flash’s 56.1% and Claude Haiku’s 50.2%.
Who is GPT-4o mini for?
GPT-4o mini’s low cost and latency make it ideal for a wide range of practical applications like:
- Chaining or parallelizing multiple model calls (see the sketch after this list)
- Processing large volumes of context (e.g., full code bases or conversation histories)
- Powering real-time customer support chatbots
- For developers and small businesses, the API currently supports text and vision, with plans to add text, image, video, and audio inputs and outputs.
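As a concrete example of the first use case, here is a minimal sketch of fanning out several GPT-4o mini calls in parallel. It assumes the official openai Python SDK (v1+) and an OPENAI_API_KEY environment variable; the prompts are illustrative placeholders.

```python
# Minimal sketch: fanning out several GPT-4o mini calls in parallel.
# Assumes the official `openai` Python SDK and an OPENAI_API_KEY env var.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send one prompt to GPT-4o mini and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Placeholder prompts standing in for real workloads.
prompts = [
    "Summarize this support ticket: ...",
    "Classify this email as spam or not spam: ...",
    "Extract the order ID from this message: ...",
]

# Threads suffice here since each call is network-bound, and the low
# per-token pricing makes running many calls at once practical.
with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    for reply in pool.map(ask, prompts):
        print(reply)
```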
In related news, our APK breakdown of the ChatGPT app unveiled something exciting: trigger-word shortcuts for Voice Mode and Vision Mode. These could let you activate either mode without touching the screen, simply speaking to wake ChatGPT hands-free, similar to Google Assistant or Siri. You may also be able to set your own trigger words later.
We first discovered the feature in our APK breakdown of the latest ChatGPT v1.2024.193 beta, where we found two strings: vision_short_cut_trigger_words and voice_short_cut_trigger_words.
As we previously reported, ChatGPT has an Assistant activity that can be triggered from the home screen and used anywhere, even in the background. The addition of new voice trigger words means you could activate Voice and Vision Modes hands-free.
This could also mean that Vision Mode and Voice Mode functionality has already been implemented in the latest ChatGPT update, but OpenAI's developers may be gating it behind secret trigger words.
We don't know for sure which trigger words these are, but if you come across them, let us know in the comments section below.
Download ChatGPT APK
The latest ChatGPT v1.2024.193 beta is available for download, which may get you access to the GPT-4o mini model and features like Vision Mode and Voice Mode.