
Until about six months ago, I got jittery whenever people mocked me for building “GPT wrappers.”
“Anyone can do that,” they said.
Times have changed. I am now convinced that the real products will be built at a segmented application layer.
After more than two years of paying $20 a month for a ChatGPT Plus subscription, I canceled it this month. Not because it’s any less useful, but because “ChatGPT wrappers” now fulfill my needs better.
I am now paying the same $20 for a Cursor subscription.
Imagine having a professional coder who lives right in your IDE and helps you as you type and think, so you don’t have to constantly engage in a back-and-forth on a separate platform.
What Cursor has built using the Claude, DeepSeek, and OpenAI APIs is incredible. It is the application layer, built for the coding segment, that makes the product astounding.
Think similarly of other products. Do you want to prompt ChatGPT to write your email, or do you want that built into Gmail’s compose window?
Do you want to talk to ChatGPT to sketch out a travel plan, or would you rather go to a dedicated app trained to book your entire trip in one go?
Another reason it kinda sucks to pay for a foundational model-based interface like ChatGPT is that there isn’t a one-size-fits-all model.
For example, while OpenAI has made better models in general, xAI’s Grok is astounding at generating images.
There is no way Sam Altman would ever bring Grok’s image functionality to ChatGPT.
Similarly, what if you wanted to transcribe an audio clip quickly? Google’s Speech-to-Text model excels at that, and it is available at throwaway prices in its AI Studio!
So, I would much rather pay for a product that is able to seamlessly integrate a plethora of these models into one interface via API calls.
Total win-win. Foundational models make money, interface makes money, user is happy.
Yes, I want an AI interface that calls Grok to generate images or for sass, OpenAI for conversational messages, Claude for coding, and Google API for transcription.
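The kind of interface described above is, at its core, a task-based router: the application decides which provider’s API to call for each kind of request. A minimal sketch of that idea, with placeholder provider and model names (none of these are real endpoints or SDK calls):

```python
# Hypothetical task-based model router. Each task type maps to the
# (provider, model) pair an interface might call behind the scenes.
# Provider and model names here are illustrative placeholders only.
ROUTES = {
    "image": ("xAI", "grok-image"),
    "chat": ("OpenAI", "gpt-4.5"),
    "code": ("Anthropic", "claude"),
    "transcription": ("Google", "speech-to-text"),
}

def route(task: str) -> tuple[str, str]:
    """Return the (provider, model) pair for a task, or raise if unknown."""
    try:
        return ROUTES[task]
    except KeyError:
        raise ValueError(f"no route configured for task: {task}")

print(route("code"))  # → ('Anthropic', 'claude')
```

In a real product, each route would wrap an actual SDK client and the routing decision might itself be made by a cheap classifier model, but the dispatch-table shape stays the same.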
OpenAI just released its latest model, GPT-4.5.
This time, Sam Altman doesn’t want you to think about benchmarks or ask tricky logical or scientific questions; it would fail at all of that.
Instead, it is designed to be more human, and that may mean sucking at math in exchange for more human-like, more thoughtful responses, a higher emotional quotient, and fewer hallucinations.
At $75 per million input tokens and $150 per million output tokens, it is by far the most expensive publicly available AI model out there.
I am not writing a lot on it, as I haven’t tried the model much beyond serving as an anonymized guinea pig in ChatGPT.
Experts whose opinions I value have tried it and say it performs better at engaging writing, so I look forward to giving it a shot.
My first impression is that the pricing doesn’t seem to be pragmatic at a time when the wider narrative is around bringing down the costs of AI for customers.
This newsletter was first published on Artificially Boosted.