If you’ve been following tech news, you know OpenAI released a new ChatGPT model last week—GPT-5. What you might have missed, if you’re not a power user, is that the company simultaneously made earlier models unavailable, triggering backlash from users and developers who had built workflows, systems, and even businesses around the GPT-4 models. In response, OpenAI has brought back the popular GPT-4o model and may restore more.
In the fast-moving world of AI, companies are racing to build faster, better systems. Those systems are products for end users like you and me—and platforms for others who build products and services for their own customers. For those builders, a reliable, stable platform that operates in a well-understood way might matter more than speed or even capabilities. Sam Altman, CEO of OpenAI, said, “… suddenly deprecating old models that users depended on in their workflows was a mistake.”
Today, computing platforms are almost always software—virtual systems where developers don’t have to think much about the hardware. Back when computing platforms were primarily hardware, platform makers couldn’t change their systems in a matter of months—even if they wanted to. Most hardware was designed, manufactured, and sold for years to recoup not just R&D, but also tooling and production-line setup costs.
When hardware platforms stuck around, third-party developers had time to refine their products—and to grow their own ecosystems of apps and user communities.
Consider the Apple II line of computers, which was in production for sixteen years. Long after Apple stopped releasing new software for it, companies like Beagle Bros kept squeezing every last drop from the platform—creating and profitably shipping impressive titles a decade after the platform debuted.
Apple could have built backward compatibility for the Apple II into the Macintosh—but doing so would have kept the II platform alive even longer, leaving many users with no reason to upgrade and Apple with no new hardware or software revenue.
Note: If you want to go down this rabbit hole about the Apple II line and the compatibility that could have been, here are a couple of YouTube videos to get you started:
This tension between third-party developers and platform makers—software or hardware—has been around for a long time.
Early on, third parties boost platform sales, attract new users, and invent new ways to make the platform useful. But once a platform maker has profited from a version, those same developers can look like the enemy—helping users wring more life from old systems and discouraging upgrades to the latest and greatest. When migration to the shiny new product slows, so does revenue. For the platform developer, it’s like driving with the parking brake on.
Besides racing to be the best, AI companies are racing for their lives: none of the major players are profitable—they’re burning cash at an astonishing rate. As Goldman Sachs noted last year, there isn’t yet a killer app for AI that justifies the more than $1 trillion—yes, trillion—invested in LLMs so far.
The future of companies like OpenAI doesn’t lie in last week’s model you’ll pay $20 a month for (or your employer will). It’s in the model that convinces your employer to pay $1,500–$3,000 a month for your access. They think they’re close. I’m skeptical, but I’m also struck by how many people I know spend $300–$500 a month on AI tools—many of which pay for themselves through capability or productivity gains. (My own AI subscriptions run about $50 a month and are mostly underused—I keep them for demos at speaking engagements.)
As such, being a third-party developer for a product or service built on an AI platform is extremely risky. The platform might change behavior tomorrow, and the company behind it could fail if it isn’t profitable when the AI bubble pops. That’s not to say there isn’t gold in them thar hills, but if I were investing in an AI company, I’d want to see that it either built its own platform or that it is platform-agnostic—able to jump quickly among major providers (which has been made easier by the MCP standard).
Right now, third-party AI developers are doing plenty of interesting work—but when the AI bubble pops, most will be wiped out. That's a shame; even if progress on LLMs and other AI systems froze today, it would take us a decade to realize their full potential.
I’m curious: what would make you pay $1,500+ a month for an AI platform? Comment below (keep it safe for work).
My commentary may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. I ask that you edit only for style or to shorten, provide proper attribution, and link to my contact information.
📥 Recent Talks, News and Updates
I was featured as a “Person You Should Know (PYSK)” in the Columbia Business Times in March! This was a lot of fun and you can read the profile here.
I was also profiled in the Boomtown Supplement for the Columbia Missourian in April. You can read it here.
👍 Products I Recommend
Products: a card game for workshop ideation and icebreakers (affiliate link). I use this in my workshops and classes regularly.
📆 Upcoming Talks/Classes
I will be presenting “AI Agents: Friend or Foe?” for the Human Resources Association of Central Missouri on September 9th at 8:30 am. Details about the program will be posted here.
Description: AI “agents” are the next step beyond chatbots: digital teammates that set goals, take actions across your apps, and finish real work while you sleep. Prof C will explain—in plain language—how these tireless helpers can streamline HR tasks like onboarding and analytics … and why the same autonomy can create new risks if we’re not careful. Join us to learn simple guardrails for deciding when an AI agent is a friend, when it might become a foe, and how to stay firmly in control.
I will be presenting “AI Strategies” for the Red River Valley Estate Planning Council, in Fargo, North Dakota on November 19. Details will be available here.