What to invest in when the GPT bubble bursts?

I’ve read this piece about how the AI boom is, in essence, bullshit. It matches my intuition that generative AI is only seeing modest incremental improvements these days, with probably nothing bigger to follow. There’s no AGI coming; the Turing test turned out to be a pretty bad way to measure actual intelligence. All this made me wonder: assuming AI is a bubble, what would be the best way to get ahead in the tech industry?


Image generated by ChatGPT 4o today using the prompt “a couple practicing yoga” refined by adding “let’s go for more acrobatic poses”.

Before we get into it, let me disclose that I’m invested in tech stock, so in pure Machiavellian terms, I kind of need the bubble to survive another few years if I want to be able to monetize my stock for real estate. But even if I ignored the moral questions here, realistically I can’t do anything to influence it, even if I decided to become a full-on shill for GPTs. My public presence and influence are insignificant in the grand scheme of things. Man, I can’t even convince my family members and friends to do anything, let alone “the Market” 😅 So my “investment disclosure” is more of an observation about the volatility of wealth, I guess.

I should also add that I am a paying customer of ChatGPT, which I find somewhat useful. I’m not a hater, and I don’t hold a radical stance against artificial intelligence.

There isn’t much I in AI

My understanding so far is that the current AI boom is built mostly around fear. Fear fueled by mediocre minds afraid that there will be major shifts in the workforce. They’ve seen chatbots pass the Turing test with flying colors, and now they fear that human labor will be replaced. They’re influenced by the unreasonable claims of a near-future revolution peddled by AI salesmen. So the industry-wide investment is in large part a hedge against “what if it’s true?”

My personal belief (not informed by any actual insider info) is that a lot of the AI capacity built by the giant corporations isn’t there to sell a $30/month product to end users but to be consumed internally to create a second-order product. The AI would be a corporate “employee”, working 24/7 with no benefits or sick leave. That’s the middle manager’s wet dream. An army of such employees would then outperform the “manual labor” workforce, making the corporation able to win at anything: programming, logistics, designing hardware, controlling manufacturing, operating data centers, organizing not only your team’s calendar but entire divisions of companies. What the mediocre minds without capital fear, the mediocre minds with capital dream of.

Currently, it doesn’t look like this future is coming. The image above was generated from a straightforward, honest prompt. No malicious input or intent. But it’s a great illustration of the disappointing nature of the technology.

The GPT that taught me about CQT spectrograms two days ago hallucinated an argument to a librosa function, and it wouldn’t admit the mistake even when I repeatedly confronted it with contradicting data that it had itself pulled from the Internet. This doesn’t sound like a dependable employee.
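For reference, the real API is easy to check. Here’s a minimal sketch of computing a CQT spectrogram with librosa’s actual parameters; the specific settings and the example clip are just illustrative:

    import numpy as np
    import librosa

    # Load a short example clip bundled with librosa (any mono signal works).
    y, sr = librosa.load(librosa.ex("trumpet"))

    # Constant-Q transform; hop_length, fmin, n_bins, and bins_per_octave
    # are real librosa.cqt() parameters, set here to common values.
    C = librosa.cqt(
        y,
        sr=sr,
        hop_length=512,
        fmin=librosa.note_to_hz("C1"),
        n_bins=84,
        bins_per_octave=12,
    )

    # Convert magnitudes to decibels for plotting or inspection.
    C_db = librosa.amplitude_to_db(np.abs(C), ref=np.max)
    print(C_db.shape)  # (n_bins, n_frames)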

Another time, a friend of mine suggested that I stop manually going through PRs to summarize the latest changes for the podcast. I prompted GPT to produce such a list 1, and the result was not only extremely inaccurate, it couldn’t even count to 3 reliably in the headings, and the resulting list contained obviously made-up authors:

Can you see the obvious placeholder names? More importantly, the information provided is useless. Let’s focus on a single lie here: “introduce per-interpreter GIL” was a change made by Eric Snow as part of PEP 684 for Python 3.12 in 2023, well outside of the date range specified in the prompt. The linked PR #132130 has nothing to do with Eric, the GIL, or contention. In fact, no PR in the list above is described correctly. The very first one mentioned is not about enhancing thread-safe reference counting, but about… generating social media preview cards for the documentation. How would you react to a coworker presenting you with such a summary?

Sure, there is a “stop lying” button in the UI; they call it “deep research”. In the case of my prompt, it ran for 10 minutes, presenting various promising descriptions of what it was working on above the progress bar, and then it crashed.
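The frustrating part is that the ground truth here is trivially available in machine-readable form. Here’s a minimal sketch of deterministic retrieval using GitHub’s public search API (a real endpoint; the date qualifier mirrors my prompt, and pagination and authentication are left out for brevity):

    import json
    import urllib.parse
    import urllib.request

    # PRs merged to python/cpython since 2025-03-22; "merged:" is a real
    # GitHub search qualifier. Unauthenticated requests are rate-limited,
    # and results come back at most 100 per page.
    query = "repo:python/cpython is:pr is:merged merged:>=2025-03-22"
    url = (
        "https://api.github.com/search/issues?q="
        + urllib.parse.quote(query)
        + "&per_page=100"
    )
    request = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(request) as response:
        results = json.load(response)

    for pr in results["items"]:
        # Every entry carries the actual title, author login, and URL;
        # no author names need to be invented.
        print(f'#{pr["number"]} {pr["title"]} by {pr["user"]["login"]}')
        print(f'    {pr["html_url"]}')

Summarizing what each change actually does is the only part of the task that needs intelligence, human or otherwise.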

Those examples aren’t meant to dunk on GPTs in general or ChatGPT in particular; they can be useful anyway. I mostly use ChatGPT as a much better Polish-English dictionary, for when I know exactly what I’m trying to say but don’t know the expression or the word. It’s also a fun way to find out about new concepts and libraries I didn’t know before. I find it especially satisfying when discussing philosophy and interpreting art. None of these sound like massive workforce replacements to me, though. Plus, the costs of AI’s labor are currently extremely high, making it laughably unprofitable for businesses to employ compared to offshoring to real human workers.

What to invest energy in instead?

Since the goal of GPT investment seems to be “to replace those pesky human employees, ick ick”, finding another way to solve that “problem” would be a good bet, ignoring the ethical implications. However, since we’re talking about increasingly complex automation and the replacement of manual labor, we get back into artificial intelligence territory real quick. Currently, the term “AI” is entirely tainted by GPTs, so when that bubble bursts, all other forms of artificial intelligence will likely have a hard time finding investors as well. And let’s be real, there are literally thousands of bona fide geniuses around the globe actively working on pushing artificial intelligence science forward. It is simply that hard a nut to crack.

The more interesting approach, to my mind, is to address why the human workforce is viewed so negatively by upper management that they invest billions into replacing it. I will leave aside the obvious but ultimately non-actionable avenue of “because they’re greedy fucks without empathy” and instead focus on what makes a job underperform. Since I can’t generalize to all industries, I’ll comment on software development. Here, the most visible problems with “reliable output” (from the perspective of the worker) are to me:

  1. unreliable tools – software is currently built on foundations that we know are broken in terms of correctness and unstable as a result, worsened by constant dependency changes;
  2. changing operational environments – even perfect software is no good when the external world changes in a way that makes the software either irrelevant or downright break;
  3. employee distraction – the work environment and tools bring about tons of wasted focus due to entertainment, news, and communication being available at all times, which requires a lot of self-control from employees;
  4. uninspiring tasks – at its best, software development is building a solution that you’re inspired by and believe in, cranking out new features and seeing them connect in ways that help solve a real-world problem beautifully. Most of the time, though, you’re developing an app for a limited audience that isn’t doing anything interesting or ground-breaking. And you’re not working on shiny new features; you’re fixing a bug that stems from shoddy programming and insufficient testing. And you know it’s just the tip of the shit iceberg: the product contains many more bugs like that. It’s hard to do your best work in such an environment.

It seems to me that (1) and (3) are fields where you could make some meaningful improvements, which would indirectly help with (4), and maybe (2) as well to some extent. In other words, thinking about sustainable computing 2, about a development experience that leads to fewer bugs, and to programs that are more resilient in execution and less brittle to operate.

The boring answer is that the reasonable wing of the industry is already focusing on those things. The rise of Rust and TypeScript directly addresses the safety and correctness issue. WebAssembly is growing into a sandboxing platform with universal execution capability. Some of the most promising uses of on-device language models are in notification management and email categorization.

But there’s still so much more to do. I wish for the software I’m working with to be intelligible, not intelligent. I want to be able to understand what is going on. I need to be able to debug it and introspect it at runtime, and to analyze it statically, both for correctness and to help with reading the code. I want to be able to build and test the code quickly, and to refactor at will with no fear of breakage. I want to reuse existing wheels instead of reinventing them, without fearing that somebody will hack, maliciously modify, or delete a dependency I pull from the Internet.

Those are real challenges in the industry today and it seems to me that the GPTs are pulling us away from where we want to be.


  1. The prompt was:

    Looking at pull requests merged to https://github.com/python/cpython between 2025-03-22 and today (inclusive on both ends), provide me a list of 50 most significant changes introduced in that time frame. This list needs to be grouped by the following category:

    • free-threading
    • performance improvements
    • new features
    • bug fixes
    • curiosities

    Each category should contain at least 7 entries. For each entry, include the link to the pull request, its title, a one-sentence summary of what the pull request is doing, and the name of the pull request author.

  2. While these days you’ll mostly find discussion of the environmental impact of computing resources, I have traditionally understood sustainable computing as striving for longevity. This includes a focus on long-term stability, both in system execution and in data access support.

    That requires things to be designed well enough to maintain long-term usefulness, and to be reasonably open to future maintenance. Of course, minimizing energy consumption and otherwise keeping systems green is a worthwhile goal, too. Currently, the industry isn’t doing particularly well on either front. Things are improving, but they’re far from where we need them to be.

#Programming