The Great Software Quality Collapse: How We Normalized Catastrophe

Idk who Denis Stetskov is, but there’s a lot in here that resonates.

When you need $364 billion in hardware to run software that should work on existing machines, you’re not scaling—you’re compensating for fundamental engineering failures.

Maybe tech is ripe for disruption along the lines of efficiency — ironic, no?

The pattern is clear: ship broken, fix later. Sometimes.

I’ve seen this a lot. Build it first. Make it efficient later. “We’ll just buy more compute” is the reasoning.

But are we “building sustainable systems or funding an experiment in how much hardware you can throw at bad code”?

Speaking of sustainable:

Companies are replacing junior positions with AI tools, but senior developers don't emerge from thin air. They grow from juniors who:

  • Debug production crashes at 2 AM
  • Learn why that "clever" optimization breaks everything
  • Understand system architecture by building it wrong first
  • Develop intuition through thousands of small failures

Like a lot of things nowadays, it’s harvest the forest and extract the value; worry later about how the forest got there in the first place and what will happen when we don’t have a forest.

AI can't learn from its mistakes—it doesn't understand why something failed. It just pattern-matches from training data.

Hey, that’s what we as an industry do: pattern-match on the solutions of others and fail to understand our mistakes — lol.