
The FOMO Era of Software Engineering: Why Chasing Tools Is Eroding the Craft

A new AI tool launches every day, and developers are chasing each one at the cost of the thing that actually makes them engineers. Here's what the data says about the drift — and how to relocate rigor instead of losing it.

Aam2rican5
15 min read
[Hero image: a collage of launch hype. "new agent v0.3", "try this NOW", "+1 install", "game-changer", "9 updates", "harness beta", "deprecated soon", "10x or left behind". Caption: the FOMO era of software engineering.]

The Saturday-to-Wednesday Cycle

Last weekend I caught myself doing something I used to laugh at other people for. I spent a full Saturday afternoon setting up a new agent framework. By Sunday evening I had a working pipeline and was kind of proud of it. By Wednesday I saw a Twitter thread claiming a different tool did the same thing with half the config, and I felt a small sour twist in my stomach — the kind you get when you realize you've fallen behind on something you never signed up to race on.

That twist has a name now. Developers are calling it AI fatigue, tool fatigue, or more honestly, FOMO. A dev-focused founder, Sri Ram, described the mechanic precisely in a widely shared post: "You spend Saturday setting up a new tool. You have a workflow by Sunday. By Wednesday, someone posts about a way better tool." The cycle used to be monthly, back when JavaScript framework fatigue was the meme of the year. Now it's daily. And the sour twist isn't really about any specific tool. It's about the slow suspicion that while you're busy chasing surface changes, something underneath is quietly eroding.

This post is about what's eroding. What the measurements actually show. And what I'm trying to do about it — for myself, because I'm not immune to the twist either.

What the FOMO Actually Costs

Let me start with the number I keep coming back to. In February 2026, Anthropic published a randomized controlled trial on how AI assistance affects coding skill formation. The result was uncomfortable: developers using AI scored 50% on comprehension tests about code they had literally just written, versus 67% for those coding manually. Cohen's d = 0.738, p = 0.01 — which is statistics-speak for this is not noise. The equivalent of nearly two letter grades.
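For anyone who doesn't live in stats papers, Cohen's d is just the gap between the two group means measured in pooled standard deviations. The study's raw deviations aren't reproduced here, so this is the definition, not a recomputation:

```latex
d = \frac{\bar{x}_{\text{manual}} - \bar{x}_{\text{AI}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By the usual rule of thumb, 0.2 is a small effect, 0.5 is medium, and 0.8 is large, so 0.738 sits just under the "large" line.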

The speed gain that was supposed to justify the trade? About two minutes. Not statistically significant. Two minutes of saved typing in exchange for a measurable hole in the mental model of the thing you just built.

[Chart: comprehension on code you just wrote. Manual: 67%. AI-assisted: 50%. Source: Anthropic, 2026 (p = 0.01, Cohen's d = 0.738).]

Zoom out from that single study and the picture gets worse. Google's 2025 DORA Report found that teams with 90% AI adoption saw a 9% increase in bug rates, a 91% increase in review time, and a 154% increase in PR size. A Stack Overflow longitudinal study showed heavy AI users had a 23% decline in their ability to write code from scratch. CodeRabbit's analysis of 8.1 million pull requests found AI-generated code averages 10.83 issues per PR versus 6.45 for human code, with logic errors 1.75x more frequent and security vulnerabilities 1.57x more frequent. Code duplication is up roughly 4x.

Here is the part that should make us uncomfortable: 93% of developers now use AI tools, and measured productivity gains have barely moved, sitting around 10%. BCG research shows that engineers juggling four or more AI tools simultaneously experience a measurable productivity drop from switching overhead. More tools, more coordination tax, less of the thing we bought them for.

Something is not adding up. We have more leverage than any cohort of engineers in history, and our comprehension is going down, our bugs are going up, our PRs are bloating, and our productivity curve is flat. The obvious question is: what are we actually doing with all this leverage?

The Confusion at the Heart of the Chase

Here is what I think is happening, and I think it's the real story under the FOMO.

Most engineers, quietly, have started to believe that the craft is the tool. So when a new tool comes out, it feels existential. If the craft is Cursor, then you need Cursor. If next month the craft becomes Claude Code, you need Claude Code. If the month after that it's some agent harness with a different philosophy, you need that too. Miss a release, miss a piece of yourself.

But the craft is not the tool. The craft was never the tool. The craft is the thing that sits underneath every tool and pre-dates every framework: problem decomposition, tradeoff analysis, invariant reasoning, debugging by hypothesis, designing for failure modes you haven't seen yet. None of that is in a changelog. None of it ships with a Product Hunt launch.

```mermaid
flowchart LR
  A[Problem] --> B[Decompose]
  B --> C[Form hypothesis]
  C --> D[Design for failure]
  D --> E[Reason about invariants]
  E --> F[Implement]
  F --> G{Works?}
  G -->|No| C
  G -->|Yes| H[Understand why]
```

That loop is the job. Every tool we've ever used — punch cards, assemblers, IDEs, Stack Overflow, Copilot, Claude — has been a lever that compresses the "implement" step. None of them have compressed the other five. The lever has gotten remarkable. The work has not changed.

When we chase tools, we're acting as if a better lever will make us better engineers. But a sharper chisel does not make you a sculptor. It just makes your mistakes faster.

Rigor Relocates, It Doesn't Vanish

I've been thinking about this a lot since reading a piece by bits-bytes-nn called Evolution of AI Agentic Patterns. The thesis is simple and I think it's right: engineering rigor hasn't disappeared in the AI era — it has relocated. The author traces three eras:

```mermaid
flowchart LR
  P["Prompt Engineering<br/>2022–2024<br/>craft = wording"] --> C["Context Engineering<br/>2025<br/>craft = what model sees"] --> H["Harness Engineering<br/>2026+<br/>craft = system around agent"]
```

In the prompt era, rigor lived in how cleverly you phrased the instruction. Then it moved to how you assembled context — codebase-wide semantic search, multi-file edits, retrieval pipelines. Now it's moving again, into what practitioners are calling "harnesses": the rules, error recovery, security guardrails, and validation gates wrapped around an agent. As the article puts it plainly: agent = model + harness.
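To make agent = model + harness concrete, here's a minimal sketch of the shape in Python. Everything in it is hypothetical: call_model stands in for whatever API you actually use, and the checks are deliberately crude. The point is that the loop, the gate, and the recovery path are the harness, and they all live outside the model:

```python
import ast

MAX_ATTEMPTS = 3

def call_model(prompt: str) -> str:
    """Stand-in for whatever model API you actually use (hypothetical)."""
    raise NotImplementedError

def validate(code: str) -> list[str]:
    """The validation gate: machine-readable rules the agent cannot skip."""
    errors = []
    try:
        ast.parse(code)  # output must at least be valid Python
    except SyntaxError as exc:
        errors.append(f"syntax error: {exc}")
    for banned in ("eval(", "exec(", "os.system"):
        if banned in code:  # crude stand-in for a security guardrail
            errors.append(f"forbidden construct: {banned}")
    return errors

def run_agent(task: str) -> str:
    """agent = model + harness: the loop around the call is the harness."""
    prompt = task
    for _ in range(MAX_ATTEMPTS):
        code = call_model(prompt)
        errors = validate(code)
        if not errors:
            return code  # gate passed
        # Error recovery: feed the failures back instead of trusting output.
        prompt = task + "\n\nPrevious attempt failed these checks:\n" + "\n".join(errors)
    raise RuntimeError("agent never passed the validation gate")
```

Real harnesses swap these toy checks for test suites, linters, and permission systems, but the topology is the same: the model proposes, the harness disposes.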

The historical parallel in the piece stuck with me. Dynamic languages didn't abandon type safety; they relocated it from compile time to runtime testing. Agile didn't kill design; it relocated design from up-front docs to continuous feedback loops. In both cases, the thing that looked like the loss of discipline was actually a geographic shift, and the people who whined that "real engineering is dead" mostly just hadn't updated their map.

I think the same thing is happening now. Rigor is not vanishing. It's relocating from writing code to designing the system around the thing that writes the code. And FOMO is what it feels like to confuse a relocation for a disappearance. You sense that the old house is empty, and instead of finding the new house, you start hoarding furniture.

The new house, for what it's worth, looks like this: machine-readable rules instead of human review, automated validation gates instead of trust, observable and repeatable context assembly pipelines, security frameworks that limit what an agent can touch. It's not less rigorous. It is, if anything, more rigorous, because the blast radius of a mistake is now a production database at 3 a.m.
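What does a machine-readable rule look like when it isn't a slide bullet? Here's one hedged sketch, a path allowlist checked before any agent write, with all names invented for illustration:

```python
from pathlib import Path

# The rule itself: the only roots an agent write may land under.
ALLOWED_ROOTS = [Path("src").resolve(), Path("tests").resolve()]

def guarded_write(path: str, content: str) -> None:
    """Refuse any write that escapes the allowlist (symlinks resolved)."""
    target = Path(path).resolve()
    if not any(target.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise PermissionError(f"agent may not write to {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
```

A dozen lines, versioned, enforced on every call. That's what relocated rigor looks like: the review still happens, it just happens in code, every time.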

[Diagram: tools are a treadmill; fundamentals compound. Decompose, reason, invariants, judgment. Caption: rigor doesn't vanish. It relocates.]

The Two Patterns: Engaged vs. Delegated

Back to the Anthropic study, because this is the part I wish every engineer with a 12-tab AI workflow would sit with.

The researchers didn't just find that AI users lost comprehension on average. They found two clearly different patterns inside the AI group. One set of developers used AI and maintained — sometimes even improved — their mastery. The other set atrophied. The difference wasn't about which model they used, or how many tools they chained together. It was about what they did with the output.

[Diagram: the two patterns inside the AI-assisted group. Engaged: asks follow-ups, explains it back, scores ≥65%. Delegated: accepts output, never reads it, scores <40%. Same tools. Same models. Different habits. Different outcomes.]

The high-scoring group treated AI as a thinking partner. They asked follow-up questions. They composed hybrid queries that asked for both code and an explanation of why that code worked. They used AI for conceptual inquiries while writing the hard parts themselves. They resolved their own errors. They scored 65% or higher on comprehension.

The low-scoring group treated AI as a vending machine. Accept the output. Paste the output. Ship the output. When something broke, prompt again. Prompt again. Prompt again. They scored under 40%.

Same tools. Same models. Different habits. Completely different outcomes in both skill retention and, presumably, long-term career trajectory.

This is the dividing line I care about. It isn't "people who use AI" versus "people who don't." That argument is over — 93% of us use it, including me. The real dividing line is people who engage versus people who delegate. And FOMO is a delegation trigger, because every new tool promises to let you delegate a little more, think a little less, and still keep up. The entire pitch of the treadmill is "hand over more cognition and stay competitive." The data says the opposite is happening: the more you hand over without engagement, the more you lose the very skills that made the leverage worth anything in the first place.

The Business Model Behind the Treadmill

Before we get to what to do, I want to name something clearly: the FOMO is not an accident. It is a business model.

Launching AI tools is its own category now. Any time an open-source model drops, you see 10–20 wrappers on Product Hunt within 48 hours — same model, different skins, all competing for your attention and your monthly subscription. An AI influencer ecosystem has grown on top of that launch cadence, manufacturing urgency at industrial scale. "You need to try this NOW." "This changes EVERYTHING." Every YouTube thumbnail, every newsletter subject line, every Twitter thread is optimized to make you feel like the next career break is one tool away.

And unlike 2016's JavaScript framework fatigue, the stakes feel heavier in two ways. First, these tools cost real money — $20–50/month for the baseline, $100+ for the good tiers. Evaluating is no longer free. Second, there's a career anxiety layer JavaScript never had. Nobody thought React was going to take their job. The marketing around AI tools explicitly whispers that if you miss this wave you might not be employable in two years.

A lot of us are making rational decisions inside an irrational frame. "I can't afford to miss a launch" feels reasonable until you notice that the launches themselves are a churn mechanic, not a progression. Five launches a day is not five units of progress. It's five units of noise with a power-law distribution of signal, and you cannot possibly route the signal in real time by yourself.

What To Actually Do

I want to be honest: I'm writing this partly to convince myself. I feel the twist. I have an AI tools folder with bookmarks I will never open. I have spent evenings evaluating agents I stopped using 48 hours later. The drift is real and I'm not going to pretend I'm above it.

Here is what I'm trying — some from the research, some from what I've seen working for people I respect, some from what's buying me my own sanity back. None of it is revolutionary. That's the point. The answer to daily noise is not a cleverer filter; it's fewer decisions.

1. Freeze your stack on a 90-day clock

Pick a small stack. Commit to it for 90 days. Do not evaluate alternatives in that window. The only exception is a tool that directly unblocks a problem you are hitting right now, not a hypothetical one.

This one is hard because it feels like giving up. It isn't. It's buying back the cognitive budget you were spending on evaluation and redirecting it at the thing you actually get paid for. The most productive engineers I know are, without exception, running some version of "I use Claude Code and that's it." They are not keeping up. They are building.

2. Adopt the "explain it back" rule

After AI generates a non-trivial piece of code, before you paste it, write three sentences explaining why it works. Not what it does — a junior can tell you what. Write the why: which invariant it preserves, which failure mode it handles, which assumption it's making about the data.

If you can't write the three sentences, you don't understand the code well enough to own it. Either prompt for an explanation until you can, or throw the code away and write it yourself. This one habit maps almost perfectly onto the Anthropic study's "engaged" pattern. It is the single cheapest thing you can do to stay on the right side of that 50/67 split.
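To make that concrete with an invented example: suppose the AI hands you a retry wrapper. The three sentences, written as comments before you paste, might look like this:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.5):
    # Why this works (the three sentences):
    # 1. Invariant preserved: fn runs at most `attempts` times, so a
    #    non-idempotent fn is never hammered forever.
    # 2. Failure mode handled: transient errors get exponential backoff,
    #    and the final exception is re-raised, never swallowed.
    # 3. Assumption made: every retryable failure surfaces as an
    #    exception, not as a normal-looking return value.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)
```

If you can't fill in those three comments for the code in front of you, that's your answer.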

3. Keep a first-principles journal

Before you prompt, write the problem statement by hand. Not in a comment. On paper or in a plain text file. Forty seconds of longhand — what am I actually trying to do, what are the constraints, what's the success condition, what would break it.

This is not productivity theater. It is the one moment in a modern workflow where you force yourself to do problem decomposition before reaching for a lever. If you skip it, the AI will happily decompose the problem for you, and over time the part of your brain that used to do that will stop firing. You can feel it atrophying if you pay attention. I started noticing it in myself around month six of heavy agent use and it scared me.

4. Run a weekly no-AI deep-work block

Two hours a week, one hard problem, no AI assistance. No autocomplete. No agent. You and the problem and whatever documentation you'd have used in 2018.

The goal is not nostalgia. The goal is to keep the muscle that does first-principles reasoning from going to sleep. Think of it as the strength training that lets you sprint with leverage the other 38 hours. Two hours a week is about 5% of your work time, which is a very cheap insurance policy against the 23% Stack Overflow skill decline.

5. Review AI output like it's from a junior you don't fully trust

Not like it's correct. Not like it's "probably fine." Review it like a PR from a smart but inexperienced teammate who is confident in ways they shouldn't be. Look for: edge cases they glossed over, error paths they skipped, security assumptions they made implicitly, names that reveal a shallow model of the domain, duplication of existing utilities.

This is one of the few parts of the job where human judgment still has a genuine and increasing edge, because it is exactly the part an AI trained on averaged code tends to be worst at. If you outsource the review to another AI, you are stacking averaged judgment on averaged code and calling it engineering. It is not.

6. Pick the evaluation gate you'll actually hold to

Sri Ram's five-question gate is the cleanest version of this I've seen, and I've started using it myself:

  1. Does it solve a problem I have right now? Not hypothetical. Experienced this week.
  2. Is it at least 6 months old? Launches are unstable. Six months filters out the pivots.
  3. Can I find three real users who've used it for more than a month? Not beta testers. Real integrators.
  4. What's the switching cost if it dies? If it changes your file format or CI pipeline, the bar is higher.
  5. Does it replace a step or add one? The best tools remove work. The worst add a new surface to manage.

He says this filter eliminates about 95% of launches. That matches my experience in the two weeks I've used it. You do not miss anything critical. You miss a lot of hype.
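If it helps to see the gate as something you can't rationalize past mid-scroll, here it is as a toy function. The field names are mine, not Sri Ram's; answering the inputs honestly is the hard part:

```python
def passes_gate(tool: dict) -> bool:
    """Sri Ram's five questions as a hard filter (toy sketch)."""
    return (tool["solves_a_problem_i_hit_this_week"]   # 1: real, not hypothetical
            and tool["months_since_launch"] >= 6       # 2: survived the pivots
            and tool["real_users_over_a_month"] >= 3   # 3: integrators, not beta testers
            and not tool["high_switching_cost"]        # 4: cheap to walk away from
            and tool["replaces_a_step"])               # 5: removes work, doesn't add it
```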

7. Read one foundational thing per week

Not a tool changelog. Not a launch post. Not a Twitter thread. A paper, a book chapter, a postmortem, a talk from a decade ago that aged into wisdom. The half-life of tool knowledge in 2026 is about six weeks. The half-life of Designing Data-Intensive Applications, the Google SRE book, or the classic distributed systems papers is measured in decades.

The ROI math on what you read is brutal if you work it out. An hour on a tool launch post depreciates to near-zero inside a quarter. An hour on a well-chosen paper pays dividends for your entire career. If you are trying to stay competitive over a ten-year horizon, the second hour is not even close.
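The brutal math as a toy decay model: if an hour of learning has half-life h, the value left after time t is

```latex
V(t) = V_0 \cdot 2^{-t/h}
```

With h of about 6 weeks, tool-launch knowledge is down to roughly a quarter of a percent of its value after a year (2^{-52/6} ≈ 0.0025). With h measured in decades, the same hour is still worth approximately what you paid for it ten years on. The exponent is illustrative, not measured, but the asymmetry is the point.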

A Parting Thought

Here is the thing nobody in the AI tooling discourse wants to say out loud: the engineers who are going to be worth hiring in three years are not the ones who tried the most tools. They're the ones who can still think clearly about a problem they've never seen before, under pressure, with incomplete information, and make good tradeoffs. That skill is built the same way it has always been built — by doing hard problems slowly, failing, and understanding why you failed. It is not built by frictionlessly accepting output from an agent and moving on.

AI is going to keep getting more capable. The harnesses around it are going to keep getting more sophisticated. The tools are going to keep launching, daily, forever, until the category consolidates the way JavaScript frameworks eventually did. None of that changes what the craft is. The craft is the same craft it was in 1995, in 2005, in 2015. Problem solving and critical thinking. The rest is furniture.

If you feel the twist — the sour little "am I falling behind" feeling every time someone posts a new agent framework — I want to suggest a reframe. You're not falling behind. You're being asked to run on a treadmill that measures the wrong thing. The thing worth keeping up with isn't the tool. It's your own capacity to think.

Stop chasing. Build something hard. Write the three-sentence explanation. Read the paper. The best AI tool in 2026 might, quietly, be the one you already have, used by someone who still knows how to think.

