On usefulness, coarse-grained thinking, and why AGI — not the Singularity — is the right destination
The Exhausting Churn of Silicon Valley AI
In January 2025, DeepSeek’s R1 model — trained by a Chinese lab for a fraction of what US rivals spend — topped the Apple App Store, overtaking ChatGPT overnight, and sent Silicon Valley into what one analyst called a “code red.” OpenAI scrambled. Nvidia shed nearly $600 billion in market value in a single day. Within months, the next wave arrived: GPT-5.2 Codex, Claude Opus 4.5, Gemini 3, Grok 3 — each proclaimed as the new frontier.
The coding tool wars ran in parallel. Cursor, a two-year-old startup, reached a $9 billion valuation. Windsurf attracted a reported $3 billion acquisition offer from OpenAI — then, in a 72-hour saga after that deal collapsed, saw its leadership hired away by Google and the rest of the company acquired by Cognition. Claude Code and Gemini CLI arrived to contest the terminal. GitHub Copilot responded by adding Gemini, o3-mini, and Grok 3 as model options. Anthropic’s share of enterprise AI deployments climbed from 12% to 32%, while OpenAI’s dropped from 50% to 25% — all within a single year.
Andrej Karpathy — former Tesla AI director and OpenAI co-founder — coined a new term in February 2025: vibe coding. The idea: describe what you want in plain language, let the AI write the code, stop worrying about implementation. Within a month, Merriam-Webster had added the phrase to its list of slang and trending terms. Within six months, researchers found that experienced developers using these tools took 19% longer to complete tasks — despite believing the tools had sped them up by about 20%.
This is the texture of AI right now: breathless, cyclical, and deeply confusing. Every week produces a new model that “changes everything,” a new tool that renders previous tools obsolete, a new term that captures the moment — until the next moment arrives and the previous one is forgotten. For anyone trying to understand where AI is actually going, this environment is close to useless.
The churn of announcements is not evidence of progress. It is evidence of a field still overwhelmingly on the left arc of its journey.
To navigate it, we need a framework. And there is one — imperfect, often misused, but pointing at something real — that illuminates the pattern underneath the noise.
The Hype Cycle: A Practical Lens, Not a Law
The Gartner Hype Cycle, introduced in 1995, describes a pattern that repeats across the history of technology. A breakthrough emerges. Excitement inflates expectations far beyond what the technology can currently deliver. Disappointment follows. Then, gradually, realistic adoption takes hold — and the technology settles into genuine, embedded use.
Gartner names five phases: Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, Plateau of Productivity. This is useful vocabulary. But the deeper truth is simpler: the curve has two fundamentally different halves, separated by a single moment of reckoning.
The left arc is the arc of imagination. It is driven by what a technology promises — by demos, by press releases, by investor decks, by the infectious excitement of people who have seen something new and cannot stop talking about it. There is no empirical ceiling on the left arc. Expectations can inflate indefinitely, because potential is unlimited.
The right arc is the arc of usefulness. It is driven by what a technology actually delivers — by problems it reliably solves, by workflows it permanently improves, by the quiet accumulation of people who depend on it without thinking about it. The right arc is bounded by reality: by infrastructure, by habits, by the ordinary friction of the world.
Between the two arcs sits the trough — which is not, as it is commonly portrayed, a failure of technology. It is something more useful: the usefulness test. The brutal, unsentimental moment when what a technology promises is cross-examined by what it actually delivers in the hands of real people solving real problems.

Figure 1: The Two-Arc Reading of the Hype Cycle — the trough is the usefulness test that separates imagination from evidence.
The criterion that separates the two arcs is not time, maturity, or technical sophistication. It is usefulness — the moment a technology stops being talked about and starts being depended upon.
This reframing matters because it cuts through the noise. The question to ask about any AI tool or model is not: is it impressive? It is: is it embedded? Is it woven into human activity in ways that would be painful to remove? Does it solve problems that genuinely needed solving, reliably, at scale?
On that criterion, the current AI landscape looks quite different from the way Silicon Valley describes it. The most celebrated tools — the latest coding assistants, the newest reasoning models — are almost entirely on the left arc. They generate enormous excitement. They produce remarkable demos. They attract billions in funding. But most of their claimed productivity gains remain contested, their enterprise deployments are still experimental, and the use cases that survive the trough have yet to fully crystallise.
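For readers who like the test spelled out, here is a toy sketch in Python of the embeddedness questions as an explicit checklist. The class, the field names, and the all-or-nothing rule are illustrative assumptions rather than a formal instrument; the only point is that the usefulness test asks about dependence, not impressiveness.

```python
from dataclasses import dataclass


@dataclass
class Technology:
    """Illustrative record for applying the usefulness test to a tool or model."""
    name: str
    solves_real_problem: bool   # addresses something that genuinely needed solving
    reliable_at_scale: bool     # works outside the demo, for many users, repeatedly
    painful_to_remove: bool     # workflows would break if it disappeared tomorrow


def is_embedded(tech: Technology) -> bool:
    """The usefulness test: embedded means all three conditions hold, not just one."""
    return tech.solves_real_problem and tech.reliable_at_scale and tech.painful_to_remove


# A celebrated coding assistant with dazzling demos but contested productivity
# gains fails the test, and is therefore still on the left arc.
assistant = Technology(
    name="latest coding assistant",
    solves_real_problem=True,
    reliable_at_scale=False,
    painful_to_remove=False,
)
print(is_embedded(assistant))  # False
```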
Where Does AI Sit on the Curve?
Artificial intelligence is not a single technology. It is a family of capabilities — large language models, computer vision, AI agents, reinforcement learning, robotics — each at a different point on its own arc. The mistake of treating “AI” as one thing on one curve has been the source of enormous confusion in public discourse.

Figure 2: Where AI sub-technologies sit on the Hype Cycle — 2025/2026. AI is not one curve; it is a family of curves at different stages.
A more honest map looks like this. Narrow AI in specific domains — fraud detection, logistics optimisation, medical imaging analysis — has quietly crossed into the right arc. It is embedded, depended upon, and largely invisible to the public conversation. Large language models for general-purpose use are descending into the trough. The enterprise reality of 2025 — hallucinations, governance failures, unclear ROI, and the gap between demo performance and production reliability — is administering the usefulness test in real time. AI agents — autonomous systems like those being built into Cursor, Windsurf, and Claude Code — are near their own peak, generating fresh excitement before the trough has even arrived.
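To make that map explicit, here is a minimal sketch that encodes the placements above as data, using Gartner's phase names from earlier in the piece. The placements simply restate this paragraph's reading of 2025/2026, a judgment rather than a measurement, and the helper that collapses five phases into two arcs is an illustrative assumption.

```python
# Gartner's five phases, ordered from the left arc through the trough to the right arc.
PHASES = [
    "Innovation Trigger",
    "Peak of Inflated Expectations",
    "Trough of Disillusionment",
    "Slope of Enlightenment",
    "Plateau of Productivity",
]

# Placements restate the paragraph above: a reading of 2025/2026, not a measurement.
AI_MAP = {
    "Narrow AI (fraud detection, logistics, medical imaging)": "Plateau of Productivity",
    "General-purpose large language models": "Trough of Disillusionment",
    "AI agents (coding agents and the like)": "Peak of Inflated Expectations",
}


def arc(phase: str) -> str:
    """Collapse the five phases into the two-arc reading used in this piece."""
    i = PHASES.index(phase)
    if i < 2:
        return "left arc: imagination"
    if i == 2:
        return "the usefulness test"
    return "right arc: usefulness"


for tech, phase in AI_MAP.items():
    print(f"{tech} -> {phase} ({arc(phase)})")
```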
The competitive frenzy between OpenAI, Anthropic, Google, and DeepSeek is itself a left-arc phenomenon. When model quality is the primary battleground — when the headline is “Claude Opus 4.5 leads SWE-bench at 80.9%” or “Gemini 3 breaks 1500 on LMArena” — the conversation is still about potential, still about what these systems might do in the right hands. The right arc does not announce itself with benchmark scores. It announces itself with the quiet fact of irreplaceable use.
Progress in AI should be judged not by benchmark performance or Turing test results, but by the depth and breadth of its embedded usefulness in human life.
This is a more demanding standard than Silicon Valley typically applies — and a more honest one. A model that solves International Mathematical Olympiad problems at gold-medal level is a genuinely impressive left-arc achievement. A model that reliably shortens diagnosis time in rural hospitals, or makes legal representation accessible to people who currently cannot afford it, has crossed into the right arc. They are not the same thing.
The Right Destination: AGI, Not the Singularity
The individual AI sub-technologies — each completing its own mini-arc, each depositing its lessons and capabilities into the next — are not random. They are moving, cumulatively and directionally, toward a single destination: Artificial General Intelligence, or AGI — a system capable of performing any cognitive task a human can perform, across any domain, without being specifically trained for each.
AGI is a meaningful destination precisely because it is expressible. We can describe it, debate it, and at least in principle devise criteria for recognising it. It sits at a comprehensible point on the horizon — far off and technically formidable, but within the reach of human understanding. That is what makes it a useful guide.
The Technological Singularity is something else entirely. It is the hypothesised point at which machine intelligence surpasses human comprehension and begins recursive self-improvement beyond any human ability to predict or govern. This concept has captured enormous public imagination — Sam Altman shifts his AGI timeline from one announcement to the next; Ray Kurzweil publishes revised scriptures; Jensen Huang of Nvidia predicts AI will surpass humans on any test by 2029. The language surrounding these predictions — civilisational transformation, the end of human intellectual supremacy, a new era beyond description — is not the language of technology. It is the language of eschatology.
Wittgenstein wrote that whereof one cannot speak, thereof one must be silent. The Singularity does the opposite: it speaks loudly about the unspeakable, providing the feeling of having explained something while actually explaining nothing. It is, in a precise sense, a conceptual dustbin — a place where people deposit everything that lies beyond the reach of their current understanding, unburdening themselves of any obligation to explain further. Engineers invoke it to sidestep hard ethical questions about today’s AI. Investors invoke it to sustain valuations that present capabilities do not justify. It terminates inquiry rather than advancing it.
In the vocabulary of the two-arc framework, the Singularity is permanently lodged on the left arc. It has no use cases. It cannot be embedded. It is pure potential, infinitely inflated, never testable against reality. AGI, by contrast, earns its status as destination precisely because it can be evaluated against the usefulness criterion. An AGI that is genuinely useful — embedded across medicine, education, scientific research, and the daily problems of ordinary people — will have crossed into the right arc. That crossing is the meaningful milestone.
Coarse-Grained Thinking as the Better Orientation
The Silicon Valley AI environment demands fine-grained attention. Every model release triggers a full cognitive response. Every new tool — Cursor today, Windsurf tomorrow, Claude Code next week — is assessed, debated, celebrated or dismissed. Developers abandoned VS Code and Copilot for Cursor and Claude Code because the productivity difference was immediate and visible. Then Windsurf promised something better. Then agents promised something better than that. The next platform is always arriving.
This is fine-grained thinking: reactive, granular, perpetually zoomed in. For those whose business depends on picking the winning tool of the quarter, it is a rational strategy. But it is poorly suited to understanding where AI is actually going. It mistakes the churn of the left arc for progress along the full curve.
Coarse-grained thinking operates at a higher level of abstraction. It holds AGI as the destination, and reads the current churn as the process by which sub-technologies accumulate into something larger. Each AI capability that crosses into the right arc — genuinely, irreplaceably embedded in human life — is a step along the staircase. The staircase is long. Each step involves its own left arc of excitement and its own right arc of tested usefulness. The overall trajectory is not visible from any single step.
The fine-grained thinker watches every wave. The coarse-grained thinker reads the tide — and is not surprised when the next wave looks exactly like the last one.
Coarse-grained thinking does not ignore Silicon Valley. It refuses to treat each event as a signal. It asks not which model wins this benchmark, but whether AI is becoming infrastructure — something humans depend upon without thinking about it, the way they depend on electricity, on the internet, on language itself. It demands a clear criterion — usefulness — and the patience to apply it consistently, even when the excitement around any given tool makes that patience feel like missing out.
The people who built the most durable value from past technology cycles were rarely those who chased every new release. They were those who correctly identified which technologies had genuinely crossed into the right arc — and invested in depth and embedding, rather than breadth and novelty.
Conclusion: One Question That Cuts Through Everything
The AI landscape of 2025 and 2026 is, by any measure, extraordinary. The speed of model improvement, the scale of investment, the breadth of application — none of it has precedent. OpenAI, Anthropic, Google, and DeepSeek are engaged in a genuine race, and the outputs of that race are remarkable. The noise is real.
But noise and progress are not the same thing. The Hype Cycle’s essential insight — that human enthusiasm for new technology systematically outpaces its demonstrated capability — applies with particular force to AI, because AI is radical enough to require entirely new cognitive frameworks. Those frameworks are built transition by transition, not in a single leap.
The two-arc reading of the Hype Cycle gives us a simple, practical orientation. Track not the model releases but the use cases. Track not the benchmark scores but the depth of embedding. Ask not when AGI will be achieved, but when AI will become infrastructure.
The Singularity will always recede another step into the fog. It will always be the next announcement away, the next model away, the next undisclosed breakthrough away. That is what dustbins are for.
AGI is harder to achieve, but it is the right destination — because it is the one that can be evaluated against the only criterion that has ever mattered: whether the technology, in the hands of real people solving real problems, is genuinely useful.
Everything else is left arc.