When Will AGI Arrive? What Prediction Markets Are Pricing

When will AGI arrive? Forecasters and prediction markets offer a range of timelines — and reveal genuine expert disagreement

Category: Technology Forecasts  |  Reading time: ~9 min

Few questions in technology carry higher stakes than the timeline for artificial general intelligence. AGI — a system capable of performing any intellectual task a human can, at human level or beyond — would represent a fundamental shift in the relationship between human and machine capability. It would reshape labour markets, geopolitics, scientific research, and the structure of economies in ways that are difficult to fully anticipate.

For this reason, AGI timelines are among the most actively discussed and most genuinely contested questions in forecasting. This article examines what the current evidence suggests, how expert opinion is distributed, and what prediction markets are pricing as the most likely arrival window.

QUICK ANSWER

Prediction markets and expert surveys on AGI timelines show a wide distribution — reflecting genuine disagreement rather than false precision. Estimates range from 2–5 years among the most optimistic researchers to several decades among sceptics. The median view among professional forecasters places meaningful probability on AGI arriving between 2030 and 2040, with significant uncertainty on both sides. No consensus exists, and that uncertainty itself is informative.

What Is AGI — and Why the Definition Matters

Part of the difficulty in forecasting AGI timelines is definitional. “Artificial general intelligence” is not a precisely specified technical milestone — it is a conceptual threshold that different researchers define differently.

Narrow definitions focus on task performance: AGI arrives when an AI system can outperform the median human on a broad set of cognitive benchmarks. Broader definitions require genuine understanding, common sense reasoning, and the ability to acquire new skills without task-specific training. The most demanding definitions require self-awareness, agency, and adaptability across genuinely novel situations.

These different definitions produce very different timelines. Systems that meet narrow performance criteria may already be close. Systems meeting the most demanding cognitive definitions may be decades away. When evaluating any specific timeline claim, the definition being used matters enormously.

What Leading AI Researchers Say

The distribution of expert opinion on AGI is genuinely wide — wider than public discourse often suggests. Prominent AI researchers hold positions ranging from “we are very close” to “AGI as commonly imagined may never arrive.”

Among the most optimistic voices — including some senior figures at OpenAI and DeepMind — the argument is that current scaling trajectories, combined with improvements in reasoning and agency, could produce AGI-level systems within a few years. Sam Altman has publicly suggested that AGI could arrive “sooner than most people think.”

Among more sceptical researchers, the argument is that current AI systems — however impressive — lack the grounded understanding, common sense reasoning, and genuine generalisation that AGI requires. These researchers argue that benchmark performance is not equivalent to general intelligence, and that closing the remaining gap requires breakthroughs that current scaling alone cannot produce.

Between these poles is a large population of researchers who assign meaningful probability to AGI within 10–20 years while acknowledging that the timeline could be much longer or shorter depending on which technical challenges prove most difficult.

Expert opinion on AGI timelines spans decades — the range itself reflects the depth of genuine uncertainty

What Prediction Markets Are Pricing

Prediction markets on AGI timelines aggregate distributed expectations into probability-weighted estimates. The picture they produce is consistent with what expert surveys show: wide distribution, no clear consensus, and meaningful probability mass across a range spanning roughly 2028 to 2045.

Key observations from prediction market data on AGI:

Near-term (before 2028)

Low but non-trivial probability. Markets assign this mostly to narrow definitions of AGI where current systems are already approaching threshold performance on specific benchmarks.

Medium-term (2028–2035)

Highest probability mass in most prediction market contracts. Reflects the view that current scaling plus architectural improvements could reach AGI thresholds within a decade.

Long-term (2035–2050)

Significant probability assigned to this window, reflecting the possibility that remaining technical challenges are harder than optimists expect.

After 2050 or never

Non-trivial probability among sceptics who believe AGI as typically defined requires capabilities that current approaches cannot reach.
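The bucketed view above can be summarised numerically. A minimal sketch in Python, using purely hypothetical bucket probabilities (not actual market prices), shows how quoted probabilities for arrival windows can be normalised into a proper distribution and turned into cumulative by-date estimates:

```python
# Hypothetical probabilities for AGI arrival windows.
# Illustrative numbers only -- NOT actual market prices.
buckets = {
    "before 2028": 0.10,
    "2028-2035": 0.42,
    "2035-2050": 0.32,
    "after 2050 / never": 0.21,
}

# Quoted contract prices often sum to slightly more or less than 1
# (fees, spreads, rounding), so normalise to a true distribution.
total = sum(buckets.values())
normalised = {window: p / total for window, p in buckets.items()}

# Cumulative probability of AGI arriving by the end of each dated window.
cumulative = {}
running = 0.0
for window, p in normalised.items():
    if window != "after 2050 / never":
        running += p
        cumulative[window] = round(running, 3)

print(normalised)
print(cumulative)
```

With these illustrative inputs, the cumulative figures read naturally as "probability of AGI by 2035" and "by 2050", which is the form most timeline discussions actually use.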

What Would Need to Be True for Early AGI

For AGI to arrive in the near term — within the next 3–5 years — several conditions would likely need to hold. Continued scaling of compute and data would need to keep producing capability improvements at the current rate. Current limitations in reasoning, common sense, and generalisation would need to be addressable through architectural changes or training methods that are already in development. And the definition of AGI in use would need to be achievable with systems that are extensions of current large language models rather than fundamentally different architectures.

Most researchers who hold near-term AGI views cite one or more of these factors as plausible. Most who hold longer timelines are sceptical that current approaches can bridge the remaining gaps without breakthroughs that have not yet occurred.

The Safety Dimension

AGI timing cannot be discussed without the safety dimension. If AGI is possible within a decade, the question of whether it can be developed safely — aligned with human values and controllable by human oversight — becomes urgent. This is the core concern driving the safety research programs at Anthropic, OpenAI’s safety team, and independent institutions like the Machine Intelligence Research Institute.

Prediction markets on AGI-related safety outcomes — for example, whether the first AGI system is developed under adequate safety protocols — exist alongside timeline markets. The two are connected: a faster AGI timeline is generally considered higher risk from a safety perspective, because it compresses the time available for safety research to develop adequate tools.

Conclusion

The honest answer to “when will AGI arrive?” is: we do not know, and the uncertainty is genuine rather than merely a gap in information. Expert opinion spans decades. Prediction markets reflect wide distributions. The definition of AGI itself is contested in ways that affect any specific timeline.

What is clear is that the question is no longer considered purely speculative. It is actively shaping corporate strategy, research agendas, regulatory planning, and geopolitical competition. That shift in itself is significant — and it is why AGI timelines are among the most actively traded questions in prediction markets focused on technology outcomes.

For a broader overview of what is happening in AI in 2026, see our AI Predictions 2026 analysis. For the corporate race driving AI development, see OpenAI vs Google: Who Will Lead the AI Race in 2026?

Participate in AGI and AI Forecasts

Nexory allows users to engage with prediction markets on technology milestones — including AI development timelines and outcomes.

Explore predictions on Nexory

Frequently Asked Questions

What is AGI?

Artificial General Intelligence (AGI) refers to an AI system capable of performing any intellectual task a human can — across domains, without task-specific training. It is distinct from current AI systems, which are highly capable within specific domains but lack generalised reasoning and adaptability.

When will AGI be developed?

Expert estimates range from a few years to several decades. Prediction markets place the highest probability mass on a 2028–2035 window, but assign meaningful probability to both earlier and later outcomes. No consensus exists among researchers.

Is AGI dangerous?

The safety implications of AGI are taken seriously by leading AI researchers. The core concern is whether AGI systems can be developed in ways that keep them reliably aligned with human values and subject to human oversight — a challenge that requires significant research progress beyond current capabilities.