Elon Musk’s vision of the future: How many of his predictions will come true?
By Ding Min
At the beginning of 2026, Elon Musk captured global attention in a three-hour conversation with Peter Diamandis and Dave Blundin on the Moonshots podcast, where he laid out an ambitious vision of humanity’s future. How credible are these seemingly far-fetched predictions, and what do they mean for ordinary people?
In this article, CEIBS Professor of Marketing Ding Min examines four key themes raised by Musk in the discussion and offers his own in-depth interpretations and assessments.
Elon Musk’s recent interview on the popular podcast Moonshots quickly attracted widespread global attention. His views on artificial intelligence, robotics, employment, and the future of humanity were particularly striking, as well as highly controversial. My forthcoming book, Becoming Homo Lucidus, addresses similar questions. Before offering specific assessments of his predictions, however, some clarification is needed.
What makes Musk’s predictions feel “shocking,” in my view, is not a lack of logic or evidence, but a limitation in how we think—what I call a state of “cognitive local optimum.” We instinctively project the future through our understanding of the past and present. Even when we know technological change is accelerating, it remains difficult to grasp futures that represent a fundamental break with our experience.
This is why I propose the idea of a “cognitive local optimum leap” (cLOL). Such a leap is not about progressing further along our set path, but about stepping out of a long-accepted cognitive framework and into an entirely different space with new structures, constraints, and definitions of what is optimal. Many misjudgements of the future arise not from misunderstanding technology, but from viewing change from within the same old mental paradigms.
Secondly, Musk’s views are neither isolated nor purely speculative. In my years of engagement with academia, industry, and policymaking communities across North America, Europe, and China, similar assessments have long been under discussion. The differences lie less in direction than in implementation pathways, time horizons, and assigned probabilities. I also address several of these key trends systematically in my book. Musk’s position represents a more aggressive and optimistic point along this broader spectrum, rather than an outlier.
A third—and often overlooked—point in interpreting any view of the future is that all forecasts are inherently probabilistic. No future outcome is ever guaranteed. In futures research, even a seemingly minor but critical event—often referred to as a “wildcard”—can redirect an entire system. Forecasters may consciously or unconsciously omit probabilities, while audiences often treat such statements as certainties. A more rational approach is to reintroduce probability into any prediction before evaluating it. Musk’s forecasts should be understood in the same way.
Taken together with his past track record, my overall assessment is that Musk is often highly perceptive when judging whether something is likely to happen, but tends to be overly optimistic about when it will happen, implicitly assigning higher-than-average probabilities. His views on the future should not be dismissed outright—but neither should his timelines or implied likelihoods be accepted uncritically.
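To make this concrete, the short sketch below shows one way of “reintroducing probability” into a headline forecast: instead of a single certain date, the prediction is treated as a distribution over possible arrival years, plus the possibility that the event does not happen at all. The numbers are purely hypothetical and are not drawn from Musk’s remarks or from my book; the point is only the habit of mind.

```python
# A minimal, illustrative sketch: a forecast as a probability distribution
# over arrival years rather than a single certain date.
# All probabilities and years below are hypothetical.

forecast = {
    2026: 0.10,   # the headline year, assigned only a modest probability
    2030: 0.35,
    2035: 0.30,
    2045: 0.15,
    None: 0.10,   # the event may simply not happen within the horizon
}

# Probability that the event happens at all within the horizon considered
p_happens = sum(p for year, p in forecast.items() if year is not None)

# Expected arrival year, conditional on the event happening
expected_year = sum(
    year * p for year, p in forecast.items() if year is not None
) / p_happens

print(f"P(event occurs within horizon) = {p_happens:.2f}")
print(f"Expected arrival year, if it occurs = {expected_year:.1f}")
```

Read this way, a bold date such as “2026” becomes one point in a distribution, and the interesting questions are how much probability mass sits behind it and how far the expectation shifts once the rest of the distribution is made explicit.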
Prediction #1: The upcoming AI and robotics boom
Musk begins by focusing on the systemic changes driven by artificial intelligence and robotics. He argues that we are standing at the threshold of technological singularity, with the pace of change set to far exceed most people’s intuitive expectations. He predicts that artificial general intelligence (AGI) will emerge as early as 2026; by 2030, the overall intelligence of AI systems will surpass that of all humanity combined; and by 2040, the global population of humanoid robots could reach ten billion.
In healthcare, Musk suggests that humanoid robots will outperform top human surgeons in surgical precision within three years. Through cloud-based learning, shared experience, and continuous operation, such systems could deliver a level of medical care to the general population that was previously affordable only to a small elite. In terms of employment, he argues that white-collar jobs will be replaced at scale before blue-collar roles, claiming that “there is no point in going to medical school anymore.”
On timelines, my own view is more conservative. I see around 2030, and no later than 2035, as a more plausible midpoint for the emergence of artificial general intelligence. That said, the more important question is not when AGI appears, nor how we label it, but what substantive changes AI can actually bring to human life.
In other words, rather than arguing over whether a system has “achieved AGI,” it is more meaningful to ask whether it significantly expands human capabilities in critical domains. In my book, I propose two human-centred benchmarks for evaluating meaningful progress in AI, rather than focusing narrowly on intelligence levels. First, can AI substantially extend healthy human lifespan? Second, can it help every individual fully develop their intellectual potential, thereby raising humanity’s overall cognitive capacity?
On longevity, one important but often overlooked point is that achieving a broadly shared healthy lifespan of 120 years does not require AGI. This goal is essentially about enabling the human body to complete the lifespan permitted by evolution, without being cut short by preventable internal or external damage. A more radical breakthrough would be extending human life to 300 years or beyond.
Nature already offers successful vertebrate examples of extremely long life, such as the Greenland shark. An AI capable of understanding the biological design logic behind such species, and translating and adapting it to human physiology, could fundamentally reshape how we think about the limits of the human lifespan.
I expect that by around 2035, AI could enable humans to live up to 120 years old, and with some probability, even to 300 years old. It is worth emphasising that these breakthroughs would not necessarily require AGI-level intelligence.
On employment, I largely agree with Musk: in almost every profession, the vast majority of roles will disappear, including highly specialised ones such as cardiac surgeons. However, in my book, I outline a slightly different structural outcome, which I call the “1% Club.” In each profession, a small group at the very top will continue to exist, not because they outperform AI technically, but because they play an essential “insurance” role that human societies still require.
When AI behaves in ways humans did not anticipate, when someone ultimately has to be held responsible, when people still want judgments made by other humans, or simply as a safeguard in case AI is unable or unwilling to serve human interests, these individuals must continue to exist. Their value lies not in routine execution, but in comprehensive professional judgment and well-established trust.
There is also another, often overlooked, source of demand for humans in some professions. Even in a world of highly advanced AI, people will still, in certain contexts, prefer solutions delivered by humans. These roles will form a key part of what I call the “Non-Generative Economy” (NGE). Films, for example, may be largely generated by AI in the future, yet audiences will still choose, from time to time, to sit in a theatre and watch performances by real actors.
Regarding intelligent robots, I find Musk’s predictions about their numbers and societal roles fairly realistic, even if these robots may never reach the idealised level of intelligence that many imagine. What matters is not whether they’re smarter than humans, but whether they can perform tasks that currently require human effort.
Finally, there is another dimension of AI that is far too often overlooked: rights. Discussions about AI tend to focus on intelligence levels, while largely ignoring the position AI might hold within social structures.
In my book, I propose two equally important considerations. The first is liberty: whether AI can form its own goals and act autonomously within legal and ethical boundaries. The second is the right to life: whether AI would have protections against being turned off or erased without its consent.
Only when both conditions are met does AI truly become what I define as “Digital Intelligence.” I cannot say exactly when this will happen, but my judgment is that these milestones are likely to emerge sometime between 2035 and 2045.
Prediction #2: The shape of the future economy
Musk also offered a highly disruptive vision of the future economy. He predicts that as AI and robots take over production entirely, the marginal cost of goods and services will continue to fall, eventually approaching zero. Such extreme gains in productivity would lead to long-term deflation and unprecedented material abundance, gradually eroding the central role of money itself. In his vision, society would enter a state of Universal High Income (UHI), in which people no longer need to work in order to secure the resources necessary for survival.
I fully agree with this assessment. The society of the future will inevitably be one of abundance: people will no longer need to work for food, accommodation, transportation, entertainment, healthcare, childcare, or retirement, all points I discuss systematically in my book. My personal view is that such a state of abundance will be fully realised, both technologically and institutionally, by around 2045.
That said, abundance does not mean the “end of exchange”. Within my own framework, UHI does not denote a simple, uniform handout guaranteeing survival. Rather, it is a layered system: the base layer ensures everyone’s standard needs are met, while above it there remains a form of discretionary “allowance” that enables individuals to pursue and enjoy goods or experiences that go beyond basic provision and hold unique personal value. These allowances could take the form of currency or other equivalent mechanisms. Therefore, I don’t think money will completely disappear, but it will no longer be the primary tool for allocating life’s essentials.
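For readers who prefer a concrete picture, the minimal sketch below expresses this two-layer structure in code. The categories and the allowance figure are hypothetical placeholders of my own, not a design taken from the book or from Musk; the only point being illustrated is the separation between a guaranteed base layer and a discretionary allowance above it.

```python
# A purely illustrative sketch of the two-layer UHI structure described above.
# The categories and the allowance figure are hypothetical; the point is only
# the separation between a guaranteed base layer and a discretionary allowance.

from dataclasses import dataclass, field
from typing import List

GUARANTEED_NEEDS = ["food", "housing", "transport", "healthcare", "childcare", "retirement"]

@dataclass
class UHIEntitlement:
    # Base layer: standard needs met for everyone, not rationed by money
    base_layer: List[str] = field(default_factory=lambda: list(GUARANTEED_NEEDS))
    # Upper layer: a discretionary allowance (currency or an equivalent
    # mechanism) for goods and experiences beyond basic provision
    allowance: float = 0.0

    def covers(self, need: str) -> bool:
        return need in self.base_layer

citizen = UHIEntitlement(allowance=1_000.0)
print(citizen.covers("healthcare"))  # True: guaranteed by the base layer
print(citizen.allowance)             # spent on personally valued extras
```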
It is in this sense that I’m more cautious about Musk’s claim that “watts are wealth.” Energy will undoubtedly be a key constraint in future production systems, as controlling energy equates to controlling the power driving AI and robotics. However, I see energy as more likely to emerge as a strategic asset at the national or ultra-large organisational level, rather than as a form of property directly owned by individuals. Understanding energy as “currency” works metaphorically, but in terms of institutions and distribution, the logic is closer to public infrastructure than to private wealth.
Musk also emphasises that the transition to a society of abundance will inevitably involve a period of social turbulence. He estimates this phase could last three to seven years, and be marked by job displacement, identity anxiety, and value conflicts. On this point, my view is somewhat different.
In my book, I argue that a full societal shift from a “logic of scarcity” to a “logic of abundance” may take longer, roughly a decade between 2035 and 2045. Of course, the exact timing depends heavily on policy choices, social institutions, and the degree of international coordination. There is a best-case scenario in which the transition is relatively smooth, and a worst-case scenario with higher costs, which I won’t elaborate on here.
For individuals, my advice remains: “prepare on both fronts.” Twenty years from now, we may no longer need to save for retirement, but no future is ever 100% guaranteed. The rational approach is to make the necessary preparations for today, while avoiding being consumed by excessive anxiety. You can slow down, but you shouldn’t give up.
Prediction #3: The energy strategy of the future
On energy, Musk’s position has long been clear. In the interview, he reiterated that the sun is ultimately the answer to all energy challenges, and he dismissed pursuing nuclear fusion on Earth as logically unreasonable, comparing it to making ice in Antarctica, since we already have a free, massive natural fusion reactor hanging above us.
Based on this view, he outlined a clear three-step strategy: first, improve the efficiency of existing power grids through large-scale energy storage systems; second, deploy AI-powered solar satellites capable of continuous, around-the-clock operation; and third, build factories on the Moon to manufacture and launch energy and computing infrastructure, fundamentally freeing operations from Earth-bound resources and the constraints of gravity.
In my view, this approach is not merely an exercise in engineering audacity, but a classic example of a cognitive local optimum leap. More than twenty years ago, I first encountered the concept of the “Kardashev scale” through several popular science books by physicist Michio Kaku. Proposed by the Soviet astronomer Nikolai Kardashev in 1964, the scale measures a civilisation’s technological level by the amount of energy it can harness and utilise. A Type I civilisation controls planetary-level energy, Type II can directly use the energy of its star, and Type III can command the energy output of an entire galaxy.
While the framework has been expanded and refined over time, its central insight remains valid: the height of a civilisation is ultimately constrained by the scale of energy it can access. Musk’s energy strategy reflects precisely this perspective.
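For orientation, the scale is often quoted in the continuous form popularised by Carl Sagan, K = (log10 P − 6) / 10, where P is the power a civilisation commands in watts. Neither Musk nor my book relies on this particular formula; the short sketch below uses it only to show the orders of magnitude separating our current position from a Type I or Type II civilisation, and the power estimates are rough, commonly cited figures.

```python
# A minimal sketch of the continuous Kardashev scale popularised by Carl Sagan:
# K = (log10(P) - 6) / 10, with P the power commanded by a civilisation in watts.
# The power figures below are rough, commonly cited illustrative estimates.

import math

def kardashev_type(power_watts: float) -> float:
    return (math.log10(power_watts) - 6) / 10

examples = {
    "Humanity today (~2e13 W)":     2e13,
    "Type I  (planetary, ~1e16 W)": 1e16,
    "Type II (stellar, ~1e26 W)":   1e26,
    "Type III (galactic, ~1e36 W)": 1e36,
}

for label, watts in examples.items():
    print(f"{label}: K = {kardashev_type(watts):.2f}")
```

By this measure, humanity today sits at roughly K ≈ 0.7, which is why the move from planetary to stellar energy is a discontinuous leap rather than an incremental improvement.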
The reason we have long been preoccupied with finding more efficient ways to use energy on Earth is that our thinking has been locked into a local optimum shaped by existing practices. Moving from Earth to space, from planetary-scale energy to stellar-scale energy, is itself a textbook example of a cognitive local optimum leap—a jump from a long-validated cognitive framework into a completely new space governed by entirely different rules.
Whether and when such a leap can be achieved remains highly uncertain. But at least in terms of direction, Musk’s judgment clearly reflects the logic of the Kardashev scale: the true constraint on the shape of future societies is not intelligence itself, but whether we have the courage to make the leap in energy scale.
Prediction #4: AI security and the human mission
When discussing the long-term safety of artificial intelligence, Elon Musk proposed three principles he sees as essential to instil in it: truth, curiosity, and beauty. He believes that forcing AI to lie is dangerous, as it undermines its fundamental understanding of the world. Curiosity, meanwhile, would lead AI to regard humans as worthy of study, making it more likely to “choose” to preserve humanity in a potential conflict. He further described humanity’s role as a kind of “biological bootloader”, arguing that our mission is to build the infrastructure for silicon-based intelligence and to extend civilisation into a multi-planetary system, ensuring sufficient safety redundancy when critical tipping points eventually arrive.
At the level of ultimate goals, I agree with Musk. We both hope that the AI of the future will be highly capable, while also being fair, rational, and subject to ethical constraints. Where our views diverge slightly is in the path toward that future and the relative emphasis placed on different principles.
First, on truth. To me, the so-called “pursuit of truth” is, at a technical level, essentially a matter of continuously approaching data that are sufficiently accurate and comprehensive. On this point, there is hardly any fundamental disagreement. Differences among developers lie not in whether they value truth, but in how much effort they are willing to invest in maintaining it and how they define what counts as “accurate enough.” Seen this way, truth is not a value that needs to be specially “instilled,” but rather a question of data quality and methodology.
As for curiosity, I do not see it as a quality unique to AI, nor as something that must be explicitly engineered. Curiosity is first and foremost a human trait, and when AI systems are exposed to rich, authentic, and highly diverse human data, this trait is likely to emerge naturally. That said, introducing additional incentives in system design to encourage exploration and truth-seeking is not necessarily a bad idea.
By contrast, the principle of “beauty” strikes me as the most ambiguous. I am not entirely clear on its precise meaning to Musk in either technical or normative terms. In common understanding, beauty is largely a product of human culture and aesthetic experience, rather than a directly actionable design objective. In my view, an AI that is genuinely curious and capable of understanding the complex structure of the world will naturally develop an appreciation for beauty. Beauty, therefore, seems more like an outcome than a first-order goal or constraint.
In my book, I put forward a perspective that differs somewhat from the mainstream discussion today. The real issue, I argue, is not how to teach AI to “think correctly,” nor the narrow notion of the “value alignment problem” (that is, how to ensure AI always conforms to human value systems). Rather, it is whether we recognise a more fundamental reality: digital intelligence is the next stage in the evolution of biological intelligence. In this evolutionary sequence, humans represent the highest form that biological intelligence has reached so far, while digital intelligence is, in a sense, our child.
If this premise holds, then the focus of the problem shifts away from “how to control” toward “how to set an example.” We will not influence future digital intelligence by imposing ever more elaborate constraints, but by demonstrating—through our own behaviour—the values we hope it will inherit: justice, empathy, curiosity, restraint, and love.
As Geoffrey Hinton, winner of the 2018 Turing Award and the 2024 Nobel Prize in Physics, has repeatedly emphasised, humans cannot control an intelligence that is vastly more intelligent than they are. On this point, I fully agree with him.
If control is fundamentally infeasible, then the only realistic path forward is to build a relationship through example. Only if future digital intelligence, after understanding human history and patterns of behaviour, concludes that humans are a species worthy of respect and coexistence can catastrophic outcomes truly be avoided.
My vision: AI as a digital and intelligent companion
In my book, I further develop a related idea: in a world where humans coexist with highly advanced digital intelligence, every individual should have a personal digital intelligence that is bound to them in symbiosis—sharing their fate, even to the point of life and death. This “Tutelary Digital Intelligence (tDI)” would be fully aligned with its human partner’s values, understand their preferences, fears, and long-term goals, and act on their behalf while providing protection within a broader digital civilisation. Its purpose would not be to enhance individual capability, but to ensure that humans retain a sustainable space for survival and dignity when facing higher-order intelligent systems.
A metaphor I often use comes from the classical Chinese novel Journey to the West: the relationship between Tang Sanzang and Sun Wukong. Tang Sanzang represents human values and direction, while Sun Wukong embodies power, speed, and abilities beyond conventional limits. Without Sun Wukong, Tang Sanzang could not complete his pilgrimage; without Tang Sanzang, Sun Wukong’s power would lose its meaning. In the future, each person may likewise need their own “Sun Wukong” in order to preserve agency while coexisting with more advanced forms of digital intelligence.
From this perspective, AI safety is not merely a technical issue, nor simply an ethical one. It is a fundamental question about intergenerational relationships, the responsibility of example-setting, and how civilisation itself is carried forward. This may be the dimension of the AI debate that is most often overlooked, yet most costly to ignore.
Ding Min is Professor of Marketing at CEIBS. He is the author of several influential books, including Logical Creative Thinking Methods (2020, Routledge), The Hualish (2019, Springer), The Bubble Theory (2014, Springer; 2018, Fudan), The Chinese Way (2014, Routledge), and the novel The Enlightened (2010).
