AI in 2035 - A hope-filled vision for a humane future with AI
A hopeful, humane future with AI is possible. One that is decidedly different from the scenarios that are currently fashionable (human extinction, massive job displacement, useless class, et cetera). Have a look at 2035.

So far, I and others have talked and written about the technological reasons why these scenarios based on imminent AGI or superintelligence seem very unlikely 1. But we need to go beyond this: since “we cannot create what we cannot imagine” 2, we need to spell out concrete future scenarios and turn them into vivid, lively narratives that show alternatives to the prevalent stories of tech executives, lobbyists, and the traces of their opinions in much of media and entertainment. The point is not to predict exactly what will happen in the future; it is to open up the space of realistic, attainable scenarios and re-establish that “our future hasn’t been written yet. No one’s has. Your future is whatever you make it. So make it a good one” 3. Here’s one attempt. It may later be detailed with clear descriptions of what needs to happen when, how, and why, and it might be joined by many more scenarios. The recipe is: (a) let a few altered assumptions play out over a horizon of 10-15 years at societal scale; (b) make it realistic; (c) add a pinch of hope 4. This is a start.
Initial Situation
AI development in 2025 is driven mainly by a few highly valued start-ups in a few regions of the world, fuelled by massive amounts of venture capital and the promise that AGI is in immediate reach (within the next two years) 5. They invest heavily in GPUs that will be outdated and worthless within about five years of purchase 6.
Here we sketch a realistic, hopeful scenario for our societies in 2035, based on the following three mild 7 assumptions:
- There will be no fundamental leaps towards AGI, and AI agents becoming commercially useful in a general sense will still be ten years in the future.
- At the same time, AI developers will create ways of designing AI in a “pro-human” 8 way.
- Within the next 10 years, advances in neuroscience-inspired machine learning will make continual learning in AI possible and drop compute demands by two to three orders of magnitude.
In the following, we sketch how such a scenario could play out economically, socially (in terms of power concentration), and technologically: What does it mean for the current incumbents of AI power, and for more local companies running on open-source AI? For societies and individuals, for education and labour?
Phase 1: Plateau (2025-2028)
Venture capital correction, not collapse
As AGI remains elusive and near-term AI agents prove less transformative than hoped, valuations of AI startups cool sharply. There’s a painful but contained “AI winter”, mostly hitting speculative ventures. The economic damage is limited because much of the invested capital went into real physical infrastructure (GPUs, data centres, networking), which can be repurposed for other high-performance computing tasks - cloud services, simulation, biotech, climate modelling, etc. Large firms (e.g., in pharmaceuticals, telecom, and green tech) quietly buy distressed AI assets cheaply, broadening the base of who benefits from AI infrastructure. Governments and supranational bodies (e.g., EU, ASEAN, African Union) frame compute as “strategic infrastructure”, akin to broadband or electricity. Investment moves from “moonshots” to utility models (long-term, regulated, cost-recovering) where pricing stabilizes under long-term contracts.
Open sourcing of compute and models
Open-source models rapidly close the performance gap with stagnant proprietary systems. As hardware rapidly depreciates and foundational models become open or cheap, the barriers to entry fall. Smaller firms, cooperatives, and universities gain access to powerful AI systems as a flourishing ecosystem of open models, local deployments, and AI cooperatives emerges - much like what happened after the dot-com crash when open web standards prevailed.
The rise of local AI
The initial concentration of power in a few AI labs and regions (U.S., U.K., China) triggers public concern. Governments and international bodies push for open standards, interoperability, and public AI research to reduce dependency on private monopolies. The “AGI race” narrative loses legitimacy; the discourse shifts from “who builds godlike AI first” to “how do we use existing AI responsibly for shared benefit”, helping re-establish global political stability. As full automation disappoints, attention returns to augmenting human labour rather than replacing it. Education, healthcare, and creative industries rediscover hybrid human-AI collaboration models. This fosters a more pluralistic innovation landscape, where local AI applications (in agriculture, public services, education, etc.) flourish. Policy emphasis moves from “safety from AGI” to safety of deployment and data integrity. Profit comes from deployment and integration (building AI into health systems, agriculture, logistics), not from promising AGI.
Phase 2: The pro-human turn (2027-2031)
From Alignment to Relationship
The late-2020s pro-human wave grows directly out of this gap in narrative and legitimacy. The “alignment” frame of the 2020s (AI obeying human commands safely) gives way to a relational frame as customers start preferring digital technology that respects their healthy boundaries. It begins with public agencies, in fulfilment of their mission to be “for the people”, procuring AI systems that are co-designed to enhance human capacities without eroding the qualities that make us distinctively human. NGOs, committed for years to promoting healthier digital technology, pick this up as results underline a positive turn. A movement forms, likened by some to the “green” movement in its breadth and momentum. Developers in values-driven start-ups now explicitly model ethical co-development: AI systems must preserve or strengthen relationality, autonomy, embodiment, social behaviour, and purpose. This ends the “AI-as-brain” metaphor: AI systems become tools of connection rather than artificial minds. By design, these systems cannot induce dependency or “AI psychosis”. Their communication protocols enforce clarity, calmness, and healthy distance. Public agencies, NGOs, and values-driven start-ups find a window to shape standards and norms while the big AGI players are disoriented. Economically, a new “trust economy” emerges: the competitive edge lies not in raw intelligence, but in transparency, humaneness, and emotional safety.
Trust, the new competitive advantage
The pro-human design paradigm becomes a brand and regulatory standard - much like “organic” or “fair trade” labels in food. Companies that can demonstrate genuine “pro-human certification” gain trust premiums and policy incentives. Firms optimized for scale and user capture (social media, ad-based AI interfaces) find their business models incompatible. They either pivot or decline. Smaller, values-driven AI labs and cooperatives thrive - they can innovate around trust, personalization, and well-being. New markets emerge: Education (AI mentors supporting intrinsic motivation rather than compliance), health (AI companions enhancing self-awareness, affect control, and peace, not simulation of affection), governance and democracy (civic AI systems enhancing participation and deliberation), and general work (AI tools augmenting creativity and collective intelligence, not replacing labour). The trust economy replaces the attention economy.
Human agency renaissance
Post-AGI disappointment makes society crave grounding, relationality, and purpose. Pro-human AI answers that demand: systems built to enhance human agency, not mimic it. This cultural turn creates market pull (not just regulatory push) for humane design. People experience AI not as competition, but as tools that strengthen personhood. Time saved by automation is reinvested in relationships, culture, and civic life. “AI etiquette” and design ethics evolve: AI systems are evaluated by how they help humans flourish, not only benchmark scores. Education systems teach AI relational literacy: how to engage with machines that respect autonomy and difference. Drivers for adoption are societal exhaustion with dehumanizing tech experiences; strong alignment with mental health and well-being policy goals; integration with education and public-sector services; and cultural resonance with sustainability values. Adoption begins in Europe, Japan, and smaller democracies, spreading through regulatory emulation and consumer pressure. Over time, global demand for trusted AI outweighs marginal productivity gains from manipulative or extractive systems.
Phase 3: Lean intelligence (2030-2035)
From academic breakthrough to market adoption
Early academic breakthroughs at the end of the 2020s in brain-inspired continual learning hint at major efficiency gains: AI systems that learn on the fly without retraining from scratch. Now these new learning architectures (e.g., neural networks with local learning rules) reach production quality. Compute requirements for model adaptation fall by 100x-1000x, and training becomes incremental, local, and personalized, enabled also by higher sample efficiency. The global AI economy undergoes a structural deflation: massive, centralized training clusters further lose strategic importance, and the remaining capital moat vanishes. Value shifts to distributed deployment and integration. Decentralized innovation flourishes: small labs, local firms, cooperatives, research hubs, and even community centres can now run adaptive, context-aware, pro-human AI on modest hardware. Policy that once focused on building new GPU farms completes the transition to upgrading existing ones for energy efficiency and edge integration. Governments and publics, after the mid-2020s AI bubble, have political memory: they are more sceptical of monopolization and more willing to regulate towards openness.
Ubiquitous AI
With low compute costs, AI becomes as ubiquitous as smartphones: calm in tone compared to the early 21st century’s social media technology, but embedded in devices, local networks, and classrooms. Data centres from the AGI boom are retrofitted for edge coordination, climate simulation, and open scientific modelling; their sunk cost yields continuing value. AI becomes a general public good: most models are open, self-improving, and tuned locally. Economic growth is steady, human-centred, and less capital-intensive; the centre of innovation moves from speculative finance to education, health, and climate adaptation. The biggest threat to the “lean intelligence” future isn’t technical failure - it’s institutional inertia and rent-seeking by incumbents. But once intelligence becomes computationally cheap, power can’t hoard it easily. In that sense, the very success of neuroscience-inspired efficiency makes central control brittle; it erodes the economic foundation of monopolies faster than policy alone could. Now, education systems evolve (instead of teaching to outcompete machines, curricula emphasize relational intelligence, creativity, ethics, and self-direction) and labour markets stabilize (AI becomes a cognitive exoskeleton for workers, augmenting care, craftsmanship, and local governance rather than replacing them).
Conclusions
The failure of AGI expectations triggers a moral and structural realignment. By the mid-2030s, intelligence is no longer centralized, extractive, or opaque. Pro-human AI is technology designed to deepen, not diminish, humanity. Coupled with efficient, continual-learning systems, this leads to a globally distributed, low-energy, human-centred intelligence landscape. By 2035, the narrative has shifted from surpassing humanity to supporting human flourishing. This has profound effects, for example, on:
- The economy: A sustainable, equitable intelligence ecosystem is built around collaboration, not competition.
- Mental health: Calm technology built explicitly to strengthen humaneness (e.g., by slowing down) rather than capture attention has a soothing effect on various hotbeds of health crises.
- Ecology: Trusted AI systems running as public infrastructure lend their predictions credibility as a second opinion on policy interventions, making evidence-based policy-making a bit more of a reality.
- Social cohesion: Local communities start healing as the attention economy’s incentives for divisive public expression fade out. Global tensions relax as AI-mediated remote work eases immigration-related pressures while supplying workforce where it is needed, contributing to global development.
In short, on this trajectory, society in 2035 could be decidedly healthier, more stable, and more equitable. It can be a future worth living in 9.
Footnotes
1. Compare for example my TEDx talk, AI as normal technology by Princeton computer scientists Arvind Narayanan and Sayash Kapoor, tech entrepreneur Anil Dash’s The majority AI view, or the discussion between podcaster Dwarkesh Patel and legendary developer Andrej Karpathy, alongside the recent publications by Gary Marcus (Professor emeritus, NYU) and Subbarao Kambhampati (former president, AAAI).
2. Quote by Lucille Clifton.
3. Quote by the character Dr. Emmett “Doc” Brown in “Back to the Future Part III”.
4. This is not a problem for realism: when one seeks a script for an entertaining action movie, one biases the story search towards those including catastrophes, so that the hero can shine. Similarly, when one looks for a future one wants to live in, one searches the story space for the subset of stories that are hopeful and positive. In neither case does this make the scenarios less realistic, likely, or attainable. Whether they ultimately come to pass depends on who takes what action. It is the intention of future scenarios to make us act towards those we deem worth living in. Compare swissfuture - Swiss Society for Future Studies and the Hope Barometer.
5. See, e.g., ai-2027.com.
6. Ed Zitron gives a dire outlook on bad economic investments in the hater’s guide to the AI bubble.
7. These three assumptions are “mild” in the sense that their occurrence is considered at least likely by AI experts. Their implications (i.e., how the scenario unfolds and the future to which it leads) are of course subject to many variables, and much more uncertain. The scenario sketched below is hence “realistic” in the sense that each step is attainable through possible concrete actions of today’s stakeholders that are neither extreme nor highly unlikely.
8. By “pro-human AI” we mean AI systems designed so that they do not diminish the capacities commonly understood as making humans what they are at their core, e.g., their relationality, freedom, autonomy, purpose-seeking, sociality, etc. Such AI systems, for example, would not lead to “AI psychosis” the way today’s LLMs can, because they have been designed not to “mess” with those capacities (as humans won’t tolerate any bit less of what makes them fully human).
9. Here is a curated list of links to selected background material on AI and society that I published elsewhere, helpful for deepening the assumptions underlying the why and how of this scenario: (a) A guide to AI is a concise introduction to AI for non-technical people (its future, challenges, and opportunities for businesses and society); (b) Evidence-based risk assessment for public policy gives a brief perspective on AI risks and argues against views driven by a science-fiction worldview; (c) The stochastic nature of machine learning discusses the implications of AI’s nature for high-consequence applications, derived from first principles for non-technical readers; (d) Assessing deep learning is a thorough survey of the implications of deep-learning-based AI for our humanity, including an introduction to its working principles. PDFs are also available from my publications page.