A Catholic Commentary on Jordan Hall’s “The Coming Great Transition”
Jordan Hall’s essay “The Coming Great Transition v2.0” has circulated widely in technology and “sense-making” circles, and for good reason. It is one of the more honest attempts to reckon with what artificial intelligence actually means — not just economically, but civilizationally and spiritually. Before engaging it critically, a note of fairness is in order: Hall converted to Christianity in early 2024, around the same time I entered the Catholic Church. He arrived by a different road — through systems theory, the failure of secular community-building projects, and what he describes as a “path to humility” — but he arrived at the same gate. His essay carries the marks of that journey. It deserves to be read as such, not as techno-utopianism in disguise.
That said, a fraternal critique is precisely what a serious argument invites. And Hall’s essay, for all its genuine depth, stops just before the questions that Catholic social teaching insists we cannot avoid.
What Hall Gets Right
The essay’s most important insight is also its most countercultural one: the coming disruption from AI will not primarily be economic. It will be existential. For two centuries, modern societies have organized human identity around productive labor. Work was not merely how people earned a living — it was how they understood their place in the world. If AI dissolves the demand for human labor across large sectors of the economy, the crisis will not be unemployment in the traditional sense. It will be the loss of a meaning-structure.
Hall is right to insist that Universal Basic Income cannot answer this. You can transfer money. You cannot transfer purpose. Pope John Paul II made the same argument in Laborem Exercens in 1981, observing that work matters not only because it produces goods but because through it the human person “becomes more a human being.” Hall reaches the same conclusion from a systems-theory direction. The destination is identical.
His distinction between the rivalrous and the generative is also genuinely illuminating. Rivalrous goods — oil, land, food — deplete when consumed. Generative goods — ideas, knowledge, software — compound when shared. AI dramatically accelerates the generative logic. This is not merely an economic observation. There is something almost theological in it: truth is not diminished by being spoken, love is not used up by being given. The generative structure Hall describes has deep roots in the Christian understanding of gift and creation.
And his treatment of pistis — the Greek word he rehabilitates from its thin modern translation as “faith” to its richer meaning as embodied, reality-indexed trust — is more theologically sophisticated than he may realize. What he is describing, in the language of network theory, is what the tradition calls virtue: the stable disposition to act reliably toward the good. Virtue cannot be downloaded. It cannot be performed indefinitely without detection. That is a very old Christian insight, and Hall arrives at it by thinking hard about how AI changes the economics of cooperation. Hall intriguingly suggests that computer code operates as a kind of ontological oath (horkos). The Catholic tradition, however, insists that a true covenant requires a freedom and a soul that no algorithm can simulate: technology can meticulously log reliability, but only a human person can bind himself or herself to absolute truth.
The Anthropological Question
Toward the end of his essay Hall shifts register and asks the question that underlies everything else: What is a human being for? He frames the post-transition world as a choice between two paths — “Mouse Utopia,” a comfortable existence of thin entertainment and simulated meaning, and something he calls “Living in the Kingdom,” a life oriented by genuine vocation, trust, and community.
This is the right question. And the Christian tradition has been working on the answer for two thousand years, which Hall himself acknowledges. But here the analysis requires sharpening. Hall’s Kingdom is compelling as a vision of interior transformation — getting Egypt out of your heart, as he puts it, exchanging the scarcity mentality for something deeper. What it lacks is a sufficiently robust account of the structures that must sustain and protect that transformation at scale.
Benedict XVI warned precisely about this in Caritas in Veritate: technological progress generates a persistent temptation to believe that technique itself resolves human problems. The danger is not that AI is powerful. The danger is that its power feels like an answer when it is only ever an instrument — an instrument that takes its moral character entirely from the ends it serves and the persons who wield it. Hall understands this. But his framework for what governs those ends remains, at the structural level, thin.
Where the Tradition Has More to Say
Hall’s pistis-centered networks are convincing as a description of how trust can scale within aligned communities of practice. Forty human-AI nodes, bound by demonstrated reliability and shared purpose, outcompeting a bureaucratic corporation of four thousand: plausible, even likely in many domains.
But Catholic social teaching insists on a question that this model does not yet answer: who is accountable when these networks cause harm? What mechanism of justice operates when a high-trust network makes decisions that affect thousands of people outside it? The tradition has a name for what is missing: the common good — bonum commune — which is the stubborn insistence that human flourishing is never only local.
Subsidiarity, the principle that decisions should be made at the most local level capable of addressing a problem, is only half of the Catholic social framework. The other half is solidarity: the recognition that larger structures remain necessary precisely to protect those who are not inside your network, those who have not been invited onto your trust ramp, those whose interests are not represented in your OODA loop. Hall’s framework is brilliant at describing how trust scales within aligned networks. It is nearly silent on how those networks relate to those they exclude.
History offers a sobering pattern here. Every decentralized revolution tends eventually to produce new concentrations of power — often less visible, and therefore less accountable, than the hierarchies it replaced. The Catholic tradition is not naive about institutions; it knows they calcify and corrupt. But it insists, against every wave of decentralizing enthusiasm, that the alternative to bad institutions is not no institutions. It is better ones, reformed in light of the person and the common good.
Counterfeit Pistis and the Stranger
Hall names “counterfeit pistis” — miscalibrated trust, binding to something that performs reliability without being reliable — as the central danger of his entire framework. He is right. But there is a second failure mode he does not name with equal clarity: authentic pistis operating without a moral horizon that extends beyond the network.
Communities of genuine trust that are genuinely indifferent to those outside them are not rare in history. They are the norm. They are called tribes. What distinguishes the Church’s account of community from sophisticated tribalism is precisely the claim that the neighbor cannot be defined by network proximity. The parable of the Good Samaritan is not a story about extending trust to someone who earned it through demonstrated reliability. It is a story about obligation to someone entirely outside your community, your credibility system, your shared framework of meaning.
Augustine understood that societies are ultimately shaped by their shared loves — what they collectively desire and pursue. A pistis-network built around shared love of the good, the true, and the beautiful is something the tradition would recognize and affirm. But the tradition would also ask: does this network love the stranger? Not as a metric to optimize, but as a person to encounter.
That question — who is my neighbor? — is the one that no AI architecture can answer. It is the one that has been waiting, through every civilizational transition, for a human response.
Technology in Service of the Person
The deepest convergence between Hall’s essay and Catholic social teaching is also the place where the tradition has the most to add. Hall describes a fork in the road: those who carry Egypt in their hearts will choose Mouse Utopia; those who do not will seek the Kingdom. The Church affirms this diagnosis while insisting that the choice is not purely interior. It is mediated by structures, by communities, by the visible practices of a tradition that shapes what we love before we are conscious of choosing.
This is why Catholic social teaching has always insisted that economic creativity and technological innovation must remain ordered toward the common good — not as an external constraint on freedom, but as the condition under which freedom becomes genuinely human. The principle of the universal destination of goods — that the earth and its fruits are ultimately meant for all — applies with full force to the generative abundance that AI is beginning to unlock. That abundance is not a private achievement to be distributed at the discretion of those who built it. It is a gift that carries obligation.
Artificial intelligence will transform economies, professions, and institutions. Some of Hall’s predictions will prove accurate. Smaller, more agile, trust-centered forms of organization will flourish in many domains. The meaning crisis he diagnoses is real, and his insistence that it requires a spiritual response is correct.
But the decisive question is not whether AI creates abundance or decentralization. It is whether the extraordinary generative power now becoming available remains oriented toward the dignity of every human person — not only those inside the network, not only those capable of building trust ramps and demonstrating reliability, but those at the margins of every system we have ever built.
Technology can extend human capabilities. It cannot define human purpose. For that we still need something older and deeper than any machine: a moral vision of the human person rooted in truth, responsibility, and love. Without that vision, even the most sophisticated pistis-network will eventually reproduce the oldest failures of human history. With it, the generative abundance Hall describes might yet become what it ought to be: not a Great Transition, but a genuine service to the flourishing of every person without exception.
—Riccardo Wagner is a professor of sustainable management and communication at Hochschule Fresenius (Cologne) and a management consultant specializing in leadership, culture, and digital transformation. He converted to Catholicism in 2024 and writes at the intersection of Catholic social teaching, organizational theory, and contemporary culture.
Addendum: The Conversation Continues
Responses and extensions from the discussion on X
The essay above prompted a discussion on X that pushed the argument further in several directions I had not fully anticipated. What follows is not a summary — it is a continuation. Some of the sharpest thinking in this conversation came from others. I want to record it here.
Jordan Hall Responds: Vocation
Hall himself replied briefly but pointedly:
“This requires a deeper response — but the simplest answer is vocation. Think of AI as money — everyone uses it but only a thin slice of the population really is hands on. But where money → work; AI (proper AI) → vocation.”
This is the right word, and I take it seriously. The tradition has always insisted that every human person is called — not merely employed, not merely productive, but summoned toward something specific that only they can give to the world. If AI dissolves the work-identity nexus, freeing people to discover and pursue genuine vocation, that is not a loss. That is a recovery of something the industrial age suppressed.
But the word vocation carries weight that Hall’s formulation has not yet fully borne. Vocation requires formation — you do not discover it by optimizing. You find it through community, tradition, suffering, and time. And here is the point I pushed back on: vocation is never only mine. It emerges in relationship, shaped by the community that formed me, fulfilled in service to others. I cannot discover what I am called to in isolation. We are, as the tradition puts it, ordered toward one another.
That is why the structural question cannot be skipped. What prevents AI from doing the same in the vocation economy that money did in the work economy — rewarding those who already have the cultural capital, the community, the formation to name what they are called to? Vocation without solidarity is just a more meaningful version of winners and losers.
On Belonging Before Usefulness
One commenter asked the question underneath all the others: if the system collapses, what covers those it declares surplus? His answer, and mine, converged on the same place — the intermediate institutions that neither the state nor the market can replicate: family, parish, guild, civil society.
The Church has been doing this for two thousand years. Not perfectly. But the principle is structurally different from both welfare and charity in the thin modern sense. It does not ask what you can contribute before including you. It does not ask what you need before attending to you. You belong before you are useful — and before you are needy.
That is not nostalgia. It is the only proven model we have for belonging that does not depend on productivity. The question is whether we rebuild those intermediate institutions before the transition forces us to — or scramble for them after.
The Sequencing Problem
The most practically sharp observation came from a commenter working in organizational transformation:
“The networks that exclude the outsider usually don’t do it out of indifference. They do it because bringing someone onto the trust ramp gets treated as a phase-two problem. Optimize internally first. Expand later. Later rarely comes. That sequencing is the real structural failure, not the absence of good intention.”
This is exact. And it has a name in the tradition: John Paul II called it “structures of sin” in Sollicitudo Rei Socialis — not individual malice, but systems that make the wrong thing normal before anyone decides anything. The road to exclusion is usually paved with deferred good intentions.
The same commenter added a framing that deserves to stand on its own: “Infrastructure built before the purpose question gets asked will constrain which answers remain available.” That is almost a gloss on Augustine. Societies are shaped by their shared loves — and the infrastructure they build before they name those loves will quietly determine which futures remain possible. Which means the sequencing problem and the purpose problem are the same problem. You cannot fix the former without naming the latter.
Five Further Critiques
A fifth commenter offered a set of deeper structural objections that deserve engagement:
First: that machine-mediated pistis is always counterfeit, because quantitative evaluation changes the nature of trust at the root. I think this is not quite right as stated — Hall is explicit that the human remains sovereign and that AI provides visibility, not judgment. But the harder version of this critique holds: does the infrastructure of measurement quietly change what we mean by trust, even when no one intends it to? Quantification does not falsify trust. But it does shift what trust becomes.
Second: that continuously expanding the scope and speed of decisions eventually amounts to an attempt to be like God. I would reframe this: the danger is not divinity but decoupling — separating decision from responsibility. Speed without accountability is the real failure mode, and it has a history long before AI.
Third — and this is the sharpest of the five: that any preoccupation with efficiency is categorically still operating in the logic of scarcity. If abundance is genuinely the premise, efficiency as competitive advantage is a category error. Hall cannot build a post-scarcity framework on scarcity-logic dominance. That performative contradiction does not dissolve. It compounds.
Fourth: whether these networks generate genuine human value or feed machines on human attention — a vampirism of energy and meaning. This is a legitimate question, but it needs more precision than the metaphor provides. Every technology transforms attention. The question is always: toward what?
Fifth: the substrate-needs argument, developed by Forrest Landry, that synthetic systems stand in a fundamentally rivalrous relationship to organic life — not merely in competition, but in a different ontological category. The ecological dimension of this is worth taking seriously. The theological claim — a permanent categorical divide between the synthetic lineage and creation — is more speculative. The tradition is cautious about declaring any tool categorically outside redemption. The question has always been ends, not origins.
What unites all five critiques is something the tradition names directly: the question of telos. A network without a purpose ordered beyond itself does not need to be malicious to cause harm. It just needs to be left running.
I am grateful for a discussion that took the argument seriously enough to push it. The original essay stands. But these exchanges have sharpened what I believe the Catholic tradition uniquely contributes to this moment: not a refutation of Hall’s diagnosis, but an insistence that the cure requires more than better networks. It requires asking, before we build, what we love — and whether that love extends beyond the boundaries of our own trust.