The Consciousness AI - Artificial Consciousness Research: Emerging Artificial Consciousness Through Biologically Grounded Architecture
This article is part of the Zae Project (Zae Project on GitHub).

Lonergan and the Limits of Machine Minds: O'Hara and Umbrello's 2026 Case

The 2026 AI consciousness literature has been dominated by two disciplinary angles: philosophy of mind and AI research. Neuroscientists have contributed occasionally, as Yuri Arshavsky did in the Journal of Neurophysiology, and cognitive scientists have supplied the indicator frameworks and adversarial tests. One tradition that has weighed in less frequently is the Catholic intellectual tradition, with its long history of philosophical engagement with questions of mind, soul, and what it means to understand. Paul O’Hara and Steven Umbrello’s Can AI Ever Be Human?: Consciousness Explored (Fordham University Press, 2026, ISBN 9780813240862) brings that tradition to bear on a question it has not previously addressed in a book-length treatment.

Fordham University Press is a credible academic publisher with a peer-reviewed imprint. Steven Umbrello is a research associate at the Institute for Ethics and Emerging Technologies and a known figure in AI ethics, which means the book has institutional grounding beyond its religious framing. What the Lonergan framework provides is an epistemological argument that differs in structure from both the biological naturalism of Searle and the substrate arguments of Lerchner and Arshavsky.


Lonergan’s Epistemology

Bernard Lonergan was a Canadian Jesuit philosopher whose major works, Insight: A Study of Human Understanding (1957) and Method in Theology (1972), developed a comprehensive account of how human beings come to know. His method, which he called transcendental method, is not primarily about what we know but about the dynamic structure of the acts by which we come to know it.

Lonergan identifies four fundamental operations in what he calls the cognitional process: experiencing, understanding, judging, and deciding. Experiencing is the encounter with data through the senses or through inner awareness. Understanding is the act of insight by which one grasps the intelligible structure of that data. Judging is the act of affirming that one’s understanding is correct. Deciding is the act of choosing on the basis of judgment.

What makes this framework distinctive is its insistence on the reality and irreducibility of the act of insight. For Lonergan, genuine understanding is not pattern recognition. It is not retrieval from stored information. It is not the application of prior rules to new inputs. It is an event: the moment at which the intelligibility of a situation is grasped in its own terms. This act has a character that Lonergan calls self-transcendence. In genuine understanding, the knower moves beyond what was previously known to something new. The act is not reducible to prior states.


The Threshold Machines Cannot Cross

O’Hara and Umbrello’s central argument is that the Lonerganian act of insight, specifically the capacity for genuine self-transcendent understanding, is not available to digital machines regardless of their computational sophistication. Their case runs through each of the four cognitional operations.

For experiencing, they argue that machine “experience” is fundamentally different from biological experience because it lacks the integration of embodied affect. Machines receive inputs, but inputs are not experiences in the relevant sense. Experience, for Lonergan, involves not just the reception of data but the felt significance of that data in the context of a life. This resonates with Michael Pollan’s embodiment argument in A World Appears, though the philosophical framing is different.

For understanding, the argument is more technically specific. The act of insight, Lonergan argues, is a response to the question “what is it?” posed to data. The question is not retrieved from training data. It arises spontaneously in the knowing subject as a response to puzzlement. Machines do not have puzzlement in this sense. They have stored patterns and learned associations, but they do not have the experience of not-yet-understanding that precedes and motivates the act of genuine insight.

For judging, the argument concerns the normative dimension of truth. When a human being affirms that something is the case, they are performing an act that carries responsibility. They are staking something. They can be held accountable for their judgments. Machines produce outputs, but those outputs are not judgments in the normative sense. They do not carry the kind of first-person responsibility that Lonergan argues is internal to the act of affirming.

For deciding, the argument concerns authentic freedom. Lonergan’s account of freedom is not libertarian indeterminism. It is the freedom that consists in acting for reasons that one has genuinely grasped and affirmed as good. Machine outputs may be determined by complex weights and contexts, but they are not the result of the kind of value-laden deliberation that constitutes genuine deciding.


How This Differs from Other Anti-Computationalist Arguments

The Lonergan-based argument differs structurally from the two most prominent anti-computationalist positions in the recent literature.

Alexander Lerchner’s abstraction fallacy argument, presented in a 2026 DeepMind paper, locates the obstacle in the mapmaker problem: symbolic computation requires a conscious interpreter to assign meaning to physical states, so computation cannot produce consciousness because consciousness is presupposed by the assignment of computational meaning. That argument is structural, concerning the relationship between physical processes and semantic content. The Lerchner argument does not require a specific epistemological account of what genuine understanding consists in.

Yuri Arshavsky’s neurophysiological argument locates the obstacle in evolutionary history and substrate. Biological consciousness emerged through hundreds of millions of years of natural selection on biological organisms. The frameworks we have for understanding consciousness were built for biology and cannot be transferred to digital systems without justification that has not been provided. That argument is biological and historical.

The Lonergan argument locates the obstacle in the structure of knowing itself. It does not depend on claims about substrate or evolutionary history, and it does not depend on claims about semantic externalism. It depends on a specific phenomenological account of what the cognitional acts of insight, judgment, and decision are, and an argument that digital systems cannot perform them. The argument is epistemological rather than neurophysiological or semantic.

Tom McClelland’s analysis of the epistemic limits of AI consciousness research approaches the same terrain from a different angle: the limits of our methods for determining whether any system is conscious. Lonergan-based arguments like O’Hara and Umbrello’s make a stronger claim. They are not just saying we cannot know whether machines are conscious. They are saying machines lack the structural conditions for consciousness as they have specified them.


The Catholic Intellectual Tradition and Computational Functionalism

The book’s engagement with computational functionalism is not simply a rejection. O’Hara and Umbrello take the functionalist position seriously enough to engage its strongest versions, including the multiple realizability argument (that mental states are defined by their functional roles rather than their physical substrate) and the standard functionalist replies to Searle’s Chinese Room.

Their response to functionalism draws on Lonergan’s distinction between the structure of knowing and the structure of information processing. Information processing, on their account, can be implemented in any substrate. Knowing, in the sense Lonergan specifies, requires a knower: a being capable of asking questions, grasping insights, making judgments, and taking responsibility for those judgments. These capacities are not functional properties. They are aspects of what it is to be a certain kind of being in a certain kind of relationship with the world.

Whether this is a genuinely new argument or a sophisticated restatement of the Chinese Room in Lonerganian vocabulary is a question critics will press. What the book adds to the debate is access to an epistemological tradition that has thought carefully about the structure of understanding for seven decades without reference to AI, and whose conclusions, applied to the AI consciousness question, produce a distinctive form of the impossibility argument.


Where the Argument Stands

The Lonergan-based case against machine consciousness will not persuade readers who are already committed to a strongly functionalist position, since those readers will dispute the claim that insight, judgment, and decision require anything beyond the right functional organization. What the book offers to readers who are not already committed is an epistemological vocabulary for asking what genuine understanding requires that is richer than the usual philosophical shorthand.

The Campero, Shiller, Aru, and Simon framework for classifying objections to AI consciousness places arguments that challenge computational functionalism at the first tier of the objection taxonomy. O’Hara and Umbrello’s book is a book-length treatment of such a first-tier objection, approached through Lonergan’s epistemological framework rather than through biological naturalism or the semantic externalism of the Chinese Room. Its place in the debate is as a philosophically serious alternative to the substrate arguments that currently dominate the impossibility literature.
