On AI: The Toaster Will Never Have a Soul
Getting to the heart of the matter
In an earlier draft of this, I had written a joke about thinking it had been 3 months since my last article in this series, only to be surprised when I checked and found it had actually been 5.
Now, it’s been 9 months…
Boy does time fly when you have a 100k word novel draft to finish… and are working a full-time job, and leading a theology study group, and organizing young adult events for a parish, and acting as secretary for a school board, and spending time with friends and family, and taking care of kittens and a house….
I am, however, determined to get this article done before I get started on the second draft of METANOIA [2], so here we are.
Alright… where did we leave off…
Last time I promised a break from the technical analysis and a dip into the philosophical. I had intended to do that differently, by going back to the question of “Why does man even desire to create artificial life?” But my unpaid-labor friend pointed out a vital flaw in that first attempt, and I lost the motivation to rectify it at the time, so I put that specific topic on the back burner.
Instead, I’ve decided I’ve done enough teasing and it’s time to get a bit spicy and state the core of my position and argument on Artificial Intelligence.
Which is…
Artificial General Intelligence Can’t Exist
This follows as a direct consequence of the fundamental relationship between information and matter. Current research and development into AGI is wasting everyone’s time and money, to the point that researching time travel would be more productive and fruitful.
Until now, I believe I had kept the scope of my analysis pretty tightly on the popular computing devices of today—that is, computers built on the “von Neumann architecture”—and whether or not they are specifically compatible with the objective of replicating a general form of intelligence equivalent to, or surpassing, that of a human. Now, however, I’m going to speak broadly about all theoretically plausible “thinking” machines. In fact, instead of an artificial intelligence, I will be using the human brain itself to make my argument.
This shifting of gears in the series is partly for the sake of my limited time, and partly because I think I’ve hit upon a way to make my argument (somewhat) succinct for a more general audience. Not a general audience, mind you, but a more general audience. Some of the words I’m going to use here are awfully pretentious, and I can’t guarantee all of my points will be fully contextualized, so I will apologize upfront. Nevertheless, I believe at this point it’s intelligible enough to spur on a conversation which will hopefully refine it, or else spur someone else on to deriving an even better argument.
The Argument
I’m going to try and keep this pretty dry and direct to start, as it has proven easy in my drafting to go off on a hundred tangents. The particular line of reasoning I want to touch on today is as follows:
1. Artificial General Intelligence (AGI) is an attempt to replicate human powers of “mind” through natural means. That is, purely by matter and energy—material reality.
2. To create something through natural means, it must be a derivative of the fundamental principles at work in nature. In other words, it must be a product of the laws of nature/physics.
3. “Human powers of mind” by definition include “consciousness.”
4. Consciousness has observable properties which contradict observable natural principles.
Therefore, consciousness is not a derivative of natural principles and thus cannot be created by natural means.
Now, let me elaborate…
Point 1
Artificial General Intelligence (AGI) is an attempt to replicate human powers of mind through natural means
This one may seem self-explanatory at first glance, but even here the explanation can start down rabbit holes. The tricky part comes in the question of whether or not the human mind is the only form of mind that exists which has all of the properties, or “powers,” AGI researchers wish to replicate. It helps to understand what I’m getting at by listing out and describing these powers. That is, by providing a model of mind:
Perception — the ability to “sense” information from various sources; this could be signals of external origin (sights, smells, etc.) or awareness of the mind’s own state.
Understanding — the ability to translate raw information from the various senses into meaningful knowledge.
Evaluation — the ability to determine the value of attained knowledge. That is, whether it provides a more complete and desirable understanding of perceived information.
Expression — the ability to “write” or “project” information back out into the external world. Usually, in order to elicit new information.
You may wonder how I’ve drawn these categorizations of what composes “powers of mind,” and the answer is merely a matter of semantics. For example, you may say I’m missing “Reasoning” from my list, but I may argue that the movement from Perception to Understanding to Evaluation and back is “Reasoning.” At the end of the day, there is no one objectively correct model, due to the nature of language. There are, however, useful and useless models.
The important part here is that my choice of language imparts a clear enough understanding of the underlying principles I’m attempting to describe, so that your mind can then evaluate if the information I’ve expressed is more complete and desirable than your current understanding. And, hopefully, will prompt you to express information that I understand as “agreement.”
That’s the hope, anyway.
(Also, I use “Evaluation” over the more controversial term “Choice,” because I’m not going to start with the assumption that the “Will” is “free.” While it is related, it is independent of today’s topic.)
I will point out that an inability to define a model of mind that can be clearly “mapped” to natural phenomena, let alone one which is agreed upon by the majority of researchers, is a key problem facing the creation of AGI. But I digress…
Getting back to the question above, the more permissive answer is acceptable. That is, it doesn’t matter if there is another “kind” of mind besides a “human” mind. So long as it has the ability to make evaluations of information, it is enough of a mind for my argument.
Lastly, however, I will emphasize that my argument does not hold for any conceptualization of AGI which includes supernatural principles. That really should be obvious, but I will state that explicitly for clarity, and it will be relevant later.
Point 2
To create something through natural means, it must be a derivative of the fundamental principles at work in nature
This is just a rephrasing of the assumed principle “like begets like,” which underpins all of the natural sciences. Something formed from natural causes will itself be natural. Even if a specific phenomenon doesn’t presently exist in the natural world, it always has the potential to exist somewhere within space-time as a product of universal phenomena.
What’s worth noting is that this puts a restriction on our methods to create AGI. Namely, we then have to assume that new “axiomatic” (or fundamental) principles will not suddenly spring forth in the universe when arbitrary patterns are formed from matter in space-time.
…Saying it that way is still a bit of a mouthful, so let me try it this way: We have to assume the laws of nature are constant in all of space and time, and that every power of mind we wish to replicate can be defined as a pattern of material occurring within space and time.
I consider this a non-controversial assumption, but it’s important, and it does have challengers worth addressing. But later.
Point 3
“Human powers of mind” by definition include “consciousness”
I’ll explain Point 3 by saying that the model of mind I presented in Point 1 is necessarily related to the phenomenon we call “consciousness.” Often described as a certain kind of “self-awareness,” consciousness is intrinsically related to at least the ability to Understand information, and most would also include in their definition the ability to Perceive and Evaluate.
This is probably where more controversy will arise in my assumptions, but I hold that Understanding is essential to Consciousness, and that without Consciousness there is no ability to Understand.
Well, I thought this might be controversial, but I checked myself against the Cambridge Dictionary after writing the above and the first result for “consciousness” was:
“the state of understanding and realizing something”
Since consciousness is intrinsic to “the mind,” and an AGI is one with the powers of a mind, it can therefore be said that an AGI is a conscious machine.
Moving on…
Point 4
Consciousness has observable properties which contradict observable natural principles
This is where I believe I really have to explain myself, and where the heart of the debate on the matter lies. I can’t say whether my observations are novel (and I highly doubt they are), but I haven’t seen them expressed, much less related, in a satisfyingly clear way in current discourse. So, I’ll state them myself:
Information in relation to space-time is proportional to “diversity.” Meaning, for more information to exist in space-time, there must be more “states of” space-time. There must be more matter in more possible arrangements, more “things” in which information may be encoded distinctly. Conversely, as the scope of observed space-time narrows towards a singular state, information is lost.
As a reminder, when I say “space-time,” “natural reality,” “the material world,” or the like, I mean the same thing. The plane of existence where physical phenomena are at play.
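For those who want a more formal handle on the claim, it lines up with Shannon’s measure of information (a framing I’m adding here; the argument doesn’t depend on it): the amount of information a system can carry grows with the number of distinguishable states it can occupy, and collapses to zero when only one state remains. A minimal sketch in Python:

```python
# Shannon's measure of information: a system with n equally likely,
# distinguishable states can carry log2(n) bits. As the number of
# states collapses toward one, the information it can carry goes to zero.
import math

def bits(n_states: int) -> float:
    """Information capacity, in bits, of n distinguishable states."""
    return math.log2(n_states)

print(bits(256))  # 8.0 -> one byte: 256 states, 8 bits
print(bits(2))    # 1.0 -> a coin flip's worth of information
print(bits(1))    # 0.0 -> a single possible state carries no information
```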
In any case, this property of space-time, where information is “lost” as matter is reduced, is precisely what consciousness contradicts. Consciousness, and thus “understanding,” synthesizes information into a unified (that is, a “singular”) state, and the more information is synthesized, the greater the “understanding” is said to be.
If this sounds strange or esoteric (and I imagine it might), consider the following illustration of the principle:
Suppose there are two images: one of a yellow rubber duck, and another of a yellow raincoat. If you saw (perceived) the full picture of each, you would easily be able to tell the difference, because there is enough information to identify (understand) the subjects and make the distinction (evaluate) that they are different.
Now, suppose you were only ever given a small square cut out of each image. The two would still likely appear different, but with only a partial shape of “some yellow stuff,” there becomes a real possibility of misidentifying the subjects, or even incorrectly identifying them as part of the same subject.
Now let’s say you were only given a single yellow pixel (same shade) of each image. It would be impossible then to distinguish between the images, let alone the subjects of the images, through only the information those pixels give.
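If a bit of code makes this more tangible, here is a toy version of the illustration (entirely my own sketch, with made-up color grids): two images that differ as wholes but share a dominant color, observed through ever-smaller crops. The full view and even a 2×2 patch still tell them apart; a single shared-color pixel cannot.

```python
# Toy model of the duck/raincoat illustration: shrink the observed
# window and watch the ability to distinguish the two images vanish.

duck     = [["yellow", "yellow", "orange"],
            ["yellow", "white",  "yellow"],
            ["blue",   "blue",   "blue"]]

raincoat = [["yellow", "yellow", "yellow"],
            ["yellow", "grey",   "yellow"],
            ["yellow", "yellow", "brown"]]

def crop(image, row, col, size):
    """Return the size-by-size window whose top-left corner is (row, col)."""
    return tuple(tuple(image[r][col:col + size])
                 for r in range(row, row + size))

# Full 3x3 view: trivially distinguishable.
print(crop(duck, 0, 0, 3) == crop(raincoat, 0, 0, 3))  # False

# A 2x2 corner still happens to differ here...
print(crop(duck, 0, 0, 2) == crop(raincoat, 0, 0, 2))  # False

# ...but the single top-left pixel is identical: just "yellow".
print(crop(duck, 0, 0, 1) == crop(raincoat, 0, 0, 1))  # True
```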
As space-time is reduced toward the “singular,” it gives rise to more “ambiguity” in the information which can be derived from it. That is, there are fewer opportunities to make the distinctions required for understanding the whole of reality. And yet, a “reduction” is exactly the result of understanding. Instead of holding on to the positions and kinds of the great multitudes of particles of matter that compose the “screen,” your mind ultimately throws all of that out as soon as it determines to understand an image.
The subjects are a duck, a raincoat.
Yet, unless you perceive those great multitudes of particles (or at least the photons therefrom), you cannot understand the singular subject of the image. You see only an individual yellow pixel no different from any other. The greater reality of the “duck” and “raincoat” still exists, but that cannot be understood without perceiving the multitude in unity.
Understanding is like a funnel which takes the vast information encoded in the natural world—the “raw data”—and synthesizes it down into singular “concepts”—bits of “knowledge”—and ultimately into your “experience.”
And this difference between how the natural world and consciousness relate to information is how we know one does not exist inside the other, but alongside. Their movements are independent of one another. In the language of mathematics, you might call them “orthogonal axes” of reality.
Conclusion
The viability of AGI as a product of technology is predicated on the assumption that “General Intelligence” is a naturally occurring phenomenon, since it is assumed by many that we, the only known form of general intelligence in the world, are a purely natural phenomenon ourselves.
This cannot be assumed, as the empirically observable relationships between the natural world (space-time), information, and consciousness show that consciousness must operate independently of fundamental natural principles. Therefore, it is impossible for purely natural methods to produce AGI.
Parting Ramble
Now, those of you who have participated in meaningful debates on AI probably identified my argument as the classic: “Machines can’t be conscious because consciousness is not a natural phenomenon.” But hopefully I have shown that such an argument can be made rationally, and not merely as some emotional reaction to discomfort begotten by a threatened worldview. AI has never been a threat to my worldview. In fact, until recently I assumed AGI on par with human intelligence would exist eventually. It was just a matter of making sufficient progress in neurology and material science.
However, with the recent AI craze giving me distinct flashbacks to The Hitchhiker’s Guide to the Galaxy, I decided to start looking into the hardware and neurology myself and… then things started to click into place. Lo and behold, I was not the only one who balked at the kind of religious delirium that has taken hold of many on the topic, whether for fear or longing. One of the great computing scientists of the 20th century, Edsger W. Dijkstra, likewise expressed his frustrations (of which AI was only a footnote):
(from his article “The threats to the computing sciences”; emphasis mine)
[…] Because computers appeared in a decade when faith in the progress and wholesomeness of science and technology was virtually unlimited, it might be wise to recall that, in view of its original objectives, mankind’s scientific endeavours over, say, the last five centuries have been a spectacular failure.
As you all remember, the first and foremost objective was the development of the Elixir that would give the one that drank it Eternal Youth. But since there is not much point in eternal poverty, the world of science quickly embarked on its second project, viz. the Philosopher’s Stone that would enable you to make as much Gold as you needed.
Needless to say, the planning of these two grandiose research projects went far beyond the predictive powers of the seers of that day and, for sound reasons of management, the accurate prediction of the future became the third hot scientific issue.
Well, we all know what happened as the centuries went by: medicine, chemistry, and astronomy quietly divorced themselves from quackery, alchemy and astrology. New goals were set and the original objectives were kindly forgotten.
Were they? No, not really. Evidently, the academic community continues to suffer from a lingering sense of guilt that the original objectives have not been met, for as soon as a new promising branch of science and technology sprouts, all the unfulfilled hopes and expectations are transferred to it. Such is the well-established tradition and, as we are all well aware, now computing science finds itself saddled with the thankless task of curing all the ills of the world and more, and the net result is that we have to operate in an unjustified euphoria of tacit assumptions, the doubting of which is viewed as sacrilege precisely because the justification of the euphoria is so shaky. […]
It’s hard to say if Dijkstra was trying to do more than simply lament how the computing sciences were saddled with undue expectations: expectations which have caused the once-generalized discipline to be (in his view, seemingly) unhappily wed to a specific form of computing machine, one which only deals with a subset of ideas within the academic discipline.
I, however, can’t help but turn my lament more broadly to the gross negligence of reason itself on display by many “thought leaders” of the last five hundred years, and especially many of those in positions of influence today. I could go on a much more colorful rant, but I digress.
I’ve wasted your time enough for today, so let’s leave the discussion here for now. If there’s interest, I may publish a collection of “defenses” for my position against the possible counters I’ve considered.
Otherwise, I’m frankly sick of the topic now and want to move on to other topics of philosophy, technology, and the arts (and finishing METANOIA [2], of course).
One Last Thought to Chew On
If consciousness is not “natural,” then we are left with an implication that some would find rather objectionable. If something exists which is beyond the natural world, then by definition it is supernatural.
You, therefore, as a “conscious being,” are a supernatural being.
In more ancient language, preserved in our more ancient traditions, there is a particular name for a supernatural being: a “spirit.”
You are a spirit.
So…
…if you want AGI—if you want the rocks to learn how to think—you cannot use natural means. You will have to figure out how to create a spirit and bind it to the rock. Essentially, you would have to learn necromancy, which I can’t recommend.
Personally, I will prefer to participate in propagating intelligence in the world the old-fashioned way.
If you enjoyed this article, then please let me know and share it with others. You can subscribe if you feel like it, or just tip me on ko-fi. Any and all of the above is greatly appreciated.
If you didn’t like the article, then please also let me know and fight me on it. Please.