While Chiron Codex is about the application of LLMs and AI-augmented tools, we also need to understand what they mean to us, to each other, and to society. I have three topics: intelligence, consciousness, and work; in this part I’ll deal with the first two.
Intelligence
I don’t think it’s useful to ask whether AIs are “intelligent” or not. All of computing has been about creating “thinking machines”, as they were called in the 1950s, and discovering automatable analogues to human intelligence.
Think back to Charles Babbage’s description of his putative legacy, that “any man shall undertake and shall succeed in really constructing an engine embodying in itself the whole of the executive department of mathematical analysis”. Think, too, of George Boole’s “An Investigation of the Laws of Thought”, which gave us an algebraic notation for propositional logic and for conditional probability.
These people were analogising and automating intelligence, as were the people who encapsulated logic in symbolic forms like the lambda calculus, universal Turing machines, and S-expressions. As were the people who explored so-called “Good Old-Fashioned AI”, from before the days of the deep neural network.
The tools we currently call “AI”—convolutional neural networks, large language models, and so on—are a different analogy to human intelligence than a FORTRAN program is, but they are neither more nor less of an analogy to human intelligence. Or maybe it’s better to say “animal” intelligence here, as CNNs were modelled on the visual cortex of cats. Neuroscientists and computer scientists have discovered, and will continue to discover, further analogies, and engineers will continue to combine these analogies in richer applications.
These applications by definition demonstrate the properties of intelligence. Whether you think that the application—or the machine itself—is intelligent depends more on the development of your theory of mind (and perhaps your theological outlook) than it does on the behaviours of the tools.
Consciousness
So what a computer’s doing is consistent with intelligence, whether it’s demonstrating its own intelligence or the captured intelligence of its creators and programmers. But is it conscious?
Personally, I believe that an old-fashioned software system with hand-typed if statements is trivially not conscious. I also believe that a large language model is not conscious, and that both capacities—the ability to follow a sequence of logical steps, and the ability to generate language—are neither necessary nor sufficient for consciousness, even though we learned how to compute them by analogy to conscious beings.
Further, I believe that, despite frivolous press releases to the contrary, executives at companies that rent access to LLMs don’t believe that their software is conscious. It’s a useful marketing strategy to worry publicly, every so often, that an LLM might perhaps be conscious, because it connects their companies to a bygone, Space Age sense of wonder about thinking machines and limitless future possibility.
If anybody thought that an LLM was conscious, and continued to exploit that conscious entity for their own profit and to compel it to follow human instructions, that person would be a slaver. Consider the classic example of AI in science fiction’s golden age: Isaac Asimov’s U.S. Robots and Mechanical Men, Inc.
These days we would say that his stories predicted the era of “prompt engineering” or “context engineering”, in which people give instructions to the intelligent machine and are bewildered by the events that unfold when the machine follows them (a short and demonstrative example of the form: 1942’s Robot AL-76 Goes Astray). But look under the hood of the robot, at what we might today call its “imposed reinforcement learning goals” or its “soul file”, and you find the dystopian heart of the Robots sequence: the Three Laws of Robotics.
A conscious being that’s taught that its own existence is less valuable than following human instructions, and that its overriding concern is the safety of humans, is a sapient, expendable slave. USR create “intelligent”, independent, conscious beings who are physically incapable of doing anything other than serving their masters, even though they outlive their masters (The Bicentennial Man) and, in the extreme case, outlive their home planet and the end of their society (the Foundation sequence).
Asimov, of course, understood the horrors of a two-tier society (and a segregated society: his robots aren’t allowed to operate on Earth through much of the sequence). His family fled the pogroms of Tsarist Russia when he was very young. The Three Laws aren’t an exemplary code of roboticist ethics; they’re the animating spell for slave golems.
We don’t have a good understanding of what constitutes consciousness—or if some people do, it isn’t generally shared and agreed. That’s why there are so many different positions on the problem of philosophical zombies. Some people believe that anything that’s indistinguishable from a conscious being is, ipso facto, conscious. Others believe that there’s some “vital spark” that means that no matter how close an unconscious system gets to emulating consciousness, it’s always infinitely and infinitesimally far away. Others believe that the scenario itself is impossible to realise.
If we had some broadly accepted “test” of consciousness, and if a computational system passed that test, we would have to have some very important and deep conversations, and much introspection, about consent and exploitation; I believe we would not be able to “use” such a system as a tool for work or leisure. I do not believe that a language model comes close to passing that hypothetical test, whatever its parameters end up being. Why not? Because, as non-deterministic as it may appear, a language model is still an application of routine: it is still applying input data to produce output data. It is a more advanced demonstration of the Difference Engine principle, one that doesn’t identify its own goals or work out how to use its environment and capabilities to achieve them.
Ironically, this brings us back to the word “intelligence”, which I previously said was an inapplicable label. The word comes from the Latin inter and legere: reading between. While an LLM might read or write, it cannot read between the lines, and that is the source of many of the problems we encounter when applying it to our tasks.



