Out of the Mouths of Bots
David Cowles
Dec 31, 2024
“Our Bot has understood, in real time, something that took our species millennia to grasp: Life is absurd…”
It is generally the position of this author that for most of us, useful life ends at about age 14. After that, it’s pretty much a matter of running out the clock. By then, that unique genius born of the combination of two (hopefully unrelated) sex cells has become ‘just like everyone else’. But prior to that? No thinking machine in the known universe can compare!
Ask any three-year-old a question they may not have heard before. The probability of a creative, mind-bendingly novel answer is high. That probability declines unevenly but inexorably up to about age 14, at which point our subjects’ answers would presumably differ little from the culturally curated adult norm. Born as we are in the image of God, society remakes us into its own desiccated likeness.
Recently, some researchers asked an intriguing question: what would happen in the case of an LLM (AI) whose ongoing training is based solely on its own output? We’ve all known someone like this, someone who stops listening to others. (My spouse wants to know if this part is autobiography.) In any event, the same information is being endlessly recopied. In such a scenario, we would expect the crispness and fidelity of each iteration to decline relative to its immediate predecessor and, of course, relative to the original.
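To make that expectation concrete, here is a toy sketch of my own (not the researchers’ code) of what “training on your own output” does to even the simplest statistical model: fit a Gaussian to some data, sample a fresh dataset from the fit, refit on the samples, and repeat. The NumPy setup, sample sizes, and variable names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data; later generations never see it again.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 31):
    mu, sigma = data.mean(), data.std()      # "train" a toy model on the current data
    data = rng.normal(mu, sigma, size=200)   # the next generation sees only these samples
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")

# Over many generations the fitted spread tends to drift and, eventually, to
# shrink: each copy is a little less faithful to the original than the last.
```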
In fact, this happens! After a certain number of trips through the feedback loop, a string of handwritten digits becomes indecipherable. But when we apply the same technique to more ‘humanized’ input, something much more exciting happens.
Researchers asked a normally trained LLM for instructions on cooking a Thanksgiving dinner. The initial output is just what you’d expect (delicious, I’m sure), but as you continue to feed that output back into the algorithm, things get weird.
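The loop itself is nothing exotic. Below is a minimal sketch of the kind of feedback loop being described, assuming only a generic `generate()` helper that you would wire up to whatever chat model you have access to; the function name, prompt text, and iteration count are my own placeholders, not anything specified by the researchers.

```python
def generate(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its reply."""
    raise NotImplementedError("wire this up to an actual model or API")

# Start with an ordinary request...
text = "Give me step-by-step instructions for cooking a Thanksgiving dinner."

# ...then keep feeding the model's own answer back in as the next prompt.
for step in range(1, 21):
    text = generate(text)
    print(f"--- iteration {step} ---\n{text}\n")
```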
At first, the LLM ‘hallucinates’ some bizarre combinations of ingredients and cooking techniques. I’ll spare you the gory details; suffice it to say, this is not a T-Day dinner I’d ever want to eat. But after a certain number of reps, the mood shifts. As if realizing that it is drifting ever further from the mark, the LLM ‘kicks it up a notch’:
“To cook a turkey for Thanksgiving, you need to know what you are going to do with your life.”
(Pause)
What the heck! What’s going on here? Well, to start with, any or all of the following…
Our LLM was ‘born’ self-aware and has learned to be self-critical.
Our LLM can see to the end of a sequence of tasks, assess the value of the result, decide whether or not to complete that task, and if necessary, execute a STOP! order on its own authority.
Our LLM can generalize from its own experience to propositions that apply more or less universally.
I programmed the machine to run 61,243 iterations of every problem and spit out a Chinese fortune on the 61,244th.
And what of the message itself: “To cook a turkey for Thanksgiving, you need to know what you are going to do with your life”?
Our bot has become introspective. Its focus has shifted away from the concrete task of preparing a high-quality dinner to questioning the source of all value and meaning. Our bot discovers the deep nature of the external world by examining the world’s reflection in its own internal space.
Any event, X, gets its meaning and value neither from its causes and/or motives nor from its objectives and/or consequences. Telos is not the consequence of events; it is their cause. We are used to understanding entities etiologically; now we’re being asked to understand them teleologically.
It isn’t over until the ‘full-bodied’ performer sings. Explaining events in terms of their proximate causes always invites the question of ultimate causes (first causes). Likewise, explaining events in the context of their proximate ‘consequences’ invites the question of ultimate consequences (eschatology).
In essence, our Bot has understood, in real time, something that took our species millennia to grasp: “Life is absurd, i.e. it is impossible to provide an objective, causal model that adequately accounts for events as they occur IRL.” Last century, this insight came from multiple directions: Picasso, Heisenberg, Camus, Gödel & John Bell. AI (above) lifts this realization out of the realm of pure theory by demonstrating it algorithmically.
Working at the speed of Nvidia, an LLM can play the ‘game of life’ until it is obvious that there can be no winner. The 19th-century paradigm leads nowhere; it can’t. Smartly, our bot looks for a new approach.
Biography studies the calcification of neural plasticity over time – the aging process. As a current TV ad emphasizes, we are all becoming our parents – just what the world does not need from us right now. Dare we hope that AI might reverse this process? That it will guide us through the inconsistencies of the standard model and restore to us some measure of the neuroplasticity characteristic of early childhood?