A famous essay by Peter Naur, "Programming as Theory Building" (PDF), proposes that every computer program starts with an idea that the programmer seeks to prove is true and possible: there is a problem, and the programmer seeks to show that there is a solution to it.
Beethoven famously wrote some of his best music when he was deaf. He saw the notes on the page and could read them with the same clarity with which one can read well-written source code, and he could spend hours alone with pen and paper, crafting notes and bars and sounds and instruments he could only remember, no longer hear, into a literal symphony that could move those who understood it to tears.
AIs, whether LLMs (large language models) or other advanced machine-learning models, lack this capacity.
Naur and Beethoven and you and I are human beings. We can see possibilities where others do not or will not or cannot. Maybe they don't want to. Maybe they lack the training. And, genuinely and controversially, maybe they lack the capacity. But be that as it may, there are human beings who, within their area of expertise, demonstrate a capacity to take all that they know and combine it into new and interesting patterns never before seen or heard.
We're told that every machine-learning model is made up of trillions of data points in a "data space" of thousands, possibly millions of dimensions; it is the distances between the words and the directionality of their relationships that make machine learning possible. (Those dimensions are needed to pack all the associated meanings of a word like "run," a word with more than 600 distinct definitions, into every context in which it is used.) But there is one dimension into which these models cannot move: up.
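To make "distances and directionality" concrete, here is a minimal sketch, assuming nothing about any particular model: a few toy word vectors and cosine similarity, the standard measure of how closely two embeddings point in the same direction. The vectors and their values are invented purely for illustration.

```python
import numpy as np

# Toy 3-dimensional "embeddings" -- real models use thousands of dimensions,
# but the geometry is the same: vectors that point the same way mean related words.
vectors = {
    "run":    np.array([0.9, 0.1, 0.3]),
    "sprint": np.array([0.8, 0.2, 0.35]),
    "banana": np.array([0.1, 0.9, 0.7]),
}

def cosine_similarity(a, b):
    """Directionality: how closely two word vectors point in the same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["run"], vectors["sprint"]))  # high: related senses
print(cosine_similarity(vectors["run"], vectors["banana"]))  # low: unrelated words
```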
The thing about AIs is that even though they're working in "millions of dimensions" (and legitimately, mathematically, they are), they're constrained to a box: the computer inside which they run. And they're constrained to only the data they've been trained on, never to exceed it, never to do anything clever with it. Worse, if they follow a path of words to some weird place and spit out "a hallucination" (which is really a failure of statistical analysis of the training data), they lack both the wisdom to understand that what they're doing is ridiculous and the creativity to do anything interesting with it.
The best an AI can do is find the potential that lies between two existing ideas. And for a lot of people, that seems to be enough. "Here's A, and here's B; merge them together." I've done that: "Here's an existing program that meets our current standards; here's an older one that does something similar but falls short of them. Show me a version of the older one that meets the same standards as the new one." And that sometimes, even usually, works. Often I have to edit the result to get it to pass, but it's something.
But AI can't do anything new. If I ask it to exceed the state of the art on my research topic, derivatives of regular expressions in a systems programming language, it can't. That's not in the space between existing software projects; it hasn't been done. That's why it's called "exceeding the state of the art."
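For readers who haven't met the term: the derivative of a regular expression with respect to a character is a new regular expression that matches whatever remains of the input once that character has been consumed, an idea that goes back to Brzozowski. A toy Python sketch of that classical construction, nothing like a systems-language implementation and covering only a handful of operators, looks like this:

```python
from dataclasses import dataclass

# Toy regular-expression AST: empty set, empty string, a literal character,
# concatenation, alternation, and Kleene star.
@dataclass(frozen=True)
class Empty: pass          # matches nothing
@dataclass(frozen=True)
class Epsilon: pass        # matches the empty string
@dataclass(frozen=True)
class Char:
    c: str
@dataclass(frozen=True)
class Cat:
    left: object; right: object
@dataclass(frozen=True)
class Alt:
    left: object; right: object
@dataclass(frozen=True)
class Star:
    inner: object

def nullable(r):
    """Does r accept the empty string?"""
    if isinstance(r, (Epsilon, Star)): return True
    if isinstance(r, Cat): return nullable(r.left) and nullable(r.right)
    if isinstance(r, Alt): return nullable(r.left) or nullable(r.right)
    return False  # Empty, Char

def derive(r, c):
    """Brzozowski derivative: what r matches after consuming character c."""
    if isinstance(r, (Empty, Epsilon)): return Empty()
    if isinstance(r, Char): return Epsilon() if r.c == c else Empty()
    if isinstance(r, Alt): return Alt(derive(r.left, c), derive(r.right, c))
    if isinstance(r, Star): return Cat(derive(r.inner, c), r)
    # Cat: derive the left side; if the left can match "", also derive the right.
    d = Cat(derive(r.left, c), r.right)
    return Alt(d, derive(r.right, c)) if nullable(r.left) else d

def matches(r, s):
    """Match by deriving once per character, then checking nullability."""
    for c in s:
        r = derive(r, c)
    return nullable(r)

# "ab*" matches "a", "ab", "abb", ...
ab_star = Cat(Char("a"), Star(Char("b")))
print(matches(ab_star, "abb"))   # True
print(matches(ab_star, "ba"))    # False
```

That much is settled, textbook material; it sits squarely inside the training data.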
If you've ever had that moment of creativity, when you created something new and beautiful, something profound and true, you moved up. Deliberately and with intent, you assembled all the knowledge, skill, and desire you had into something new.
You briefly escaped into the space above.
All truly creative people know the space above is where the atmosphere is thin. You can't stay there forever. You can't even stay there very long. But we can get there, which is something large language models and machine learning systems just cannot do.