Normal people have very weird ideas about how computers work under the covers.
Omaha and I visit a cafe near our home nearly every day. Today, I got into a weird conversation with someone there about computers and, of all things, lists. I said a list was one-dimensional, and she got angry at me. "A list is two-dimensional. It goes down the paper and across the paper." I objected that that's not how it works inside a computer; it's just a single line of numbers, from beginning to end. A list is just a line of numbers, in pairs: the address where to find each item on the list, and how much to read until you've seen the whole item. The items themselves are somewhere else along the line. There's no multi-dimensionality at all to memory (or hard drive) access; it's just one dimension.
"But that's still two dimensional, right?" she insisted. "You have an address for each item, like, going down the page, but each item goes across it." I said that was an artifact of human interpretation; computer addressing was a single number, one for each place in RAM or on a disk.
She refused to believe that.
"But you have these things called blocks! I remember when they would get out of order and you had to defrag your hard-drive. A block is two-dimensional. Why don't we do that anymore?" I tried to explain that that was humans imposing something they could understand on the big numbers of memory addressing, where even a nominally small hard-drive could have 536 million blocks.
"No," she insisted. "You're wrong about something. I don't know what. I know you do this for a living, but you're wrong about this. Computers are multi-dimensional."
Only in Star Trek.
Computers aren't multi-dimensional. We are. We use the primitive basics of addition, subtraction, pulling items from linear memory, writing items back to linear memory, and so forth, and build from them whole worlds in video games and virtual reality chats. But at their heart, it's linear memory, addition and subtraction. That's it. We impose meaning when we use those tools to communicate with each other. The letters you see right now are color differentials on a video display unit; the words and sentences don't mean anything to the computer at all. We impose meaning by writing software that we use to communicate with each other. In a video game, we might think, "That guy is trying to kill me!" but underneath it all some programmer wrote some code that animates a model to enter a field of view, and some other programmer wrote code to give those models a look and a sound, all to communicate with you that you're pretending to be in a dangerous situation. And at the bottom, the CPU and the GPU are doing addition, subtraction, multiplication, and moving memory around to put up the next frame, calculate the next distances, generate the random numbers and roll through the decision trees that will make the bad guys act in the next 16 milliseconds.
All in chunks of memory in straight lines.
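Even the "dimensions" are arithmetic. A screen full of pixels looks two-dimensional, but in memory it's one straight run of numbers, and the second dimension is just a multiply and an add. A tiny sketch (the width and height are made up, obviously):

```python
# A "two-dimensional" 4x3 image, stored the only way memory allows:
# one straight line of numbers.
width, height = 4, 3
pixels = [0] * (width * height)

def set_pixel(x, y, value):
    # The second dimension is nothing but multiplication and addition.
    pixels[y * width + x] = value

set_pixel(2, 1, 255)
print(pixels)   # [0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0]
```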
ChatGPT and similar engines do the same thing. Programmers brag about "billions of dimensions," but they're just numbers: a collection of numbers in a straight line, some of which point to other collections, and after a couple of layers of numbers pointing to other numbers, they point to words. When you prompt an AI, it takes your sentences apart, maybe throws in a collection of numbers from earlier parts of the conversation, and creates a new collection of numbers that it feeds to the topmost layer. Eventually, with a little randomness thrown in so the conversation feels unique and un-replayable, it reaches the bottommost layer, at which point, having consulted billions and billions of collected phrases, it returns one or more sentences that are, statistically, what someone who typed in the prompting sentences probably wanted to read.
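To make that concrete, here's a drastically simplified sketch in Python. The vocabulary, the vectors, and the single "layer" are all made up by me; a real model has billions of numbers and many layers. But the shape of the work is the same: words become numbers on the way in, get added and multiplied, and only become words again on the way out.

```python
import random

# Made-up vocabulary and made-up numbers; a real model has billions of both.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
words = {i: w for w, i in vocab.items()}

# Each word is "pointed at" by a small collection of numbers (a vector).
vectors = {
    0: [0.1, 0.9],
    1: [0.8, 0.2],
    2: [0.4, 0.7],
    3: [0.3, 0.3],
    4: [0.7, 0.6],
}

def next_word(prompt):
    ids = [vocab[w] for w in prompt.split()]   # take the sentence apart into numbers
    # One "layer": average the vectors of the prompt words.
    avg = [sum(vectors[i][d] for i in ids) / len(ids) for d in range(2)]
    # Score every word against that average (a dot product), then pick one
    # with a little randomness thrown in so no two runs feel identical.
    scores = {i: sum(a * b for a, b in zip(avg, vectors[i])) for i in vectors}
    pick = random.choices(list(scores), weights=[max(s, 0.01) for s in scores.values()])[0]
    return words[pick]

print(next_word("the cat sat on the"))   # statistically plausible; true or false, it can't tell
```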
It has zero knowledge of whether or not those sentences are factually correct. Or true. It's not interested. It's not capable of being interested. "Interest" is something the programmers haven't been able to impose on it. And whether or not you find the result interesting or hallucinatory is based strictly on whether or not what you read agrees with what you already know, understand, or believe.
But if people can't understand the basics of what "a list" is from the perspective of a computer's memory, we're never going to be able to convince them that AI is nothing more than a parlor trick.