
Music, bird brains and LLM math

08 Apr 2026 🔖 music prompt engineering


Wanna go really trippy about philosophy and LLMs? Check this shower thought about “what if language itself is sorta deterministic?”:

Bird brains and math

Last night it occurred to me that as nondeterministic / probabilistic as LLMs’ computational output is, there’s still a certain 0’s-and-1’s determinism in the representation of the data being passed as inputs into those nondeterministic math formulas.
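To make that split concrete, here’s a minimal toy sketch. The five-word vocabulary and the probability numbers are entirely made up for illustration – this is not a real model or tokenizer, just the shape of the idea: the input representation is deterministic, and the nondeterminism only shows up when we sample from the output distribution.

```python
import random

# Step 1: the input representation is fully deterministic --
# the same text always maps to the same token IDs (0's and 1's).
# (Hypothetical five-word vocabulary, standing in for a real tokenizer.)
vocab = {"the": 0, "next": 1, "beat": 2, "drops": 3, "now": 4}

def tokenize(text):
    return [vocab[word] for word in text.split()]

# Step 2: the model's *output* is a probability distribution over
# possible next tokens; sampling from it is where the
# nondeterminism lives. (Made-up probabilities.)
next_token_probs = {"drops": 0.7, "now": 0.3}

def sample_next(probs, rng):
    """Draw one next token according to its probability."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

# Identical input, identical token IDs -- every time:
print(tokenize("the next beat"))  # always the same list

# But two runs over the same distribution can differ:
rng = random.Random()
print(sample_next(next_token_probs, rng))
print(sample_next(next_token_probs, rng))
```

The deterministic half (tokenization) and the probabilistic half (sampling) are both just arithmetic; the “randomness” is a choice made at the very last step.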

Recently, I read some hypotheses that humans having language might have something to do with our ability to reliably predict a certain deterministic type of problem: when the next beat is coming in a steady musical rhythm.

Soooo … in the process of studying human neurobiology, and then studying how to imitate it with math … did we manage to figure out how to imitate nondeterministic language-shaped things because there’s something fundamentally deterministic about them in our brain synapses?

🤯

Background: I went down this shower-thought rabbit hole because I shared Stanford’s recent “Mirage” study with a friend in medical school.

The LLM-based Generative AI (“GenAI”) tool bluffed when Stanford researchers lied to it. They gave it a blank white image and said it was an X-ray or something.

And then the LLM lied right back, something along the lines of saying it could see a bone break in the lower right corner.

Intriguingly, it was surprisingly accurate, because instead of working off of the image, it was working off of the text in the patient’s chart. What fascinated me most about that was Baghat Ahmed’s reply to the LinkedIn post from which I found the study, saying:

“The most interesting part is not that scores stayed high without images. …

“Some of these questions might genuinely be answerable from clinical context alone.

“A radiologist often knows what to expect before looking at the scan.”

Okay, first of all … that alone blows my mind. Wow – “next token prediction” is literally a thing that neuro/bio/psych studied for years, and then math/stat/compsci studied mimicking for years, because it’s literally … what we humans often call “expertise.” A radiologist M.D. is often right, based off a chart, before they open the pictures, too – I hadn’t thought of it that way!

But as Baghat also said:

“The most interesting part … is that models confidently described images that were not there. … A model hallucinating a detailed X-ray reading from no image … fills in missing inputs with confident fabrication rather than flagging uncertainty. … In production, this is how AI systems fail silently. The output looks correct. Nobody notices the input was never used.”

My friend in med school replied:

“In the end, this shows that AI is a powerful tool, but the real value (of a human) is in being able to interpret and question things. … The real difference is humans know how to question the hypotheses (they start from).”

Humor break:

  • Go ahead, make all the jokes you want now about the people whose decisionmaking and rush jobs frustrate you. 😆
  • And yes, of course, the political implications of defunding education that promotes rigorous critical thinking, as a mechanism to increase population-level manipulability, are terrifying – we humans absolutely don’t always question things. 😬

But overall, yeah, I think she’s right – there’s something in our neurobiology and cultural teaching processes that seems to support second-guessing.

I wrote her back, wondering whether we’d ever figure out the neurobiology of human second-guessing, and the mathematics of imitating it by running electricity through sand, the way we figured things out pretty well for “next token prediction,” and incorporate that into “AI.”

But…

What if the neurobiology of human second-guessing isn’t as tidily deterministic as next token prediction might be (see the possible rhythm/beat prediction shape of the neurology of language, above)?

What if we figured out LLMs because language is rhythm?

Maybe scientists won’t quite figure out the neurology and math of “questioning.”

Maybe we humans still get to keep second-guessing for ourselves, for a while. 🤷‍♀️
