If you always go northeast, where will you end up?

The title of this post is a puzzle – I recently stumbled upon it in an old printed book. The answer wasn’t obvious. In fact, it irritated me. I kept turning it over in my head. But when I finally got it, the feeling wasn’t just satisfaction – it was recognition.

See, solving the puzzle wasn’t about raw intelligence. It wasn’t even about logic, really. It was about pattern recognition.

And that made me think: isn’t that exactly how LLMs work?

The essence of intelligence – whether biological or artificial – lies in pattern recognition. When we solve problems, we rarely invent entirely new tools from scratch. Instead, we recall. We match. We restructure known patterns to fit the unknown. This is true in everything from writing poetry to debugging code.

LLMs operate in much the same way – but scaled to a mind-bending degree. They’re not “thinking” per se, but rather surfing the vast ocean of previously seen patterns, piecing together what seems most likely to work, given the prompt.
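
To make that “most likely continuation” idea concrete, here’s a tiny toy sketch in Python – a word-frequency autocomplete over a made-up corpus. It’s nothing like a real transformer, and the corpus and names are invented purely for illustration, but it captures the flavour of completing a prompt with whatever pattern has been seen most often:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny made-up corpus,
# then always pick the most frequent continuation. Real LLMs learn these
# statistics over tokens with neural networks; this only hints at the idea.
corpus = "the bug was in the cache layer so we cleared the cache and the bug was gone".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(prompt_word, length=6):
    """Extend a one-word prompt with the most likely next word, repeatedly."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # the most frequent continuation
    return " ".join(words)

print(complete("the"))  # -> "the bug was in the bug was"
```

Notice how it happily loops back on itself: plausible-sounding continuations, with no sense of whether they actually mean anything.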

Breaking tasks into smaller parts and spotting patterns – that’s really the core of problem-solving. It’s the engine of creativity itself: not brute invention, but recombination. The more experience (or data!) we have, the more patterns we can recall and wield like tools.

In software development, this means the more bugs you’ve seen, libraries you’ve explored, architectures you’ve worked with – the faster and more accurately you can build and debug. LLMs mimic this. When a solution lives somewhere in the collective digital memory (GitHub, StackOverflow, docs), LLMs can echo it back with near-human fluency.

Interestingly, even the quirks of human cognition – like déjà vu – may be explained as a kind of pattern recognition bug. The brain falsely flags a current experience as familiar, possibly because the pattern feels close to something we’ve encountered before, even if we can’t place it. It’s our own “autocomplete” running wild. If LLMs hallucinate, déjà vu is our version of that – a flash of mistaken familiarity.

But LLMs can’t reason from first principles. They can’t “struggle” the way a human does through the fog of genuine novelty. They don’t get confused, stuck, or inspired. That’s why new, unsolved problems still belong to us – the irrational, creative, mistake-prone humans.

And that’s where models like Deepseek and Grok (in think mode) really fascinated me. I asked both to solve a tricky puzzle – and watching them reason through it step by step was mind-blowing. It felt much closer to how we, as humans, approach problems. Instead of just mimicking past answers, they tried to deconstruct the problem, understand its structure, and reason through possibilities – more like a human being than a parrot with a good memory. It wasn’t just pattern matching – it was an early glimpse of structured thought emerging.

We must admit we’re still at the very beginning of this technology. And yet, witnessing them work through unfamiliar problems was enough to dispel much of my skepticism that LLMs can only spit out known solutions.

I’ve attached the output from both so you can see it too – it’s honestly mind-opening.

Deepseek (DeepThink)

Grok (Think)

