2026-04-30

Debugging mental model for agents

Switching models to fix your agent is like tweaking your macros on 4 hours of sleep. It won't get you the results you want.

When something breaks, the first instinct is to switch models.

GPT to Claude. Claude to Gemini. The results stay similar, and often underwhelming.

It took me longer than it should have to realise the issue wasn’t the model.

It was the fundamental gaps around it, like:

No memory layer.

Weak context.

Tools that looked useful but did nothing beyond text manipulation.

The models themselves are generally fine, and they keep improving.

Now before writing a single prompt, I check three things:

What does it remember?

What does it actually know?

What can it really do?

This saves a lot of pointless upgrades and disappointment.