
May 1, 2026

How I think about prioritization

Prioritization is the job. Everything else — roadmaps, specs, stakeholder updates — is scaffolding around the decision of what to build next.

Most frameworks try to make this feel objective. RICE gives you a score. ICE gives you a score. Kano sorts features into buckets. The assumption is that if you measure enough things, the right answer will surface on its own.

It rarely does.

The problem with scores

A RICE score is only as good as the estimates inside it. Reach is a guess. Impact is a guess. Confidence is literally called "confidence" — it's a placeholder for how uncertain you are. Multiply three uncertain numbers and divide by a fourth, and you get a very precise-looking uncertain number.
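To make the compounding concrete, here's a minimal sketch. All the numbers are invented for illustration; the point is only that plausible low/high estimates for each factor produce a score range spanning nearly two orders of magnitude.

```python
# Hypothetical low/high estimates for one feature (illustrative only)
reach = (2000, 8000)        # users affected per quarter
impact = (0.5, 2.0)         # impact multiplier
confidence = (0.5, 0.9)     # how sure we are, as a fraction
effort = (2, 6)             # person-months

# RICE = (Reach * Impact * Confidence) / Effort
# Worst case: low estimates everywhere, high effort
low = reach[0] * impact[0] * confidence[0] / effort[1]
# Best case: high estimates everywhere, low effort
high = reach[1] * impact[1] * confidence[1] / effort[0]

print(f"RICE score range: {low:.0f} to {high:.0f}")
# → RICE score range: 83 to 7200
```

A single-point score of, say, 1200 hides that entire spread — which is why the number is a conversation starter, not an answer.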

That's not useless. It forces you to be explicit about your assumptions. It creates a shared language for the team. But it is not a decision. It's a starting point for a conversation.

What I actually do

I start with the question: what would have to be true for this to be the most important thing we could work on?

Then I check if those things are true.

If a feature scores high on RICE but I can't answer that question clearly, I slow down. High scores on bad assumptions are worse than no framework at all — they give bad decisions the appearance of rigor.

If a feature scores lower but the strategic case is obvious, I don't ignore that. Frameworks are inputs to judgment, not replacements for it.

On gut feel

I'm not suspicious of intuition. Pattern recognition built from real experience is valuable. The problem is that "my gut says so" is unfalsifiable — no one can push back on it, and it tends to encode whatever biases the person with the loudest voice happens to have.

The discipline is to name what your gut is actually responding to, then examine whether that thing is real. Sometimes it is. Sometimes you realize you're just anchored to the first solution you thought of.

The question worth asking

Before any prioritization exercise, I find it useful to ask: are we solving the right problem?

Not "is this feature worth building" but "is the problem we're trying to solve actually the bottleneck?" The best prioritization work I've seen starts here, before any scores are calculated.


This is a living document. I'll update it as my thinking changes.