The LLM as Thinking Partner
On conversational rubber-ducking
I’ve been using Claude as a kind of thinking partner lately—not for answers, exactly, but for the kind of back-and-forth that helps me figure out what I actually think.
It’s a bit like rubber duck debugging, except the duck talks back. And sometimes the duck has read more papers than you have.
What works
The most useful mode isn’t “give me the answer” but “help me think through this.” I’ll describe a problem I’m stuck on, and then ask questions like:
- What am I missing here?
- What’s the strongest argument against this approach?
- How would someone who disagrees with me frame this?
The responses aren’t always right, but they’re often useful—they surface assumptions I didn’t know I was making.
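If you want to make this a repeatable habit rather than something you retype every session, it's easy to script. Below is a minimal sketch using the anthropic Python SDK; the model name, the system prompt, and the exact wording of the challenge questions are my own choices for illustration, not anything canonical.

```python
# A minimal "thinking partner" loop using the anthropic Python SDK
# (pip install anthropic). Model name, system prompt, and challenge
# questions below are illustrative choices, not recommendations.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

MODEL = "claude-sonnet-4-20250514"  # assumption: any current Claude model works here

SYSTEM = (
    "You are a skeptical thinking partner. Don't just agree: "
    "surface hidden assumptions and steelman the opposing view."
)

CHALLENGES = [
    "What am I missing here?",
    "What's the strongest argument against this approach?",
    "How would someone who disagrees with me frame this?",
]

def ask(messages: list, prompt: str) -> str:
    """Send one user turn, record the assistant's reply, and return its text."""
    messages.append({"role": "user", "content": prompt})
    reply = client.messages.create(
        model=MODEL, system=SYSTEM, max_tokens=1024, messages=messages
    )
    text = reply.content[0].text
    messages.append({"role": "assistant", "content": text})
    return text

def rubber_duck(problem: str) -> None:
    # One running conversation, so each follow-up question is answered
    # with the full history (problem statement + earlier replies) in view.
    messages: list = []
    print(ask(messages, problem))
    for question in CHALLENGES:
        print(f"\n--- {question}\n{ask(messages, question)}")

if __name__ == "__main__":
    rubber_duck("I'm considering rewriting our ETL pipeline in Rust...")
```

The system prompt is doing real work here: asking up front for skepticism counteracts the model's tendency to agree with you, which is exactly the failure mode the next section is about.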
What doesn’t
It’s easy to mistake fluency for correctness. The model will confidently explain something that’s subtly wrong, and if you’re not careful, you’ll nod along because it sounds right.
The trick is to stay in dialogue mode. Push back. Ask for sources. Say “I’m not sure that’s right” and see what happens.
It’s a tool for thinking, not a replacement for it.