I've been reading Algorithms to Live By, and two concepts keep surfacing in my work: early stopping and regularization. They're foundational in machine learning—ways to keep algorithms from overthinking their training data. Turns out they're even more valuable for managing people who overthink everything else.

Early stopping is simple: you stop optimizing once you're most of the way there. In machine learning, this means halting training before your model memorizes every quirk of the training set. The counterintuitive part is that stopping early often produces better results than training to completion. You capture the signal without overfitting to the noise—it lowers variance. The same applies to analysis. Sometimes you only need to understand the first- and second-order effects and trust that they generalize well enough. Chasing every edge case doesn't make your decision better—it makes it brittle. I've written before about different types of decisions and when to treat them differently. Early stopping is essentially recognizing when to call it.
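In code, the mechanic is almost embarrassingly small. Here's a minimal sketch (a hypothetical helper, not from any library): watch performance on held-out data, and stop once it hasn't improved for a few consecutive steps.

```python
def early_stop_index(val_losses, patience=3):
    """Return the step to roll back to: the last step at which held-out
    (validation) loss improved, once it has failed to improve for
    `patience` consecutive steps. Hypothetical helper for illustration."""
    best = float("inf")
    best_step = 0
    for step, loss in enumerate(val_losses):
        if loss < best:
            best, best_step = loss, step
        elif step - best_step >= patience:
            return best_step  # stop: no improvement for `patience` steps
    return len(val_losses) - 1  # stopping never triggered; trained to the end

# Validation loss falls, then rises as the model starts overfitting:
losses = [1.0, 0.6, 0.4, 0.35, 0.36, 0.4, 0.45, 0.5]
print(early_stop_index(losses))  # → 3 (training past step 3 only hurt)
```

The "patience" knob is the managerial part: how long you tolerate no visible progress before calling it.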

This is hardest with people who pride themselves on thoroughness: academics, analysts, anyone trained to believe incomplete thinking is sloppy thinking. They'll hold out for perfect information when most decisions require good-enough information delivered while it still matters.

Regularization takes a different approach to the same problem. Instead of stopping the process early, you constrain it artificially. In algorithms, this means penalizing complexity—forcing the model to favor simpler patterns. In real life, this means boxing people in. One page, not ten. A lunch meeting, not a day-long workshop. An email, not a deck. People complain about these constraints, but constraints are the point. They force prioritization and clarity. They prevent the natural human tendency to explore every intellectual cul-de-sac.
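The algorithmic version of "boxing people in" is adding a penalty term to the thing you're optimizing. A toy sketch of ridge-style regularization (hypothetical `fit_line` helper, made-up data): an L2 penalty on the slope shrinks the fit toward the simplest answer.

```python
def fit_line(xs, ys, lam=0.0, lr=0.01, steps=5000):
    """Fit y ≈ w*x + b by gradient descent on mean squared error,
    plus an L2 penalty of strength `lam` on the slope w.
    Toy ridge regression for illustration only."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of MSE, plus 2*lam*w from the penalty term lam*w**2:
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n + 2 * lam * w
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [0, 1, 2, 3]
ys = [0.1, 1.2, 1.9, 3.2]
w_plain, _ = fit_line(xs, ys, lam=0.0)  # unconstrained fit
w_reg, _ = fit_line(xs, ys, lam=1.0)    # penalized fit
# The penalty pulls the slope toward zero: abs(w_reg) < abs(w_plain)
```

Setting `lam` is the same judgment call as setting a page limit: how hard you squeeze determines how much complexity survives.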

I see this most clearly with researchers brought into business contexts. Give them unlimited space and time, and you'll get a fascinating taxonomy of every possible consideration. Give them 30 minutes and two slides, and you'll get the two things that actually matter. Sometimes, the constraint doesn't dumb down the thinking—it distills it. The act of fitting an idea into a tight space reveals what's essential and what's decoration. It's regularization. You're penalizing complexity, and what remains is more generalizable, more actionable, more useful.

The real insight isn't that constraints are tolerable—it's that they actively improve output quality for a certain type of thinker. The person who can think deeply will always think deeply. Early stopping and regularization are both saying the same thing: most of the value is in the first 80%, and chasing the last 20% often makes things worse. The algorithm that generalizes isn't the one that saw everything. It's the one that stopped looking at the right time.

Knowing when to stop isn't a failure of rigor. It's a different kind of discipline entirely.