
Farnam Street

Mastering the best of what other people have already figured out


Perfectionism

Brain Food – No. 525 – May 21, 2023

Timeless ideas and insights for life. (Read the archives)

The Metagame: Think One Step Ahead

“The metagame is about understanding the bigger picture and outsmarting the competition by doing something they can’t or won’t do. When you understand why your competitors do the things they do, you can choose to play a game they can’t play.”

— Source

Insight

“Hope begins in the dark, the stubborn hope that if you just show up and try to do the right thing, the dawn will come.”

— Anne Lamott

Tiny Thought

Everyone is a perfectionist when they care enough.

If you're not obsessed with something, you'll never master it.

The reason you won't master it is that you won't care enough to be a perfectionist.


Etc.

Elizabeth Gilbert reflects on the death of her partner:

“I have learned that Grief is a force of energy that cannot be controlled or predicted. It comes and goes on its own schedule. Grief does not obey your plans, or your wishes. Grief will do whatever it wants to you, whenever it wants to. In that regard, Grief has a lot in common with Love. The only way that I can “handle” Grief, then, is the same way that I “handle” Love — by not “handling” it. By bowing down before its power, in complete humility.”

— Source

On trust and AI

“The findings have implications for a variety of businesses, from retailers and hospitals to financial firms, as they decide not only how much to invest in AI, but how decision makers can use the technology to their advantage.

As part of the study, Tapestry managers who oversee shelf stocking provided employees called ‘allocators’ with two sets of recommendations to help them choose which goods to display. One set was from an algorithm that allocators could interpret, and the other was from a ‘black box’ algorithm they couldn’t. When the allocators received a recommendation from an interpretable algorithm, they often overruled it based on their own intuition. But when the same allocators had a recommendation from a similarly accurate ‘black box’ machine learning algorithm, they were more likely to accept it even though they couldn’t tell what was behind it. Why? Because they trusted their own peers who had worked with the programmers to develop the algorithm. The allocators ‘knew that people like them — people with their knowledge base and experience — had had input into how and why these recommendations were being made and had tested the performance of the algorithm.’”

— Source

Cheers,
— Shane

P.S. Six people can have their own key and open the gate, yet no two people have the same key. Clever.



© 2025 Farnam Street Media Inc. All Rights Reserved.