What Is Taste?

A recent paper trains a model to predict which of two papers will receive more citations, and calls this learning scientific taste. The results look solid, but the framing left me unsatisfied. Citations are a downstream byproduct of taste, sort of like training a film critic by having it predict box office revenue: the signal measures something real but misses the thing itself. This got me thinking: what is taste, exactly?
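I don't know the paper's actual architecture, but the natural formulation of "predict which of two papers gets more citations" is a pairwise preference model with a Bradley-Terry-style logistic loss. Here is a minimal sketch on synthetic data; the feature vectors, noise scale, and learning rate are all illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each paper is a feature vector, and a latent
# linear "quality" score (plus noise) decides which of a pair wins.
dim = 8
true_w = rng.normal(size=dim)

def make_pairs(n):
    a = rng.normal(size=(n, dim))
    b = rng.normal(size=(n, dim))
    # Label 1 if paper a's latent score exceeds paper b's.
    y = ((a - b) @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return a, b, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry loss: P(a beats b) = sigmoid(score_a - score_b),
# with linear scores, trained by plain gradient descent.
w = np.zeros(dim)
a, b, y = make_pairs(5000)
for _ in range(300):
    p = sigmoid((a - b) @ w)
    grad = (a - b).T @ (p - y) / len(y)
    w -= 0.5 * grad

# Held-out check: does the learned scorer rank fresh pairs correctly?
ta, tb, ty = make_pairs(1000)
acc = np.mean((sigmoid((ta - tb) @ w) > 0.5) == ty)
print(f"pairwise accuracy: {acc:.2f}")
```

Note that the model only ever learns a ranking over citation counts; whatever taste is, it enters this setup only to the extent that it leaks into citations.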

What Taste Is (and Isn’t)

Taste seems to be one of those things that people feel viscerally but that resists precise definition. Here I've collected some attempts at defining it, each of which I think captures a different aspect of the same thing.

Ira Glass describes the taste gap: you get into creative work because you have good taste, but early on, your ability can't match it. You can tell your work falls short, and that gap is painful enough that many people quit. The ones who don't quit close the gap through sheer volume of work. In ML language, your verifier (taste) runs ahead of your generator (execution).
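Glass's "sheer volume of work" has a neat best-of-n reading: even a weak generator, filtered by a verifier that ranks outputs better than chance, produces work near the verifier's standard. A toy illustration, with all the numbers made up:

```python
import random

random.seed(0)

def generate():
    # Weak generator: drafts are mediocre on average, occasionally good.
    return random.gauss(0.3, 0.2)

def verify(quality):
    # Taste: a noisy but informative read on a draft's true quality.
    return quality + random.gauss(0, 0.05)

def best_of_n(n):
    # Produce n drafts, keep the one taste likes best.
    drafts = [generate() for _ in range(n)]
    return max(drafts, key=verify)

avg_single = sum(generate() for _ in range(10000)) / 10000
avg_best16 = sum(best_of_n(16) for _ in range(1000)) / 1000
print(f"one draft: {avg_single:.2f}, best of 16: {avg_best16:.2f}")
```

The selection step only works because `verify` correlates with true quality; with a random verifier, volume alone buys nothing. That is one way to read why the people who quit during the gap never close it.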

Chris Olah distinguishes research intimacy from research taste. Intimacy is internalizing raw, undigested knowledge about your domain: e.g., memorizing hundreds of neurons in InceptionV1 and knowing how they behave. Taste is different, but intimacy feeds it. Olah suspects that many “brilliant insights” are natural next steps for someone deeply intimate with a topic, and that deep intimacy is “one of the key ingredients in beating the research taste market.”

Michael Nielsen identifies two researcher archetypes: the problem-solver and the problem-creator. Problem-solvers attack well-posed challenges, and problem-creators ask new questions or find simple connections no one noticed. Arguably, the problem-creator’s core skill is taste: knowing which questions to ask, which areas will thrive, which promising ideas won’t pan out. Richard Hamming makes the same point more bluntly: “What are the important problems of your field? Why aren’t you working on them?” The ability to answer the first question is taste. Hamming also noticed that people who worked with their office doors open, despite constant interruptions, ended up working on more important problems than those who kept their doors closed. The closed-door people were more productive day to day, but “somehow they seem to work on slightly the wrong thing.” Taste, it seems, is sharpened by ambient exposure to peers.

Harriet Zuckerman interviewed nearly every American Nobel laureate of the 20th century for the book Scientific Elite. She found that the primary benefit of apprenticeship under great scientists was adopting their research style and standards rather than access to resources. Many laureates identified the simplicity of solutions as a mark of taste.

Michael Polanyi discusses something related to taste in Personal Knowledge. His central concept is tacit knowledge: we always know more than we can tell. For example, a cyclist can’t articulate the physics of balance. Polanyi argues that all explicit knowledge rests on a tacit substrate, and that scientific discovery depends on trained intuition and aesthetic judgment.

My short synthesis of these descriptions: taste is an emergent felt sense, acquired bottom-up through practice and proximity. It operates as a fast, pre-verbal filter over an enormous space of possibilities. You can recognize it in others but can't transfer it directly, except by osmosis over sustained interaction.

Why Defining Taste Matters Now

I believe that taste is the core skill of research. It's what tells you which questions to ask, which results are surprising, which directions are worth your time. And precisely because it's so difficult to define, it's a skill that current models are far from having.

Can models eventually develop taste? I think so. The "human existence proof" shows that some configurations of neurons "have taste," in the sense that they can reliably produce good research, and I see no first-principles reason to believe artificial networks can't. But if the best human researchers can't articulate their own taste, it's not obvious what loss or reward would elicit it from a model.