Einstein vs peer review: where should we set the bar for good ideas?
What happens when curiosity and caution collide
First, a quick update: I’m pleased to say that Waterstones has recently named Proof one of the Best Popular Science Books of 2025! After working on the book behind the scenes for years, it’s been wonderful to see it now resonating with so many people. It’s previously also been picked as one of the best science books of the year (so far) by New Scientist, Financial Times and Barnes & Noble, so if you haven’t got your copy yet, I think you might like it.
On that note, I’d like to share one of the stories I came across while writing Proof, and what it can tell us about taking risks when adopting new ideas.
Einstein was furious. In 1936, an American journal had dared to send his paper for peer review without his permission. Einstein, who had recently moved to the US, was expecting the editor to make the decision on their own, as they often did in Europe. When, to his surprise, he received ten pages of critical review comments instead, he took it up with the editor:
‘We (Mr. Rosen and I) had sent you our manuscript for publication and had not authorised you to show it to specialists before it is printed. I see no reason to address the – in any case, erroneous – comments of your anonymous expert. On the basis of this incident, I prefer to publish the paper elsewhere.’
As the anger wore off, Einstein’s hostility to the feedback may have dimmed too. His paper later appeared, in heavily revised form, in a less prestigious journal.
Peer review is now ubiquitous in scientific publishing, but it wasn’t the norm in the early 20th century. In 1905, Albert Einstein published four groundbreaking papers, including his research on Brownian motion and special relativity. None of them were sent to external experts for review. Instead, the editors of Annalen der Physik read the research and decided to publish it.
Max Planck, the editor-in-chief, had spoken of how he didn’t want to be overly sceptical if it risked accidentally suppressing odd-but-valuable discoveries. As he put it, his policy was ‘to shun much more the reproach of having suppressed strange opinions than that of having been too gentle in evaluating them.’ As a result, Annalen der Physik typically published over 90 percent of the papers it received. In contrast, modern journals like Science reject over 90 percent of submitted papers.
When it comes to new ideas, there are two main errors we can make. We can set the bar too low, and let lots of bad ideas through. Or we can set the bar too high, and miss out on good ones.
Science isn’t the only field that grapples with these tradeoffs. Courts do as well. In the 1760s, English legal scholar William Blackstone argued that convicting innocent people is much worse than acquitting the guilty. ‘It is better that ten guilty persons escape than that one innocent suffer,’ as he put it. Not everyone has agreed with this balance. During insurrections in 1950s Vietnam, the mantra among the Viet Cong was ‘better to kill ten innocent people than let one enemy escape.’
Others even tried to quantify the tradeoffs. In revolutionary France, mathematician Marquis de Condorcet proposed that the risk of a false conviction should be no more than the risk of someone having a random fatal accident in any given week. In his calculations, this worked out to be around 1 in 144,768.
Even if you don’t adopt quite such a maths-heavy approach to decision-making, it’s still worth considering where you would place the bar. When I’ve given talks recently, audiences have generally sided with Blackstone on the topic of legal error. But there’s more ambiguity about whether they agree with Max Planck. The volume of scientific papers is now vastly larger, even if some of the old author-editor relationships persist in the era of peer review.
When it comes to evidence, it’s not just about how confident we are. It’s also about what we do with that confidence. Are you more Planck or Blackstone when it comes to decisions? Where do you currently set the bar for good ideas? And what would convince you to change that?


The fascinating addition to this one was that Einstein's submitted paper was utterly wrong. It claimed to prove gravitational waves cannot arise in General Relativity. And the anonymous reviewer had caught (one of) Einstein's major mistakes - he had confused a co-ordinate singularity with a physical one.
So agitated was the reviewer (Howard Robertson) when he heard that Einstein was going to publish anyway, he went to visit Infeld, one of Einstein's closest collaborators, to try to convince him and spare Einstein embarrassment. By the time Infeld went to talk to Einstein, Einstein had worked it out for himself (or at least he said he had) - and the paper as published comes to the opposite conclusion.
I imagine you know this and omitted it for the sake of space, but I find it ironic that the headlines on the LIGO results all (rightly) mention that Einstein predicted gravitational waves, when he was only saved from declaring them impossible by a conscientious peer reviewer.
Great commentary.
I will add that the use of standardized, boilerplate sets of citations in the first paragraph of so many papers these days seems a way of signalling to reviewers that the authors are not daring to step too far afield. Unfortunately, this leads to its own issues. In his exhaustive book, "Invasion Biology," Mark Davis writes:
"The process by which preliminary conclusions become inflated generalizations often involves a series of small missteps, each one of which might be regarded as mostly innocuous. For example, when citing a particular finding or conclusion for the first time, authors often take the time to describe the particular context in which the specific finding or conclusion was made. At a later time, the same author may then cite this same finding in another manuscript, or other researchers, without having actually read the original source, may use the information provided by a secondary reference to cite the original work. In both cases, it is common for these subsequent references to leave out the details needed to assess the reliability and generality of the original finding or conclusion. As time goes on, it is not uncommon for the finding or conclusion to be simply stated as fact, with a perfunctory citation of the original author. By now, the original findings or conclusions are often included as boilerplate in introductions and conclusions of articles and proposals. After enough of these iterations, the original finding can become such an integral part of accepted ecological wisdom that many authors feel comfortable in reporting it without citing any source at all. The general problem is that the more often that preliminary ideas and tentative conclusions are presented as an axiomatic starting point for further discussion and research, the more likely it is that practitioners, particularly young practitioners, begin to regard the statements as factual, believing that they have been thoroughly and comprehensively empirically confirmed."