> If you have a big effect you’ll see it even with small data.
That’s in line with what I was saying, so I’m not sure where I missed the point.
The p-value is a function of effect size, variance, and sample size. Bigger wins would be those with a larger and more consistent effect, scaled to the number of users (or just get more users).
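To make that concrete, here's a toy sketch (with made-up conversion rates and sample sizes) of a one-sided two-proportion z-test, showing how the p-value moves with both the size of the lift and the number of users:

```python
# Hypothetical numbers for illustration only: a two-proportion z-test showing
# how the p-value shrinks as either the lift (effect size) or the per-arm
# sample size grows.
from scipy.stats import norm

def two_prop_p_value(p_control, p_variant, n_per_arm):
    """One-sided p-value for variant > control, normal approximation."""
    pooled = (p_control + p_variant) / 2          # equal arms, so pool by averaging
    se = (2 * pooled * (1 - pooled) / n_per_arm) ** 0.5
    z = (p_variant - p_control) / se
    return norm.sf(z)

baseline = 0.10                                    # assumed 10% conversion in control
for lift, n in [(0.01, 1_000), (0.01, 10_000), (0.05, 1_000)]:
    p = two_prop_p_value(baseline, baseline + lift, n)
    print(f"lift={lift:.0%}, n/arm={n:>6}: p = {p:.4f}")
```

Same 1% lift goes from clearly non-significant at 1k users per arm to p ≈ 0.01 at 10k, and a 5% lift is tiny-p even at 1k, which is the whole "big effect shows up even with small data" point.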
> But in most cases, at a startup, you should be going after wins that are way more impactful and end up having p-values lower than 0.05, anyway.
This was the part I was quibbling with. The size of the p-value is pretty much meaningless unless you also know how much data you're collecting. The p-values might always land around 0.05 if you know the effects are likely to be large and you powered the study appropriately for them.
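For instance, here's a quick power-analysis sketch (hypothetical baseline and lifts, using statsmodels) of what "powered appropriately" means: the bigger the expected effect, the fewer users you run, so a true effect right at the planned size tends to land near the same alpha threshold either way.

```python
# Hypothetical numbers for illustration: solve for the per-arm sample size that
# gives 80% power at alpha = 0.05 for various expected lifts over a 10% baseline.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()
baseline = 0.10
for lift in (0.01, 0.02, 0.05):
    es = proportion_effectsize(baseline + lift, baseline)   # Cohen's h
    n = analysis.solve_power(effect_size=es, alpha=0.05, power=0.8,
                             alternative="larger")
    print(f"lift={lift:.0%}: ~{n:,.0f} users per arm for 80% power")
```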
Completely agree on the Bayesian point though, and the importance of defining the loss function. Getting people used to talking about the strength of the evidence rather than statistical significance is a massive win most of the time.