Two perspectives
[...] my biggest problem with Frequentist stuff is that it makes an objectionable assumption: that repeatedly performing an experiment is mathematically equivalent to sampling from a random sequence.
This is equivalent to a strong statement about the Kolmogorov complexity of the data coming out of your experiment. "I don't know better than p(x) what the next x value will be" and "the universe conspires so that in repeated sampling the long-run frequency of X=x is p(x)" are two completely different views of the physics of the world. Only one of them is compatible with the known facts about the world. Sometimes the frequentist viewpoint is an acceptable substitute for the "physics" (or, more generally, for a mechanistic description of a process), but that is an assumption that should in general be tested by collecting a large sample of things.
So if you're planning to do Frequentist statistics because you have thousands and thousands of data points collected under stable experimental conditions, and these pass at least some basic tests for randomness... then I say more power to you. That describes vastly less than 1% of most science. -- Daniel Lakeland [ref]
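The Kolmogorov-complexity framing above can be made concrete with a crude proxy: the compressed size of a data stream upper-bounds its description length, so genuinely random bits should be nearly incompressible, while structured data compresses drastically. A minimal sketch, using zlib's compression ratio as the proxy (my illustrative choice, not anything from the quoted comment):

```python
import random
import zlib

def compressed_fraction(bits):
    """Ratio of zlib-compressed size to raw size for a 0/1 sequence.

    Pack bits into bytes, compress at maximum level, and return
    compressed_len / raw_len as a rough incompressibility score.
    """
    raw = bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits) - 7, 8)
    )
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(1)
coin = [random.getrandbits(1) for _ in range(80_000)]  # PRNG "coin flips"
periodic = [i % 2 for i in range(80_000)]              # trivially structured

print(compressed_fraction(coin))      # close to 1: no exploitable structure
print(compressed_fraction(periodic))  # close to 0: low description length
```

A compressor is only an upper bound on Kolmogorov complexity, so this can show that data is *not* random, never prove that it is.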