pglpm (edited)

P-value-based methods and statistical significance are flawed: even when used correctly (e.g. stopping rule decided beforehand, the various “corrections” for number of datapoints, non-gaussianity, and so on), one can get results that are “statistically non-significant” but clearly significant in every common-sense meaning of the word, and vice versa. There is a steady literature – with mathematical and logical proofs – dating back to the 1940s pointing out the in-principle flaws of “statistical significance” and null-hypothesis testing. The editorial from the American Statistical Association gives an extensive list.
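A minimal sketch of the mismatch described above, using a textbook normal-approximation z-test on a coin-flip proportion (my own illustration, not from the ASA editorial): a huge sample with a practically negligible effect crosses the 0.05 threshold, while a small sample with a large apparent effect does not.

```python
import math

def two_sided_p(phat, p0, n):
    """Two-sided p-value for an observed proportion phat vs. null p0,
    using the normal approximation to the binomial."""
    z = (phat - p0) / math.sqrt(p0 * (1 - p0) / n)
    # Phi(x) via the error function; p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Huge sample, negligible effect (50.1% heads): "statistically significant"
p_big = two_sided_p(0.501, 0.5, 1_000_000)   # ~0.046 < 0.05

# Small sample, large apparent effect (65% heads): "not significant"
p_small = two_sided_p(0.65, 0.5, 20)         # ~0.18 > 0.05

print(p_big, p_small)
```

The point is not the arithmetic but the inversion: the threshold flags the trivial effect and misses the substantial one, purely as a function of sample size.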

I’d like to add: I’m saying this not because I read it somewhere (I don’t like unscientific “my football team is better than yours” discussions), but because I personally sat down and patiently went through the proofs and counterexamples, and the (almost non-existent) counter-proofs. That is what made me change methodology. It is something that many researchers who use “statistical significance” have never done.
