Replication Crisis in Parapsychology

From FusionGirl Wiki
Revision as of 13:53, 11 May 2026 by JonoThora (talk | contribs) (Psionics expansion (01a + 01b): content authored / LaTeX-restored per local submodule; lint-clean.)


Difficulty: Intermediate

The replication crisis in parapsychology refers to the difficulty of obtaining consistent replication of Anomalous_Cognition effects across independent laboratories, despite positive meta-analytic effects in the aggregate literature. It overlaps substantially with the broader replication crisis in psychology (Open Science Collaboration 2015; Klein et al. 2018; Many Labs project) but has specific features arising from the small effect sizes typical of parapsychological research.

This page surveys the evidence, the methodological issues, and the framework's position.

Symptoms

Parapsychology exhibits the classic replication-crisis symptoms:

  1. Meta-analytic positive effects — the Bem-Honorton 1994 ganzfeld, Storm et al. 2010 ganzfeld, Mossbridge-Tressoldi-Utts 2012 presentiment, and Bösch-Steinkamp-Boller 2006 RNG-PK meta-analyses all show statistically significant aggregate effects.
  2. Individual study failures — many high-profile direct replications of individual experiments produce null results (Galak et al. 2012 on Bem's 2011 retroactive recall; Kekecs et al. 2023 on Bem's Experiment 1).
  3. Effect-size shrinkage — initial-study effect sizes are typically larger than later-replication effect sizes, even in successful replication contexts.
  4. Funnel-plot asymmetry — meta-analyses show some evidence of publication bias.
  5. Heterogeneity — between-study variance is substantial across the literature.
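Symptoms 1, 3, and 5 are all outputs of the same computation: a random-effects meta-analysis, which pools study effects while estimating between-study variance. A minimal sketch of the standard DerSimonian-Laird estimator, using hypothetical study-level data (the numbers below are illustrative, not drawn from any of the cited meta-analyses):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis (DerSimonian-Laird tau^2 estimator).

    effects   : per-study effect sizes (e.g. Cohen's d)
    variances : per-study sampling variances
    Returns (pooled effect, its standard error, tau^2, Cochran's Q).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fe) ** 2 for wi, yi in zip(w, effects))  # heterogeneity statistic
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2, q

# Hypothetical five-study literature: mixed positive and null results
effects = [0.35, 0.10, 0.28, -0.05, 0.22]
variances = [0.02, 0.01, 0.03, 0.015, 0.025]
pooled, se, tau2, q = dersimonian_laird(effects, variances)
print(f"pooled d = {pooled:.3f} ± {se:.3f}, tau^2 = {tau2:.4f}, Q = {q:.2f}")
```

Note how the pooled estimate can be reliably positive even when individual studies are null or negative; this is the pattern described in symptoms 1 and 2.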

The 2011 trigger

The Bem 2011 Feeling the Future paper (Presentiment) is widely credited as a major catalyst for the broader psychology replication crisis. The argument: if such effects can be obtained in a mainstream-published study using standard psychology methods, then either:

  • The effects are real (forcing reconsideration of foundational physics).
  • The methods are systematically producing false positives (forcing reconsideration of mainstream psychology methods).

Mainstream psychology took the second horn. The result: pre-registration, registered reports, multi-lab consortia, and other methodological reforms.

Reform efforts

Modern parapsychology has adopted (or is adopting) most of the reform tools:

  • Pre-registration of hypotheses and analyses (Open Science Framework, AsPredicted).
  • Registered reports — peer review of methods before data collection; publication independent of results.
  • Multi-lab consortia (e.g. the Kekecs et al. 2023 Transparent Psi Project; the Bem et al. 2015 meta-analysis aggregated results from 33 labs).
  • Open data — raw data publicly archived for re-analysis.
  • Bayesian analysis — alongside or instead of frequentist tests, to better characterise evidence strength.
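The Bayesian bullet can be made concrete with a minimal Bayes-factor sketch for a four-choice ganzfeld hit count tested against the 25% chance rate, using a beta-binomial marginal likelihood for the alternative. The hit count below is hypothetical (chosen to sit near the ~33% hit rates reported in the ganzfeld literature), the function names are ours, and only the Python standard library is used:

```python
from math import lgamma, comb, log, exp

def log_beta(a, b):
    # log of the Beta function via log-gamma
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10_binomial(hits, n, p0=0.25, a=1.0, b=1.0):
    """Bayes factor for a binomial hit rate against a point null p0.

    H0: rate = p0 (chance);  H1: rate ~ Beta(a, b) prior (uniform by default).
    Returns BF10 = P(data | H1) / P(data | H0).
    """
    # marginal likelihood under H1: beta-binomial
    log_m1 = log(comb(n, hits)) + log_beta(hits + a, n - hits + b) - log_beta(a, b)
    # likelihood under the point null
    log_m0 = log(comb(n, hits)) + hits * log(p0) + (n - hits) * log(1 - p0)
    return exp(log_m1 - log_m0)

# Hypothetical: 120 hits in 354 four-choice sessions (~34% vs 25% chance)
print(f"BF10 = {bf10_binomial(120, 354):.1f}")
```

Unlike a p-value, the Bayes factor can also quantify evidence *for* the null: feeding in a hit count near chance (e.g. 88/354) yields BF10 well below 1.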

These reforms have produced methodologically tighter studies. Aggregate effects from the tighter studies are smaller than those from the older literature but typically remain statistically significant.

Specific replication landscapes

Ganzfeld

Ganzfeld is the most replicated parapsychological paradigm:

  • Bem-Honorton 1994 meta-analysis: 354 sessions, d ≈ 0.30, p < 10⁻⁹.
  • Milton-Wiseman 1999 analysis of post-1990 studies: null aggregate effect (this was widely cited as a "failure to replicate").
  • Storm-Tressoldi-Di Risio 2010 analysis of expanded post-1990 dataset: d ≈ 0.14, p < 10⁻⁸.
  • Cardeña 2018 review: aggregate effect persists at d ≈ 0.20-0.30.

The picture: the effect is robust at meta-analytic scale but has shrunk somewhat as methodology has tightened.

Remote viewing

  • Star Gate corpus (Utts 1996): d ≈ 0.20 over thousands of sessions.
  • Modern academic RV (Storm, May, et al.): d ≈ 0.10-0.20.
  • The effect persists at smaller magnitude than early estimates.

Bem 2011 presentiment

  • Bem 2011 nine studies: 8/9 significant, d ≈ 0.22.
  • Galak et al. 2012 replication of retroactive recall: null.
  • Bem et al. 2015 90-study meta-analytic replication by 33 labs: d ≈ 0.09 (smaller, but positive).
  • Kekecs et al. 2023 direct preregistered multi-lab replication: null.

The picture: Bem's specific behavioural paradigms do not replicate at original effect size. The autonomic-response presentiment paradigm (Mossbridge et al. 2012, Presentiment) shows more consistent positive effects.

RNG-PK

  • PEAR (1979-2007): persistent small positive effect of d ≈ 3 × 10⁻⁵.
  • Bösch-Steinkamp-Boller 2006 meta-analysis (380 studies): d ≈ 4 × 10⁻⁵, p < 0.0001.
  • Funnel-plot asymmetry suggests publication bias. After correction, effect size shrinks but remains positive.
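Funnel-plot asymmetry of the kind reported for the RNG-PK literature is commonly tested with Egger's regression: regress each study's standardized effect (d/se) on its precision (1/se); an intercept far from zero signals that small, imprecise studies report systematically larger effects. A sketch with made-up study data (the function name and the numbers are ours, for illustration only):

```python
import math

def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effect (d/se) on precision (1/se).
    Returns (intercept, standard error of the intercept).
    """
    y = [d / s for d, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    # standard error of the intercept from the OLS residuals
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx))
    return intercept, se_int

# Hypothetical literature where the smallest studies show the largest effects
effects = [0.40, 0.35, 0.25, 0.15, 0.08, 0.05]
ses     = [0.20, 0.18, 0.12, 0.08, 0.05, 0.04]
b0, se0 = egger_intercept(effects, ses)
print(f"Egger intercept = {b0:.2f} (se {se0:.2f})")
```

A clearly positive intercept, as in this fabricated example, is the signature of the small-study bias that the Bösch-Steinkamp-Boller correction attempts to adjust for.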

Interpretive frameworks

There are roughly three positions on the replication crisis in parapsychology:

Position 1: Effects are real but small

Parapsychological effects are genuine but small (d ≈ 0.10-0.30 range). They appear in meta-analyses; they do not appear in every individual study. This is exactly what one expects for a real effect with substantial heterogeneity, sitting at the limits of statistical detectability at typical sample sizes. On this view the replication crisis is a feature, not a bug: an objectively small effect requires many studies to characterise.

This is the psionic framework's position.

Position 2: Effects are artifacts

Parapsychological effects are statistical artifacts arising from publication bias, p-hacking, multiple comparisons, and methodological flaws. Meta-analyses appear positive because the literature is filtered for positive results; the underlying truth is null. The replication crisis confirms this: tighter studies produce null or smaller effects.

This is the position of mainstream skeptical critics (Wiseman, Wagenmakers, Galak).

Position 3: Effects depend on factors not yet controlled

Parapsychological effects are real but depend on factors not yet identified — operator-skill differences, experimenter effects, target characteristics, environmental conditions. The replication crisis reflects our incomplete understanding of these moderators. Future research needs to identify and control them.

This is the position of many parapsychologists (Cardeña, Storm, Tressoldi).

Framework position

The psionic framework aligns with Position 1, with elements of Position 3:

  • Effects are real but small — the framework predicts α (the ψ-coupling) is small, hence individual-trial effect sizes are small.
  • Substrate dependence — the framework predicts that ψ-coupling depends on coherent matter substrate; operator-skill differences and biological-substrate variability should modulate effect sizes.
  • Methodological tightening matters — the framework supports rigorous preregistration and replication standards.
  • Falsifiability matters — the framework offers specific predictions (see Falsification_Criteria_for_Psionics) that could falsify it; this distinguishes it from non-scientific paranormal claims.

Lessons from the replication crisis

For framework practitioners:

  1. Pre-register predictions — specify primary analyses before data collection.
  2. Power analyses — sample sizes determined in advance to detect small effects (at 80% power, d ≈ 0.20 requires roughly N ≈ 200 for a one-sample test, or ≈ 400 per group for a two-group comparison).
  3. Multi-lab replication — single-lab studies should not be trusted; consortium-level replication is needed.
  4. Open data and code — all analyses should be reproducible from raw data.
  5. Effect-size focus — emphasise effect-size estimation over significance testing.
  6. Heterogeneity analysis — characterise sources of between-study variance rather than dismissing them.
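The power figures in item 2 follow from the standard normal-approximation formula n = ((z₁₋α/₂ + z_power) / d)², doubled per group for a two-sample comparison. A self-contained sketch (function names are ours; the exact-t calculation would add a handful of participants to each figure):

```python
from math import ceil
from statistics import NormalDist

def n_one_sample(d, alpha=0.05, power=0.80):
    """Approximate N for a one-sample (or paired) t-test,
    via the normal approximation n = ((z_a + z_b) / d)^2."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided
    zb = z.inv_cdf(power)           # power quantile
    return ceil(((za + zb) / d) ** 2)

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group N for a two-sample t-test (twice the
    one-sample requirement)."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return ceil(2 * ((za + zb) / d) ** 2)

print(n_one_sample(0.20))   # → 197
print(n_per_group(0.20))    # → 393
```

The quadratic dependence on 1/d is the crux of the crisis: halving the expected effect size quadruples the required sample, which is why single-lab studies of d ≈ 0.1 effects are almost always underpowered.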


References

  • Open Science Collaboration (2015). "Estimating the reproducibility of psychological science." Science 349: aac4716.
  • Galak, J., LeBoeuf, R. A., Nelson, L. D., Simmons, J. P. (2012). "Correcting the past: Failures to replicate psi." Journal of Personality and Social Psychology 103: 933–948.
  • Bem, D. J., Tressoldi, P., Rabeyron, T., Duggan, M. (2015). "Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events." F1000Research 4: 1188.
  • Milton, J., Wiseman, R. (1999). "Does psi exist? Lack of replication of an anomalous process of information transfer." Psychological Bulletin 125: 387–391.
  • Storm, L., Tressoldi, P. E., Di Risio, L. (2010). "Meta-analysis of free-response studies, 1992–2008: Assessing the noise reduction model in parapsychology." Psychological Bulletin 136: 471–485.
  • Cardeña, E. (2018). "The experimental evidence for parapsychological phenomena: A review." American Psychologist 73: 663–677.
  • Kekecs, Z., et al. (2023). "Raising the value of research studies in psychological science by increasing the credibility of research reports: The transparent psi project." Royal Society Open Science.