Resources

On this page we present advice and materials on Open Science and Open Science practices.

Frequently Asked Questions

In recent years, many studies have failed to replicate (the replication crisis), owing to inappropriate use of statistics, selective publication practices that favor desired results, a lack of access to materials and data, and even fraud. This gave rise to the Open Science movement, whose goal is to make scientific processes more transparent and thereby achieve more accessible and reproducible results. Measures to promote Open Science include preregistering studies, writing registered reports for provisional acceptance by journals, and making all materials, data, and code publicly available.

 

  • Spellman, B., Gilbert, E. A., & Corker, K. S. (2017, April 18). Open Science: What, Why, and How. https://doi.org/10.31234/osf.io/ak6jr
  • McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., ... & Yarkoni, T. (2016). Point of view: How open science helps researchers succeed. eLife, 5, e16800.
  • Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., ... & Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425.
  • https://twitter.com/KordingLab/status/1461701317714497551?s=20
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366.
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2018). False-Positive Citations [Review of False-Positive Citations]. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 13(2), 255–259.
  • Yarkoni, T. (2020). The generalizability crisis. The Behavioral and Brain Sciences, 1–37.
  • Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384.
  • Axt, J. R. (2016, January). So you’ve decided to do more open science [PowerPoint slides]. In K. Corker (chair), Translating open science into daily practice. Society for Personality and Social Psychology, San Diego, CA. Retrieved from https://osf.io/buvwf/
  • Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2017). Making replication mainstream. The Behavioral and Brain Sciences, 41, e120.
  • Gelman, A. (2018). How to Think Scientifically about Scientists’ Proposals for Fixing Science. Socius, 4, 2378023118785743.
  • https://osf.io/vkhbt/
  • https://www.youtube.com/watch?v=kzUtpDBo8wk

Open data refers to making raw data publicly available. This allows the data to be checked, which helps to identify potential errors or to confirm reported results, and it allows the data to be reused beyond their original purpose, for example to explore related hypotheses or to conduct meta-analyses. There are several repositories where data can be made available to anyone; a small automation sketch follows the list below.

 

Repositories for making data publicly available:

  • github.com
  • osf.io
  • figshare.com
  • datadryad.org
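
Below is a rough sketch of the "born-open" workflow described by Rouder (2016; reference below), in which newly collected data are committed and pushed to a public git repository automatically, e.g. from a nightly scheduled job. The sketch assumes a local git repository already linked to a public remote such as github.com; the data folder name is hypothetical.

    # Rough sketch of a "born-open" data workflow (cf. Rouder, 2016):
    # stage any new data files, commit them, and push to the public remote.
    # Assumes the working directory is a git repo with a remote configured;
    # "data" is a hypothetical folder where new session files appear.
    import subprocess
    from datetime import date

    def publish_new_data(data_dir: str = "data") -> None:
        subprocess.run(["git", "add", data_dir], check=True)
        # git commit exits non-zero when there is nothing new to commit
        result = subprocess.run(
            ["git", "commit", "-m", f"born-open data upload {date.today()}"]
        )
        if result.returncode == 0:  # push only if a commit was created
            subprocess.run(["git", "push"], check=True)

    if __name__ == "__main__":
        publish_new_data()

Run from a scheduler (e.g., cron) so uploads happen without manual intervention, which is the point of born-open data.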

Rouder, J. N. (2016). The what, why, and how of born-open data. Behavior research methods, 48(3), 1062-1069.

Piwowar, H. A., & Vision, T. J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175.

Preregistration means that the study, hypotheses, and/or analysis plans are specified prior to implementation and execution. This prevents hypotheses from being adjusted to fit the data after the fact (Hypothesizing After the Results are Known, or HARKing), which can spuriously inflate the apparent evidence for one's theory. It is also intended to distinguish data-contingent (exploratory) analyses more clearly from confirmatory testing.

 

How to do a preregistration?

Technically, all you need is a timestamped document that specifies your planned study hypotheses and data analysis plans before you run the experiment or analysis. You can achieve this in several ways, for example by depositing the document in a public registry such as osf.io; a bare-bones do-it-yourself sketch follows.
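
The sketch below is a bare-bones illustration of the "timestamped document" idea: it records a SHA-256 fingerprint of a preregistration file together with a UTC timestamp, so anyone holding the original file can later verify that it is unchanged. The filenames prereg.pdf and prereg_receipt.txt are hypothetical.

    # Bare-bones sketch: fingerprint a preregistration document and
    # record the hash together with a UTC timestamp.
    # "prereg.pdf" and "prereg_receipt.txt" are hypothetical filenames.
    import hashlib
    from datetime import datetime, timezone

    def fingerprint(path: str) -> str:
        # SHA-256 hash of the file contents; changes if even one byte changes
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    digest = fingerprint("prereg.pdf")
    stamp = datetime.now(timezone.utc).isoformat()
    with open("prereg_receipt.txt", "w") as out:
        out.write(f"{stamp}  sha256:{digest}\n")

Note that a self-issued timestamp is only as trustworthy as the party holding it, which is why depositing the document with a public registry remains the standard route.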

 

For critical perspectives on preregistration, see:

Szollosi, A., Kellen, D., Navarro, D., Shiffrin, R., & Donkin, C. (2019). Preregistration is redundant, at best. https://doi.org/10.31234/osf.io/x36pz

Szollosi, A., Kellen, D., Navarro, D. J., Shiffrin, R., van Rooij, I., Van Zandt, T., & Donkin, C. (2020). Is Preregistration Worthwhile? Trends in Cognitive Sciences, 24(2), 94–95.

Registered reports:

Registered reports are a publication format in which authors submit their manuscript prior to study implementation, before the results are known. They usually include the introduction, hypotheses, methods, analysis plans, expected results, and the importance of the study. They fulfill the same role as preregistrations (see above), but by decoupling the publication decision from the results they are additionally intended to reduce the rate of result-contingent publications and thereby counter the bias toward significant findings in the literature.

 

How to do a registered report?

Chambers, C. D., & Tzavella, L. (2021). The past, present and future of Registered Reports. Nature Human Behaviour, 1–14. https://www.nature.com/articles/s41562-021-01193-7

 

Open access (OA) means that research outputs (code, data, publications, etc.) can be downloaded free of charge. This provides a more equitable system of access to knowledge.

 

List of Open Access (OA) journals

https://web.archive.org/web/20130430095414/http://doaj.org/doaj?uiLanguage=en 

 

See also: 

Deceptive publishing

https://osiglobal.org/2019/03/19/osi-brief-deceptive-publishing/

Open Science does not only refer to making data or code public, preregistering hypotheses, or publishing Open Access. Open Science, at least in our understanding, also aims for policy changes at all levels of the academic system, including the employment status of scientists, questions of representation on committees and boards, and the current incentive system.

 

Employment status of PhD students, postdocs, etc.:

https://www.science.org/content/article/controversial-berlin-law-gives-postdocs-pathway-permanent-jobs

 

Incentive system: 

Publish or perish: how our literature serves the anti-science agenda

https://www.youtube.com/watch?v=-FKjykXRjHo

 

Callaway, Ewen. 2016. “Beat It, Impact Factor! Publishing Elite Turns against Controversial Metric.” Nature 535 (7611): 210–11.

https://www.nature.com/articles/nature.2016.20224

 

Editorial. 2006. “The Impact Factor Game. It Is Time to Find a Better Way to Assess the Scientific Literature.” PLoS Medicine 3 (6): e291.

https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0030291

 

Larivière, Vincent, and Cassidy R. Sugimoto. 2018. “The Journal Impact Factor: A Brief History, Critique, and Discussion of Adverse Effects.” arXiv [cs.DL]. arXiv. http://arxiv.org/abs/1801.08992.

https://arxiv.org/abs/1801.08992

 

Paulus, Frieder M., Nicole Cruz, and Sören Krach. 2018. “The Impact Factor Fallacy.” Frontiers in Psychology 9 (August). https://doi.org/10.3389/fpsyg.2018.01487.

https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01487/full

 

Brembs, Björn, Katherine Button, and Marcus Munafò. 2013. “Deep Impact: Unintended Consequences of Journal Rank.” Frontiers in Human Neuroscience 7: 291.

https://www.frontiersin.org/articles/10.3389/fnhum.2013.00291/full

 

Heesen, R. (2018). Why the reward structure of science makes reproducibility problems inevitable. The Journal of Philosophy, 115(12), 661–674.



List of publications on the critique of the impact factor: 

https://www.zotero.org/groups/2346073/open_research_open_science_open_scholarship/collections/NAIM9GHA

 

Representation / Diversifying science

 

“Joint Commitment for Action on Inclusion and Diversity in Publishing”

https://www.rsc.org/new-perspectives/talent/joint-commitment-for-action-inclusion-and-diversity-in-publishing/

 

Altman, Micah, and Philip N. Cohen. 2021. “Openness and Diversity in Journal Editorial Boards.” https://doi.org/10.31235/osf.io/4nq97.

https://osf.io/preprints/socarxiv/4nq97/

 

Eaton, Asia A., Jessica F. Saunders, Ryan K. Jacobson, and Keon West. 2020. “How Gender and Race Stereotypes Impact the Advancement of Scholars in STEM: Professors’ Biased Evaluations of Physics and Biology Post-Doctoral Candidates.” Sex Roles 82 (3): 127–41.

https://link.springer.com/article/10.1007%2Fs11199-019-01052-w

 

Ginther, Donna K., Walter T. Schaffer, Joshua Schnell, Beth Masimore, Faye Liu, Laurel L. Haak, and Raynard Kington. 2011. “Race, Ethnicity, and NIH Research Awards.” Science 333 (6045): 1015–19.

https://www.science.org/doi/10.1126/science.1196783?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed

 

Ginther, Donna K., Jodi Basner, Unni Jensen, Joshua Schnell, Raynard Kington, and Walter T. Schaffer. 2018. “Publications as Predictors of Racial and Ethnic Differences in NIH Research Awards.” PloS One 13 (11): e0205929.

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0205929#pone-0205929-g006

 

Eisen, Michael B. 2020. “Racism in Science: We Need to Act Now.” eLife 9 (June). https://doi.org/10.7554/eLife.59636.

https://elifesciences.org/articles/59636

Aczel, B., Szaszi, B., & Holcombe, A. O. (2021). A billion-dollar donation: estimating the cost of researchers’ time spent on peer review. Research Integrity and Peer Review, 6(1), 14.

Kathawalla, U.-K., Silverstein, P., & Syed, M. (2021). Easing into open science: A guide for graduate students and their advisors. Collabra. Psychology, 7(1). https://doi.org/10.1525/collabra.18684

 

Importance of Measurement

 

Fiedler, K., McCaughey, L., & Prager, J. (2021). Quo Vadis, Methodology? The Key Role of Manipulation Checks for Validity Control and Quality of Science. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 816–826.

 

Flake, J. K., & Fried, E. I. (2020). Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465.

 

Isager, P. M. (n.d.). Test validity defined as d-connection between target and measured attribute: Expanding the causal definition of Borsboom et al.

 

Kievit, R. A., Romeijn, J.-W., Waldorp, L. J., Wicherts, J. M., Scholte, H. S., & Borsboom, D. (2011). Mind the Gap: A Psychometric Approach to the Reduction Problem. Psychological Inquiry, 22(2), 67–87.

 

Markus, K. A., & Borsboom, D. (2012). The cat came back: Evaluating arguments against psychological measurement. Theory & Psychology, 22(4), 452–466.

 

Importance of Base Rate

 

Bird, A. (2018). Understanding the Replication Crisis as a Base Rate Fallacy. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy051

 

Importance of Theory

 

Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596–1618.

 

Berkman, E. T., & Wilson, S. M. (2021). So Useful as a Good Theory? The Practicality Crisis in (Social) Psychological Theory. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 864–874.

 

Devezer, B., Nardin, L. G., Baumgaertner, B., & Buzbas, E. O. (2019). Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. PloS One, 14(5), e0216125.

 

Devezer, B., Navarro, D. J., Vandekerckhove, J., & Ozge Buzbas, E. (2021). The case for formal methodology in scientific reform. Royal Society Open Science, 8(3), 200805.

 

Eronen, M. I., & Bringmann, L. F. (2021). The Theory Crisis in Psychology: How to Move Forward. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 779–788.

 

Gervais, W. M. (2021). Practical Methodological Reform Needs Good Theory. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 827–843.

 

Grahek, I., Schaller, M., & Tackett, J. L. (2021). Anatomy of a Psychological Theory: Integrating Construct-Validation and Computational-Modeling Methods to Advance Theorizing. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 803–815.

 

Haslbeck, J. M. B., Ryan, O., Robinaugh, D., Waldorp, L., & Borsboom, D. (n.d.). Modeling Psychopathology: From Data Models to Formal Theories. https://doi.org/10.31234/osf.io/jgm7f

 

Lundberg, I., Johnson, R., & Stewart, B. (2020). What is your estimand? Defining the target quantity connects statistical evidence to theory. In SocArXiv. https://doi.org/10.31235/osf.io/ba67n

 

Meehl, P. E. (1990). Appraising and amending theories: The strategy of lakatosian defense and two principles that warrant it. Psychological Inquiry, 1(2), 108–141.

 

Meehl, P. E. (2004). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Applied & Preventive Psychology: Journal of the American Association of Applied and Preventive Psychology, 11(1), 1.

 

Navarro, D. (n.d.). Between the devil and the deep blue sea: Tensions between scientific judgement and statistical model selection. https://doi.org/10.31234/osf.io/39q8y

 

Proulx, T., & Morey, R. D. (2021). Beyond Statistical Ritual: Theory in Psychological Science. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 671–681.

 

Rich, P., de Haan, R., Wareham, T., & van Rooij, I. (n.d.). How hard is cognitive science? https://doi.org/10.31234/osf.io/k79nv

 

Roberts, S., & Pashler, H. (2000). How Persuasive Is a Good Fit? A Comment on Theory Testing. https://doi.org/10.1037//0033-295X.107.2.358

 

van Rooij, I., & Baggio, G. (2021). Theory Before the Test: How to Build High-Verisimilitude Explanatory Theories in Psychological Science. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 682–697.

Guest, O., & Martin, A. E. (2021). How Computational Modeling Can Force Theory Building in Psychological Science. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 789–802.

 

Open Science Collaboration (in press). Maximizing the reproducibility of your research. In S. O. Lilienfeld & I. D. Waldman (Eds.), Psychological Science Under Scrutiny: Recent Challenges and Proposed Solutions. New York, NY: Wiley.

 

Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing Transparency Through a Multiverse Analysis. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 11(5), 702–712.

Rubin, M. (2017). An Evaluation of Four Solutions to the Forking Paths Problem: Adjusted Alpha, Preregistration, Sensitivity Analyses, and Abandoning the Neyman-Pearson Approach. Review of General Psychology: Journal of Division 1, of the American Psychological Association, 21(4), 321–329.

McElreath, R., & Smaldino, P. E. (2015). Replication, Communication, and the Population Dynamics of Scientific Discovery. PloS One, 10(8), e0136088.

Westfall, J., & Yarkoni, T. (2016). Statistically Controlling for Confounding Constructs Is Harder than You Think. PloS One, 11(3), e0152719.

Scheel, A. M., Tiokhin, L., Isager, P. M., & Lakens, D. (2021). Why Hypothesis Testers Should Spend Less Time Testing Hypotheses. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 744–755.

Greenland, S. (2019). Valid P-Values Behave Exactly as They Should: Some Misleading Criticisms of P-Values and Their Resolution With S-Values. The American Statistician, 73(sup1), 106–114.

Greenland, S., & Rafi, Z. (2019). To Aid Scientific Inference, Emphasize Unconditional Descriptions of Statistics. In arXiv [stat.ME]. arXiv. http://arxiv.org/abs/1909.08583

Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. European Journal of Epidemiology, 31(4), 337–350.

Tosh, C., Greengard, P., Goodrich, B., Gelman, A., Vehtari, A., & Hsu, D. (2021). The piranha problem: Large effects swimming in a small pond. In arXiv [math.ST]. arXiv. http://arxiv.org/abs/2105.13445

Hoffmann, S., Schönbrodt, F., Elsas, R., Wilson, R., Strasser, U., & Boulesteix, A.-L. (2021). The multiplicity of analysis strategies jeopardizes replicability: lessons learned across disciplines. Royal Society Open Science, 8(4), 201925.

Irvine, E. (2021). The Role of Replication Studies in Theory Building. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 16(4), 844–853.

Leonelli, S. (2018). Rethinking Reproducibility as a Criterion for Research Quality. In Including a Symposium on Mary Morgan: Curiosity, Imagination, and Surprise (Vol. 36B, pp. 129–146). Emerald Publishing Limited.

Mayo, D. G. (2018). Statistical inference as severe testing: How to get beyond the statistics wars. Cambridge University Press.



Amrhein, V., Trafimow, D., & Greenland, S. (2019). Inferential Statistics as Descriptive Statistics: There Is No Replication Crisis if We Don’t Expect Replication. The American Statistician, 73(sup1), 262–270.


Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6, 621.

Gelman, A. (n.d.). The Problems With P-Values Are Not Just With P-Values.

Gelman, A. (2018). The Failure of Null Hypothesis Significance Testing When Studying Incremental Changes, and What to Do About It. Personality & Social Psychology Bulletin, 44(1), 16–23.

Akerlof, G. A., & Michaillat, P. (2018). Persistence of false paradigms in low-power sciences. Proceedings of the National Academy of Sciences of the United States of America, 115(52), 13228–13233.
