HRB CRFC Open Science Challenge - Week 2
Thank you to everyone who came to our first session of the HRB Open Science Challenge. If you weren’t able to make it, no problem – we’ll catch you at the next session on May 4th, in room 4.05 of the Western Gateway Building (10:30 to 11:30), when we’ll be discussing what makes a good research question. Please give that some thought before we meet, and also consider what data you plan to use for your project.
During the first meeting we discussed motivations for open science.
Open Access – Since much of science is publicly funded, it seems reasonable that the public should be able to access the results of that research.
Reproducibility/Replicability – Single experiments in isolation provide little assurance that an observed effect is “real”. Science thus relies on scientists repeating and then confirming or disconfirming earlier studies to eventually arrive at a consensus. Sharing research methods and materials is needed to facilitate this process.
Research Integrity – There is plenty of evidence to suggest that many studies are not conducted properly, especially when it comes to analyses of the resulting data. Mistakes tend to go undetected because we typically work in isolation and don’t usually fully document our analyses. Further, we can’t really count on peer review to catch mistakes, since we don’t typically share materials with editors/reviewers, and their ability to judge science is highly variable. Thus, the best way (at least in my opinion) to improve the quality of data analyses is to ensure that they are done in a transparent, reproducible manner, which will be a substantial focus of these workshops.
Lastly, we touched on the various tools we’ll cover during the challenge. These include:
- Statistical programming/scripting (a short sketch follows this list)
- Reproducible reports/literate programming
- Software testing
- Pre-registration
- Registered reports
- Pre-prints
- Open science publishing platforms
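
To give a flavour of the first item on that list, here is a minimal sketch of what a scripted analysis could look like in Python. The file name trial_data.csv, the column names outcome and group, and the choice of a t-test are hypothetical placeholders, but the point carries over: because every step of the analysis lives in a script, anyone with the data can re-run it and check the result, rather than trying to retrace a series of point-and-click steps.

```python
# A minimal sketch of a scripted analysis (assumptions: a CSV called
# "trial_data.csv" with a numeric "outcome" column and a two-level
# "group" column; an independent-samples t-test as the analysis).
import pandas as pd
from scipy import stats


def compare_groups(df: pd.DataFrame, outcome: str = "outcome", group: str = "group"):
    """Compare the outcome between the two groups with an independent t-test."""
    levels = sorted(df[group].unique())
    a = df.loc[df[group] == levels[0], outcome]
    b = df.loc[df[group] == levels[1], outcome]
    return stats.ttest_ind(a, b)


if __name__ == "__main__":
    data = pd.read_csv("trial_data.csv")  # hypothetical input file
    result = compare_groups(data)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```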