Metascience Seminars

The Metascience Seminars showcase advances in metascience, broadly defined, including work that examines how science works in practice and how it can be improved. Recordings of past seminars will be available on our YouTube channel.

If you are interested in presenting, please email Jan Feld (jan.feld@vuw.ac.nz).

Seminar 1: When Might Correlation Be Causation? An Introduction to the Target Trial Framework

Speaker: Harrison Hansford (UNSW)
Watch: YouTube
Date and time: March 12, 2026, 11:00–11:45AM (AEDT)
Abstract: Many research questions about the effect of an intervention are unlikely to be answered with a randomised trial, and decision-makers may then rely on observational data. However, correlation does not always indicate causation. The target trial framework provides a principled way to design observational studies that reduces the risk of common biases introduced by study design. Harrison will introduce the target trial framework, illustrate how the approach has been applied in the medical literature, and briefly discuss TARGET, a reporting guideline designed to improve transparency, reproducibility, and methodological rigour in observational studies of interventions.

Seminar 2: Fake Participants in Human Research

Co-badged talk with the NSW Health Statewide Biobank!

Speaker: Fiona Giles (Melbourne)
Links: Forthcoming
Date and time: March 25

Seminar 3: How About "ERЯOR: A Bug Bounty Program for Science"?

Speaker: Malte Elson (Bern)
Links: Forthcoming
Date and time: April 23, 2026, 4:00–4:45PM (AEST)
Abstract: The scientific enterprise, as a human profession not immune to error, has developed several failsafe mechanisms (e.g., peer review) that reflect one of its basic tenets: science is self-correcting. However, these mechanisms are not purposely designed to catch errors, and often only do so long after errors have proliferated in the literature. Most errors are discovered coincidentally, and error detection as a scientific activity is rarely incentivised. The project Examining the Reproducibility and Robustness of Research (ERЯOR) is a comprehensive program to systematically detect, report, and prevent errors in scientific publications, modelled after bug bounty programs in the technology industry. It has two major goals: (1) estimating a benefit-cost ratio of funding error detection, and (2) obtaining robust empirical estimates of the types of errors and their prevalence in the literature.

In ERЯOR, investigators examine highly cited published works (including study materials, data, and code) for errors and receive monetary compensation depending on the severity of their findings: the more impactful the errors, the greater the payout. Similarly, authors who agree to have their work examined in this way receive compensation if their work proves to be reliable. A cost-benefit analysis of implementing ERЯOR on a larger scale will need to consider (1) running costs, (2) counterfactual costs (e.g., papers existing in the literature without having been purposely reviewed for errors), (3) consequential costs (e.g., further grants awarded building on flawed research), and (4) opportunity costs (ideas not pursued because resources went to flawed research instead).

ERЯOR joins a range of proposals and measures to foster a culture that is open to the possibility of error in science, and that embraces a new discourse norm of constructive criticism.