Critical evaluation of information: Credibility crisis

This post is the first in a series on critical evaluation of information.

Critical evaluation of information is an important element of inquiry learning and information literacy in formal education, the workplace and everyday home and community life. A mantra of librarianship is that web-based information in particular requires critical evaluation, with the rationale that in pre-internet times publishers and librarians did the filtering work for you, and thus what appeared in print was trustworthy and authoritative. My own University Library perpetuates this mantra:

[Image: Extract from QUT Library StudyWell resource]

The idea that all formally published information back in the day was ‘reliable and relevant’ is a fiction, and it remains a fiction today. It does students a disservice to lull them into a false sense of security about information that has been filtered for them by publishers and librarians. Scholarly sources have always contained biases and omissions, such as a lack of voices from women, indigenous peoples and other minority groups, and peer review procedures have probably always been suspect.

The credibility of scholarly information is compromised by a range of factors including:

  • Publishing & dissemination bias
  • Falsified research findings, academic fraud & misconduct
  • Plagiarism
  • Hoax
  • Corruption
  • Conflict of interest
  • Lack of rigour
  • Predatory publishing
  • p-hacking/data dredging
  • etc…


In recent times there seems to be an increasing number of exposures of credibility problems, as seen via Retraction Watch, which publishes instances of scholarly and journalistic misconduct. For instance, in 2012 Fang, Steen and Casadevall investigated retracted papers and found that 67.4% of retractions were due to misconduct such as fraud (data fabrication and falsification), duplicate publication and plagiarism, and that there had been a ‘discernible rise’ (p. 17028) in misconduct from the 1990s.

Non-replicability of findings has also emerged as an issue. ‘Direct replication is the attempt to recreate the conditions believed sufficient for obtaining a previously observed finding and is the means of establishing reproducibility of a finding with new data’ (Open Science Collaboration, 2015). Pashler and Harris (2012) found that in experimental psychology direct replication was uncommon, with conceptual replication (where a different methodological design is used to test the original finding) being more likely. They conclude that ‘there are likely to be serious replicability problems in our field, and correcting these errors will require many significant reforms in current practices and incentives’ (p. 535). The Open Science Collaboration (2015) conducted replications of 100 psychology studies and found that the replications produced smaller effect sizes and fewer statistically significant results than the originals, providing weaker evidence for the original findings. Likewise, Chang and Li (2015) from the U.S. Federal Reserve attempted to replicate economics studies published in 67 papers and concluded that ‘economics research is usually not replicable’ (p. 1).

The journal PLOS Biology has recently responded to the problem of replicability by broadening the scope of the journal to include meta-research (Kousta, Ferguson & Ganley 2016):

[Image: Extract from Kousta, Ferguson & Ganley 2016, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4699700/ CC BY 4.0]

Publication bias may be a problem related to replicability. Publication bias is evident through selective publication, where ‘studies with significant or positive results were more likely to be published than those with non-significant or negative results’ (Song et al. 2010, p. iii). Pashler and Harris (2012) observed that ‘performing conceptual rather than direct replication attempts interacts insidiously with publication bias, opening the door to literatures that appear to confirm the reality of phenomena that in fact do not exist’ (p. 531). The Open Science Collaboration (2015) points out that publication bias is an issue because ‘journal reviewers and editors may dismiss a new test of a published idea as unoriginal’, and thus reject a paper for publication.
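To make the mechanism concrete, here is a minimal simulation, my own illustration rather than anything drawn from the studies cited above, of what selective publication does to a literature: if journals only accept ‘significant’ results, the published record systematically overstates the true effect. The true effect size, sample size and number of simulated studies below are arbitrary assumptions.

```python
# Sketch: how publishing only significant results inflates the apparent effect.
# All parameters are illustrative assumptions, not values from the cited papers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_group, n_studies = 0.2, 20, 5000

observed, published = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    effect = treatment.mean() - control.mean()      # observed difference in means
    p = stats.ttest_ind(treatment, control).pvalue
    observed.append(effect)
    if p < 0.05:                                     # the 'journal' accepts only significant results
        published.append(effect)

print(f"true effect:                      {true_effect}")
print(f"mean effect, all studies run:     {np.mean(observed):.2f}")
print(f"mean effect, published studies:   {np.mean(published):.2f}")
```

Running the sketch shows the published average sitting well above the true effect, which is the distortion that later meta-analyses and replications then have to contend with.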

The rigour of scientific research has been examined in a publication reporting on the risk of bias in research conducted with animals (Macleod et al. 2015). The authors found that ‘of over 1,000 publications from leading UK institutions over two-thirds did not report even one of four items considered critical to reducing the risk of bias, and only one publication reported all four measures’ (p. 9). The measures were:

  1. randomisation
  2. blinded assessment of outcome
  3. sample size calculations
  4. conflict of interest

It seems that linguistic obfuscation could be a marker of fraudulent science. Markowitz and Hancock (2015) examined 253 publications retracted for fraudulent data and compared them with a control group of unretracted papers. They found that ‘fraudulent papers were written with significantly higher levels of linguistic obfuscation, including lower readability and higher levels of jargon’ (p. 1) and that ‘linguistic obfuscation was correlated with the number of references per paper’ (p. 8).
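As a rough illustration of what ‘lower readability’ means in practice, the sketch below computes the classic Flesch Reading Ease score for two invented sentences. Markowitz and Hancock used far more sophisticated linguistic measures, so this is only a crude proxy, and the syllable counter is a naive heuristic of my own.

```python
# Sketch: Flesch Reading Ease as a crude readability proxy (higher = easier to read).
# The syllable counter is a naive heuristic; real readability tools do much more.
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

plain = "We measured the cells. The cells grew faster in warm water."
jargon = ("Quantitative cytometric assessment demonstrated statistically "
          "significant proliferative augmentation under hyperthermic aqueous conditions.")
print(f"plain:  {flesch_reading_ease(plain):.1f}")
print(f"jargon: {flesch_reading_ease(jargon):.1f}")
```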

P-hacking, or data dredging, is the practice of manipulating data and statistical tests to produce statistical significance. There is evidence to suggest that p-hacking is widespread. Masicampo and Lalande (2012) explain that ‘in null hypothesis testing, p values are judged relative to an arbitrary threshold for significance (.05)’ (p. 2271). In examining 12 issues from three prominent psychology journals, they found that ‘p values were much more common immediately below .05 than would be expected based on the number of p values occurring in other ranges’ (p. 2271). There thus appeared to be a bias towards reporting results sitting just on the ‘significant’ side of the threshold, implying that researchers had nudged their results over the line. They concluded that this may be due to publication bias and an overreliance on null-hypothesis significance testing, and speculated that researchers may be engaging in data dredging in order to ‘nudge research results in a favorable direction’ (p. 2276). Similarly, Leggett et al. (2013) found an overrepresentation of p values around the arbitrary cut-off of .05: comparing psychology articles published in 1965 with those published in 2005, they found an increase in ‘just significant’ results. P-hacking was also found in a recent analysis of all open access papers on the PubMed database (Head et al. 2015). The authors found that ‘p-hacking is widespread throughout science…[but that] p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses’.
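A minimal sketch of one p-hacking tactic, optional stopping, may help: a researcher whose result just misses .05 keeps collecting data and re-testing until it crosses the threshold. This is my own illustration rather than the procedure examined in any of the studies above, and the sample sizes and number of simulated studies are arbitrary assumptions. With no true effect at all, the honest strategy yields roughly 5% false positives, while the p-hacked strategy yields noticeably more, clustered just below .05.

```python
# Sketch: optional stopping as a form of p-hacking. No true effect exists,
# so every 'significant' result is a false positive. Parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(p_hack: bool) -> float:
    """Simulate one two-group study with no true effect; return its final p value."""
    a, b = rng.normal(size=30), rng.normal(size=30)
    p = stats.ttest_ind(a, b).pvalue
    if p_hack:
        # Optional stopping: add 10 participants per group and re-test,
        # stopping as soon as p drops below .05 (or after five rounds).
        for _ in range(5):
            if p < 0.05:
                break
            a = np.concatenate([a, rng.normal(size=10)])
            b = np.concatenate([b, rng.normal(size=10)])
            p = stats.ttest_ind(a, b).pvalue
    return p

for label, hack in [("honest", False), ("p-hacked", True)]:
    pvals = np.array([one_study(hack) for _ in range(2000)])
    print(f"{label}: {np.mean(pvals < 0.05):.1%} 'significant', "
          f"{np.mean((pvals >= 0.04) & (pvals < 0.05)):.1%} just below .05")
```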

Predatory publishing is where unscrupulous publishers create open access journals that accept papers without any discernible peer review or quality control, and often charge a fee for publication. The full criteria for deeming a journal predatory are here, and the website run by librarian Jeffrey Beall, which includes a list of predatory publishers, is here. Predatory publishing has been exposed via a number of stings, hoaxes and investigations. The Conversation has published some articles explaining the issue here and here.

Unlike the other credibility factors mentioned in this post, predatory publishing is a phenomenon made possible by web publishing, and therefore supports the stance argued in the extract from my University Library quoted above. However, given the evidence of widespread problems with credibility due to a range of factors, it seems nonsensical to argue that internet publishing is the basis for a rise in unreliable information. It seems to me that we should encourage and empower students to be critical of all information, regardless of the credibility of the publisher or its availability via the library catalogue and subscription databases.


References:

Chang, A., & Li, P. (2015). Is economics research replicable? Sixty published papers from thirteen journals say “usually not”. Washington: Board of Governors of the Federal Reserve System.

Fang, F., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences of the United States of America, 109(42), 17028-17033.

Head, M., Holman, L., Kahn, A., & Jennions, M. (2015). The extent and consequences of p-hacking in science. PLOS Biology, 13(3), e1002106. doi: 10.1371/journal.pbio.1002106

Kousta, S., Ferguson, C., & Ganley, E. (2016). Meta-research: Broadening the scope of PLOS Biology. PLOS Biology, 14(1).

Leggett, N., Thomas, N., Loetscher, T., & Nicholls, M. (2013). The life of p: ‘Just significant’ results are on the rise. Quarterly Journal of Experimental Psychology, 66(12), 2303-2309.

Macleod, M., McLean, A., Kyriakopoulou, A., Serghiou, S., Wilde, A., Sherratt, N., . . . Sena, E. (2015). Risk of bias in reports of in vivo research: A focus for improvement. PLOS Biology, 13(10).

Markowitz, D., & Hancock, J. (2015). Linguistic obfuscation in fraudulent science. Journal of Language and Social Psychology, 1-11. doi: 10.1177/0261927X15614605

Masicampo, E., & Lalande, D. (2012). A peculiar prevalence of p values just below .05. Quarterly Journal of Experimental Psychology, 65(11), 2271-2279.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). doi: 10.1126/science.aac4716

Pashler, H., & Harris, C. (2012). Is the replicability crisis overblown? Three arguments examined. Perspectives on Psychological Science, 7(6), 531-536.

Schmucker, C., Schell, L., Portalupi, S., Oeller, P., Cabrera, L., Bassler, D., . . . Meerpohl, J. (2014). Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLOS ONE, 9(12). doi: 10.1371/journal.pone.0114023

Song, F., Parekh, S., Hooper, L., Loke, Y., Ryder, J., Sutton, A., . . . Harvey, I. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8).
