Evidence-Based Practice in Healthcare
In what ways is bias avoided in randomised controlled trial design?
According to Fives et al. (2013), the United Kingdom Medical Research Council (MRC) has observed that numerous studies have, for decades, regarded randomised controlled trials (RCTs) as the best method for evaluating the effectiveness of interventions in healthcare research. The technique measures the relationship between an intervention and its intended outcomes in order to establish whether the treatment is effective (Kahan, Rehal and Cro, 2015). However, Mansournia et al. (2017) observe that most studies find RCTs to be susceptible to bias, which may arise at every phase of the technique. Bias refers to a systematic error that produces results or conclusions that deviate from the truth or the expected outcome (Mansournia et al., 2017). In randomised controlled trials, bias may occur at the study design stage, the planning stage, during the trial itself, and in the publication and dissemination of results. Recent research classifies pre-trial and design bias as ethical bias, arising when the requirement of clinical equipoise is not satisfied (National Institute for Health and Care Excellence, 2020): in such cases there is no genuine uncertainty about the relative merits of the interventions being compared in the trial. NICE (2020) suggests that, to control this kind of bias, researchers must agree that genuine uncertainty exists as to which treatment is superior, since human trials conducted without that uncertainty are unethical.
According to the United Kingdom National Institute for Health Research, design bias is also evident in many current RCTs (Fives et al., 2013). Controlled clinical trials are expensive, so they often rely on pharmaceutical industry funding to assess the efficacy of treatment interventions. Krauss (2018) states that in industry trials the desired outcomes may be established before the clinical trial begins, rendering the design biased. However convenient industry-sponsored trials have been, researchers describe them as a double-edged sword: they contribute to the pre-determined flaws of RCTs and have strained the relationship between the academic and pharmaceutical sectors (Sianesi, 2016). Investigators try to minimise this bias by opting for publicly sponsored clinical trials rather than industry funding, which may also help reduce the cost of drugs (Fives et al., 2013). Bodies such as the Association of Medical Research Charities (AMRC) in the United Kingdom now help to publicly fund almost a third of the medical research carried out in the UK (NICE, 2020). Additionally, NICE (2020) states that publicly funded trials would help curb the incentives that lead to harmful drug effects and to the exaggerated claims of effectiveness that often dominate industry-sponsored trials.
According to the National Institute for Health Research, selection bias is also prevalent in randomised controlled trials (Krauss, 2018). Selection bias can originate either with the experimenter, through sampling bias, or with the participants, through response bias (Mansournia et al., 2017). According to Sianesi (2016), studies have established that there is always a motive behind participation in research, which complicates the generalisation of the outcomes. Because of this preferential willingness to participate or not, randomised controlled trials are prone to response bias when knowledge of the intervention under study is presented to participants before randomisation (Sianesi, 2016). Selection bias has also been observed among recruiters when they know the participants' conditions (Kahan, Rehal and Cro, 2015); such knowledge prompts investigators to exaggerate the outcomes of each intervention arm (Kahan et al., 2015). Sampling bias may also arise when recruitment relies on persons registered in phone directories or healthcare databases, since not everyone is recorded on these platforms (Fives et al., 2013). The National Institute for Health Research suggests measures such as minimisation to avoid bias in RCTs (NICE, 2020); the strategy balances participant characteristics and allocation ratios across all the trial arms. Based on a report from the UK Medical Research Council, stratification is an alternative strategy for curtailing selection bias: participants are grouped according to key characteristics so that these are distributed equally across the trial arms (Krauss, 2018). Similarly, Mansournia et al. (2017) suggest block randomisation as a preventive measure, in which experimenters allocate an equal number of participants to each trial arm within every block, as illustrated in the sketch below.
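To make the block-randomisation idea concrete, the following minimal Python sketch generates permuted blocks so that each arm receives the same number of participants within every block; the function name and parameters are hypothetical and are not drawn from any of the cited studies.

```python
import random

def block_randomise(n_blocks, block_size=4, arms=("A", "B"), seed=None):
    """Build an allocation sequence from permuted blocks so that every
    block contains an equal number of participants per arm."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)             # permute the order within the block
        sequence.extend(block)
    return sequence

# Example: 3 blocks of 4 gives 6 participants in each arm overall
print(block_randomise(n_blocks=3, seed=42))
```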
According to Fives et al. (2013), masking is also an effective method of controlling bias in RCTs: investigators, healthcare providers and data collectors are blinded to which intervention each participant receives. Masking also helps to minimise other potential biases that may occur in RCTs (Fives et al., 2013). Alternatively, the Medical Research Council suggests allocation concealment as a strategy not only to control bias in randomised controlled trials but also to preserve the validity of the method (Paludan-Müller, Laursen and Hróbjartsson, 2016). It prevents the next assignment in the sequence from being known by either the researchers or the subjects, since such foreknowledge can produce unstable and inflated outcomes. Paludan-Müller et al. (2016) note that when the arm to which the next participant will be allocated is known, the study is no longer a randomised controlled trial but a non-randomised one. Such knowledge would tempt researchers to direct the intervention towards subjects they believe would benefit from it, thereby interfering with the assessment of that intervention's efficacy (Paludan-Müller et al., 2016). The authors state that the absence of allocation concealment may result in overestimated outcomes caused by biased allocation; a minimal sketch of how a concealed allocation sequence can work in practice is given below.
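As a rough illustration of allocation concealment, the hypothetical sketch below pre-generates a randomised allocation list and reveals each assignment only after a participant has been enrolled, in the spirit of sealed envelopes or a central randomisation service; the class and method names are invented for this example.

```python
import random

class ConcealedAllocator:
    """Pre-generates a randomised allocation list and reveals one
    assignment at a time, only after a participant has consented,
    so recruiters never see the upcoming allocations."""

    def __init__(self, n_participants, arms=("intervention", "control"), seed=None):
        rng = random.Random(seed)
        # Equal allocation, shuffled once and then hidden from recruiters
        self._sequence = list(arms) * (n_participants // len(arms))
        rng.shuffle(self._sequence)
        self._next = 0

    def enrol(self, participant_id):
        """Return the allocation for the participant who has just enrolled."""
        arm = self._sequence[self._next]
        self._next += 1
        return participant_id, arm

allocator = ConcealedAllocator(n_participants=6, seed=1)
print(allocator.enrol("P001"))
print(allocator.enrol("P002"))
```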
A report from the UK Health Research Authority indicates that publication bias is widespread in RCTs: positive and statistically significant outcomes are published far more often than negative results (NICE, 2020). The argument raised by those who tolerate this bias is that some results are insignificant both statistically and clinically, so their omission can be accepted (Fives et al., 2013). However, the Health Research Authority stipulates that all outcomes of a trial must be published, since even an apparently insignificant result may reveal a crucial aspect of an intervention which, if ignored, would compromise public health (Kahan et al., 2015). Other remedies put forward by the Health Research Authority to combat this bias include compulsory registration of trials in national and regional clinical trial registries (Fives et al., 2013), a detailed protocol for every clinical trial, and a comprehensive and explicit description of all interventions. The HRA also requires all studies to account for any missing data and for changes in outcomes.
What are Quality-Adjusted Life Years and why does NICE use them?
According to Ogden (2017), the UK National Institute for Health and Care Excellence (NICE) defines the quality-adjusted life year (QALY) as a measure of disease burden that encompasses both the quality and the quantity of life lived. The model is applied in the economic evaluation of the value and cost of the various medical interventions used in the British health sector (National Institute for Health and Care Excellence, 2017). Researchers describe it as a cost-utility analysis for measuring the health benefits of medical interventions and comparing the value of different medicines (Ogden, 2017). Given the limited budget and the growing number of spending options in the health sector, NICE applies the QALY model to ensure the cost-effectiveness of all activities geared towards guaranteeing effective healthcare services in England (Ogden, 2017). This aligns with one of its principles of "helping health, public health, and social care professionals deliver the best possible care in Britain and with the resources available" (NICE, 2017). According to NICE, one QALY represents one year of perfect health, and every year lived in less than perfect health has a utility value between 0 and 1. NICE expresses the health benefit of an intervention as QALYs gained, calculated as the product of the years of life gained and the utility value of that life [quality of life, QoL] (Ogden, 2017). NICE measures cost-effectiveness by combining the additional QALYs attributed to a specific medicine with the additional costs of that medication to yield the incremental cost per QALY gained. With this ratio, NICE can estimate the extra QALYs a new drug delivers, alongside its cost, relative to current treatments, and so reach an informed judgement on the value of that medicine (Ogden, 2017). By employing this method, NICE has been able to identify which interventions are cost-effective for the National Health Service; for instance, NICE generally regards interventions costing less than £20,000 per QALY gained as cost-effective. A worked sketch of the calculation follows.
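The arithmetic behind the cost-per-QALY judgement can be sketched as follows; the figures are purely illustrative and are not taken from NICE or from Ogden (2017).

```python
def qalys(years, utility):
    """QALYs = years of life x utility value (0 = death, 1 = perfect health)."""
    return years * utility

def icer(cost_new, cost_current, qalys_new, qalys_current):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_current) / (qalys_new - qalys_current)

# Invented example: a new drug gives 4 years at utility 0.75, the current
# treatment 4 years at utility 0.60, for an extra cost of GBP 9,000.
gained_new = qalys(4, 0.75)      # 3.0 QALYs
gained_current = qalys(4, 0.60)  # 2.4 QALYs
ratio = icer(cost_new=24_000, cost_current=15_000,
             qalys_new=gained_new, qalys_current=gained_current)
# 15,000 GBP per QALY gained, i.e. below the illustrative 20,000 threshold
print(f"{ratio:,.0f} GBP per QALY gained")
```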
Why is the sensitivity and specificity of a test important in relation to diagnosis?
According to Saunders et al. (2015), the UK National Institute for Health and Clinical Excellence views diagnostic tests as useful tools that give medical practitioners confidence in the diagnosis of a patient's illness. Recent medical research characterises diagnosis by several properties that determine how effectively it reveals illness in patients (McNamara and Martin, 2018), and identifies the sensitivity and specificity of diagnostic tests as the core measures of their value. Firstly, Power, Fell and Wright (2013) observe that sensitivity evaluates how good a test is at detecting whether a patient has a specific condition; it is the true positive rate, the probability that a patient who has the illness tests positive (Trevethan, 2017). Saunders et al. (2015) note that highly sensitive tests are used to detect the presence of a condition that exhibits few indicators. Specificity, on the other hand, measures how well a test identifies patients who do not have the condition under investigation (McNamara and Martin, 2018); NICE identifies it as the true negative rate for a given illness. A negative result on a highly sensitive test helps rule a diagnosis out, while a positive result on a highly specific test helps rule it in (Power, Fell and Wright, 2013). According to NICE, the specificity and sensitivity of tests are crucial when observing different populations and are regarded as efficient indicators of a test's diagnostic potential (Saunders et al., 2015). Additionally, they are useful summary measures for describing the utility of any testing technique (Trevethan, 2017). According to NICE, a valuable strategy for comparing the performance of diagnostic tests is to compare their sensitivities at equivalent specificities. The short sketch below shows how both measures are computed from a 2x2 table.
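A short sketch of how both measures are obtained from a standard 2x2 diagnostic table is given below; the counts are invented for illustration.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN): proportion of people with the condition
    who test positive. Specificity = TN / (TN + FP): proportion of people
    without the condition who test negative."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Invented 2x2 table: 90 true positives, 10 false negatives,
# 15 false positives, 85 true negatives.
sens, spec = sensitivity_specificity(tp=90, fp=15, fn=10, tn=85)
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")  # 0.90 and 0.85
```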
References
Fives, A., Russell, D.W., Kearns, N., Lyons, R., Eaton, P., Canavan, J., Devaney, C. and O’Brien, A., 2013. The role of random allocation in randomised controlled trials: distinguishing selection bias from baseline imbalance. Journal of Multidisciplinary Evaluation, 9(20), p.33.
Kahan, B.C., Rehal, S. and Cro, S., 2015. Risk of selection bias in randomised trials. Trials, 16(1), p.405.
Krauss, A., 2018. Why all randomised controlled trials produce biased results. Annals of medicine, 50(4), pp.312-322.
McNamara, L.A. and Martin, S.W., 2018. Principles of epidemiology and public health. In Principles and Practice of Pediatric Infectious Diseases (pp. 1-9). Elsevier.
Mansournia, M.A., Higgins, J.P., Sterne, J.A. and Hernán, M.A., 2017. Biases in randomised trials: a conversation between trialists and epidemiologists. Epidemiology (Cambridge, Mass.), 28(1), p.54.
National Institute for Health and Care Excellence, 2017. Summary of technology appraisal decisions. Available from: https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance/nice-technology-appraisalguidance/summary-of-decisions
National Institute for Health and Care Excellence, 2020. Appendix C: Methodology checklist: randomised controlled trials. Available from: https://www.nice.org.uk/process/pmg6/resources/the-guidelines-manual-appendices-bi-2549703709/chapter/appendix-c-methodology-checklist-randomised-controlled-trials
Ogden, J., 2017. QALYs and their role in the NICE decision‐making process. Prescriber, 28(4), pp.41-43.
Paludan-Müller, A., Laursen, D.R.T. and Hróbjartsson, A., 2016. Mechanisms and direction of allocation bias in randomised clinical trials. BMC medical research methodology, 16(1), p.133.
Power, M., Fell, G. and Wright, M., 2013. Principles for high-quality, high-value testing. BMJ Evidence-Based Medicine, 18(1), pp.5-10.
Saunders, L.J., Zhu, H., Bunce, C., Doré, C.J., Freemantle, N. and Crabb, D.P., 2015. Ophthalmic statistics note 5: diagnostic tests—sensitivity and specificity. British Journal of Ophthalmology, 99(9), pp.1168-1170.
Sianesi, B., 2016. “Randomisation bias” in the medical literature: A review (No. W16/23). IFS Working Papers.
Trevethan, R., 2017. Sensitivity, specificity, and predictive values: foundations, pliabilities, and pitfalls in research and practice. Frontiers in public health, 5, p.307.