Objective: To evaluate the risk of bias and the reliability of conclusions of systematic reviews (SRs) of lung cancer screening. Methods: We searched PubMed, EMbase, The Cochrane Library (Issue 2, 2016), Web of Knowledge, CBM, WanFang Data and CNKI to collect SRs of lung cancer screening from inception to February 29th, 2016. The ROBIS tool was applied to assess the risk of bias of the included SRs, and the GRADE system was then used to assess the quality of evidence for the outcomes of these SRs. Results: A total of 11 SRs involving 5 outcomes (mortality, detection rate, survival rate, over-diagnosis, and potential benefits and harms) were included. The ROBIS assessment showed that two studies fully matched the 4 questions of phase 1. In phase 2, 6 studies were at low risk of bias in the inclusion criteria domain; 8 studies were at low risk of bias in the literature search and screening domain; 3 studies were at low risk of bias in the data extraction and quality assessment domain; and 5 studies were at low risk of bias in the data synthesis domain. In phase 3, which judges the overall risk of bias, 5 studies were rated as low risk. The GRADE assessment showed that 3 studies had grade A evidence for mortality; 1 study had grade A evidence for detection rate; 1 study had grade A evidence for survival rate; 3 studies had grade C evidence for over-diagnosis; and 2 studies had grade B evidence for potential benefits and harms. Conclusion: The risk of bias of SRs of lung cancer screening is moderate overall; however, the quality of evidence for the outcomes of these SRs is low overall. Clinicians should use this evidence cautiously and make decisions based on local circumstances.
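For readers less familiar with ROBIS's phased structure, the following minimal Python sketch shows how domain-level judgements like those reported above could be recorded and conservatively summarized. The domain labels follow this abstract, and the "low only if every domain is low" rule is an illustrative simplification: ROBIS itself asks reviewers to make the phase 3 judgement holistically, not by a mechanical rule.

```python
# Illustrative only: ROBIS phase 2 domain judgements for one hypothetical SR.
DOMAINS = [
    "inclusion criteria",
    "literature search and screening",
    "data extraction and quality assessment",
    "data synthesis",
]

def overall_risk(judgements: dict) -> str:
    """Conservative summary: overall 'low' only if every domain is 'low'."""
    return "low" if all(judgements[d] == "low" for d in DOMAINS) else "high/unclear"

example = {
    "inclusion criteria": "low",
    "literature search and screening": "low",
    "data extraction and quality assessment": "unclear",
    "data synthesis": "low",
}
print(overall_risk(example))  # -> high/unclear
```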
Accurately assessing the risk of bias is a critical challenge in network meta-analysis (NMA). By integrating direct and indirect evidence, NMA enables the comparison of multiple interventions, but its results are often affected by risks of bias, particularly the propagation of bias through complex evidence networks. This paper systematically reviews the bias risk assessment tools commonly used in NMA, highlighting their applications, limitations, and challenges across interventional trials, observational studies, diagnostic tests, and animal experiments. To address the misapplication and mixed usage of tools and the lack of a comprehensive tool for assessing overall bias in NMA, we propose strategies such as simplifying tool operation, enhancing usability, and standardizing evaluation processes. Furthermore, advances in artificial intelligence (AI) and large language models (LLMs) offer promising opportunities to streamline bias risk assessment and reduce subjective human influence. The development of specialized tools and the integration of intelligent technologies will enhance the rigor and reliability of NMA studies, providing robust evidence to support medical research and clinical decision-making.
This paper introduces the main content of ROB-ME (Risk Of Bias due to Missing Evidence), including its background, the scope of the tool, its signalling questions, and the assessment process. The ROB-ME tool has the advantages of clear logic, complete detail, simple operation, and good applicability. It offers considerable advantages for assessing the risk of non-reporting biases and will be useful to researchers; it is therefore worth promoting and applying.
The COSMIN-RoB checklist comprises three sections with a total of 10 boxes and is used to evaluate the risk of bias of studies on content validity, internal structure, and other measurement properties. COSMIN classifies reliability, measurement error, criterion validity, hypothesis testing for construct validity, and responsiveness as "other measurement properties", which primarily concern the quality of the (sub)scale as a whole rather than the item level. Among these five measurement properties, reliability, measurement error, and criterion validity are the most widely assessed. This paper therefore interprets the COSMIN-RoB checklist with examples to guide researchers in evaluating the risk of bias of studies on the reliability, measurement error, and criterion validity of PROMs.
Nonrandomized studies are an important means of evaluating the effects of exposures (including environmental, occupational, and behavioral exposures) on human health. The Risk Of Bias In Non-randomized Studies of Exposures (ROBINS-E) tool is used to evaluate the risk of bias in observational studies of natural or occupational exposures. This paper introduces the main content of ROBINS-E 2022, including its background, seven domains, signalling questions, and the assessment process.
Evidence synthesis is the process of systematically gathering, analyzing, and integrating available research evidence. The quality of an evidence synthesis depends on the quality of the original studies included. Validity assessment, also known as risk of bias assessment, is an essential method for assessing the quality of these original studies. Numerous validity assessment tools are currently available, but some lack a rigorous development process and evaluation. Applying inappropriate validity assessment tools to the original studies during evidence synthesis may compromise the accuracy of conclusions and mislead clinical practice. To address this problem, the LATITUDES Network, a one-stop resource website for validity assessment tools, was established in September 2023, led by academics at the University of Bristol, UK. The Network is dedicated to collecting, curating, and promoting validity assessment tools to improve the accuracy of validity assessments of original studies and to increase the robustness and reliability of evidence synthesis results. This study introduces the background of the LATITUDES Network, the validity assessment tools it includes, and its training resources, so as to help domestic scholars learn about the Network, select appropriate validity assessment tools for study quality assessment, and inform the development of new validity assessment tools.
Objective: To realize automatic risk of bias assessment for randomized controlled trial (RCT) literature using BERT (Bidirectional Encoder Representations from Transformers) for feature representation and text classification. Methods: We first searched The Cochrane Library to obtain risk of bias assessment data and detailed information on RCTs, and constructed data sets for text classification. We assigned 80% of the data set as the training set, 10% as the test set, and 10% as the validation set. We then used BERT to extract features and build the text classification model, and evaluated the seven types of risk of bias judgments (rated high or low). The results were compared with those of a traditional machine learning method combining n-gram and TF-IDF features with a linear SVM classifier. Precision (P), recall (R), and F1 values were used to evaluate model performance. Results: The BERT-based model achieved F1 values of 78.5% to 95.2% on the seven risk of bias assessment tasks, 14.7% higher than the traditional machine learning method. In the task of extracting bias descriptions, F1 values of 85.7% to 92.8% were obtained for the six bias types other than "other sources of bias", 18.2% higher than the traditional machine learning method. Conclusions: The BERT-based automatic risk of bias assessment model achieves higher accuracy in risk of bias assessment of RCT literature and improves the efficiency of assessment.
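As a rough illustration of this approach (a minimal sketch, not the authors' exact pipeline), the code below fine-tunes a BERT sequence classifier on trial descriptions labeled high/low risk using the Hugging Face transformers library. The toy texts, the 0/1 label coding, and the hyperparameters are assumptions for demonstration only.

```python
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

# Toy stand-ins for trial descriptions; the real data set would be built
# from Cochrane risk-of-bias tables, as described in the abstract.
train_texts = ["Randomization used a computer-generated allocation list.",
               "Patients were allocated by alternating admission order."]
train_labels = [0, 1]  # assumed coding: 0 = low risk, 1 = high risk

class RobDataset(Dataset):
    """Wraps tokenized trial descriptions and their risk-of-bias labels."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="rob-bert", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=RobDataset(train_texts, train_labels, tokenizer),
)
trainer.train()  # one such classifier would be trained per bias domain
```

In practice one binary classifier of this kind would be trained for each of the seven risk of bias types, and the held-out 10% test split scored with precision, recall, and F1 as reported above.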
High-quality randomized controlled trials are the best source of evidence on the relationship between health interventions and outcomes. However, when such trials are insufficient, indirect, or inappropriate, researchers may need to include non-randomized studies of interventions to strengthen the body of evidence and improve the certainty (quality) of evidence. Recent work from the GRADE working group provides a way for researchers to integrate randomized and non-randomized evidence. This paper introduces the relevant methods to provide guidance for systematic reviewers, health technology assessors, and guideline developers.
This paper summarizes the methodological quality assessment tools for artificial intelligence-based diagnostic test accuracy studies, introducing QUADAS-AI and the modified QUADAS-2. It also summarizes the reporting guidelines for these studies, introducing specific reporting standards for AI-centred research and the checklist for AI in dental research.
The QUADAS-2, QUIPS, and PROBAST tools are not specific to prognostic accuracy studies, and using them to assess the risk of bias in such studies may itself introduce bias. QUAPAS, a risk of bias assessment tool for prognostic accuracy studies, has therefore recently been developed. The tool combines elements of QUADAS-2, QUIPS, and PROBAST, and consists of 5 domains, 18 signalling questions, 5 risk of bias questions, and 4 applicability questions. This paper introduces the content and usage of QUAPAS to provide inspiration and reference for domestic researchers.
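As a quick orientation to the tool's structure, the sketch below encodes QUAPAS's reported composition as a simple Python data structure. The five domain names are assumptions based on common citations of QUAPAS and should be verified against the original publication; the question totals are those stated in the abstract.

```python
# Illustrative encoding of the QUAPAS structure described above.
# Domain names are assumed, not taken from the abstract.
QUAPAS = {
    "domains": ["participants", "index test", "outcome",
                "flow and timing", "analysis"],
    "signalling_questions": 18,   # total across all domains
    "risk_of_bias_questions": 5,  # one overall judgement per domain
    "applicability_questions": 4,
}

# One risk-of-bias judgement per domain, per the abstract's counts.
assert len(QUAPAS["domains"]) == QUAPAS["risk_of_bias_questions"]
```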