West China Medical Publishers
Keyword search for "偏倚风险" (risk of bias): 28 results
  • BERT-based automated risk of bias assessment

    Objective To realize automated risk of bias assessment for randomized controlled trial (RCT) literature using BERT (Bidirectional Encoder Representations from Transformers) for feature representation and text classification. Methods We first searched The Cochrane Library to obtain risk of bias assessments and detailed information on RCTs, and constructed data sets for text classification. We assigned 80% of the data set as the training set, 10% as the test set, and 10% as the validation set. We then used BERT to extract features, built a text classification model, and classified each of the seven types of risk of bias (high vs. low). The results were compared with those of a traditional machine learning method combining n-gram and TF-IDF features with a Linear SVM classifier. Precision (P), recall (R) and F1 were used to evaluate the performance of the models. Results The BERT-based model achieved F1 values of 78.5% to 95.2% on the seven risk of bias assessment tasks, 14.7% higher than the traditional machine learning method. In the task of extracting bias-description sentences, F1 values of 85.7% to 92.8% were obtained for the six bias types other than "other sources of bias", 18.2% higher than the traditional machine learning method. Conclusions The BERT-based automated risk of bias assessment model achieves higher accuracy in risk of bias assessment for RCT literature and improves the efficiency of assessment. (An illustrative sketch of the traditional baseline follows this entry.)

    Release date: 2021-03-19
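
    A minimal illustrative sketch (not the authors' code) of the traditional baseline described in the abstract above: word n-gram TF-IDF features with a Linear SVM classifier, scored with precision (P), recall (R) and F1. The sentences, labels and split below are invented placeholders; in the study, the inputs would be risk of bias-related text drawn from Cochrane assessments of RCTs.

        # Hypothetical baseline sketch: n-gram TF-IDF + Linear SVM, scored with P/R/F1.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.metrics import precision_recall_fscore_support

        # Placeholder training sentences for one bias domain (0 = low risk, 1 = high risk).
        train_texts = [
            "randomization sequence was computer generated",
            "participants were allocated by alternation",
            "allocation was concealed using opaque sealed envelopes",
            "outcome assessors were aware of group assignment",
        ]
        train_labels = [0, 1, 0, 1]

        # Placeholder held-out sentences standing in for the 10% test split.
        test_texts = [
            "a random number table was used for sequence generation",
            "no blinding of outcome assessment was performed",
        ]
        test_labels = [0, 1]

        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 3)),  # unigram-to-trigram TF-IDF features
            LinearSVC(),                          # linear support vector classifier
        )
        model.fit(train_texts, train_labels)

        p, r, f1, _ = precision_recall_fscore_support(
            test_labels, model.predict(test_texts), average="binary"
        )
        print(f"P={p:.2f}  R={r:.2f}  F1={f1:.2f}")
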
  • Application of Risk of Bias Tool in Cochrane Systematic Reviews on Acupuncture

    Objective To evaluate whether and to what extent the new risk of bias (ROB) tool has been used in Cochrane systematic reviews (CSRs) on acupuncture. Methods We searched the Cochrane Database of Systematic Reviews (CDSR), issue 12, 2011. Two reviewers independently selected CSRs that primarily focused on acupuncture and moxibustion. Data on essential information, the ROB assessment (sequence generation, allocation concealment, blinding, incomplete outcome data, selective reporting and other potential sources of bias) and GRADE were then extracted and statistically analyzed. Results In total, 41 CSRs were identified, of which 19 were updated reviews. Thirty-three were published between 2009 and 2011, and 60.98% of the reviews used the Cochrane Handbook as their ROB assessment tool. Most CSRs gave information about sequence generation, allocation concealment, blinding, and incomplete outcome data; however, half of them (54.55%, 8/69) showed selective reporting or other potential sources of bias. Conclusion The "risk of bias" tool has been used in most CSRs on acupuncture since 2009. However, some evaluation items are still lacking.

  • A Chinese introduction to risk of bias due to missing evidence in network meta-analysis (ROB-MEN)

    Selective non-reporting and publication bias of study results threaten the validity of systematic reviews and meta-analyses, thus affecting clinical decision-making. Until now, there have been no rigorous methods for evaluating this risk of bias in network meta-analyses. This paper introduces the main contents of ROB-MEN (risk of bias due to missing evidence in network meta-analysis), including the tool's tables, operation process and signalling questions. The pairwise comparisons table and the ROB-MEN table are the core of the tool. ROB-MEN can be applied to very large and complex networks with many interventions while avoiding a time-consuming and labor-intensive process, and it has the advantages of clear logic, complete details and good applicability. It is the first tool for evaluating the risk of bias due to missing evidence in network meta-analysis, is useful to researchers, and is therefore worth promoting and applying.

    Release date: 2024-05-13
  • Interpretation of the updated COSMIN-RoB checklist in evaluating risk of bias of studies on reliability and measurement error

    The COSMIN community updated the COSMIN-RoB checklist on reliability and measurement error in 2021. The updated checklist can be applied to the assessment of all types of outcome measurement studies, including clinician-reported outcome measures (ClinPOMs), performance-based outcome measurement instruments (PerFOMs), and laboratory values. To help readers better understand and apply the updated checklist, and to provide methodological references for conducting systematic reviews of ClinPOMs, PerFOMs and laboratory values, this paper interprets the updated COSMIN-RoB checklist for studies on reliability and measurement error.

    Release date: 2022-11-14
  • Risk of bias assessment tool RoB2 (revised version 2019) for randomized controlled trials: an interpretation

    RoB2 (revised version 2019), an authoritative tool for assessing the risk of bias in randomized controlled trials, has been updated and improved from the original version. This article elaborates and interprets the background and main content of RoB2 (revised version 2019), as well as the operation process of the new software. Compared with the previous version, RoB2 (revised version 2018), the 2019 revision has the advantages of rich content, complete details, precise questions, and simple operation, and it is more user-friendly for researchers and beginners. It makes the risk of bias assessment of randomized controlled trials more comprehensive and accurate, and it is an authoritative, trustworthy, and widely used tool for evaluating the risk of bias in randomized controlled studies in medical practice.

    Release date: 2021-07-22
  • Interpretation of ROBIS Tool in Evaluating the Risk of Bias of a Selected Systematic Review

    Objective To interpret ROBIS, a new tool for evaluating the risk of bias in systematic reviews, in order to promote its comprehension and proper application. Methods We explained each item of the ROBIS tool, used it to evaluate the risk of bias of a selected intervention review entitled Cyclophosphamide for Primary Nephrotic Syndrome of Children: A Systematic Review, and judged the risk of bias in that review. Results The selected systematic review as a whole was rated as "high risk of bias", because there was a high risk of bias in domains 2 to 4, namely identification and selection of studies, data collection and study appraisal, and synthesis and findings. The risk of bias in domain 1 (study eligibility criteria) was low. The relevance of the identified studies to the review's research question was appropriately considered, and the reviewers avoided emphasizing results on the basis of their statistical significance. Conclusion ROBIS is a new tool worth recommending for evaluating the risk of bias in systematic reviews. Reviewers should use the ROBIS items as standards to conduct and produce high-quality systematic reviews.

  • PROBAST+AI: an introduction to the quality, risk of bias, and applicability assessment tool for prediction model studies using artificial intelligence or regression methods

    With the rapid development of artificial intelligence (AI) and machine learning technologies, the development of AI-based prediction models has become increasingly prevalent in the medical field. However, the PROBAST tool, which is used to evaluate prediction models, has shown growing limitations when assessing models built on AI technologies. Therefore, Moons and colleagues updated and expanded PROBAST to develop the PROBAST+AI tool. This tool is suitable for evaluating prediction model studies based on both artificial intelligence methods and regression methods. It covers four domains: participants and data sources, predictors, outcomes, and analysis, allowing for systematic assessment of quality in model development, risk of bias in model evaluation, and applicability. This article interprets the content and evaluation process of the PROBAST+AI tool, aiming to provide references and guidance for domestic researchers using this tool.

  • Evaluation of the accuracy of the large language model for risk of bias assessment in analytical studies

    Objective To systematically review the accuracy and consistency of large language models (LLMs) in assessing the risk of bias in analytical studies. Methods Cohort and case-control studies related to COVID-19 were included, drawn from the team's published systematic review of the clinical characteristics of COVID-19. Two researchers independently screened the studies, extracted data, and assessed the risk of bias of the included studies, while the LLM-based BiasBee model (Non-RCT version) was used for automated evaluation. Kappa statistics and score differences were used to analyze the agreement between LLM and human evaluations, with subgroup analyses for Chinese- and English-language studies. Results A total of 210 studies were included. Meta-analysis showed that LLM scores were generally higher than those of human evaluators, particularly for representativeness of exposed cohorts (△=0.764) and selection of external controls (△=0.109). Kappa analysis indicated slight agreement on items such as exposure assessment (κ=0.059) and adequacy of follow-up (κ=0.093), while showing significant discrepancies on more subjective items such as control selection (κ=−0.112) and non-response rate (κ=−0.115). Subgroup analysis revealed higher scoring consistency for the LLM in English-language studies than in Chinese-language studies. Conclusion LLMs demonstrate potential in risk of bias assessment; however, notable differences remain on more subjective tasks. Future research should focus on optimizing prompt engineering and model fine-tuning to enhance LLM accuracy and consistency in complex tasks. (An illustrative sketch of the agreement analysis follows this entry.)

    Release date: 2025-05-13
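
    A minimal illustrative sketch (not the study's pipeline) of the agreement analysis described above: Cohen's kappa and the mean score difference between human and LLM ratings of a single assessment item. The ratings below are invented placeholders; in the study they would be per-study judgements on items such as exposure assessment or control selection.

        # Hypothetical agreement analysis: Cohen's kappa plus mean score difference.
        from sklearn.metrics import cohen_kappa_score

        # Placeholder ratings for one item across ten studies (1 = criterion met, 0 = not met).
        human_scores = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
        llm_scores   = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

        kappa = cohen_kappa_score(human_scores, llm_scores)  # chance-corrected agreement
        mean_diff = sum(l - h for l, h in zip(llm_scores, human_scores)) / len(human_scores)

        print(f"kappa = {kappa:.3f}")
        print(f"mean score difference (LLM - human) = {mean_diff:+.3f}")
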
  • Overview and perspectives on risk of bias assessment tools in network meta-analysis

    Accurately assessing the risk of bias is a critical challenge in network meta-analysis (NMA). By integrating direct and indirect evidence, NMA enables the comparison of multiple interventions, but its outcomes are often influenced by risks of bias, particularly the propagation of bias within complex evidence networks. This paper systematically reviews the risk of bias assessment tools commonly used in NMA, highlighting their applications, limitations, and challenges across interventional trials, observational studies, diagnostic tests, and animal experiments. To address tool misapplication, mixed usage, and the lack of comprehensive tools for overall bias assessment in NMA, we propose strategies such as simplifying tool operation, enhancing usability, and standardizing evaluation processes. Furthermore, advances in artificial intelligence (AI) and large language models (LLMs) offer promising opportunities to streamline risk of bias assessments and reduce manual intervention. The development of specialized tools and the integration of intelligent technologies will enhance the rigor and reliability of NMA studies, providing robust evidence to support medical research and clinical decision-making.

    Release date: 2025-05-13
  • An interpretation of QUAPAS: a tool for assessing risk of bias in prognostic accuracy studies

    The QUADAS-2, QUIPS, and PROBAST tools are not specific to prognostic accuracy studies, and using them to assess the risk of bias in such studies is itself prone to bias. Therefore, QUAPAS, a risk of bias assessment tool for prognostic accuracy studies, has recently been developed. The tool combines QUADAS-2, QUIPS, and PROBAST, and consists of 5 domains, 18 signaling questions, 5 risk of bias questions, and 4 applicability questions. This paper introduces the content and usage of QUAPAS to provide inspiration and references for domestic researchers.

    Release date: 2023-04-14