The background and current status of quality assessment instruments for clinical trials, as well as several instruments frequently used both domestically and abroad, were introduced, and the problems in this field were discussed.
Objective To compare the efficiency of epidermal cell culture between the big graft method and the small strip method. Methods In the big graft method, the skin tissue was cut reticularly from the dermis layer while the epidermis was left intact. After full digestion in trypsin, the epidermis was separated from the skin and used to culture epidermal cells. The small strip method was performed routinely. The time required to cut the skin and to separate the epidermis was recorded, and the number and quality of cells were compared between the two methods. Results It took 8-10 minutes to cut a 5 cm² area of skin into small strips and 1-2 minutes to cut it into big grafts. It took 10-15 minutes to separate the epidermis from the same area of skin by the small strip method and 2 minutes by the big graft method. The cells obtained by the big graft method showed better vigor and were more numerous than those obtained by the small strip method, and the chance of fibroblast contamination was obviously reduced. Conclusion The big graft method is simpler than the small strip method and yields more epidermal cells with less chance of fibroblast contamination.
Currently, there is a lack of clarity and standardization regarding the implementation details of interventions in traditional Chinese medicine clinical practice guidelines (CPGs). This lack of methodological guidance for standardizing implementation prescriptions adversely affects the quality of implementation and hinders the clinical uptake of recommendations. Through in-depth analysis of the implementation prescriptions of evidence-based CPGs in traditional Chinese medicine, we identified the challenges associated with standardization. In response, we propose enhancing the technical specifications of implementation prescriptions, advocating for improved formulation processes, diverse reporting approaches, and standardized methodological guidelines. These recommendations aim to serve as a methodological reference and guidance for clinical practice guideline developers.
Objectives To provide an overview of whether clinical decision support systems (CDSS) are effective in reducing medication errors and improving medication safety, and to assess the quality of the available scientific evidence. Methods PubMed, EMbase, The Cochrane Library, CBM, WanFang Data, VIP and CNKI databases were electronically searched to collect systematic reviews (SRs) on the application of clinical decision support systems to medication errors and medication safety, published from January 1996 to November 2018. Two reviewers independently screened the literature, extracted data, and evaluated the methodological quality of the included SRs using the AMSTAR tool. Results A total of 20 SRs including 256 980 healthcare practitioners and 1 683 675 patients were included. Specifically, 16 studies were of moderate quality and 4 were of high quality. Nineteen SRs evaluated multiple process of care outcomes: 9 provided sufficient evidence, 6 provided limited evidence, and 7 provided insufficient evidence that CDSS had a positive effect on process outcomes. Thirteen SRs reported patient outcomes: 1 with sufficient evidence, 3 with limited evidence, and 9 without sufficient evidence. Conclusions CDSS reduces medication errors, but improves process of care measures inconsistently and patient outcomes seldom. Larger samples and longer-term studies are required to build a larger and more reliable evidence base on the effects of CDSS interventions on patient outcomes.
Based on the principles and methods of systematic reviews of randomized controlled clinical trials, a systematic review of economic analyses can integrate information from multiple economic studies that address the same clinical question. It can also provide important insights by systematically examining how differences among studies lead to different results. Generally, such a review is conducted in seven steps: 1) formulating questions; 2) establishing eligibility criteria; 3) searching for and selecting eligible economic analyses; 4) assessing the validity of the economic analyses; 5) acquiring data; 6) analyzing and synthesizing data; and 7) presenting results. Owing to the particular nature of economic analyses, many methodological challenges exist, including variation in economic models, analytic perspectives, time horizons, and uncertainty and sensitivity analyses across studies, which may make critical appraisal of economic analyses difficult.
Objective To assess the evidence from Cochrane systematic reviews on the treatment of temporomandibular disorders (TMD), as well as the methodological quality of all randomized controlled trials (RCTs) included in those systematic reviews. Methods The Cochrane Library (Issue 3, 2008) was searched for systematic reviews on the treatment of temporomandibular disorders. The risk of bias was assessed independently by two authors. Results Three systematic reviews involving 25 RCTs were included. The methods of 23 studies were rated as low quality, with a high risk of various biases; only 2 studies were of high quality. Conclusion There is insufficient or inconsistent evidence to support the use of hyaluronate, occlusal adjustment, and stabilization splint therapy for the treatment of TMD. The overall quality of RCTs on the treatment of TMD is generally low. Analysis of the included trials showed that some trials had no clear description of randomization methods, allocation concealment, sample size calculation, or intention-to-treat analysis. To improve the quality of RCT reporting, clinical trial registration and the revised Consolidated Standards of Reporting Trials (CONSORT) statement should be incorporated into trial design and strictly followed.
Objective To assess the methodological quality of systematic reviews and meta-analyses of interventions published in the Chinese Journal of Evidence-Based Medicine, so as to provide evidence for improving domestic methodological quality. Methods Systematic reviews and meta-analyses of interventions published from 2001 to 2011 were identified by searching the Chinese Journal of Evidence-Based Medicine. The methodological quality of the included studies was assessed with the AMSTAR scale. Excel was used for data entry, and Meta-Analyst software was used for statistical analysis. Results A total of 379 studies were included. The average AMSTAR score was 6.15±1.35 (range 1.5 to 9.5). Only some items of the AMSTAR scale were influenced by the following features of the included studies: publication date, funding status, number of authors, authors' affiliation, and number of affiliations. The total AMSTAR score of studies published after 2008 was higher than that of studies published before 2008 (P=0.02), but the improvement in methodological quality was limited. The total AMSTAR score of studies with 3 or more authors was higher than that of studies with 2 or fewer authors (P=0.04). Conclusion The methodological quality of the studies published in the Chinese Journal of Evidence-Based Medicine is uneven. Although methodological quality has improved somewhat since the publication of the AMSTAR scale, the progress is limited and further improvement is still needed.
Objectives To systematically review research issues related to evidence quality grading methods for public health decision making. Methods PubMed, Web of Science, CNKI, WanFang Data, CBM and VIP databases were electronically searched to collect studies on the application of evidence quality grading methods for public health decision making from inception to December 2022. Questions were constructed according to the SPIDER model. The quality of the included literature was evaluated with the CASP checklist, and a three-level interpretive analysis of the questions on the application of quality grading methods for public health decision making was conducted using the thematic synthesis method to establish a pool of question entries. Results A total of 14 papers covering seven countries were included. GRADE was the most commonly used method for grading the quality of evidence. The CASP evaluation identified eight high-quality studies, four medium-quality studies and two low-quality studies. The thematic synthesis yielded 13 question entries in 7 categories. Conclusion The existing methodology for grading the quality of evidence for public health decision making is challenged by the diversity of evidence sources and tends to underestimate the level of evidence from complex intervention studies.
The Core Outcome Measures in Effectiveness Trials (COMET) Working Group has published a series of research and reporting guidelines related to core outcome sets since its establishment. This article introduces and interprets the Core Outcome Set-STAndardised Protocol Items (COS-STAP) Statement, which was developed by the COMET Working Group and published in February 2019. It is then compared with the Core Outcome Set-STAndards for Reporting (COS-STAR) and the Core Outcome Set-STAndards for Development (COS-STAD), which have already been introduced to China. The significance of these guidelines for the development of core outcome sets in the field of traditional Chinese medicine is discussed, so as to draw researchers' attention to this area.
Accurately assessing the risk of bias is a critical challenge in network meta-analysis (NMA). By integrating direct and indirect evidence, NMA enables the comparison of multiple interventions, but its outcomes are often influenced by bias risks, particularly the propagation of bias within complex evidence networks. This paper systematically reviews commonly used bias risk assessment tools in NMA, highlighting their applications, limitations, and challenges across interventional trials, observational studies, diagnostic tests, and animal experiments. Addressing the issues of tool misapplication, mixed usage, and the lack of comprehensive tools for overall bias assessment in NMA, we propose strategies such as simplifying tool operation, enhancing usability, and standardizing evaluation processes. Furthermore, advancements in artificial intelligence (AI) and large language models (LLMs) offer promising opportunities to streamline bias risk assessments and reduce human interference. The development of specialized tools and the integration of intelligent technologies will enhance the rigor and reliability of NMA studies, providing robust evidence to support medical research and clinical decision-making.