With the increasing availability of clinical and biomedical big data, machine learning models that integrate diverse types of information to predict individual health outcomes are widely used in scientific research and academic publications. However, deficiencies in the reporting of key information have become apparent, including data bias, model fairness across population subgroups, data quality and applicability, and the difficulty of maintaining predictive accuracy and interpretability in real-world clinical settings. These shortcomings complicate the safe and effective application of prediction models in clinical practice. To address these problems, TRIPOD+AI (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis + Artificial Intelligence), built on the original TRIPOD statement, provides a reporting standard for machine learning-based prediction models that aims to improve transparency, reproducibility, and health equity, and thereby the quality of their application. As research on machine learning-based prediction models continues to grow rapidly, we provide examples and interpretations of TRIPOD+AI to help domestic readers better understand and apply the guideline, and to support researchers in improving the quality of their reports.