
Application and prospects of big data and AI technology in magnetic confinement fusion
ZHANG Xiaoxi;DUAN Sizhe;GAO Baofeng;CHI Hao;GUI Nan;[Significance] The rapid development of big data and artificial intelligence (AI) technologies has greatly advanced research on magnetic confinement fusion (MCF), which is a promising approach to achieving sustainable fusion energy. Because of the complexity of plasma dynamics, including nonlinear and high-dimensional processes, conventional methods for simulation, monitoring, and control have faced significant limitations in accuracy and computational efficiency. By integrating AI and big data, researchers can now address many of these challenges, leading to significant improvements in the precision and speed of simulations as well as real-time analysis of experimental data. The application of these technologies has become essential for advancing fusion research, particularly in facilitating the control and optimization of plasma confinement, a key factor in achieving sustained nuclear fusion reactions. [Progress] The application of AI in MCF has led to numerous advances across different areas. In plasma simulation, AI-based methods, such as machine learning-driven surrogate models, have significantly reduced computational costs while maintaining or even enhancing accuracy. These models enable the rapid prediction of plasma behavior in response to varying conditions, which is crucial for optimizing experimental parameters and ensuring the stability of plasma confinement. The capability of AI to manage large-scale, high-dimensional data has proven particularly beneficial for multiscale simulations that involve complex interactions between physical processes. In experimental monitoring and control, AI, combined with big data analytics, has enabled real-time processing of sensor data from fusion devices. Through predictive modeling and adaptive control mechanisms, AI algorithms can detect potential anomalies and make autonomous adjustments to operational parameters, thereby improving the reliability and safety of fusion experiments. The dynamic nature of plasma requires precise and immediate responses to fluctuations, and the capability of AI to analyze past experimental data and predict future behavior has enabled more effective management of plasma instabilities, thereby enhancing the overall system robustness and contributing to the optimization of plasma performance. AI has also been instrumental in the design and optimization of fusion devices. By employing AI to model magnetic field configurations and predict material performance under extreme conditions, researchers have been able to improve the durability and efficiency of critical reactor components. These advancements include optimizing the design of superconducting magnets and plasma-facing materials, both of which are essential for the long-term operation of fusion reactors. AI-driven optimization has resulted in improved magnetic confinement configurations, ensuring better plasma stability and enhanced confinement performance, which are necessary for achieving continuous fusion reactions. Furthermore, AI facilitates interdisciplinary collaboration by integrating data from diverse fields, such as plasma physics, materials science, and computational modeling. The use of AI in cross-disciplinary research fosters innovation and accelerates progress in addressing key challenges in fusion research.
Moreover, AI has contributed to the development of intelligent educational platforms and virtual experimentation environments, enabling researchers and students to gain hands-on experience through simulations and virtual experiments. These platforms are crucial for advancing knowledge and skills in plasma physics and fusion technology and help cultivate the next generation of fusion researchers. [Conclusions and Prospects] The future of MCF research will be increasingly shaped by the integration of AI and big data technologies. The capability of AI to enhance simulation accuracy, optimize experimental design, and improve real-time control systems will play a central role in overcoming existing technical barriers in fusion research. Furthermore, AI-driven materials research will contribute to the discovery and design of new materials capable of withstanding the harsh conditions inside fusion reactors, thus ensuring longer operational lifespans and increased reactor efficiency. As AI technologies continue to evolve, they are expected to play a more significant role in all levels of fusion research, from experimental planning to real-time plasma control and material optimization. These advancements will not only accelerate progress toward realizing practical fusion energy but also contribute to the development of novel technologies that support the broader scientific community. In addition, AI-powered educational platforms will continue to provide researchers with advanced tools for learning and experimentation, helping them bridge the gap between theory and practical application. The continued development of AI and big data in this field holds great promise for the successful realization of MCF as a viable energy source for the future.
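As a minimal illustration of the machine learning-driven surrogate modeling described above, the sketch below trains a generic regressor on input-output pairs that stand in for expensive plasma simulations and then queries it at negligible cost. The control parameters, the toy "simulator" function, and the choice of regressor are illustrative assumptions, not the workflow of any specific MCF code.

```python
# Minimal sketch of an AI surrogate for plasma simulation (hypothetical inputs
# and outputs; real MCF surrogates are trained on simulation or device data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assume each sample holds normalized control parameters:
# plasma current, magnetic field, heating power, electron density.
X = rng.uniform(0.0, 1.0, size=(2000, 4))

def expensive_simulator(x):
    """Stand-in for a costly physics code returning a confinement metric."""
    current, field, power, density = x.T
    return current * field**1.5 / (0.2 + power) + 0.3 * np.sin(3.0 * density)

y = expensive_simulator(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# Once trained, the surrogate evaluates new operating points almost instantly,
# enabling the rapid parameter scans and real-time-oriented studies noted above.
print("held-out R^2:", surrogate.score(X_test, y_test))
print("prediction for a new operating point:", surrogate.predict([[0.5, 0.8, 0.3, 0.6]]))
```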
Experimental method for the microscopic visualization of microplastic transport and retention with artificial intelligence image recognition
WANG Xiaopu;ZHAO Hailong;REN Lingling;GAO Yanyan;XUE Dongxing;SHAN He;WANG Bin;YAO Chuanjin;ZHAO Jian;BAI Yingrui;[Objective] Traditional experimental methods cannot facilitate the direct observation of the migration of microplastics within porous media. To address this issue, this study developed a microscopic visualization experimental system to investigate the migration and retention of microplastics and integrated artificial intelligence for the efficient identification and calculation of microplastics. The aim was to quantify the impact of the porous media structure on the migration behavior of microplastics and provide an intuitive, accurate, simple, and scalable experimental system suitable for innovative teaching and research in environmental and related disciplines. [Methods] This study developed a microscopic visualization experimental system to deeply investigate the migration and retention behavior of microplastics in porous media by constructing pore-scale single-channel models. In the experiment, five pore-scale single-channel models with different size parameters were developed to simulate the retention of microplastics under different porosity conditions. The experimental setup included a microsyringe, a micro-infusion pump, an optical microscope, and a high-resolution camera. The microsyringe and micro-infusion pump were used to control the injection of fluids, while the optical microscope and high-resolution camera were employed to capture the migration process of microplastics in the channels. Before the experiment, ethanol was used to expel air from the channels, followed by saturation with deionized water. Then, a microplastic suspension treated with an ultrasonic processor was injected into the channels at a rate of 10 μL/h, with images captured at a rate of 2 frames per second. The obtained images were corrected and preprocessed using the ImageJ software to eliminate halos and speckles. Aiming at the small size and large quantity of microplastic particles, this study employed machine learning algorithms for image recognition and counting and developed custom script codes, combined with ImageJ's macro function, to perform automated batch analysis of data, significantly improving the efficiency and accuracy of microplastic identification, with accuracy rates of over 98% for dispersed individual particles and over 95% for particles aggregated in porous media. This method, which combined microscopic visualization technology with artificial intelligence image recognition, provided a novel and efficient experimental method for the study of microplastic environmental behavior. [Results] The experimental results of the microscopic visualization experiment on microplastic transport and retention provided the following conclusions. (1) The results revealed the impact of the porous media structure on the migration behavior of microplastics, with an increase in the media particle size and channel width leading to a significant increase in microplastic retention by 29.8%–56.0% and 14.5%–37.6%, respectively.
These findings confirmed the key role of the porous media structure in the retention behavior of microplastics. (2) The experiment visually demonstrated the deposition patterns of microplastics in porous media, which were consistent with existing research findings, further validating the effectiveness of the experimental method. (3) By integrating artificial intelligence image recognition technology, this study developed an efficient method for the identification and counting of microplastics, significantly improving the accuracy and efficiency of data processing. This method not only provided new experimental means for environmental research and microplastic teaching but also showcased the potential application of artificial intelligence technology in the field of environmental science. [Conclusions] This study effectively quantified the impact of the porous media structure on the microplastic retention behavior through a microscopic visualization experimental system and confirmed the consistency of experimental results with existing research. By integrating artificial intelligence image recognition technology, the accuracy of microplastic identification and efficiency of data processing were significantly improved, providing an innovative experimental method for environmental science teaching and research.
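The segment-and-count step that underlies the automated identification described above can be sketched as follows. The study combines ImageJ macros with machine-learning recognition; this simplified version uses scikit-image on a synthetic frame, and the particle sizes, thresholds, and cleanup parameters are illustrative assumptions only.

```python
# Minimal sketch of automated particle counting on a micrograph-like image
# (synthetic frame; the study's ImageJ macros and ML classifier are not shown).
import numpy as np
from skimage import filters, measure, morphology

rng = np.random.default_rng(1)
frame = rng.normal(0.1, 0.02, size=(256, 256))        # noisy dark background
rr, cc = np.ogrid[:256, :256]
for r, c in rng.integers(20, 236, size=(30, 2)):       # 30 bright "particles"
    frame[(rr - r) ** 2 + (cc - c) ** 2 <= 9] = 0.9

# Threshold, remove small speckles, then label and measure connected regions.
binary = frame > filters.threshold_otsu(frame)
binary = morphology.remove_small_objects(binary, min_size=5)
labels = measure.label(binary)
regions = measure.regionprops(labels)

print("particles detected:", len(regions))
print("mean particle area (pixels):", float(np.mean([r.area for r in regions])))
```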
Experimental design for studying the slip ferroelectric properties of double-layer two-dimensional material ReTe_2 based on first-principles calculations
GUO Jianxin;CUI Haozhen;YANG Baozhu;WANG Fu;YAN Xiaobing;[Objective] With the rapid development of society, integrated circuits have become the foundation of the information age, occupying an important position in communications, aerospace, automotive electronics, and other fields. As one of the core development aspects of integrated circuits, the pursuit of high performance and miniaturization of memory is continuously advancing. Ferroelectric random access memory, with its nonvolatile data storage characteristics and advantages such as unlimited read/write cycles, high-speed read/write, and low power consumption, is highly favored by researchers. To pursue simplicity and miniaturization of electronic components in integrated circuits, the development of low-dimensional ferroelectric materials can meet the requirements of compactness and high-density storage at miniature sizes. Two-dimensional materials with nanoscale dimensions have become potential options for achieving ferroelectric miniaturization. This research introduces a two-dimensional ferroelectric material based on transition-metal chalcogenides, i.e., ReTe_2, and systematically presents the calculation-based design process of this material through the calculation of its structural phonon spectrum, molecular dynamics, slip path, potential barrier, charge density difference, ferroelectric polarization intensity, and electrostatic potential. [Methods] This work uses the Materials Studio modeling tool (Visualizer) to convert the ReTe_2 bulk material structure model downloaded from the crystal database website into a double-layer two-dimensional ferroelectric model. First-principles methods are used to calculate the energy of relative slip between the upper and lower layers of double-layer two-dimensional ReTe_2, thereby obtaining the two ferroelectric state structures A and A′ with the lowest energy and the intermediate state B for the transition between these two ferroelectric states. The phonon spectrum software Phonopy based on VASP (Vienna Ab-initio Simulation Package) and the first-principles molecular dynamics method (AIMD, ab-initio molecular dynamics) of VASP are used to calculate the structural and thermodynamic stability of ReTe_2. The ferroelectric polarization intensity of ReTe_2 is obtained using the Berry phase method. By setting the calculation parameters of VASP, the plane-averaged electrostatic potential and charge density difference of ReTe_2 can be obtained to analyze its electronic distribution characteristics and the origin of its ferroelectricity. [Results] The following results were obtained through the first-principles calculations: (1) The structure of double-layer two-dimensional ReTe_2 is determined based on the corresponding movement and inversion of the ReTe_2 bulk material, thereby meeting the symmetry requirements of two-dimensional sliding ferroelectric materials: a single layer must either have inversion symmetry but lack a horizontal mirror plane (A/B stacking), or have a horizontal mirror plane but lack inversion symmetry (A/A stacking), with opposite sliding ferroelectric states connected by a horizontal mirror plane.
The results of phonon spectrum analysis and molecular dynamics calculations showed that this two-dimensional structure, which relies on van der Waals forces between the two layers, has structural and thermodynamic stability. (2) The ferroelectric polarization intensity of ReTe_2 calculated using the Berry phase method in VASP is 0.915 pC/m, and the energy barrier during the slip process is 8.08 meV. These results indicate that ReTe_2 has a large ferroelectric slip and a small slip barrier, making it an ideal two-dimensional ferroelectric slip material. (3) The calculation and analysis of the plane-averaged electrostatic potential and differential charge density of ReTe_2 showed that the source of ferroelectricity in ReTe_2 can be attributed to the relative displacement of the double layers, which causes a difference in net charge transfer between the top and bottom layers, thereby generating two-dimensional vertical polarization. The polarization direction reverses with different movement directions, exhibiting typical ferroelectric characteristics. [Conclusion] The analysis of the structural characteristics, plane-averaged electrostatic potential, and differential charge density showed that double-layer two-dimensional ReTe_2 is a typical two-dimensional ferroelectric slip material. This experimental method can be used to determine whether new materials can serve as two-dimensional ferroelectric slip materials. In addition, by mastering the experimental analysis process, students' scientific interest and overall scientific literacy in exploring microscopic mechanisms and processes can be cultivated.
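The barrier and the two degenerate ferroelectric states discussed above come from a scan of the total energy along the interlayer slip path. A minimal post-processing sketch is shown below; the energies are illustrative placeholders shaped to match the reported ~8 meV barrier, not the actual VASP output for ReTe_2.

```python
# Minimal sketch of post-processing a sliding-path energy scan
# (placeholder values, not the actual first-principles results for ReTe_2).
import numpy as np

# Fractional interlayer slip along the path A -> B -> A' and the corresponding
# total energy (meV per cell) from a series of first-principles calculations.
slip = np.linspace(0.0, 1.0, 11)
energy_meV = np.array([0.00, 1.90, 5.00, 7.30, 8.00, 8.08,
                       8.00, 7.30, 5.00, 1.90, 0.00])

i_top = int(np.argmax(energy_meV))
barrier = energy_meV[i_top] - energy_meV.min()

print(f"ferroelectric states A and A' at slip = {slip[0]:.2f} and {slip[-1]:.2f}")
print(f"intermediate state B at slip = {slip[i_top]:.2f}")
print(f"sliding energy barrier = {barrier:.2f} meV")
# A small barrier of this order means the vertical polarization can be switched
# by a modest interlayer slip, the hallmark of sliding ferroelectricity.
```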
Partial discharge fault identification in switchgear based on IIVY-SVMD-MPE-SVM
XIE Qian;ZHENG Shengyu;LIU Xinghua;LI Hui;DANG Jian;XIE Tuo;[Objective] As an important protection and control device in the power system, the switchgear inevitably experiences different types of partial discharges because of its harsh working environment. Thus, the accurate identification of fault types is crucial to ensure the safe and stable operation of the power system and prevent equipment damage. However, at the present stage, the characterization of fault information during partial discharge fault identification in the switchgear cabinet is difficult, and the accuracy of partial discharge fault identification is low. In this study, we propose the automatic optimization of successive variational mode decomposition (SVMD) and support vector machine (SVM) parameters based on the IIVY algorithm to realize the efficient identification of different partial discharge types. [Methods] First, we develop a multistrategy fusion approach that uses spatial pyramid matching chaotic mapping for parameter initialization, an adaptive t-distribution for decision updates, and dynamic adaptive power selection for mode updates; on this basis, we propose the IIVY algorithm. Second, we develop a partial discharge feature extraction strategy based on IIVY-SVMD-MPE. This strategy uses the IIVY algorithm to adaptively select the SVMD penalty factor α, screens the three largest IMF components using correlation coefficients, extracts their multiscale permutation entropy (MPE), and constructs a multidimensional fusion feature dataset. Finally, we establish a switchgear partial discharge fault identification model based on IIVY-SVM for the efficient identification of partial discharge types, in which the IIVY algorithm adaptively optimizes the penalty parameter C and kernel parameter σ of the SVM, resulting in a fault identification model with the optimal parameter combination. [Results] Combining the experimental data and comparing 10 fault identification models, this study draws the following conclusions: (1) Under the same conditions, the IIVY algorithm proposed in this study is more advantageous than the three original optimization algorithms in hyperparameter adaptive optimization, which proves the high efficiency of the proposed improvement strategy. (2) The pattern recognition model SVM is more suitable for partial discharge fault identification than BP and ELM. (3) MPE can be used to extract the fault features carried by the signal more comprehensively. (4) The adoption of a single signal processing or feature extraction method has a large impact on the accuracy of fault recognition, whereas the model proposed in this study can efficiently process the original signal and extract fault features. (5) Overall, the comprehensive recognition accuracy of the fault recognition model proposed in this study reaches 98.8%, in which the recognition accuracies of pin–plate discharges, discharges along the surface, suspended discharges, and air gap discharges are 100%, 100%, 95%, and 100%, respectively.
[Conclusions] We develop a multistrategy fusion method based on spatial pyramid matching chaotic mapping, an adaptive t-distribution, and dynamic adaptive weighting to form the IIVY algorithm; on this basis, we establish a partial discharge feature extraction method based on IIVY-SVMD-MPE and a partial discharge fault identification model based on IIVY-SVM, in which the IIVY algorithm adaptively optimizes the SVMD penalty factor α together with the penalty parameter C and kernel parameter σ of the SVM. The test results show that the fault identification model established in this study achieves an identification accuracy of 98.8%, which effectively improves the accuracy and stability of fault identification and provides a reference for partial discharge fault identification in the switchgear.
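The feature extraction and classification stages described above can be sketched in simplified form. The example below computes multiscale permutation entropy (MPE) features for synthetic signals and trains an SVM on them; the SVMD decomposition and the IIVY-based optimization of α, C, and σ from the paper are omitted, and all signals, labels, and parameter values are illustrative assumptions.

```python
# Minimal sketch of the MPE feature extraction + SVM classification stage
# (synthetic signals; SVMD and the IIVY hyperparameter search are not included).
import numpy as np
from itertools import permutations
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def permutation_entropy(x, m=4, tau=1):
    """Normalized permutation entropy of a 1-D signal."""
    patterns = list(permutations(range(m)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (m - 1) * tau):
        window = x[i:i + m * tau:tau]
        counts[patterns.index(tuple(np.argsort(window)))] += 1
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(len(patterns)))

def mpe(x, scales=(1, 2, 3, 4, 5), m=4):
    """Multiscale permutation entropy: coarse-grain, then PE at each scale."""
    feats = []
    for s in scales:
        coarse = x[: len(x) // s * s].reshape(-1, s).mean(axis=1)
        feats.append(permutation_entropy(coarse, m=m))
    return feats

rng = np.random.default_rng(0)
X, y = [], []
for label in range(4):                          # four synthetic "discharge types"
    for _ in range(40):
        t = np.linspace(0.0, 1.0, 600)
        sig = np.sin(2 * np.pi * (20 + 15 * label) * t)
        sig += (0.2 + 0.1 * label) * rng.normal(size=t.size)
        X.append(mpe(sig))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(C=10.0, gamma="scale").fit(X_train, y_train)   # C and sigma fixed here
print("test accuracy:", clf.score(X_test, y_test))
```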
System design and experimental study of single-phase immersion liquid cooling for high-computing-power data center
LIU Zhan;JI Shenrui;SUN Xinshan;LING Yunzhi;[Objective] With the rapidly increasing demand for high computing power and intelligent computing from artificial intelligence, big data, and cloud platforms, the cooling capacity of traditional air cooling technology for IT cabinets has reached its limit. The renewal and replacement of diverse computing infrastructure characterized by high density, high computing power, and high thermal output have effectively promoted the development of liquid cooling technology. Liquid cooling technology effectively improves on the traditional form of air cooling and can meet the precise cooling needs of high-density cabinets down to the chip level. In-depth research and development of liquid cooling technology are crucial for reducing data center energy consumption and improving energy utilization efficiency. Thus far, no teaching or research experiments on single-phase immersion liquid cooling have been conducted in Chinese universities. [Methods] A comprehensive single-phase immersion liquid cooling test bench for teaching and research was constructed in this study. The detailed thermal performance was analyzed, and the associated calculations were carried out to select the circulation pump, plate heat exchanger, cooling tower, heating device, and other heat transfer equipment to establish the single-phase immersion liquid cooling system. The coolant was comparatively selected from fluorinated liquid, deionized water, and mineral oil based on the fluid thermophysical properties, fluid motion characteristics, heat exchange performance, and operational stability. Three circulation mass flow rates were designed to compare the performance differences of the proposed single-phase immersion liquid cooling unit. The circulation mass flow rate, fluid pressure, liquid height, and power consumption were monitored and utilized to control and reflect the detailed operation of the single-phase immersion liquid cooling system. After reasonable design, comparative selection, and device connection, both the primary-side cooling water cycle and the secondary-side coolant cycle were assembled. After the gas injection pressure and water injection tests, the airtightness and compressive strength of the established experimental system were confirmed. Meanwhile, system debugging was conducted to test the accuracy and sensitivity of the monitoring system. [Results] With the circulation mass flow rate of cooling water varying from 4.4 m³/h to 6.4 m³/h, the inlet and outlet temperatures of both the coolant and the cooling water decreased. For pressure loss, the pressure difference on the coolant side was nearly uninfluenced, whereas the maximum pressure drop was approximately 56.5 kPa on the cooling water side at a mass flow rate of 6.4 m³/h. The power usage effectiveness of the proposed single-phase immersion cooling system varied within the range of 1.08–1.09. The coefficient of performance of the cooling system decreased from 6.4 to 5.68 as the cooling water mass flow rate increased from 4.4 m³/h to 6.4 m³/h. The efficiency of the circulation was evaluated and determined to be approximately 20% for the three operation cases. [Conclusions] Generally, the performance of the single-phase liquid cooling and heat dissipation system utilized for high-computing-power data centers is thoroughly explored and analyzed, and its improvement space and application potential are evaluated.
On the one hand, the present work can provide technical support for the low-carbon and efficient operation of green data centers; on the other hand, it can further promote the reform of teaching practice for research-oriented liquid cooling experiments in data centers.
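The two system-level indicators reported above, power usage effectiveness (PUE) and the coefficient of performance (COP), follow directly from monitored powers. The sketch below uses made-up readings purely to show the arithmetic; the values are not the measured data from the test bench.

```python
# Back-of-the-envelope sketch of PUE and COP from monitored powers
# (made-up readings, not measurements from the described test bench).
it_load_kw = 60.0          # heat released by the immersed heating/IT devices
pump_kw = 1.5              # coolant circulation pump
cooling_tower_kw = 3.0     # cooling tower fan on the primary water side
other_facility_kw = 0.5    # controls, monitoring, lighting

total_facility_kw = it_load_kw + pump_kw + cooling_tower_kw + other_facility_kw
pue = total_facility_kw / it_load_kw              # power usage effectiveness
cop = it_load_kw / (pump_kw + cooling_tower_kw)   # heat removed per unit cooling power

print(f"PUE = {pue:.2f}")   # values close to 1 indicate little cooling overhead
print(f"COP = {cop:.1f}")
```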
Shaking table experimental study considering the coupling effect of large-scale building equipment and structures
QIN Chang'an;ZHANG Guowei;SONG Jincheng;WANG Chen;ZHOU Zhou;XIONG Ziyan;[Objective] This study investigates the interaction between equipment and building structures, with a focus on how equipment mass and stiffness influence the dynamic response of these structures. To achieve this, large-scale shaking table tests were conducted using a scaled benchmark steel frame model coupled with an equipment model. [Methods] The equipment model was based on a water-cooling machine, a crucial piece of large-scale equipment used in the medical, chemical, and high-tech industries. The selection of the model considered the overall weight, vibration characteristics, and other features of this type of equipment. The decoupling reconstruction method was used to decompose the equipment system into mass and stiffness characteristic components. The scaled model of the main structure used a benchmark steel frame, which is a standard model for structural analysis and validation research. The El Centro seismic wave was applied as an input to the shaking table at a maximum peak acceleration of 0.4 g, complemented by a white noise input of 0.05 g before and after each test. For the coupled system, connectors with varying stiffness were sequentially swapped under each counterweight level, resulting in five tests and 25 working conditions. Through these large-scale shaking table tests, the frequencies of the equipment at different masses and stiffnesses were analyzed, along with the effects of equipment frequencies on the vibration modes and dynamic responses of the coupled system. The acceleration time-history curves were directly recorded using acceleration sensors, while the displacement time-history curves were derived through integration methods. [Results] As the frequency of the equipment decreased, the first mode shape of the main structure transitioned from translational motion to stationary, while the third mode shape shifted from torsional to translational. The first and second mode shapes of the equipment evolved from relatively stationary to translational. High-frequency equipment can be regarded as an additional mass for the seismic performance analysis of coupled systems. The coupling effect sharpened the acceleration time-history curve of the main structure, increasing the peak during intense vibration periods and decreasing the peak during mild vibration periods, while the acceleration time-history curve of the equipment exhibited an opposite evolutionary pattern. As the frequency of the equipment decreased, the maximum acceleration peak of the structure first decreased and then increased, whereas that of the equipment first increased and then decreased. The equipment tended to suppress the acceleration response of the structure. The coupling effect also sharpened the displacement time-history curve of the main structure, resulting in a reduction in the displacement amplitude, while the displacement amplitude of the equipment demonstrated similar characteristics. As the frequency of the equipment decreased, the maximum peak displacement of the structure first decreased and then increased, while that of the equipment first increased and then decreased, indicating that the equipment can suppress the displacement response of the structure.
[Conclusions] In seismic design, it is essential to analyze equipment and structures as integrated systems to accurately assess their actual responses under seismic action, thereby avoiding unnecessary increases in cost due to overestimating seismic demand. Additionally, equipment that is sensitive to acceleration responses or that may induce adverse resonance should not be placed in areas identified as seismically weak. This research provides theoretical support for the seismic design of coupled equipment-structure systems.
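The displacement time histories mentioned above are obtained by integrating the recorded accelerations twice. A minimal sketch of that step is given below; the synthetic record and the simple baseline correction are illustrative assumptions, since a real test would process the sensor data with a proper filtering and correction scheme.

```python
# Minimal sketch of deriving displacement from a recorded acceleration
# time history by double integration (synthetic record, simple detrending).
import numpy as np

dt = 0.005                                   # sampling interval, s (200 Hz)
t = np.arange(0.0, 20.0, dt)
acc = 0.4 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.1 * t)   # m/s^2

def cumtrapz(y, dx):
    """Cumulative trapezoidal integral, starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dx)))

def detrend(y):
    """Remove the linear baseline drift introduced by integration."""
    n = np.arange(len(y))
    return y - np.polyval(np.polyfit(n, y, 1), n)

vel = detrend(cumtrapz(acc, dt))             # velocity, m/s
disp = detrend(cumtrapz(vel, dt))            # displacement, m

print(f"peak acceleration: {np.abs(acc).max():.2f} m/s^2")
print(f"peak displacement: {1000 * np.abs(disp).max():.1f} mm")
```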
Design of a motion control-integrated experimental teaching platform for unmanned surface vehicles
YANG Shaolong;ZHU Yanji;XIANG Xianbo;GAO Xingyun;[Objective] Education in the field of intelligent ships falls under higher maritime engineering education, characterized by a broad knowledge base and high practical requirements. The development and application of intelligent ships have a significant impact on higher maritime education and talent cultivation in ship and ocean engineering. A common issue in intelligent ship education is that theory learning is emphasized at the cost of design education and practical application, making it difficult for students to apply theoretical knowledge to solve complex problems in the research and development of unmanned surface vehicles (USVs). To address this challenge, this paper proposes an integrated and continuous experimental teaching platform for the development and verification of USV motion control systems, with the goal of enhancing students’ hands-on capabilities in this field. [Method] This paper applies a model-based design and development paradigm to the practical teaching of USV navigation control systems. This paradigm has become an advanced and practical design and development model for such systems. Centered on the progression from basic navigation control principles to practical implementation, the model-based paradigm is integrated throughout the entire experimental teaching process, from development to verification. Experimental teaching cases are designed based on the typical three-degree-of-freedom planar motion equations of USVs and the classic “guidance-navigation-control” architecture. These cases involve constructing a digital USV controlled-object model and an autonomous navigation control system. Through model-in-the-loop (MIL), software-in-the-loop (SIL), and hardware-in-the-loop (HIL) simulations, algorithms and key software and hardware are tested and verified in a serialized and phased manner. This approach mitigates the high cost and risk associated with real-vehicle testing while guiding students through experimental tasks such as motion modeling, multiwaypoint tracking control, remote operation, and state machine design for autonomous mission switching. Finally, the key software and hardware verified through multiple simulation tests are deployed to a consumer-grade USV platform for field testing. [Results] This paper presents a comprehensive experimental teaching platform for USV motion control systems, constructed based on a model-based design approach using MATLAB simulation tools, project-based case studies, the Raytheon V5 Nano controller, 0.68-meter consumer-grade USVs, and other software and hardware. Students engage in a full development cycle, from theoretical design and MIL simulation, to HIL simulation involving real vehicles and controllers, and finally to real-vehicle testing for the verification of autonomous navigation tasks. Through this integrated process design and innovative training approach, the platform successfully achieves the goal of unifying teaching and practice in the design and development of USV motion control systems. [Conclusion] The integrated experimental teaching platform for USV motion control developed in this paper covers the entire process of “theory, design, and practice,” providing continuous and integrated experimental teaching from simulation to real-world application. Closely aligned with theoretical instruction, the platform supports comprehensive and challenging experimental tasks, such as motion modeling, algorithm design, motion control, and parameter tuning.
This approach enhances students’ abilities in system-level experimental design, verification, and independent innovative practice. The development process is seamlessly linked and logically coherent, effectively fulfilling the training goal of integrated and continuous experimental teaching for USV navigation control. The platform enables students to master the core methodologies and workflows involved in the design and verification of control systems for intelligent marine equipment, laying a solid foundation for scientific research and engineering applications in the field of intelligent unmanned systems.
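The three-degree-of-freedom planar motion model and the waypoint tracking task around which the teaching cases are built can be sketched as follows. The drag coefficients, control gains, and waypoints are toy assumptions, not the identified parameters or guidance law of the platform described above.

```python
# Minimal sketch of 3-DOF planar USV kinematics with a simple proportional
# heading controller for multiwaypoint tracking (toy model and gains).
import numpy as np

def step(state, thrust, rudder, dt=0.1):
    """state = [x, y, psi, u, v, r]; heavily simplified surge/sway/yaw dynamics."""
    x, y, psi, u, v, r = state
    u += dt * (thrust - 0.5 * u)                 # surge with linear drag
    v += dt * (-0.8 * v)                         # sway velocity decays
    r += dt * (rudder - 1.2 * r)                 # yaw rate responds to rudder
    x += dt * (u * np.cos(psi) - v * np.sin(psi))
    y += dt * (u * np.sin(psi) + v * np.cos(psi))
    psi += dt * r
    return np.array([x, y, psi, u, v, r])

waypoints = [(20.0, 0.0), (20.0, 20.0), (0.0, 20.0)]
state = np.zeros(6)
for wx, wy in waypoints:
    for _ in range(5000):                                    # safety bound on steps
        if np.hypot(wx - state[0], wy - state[1]) < 1.0:     # 1 m acceptance radius
            break
        heading_ref = np.arctan2(wy - state[1], wx - state[0])
        err = np.arctan2(np.sin(heading_ref - state[2]),
                         np.cos(heading_ref - state[2]))     # wrapped heading error
        state = step(state, thrust=1.0, rudder=2.0 * err)    # P heading control
    print(f"waypoint ({wx:.0f}, {wy:.0f}) reached near ({state[0]:.1f}, {state[1]:.1f})")
```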
Synergistic innovation mechanisms in global scientific research cooperation: An empirical study based on multinational collaborative laboratories
LIU Changfeng;WANG Rui;[Objective] This study provides a comprehensive examination of collaborative laboratories (collaboratories) and their instrumental role in driving global scientific and technological progress. Through a dual-focused investigation of their historical development and contemporary operational frameworks, we identify and analyze critical systemic barriers that hinder effective international research cooperation. By synthesizing theoretical constructs with empirical evidence from leading collaboratory initiatives, this research develops evidence-based optimization strategies for transnational knowledge production systems. The proposed framework offers substantive contributions to strengthening the adaptive capacity and long-term viability of global scientific networks in an increasingly interconnected research landscape. [Methods] A mixed-methods approach was employed, combining systematic literature reviews, theoretical analyses, and comparative case studies. The theoretical foundation was anchored in the Triple Helix Model and Open Innovation Theory, which collectively frame the interplay among academia, industry, and government, as well as cross-boundary knowledge integration. Four globally representative collaboratories were examined: the MIT Media Lab (disciplinary convergence), CERN (large-scale science infrastructure), Fraunhofer-Gesellschaft (industry-driven applied research), and the Deep-Time Digital Earth (DDE) Program (data-centric global governance). Data were synthesized from academic publications, institutional reports, and policy documents, with a focus on operational models, innovation outputs, and socio-technical challenges. Comparative analysis emphasized governance structures, resource integration mechanisms, collaboration scales, and broader societal impacts. [Results] Collaboratories emerged as pivotal platforms for disruptive innovation, resource sharing, and global knowledge networking. The MIT Media Lab demonstrated the potential of industry-academia alliances in accelerating interdisciplinary breakthroughs, such as wearable technologies and AI ethics frameworks, while CERN exemplified the power of open science in enabling foundational discoveries like the Higgs boson and serendipitous technological spin-offs like the World Wide Web. Fraunhofer-Gesellschaft showcased how contract-based R&D bridges industrial innovation gaps, exemplified by its MP3 encoding technology, and the DDE Program highlighted the transformative role of AI-driven data integration in advancing Earth science for global sustainability. Digitalization, policy frameworks such as the Bayh-Dole Act, and globalization were identified as key drivers, enabling collaboratories to significantly enhance innovation efficiency through shared infrastructure and cross-disciplinary talent pools. Outputs spanned patents, industrial solutions, and global public goods like open-access scientific data. However, critical challenges persist, including geopolitical tensions over data sovereignty in initiatives like DDE, ethical dilemmas in AI applications at the MIT Media Lab, and intellectual property disputes arising from multi-stakeholder collaborations at Fraunhofer and CERN. Sustainability concerns further loom, as reliance on volatile funding sources, such as corporate sponsorships or politically contingent budgets, threatens long-term operational stability. [Conclusions] To address these challenges, the study proposes a tripartite “Mechanism-Policy-Technology” (MPT) synergy framework.
Mechanism optimization requires dynamic intellectual property-sharing protocols, such as blockchain-enabled smart contracts, and diversified funding models to balance public and private interests. Policy harmonization necessitates transnational standards for data sovereignty, inspired by frameworks like the EU’s GDPR, and multilateral agreements facilitated by organizations such as UNESCO to mitigate geopolitical friction. Technological empowerment involves leveraging AI for cross-disciplinary coordination, such as automated experiment design, and blockchain for secure, transparent data ecosystems, as seen in DDE’s Earth Explorer platform. The findings underscore that collaboratories are evolving from isolated entities into interconnected, digital-first innovation ecosystems. Future success hinges on aligning open science principles with agile governance, ethical AI integration, and inclusive global participation. Policymakers, academia, and industry must collaboratively invest in digital infrastructure, standardize cross-border protocols, and prioritize equitable knowledge distribution to unlock the full potential of transnational scientific collaboration.
Service performance evaluation system of university laboratory animal institutions based on the analytic hierarchy process
MU Dandan;XU Xiao;MA Xixiang;XIONG Wenjing;REN Jing;WU Jiemin;LONG Yun;ZHOU Shunchang;[Objective] Laboratory animal institutions in academic institutions serve as multifunctional platforms that combine core operations including laboratory animal breeding, experimental condition provision, education and quality assurance, as well as research and technical services related to animal experimentation. Optimizing their service performance is crucial for advancing academic disciplines, thereby rendering the scientific quantification and evaluation of their service quality and operational capabilities an increasingly significant research focus. Nonetheless, notable disparities exist among different facilities regarding management systems, resource allocation, and service efficiency. Numerous institutions encounter challenges such as suboptimal equipment utilization, inadequate standardization protocols, persistent cost-containment issues, and complications in data collection, all of which directly affect research productivity and teaching quality. Consequently, the establishment of a scientific and systematic performance evaluation framework to comprehensively assess service quality, operational efficiency, and support capacity is of substantial theoretical significance and practical value for the advancement of laboratory animal institutions. [Methods] This study aims to synthesize and critically evaluate recent research advancements in laboratory animal institutions. By integrating the operational characteristics of these facilities, we systematically analyze the determinants influencing both service capacity and performance metrics in this specialized sector. The research prioritizes three key domains, namely service investment, operational execution, and efficacy outcomes, to construct a hierarchical evaluation framework. Through expert scoring and the application of the analytic hierarchy process (AHP), we determined precise weight distributions for each evaluation indicator. This methodological approach facilitates a comprehensive and objective evaluation of service quality, operational efficiency, and support capacity, thereby equipping university administrators with robust evidence for institutional planning, resource allocation, and policy formulation. To assess the practical applicability of our framework, we conducted an in-depth empirical case study at the Laboratory Animal Center of Huazhong University of Science and Technology. [Results] The experimental results demonstrate that all target matrices at the scheme level successfully passed consistency verification, with an accurate weight distribution across all indicators. Furthermore, the empirical study revealed that strategic adjustments implemented by the Laboratory Animal Center between 2021 and 2024 resulted in measurable performance improvements, thereby reinforcing the strong applicability and operational feasibility of the AHP in evaluating service performance within laboratory animal institutions in higher education. Our findings contribute a substantial theoretical foundation and methodological support for performance assessment in this domain.
[Conclusions] Through a comprehensive analysis employing the AHP, this study has developed a robust, scientifically grounded, and practical performance evaluation framework for laboratory animal facilities within higher education institutions. This framework provides these facilities with deep insights into their operational efficiencies and deficiencies, thereby enabling data-driven strategic adjustments and fostering continuous service enhancement. The adoption of this evaluation system is anticipated to support the development of modernized, intelligent, and highly efficient operational mechanisms in laboratory animal facilities. Furthermore, it is expected to empower these institutions to make evidence-based strategic decisions, facilitating ongoing improvements and the establishment of advanced operational practices.
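The core AHP computation referred to above, deriving indicator weights from a pairwise comparison matrix and checking its consistency, can be sketched as follows. The 3×3 judgment matrix is an illustrative example only, not one of the study's expert-scored matrices.

```python
# Minimal sketch of the AHP step: weights from a pairwise comparison matrix
# and the consistency check (illustrative judgment matrix, not the study's data).
import numpy as np

# Criteria: service investment, operational execution, efficacy outcomes.
A = np.array([[1.0, 1/2, 1/3],
              [2.0, 1.0, 1/2],
              [3.0, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)                         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty random index
cr = ci / ri                                            # consistency ratio; < 0.1 passes

print("weights:", np.round(weights, 3))
print(f"lambda_max = {lambda_max:.3f}, CI = {ci:.4f}, CR = {cr:.4f}")
```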
Workload calculation method for the laboratory safety teams at the departmental level in universities
LI Kuan;GUO Yufeng;AI Desheng;[Objective] As fundamental hubs for academic instruction and scientific investigation, university laboratories bear critical responsibility for safeguarding personnel safety and ensuring uninterrupted research operations. The escalating sophistication of contemporary research methodologies and experimental protocols has correspondingly amplified the challenges associated with laboratory safety management. This study proposes to establish a systematic workload assessment framework for departmental level laboratory safety teams in universities. The developed model seeks to achieve three primary objectives: ① optimizing the distribution of safety management resources, ② elevating the overall standard of laboratory safety protocols, and ③ providing both conceptual foundations and actionable methodologies for enhancing safety management systems in academic laboratories. [Methods] This study draws on the experiences of laboratory safety team construction in both domestic and international universities and integrates China's policies and regulations on laboratory safety management. It systematically reviews the requirements of the Grading & Classification Administration of Laboratory Safety in Universities and Colleges (Trial Implementation) and proposes a model for calculating the workload of departmental safety teams based on hierarchical and classified laboratory safety management. The model covers five aspects: laboratory safety inspections, safety training, safety assessments, emergency drills, and other laboratory safety management tasks. The study establishes key parameters for each aspect, such as inspection frequencies (e.g., weekly for I-risk laboratories and monthly for II-risk ones), training hours (e.g., 24 hours for I-risk laboratories), and assessment times (e.g., 30 minutes per report for I-risk laboratories). These parameters are tailored to different risk levels (Levels I-IV) and are designed to be adaptable to various disciplines, laboratory sizes, and risk profiles to ensure the model's generalizability and applicability. [Results] The innovative outcome of this study is the development of a comprehensive and quantifiable model for calculating the workload of laboratory safety teams. The model takes into account the risk levels of laboratories and integrates the frequency and time consumption of safety management tasks, providing a clear reference for personnel allocation. The study finds that there are significant differences in the workload of safety management tasks across laboratories with different risk levels. For instance, I-risk laboratories (Level I) require more frequent inspections and detailed assessments compared to IV-risk ones (Level IV). The model also highlights the necessity of customizing safety management strategies according to the specific characteristics of each laboratory. Additionally, the study identifies existing issues in the current allocation of laboratory safety teams, such as unclear responsibilities, lack of clear workload definitions, and insufficient professionalism, which hinder the improvement of laboratory safety management. The application of the model can provide scientific suggestions for optimizing the structure of safety teams and enhancing management efficiency. [Conclusions] The model developed in this study provides important theoretical and practical guidance for university laboratory safety management. 
It offers a scientific basis for optimizing the allocation of safety personnel and improving management efficiency by quantifying the workload of safety management tasks. Looking forward, with the development of laboratory safety management technologies and the accumulation of practical experience, the model can be further refined to incorporate emerging technologies such as dynamic risk assessment and artificial intelligence-assisted management, to develop intelligent allocation solutions that adapt to evolving laboratory safety management needs. Moreover, customizing allocation strategies based on different disciplines, laboratory sizes, and risk levels will help drive the continuous improvement of laboratory safety management towards more refined and professional directions, providing strong support for the construction of world-class universities and disciplines in China.
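The workload model summarized above amounts to multiplying task frequencies by per-task times for each laboratory and summing over the department's laboratories. A minimal sketch of that calculation is shown below; the Level I figures follow the parameters quoted in the abstract (weekly inspections, 24 training hours, 30 minutes per assessment report), while the remaining numbers and the example department are placeholder assumptions rather than the paper's full parameter set.

```python
# Minimal sketch of the annual workload calculation for a departmental safety
# team (Level I parameters follow the abstract; other values are placeholders).
INSPECTIONS_PER_YEAR = {"I": 52, "II": 12, "III": 6, "IV": 2}    # weekly, monthly, ...
INSPECTION_HOURS = {"I": 1.0, "II": 0.75, "III": 0.5, "IV": 0.5}
TRAINING_HOURS = {"I": 24, "II": 16, "III": 8, "IV": 4}
ASSESSMENT_HOURS_PER_REPORT = {"I": 0.5, "II": 0.4, "III": 0.3, "IV": 0.2}

def annual_workload_hours(labs):
    """labs: list of (risk_level, assessment_reports_per_year) tuples."""
    total = 0.0
    for level, reports in labs:
        total += INSPECTIONS_PER_YEAR[level] * INSPECTION_HOURS[level]
        total += TRAINING_HOURS[level]
        total += reports * ASSESSMENT_HOURS_PER_REPORT[level]
        # Emergency drills and other management tasks would be added similarly.
    return total

# Example department: two Level I, three Level II, and five Level IV laboratories.
labs = [("I", 20), ("I", 15), ("II", 10), ("II", 10), ("II", 8),
        ("IV", 4), ("IV", 4), ("IV", 4), ("IV", 2), ("IV", 2)]
print(f"estimated annual workload: {annual_workload_hours(labs):.0f} person-hours")
```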
Study on the design and analysis methods of orthogonal experiment
Liu Ruijiang, Zhang Yewang, Wen Chongwei, Tang Jian (School of Pharmaceutics, Jiangsu University, Zhenjiang 212013, China) The importance of orthogonal experimental design and analysis is briefly introduced, and its principles and characteristics are expounded. The design methods of orthogonal experiments and the analysis methods of orthogonal experimental results are analyzed in detail, providing systematic methods for orthogonal experimental design and analysis. Problems in orthogonal experimental design and analysis, as well as the development of software for orthogonal experimental design and analysis, are also pointed out.
Research on statistical analyses and countermeasures of 100 laboratory accidents
Li Zhihong;Training Department, Kunming Fire Command School;This paper summarizes 100 typical cases of laboratory accidents occurring since 2001 and analyzes the cases in terms of accident type, accident link, accident cause, dangerous substance category, etc. The results are as follows: fire disasters and explosion accidents are the main types of laboratory accidents; dangerous chemicals, instruments and equipment, and pressure vessels are the main dangerous substances; instrument and equipment use and reagent application are the main links in which accidents occur; and violation of rules, improper operation, carelessness, and wire short circuits and aging are the main causes of accidents. The paper also puts forward countermeasures and suggestions for the prevention and control of laboratory accidents in the following aspects: establishing a complete safety management system, actively promoting the standardized construction of laboratory safety, strengthening laboratory safety education and training, and formulating and improving emergency plans for laboratory accidents.
Constructing a practice teaching system focusing on ability training
Zhang Zhongfu (Zengcheng College, South China Normal University, Guangzhou 511363, China) Economic and social development has given rise to an increasing demand for applied talents, and more and more importance has been attached to practice teaching. A practice teaching system focusing on ability training should be constructed by building the practice teaching system, adjusting the practice teaching contents, reforming the practice teaching pattern, coordinating the practice teaching administration system, and constructing a scientific and reasonable quality assurance system and a practice teaching evaluation system.
The application of fluorescence spectroscopy to the study of proteins
Yin Yanxia, Xiang Benqiong, Tong Li (College of Life Science, Beijing Normal University, Beijing 100875, China) Fluorescence spectroscopy is very important for studying protein structure and conformational changes. The concept and principle of fluorescence spectroscopy are introduced first, and its application to the study of proteins is then explained.
CNC machine tool course based on a systematic work process and its teaching design
Li Yanxian (Department of Mechanical and Electronic Engineering, Nanjing Communications Institute of Technology, Nanjing 211188, China) Based on professional training objectives and the vocational skills and knowledge required by typical job positions, and taking "CNC machine tools and spare parts" as the carrier and the building of CNC programming and operation capability as the center, this paper presents the design of 9 projects, 25 learning environments, and 67 tasks, covering knowledge of CNC machine tools; observation and analysis of CNC lathes, CNC milling machines, and machining centers; and the programming and processing of stepped shafts, threaded shafts, handwheel slots, convex templates, and bases. The "convex template programming and processing" learning environment is then taken as an example for the teaching unit design.
Construction and implementation of a new experimental teaching system for the chemical specialty
YANG Jin-tian (Institute of Life Science, Huzhou Normal College, Huzhou 313000, China) A new system of chemical experiment teaching is constructed, in which comprehensive, open, and research-oriented experiments are set up to improve the degree of resource sharing, the efficiency of equipment use, and the quality of experimental teaching, thereby effectively strengthening the practical abilities of undergraduates and fostering their innovative spirit.
Practice and thinking on the education of the “College Students' Innovative and Entrepreneurial Training Program” based on the tutor system
Qian Xiaoming;Rong Huawei;Qian Jingzhu;Office of Academic Affairs, Nanjing University of Technology;Innovation and entrepreneurship education has been included in the teaching and education programs of colleges and universities. The "College Students' Innovative and Entrepreneurial Training Program" has become an "Excellent Program" and one of the most important reform tasks of the Ministry of Education. The tutor system is an effective way of innovative education and pilot training for both colleges and students. Through the program, students learn the methods of innovation research and the techniques of the entrepreneurial process; meanwhile, college teachers find a new stage on which to improve their teaching ability. This article focuses on the project, practice, and feasibility of the "College Students' Innovative and Entrepreneurial Training Program" under the tutor system.
Experimental research on protein immunoblot assay
ZHANG Yan-wan, YE Jue, SHI Na, MENG Xian-min, WANG Lai-yuan (Central Laboratory, Fuwai Hospital for Cardiology, Chinese Academy of Medical Sciences, Beijing 100037, China) This paper describes the important significance of the protein immunoblot assay (Western blotting) in protein research, together with its experimental principle and methods. Several main aspects of the experimental techniques are analyzed and discussed, and research experience with the protein immunoblot assay is also described.