
Hashing for localization: A review of recent advances and future trends
REN Peng;LIU Jingyu;ZHANG Weibo;[Significance] This paper reviews state-of-the-art hashing-for-localization (HfL) technologies. It begins by reviewing visual localization and hashing-based visual retrieval technologies. In the field of visual localization, camera pose estimation and image geolocalization are two major research topics. Although their application areas differ, they share common key technologies in terms of image feature extraction and matching. Additionally, both face common challenges, including complex feature representation and slow localization speed. Hashing-based visual retrieval is a technology for accelerating retrieval using hashing algorithms. This technology performs feature matching by computing the Hamming distance between compact hash codes, which simplifies computational complexity and improves retrieval efficiency. Therefore, the development of hashing algorithms for accelerating visual localization represents an efficient and robust solution. [Progress] This paper introduces a series of HfL technologies that leverage hashing algorithms to accelerate visual localization. The typical process for implementing HfL begins by extracting visual features from a query image for localization and from reference images with localization information. These features are then mapped into compact binary hash codes using trained hash functions while preserving their visual similarities. During localization, the query image, represented by the generated hash code, is compared with the hash codes of the reference images in the database using the Hamming distance. Localization is achieved by selecting the closest matches through ranking, i.e., the localization information of the reference images whose hash codes have small Hamming distances to the query image's hash code is considered the localization result. This paper reviews four types of HfL technologies: remote sensing HfL, sketch HfL, street view image geolocalization, and encrypting hashing against localization. It also discusses their empirical advantages over existing visual localization methods. [Conclusions and Prospects] Through methodological and experimental analysis, it is observed that HfL offers the following three main advantages: (a) HfL significantly reduces storage requirements by mapping image features to compact binary hash codes; (b) HfL accelerates the feature matching process by computing the Hamming distance between hash codes; (c) HfL, aided by geographic cluster extraction, achieves exact localization. HfL effectively overcomes the inefficiencies of traditional visual localization technologies and represents a breakthrough in fast localization. In the future, fusing various data sources, including visual, global navigation satellite system, Wi-Fi, and acoustic signals, to enhance HfL will be a key trend. Leveraging large language models to update HfL with semantic information will be another key trend. HfL will provide efficient and accurate localization support across small-, medium-, and large-scale application scenarios.
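To make the matching step concrete, the following is a minimal Python sketch of Hamming-distance ranking over binary hash codes; the 64-bit codes, database size, and geotags are hypothetical stand-ins, since a real HfL system learns its hash functions from visual features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_refs = 64, 1000  # hypothetical code length and database size

# Database: one binary hash code and one geotag (lat, lon) per reference image.
ref_codes = rng.integers(0, 2, size=(n_refs, n_bits), dtype=np.uint8)
ref_geotags = rng.uniform([36.0, 120.0], [36.1, 120.2], size=(n_refs, 2))
query_code = rng.integers(0, 2, size=n_bits, dtype=np.uint8)  # hash of the query image

# Hamming distance = number of differing bits between the query and each reference.
hamming = np.count_nonzero(ref_codes != query_code, axis=1)

# Rank references by distance; the closest match's geotag is the localization result.
top_k = np.argsort(hamming)[:5]
print("top-5 Hamming distances:", hamming[top_k])
print("estimated position (lat, lon):", ref_geotags[top_k[0]])
```

This ranking step is what makes HfL fast in practice: bitwise comparisons over compact codes replace floating-point comparisons between high-dimensional feature vectors.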
Design of atmospheric correction experiment for an Air–Space–Ground synchronous measurement framework of satellite remote sensing in aquatic environments
CUI Jianyong;ZHANG Wangchen;HOU Shuhang;REN Peng;LIU Shanwei;SU Xinyuan;[Objective] Accurate atmospheric correction is critical for retrieving water quality parameters from satellite remote sensing, as >90% of the radiation signal received by optical satellites originates from atmospheric interference rather than water constituents. Traditional correction methods—such as physics-based radiative transfer models (e.g., the moderate resolution atmospheric radiative transfer code), dark pixel algorithms, and empirical approaches—face notable challenges, including computational complexity, dependence on synchronous ground measurements, and spatiotemporal mismatches between satellite overpasses and ground data collection. This study proposes an innovative air–space–ground synchronous measurement framework to improve atmospheric correction accuracy in coastal waters. By integrating unmanned aerial vehicle (UAV) hyperspectral data, ground-based spectral measurements, and satellite imagery, this study aims to reduce reliance on extensive ground sampling, address low-altitude atmospheric noise, and enhance the reliability of chlorophyll-a (Chl-a) concentration inversion, thereby supporting operational water quality monitoring. [Methods] The experiment was conducted in three phases. First, during Sentinel-2 overpasses, synchronous data acquisition was carried out. UAV hyperspectral imagery (Cubert S185 sensor, 450–998 nm, 138 bands), ground-measured water surface spectra (TriOS AWRMMS system), and in situ Chl-a concentrations (EXO multi-parameter analyzer) were collected within ±1 h in Tangdao Bay, Qingdao. A grid of 29 sampling points with 100 m spacing ensured spatial coverage and spectral consistency. Second, in the air–ground correction phase, UAV imagery was preprocessed to remove spatial noise using Lee filtering and spectral artifacts using Savitzky–Golay smoothing. The matching pixel-by-pixel–empirical line method (MPP–ELM) was employed to align UAV reflectance with ground measurements. Robust regression minimized outliers, and spectral response functions were applied to harmonize UAV and Sentinel-2 bands, reducing mismatches. Third, for air–space correction, an exponential–trigonometric optimized backpropagation (ETO-BP) neural network was designed to map UAV-corrected reflectance to Sentinel-2 multispectral data. The model incorporated scaled conjugate gradient optimization for faster convergence, dropout layers to prevent overfitting, and ensemble learning to enhance robustness. Training used 1 098 data pairs resampled to 10 m resolution for spatial consistency. [Results] The hierarchical air–ground–space correction achieved superior performance. Corrected Sentinel-2 reflectance showed an R² (coefficient of determination) of 0.92 against ground truth, outperforming traditional physics-based methods (R²=0.72), UAV-only correction (R²=0.65), and ground-only approaches (R²=0.89). Chl-a inversion accuracy improved by 42% after correction, with the RMSE (root mean square error) decreasing from 8.7 to 5.1 μg/L. The optimal Chl-a model combined a four-band ratio feature and the particle backscattering slope, achieving R²=0.88. UAV hyperspectral data addressed the spatial limitations of sparse ground sampling, enabling “face-to-face” correction and reducing false alarms by 35%. Sensitivity analysis indicated that aerosol heterogeneity and residual low-altitude scattering—despite ground-based constraints—remained challenges, emphasizing the need for real-time meteorological integration.
[Conclusions] This study presents a robust framework for atmospheric correction by synergizing multiplatform remote sensing data. The key contributions of the proposed framework are as follows. (1) Integration of MPP–ELM with ETO-BP: The hybrid model bridges spatiospectral gaps between ground points, UAV imagery, and satellite pixels, effectively addressing nonlinear atmospheric effects. (2) Operational efficiency: UAVs allow rapid, high-resolution data acquisition, reducing dependence on labor-intensive ground campaigns while maintaining correction accuracy. (3) Enhanced Chl-a retrieval: The model's improved accuracy supports coastal water quality monitoring, particularly in optically complex regions. However, the framework's performance depends on minimal ground data for low-altitude noise suppression and remains sensitive to aerosol variability. Future work should incorporate real-time atmospheric profiles and extend validation to diverse aquatic environments. This approach underscores the potential of multiplatform remote sensing for advancing environmental management and ecological conservation.
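As an illustration of the empirical-line idea behind MPP–ELM (a sketch of the general technique, not the authors' implementation), the following fits a robust per-band linear relation between simulated UAV reflectance and ground measurements and applies it image-wide; all reflectance values and the five bands are synthetic.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor  # robust regression to down-weight outliers

rng = np.random.default_rng(1)
n_points, n_bands = 29, 5  # 29 sampling points as in the study; the 5 bands are illustrative

truth = rng.uniform(0.01, 0.20, size=(n_points, n_bands))      # ground-measured reflectance
uav = 1.15 * truth + 0.02 + rng.normal(0, 0.005, truth.shape)  # simulated UAV observation

# Per-band empirical line: reflectance_ground ~ gain * reflectance_uav + offset.
gains, offsets = np.zeros(n_bands), np.zeros(n_bands)
for b in range(n_bands):
    model = HuberRegressor().fit(uav[:, [b]], truth[:, b])
    gains[b], offsets[b] = model.coef_[0], model.intercept_

corrected = uav * gains + offsets  # the same per-band line can correct every UAV pixel
rmse = np.sqrt(np.mean((corrected - truth) ** 2))
print("per-band gains:", np.round(gains, 3))
print(f"RMSE after correction: {rmse:.4f}")
```

Once the per-band gains and offsets are estimated at the sampling points, they can be applied to the full UAV scene, which is what allows the sparse ground grid to correct the whole image.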
Research on the construction path of national laboratories under the new national system
WU Peng;WU Gang;[Objective] National laboratories are the “strategic heavyweights” in the race to shape the future. Under the new national system, China has only recently begun the substantial development of its national laboratories, and the planning frameworks, governance models, evaluation mechanisms, and innovation ecosystems associated with them are still in the exploration phase. This paper reviews the 40-year evolution of Chinese national laboratories, examining their development, effectiveness, existing challenges, and current opportunities. Drawing on the successful practices of world-class national laboratories in Europe and the United States, it offers recommendations to support the advancement of Chinese national laboratories within the new national system and to accelerate the establishment of a national laboratory system with Chinese characteristics. [Methods] This paper extensively reviews the literature on the construction, management, and development of national laboratories at home and abroad, draws on interviews with staff and partners involved in the construction of national laboratories, and assesses the current status, problems, and trends of the construction and development of Chinese national laboratories. [Results] The paper proposes that, under the new national system, it is necessary to leverage the synergistic advantages of the government's “visible hand” and the market's “invisible hand” to promote the construction of national laboratories. Work is urgently needed in the following four aspects: (1) strengthening top-level design, establishing an authoritative decision-making command system and an efficient coordination execution system, classifying and promoting the construction of national laboratories, and providing an improved diversified-resource-investment mechanism; (2) optimizing governance models, clearly defining the rights and responsibilities of sponsors, managers, and legal entities of national laboratories through national legislation, breaking down barriers to talent mobility and resource sharing among different innovation entities, encouraging and supporting organized scientific research, end-to-end innovation, and large-scale collaborative problem-solving, and working to establish an “asymmetric” advantage in areas crucial to future competitiveness and urgent national needs; (3) improving evaluation and diagnosis for national laboratories, leveraging the incentive role of evaluation, exploring the development of a long-term strategic diagnostic evaluation system for national laboratories, along with an artificial intelligence and big-data-driven national laboratory monitoring system, and ensuring timely “calibration” of their goals and actions; and (4) deepening guidance and spillover effects, promoting collaborative innovation among national laboratories, research universities, research institutions, and leading technology companies, accelerating the transformation of scientific and technological achievements into real productive forces, and creating a community for the development of national laboratories. [Conclusions] Under the new national system, national laboratories must not only make “milestone” scientific and technological contributions in key areas of national economic and social development and national security but also conduct “ice-breaking” exploration and innovation in scientific and technological systems and mechanisms.
Through coordinated planning, improved governance, optimized evaluation, and guidance-and-spillover measures, efforts should be made to mitigate the scattered, closed, and inefficient use of innovation resources on traditional national platforms, continuously enhance the innovation leadership, resource convergence, and development support of national laboratories, form an innovation system and ecosystem centered on national laboratories, and make them the “flagship” of the national strategic scientific and technological forces and the “anchor” for maintaining national security.
Experimental study on beam position monitor based on silicon pixel chip Topmetal
LIU Jun;GAO Chaosong;WANG Hulin;SUN Xiangming;[Objective] A beam position monitor is an important component in a particle accelerator, enabling real-time measurement and characterization of beam parameters such as position, intensity, and spot size. These measurements are essential for optimizing accelerator performance, ensuring beam stability, and conducting high-precision experiments. Traditional beam position monitors often rely on scintillators, wire chambers, or semiconductor detectors, each with inherent limitations in resolution, noise, or radiation hardness. To address these challenges, this paper explores the use of Topmetal, a low-noise and high-resolution silicon pixel sensor fabricated using complementary metal–oxide–semiconductor (CMOS) technology, as the charge collection electrode in a gas-based beam monitor. [Methods] The proposed beam monitor integrates a Topmetal sensor into a gas detector structure, where ionizing particles generate electron–ion pairs in the gas chamber. The ionized electrons drift toward the Topmetal pixel array under an applied electric field and are sensed and read out by Topmetal. The position of the beam particles is calculated from the signal distribution on the pixel array. A dedicated readout electronics system, consisting of front- and back-end electronics, was designed to process the signals from the beam monitor. The front-end electronics consist of a Topmetal bonding board and a motherboard. To increase the sensitive area of the detector, four Topmetal-II chips were installed in one row on the bonding board. The motherboard implements four main functions: power supply for the chips, bias voltage provision, control signal fan-out, and analog output buffering. The back-end electronics were designed around a Xilinx Kintex-7 series field-programmable gate array, which is mainly responsible for Topmetal-II chip configuration, receiving analog output signals for analog-to-digital conversion, data packaging and caching, and data transmission. The readout electronics system was designed for low noise, fast signal digitization, and efficient data acquisition. To validate the feasibility and evaluate the performance of the beam position monitor, tests using ²⁴¹Am α-particles and heavy-ion beams were conducted in addition to the electronic tests. [Results] The tests proved that all designed functions of the readout electronics system worked as expected. For the downlink, the readout electronics system can correctly configure the Topmetal chip, whereas for the uplink, it can read out the chip data and transmit them to the computer through Ethernet. The ²⁴¹Am α-particle test revealed that the whole detector system of the beam position monitor, including the high-voltage system, gas system, and electronic system, worked as expected, and the detector could successfully register individual α particles. The beam tests demonstrated that the detector works stably in the beam environment and resolves individual beam particles at beam fluxes of 10⁴–10⁶ pps. With a pixel size of 83 μm × 83 μm, the Topmetal-based beam monitor can achieve excellent position resolution, making it suitable for high-precision beam diagnostics. Furthermore, the detector's gas-based design offers flexibility in adjusting the sensitivity and dynamic range by varying the gas mixture and pressure.
[Conclusions] This paper provides a new approach for high-position-resolution beam position monitoring that combines the advantages of CMOS pixel sensors and gas detectors. The Topmetal-based system offers excellent spatial resolution compared with traditional beam position monitors, along with low noise and radiation tolerance. Future work will focus on improving the rate capability of the detector for high-intensity beam applications and exploiting gas amplification mechanisms. The successful implementation of this technology could significantly enhance beam diagnostics in accelerators, particularly in applications requiring micron-level precision, such as synchrotron light sources, medical proton and ion therapy, and particle and nuclear physics experiments.
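To illustrate how a particle's position can be recovered from the signal distribution on such a pixel array, below is a minimal charge-weighted-centroid sketch on simulated data; the Gaussian charge cloud, noise level, and threshold are assumptions, and only the 83 μm pitch is taken from the paper.

```python
import numpy as np

PITCH_UM = 83.0  # Topmetal-II pixel pitch (83 um x 83 um)
rng = np.random.default_rng(2)

# Simulate one hit: a Gaussian charge cloud centered between pixels, plus noise.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
true_x, true_y = 30.4, 21.7  # in pixel units
signal = 500.0 * np.exp(-((x - true_x) ** 2 + (y - true_y) ** 2) / (2 * 1.5 ** 2))
frame = signal + rng.normal(0, 5.0, signal.shape)

# Threshold to suppress noise, then take the charge-weighted centroid.
q = np.where(frame > 25.0, frame, 0.0)
cx, cy = (q * x).sum() / q.sum(), (q * y).sum() / q.sum()
print(f"reconstructed: ({cx * PITCH_UM:.1f} um, {cy * PITCH_UM:.1f} um)")
print(f"true:          ({true_x * PITCH_UM:.1f} um, {true_y * PITCH_UM:.1f} um)")
```

Because the centroid interpolates between pixels, the achievable position resolution can be finer than the 83 μm pitch itself.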
Experimental study on low-frequency electrical characteristics of fracture-filling natural gas hydrate sediments
XING Lanchang;WANG Yunlong;WANG Yonghui;HAN Weifeng;WEI Wei;LIU Bao;[Objective] Field drilling observations have revealed that natural gas hydrates can be of several types, including the pore-filling type, the fracture-filling type, or a combination thereof, within geological formations. Electrical parameters are commonly used to estimate hydrate saturation and sediment permeability. However, findings for pore-filling hydrates cannot be directly applied to fracture-filling hydrates because of their distinct characteristics. Consequently, comprehensive investigations integrating experimental, numerical, and theoretical approaches are essential. This study provides an experimental platform for characterizing the low-frequency electrical properties of fracture-filling hydrate sediments, supporting the development of evaluation models for hydrate saturation and sediment permeability. [Methods] First, a three-dimensional finite-element numerical model is developed using COMSOL Multiphysics to simulate the electrical field and optimize electrode structural parameters. Second, a testing apparatus is designed based on the optimized electrodes and the four-electrode principle to measure the low-frequency electrical parameters of hydrate-bearing samples. A scheme for preparing sediment samples containing fracture-filling hydrates is also established, along with a method for processing test data. Third, the influence of hydrate saturation and fracture count on the low-frequency electrical properties of fracture-filling hydrate sediments is analyzed using electrical double layer (EDL) polarization theory. [Results] The results reveal the following. First, for the spatial arrangement of the four electrodes, the minimum measurement error in conductivity is achieved when the radius of the disk-shaped potential electrode is equal to the inner radius of the ring-shaped current electrode. Second, for the analyzed electrical circuit, the relative errors of the real and imaginary parts of the electrical impedance are 8.7% and 5.9%, respectively. Third, within the low-frequency range from 1×10⁻² to 1×10³ Hz, as the saturation of the fracture-filling hydrates increases, the electrical conduction of the hydrate-bearing sediments decreases, and the EDL polarization strength decreases, leading to reductions in the real part and in the peak value of the imaginary part of the complex conductivity. Finally, the tetrahydrofuran hydrates synthesized in the experiment exhibit a porous structure with interconnected pore water. Under identical hydrate saturation conditions, an increased number of fractures allows more water to seep into the hydrates from the sea sand, enhancing the sample's electrical conduction while reducing the EDL polarization strength. Consequently, the real part of the complex conductivity increases, and the peak value of the imaginary part decreases. [Conclusions] Finite-element modeling is an effective tool for optimizing electrode array designs for impedance measurements. The testing apparatus and experimental scheme developed in this study adequately meet the requirements for preparing fracture-filling hydrate samples and measuring their low-frequency complex conductivity. Given that hydrate saturation and fracture parameters influence complex conductivity, future experimental work focusing on fracture characteristics, such as shape, density, and dip angle, is warranted.
Such research will contribute significantly to the development of evaluation models for hydrate saturation and sediment permeability applicable to fracture-filling hydrate reservoirs.
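For context on the data processing, the sketch below shows the standard conversion from four-electrode impedance spectra to complex conductivity, σ*(f) = K/Z(f), where K is a cell constant obtained from electrode geometry or calibration; K and the impedance values here are hypothetical, and only the 1×10⁻² to 1×10³ Hz band follows the paper.

```python
import numpy as np

K = 12.5  # cell constant in 1/m (hypothetical, from geometry/calibration)
freqs = np.logspace(-2, 3, 6)  # spanning 1e-2 to 1e3 Hz

# Hypothetical measured impedance: magnitude (ohm) and phase (deg) per frequency.
mag = np.array([5200.0, 4900.0, 4600.0, 4400.0, 4300.0, 4250.0])
phase_deg = np.array([-4.0, -3.1, -2.2, -1.4, -0.8, -0.4])

Z = mag * np.exp(1j * np.deg2rad(phase_deg))
sigma = K / Z  # complex conductivity, S/m; real part = conduction, imag part = polarization

for f, s in zip(freqs, sigma):
    print(f"{f:9.2f} Hz   sigma' = {s.real:.4e} S/m   sigma'' = {s.imag:.4e} S/m")
```

In this convention, a weakening EDL polarization shows up as a smaller imaginary part, which is how the saturation and fracture-count trends in the abstract are read off the spectra.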
Prediction and safety evaluation of tunnel deformation based on monitoring data and RBF neural network
LI Jianhong;JING Wei;LIU Gongning;[Objective] The deformation law and deformation magnitude during tunnel construction are important criteria for judging the stability of the surrounding rock. Owing to factors such as monitoring cost, construction schedule, and complex site conditions, it is impossible to monitor every section on site. Therefore, it is necessary to develop targeted tunnel deformation prediction methods and to fit and refine the prediction model using on-site monitoring data, thereby providing a reliable means of ensuring tunnel construction safety; this is the main focus and difficulty of current research. [Methods] To solve the above problem, this paper proposes a tunnel deformation prediction and safety evaluation method based on monitoring data and a radial basis function (RBF) neural network. First, taking a long tunnel in Southwest China as an example, the deformation of the tunnel is measured and analyzed on site. Then, based on the RBF neural network model, the measured data of vault settlement, peripheral displacement, and internal deformation of the surrounding rock are fitted and predicted and compared with the predictions of a traditional BP neural network; tunnel safety is then evaluated according to the predicted results, and corresponding construction suggestions are put forward. [Results] The analysis results show the following: 1) The determination coefficients R² for predicting tunnel deformation (vault settlement, peripheral displacement, and internal deformation of the surrounding rock) with the proposed RBF neural network are 0.999, 0.998, and 0.978, respectively, all above 0.97, indicating high accuracy. By contrast, the determination coefficients R² of the traditional BP neural network are 0.992, 0.984, and 0.886, respectively. 2) The results clearly show that the generalization ability of the RBF neural network is better than that of the traditional BP neural network (especially for the internal displacement of the surrounding rock), and its predictions agree well with the field tunnel deformation monitoring data. 3) Combining the RBF neural network predictions with the three-level safety evaluation criteria of tunnel engineering, a safety evaluation of eight sections of the studied tunnel is carried out. For the first five sections, the average variation rates of vault settlement and peripheral displacement are 0.1 mm/d, below the safety criterion of 0.2 mm/d; the surrounding rock is basically stable, and construction can proceed normally. For section K33+228, the average variation rate falls within 0.2–1.0 mm/d, and observation should be strengthened. For sections K33+240 and K33+251, the predicted average rates of vault settlement and peripheral displacement exceed 1.0 mm/d, and the lining should be strengthened. In short, most sections of the tunnel meet the safety requirements and allow normal construction, but the average deformation rates of a few sections remain large, and support and observation must be strengthened going forward. Over time, the internal displacement of different parts of the surrounding rock decreases, and the displacements tend to stabilize.
[Conclusions] The deformation prediction method proposed in this paper not only provides a quantitative basis for decision-making in the current project but also accumulates valuable technical experience for subsequent tunnel construction under similar geological conditions.
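As a worked illustration of the predict-then-grade workflow (a sketch on synthetic data, not the authors' trained model), the following fits a small Gaussian RBF network to vault-settlement history and grades the section by the three-level rate criterion cited above.

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(0, 30, dtype=float)
settlement = 12.0 * (1.0 - np.exp(-days / 8.0)) + rng.normal(0, 0.1, days.size)  # mm

# A tiny Gaussian RBF network: fixed centers and width, output weights by ridge regression.
centers, width = np.linspace(0.0, 34.0, 12), 4.0
def design(t):
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

Phi = design(days)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(centers.size), Phi.T @ settlement)

future = np.arange(30, 35, dtype=float)
rate = np.mean(np.diff(design(future) @ w))  # predicted average settlement rate, mm/d

# Three-level criterion quoted in the abstract: 0.2 mm/d and 1.0 mm/d thresholds.
if rate < 0.2:
    grade = "stable: normal construction"
elif rate <= 1.0:
    grade = "strengthen observation"
else:
    grade = "strengthen lining"
print(f"predicted average rate = {rate:.3f} mm/d -> {grade}")
```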
Test methods for bulk density, moisture content, porosity, and specific yield of Quaternary salt lake core samples in the Qaidam Basin
LIU Liang;YIN Lucheng;JIA Jiantuan;XIAO Yuping;ZHAO Yuxiang;HAN Guang;ZHU Yunjun;LI Haiming;[Objective] The physical properties of salt lake core samples, such as bulk density, moisture content, porosity, and specific yield, are fundamental parameters for exploring potash deposits. These properties are crucial for evaluating brine deposits and calculating salt and brine reserves. Owing to their highly heterogeneous structure and susceptibility to external forces, salt-lake core samples often deviate from their original state and undergo structural deformation. In addition, their strong hygroscopicity leads to the loss of crystallized water, altering their composition. During drilling, brine inevitably drains from the salt core samples, complicating the measurement of the brine volume discharged under gravity. This makes conventional methods unsuitable for determining porosity and specific yield. [Methods] After the salt-lake core samples are extracted from the drill rods, brine is drained under gravity. Bulk density is determined using Archimedes' principle, while moisture content is measured at 45 ℃, including the crystallized water from thenardite (sodium sulfate) and borax. After the bulk density measurement, the sample is placed in a siphon test bottle filled with brine from the same borehole and depth. A vacuum pump creates negative pressure, allowing brine to infiltrate the sample's pores for the specific yield test. The volume of brine injected equals the volume that would be discharged under gravity, and the ratio of the injected brine volume to the total sample volume defines the specific yield. Porosity is calculated by adding the volume of brine discharged (specific yield) to the volume retained (water retention) and dividing by the total sample volume. Porosity is further calculated through water retention and K-value tests (the K-value is the ratio of the volume of brine that the sample cannot discharge under gravity to the volume of water adsorbed by the sample). The specific yield and porosity data are validated by field pumping tests, with good agreement. [Results] The experiment uses brine from the same location and depth as the sample, which better reflects the actual conditions of Quaternary salt lake strata. Experimental temperatures are strictly controlled to prevent the loss of crystallized water from salts other than thenardite and borax. Specific yield is determined using the siphon test bottle, and porosity is calculated more accurately by determining the K-value and the content of adsorbed water. The specific yield data show less than 10% deviation from unsteady-flow pumping test results. [Conclusions] The proposed method is simple, straightforward, cost-effective, and fast. Using brine as the medium better matches actual formation conditions. The specific yield data align well with field pumping test results and can be used for reserve estimation of Quaternary salt lake liquid mineral layers. However, because it relies on low-negative-pressure backfilling, this method is unsuitable for the more compact salt-lake core samples of the Tertiary period. Since the test is destructive and the samples are heterogeneous, it cannot be repeated. Therefore, equipment calibration, environmental control, personnel training, and measures to preserve sample integrity during field sampling, storage, and transportation are essential.
The test requires two people cooperating in the operation and another two checking the recorded data; this dual-operator procedure helps ensure data accuracy and reliability.
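The volume bookkeeping behind the two parameters reduces to two ratios, sketched below; the helper functions and the example volumes are illustrative, not measurements from the study.

```python
def specific_yield(v_injected_cm3: float, v_total_cm3: float) -> float:
    """Vacuum-injected brine volume (taken as equal to the gravity-drainable
    volume) divided by total sample volume."""
    return v_injected_cm3 / v_total_cm3

def porosity(v_drained_cm3: float, v_retained_cm3: float, v_total_cm3: float) -> float:
    """Gravity-drainable brine plus retained water, divided by total sample volume."""
    return (v_drained_cm3 + v_retained_cm3) / v_total_cm3

# Example: a 200 cm3 core that accepts 24 cm3 of injected brine and retains 30 cm3 of water.
print(f"specific yield = {specific_yield(24, 200):.3f}")  # 0.120
print(f"porosity       = {porosity(24, 30, 200):.3f}")    # 0.270
```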
Contour line reconstruction algorithm based on multifeature fusion matching and multicondition-constraint surface construction
YAO Jinpeng;JIAN Chu;ZHU Sixin;ZHOU He;DENG Jiayue;HE Mengyu;JIAN Xingxiang;[Objective] Geological surface reconstruction based on contour line data has been widely adopted owing to its efficiency and accuracy. In particular, contour reconstruction using feature point matching and mesh partitioning has become a critical research focus in this field because of its excellent performance. However, existing feature point matching methods often exhibit inadequate robustness and accuracy when dealing with complex geological structures and noisy data. Additionally, current triangulation algorithms face significant challenges in generating high-quality triangular meshes, such as avoiding elongated and narrow triangles and ensuring uniform distribution without self-intersections. This study addresses these issues by proposing a novel contour line reconstruction algorithm that integrates multifeature fusion matching with multicondition-constraint surface construction, thereby enhancing the accuracy and reliability of geological surface reconstruction. [Methods] To overcome the limitations of multifeature constraints and the local defects of fuzzy matching algorithms, the proposed algorithm proceeds as follows. First, a similarity quantification evaluation method based on spatial quadrilaterals is introduced, defining four feature measures to enhance robustness and compensate for the shortcomings of basic triangular patches in capturing feature point adjacency information and orientation constraints. Principal component analysis is employed for feature fusion and optimal solution computation, with principal components selected based on cumulative contribution rates. The dissimilarity between matching point pairs is defined using the Euclidean distance, establishing a robust matching relationship. Second, a secondary matching mechanism incorporating distance weights and inflection point detection is proposed to mitigate the impact of locally unreasonable matches. Finally, to address the inadequacies of existing triangulation algorithms during the tiling process, an adjacency surface roughness function is defined to assess the quality of adjacent triangles. Surface construction is then performed based on this quality assessment, ensuring the smoothness and detail-capture ability of the resulting triangular mesh. [Results] Experimental results demonstrate that the proposed algorithm achieves reasonable outcomes in modeling geological exploration contour profile data and geophysical inversion profile data. By introducing multiple feature measures and optimization mechanisms, the accuracy and robustness of contour line reconstruction are significantly improved compared with conventional approaches, including global optimal constraint matching and local optimal constraint matching. Notably, when handling complex geological structures and noisy data, the new algorithm exhibits higher adaptability and stability, thereby enhancing the overall quality of the reconstructed models. [Conclusions] This study provides a robust and efficient solution for geological surface reconstruction through theoretical innovation and methodological improvements, significantly enhancing the accuracy and reliability of geological structure models. It offers substantial support for fields such as resource exploration, environmental monitoring, and disaster prevention.
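A minimal sketch of the fusion-and-matching step may help: several per-pair feature measures are fused with PCA, components are kept by cumulative contribution rate, and candidate pairs are ranked by Euclidean dissimilarity. The four random measures below stand in for the paper's quadrilateral-based measures, and the greedy matching omits the secondary matching mechanism.

```python
import numpy as np

rng = np.random.default_rng(4)
n_a, n_b, n_feat = 40, 40, 4
features = rng.normal(size=(n_a, n_b, n_feat))  # measure vector for candidate pair (i, j)

# PCA over all candidate pairs; keep components covering >= 90% cumulative contribution.
X = features.reshape(-1, n_feat)
X = (X - X.mean(axis=0)) / X.std(axis=0)
vals, vecs = np.linalg.eigh(np.cov(X.T))
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.90) + 1)

# Dissimilarity of pair (i, j) = Euclidean norm of its fused feature vector.
fused = (X @ vecs[:, :k]).reshape(n_a, n_b, k)
dissimilarity = np.linalg.norm(fused, axis=2)
matches = np.argmin(dissimilarity, axis=1)  # best partner on contour B for each point of A
print(f"kept {k} components; first five matches: {matches[:5]}")
```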
Research on simulation teaching of carbon emission flow calculation in power systems in the context of the low-carbon transition
CHEN Feixiong;FU Xiaoying;SUN Wenjing;SHAO Zhenguo;[Objective] Conventional carbon emission flow (CEF) calculation methods for power systems face challenges regarding unfair network loss allocation and difficulty in quantifying renewable energy's carbon reduction contributions. Existing studies often allocate network losses entirely to the load side or neglect the actual carbon reduction effects of renewable energy, leading to inequitable distribution of carbon emission responsibility. To address these issues, this study proposes a simulation-based teaching method for CEF calculation grounded in power decomposition. [Methods] To overcome these limitations, the proposed method decomposes the actual power network into a basic power network and a power deviation network. Virtual nodes are introduced into transmission branches to distribute network losses equally between the generation side and the load side. Renewable energy units are treated as negative-valued loads so that their carbon reduction contributions can be quantified by calculating the difference in injected carbon flow rates. Finally, simulations on an IEEE 57-node system compare carbon flow rates under four allocation modes and assess the impact of real power fluctuations on actual carbon flows. [Results] Experimental results demonstrate the following: 1) Under the bidirectional allocation mode, branch carbon flow rates fall between those of generator-side allocation and load-side allocation, validating the rationality of shared responsibility between the generation and load sides. 2) When renewable energy unit G2 is treated as a negative load, its carbon reduction contribution increases from 66.49 t/h in the basic power network to 92.74 t/h in the power deviation network. This accurately reflects the impact of renewable power variability on carbon emission reduction within the system. [Conclusions] The proposed power-decomposition-based CEF calculation method provides a fair and precise carbon emission measurement tool for the low-carbon transition of power systems. The teaching framework successfully integrates theoretical knowledge with simulation experiments, enhancing students' understanding of CEF-related theories while cultivating their low-carbon awareness and research literacy.
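To make the negative-load accounting concrete, here is a toy numeric sketch of the carbon-reduction credit as a difference in injected carbon flow rates; the two-unit system and its numbers are invented for illustration, whereas the paper evaluates the IEEE 57-node system with a full network model.

```python
# A zero-carbon unit treated as a negative load is credited with the carbon
# flow it displaces from conventional generation.
coal_intensity = 0.85                 # tCO2/MWh of the conventional unit
coal_power, wind_power = 300.0, 80.0  # MW
load = coal_power + wind_power        # MW served in total

injected_actual = coal_power * coal_intensity    # only the coal unit injects carbon flow
injected_counterfactual = load * coal_intensity  # if coal alone had served the load
reduction = injected_counterfactual - injected_actual  # credit assigned to the wind unit

print(f"actual injected carbon flow:  {injected_actual:.1f} t/h")
print(f"counterfactual injection:     {injected_counterfactual:.1f} t/h")
print(f"wind carbon-reduction credit: {reduction:.1f} t/h")
```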
Application and exploration of FLAC3D numerical simulation technology in coal mining disaster prevention and mitigation teaching
WANG Pu;CHEN Huidan;WEI Zesheng;ZHANG Jun;ZHANG Chuanyang;ZHANG Mei;[Objective] During underground mining, overburden movement, stress evolution, and energy release are the fundamental causes of mine pressure responses such as fault slipping, large deformation, rock bursts, and other hazards, which pose significant threats to mining safety. However, these underground disasters are often difficult to detect in advance, highly complex, and occur suddenly, making traditional disaster prevention and mitigation teaching methods limited in conveying disaster causation and coping mechanisms. To address issues such as poor intuitiveness, weak engineering relevance, and a disconnect between theory and practice in current teaching methods, this study aims to cultivate students’ analytical thinking and engineering decision-making skills in disaster warning and prevention. It introduces Fast Lagrangian Analysis of Continua in 3D (FLAC3D) numerical simulation technology into the teaching framework for disaster prevention and mitigation. Two typical teaching models are designed and implemented to explore the feasibility, practicality, and promotion value of FLAC3D in the teaching system. [Methods] Using FLAC3D, this study constructs a high-precision numerical model to simulate typical disaster conditions, such as reverse fault mining and gob-side roadway support. It integrates parameter regulation, dynamic evolution tracking, and result visualization modules to systematically analyze the entire process from mining disturbance to dynamic response and disaster chain evolution. This forms a “theoretical teaching → modeling practice” model—an effective instructional approach for disaster prevention and mitigation. A closed-loop teaching path is developed: “theoretical teaching → model design → parameter debugging → result interpretation → optimization decision-making → summary.” The model fully considers the influence of fault coal pillar width variation on peak stress and introduces a comparative analysis of roadway surrounding rock stability under multiple support structures, thereby enabling an approximate reconstruction of real working conditions. [Results] Results show that FLAC3D offers excellent dynamic visualization and parametric control capabilities. It clearly presents stress concentrations in mining, fault barrier effects, and rock failure modes and quantitatively analyzes the adaptability and effectiveness of various support schemes in controlling surrounding rock deformation. The deep integration of numerical simulation with teaching objectives has significantly enhanced students’ understanding of disaster mechanisms, triggering paths, and the development of prevention and control strategies. The problem-oriented simulation process has effectively stimulated students’ systematic analysis and engineering judgment. Feedback from teaching practices indicates a marked improvement in students’ awareness of disaster early warning and numerical simulation proficiency. [Conclusions] Embedding FLAC3D numerical simulation technology into the disaster prevention and mitigation teaching system effectively bridges the gap between theory and practice and significantly enhances the intuitiveness, interactivity, and practicality of instruction. This method strengthens students’ quantitative understanding of the spatiotemporal evolution of disasters and offers an important platform for personalized learning paths and complex disaster modeling training.
In the future, it can be further integrated with data mining and deep learning capabilities of artificial intelligence to establish a new paradigm in teaching and safety management, combining “numerical simulation + real-time monitoring + intelligent analysis.” This approach will support the high-quality development of coal mine engineering talent and the disaster prevention and control system.
Study on the design and analysis methods of orthogonal experiment
Liu Ruijiang, Zhang Yewang, Wen Chongwei, Tang Jian (School of Pharmaceutics, Jiangsu University, Zhenjiang 212013, China) The importance of orthogonal experimental design and analysis is introduced briefly, and the principle and characteristics are expounded. The design methods of orthogonal experiments and the analysis methods of orthogonal experimental results are analyzed in detail, affording a fully systematic methodology for orthogonal experimental design and analysis. Problems in orthogonal experimental design and analysis, as well as the development of software for this purpose, are pointed out at the end.
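Since the article surveys design and analysis methods rather than giving code, the following minimal Python sketch shows one standard workflow: laying out an L9(3^4) orthogonal array and performing range analysis on hypothetical experimental results.

```python
import numpy as np

# L9(3^4): 9 runs, 4 factors, 3 levels each (coded 0, 1, 2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
results = np.array([62., 71., 68., 75., 83., 77., 70., 66., 74.])  # hypothetical yields

# Range analysis: a larger spread R between level means marks a more influential factor.
for f in range(L9.shape[1]):
    level_means = np.array([results[L9[:, f] == lvl].mean() for lvl in range(3)])
    spread = level_means.max() - level_means.min()
    print(f"factor {f + 1}: level means {np.round(level_means, 1)}, R = {spread:.1f}")
```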
Research on statistical analyses and countermeasures of 100 laboratory accidents
Li Zhihong;Training Department, Kunming Fire Command School;This paper summarizes 100 typical cases of laboratory accidents that occurred from 2001 onward and analyzes them in terms of accident type, accident link, accident cause, and dangerous substance category. The results show that fires and explosions are the main types of laboratory accidents; dangerous chemicals, instruments and equipment, and pressure vessels are the main dangerous substances; instrument and equipment use and reagent application are the main accident links; and rule violations, improper operation, carelessness, and wire short circuits and aging are the main causes of accidents. Countermeasures and suggestions for the prevention and control of laboratory accidents are put forward in the following aspects: establishing a complete safety management system, actively promoting the standardized construction of laboratory safety, strengthening laboratory safety education and training, and formulating and improving emergency plans for laboratory accidents.
Constructing a practice teaching system focused on ability training
Zhang Zhongfu (Zengcheng College, South China Normal University, Guangzhou 511363, China) Economic and social development has given rise to an increasing demand for applied talents, and people attach growing importance to practice teaching. A practice teaching system should focus on ability training: building the practice teaching system, adjusting the practice teaching contents, reforming the practice teaching pattern, coordinating the practice teaching administration system, and constructing a scientific and reasonable quality assurance system and practice teaching evaluation system.
The application of fluorescence spectroscopy in protein studies
Yin Yanxia, Xiang Benqiong, Tong Li (College of Life Science, Beijing Normal University, Beijing 100875, China) Fluorescence spectroscopy is very important for studying protein structure and conformational changes. The concept and principle of fluorescence spectroscopy are introduced first, and then its applications in protein studies are explained.
Teaching design for a CNC machine tool course based on systematic work processes and its application
Li Yanxian (Department of Mechanical and Electronic Engineering, Nanjing Communications Institute of Technology, Nanjing 211188, China) According to the professional training objectives and the vocational skills and knowledge required by the main job positions, taking “CNC machine tools and their parts” as the carrier and the building of CNC programming and operation capacity as the center, this paper presents the design of nine projects (understanding CNC machine tools; observing and analyzing CNC lathes; observing and analyzing CNC milling machines and machining centers; and programming and machining of stepped shafts, threaded shafts, handwheel slots, convex templates, and bases), organized into 25 learning situations and 67 tasks, and then details the teaching unit design for the “convex template programming and machining” learning situation.
Construction and implementation of a new experimental teaching system for the chemistry specialty
YANG Jin-tian (Institute of Life Science, Huzhou Normal College, Huzhou 313000, China) A new system of chemical experiment teaching is constructed, in which comprehensive experiments, open experiments, and research-oriented experiments are set up to improve resource sharing, equipment utilization efficiency, and the quality of experimental teaching, thereby effectively strengthening the practical abilities and fostering the innovative spirit of undergraduates.
Practice and thinking on the “College Students' Innovative and Entrepreneurial Training Program” based on the tutor system
Qian Xiaoming;Rong Huawei;Qian Jingzhu;Office of Academic Affairs, Nanjing University of Technology;Innovation and entrepreneurship education has been included in the teaching and education programs of colleges and universities. The “College Students' Innovative and Entrepreneurial Training Program” has become an “Excellent Program” and one of the most important reform tasks of the Ministry of Education. The tutor system is an effective way of delivering innovative education and pilot training for both institutions and students. Through the program, students learn the methods of innovative research and the techniques of the entrepreneurial process, while teachers find a new stage on which to improve their teaching ability. This article focuses on the design, practice, and feasibility of the “College Students' Innovative and Entrepreneurial Training Program” under the tutor system.
Experimental research on protein immunoblot assay
ZHANG Yan-wan, YE Jue, SHI Na, MENG Xian-min, WANG Lai-yuan (Central Laboratory, Fuwai Hospital for Cardiology, Chinese Academy of Medical Sciences, Beijing 100037, China) This paper describes the significance of the protein immunoblot assay (Western blotting) in protein research, together with its experimental principle and methods. Several main aspects of the experimental technique are analyzed and discussed, and practical experience with the assay is also described.