[Objective] In the traditional curriculum of Electrical Machinery, students often struggle to grasp abstract theoretical concepts that are fundamental to the subject. These challenges are frequently compounded by methodological constraints that limit the practical application of these theories. Moreover, students may experience a disconnect between the academic material they study and the engineering standards widely used in the industry. [Methods] To address these limitations, this study introduces an innovative educational approach that incorporates real-world data from a transformer factory report, supplied by a collaborative industry partner, as an authoritative reference standard. This initiative aims to bridge the frequently encountered gap between academic theory and practical engineering applications. The paper outlines the development of an innovative experimental setup leveraging the capabilities of two robust software tools: Ansys Maxwell for magnetic field analysis and MATLAB/Simulink for electrical circuit analysis. Within this revised experimental framework, student groups are tasked with constructing models using these tools. The students then engage in a dual-path simulation process, where they not only simulate the electrical machinery but also compare their simulation results against the empirical data from the factory report. [Results] Through this comparison, a data feedback loop is established to meticulously analyze any discrepancies between the simulated outcomes and actual data. This detailed analysis provides valuable insights into the factors causing variations between theoretical predictions and real-world performance. This process not only enhances the students' understanding of the correlation between simulation results and actual product performance but also hones their ability to use multiple tools collaboratively to solve complex engineering problems. 
The practical implementation of this approach demonstrated that it effectively improves students' capabilities to apply theoretical knowledge to real-world scenarios. The approach also fosters a deeper understanding of the intricacies involved in the design and operation of electrical machinery. Furthermore, this method provides a pragmatic solution for conducting industry-education integrated experiments, especially in situations with limited access to measured operational data. [Conclusions] Integrating industry-standard data into the academic curriculum prepares students better for future challenges they may encounter in their engineering careers, ensuring that they are not only well-versed in theoretical concepts but also adept at applying them in practical, real-world contexts. The students can, thus, develop the necessary skills and knowledge to bridge the gap between academic learning and professional practice. Thus, students can develop into more competent and confident engineers, ready to tackle the complexities of the modern engineering landscape.
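The comparison step in the data feedback loop described above can be sketched in a few lines: simulated quantities are checked against the factory-report reference values, and any deviation beyond a tolerance is flagged for discrepancy analysis. The quantity names, numerical values, and the 5% tolerance below are illustrative assumptions, not data from the study.

```python
# Sketch of the data feedback loop: compare simulated transformer
# quantities against factory-report reference values and flag any
# deviation that exceeds a tolerance. All names, numbers, and the
# tolerance are hypothetical, for illustration only.

def relative_error(simulated: float, measured: float) -> float:
    """Relative deviation of a simulated value from the factory datum."""
    return abs(simulated - measured) / abs(measured)

def review_results(sim: dict, report: dict, tolerance: float = 0.05) -> dict:
    """Per quantity, return the relative error and whether it needs review."""
    feedback = {}
    for key in report:
        err = relative_error(sim[key], report[key])
        feedback[key] = {"rel_error": err, "within_tolerance": err <= tolerance}
    return feedback

# Hypothetical example: no-load loss (W) and short-circuit impedance (%).
sim = {"no_load_loss": 1180.0, "impedance_pct": 6.4}
report = {"no_load_loss": 1250.0, "impedance_pct": 6.25}
fb = review_results(sim, report)
```

A quantity flagged as outside tolerance (here the no-load loss, with a 5.6% deviation) would then be traced back to modeling choices in Maxwell or Simulink, which is precisely the analysis step the student groups perform.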
[Significance] Numerous astronomical observations have confirmed the existence of dark matter in the universe. Weakly Interacting Massive Particles(WIMPs) can naturally form in the early universe, and if they exist stably, their mass density satisfies the constraints imposed by astronomical observations. These properties make WIMPs one of the most promising candidates for dark matter. Their detection methods include direct detection, indirect detection, and collider detection, often described as the approaches of “reaching for the sky, delving into the earth, and venturing to the Antarctic.” Xenon, an inert element, has become an optimal target material for the direct detection of WIMP dark matter particles owing to its unique physical and chemical properties. Internationally, major detection experiments using liquid xenon as the target material include XENON, LUX-ZEPLIN(LZ), and PandaX, and the dual-phase time projection chamber(DPTPC) is the core technology for dark matter detection. As one of the most cutting-edge scientific directions of the 21st century, dark matter research may hold the key to revealing the fundamental laws of the universe. By elaborating on a series of achievements in the field of dark matter research, this paper enables readers to understand international experimental technologies and scientific progress in liquid xenon–based dark matter detection. [Progress] XENON10 was the world's first liquid xenon–based dark matter detection experiment; it set a limit of 8.8 × 10⁻⁴⁴ cm² for WIMPs with a mass of 100 GeV/c². XENON100 increased the target mass to 10 times that of XENON10, allowing it to place a stringent limit of 2.0 × 10⁻⁴⁵ cm² on the spin-independent nucleon scattering of WIMPs with a mass of 55 GeV/c². With a target mass 32 times that of XENON100, XENON1T set a cross-section limit of 1.6 × 10⁻⁴⁷ cm² for spin-independent WIMP–nucleon scattering at a mass of 50 GeV/c².
XENONnT is a rapid upgrade based on XENON1T; using 97.1 days of valid data, it reported a minimum upper cross-section limit of 2.58 × 10⁻⁴⁷ cm² for spin-independent WIMP–nucleon scattering at a mass of 28 GeV/c². Using 85.3 days of scientific data, LUX set a minimum upper cross-section limit of 7.6 × 10⁻⁴⁶ cm² for spin-independent WIMP–nucleon scattering at a mass of 33 GeV/c². An analysis of its complete 95-day dataset yielded a minimum upper cross-section limit of 6.0 × 10⁻⁴⁶ cm² at the same mass. Based on 332 days of newly acquired data, LUX set an upper exclusion limit of 2.2 × 10⁻⁴⁶ cm² for spin-independent WIMP interactions at a mass of 50 GeV/c². A combined analysis of two LUX phases excluded a minimum cross-section of 1.1 × 10⁻⁴⁶ cm² for WIMPs at 50 GeV/c² and set a minimum upper cross-section limit of 1.6 × 10⁻⁴¹ cm² for spin-dependent WIMP–nucleon scattering at a mass of 35 GeV/c². The ZEPLIN-III detector, using 319 days of data, excluded a cross-section limit of 4.8 × 10⁻⁴⁴ cm² for WIMPs with a mass near 50 GeV/c². LZ, an upgraded experiment building on the foundations of LUX and ZEPLIN-III, used 60 days of valid data collected between 2021 and 2022 to set a cross-section limit of 9.2 × 10⁻⁴⁸ cm² for spin-independent WIMP–nucleon scattering at a mass of 36 GeV/c². By combining 220 days of data collected between 2023 and 2024 with the previous 60-day dataset, LZ placed world-leading limits on spin-independent and spin-dependent WIMP–nucleon scattering interactions for WIMP masses ≥9 GeV/c². [Conclusions and Prospects] To achieve more precise detection of dark matter signals, the XLZD collaboration has been established by the XENON, LZ, and DARWIN experiments, with the goal of constructing and operating a next-generation dark matter detector.
The XLZD detector will exceed the scale of DARWIN and further close the gap to the “neutrino fog.” With the completion of the next-generation dark matter detector by 2030, XLZD may enable the discovery of dark matter particles in the universe. This breakthrough would help humanity solve a puzzle that has persisted for nearly a century and open new avenues for understanding the universe.
[Objective] Salicylic acid(SA) is a phenolic hormone widely distributed in plant tissues, exerting critical regulatory roles in growth, development, and stress resistance. It also serves as a precursor to the pharmaceutical aspirin. SA has recently attracted increasing attention as a key secondary metabolite in plant biology and biomedical research. The isochorismate synthase(ICS) pathway has traditionally been considered the conserved route for SA biosynthesis, starting from chorismic acid through the sequential action of ICS1, EDS5, PBS3, and EPS1. However, this view has been challenged by the recently identified phenylalanine ammonia lyase(PAL) pathway, which begins with phenylalanine and proceeds through multiple enzyme-catalyzed reactions to form benzoyl-CoA; the newly discovered enzymes BEBT/OSD2, BBO/BBH/OSD3, and BSH/BSE/OSD4 then ultimately synthesize SA. Accumulating evidence suggests that the PAL pathway is the ancient and conserved route for SA biosynthesis. A robust extraction and analytical method was developed to accurately detect the intermediates of the two pathways and identify the active one in plants. [Methods] Leaves were flash-frozen in liquid nitrogen, ground into a fine powder using a tissue grinder, and extracted with 80% methanol at a ratio of 1 mL per 0.1 g of fresh sample. The mixture was sonicated for 10 min and centrifuged at 14,000 g and 4 ℃ for 10 min. The supernatant was transferred to a fresh centrifuge tube. The pellet was re-extracted with 0.5 mL of 100% methanol, and the supernatants from the two extractions were combined. The pooled extract was passed through a 0.22 μm microporous membrane filter and transferred into a brown autosampler vial for metabolic analysis. The major intermediate metabolites of the benzoyl-CoA-dependent PAL pathway in Nicotiana benthamiana—including benzyl benzoate, benzyl salicylate, and SA—were analyzed by ultra-performance liquid chromatography-tandem mass spectrometry(UPLC-MS/MS).
A C18 column was employed for separation and purification. The MS conditions included electrospray ionization, multiple reaction monitoring, and detection in positive and negative ion modes. Standard metabolite solutions were serially diluted to final concentrations of 1, 5, 50, 100, and 500 ng/mL and quantified using calibration curves. [Results] This method exhibited excellent linearity for the intermediates of the SA biosynthesis pathways, with correlation coefficients reaching 1.0, indicating robust sensitivity and accuracy. Arabidopsis thaliana and N. benthamiana leaves were inoculated with Pseudomonas syringae pv. tomato DC3000. In A. thaliana, the levels of the ICS pathway intermediates, isochorismic acid and isochorismic acid-glutamine, were significantly elevated, along with a marked accumulation of SA. In contrast, in N. benthamiana, the levels of these intermediates did not change markedly, whereas SA levels were robustly elevated. Leaves of the N. benthamiana mutants bbo and bsh inoculated with DC3000 showed distinct metabolic alterations: benzyl benzoate and benzyl salicylate accumulated in bbo and bsh, respectively, whereas the SA content in both mutants was markedly reduced compared with the wild type. These results confirm that the PAL pathway is the predominant route for SA biosynthesis in N. benthamiana. [Conclusions] A robust and sensitive UPLC-MS/MS method was established for detecting the key intermediates of the ICS and PAL pathways. It features straightforward sample pretreatment, a short analysis time, high detection sensitivity, and broad applicability. This approach not only facilitates the accurate quantification of intermediates but also provides a valuable tool for dissecting pathway dynamics and advancing research into SA biosynthesis regulation.
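The calibration step described above amounts to an ordinary least-squares fit of peak area against the standard concentrations (1, 5, 50, 100, and 500 ng/mL), after which unknowns are back-calculated from the fitted line. The peak-area values below are invented mock data, not measurements from the study.

```python
# Minimal sketch of calibration-curve quantification: fit peak area vs.
# concentration by least squares and back-calculate an unknown sample.
# Concentrations come from the abstract; peak areas are invented.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

conc = [1, 5, 50, 100, 500]                 # standard series (ng/mL)
area = [210, 1050, 10500, 21000, 105000]    # hypothetical, perfectly linear
a, b, r2 = linear_fit(conc, area)
unknown = (52_500 - b) / a                  # back-calculated concentration
```

With perfectly linear mock data the fit returns r² = 1.0, mirroring the correlation coefficients reported in the abstract; real chromatographic data would scatter slightly around the line.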
[Objective] In recent years, with the rapid development of large model technology, research on robotic arm manipulation based on large models has advanced and become a mainstream research direction. However, most existing educational platforms for robotic grasping focus on traditional algorithms and do not delve into integrating large-scale model-based technologies. This limits the students' understanding of cutting-edge intelligent grasp methods. To address this issue, an intelligent experimental teaching platform for robotic arm grasping was designed and developed based on domestic large models. The new platform integrates advanced technologies such as multimodal human–computer interaction, decision-making by large-language-model agents, open-vocabulary visual detection, and fine-tuning of large models. [Methods] The platform has a modular design comprising three core functional modules. First, a human–robot voice interaction channel was constructed using speech recognition and synthesis APIs to support Mandarin and English instructions and feedback. Second, the locally deployed DeepSeek, fine-tuned with Prompt-Tuning or QLoRA, was used to develop an agent system. This system parses user instructions, automatically decomposes them into executable task steps, and generates corresponding robotic-arm motion-control sequences. Third, the locally deployed Grounding DINO or the cloud-based multimodal Qwen model was used to recognize and locate target objects of any category, returning their position coordinates in the image workspace. In addition, a simulation environment was developed for the experimental platform. This environment was built using CoppeliaSim software, thereby significantly reducing experimental costs and facilitating offline learning for students. 
[Results] The results of physical and simulated grasping experiments on the designed platform revealed that the robotic arm grasping system performed complex grasping tasks through natural language interaction. The fine-tuned large-language model effectively parsed user instructions, automatically decomposed complex tasks into executable steps, and generated corresponding robotic-arm motion-control sequences in a rational manner. The deployed vision-language models detected various object types and provided accurate position coordinates for robotic-arm grasping. [Conclusions] The developed platform helps students to gain a deep understanding of the core knowledge needed for tasks such as robotic visual control and motion planning. In addition, it stimulates their research interests in artificial intelligence and robotics and cultivates their interdisciplinary innovation capabilities.
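The instruction-to-motion pipeline described above can be illustrated schematically: a user instruction is decomposed into task steps, each mapped to a motion-control primitive. In the platform itself the decomposition is performed by the fine-tuned large language model; the rule-based `decompose` stand-in, the step names, and the call strings below are invented placeholders.

```python
# Schematic sketch of the agent pipeline: instruction -> task steps ->
# motion-control sequence. The decomposition here is a toy rule-based
# stand-in for the fine-tuned LLM; all primitive names are hypothetical.

PRIMITIVES = {
    "locate": "CALL vision_detect(target)",
    "approach": "CALL move_to(x, y, z_above)",
    "grasp": "CALL close_gripper()",
    "place": "CALL move_to(goal); CALL open_gripper()",
}

def decompose(instruction: str) -> list[str]:
    """Toy stand-in for the LLM's task decomposition."""
    if "grasp" in instruction or "pick" in instruction:
        return ["locate", "approach", "grasp", "place"]
    return ["locate"]

def plan(instruction: str) -> list[str]:
    """Turn an instruction into an executable motion-control sequence."""
    return [PRIMITIVES[step] for step in decompose(instruction)]

sequence = plan("pick up the red cube and place it in the box")
```

In the real system the `locate` step would be served by Grounding DINO or the Qwen multimodal model, which returns the target's image-workspace coordinates consumed by the motion primitives.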
[Objective] As mineral resource extraction in China continues to advance deeper into the Earth's crust, underground engineering faces formidable challenges such as high surrounding-rock stresses, severe mining-induced disturbances, and difficulty of fracturing hard rock formations. The solution to these problems is closely related to the mechanical properties of rock, and the fracture characteristics of rock are a research focus. Primary defects, such as joints and fractures, occur randomly within rock masses. The deformation and failure of rock structures are inevitably linked to the evolution of microcracks. Moreover, since the tensile strength of rock is much lower than its compressive strength, examining its tensile failure characteristics is critical. [Methods] In this study, Brazilian splitting tests were conducted on coal, sandy mudstone, and shale samples. The process was comprehensively monitored using both the JM3813 multifunctional static strain gauge and the DS5 series full-information acoustic emission(AE) signal system. The study examined the deformation characteristics, evolution of AE ringing counts and energy parameters, and failure modes during the splitting fracture processes of coal, sandy mudstone, and shale. Concurrently, computed tomography(CT) scanning technology and AVIZO 3D reconstruction software were utilized for an in-depth analysis of the pore and fracture characteristics and fracture morphology of the coal and rock after splitting. The analysis revealed the intrinsic fracture mechanisms within the coal and rock. [Results] After the shale reached the peak strength(8.26 MPa), the stress fell sharply in a “cliff-like” manner. The sharp drop was mainly attributed to the rapid propagation of internal microcracks in the shale under tensile stress, leading to macroscopic fracture and the typical brittle behavior. The sandy mudstone underwent a deformation process similar to that of the coal sample. 
However, the peak stress of the coal sample(0.83 MPa) was significantly lower than that of the sandy mudstone(1.95 MPa) and was only one-tenth of the shale's strength. The strain values within the three deformation zones of the coal sample, sandy mudstone, and shale generally followed the pattern Strain 3 > Strain 2 > Strain 1. AE ring counts, cumulative ring counts, energy, and cumulative energy increased with stress accumulation. Classifying failure modes by comparing the AF/RA ratio with 1 revealed that shear fractures dominated over tensile fractures in the coal samples; however, the maximum energy event occurred during tensile fracturing. Sandy mudstone and shale exhibited more tensile fractures than shear fractures. AVIZO 3D reconstruction revealed that the coal samples contained numerous large-scale fissure structures. Following shape filtering and binary conversion in image processing, the fracture surfaces in coal, sandy mudstone, and shale exhibited bold “Y”-shaped, conventional “T”-shaped, and finer “L”-shaped patterns, with areas of 0.002 95, 0.002 65, and 0.001 73 m², respectively. [Conclusions] Coal is a porous medium with a tensile strength significantly lower than that of shale and sandy mudstone. The splitting failure process of all three materials in the Brazilian splitting tests generally proceeds from the bottom to the top. AE parameters(i.e., ring count, cumulative ring count, energy, and cumulative energy) are closely correlated with the stress state of the material. The failure processes of coal, sandy mudstone, and shale all exhibit characteristics consistent with macroscopic tensile splitting failure. However, the fracture morphology of the coal samples is more complex than that of sandy mudstone and shale.
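One common reading of the AF/RA criterion mentioned above is that an AE event whose average frequency to rise-angle ratio exceeds 1 is counted as tensile and otherwise as shear; the unit threshold and the (AF, RA) pairs below are illustrative assumptions, not measured data or the study's exact classification line.

```python
# Sketch of AF/RA-based crack classification: an AE event with
# AF/RA > 1 is counted as tensile, otherwise as shear. The unit
# threshold and the event list are illustrative assumptions.

def classify(af: float, ra: float) -> str:
    """Classify one AE event from average frequency AF and rise angle RA."""
    return "tensile" if af / ra > 1 else "shear"

def fracture_counts(events):
    """Tally tensile vs. shear events over a test."""
    counts = {"tensile": 0, "shear": 0}
    for af, ra in events:
        counts[classify(af, ra)] += 1
    return counts

# Hypothetical events (AF in kHz, RA in ms/V).
events = [(120, 40), (30, 90), (200, 50), (15, 60), (80, 20)]
counts = fracture_counts(events)
```

Applied to a full test record, such tallies give the tensile/shear proportions that the abstract compares across coal, sandy mudstone, and shale.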
[Objective] MEMS piezoelectric hydrophones offer advantages such as small size, high sensitivity, and passive operation, making them valuable for underwater detection, marine science, and defense applications. However, the electrical signals they generate are extremely weak and easily affected by noise, requiring analog front-end preprocessing. Existing solutions primarily rely on discrete components or large instruments, resulting in large size, limited integration, and restricted applicability, which hinder the miniaturization and array-based development of hydrophone systems. Therefore, developing a high-performance, low-noise, dedicated analog front-end chip holds significant engineering value and strategic importance. [Methods] This study presents a low-noise, dedicated analog front-end chip for MEMS piezoelectric hydrophones fabricated using a 0.18 μm CMOS process. Powered by a ±2.5 V supply, the chip integrates key modules including a charge-sensitive amplifier(CSA), a programmable gain amplifier(PGA), a switched-capacitor low-pass filter(SC-LPF), and an automatic gain control(AGC) loop. The CSA employs a hybrid chopping stabilization and auto-zeroing scheme to effectively suppress low-frequency and folding noise while avoiding the input impedance degradation observed in conventional architectures. The PGA provides 64 gain settings through a capacitor array, and the SC-LPF offers 16 selectable cutoff frequencies, enhancing adaptability across application scenarios. The AGC, based on a digital feedback strategy, dynamically regulates gain through peak detection and hysteresis comparison to maintain optimal amplitude range. Additional on-chip modules, including a clock generator, reference source, and power-on reset circuit, ensure stable system operation. [Results] Post-layout simulation results show an equivalent input noise spectral density of 231 nV/√Hz at 1 Hz and 59 nV/√Hz at 1 kHz, demonstrating excellent low-noise performance.
The nonlinearity between input charge and output voltage remains below 1.26%, indicating high linearity. The power supply rejection ratio exceeds 105 dB at low frequencies, and the common-mode rejection ratio exceeds 95 dB across most process corners, confirming strong immunity to interference. Transient simulations verify that the AGC effectively adjusts gain in response to input amplitude variations. The total layout area is 1 147 μm × 762 μm, and compared with previously reported designs, the chip achieves superior noise performance, programmability, and integration. [Conclusions] This study introduces a high-precision, low-noise front-end chip with programmable gain and bandwidth, tailored for MEMS piezoelectric hydrophones. The incorporation of advanced noise-suppression techniques and a flexible architecture substantially enhances signal conditioning performance and application versatility. Simulation results confirm its advantages in noise, linearity, and interference rejection, meeting the stringent requirements of weak underwater signal detection. The proposed chip provides a practical pathway toward the miniaturization and array implementation of hydrophone systems. Future research will focus on further reducing power consumption and chip area while maintaining high performance, thereby supporting broader real-world deployment of array systems, enhancing array sensor detection accuracy, and strengthening applications in marine exploration, ecological monitoring, military reconnaissance, and related fields. Additionally, the adjustable-gain-and-bandwidth architecture proposed in this study establishes a foundation for flexible configuration and multifunctionality in future sensor systems, offering strong scalability and adaptability.
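The digital AGC strategy described above (peak detection plus hysteresis comparison driving a gain code) can be sketched behaviorally. The threshold voltages and step size below are illustrative assumptions; only the 64-setting gain range comes from the abstract.

```python
# Behavioral sketch of the digital AGC loop: a peak detector estimates
# output amplitude, and a hysteresis comparator steps the PGA gain code
# up or down only when the peak leaves a dead band. Thresholds and the
# single-step adjustment are illustrative assumptions.

HIGH, LOW = 0.9, 0.3        # hysteresis thresholds (V), hypothetical
GAIN_MIN, GAIN_MAX = 0, 63  # 64 PGA gain settings, per the abstract

def agc_step(gain_code: int, peak: float) -> int:
    """One AGC decision: adjust the gain code based on the detected peak."""
    if peak > HIGH and gain_code > GAIN_MIN:
        return gain_code - 1        # output too large: reduce gain
    if peak < LOW and gain_code < GAIN_MAX:
        return gain_code + 1        # output too small: increase gain
    return gain_code                # inside dead band: hold (hysteresis)

# Simulate a few control cycles with hypothetical peak readings.
gain = 32
for peak in [0.95, 0.95, 0.5, 0.1]:
    gain = agc_step(gain, peak)
```

The dead band between the two thresholds is what prevents the gain from chattering when the signal amplitude hovers near a single threshold.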
[Objective] With the rapid development of autonomous driving technology, accurate perception of the surrounding environment has become increasingly critical, and 3D environment perception has emerged as a major research focus in this field. Traditional 3D perception systems rely heavily on expensive sensors such as LiDAR, which offer high accuracy but incur substantial costs and computational demands, limiting their scalability in large autonomous vehicle fleets. Although more recent 3D occupancy prediction methods rely solely on multicamera inputs, they typically require supervised learning with annotated 3D occupancy data, which is costly to obtain and consumes substantial memory. To address these challenges, this article proposes Image2Occupancy, an improved 3D Gaussian-splatting-based occupancy prediction method that uses only 2D surround-view camera images. The method enables effective semantic occupancy prediction of 3D scenes while reducing the need for annotated data and large memory capacity. [Methods] The Image2Occupancy framework consists of two components:(1) 2D-to-3D feature extraction and spatial mapping, and(2) self-supervised 3D occupancy representation learning. In the first component, BEVStereo and Swin Transformer modules extract 2D features from panoramic input images. These features are then interpolated and mapped to 3D space using the intrinsic and extrinsic parameters of the camera, yielding voxel-level feature representations. This process converts 2D image information into 3D semantic occupancy cues, providing accurate input for subsequent self-supervised learning. In the second component, an improved Gaussian splatting technique projects 3D voxel features back onto the 2D image plane while preserving semantic information. Gaussian points placed at each voxel center approximate scene occupancy, enabling rendering of semantic and depth maps by computing pixel-level depth and semantic information.
A novel self-supervised learning framework generates pseudo-labels from the model's predicted depth and semantic maps, eliminating the need for real 3D occupancy labels. A specialized loss function, combining cross-entropy and depth losses, minimizes discrepancies between rendered and ground-truth semantic and depth maps, optimizing prediction accuracy. [Results] Experiments on the NuScenes dataset show that Image2Occupancy achieves an mIoU of 27.87, improving performance by 3.94 percentage points(a 16.5% increase) over existing 2D-input methods and performing comparably to, or better than, several 3D-input methods. Compared with NeRF-based approaches, GPU memory usage is reduced by 54.7% while maintaining the same number of Gaussian points. Ablation studies further validate the effectiveness of the method's core components. [Conclusions] Image2Occupancy reduces hardware dependence and substantially decreases the need for large annotated datasets through self-supervised learning, offering a cost-effective and scalable 3D environment perception solution for autonomous driving systems with strong potential for practical deployment.
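The combined loss described above can be written as a per-pixel cross-entropy term on the rendered semantic probabilities plus a depth term; the sketch below uses an L1 depth error and a weighting coefficient λ, both of which are assumptions, since the abstract does not specify the exact depth-loss form or weighting.

```python
# Minimal sketch of the combined semantic + depth loss: mean cross-
# entropy over rendered class probabilities plus a weighted L1 depth
# term. The L1 form and the lambda weight are illustrative assumptions.
import math

def semantic_ce(probs, labels):
    """Mean cross-entropy; probs[i] is a class distribution for pixel i."""
    return -sum(math.log(p[l]) for p, l in zip(probs, labels)) / len(labels)

def depth_l1(pred, gt):
    """Mean absolute depth error over pixels."""
    return sum(abs(a - b) for a, b in zip(pred, gt)) / len(gt)

def total_loss(probs, labels, d_pred, d_gt, lam=0.5):
    """lam weights the depth term; its value here is an assumption."""
    return semantic_ce(probs, labels) + lam * depth_l1(d_pred, d_gt)

# Two-pixel toy example with three semantic classes.
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
labels = [0, 1]
loss = total_loss(probs, labels, d_pred=[5.0, 10.0], d_gt=[5.5, 9.0])
```

In the actual framework the "ground truth" on the right-hand side of both terms is the pseudo-label generated by the model itself, which is what makes the training self-supervised.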
[Objective] Proton exchange membrane fuel cells are pivotal to the global energy transition; however, their catalysts exhibit high sensitivity to CO poisoning. CO preferential oxidation(CO-PROX) serves as the core technology for hydrogen purification, and honeycomb ceramics are ideal supports for CO-PROX catalysts. However, raw cordierite honeycomb ceramics(2MgO·2Al₂O₃·5SiO₂) have drawbacks, including a low specific surface area and poor coating adhesion, which limit catalytic performance. Oriented toward cultivating scientific thinking in teaching practice, this study investigates how pretreatment of honeycomb ceramic supports affects catalytic performance in CO-PROX under hydrogen-rich conditions. It aims to enhance support performance through pretreatment optimization and establish an experimental teaching paradigm that progresses from single-factor to multiparameter optimization. [Methods] Single-factor experiments were first conducted to screen the reasonable operating ranges of key parameters as a basis for systematic optimization of the pretreatment process. Employing the coating loading rate and catalytic activity(correlated with subsequent T50/T90 indicators) as evaluation criteria, this study investigated the independent effects of acid treatment time(1–3 h), nitric acid concentration(1–3 mol/L), calcination temperature(300–500 ℃), and calcination time(1–3 h). This step excluded support structure damage and ineffective modifications caused by excessive parameter values, and the study then determined the effective range for subsequent multifactor optimization. Based on the results, a response surface methodology(RSM) model was constructed using a four-variable central composite rotatable design. A total of 30 experiments were designed, comprising 16 full-factor points covering different level combinations of the 4 parameters, 8 axial points to expand the response at the parameter boundaries, and 6 center repeat points to evaluate experimental errors.
The temperatures at which CO conversion reached 50%(T50) and 90%(T90) were used as response values. The RSM model's visual analysis function enabled intuitive identification of parameter interactions and facilitated determination of the parameter combination that minimized T50 and T90 to optimal levels. The model fitting effect was verified to ensure consistency between the experimental data and the predicted results. Finally, the pretreatment process parameters were systematically optimized and verified, and model fitting was used to analyze synergistic effects between acid treatment time, acid concentration, calcination temperature, and calcination time to determine the optimal process parameters. [Results] The single-factor experiments revealed that treating the supports with 1 mol/L nitric acid for 2–3 h effectively optimized their specific surface area and surface roughness, thereby improving coating loading rate. Additionally, calcination at 400 ℃ for 1 h enhanced the pore structure and modified the surface chemical state. The RSM-based model demonstrated strong agreement between predicted and experimental values. The optimal process parameters were identified as a 2.5 h treatment with 1 mol/L nitric acid, followed by calcination at 400 ℃ for 1 h, which significantly enhanced catalytic activity. The analysis of the RSM model revealed that acid treatment time, acid concentration, and calcination temperature exhibit notable synergistic effects on catalytic performance, whereas calcination time shows negligible interactions and can thus be optimized independently. [Conclusions] This study offers a reference for process development in catalytic chemical systems and presents an instructional framework to enhance students' capabilities in multifactor coupling analysis.
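The run count of the central composite rotatable design described above follows directly from its structure: for k factors there are 2^k factorial points, 2k axial points, and a chosen number of center replicates, with axial distance α = 2^(k/4) for rotatability. The sketch below reproduces the 30-run count from the abstract; the α formula is the standard rotatability condition, stated here as background rather than taken from the abstract.

```python
# Sketch of the central composite rotatable design (CCD) bookkeeping:
# 2**k factorial points + 2*k axial points + center replicates, with
# axial distance alpha = 2**(k/4) for rotatability.

def ccd_size(k: int, n_center: int) -> int:
    """Total runs in a central composite design with k factors."""
    return 2 ** k + 2 * k + n_center

def rotatable_alpha(k: int) -> float:
    """Axial distance that makes the design rotatable."""
    return 2 ** (k / 4)

runs = ccd_size(k=4, n_center=6)    # 16 factorial + 8 axial + 6 center
alpha = rotatable_alpha(4)          # axial points sit 2 coded units out
```

For the four pretreatment factors this gives exactly the 30 experiments reported, with axial points at ±2 in coded units, i.e., just beyond the factorial cube.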
[Objective] Urban traffic link tunnels, mainly composed of bifurcated tunnels, have been rapidly constructed. A fire in this complex tunnel would cause more serious facility damage and casualties due to multipath smoke propagation and a more uneven temperature distribution compared with an ordinary single tunnel. Furthermore, a bifurcated tunnel contains different tunnel slopes and bifurcation angles to connect surface and underground transportation systems. However, previous research on tunnel fires has mainly focused on a single ordinary tunnel or a horizontal bifurcated tunnel; fires in an inclined bifurcated tunnel have rarely been studied. To clarify the mechanism of smoke propagation and the temperature profile in a bifurcated tunnel, the present study conducted a series of small-scale experiments to investigate the maximum ceiling temperature in a bifurcated tunnel with an inclined mainline. [Methods] Froude's similarity criterion was used to guide the design of the small-scale experimental bench. A 1/20 scale bifurcated tunnel platform was constructed, consisting of a 10 m mainline and a 4 m ramp, with a cross-section of 0.25 m × 0.5 m. Three bifurcation angles(10°, 20°, and 30°), five mainline tunnel slopes(0%, 1%, 3%, 5%, and 7%), and three heat release rates(1.12, 1.64, and 2.8 kW) were considered. Different longitudinal ventilation velocities supplied from the mainline before shunting were used for analyzing their effects on smoke propagation and temperature distribution. The temperature at the tunnel ceiling and along the tunnel centerline was detected and analyzed. The effects of the bifurcation angle and the mainline slope on the maximum ceiling temperature were investigated, and an empirical model was developed to predict it in a bifurcated tunnel. [Results and Conclusions] Experimental results showed that the heat release rate significantly affected the maximum ceiling temperature, with higher rates resulting in higher maximum ceiling temperatures. 
The larger bifurcation angle resulted in a higher maximum ceiling temperature at relatively low longitudinal ventilation; however, its effect on the maximum ceiling temperature was limited when the longitudinal ventilation velocity exceeded 0.2 m/s. In particular, the maximum ceiling temperature was more sensitive to the bifurcation angles at a relatively low heat release rate. The maximum ceiling temperature decreased with increasing longitudinal ventilation because of the cooling effect and the flame tilting effect. The maximum ceiling temperature decreased with increasing mainline slope as the stronger stack effect improved the induced airflow velocity. The effect of the mainline slope on the maximum ceiling temperature was more pronounced when the slope was <3%, but this effect weakened when the slope was >3%. The maximum ceiling temperature in the bifurcated tunnel could not be accurately predicted using previous empirical models, as these models were developed based on tests conducted for ordinary single-line or horizontally branched tunnels. Therefore, a predictive model for the maximum ceiling temperature in a branched tunnel with a mainline slope was developed by accounting for the mainline slope, heat release rate, bifurcation angle, and longitudinal ventilation velocity. This study contributes to understanding smoke propagation and provides a validated tool for evaluating maximum temperature in a bifurcated tunnel.
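Froude similarity, which guided the 1/20 bench design above, fixes how quantities transfer between model and full scale: with length ratio λ, velocity scales as λ^(1/2) and heat release rate as λ^(5/2). The full-scale input values below are illustrative, chosen only to show the arithmetic.

```python
# Sketch of Froude scaling for the 1/20 bench: with length ratio
# SCALE = L_model / L_full, velocity scales as SCALE**0.5 and heat
# release rate as SCALE**2.5. Full-scale inputs are hypothetical.

SCALE = 1 / 20  # model-to-full-scale length ratio

def model_velocity(v_full: float) -> float:
    """Longitudinal ventilation velocity in the model (m/s)."""
    return v_full * SCALE ** 0.5

def model_hrr(q_full_kw: float) -> float:
    """Heat release rate in the model (kW)."""
    return q_full_kw * SCALE ** 2.5

v_m = model_velocity(2.0)      # a hypothetical 2.0 m/s full-scale airflow
q_m = model_hrr(5000.0)        # a hypothetical 5 MW full-scale fire
```

A 5 MW full-scale fire maps to roughly 2.8 kW at 1/20 scale, which is consistent with the upper heat release rate (2.8 kW) used in the experiments.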
[Objective] The construction of high-fidelity three-dimensional(3D) models for large-scale engineering projects remains a challenge because of the prolonged data acquisition cycles inherent to such tasks. These extended cycles often lead to overall visual dimness of the reconstructed models, as well as practical difficulties in coordinating heterogeneous resources across multiple stakeholders. Addressing these challenges requires integrating advanced data acquisition technologies with efficient modeling pipelines and user-oriented platforms for planning and analysis. [Methods] To this end, this study proposes a novel engineering planning platform that combines unmanned aerial vehicle(UAV) oblique photogrammetry, realistic 3D modeling, and modern web-based development frameworks into a unified technical workflow. The proposed platform aims to improve the perceptual quality of 3D reconstructions and to meet broader demands for resource coordination, visualization, and interactive planning in complex engineering scenarios. The 3D reconstruction component of the platform is based on widely adopted structure-from-motion(SfM) and multiview stereo(MVS) methods, which together form the foundation of contemporary photogrammetric pipelines. However, conventional implementations of SfM-MVS often suffer from illumination inconsistencies and texture degradation, especially when applied to extensive scenes collected over long time spans. To overcome these limitations, this study introduces a block-input redundancy strategy. Instead of using a single block point cloud from the SfM phase as the sole input to the MVS phase, the proposed method repeatedly feeds identical block data into the MVS reconstruction process. This design leverages the color-consistency fusion mechanism and voxel-level color aggregation algorithms, thereby suppressing local illumination discrepancies and enhancing global texture fidelity. 
Therefore, the reconstructed models exhibit improved visual brightness, greater detail preservation, and more consistent rendering across large spatial extents. [Results] To evaluate the practicality and effectiveness of the proposed approach, the methodology was applied to the Pinglu Canal Project, a representative large-scale infrastructure undertaking in China. The experimental results showed that the redundant block-input method significantly enhances the visual realism and coherence of 3D models, particularly in brightness and texture clarity. In parallel, the web-based platform supports data analysis, multisource information integration, and scenario-driven planning. Its interactive visualization functions enable engineers, planners, and decision-makers to collaboratively explore 3D environments, allocate resources, and simulate planning outcomes with enhanced efficiency and transparency. Such capabilities are essential for advancing the digital transformation of engineering practices, where the integration of accurate spatial data with intuitive decision-support systems is becoming increasingly indispensable. [Conclusions] Overall, the findings of this study highlight both the methodological innovation and the practical significance of the proposed platform. The combination of UAV-based photogrammetry, enhanced SfM-MVS reconstruction, and web-enabled planning tools is a robust and adaptable solution for large-scale engineering applications. Furthermore, the scalability and generalizability of the block-input redundancy method imply that it can be applied to other large-scale projects requiring high-quality 3D reconstructions under variable acquisition conditions. By incorporating advanced photogrammetric algorithms with engineering planning, this research contributes to the ongoing informatization of infrastructure projects and offers a feasible technological pathway for integrating 3D modeling with intelligent decision support.
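The color-consistency idea behind the block-input redundancy strategy can be illustrated at the voxel level: when the same block is fed into the MVS stage repeatedly, each voxel accumulates several color observations, and aggregating them damps local illumination outliers. The channel-wise mean below is a simple illustrative stand-in for the pipeline's actual fusion and voxel-level aggregation algorithms.

```python
# Sketch of voxel-level color aggregation under redundant block input:
# repeated RGB observations of one voxel are fused (here by channel-wise
# mean) so that illumination outliers are damped. The mean rule is an
# illustrative stand-in for the platform's actual fusion algorithm.

def aggregate_voxel_color(observations):
    """Fuse repeated RGB observations of one voxel by channel-wise mean."""
    n = len(observations)
    return tuple(sum(obs[c] for obs in observations) / n for c in range(3))

# Hypothetical repeated observations of one voxel under varying lighting.
obs = [(120, 100, 90), (128, 104, 94), (124, 102, 92)]
fused = aggregate_voxel_color(obs)
```

A production pipeline might instead use a robust statistic (median or an outlier-rejecting weighted mean) so that a single badly lit pass cannot bias the fused texture; the averaging sketch is only meant to show why feeding identical blocks more than once changes the fused result at all.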