In recent years, there has been a significant increase in research focused on the growth of large-area single crystals. Rajan et al. [1] recently achieved the growth of large-area monolayers of transition-metal chalcogenides through assisted nucleation. The quality of molecular beam epitaxy (MBE)-grown two-dimensional (2D) materials can be greatly enhanced by using sacrificial species deposited simultaneously from an electron beam evaporator during the growth process. This technique notably boosts the nucleation rate of the target epitaxial layer, resulting in large, homogeneous monolayers with improved quasiparticle lifetimes and fostering the development of epitaxial van der Waals heterostructures. Additionally, micrometer-sized silver films have been formed at the air-water interface by directly depositing electrospray-generated silver ions onto an aqueous dispersion of reduced graphene oxide under ambient conditions [2].
Massive amounts of data are acquired in modern and future information technology industries such as communication, radar, and remote sensing. The large dimensionality and size of these data offer new opportunities to enhance the performance of signal processing in such applications and even motivate new ones. However, the curse of dimensionality is always a challenge when processing such high-dimensional signals. In practical tasks, high-dimensional signals need to be acquired, processed, and analyzed with high accuracy, robustness, and computational efficiency. This special section aims to address these challenges: its articles develop new theories and methods that are best suited to the high-dimensional nature of the signals involved, and explore modern and emerging applications in this area.
Ni-Fe-based oxides are among the most promising catalysts developed to date for the bottleneck oxygen evolution reaction (OER) in water electrolysis. However, understanding and mastering the synergy of Ni and Fe remain challenging. Herein, we report that the synergy between Ni and Fe can be tailored by the crystal dimensionality of Ni,Fe-containing Ruddlesden-Popper (RP)-type perovskites (La_(0.125)Sr_(0.875))_(n+1)(Ni_(0.25)Fe_(0.75))_nO_(3n+1) (n = 1, 2, 3), where the material with n = 3 shows the best OER performance in alkaline media. Soft X-ray absorption spectra recorded before and after OER reveal that the material with n = 3 shows enhanced Ni/Fe-O covalency, which boosts electron transfer compared with the n = 1 and n = 2 materials. Further experimental investigations demonstrate that the Fe ion is the active site and the Ni ion is the stable site in this system, and this unique synergy reaches its optimum at n = 3. Besides, as n increases, the proportion of unstable rock-salt layers decreases and the leaching of ions (especially Sr^(2+)) into the electrolyte is suppressed, which reduces the leaching of active Fe ions and ultimately leads to enhanced stability. This work provides a new avenue for rational catalyst design through a dimensional strategy.
Integrable systems play a crucial role in physics and mathematics. In particular, the traditional (1+1)-dimensional and (2+1)-dimensional integrable systems have received significant attention due to the rarity of integrable systems in higher dimensions. Recent studies have shown that abundant higher-dimensional integrable systems can be constructed from (1+1)-dimensional integrable systems by using a deformation algorithm. Here we establish a new (2+1)-dimensional Chen-Lee-Liu (C-L-L) equation by applying the deformation algorithm to the (1+1)-dimensional C-L-L equation. The new system is integrable, with its Lax pair obtained by applying the deformation algorithm to that of the (1+1)-dimensional system. Obtaining exact solutions for the new integrable system is challenging because it combines both the original C-L-L equation and its reciprocal transformation. Traveling wave solutions are derived in implicit form, and some asymmetric peakon solutions are found.
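For context, the (1+1)-dimensional Chen-Lee-Liu equation belongs to the derivative nonlinear Schrödinger family; a commonly quoted normalization is sketched below in LaTeX. The sign and coefficient conventions may differ from those used in the paper.

```latex
% One common normalization of the (1+1)-dimensional Chen-Lee-Liu equation;
% sign and coefficient conventions vary across the literature.
\begin{equation}
  \mathrm{i}\,q_{t} + q_{xx} + \mathrm{i}\,|q|^{2}\,q_{x} = 0 .
\end{equation}
% The deformation algorithm promotes conserved quantities of such a
% (1+1)-dimensional system to dependence on a new spatial variable y,
% producing a (2+1)-dimensional integrable extension together with its Lax pair.
```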
Methane in-situ explosion fracturing (MISEF) enhances permeability in shale reservoirs by detonating desorbed methane to generate detonation waves in perforations. How fractures propagate in bedded shale under varying explosion loads remains unclear. In this study, prefabricated perforated shale samples with parallel and vertical bedding were fractured under five distinct explosion loads using a MISEF experimental setup. High-frequency explosion pressure-time curves were monitored within an equivalent perforation, and computed tomography scanning along with three-dimensional reconstruction techniques were used to investigate fracture propagation patterns. Additionally, the formation mechanism and influencing factors of explosion crack-generated fines (CGF) were clarified by analyzing the morphology and statistics of explosion debris particles. The results indicate that methane explosion generates oscillating-pulse loads within perforations. Explosion characteristic parameters increase with increasing initial pressure. Explosion load and bedding orientation significantly influence fracture propagation patterns. As initial pressure increases, the fracture mode transitions from bi-wing to 4–5 radial fractures. In parallel-bedding shale, radial fractures noticeably deflect along the bedding surface. Vertical bedding facilitates the development of transverse fractures oriented parallel to the cross-section. Bifurcation and merging of explosion-induced fractures generate CGF. CGF mass and fractal dimension increase, while average particle size decreases, with increasing explosion load. This study provides valuable insights into MISEF technology.
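As a rough illustration of one standard way to quantify debris fragmentation (not necessarily the procedure used in this study), the sketch below fits a power law N(>d) ∝ d^(−D) to a hypothetical cumulative particle-size distribution to estimate a fractal dimension; all numbers are made up.

```python
import numpy as np

# Hypothetical sieve data: mesh sizes (mm) and cumulative counts of
# particles larger than each size. Values are illustrative only.
d = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])       # particle size (mm)
n_gt = np.array([5200, 2100, 560, 190, 60, 11])    # count of particles > d

# For a fractal size distribution, N(>d) ~ d^(-D); D follows from the
# slope of log N(>d) versus log d.
slope, _ = np.polyfit(np.log(d), np.log(n_gt), 1)
D = -slope
print(f"fractal dimension of the size distribution: {D:.2f}")
```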
Fractal theory offers a powerful tool for the precise description and quantification of the complex pore structures in reservoir rocks, which is crucial for understanding the storage and migration characteristics of media within these rocks. Faced with the challenge of calculating the three-dimensional fractal dimensions of rock porosity, this study proposes an innovative computational process that calculates the three-dimensional fractal dimensions directly from a geometric perspective. By employing a composite denoising approach that integrates the Fourier transform (FT) and wavelet transform (WT), coupled with multimodal pore-extraction techniques such as threshold segmentation, top-hat transformation, and membrane enhancement, we crafted accurate digital rock models. The improved box-counting method was then applied to the voxel data of these digital rocks to calculate the fractal dimensions of the rock pore distribution. Further numerical simulations of permeability experiments were conducted to explore the physical correlations between the rock pore fractal dimensions, porosity, and absolute permeability. The results reveal that rocks with higher fractal dimensions exhibit more complex pore connectivity pathways and a wider, more uneven pore distribution, suggesting that ideal rock samples should possess lower fractal dimensions and higher effective porosity to achieve optimal fluid transmission properties. The methodology and conclusions of this study provide new tools and insights for the quantitative analysis of complex pores in rocks and contribute to the exploration of the fractal transport properties of media within rocks.
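To make the box-counting step concrete, here is a minimal sketch of plain 3D box counting on a binary voxel model; the "improved" box-counting method of the paper likely differs in box placement and fitting details, and the voxel data here are random.

```python
import numpy as np

def box_counting_dimension_3d(voxels: np.ndarray) -> float:
    """Estimate the 3D box-counting dimension of a binary voxel array.

    A plain implementation for illustration; the paper's improved
    box-counting method may differ in box placement and fitting details.
    """
    n = voxels.shape[0]                      # assume a cubic n^3 array
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s <= n // 2]
    counts = []
    for s in sizes:
        m = n // s
        v = voxels[:m*s, :m*s, :m*s]
        # Reduce each s^3 block to True if it contains any pore voxel.
        blocks = v.reshape(m, s, m, s, m, s).any(axis=(1, 3, 5))
        counts.append(blocks.sum())
    # Slope of log N(s) versus log(1/s) gives the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope

# Toy example: a random porous cube (illustrative, not a real rock).
rng = np.random.default_rng(0)
pores = rng.random((64, 64, 64)) < 0.2
print(f"estimated fractal dimension: {box_counting_dimension_3d(pores):.2f}")
```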
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Compensation of thermally induced positioning errors remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on the deformation intensity of the MT structure. The MT thermal behavior is first modeled using the finite element method and validated against temperature fields measured experimentally with temperature sensors and thermal imaging. The validation shows a maximum error of less than 10% between the numerical estimations and the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations under varied working conditions to explore relationships between the temperature distribution and the thermal deformation characteristics, and to select the most appropriate temperature-sensitive points for building an empirical model that predicts thermal errors as a function of the MT thermal state. Validation tests using a simplified model based on an artificial neural network confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
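As an illustration of the final modeling step, the sketch below trains a small neural-network regressor that maps readings from a few temperature-sensitive points to a thermally induced positioning error. The data, the network size, and the use of scikit-learn are assumptions made for this sketch; the paper's ANN architecture is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: readings from four temperature-sensitive
# points (features) and the thermally induced positioning error (target).
rng = np.random.default_rng(1)
T = rng.uniform(20.0, 45.0, size=(500, 4))               # sensor temperatures (°C)
error_um = 0.8*T[:, 0] - 0.3*T[:, 2] + rng.normal(0, 0.5, 500)  # toy relation (µm)

T_train, T_test, e_train, e_test = train_test_split(T, error_um, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(T_train, e_train)
print(f"R^2 on held-out data: {model.score(T_test, e_test):.3f}")
```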
With the extensive application of large-scale array antennas, the increasing number of array elements leads to received signals of increasing dimension, making it difficult to meet the real-time requirements of direction of arrival (DOA) estimation due to the computational complexity of the algorithms. Traditional subspace algorithms require estimation of the covariance matrix, which has high computational complexity and is prone to producing spurious peaks. To reduce the computational complexity of DOA estimation and improve its accuracy with large arrays, this paper proposes a DOA estimation method based on the Krylov subspace and a weighted l_(1)-norm. The method uses multistage Wiener filter (MSWF) iterations to obtain a basis of the Krylov subspace as an estimate of the signal subspace, uses a measurement matrix to reduce the dimensionality of the signal-subspace observation, constructs a weighting matrix, and combines sparse reconstruction to establish a convex optimization problem based on the residual sum of squares and the weighted l_(1)-norm, which is solved for the target DOA. Simulation results show that the proposed method has high resolution under large-array conditions, effectively suppresses spurious peaks, reduces computational complexity, and is robust in low signal-to-noise ratio (SNR) environments.
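The sketch below illustrates only the sparse-recovery core of such a method: a weighted l_(1)-regularized least-squares problem over an angular grid, solved with plain ISTA. The Krylov-subspace/MSWF dimensionality reduction and the data-derived weighting matrix of the paper are omitted, and the array geometry, weights, and parameters are assumptions.

```python
import numpy as np

def steering_matrix(n_elems, angles_deg, d=0.5):
    """ULA steering matrix with spacing d in wavelengths (half-wavelength here)."""
    k = np.arange(n_elems)[:, None]
    theta = np.deg2rad(np.asarray(angles_deg))[None, :]
    return np.exp(-2j * np.pi * d * k * np.sin(theta))

def weighted_l1_doa(y, A, w, lam=0.1, n_iter=500):
    """Weighted-l1 sparse recovery via ISTA:
    minimize ||y - A s||_2^2 + lam * sum_i w_i |s_i|."""
    L = np.linalg.norm(A, 2) ** 2            # ||A||^2; gradient Lipschitz constant is 2L
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = s - (A.conj().T @ (A @ s - y)) / L          # gradient step (size 1/(2L))
        mag = np.abs(g)
        s = g * np.maximum(mag - lam * w / (2 * L), 0) / np.maximum(mag, 1e-12)
    return s

# Toy scene: 8-element array, two sources at -10 and 25 degrees, one snapshot.
grid = np.arange(-90, 91)                    # angular search grid (degrees)
A = steering_matrix(8, grid)
y = steering_matrix(8, [-10, 25]) @ np.array([1.0, 0.8]) \
    + 0.05 * (np.random.randn(8) + 1j * np.random.randn(8))
spectrum = np.abs(weighted_l1_doa(y, A, w=np.ones(len(grid))))
# Crude peak pick for illustration only.
print("estimated DOAs (deg):", grid[np.argsort(spectrum)[-2:]])
```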
NGLY1 Deficiency is an ultra-rare autosomal recessively inherited disorder. Characteristic symptoms include, among others, developmental delays, movement disorders, liver function abnormalities, seizures, and problems with tear formation. Movements are hyperkinetic and may include dysmetric, choreo-athetoid, myoclonic, and dystonic movement elements. To date, there have been no quantitative reports describing arm movements of individuals with NGLY1 Deficiency. This report provides quantitative information about a series of arm movements performed by an individual with NGLY1 Deficiency and an age-matched neurotypical participant. Three categories of arm movements were tested: 1) open-ended reaches without specific end-point targets; 2) goal-directed reaches that included grasping an object; 3) picking up small objects from a table placed in front of the participants. Arm movement kinematics were obtained with a camera-based motion analysis system, and “initiation” and “maintenance” phases were identified for each movement. The combination of the two phases was labeled a “complete” movement. Three-dimensional analysis techniques were used to quantify the movements, including hand trajectory pathlength, joint motion area, and hand trajectory and joint jerk cost. These techniques were required to fully characterize the movements because the individual with NGLY1 Deficiency was unable to confine movements to the primary plane of progression, instead producing motion across all three planes of movement. The individual with NGLY1 Deficiency was unable to pick up objects from a table or effectively complete movements requiring crossing the midline. The successfully completed movements were analyzed using the above techniques, and the results of the two participants were compared statistically. Almost all comparisons revealed significant differences between the two participants, with the notable exception of the 3D initiation area as a percentage of the complete movement, for which the statistical tests revealed no significant differences, possibly suggesting a common underlying motor control strategy. The 3D techniques used in this report effectively characterized the arm movements of an individual with NGLY1 Deficiency and can be used to provide information to evaluate the effectiveness of genetic, pharmacological, or physical rehabilitation therapies.
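For concreteness, hand-trajectory pathlength and a jerk-cost smoothness measure can be computed from sampled 3D positions roughly as follows; normalization conventions for jerk cost vary between studies, and the trajectory here is synthetic.

```python
import numpy as np

def pathlength(pos):
    """Total 3D hand-trajectory pathlength from sampled positions (N x 3)."""
    return np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))

def jerk_cost(pos, dt):
    """Integrated squared jerk (third derivative of position), a common
    smoothness measure; normalization conventions vary across studies."""
    jerk = np.diff(pos, n=3, axis=0) / dt**3
    return np.sum(jerk**2) * dt

# Toy trajectory sampled at 100 Hz (illustrative, not patient data).
dt = 0.01
t = np.arange(0, 1, dt)
pos = np.stack([t, np.sin(2*np.pi*t), 0.1*t**2], axis=1)
print(f"pathlength: {pathlength(pos):.3f} m, jerk cost: {jerk_cost(pos, dt):.1f}")
```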
Machining is as old as humanity, and changes in temperature in both the machine’s internal and external environments can be of great concern, as they affect the machine’s thermal stability and, thus, its dimensional accuracy. This paper is a continuation of our earlier work, which aimed to analyze the effect of the internal temperature of a machine tool as the machine is put into operation while varying the external temperature, i.e., the machine floor temperature. Experiments are carried out under controlled conditions to study how machine tool components heat up and how this heating affects the machine’s accuracy through thermally induced deviations. Additionally, another angle is added by varying the machine floor temperature. The parameters mentioned above are explored in line with the overall thermal stability of the machine tool and its dimensional accuracy. A Robodrill CNC machine tool is used. The CNC was first soaked with thermal energy by gradually raising the machine floor temperature to a certain level before putting the machine into operation; the machine was monitored, and analytical methods were deployed to evaluate thermal stability. Secondly, the machine was run idle for some time under raised floor temperature before it was put into operation, and data were again collected and analyzed. It is observed that machine thermal stability can be achieved in several ways, depending on how the above parameters are combined. In conclusion, this paper reinforces the idea of a machine tool warm-up process, in conjunction with a carefully analyzed and established machine floor temperature variation, to approximate the machine tool’s thermal stability and map its long-term behavior.
This study presents a numerical analysis of three-dimensional steady laminar flow in a rectangular channel with a 180-degree sharp turn. The Navier-Stokes equations are solved using a finite difference method for Re = 900. Three-dimensional streamlines and limiting streamlines on the wall surfaces are used to analyze the three-dimensional flow characteristics. Topological theory is applied to the limiting streamlines on the inner walls of the channel and to two-dimensional streamlines at several cross sections. It is also shown that the flow impinges on the end wall of the turn and that a secondary flow is induced by the curvature in the sharp turn.
There is an urgent need to develop optimal solutions for deformation control of deep high-stress roadways, one of the critical problems in underground engineering. The previously proposed four-dimensional support (hereinafter 4D support), a new support technology, places the rock surrounding a roadway under three-dimensional pressure within a new balanced structure and prevents instability of the surrounding rock in underground engineering. However, the influence of roadway depth and creep deformation on surrounding rock supported by 4D support has remained unknown. This study investigated the influence of roadway depth and creep deformation time on the instability of surrounding rock by analyzing the energy development. The elastic strain energy was analyzed using a program redeveloped in FLAC3D. The numerical simulation results indicate that the combined support mode of 4D roof supports and conventional side supports is highly applicable to the stability control of surrounding rock at roadway depths exceeding 520 m. As roadway depth increases, 4D support effectively restrains the area and depth of plastic deformation in the surrounding rock. Further, 4D support limits the accumulation range and rate of elastic strain energy as the creep deformation time increases. 4D support can effectively reduce the plastic deformation of the surrounding rock and maintain stability over a long deformation period of 6 months. As confirmed by in situ monitoring results, 4D support is more effective than conventional support for the long-term stability control of surrounding rock.
This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using this subset of essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to the reduction of both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.
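A minimal sketch of the cosine-similarity half of the strategy is shown below: constraints whose normals are most aligned with the objective vector are retained as candidate binding constraints. The retention fraction and the toy data are assumptions, and the weighted-average criterion and all exactness safeguards are omitted.

```python
import numpy as np

def cosine_ranked_constraints(c, A, keep_frac=0.3):
    """Rank constraints a_i^T x <= b_i by the cosine similarity between
    each constraint normal a_i and the objective vector c, and keep the
    most aligned ones as candidate binding constraints."""
    cos = (A @ c) / (np.linalg.norm(A, axis=1) * np.linalg.norm(c))
    k = max(1, int(keep_frac * A.shape[0]))
    return np.argsort(cos)[-k:]              # indices of retained constraints

# Toy LP data: maximize c^T x subject to A x <= b, x >= 0.
rng = np.random.default_rng(2)
c = rng.random(5)
A = rng.random((200, 5))
keep = cosine_ranked_constraints(c, A)
print(f"kept {len(keep)} of {A.shape[0]} constraints")
```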
This study explores the influence of infill patterns on machine acceleration prediction in the realm of three-dimensional (3D) printing, particularly focusing on extrusion technology. Our primary objective was to develop a long short-term memory (LSTM) network capable of assessing this impact. We conducted an extensive analysis involving 12 distinct infill patterns, collecting time-series data to examine their effects on the acceleration of the printer’s bed. The LSTM network was trained using acceleration data from the adaptive cubic infill pattern, while the Archimedean chords infill pattern provided data for evaluating the network’s prediction accuracy. This involved utilizing offline time-series acceleration data as the training and testing datasets for the LSTM model. Specifically, the LSTM model was devised to predict the acceleration of a fused deposition modeling (FDM) printer using data from the adaptive cubic infill pattern. Rigorous testing yielded a root mean square error (RMSE) of 0.007144, reflecting the model’s precision. Further refinement and testing of the LSTM model with acceleration data from the Archimedean chords infill pattern resulted in an RMSE of 0.007328. Notably, the developed LSTM model demonstrated superior performance compared to an optimized recurrent neural network (RNN) in predicting machine acceleration data. The empirical findings highlight that the adaptive cubic infill pattern considerably influences the dimensional accuracy of parts printed using FDM technology.
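A minimal PyTorch sketch of this kind of one-step-ahead LSTM acceleration predictor is given below; the window length, network size, and training loop are assumptions for illustration, not the configuration reported in the study, and the data are synthetic.

```python
import torch
import torch.nn as nn

class AccelLSTM(nn.Module):
    """LSTM regressor predicting the next acceleration sample from a
    sliding window of past samples."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # predict the next sample

# Toy training data: windows from a synthetic acceleration trace.
t = torch.linspace(0, 20, 2000)
accel = torch.sin(3 * t) + 0.1 * torch.randn_like(t)
window = 50
X = torch.stack([accel[i:i+window] for i in range(len(accel) - window)]).unsqueeze(-1)
y = accel[window:].unsqueeze(-1)

model, loss_fn = AccelLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):                          # a few full-batch epochs for illustration
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"RMSE on training windows: {loss.sqrt().item():.4f}")
```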
The care of a patient involved in major trauma with exsanguinating haemorrhage is time-critical to achieve definitive haemorrhage control, and it requires coordinated multidisciplinary care. During initial resuscitation of a patient in the emergency department (ED), Code Crimson activation facilitates rapid decision-making by multidisciplinary specialists for definitive haemorrhage control in the operating theatre (OT) and/or interventional radiology (IR) suite. Once this decision has been made, there may still be various factors that delay transporting the patient from the ED to the OT/IR suite. The Red Blanket protocol identifies and addresses these factors and processes that cause delay, and aims to facilitate rapid and safe transport of the haemodynamically unstable patient from the ED to the OT while minimizing delay in resuscitation during the transfer. The two processes, Code Crimson and Red Blanket, complement each other, and it would be ideal to merge them into a single protocol rather than having two separate workflows. Introducing these quality improvement strategies and coordinated processes within the trauma framework of hospitals and healthcare systems will help to further improve multidisciplinary care for complex trauma patients requiring rapid and definitive haemorrhage control.
Quantum error correction is an important method for eliminating errors during the operation of quantum computers. To address the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension-mapping operations on surface codes. The scheme utilizes the topological properties of error correction codes to map the surface code to three dimensions. Compared with previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capabilities. By reducing the number of ancilla qubits required for error correction, this approach saves measurement space and reduces resource consumption costs. To improve decoding efficiency and address the correlation between the surface code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which identifies the optimal syndrome faster and achieves better thresholds through conditional optimization. Compared with minimum-weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, which is 56% higher and enables large-scale fault-tolerant quantum computation.
In this paper, a two-dimensional (2-D) correction scheme is proposed to improve the performance of conventional Min-Sum (MS) decoding of regular low-density parity-check codes. The algorithm adopted to obtain the correction factors is based on estimating the mean square difference (MSD) between the transmitted codeword and the a posteriori information of both the bit and check nodes produced at the MS decoder. Semi-practical tests using software-defined radio (SDR) and specific code simulations show that the proposed quasi-optimal algorithm provides error performance comparable to Sum-Product (SP) decoding while requiring less complexity.
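For reference, the check-node half of a corrected min-sum update looks roughly as follows; a single multiplicative factor alpha is shown for illustration, whereas the paper's 2-D scheme derives separate MSD-based correction factors for the bit and check sides, and alpha = 0.8 is an assumed value, not the optimized one.

```python
import numpy as np

def min_sum_check_update(v_msgs, alpha=0.8):
    """Check-node update of normalized min-sum decoding.

    v_msgs: LLR messages arriving from the variable nodes on one check.
    Returns the outgoing message toward each variable node: the product
    of the other edges' signs times the minimum of the other edges'
    magnitudes, scaled by the correction factor alpha.
    """
    v = np.asarray(v_msgs, dtype=float)
    sign_prod = np.prod(np.sign(v))
    mags = np.abs(v)
    order = np.argsort(mags)
    m1, m2 = mags[order[0]], mags[order[1]]   # smallest and second smallest
    out = np.empty_like(v)
    for i in range(len(v)):
        mag = m2 if i == order[0] else m1     # exclude the edge's own magnitude
        out[i] = alpha * sign_prod * np.sign(v[i]) * mag
    return out

print(min_sum_check_update([1.2, -0.4, 2.5, -3.0]))
```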
Spatially-coupled low-density parity-check (SC-LDPC) codes are prominent candidates for future communication standards due to their ‘threshold saturation’ property. However, when facing burst erasures, the decoding process stops and the decoding performance dramatically degrades. To improve burst-erasure correction, this paper proposes two-dimensional SC-LDPC (2D-SC-LDPC) codes constructed by connecting two asymmetric SC-LDPC coupled chains in parallel for resistance to burst erasures. A density evolution algorithm is presented to evaluate the asymptotic performance against burst erasures, from which the maximum correctable burst-erasure length can be computed. The analysis results show that the maximum correctable burst-erasure lengths of the proposed 2D-SC-LDPC codes are much larger than those of SC-LDPC codes and asymmetric SC-LDPC codes. Finite-length performance simulations of the 2D-SC-LDPC codes over the burst-erasure channel confirm the excellent asymptotic performance.
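To make the density-evolution step concrete, the sketch below computes the belief-propagation threshold of an uncoupled regular (3,6) LDPC ensemble over the binary erasure channel by bisection; tracking a coupled chain (or the proposed 2D construction) would evolve one erasure probability per spatial position, but the core recursion is the same.

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the binary
# erasure channel: x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1).
def converges(eps, dv=3, dc=6, n_iter=2000):
    x = eps
    for _ in range(n_iter):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x < 1e-9

lo, hi = 0.0, 1.0
for _ in range(40):                       # bisection on the channel erasure rate
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if converges(mid) else (lo, mid)
print(f"BP threshold of the (3,6) ensemble: {lo:.4f}")   # about 0.4294
```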
In this paper, we propose to generalize the coding schemes first proposed by Kozic et al. to highly spectrally efficient modulation schemes. We first study chaos coded modulation based on a small-dimensional modulo-MAP encoding process, and we give a method to study the distance spectrum of such coding schemes so as to accurately predict their performance. However, the obtained performance is quite poor. To improve it, we then use a high-dimensional modulo-MAP mapping process similar to the low-density generator-matrix (LDGM) codes introduced by Kozic et al. The main difference with their work is that we use an encoding and decoding process over GF(2^m), which yields better performance while preserving a fairly simple decoding algorithm when the Extended Min-Sum (EMS) algorithm of Declercq & Fossorier is used.
This paper briefly introduces the characteristics and structure of the QR two-dimensional code symbology, analyzes in detail the whole image-processing pipeline for recognizing QR codes, applies the bilinear mapping method to image correction, and gives the final decoding steps. Actual test results show that the designed algorithm is both theoretically sound and practical: the recognition system can correctly read QR codes with a high recognition rate and recognition speed, and it has practical value and good application prospects.
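A minimal sketch of bilinear-mapping correction is shown below: the four detected corners of a skewed QR symbol define a bilinear interpolation of image coordinates, which is sampled to produce a rectified image. Corner detection, intensity interpolation, and decoding are omitted, and the example image is synthetic.

```python
import numpy as np

def bilinear_unwarp(img, corners, out_size=200):
    """Rectify a quadrilateral region (e.g. a skewed QR code) using a
    bilinear mapping defined by its four corners.

    corners: (4, 2) array of (x, y) points ordered TL, TR, BR, BL.
    A nearest-neighbor sketch; a production reader would interpolate
    intensities and locate the corners automatically.
    """
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    u = np.linspace(0, 1, out_size)[None, :]   # horizontal parameter
    v = np.linspace(0, 1, out_size)[:, None]   # vertical parameter
    # Bilinear interpolation of the four corner coordinates.
    x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
    y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
    xi = np.clip(np.rint(x).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.rint(y).astype(int), 0, img.shape[0] - 1)
    return img[yi, xi]

# Toy example: unwarp an axis-aligned square from a synthetic image.
img = np.zeros((120, 120), dtype=np.uint8)
img[30:90, 30:90] = 255
square = bilinear_unwarp(img, np.array([[30, 30], [89, 30], [89, 89], [30, 89]]))
print(square.shape, square.max())
```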