The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction capability of the fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is too high to conduct many of them for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the positions of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on the Kullback-Leibler divergence, which to some extent prevents the real data from being "submerged" by the simulation data and achieves a fused Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
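As an illustration of the two ingredients named in this abstract, the sketch below (not the authors' implementation) Bootstrap-resamples a small set of impact points and blends a simulated Gaussian prior for the dispersion center with the test data using a weight derived from the Kullback-Leibler divergence; the Gaussian model, the weighting rule w = exp(-KL), and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_means(impacts, n_boot=2000):
    """Resample a small set of 2D impact points; return bootstrap dispersion-center draws."""
    n = len(impacts)
    idx = rng.integers(0, n, size=(n_boot, n))
    return impacts[idx].mean(axis=1)                        # shape (n_boot, 2)

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence between two 1D Gaussians, KL(N(mu0,var0) || N(mu1,var1))."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

impacts = rng.normal([50.0, 3.0], [20.0, 5.0], size=(6, 2))   # six measured impacts (toy)
boot = bootstrap_means(impacts)
mu_d, var_d = boot[:, 0].mean(), boot[:, 0].var()             # data-based range estimate
mu_s, var_s = 45.0, 15.0 ** 2                                 # simulation prior (assumed)

# prior weight shrinks as the simulation prior disagrees with the expanded test data
w = np.exp(-kl_gauss(mu_d, var_d, mu_s, var_s))
mu_post = (w * mu_s / var_s + mu_d / var_d) / (w / var_s + 1.0 / var_d)
print(f"fused dispersion-center estimate (range coordinate): {mu_post:.2f} m")
```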
To understand the strengths of rocks under complex stress states, a generalized nonlinear three-dimensional (3D) Hoek-Brown failure (NGHB) criterion was proposed in this study. This criterion shares the same parameters with the generalized HB (GHB) criterion and inherits the parameter advantages of GHB. Two new parameters, b and n, were introduced into the NGHB criterion; they primarily control the deviatoric-plane shape of the criterion under triaxial tension and compression, respectively. The NGHB criterion can consider the influence of the intermediate principal stress (IPS), and its deviatoric-plane shape satisfies the smoothness requirements, whereas the HB criterion does not. The criterion can degenerate into the two modified 3D HB criteria: the Priest criterion under the triaxial compression condition, and the HB criterion under the triaxial compression and tension conditions. The criterion was verified using true triaxial test data for different parameters, six types of rocks, and two kinds of in situ rock masses. For comparison, three existing 3D HB criteria were selected for a performance study. The results show that the NGHB criterion gives better prediction performance than the other criteria. The prediction errors for the strengths of the six types of rocks and the two kinds of in situ rock masses were in the ranges of 2.0724%-3.5091% and 1.0144%-3.2321%, respectively. The proposed criterion lays a preliminary theoretical foundation for the prediction of engineering rock mass strength under complex in situ stress conditions.
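The abstract does not give the NGHB expression itself; for background, here is a minimal sketch of the conventional generalized Hoek-Brown (GHB) strength envelope whose parameters (sigma_ci, mb, s, a) the new criterion is said to share. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def ghb_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown major principal stress at failure (triaxial compression)."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# illustrative parameters for a moderately jointed rock mass (placeholders)
sigma_ci, mb, s, a = 80.0, 5.0, 0.003, 0.51        # sigma_ci in MPa, others dimensionless
sigma3 = np.linspace(0.0, 20.0, 5)                 # confining stresses, MPa
print(np.round(ghb_sigma1(sigma3, sigma_ci, mb, s, a), 2))
```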
In recent years, there has been a significant increase in research focused on the growth of large-area single crystals. Rajan et al. [1] recently achieved the growth of large-area monolayers of transition-metal chalcogenides through assisted nucleation. The quality of molecular beam epitaxy (MBE)-grown two-dimensional (2D) materials can be greatly enhanced by using sacrificial species deposited simultaneously from an electron beam evaporator during the growth process. This technique notably boosts the nucleation rate of the target epitaxial layer, resulting in large, homogeneous monolayers with improved quasiparticle lifetimes and fostering the development of epitaxial van der Waals heterostructures. Additionally, micrometer-sized silver films have been formed at the air-water interface by directly depositing electrospray-generated silver ions onto an aqueous dispersion of reduced graphene oxide under ambient conditions [2].
Massive amounts of data are acquired in modern and future information technology industries such as communication, radar, and remote sensing. The large dimensionality and size of these data offer new opportunities to enhance the performance of signal processing in such applications and even motivate new ones. However, the curse of dimensionality is always a challenge when processing such high-dimensional signals. In practical tasks, high-dimensional signals need to be acquired, processed, and analyzed with high accuracy, robustness, and computational efficiency. This special section aims to address these challenges, with articles that attempt to develop new theories and methods best suited to the high-dimensional nature of the signals involved and to explore modern and emerging applications in this area.
Ni-Fe-based oxides are among the most promising catalysts developed to date for the bottleneck oxygen evolution reaction (OER) in water electrolysis. However, understanding and mastering the synergy of Ni and Fe remain challenging. Herein, we report that the synergy between Ni and Fe can be tailored by the crystal dimensionality of Ni- and Fe-containing Ruddlesden-Popper (RP)-type perovskites (La_(0.125)Sr_(0.875))_(n+1)(Ni_(0.25)Fe_(0.75))_(n)O_(3n+1) (n = 1, 2, 3), where the material with n = 3 shows the best OER performance in alkaline media. Soft X-ray absorption spectra recorded before and after the OER reveal that the material with n = 3 shows enhanced Ni/Fe-O covalency, boosting electron transfer compared with the n = 1 and n = 2 materials. Further experimental investigations demonstrate that the Fe ion is the active site and the Ni ion is the stable site in this system, and this unique synergy reaches its optimum at n = 3. Besides, as n increases, the proportion of unstable rock-salt layers decreases and the leaching of ions (especially Sr^(2+)) into the electrolyte is suppressed, which reduces the leaching of active Fe ions and ultimately leads to enhanced stability. This work provides a new avenue for rational catalyst design through a dimensional strategy.
Integrable systems play a crucial role in physics and mathematics. In particular, the traditional (1+1)-dimensional and (2+1)-dimensional integrable systems have received significant attention due to the rarity of integrable systems in higher dimensions. Recent studies have shown that abundant higher-dimensional integrable systems can be constructed from (1+1)-dimensional integrable systems by using a deformation algorithm. Here we establish a new (2+1)-dimensional Chen-Lee-Liu (C-L-L) equation by applying the deformation algorithm to the (1+1)-dimensional C-L-L equation. The new system is integrable, with its Lax pair obtained by applying the deformation algorithm to that of the (1+1)-dimensional equation. Obtaining exact solutions for the new integrable system is challenging because it combines both the original C-L-L equation and its reciprocal transformation. The traveling wave solutions are derived in implicit form, and some asymmetric peakon solutions are found.
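For reference, one common normalization of the (1+1)-dimensional Chen-Lee-Liu equation from which the new (2+1)-dimensional system is deformed (sign conventions vary between papers, and the abstract does not quote the form used):

```latex
% (1+1)-dimensional Chen-Lee-Liu equation (a derivative nonlinear Schrodinger equation)
i\,q_t + q_{xx} + i\,|q|^{2}\,q_x = 0
```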
Fractal theory offers a powerful tool for the precise description and quantification of the complex pore structures in reservoir rocks, which is crucial for understanding the storage and migration characteristics of media within these rocks. Facing the challenge of calculating the three-dimensional fractal dimensions of rock porosity, this study proposes an innovative computational workflow that calculates the three-dimensional fractal dimensions directly from a geometric perspective. By employing a composite denoising approach that integrates the Fourier transform (FT) and the wavelet transform (WT), coupled with multimodal pore extraction techniques such as threshold segmentation, top-hat transformation, and membrane enhancement, we constructed accurate digital rock models. The improved box-counting method was then applied to the voxel data of these digital rocks to calculate the fractal dimensions of the rock pore distribution. Further numerical simulations of permeability experiments were conducted to explore the physical correlations between the rock pore fractal dimensions, porosity, and absolute permeability. The results reveal that rocks with higher fractal dimensions exhibit more complex pore connectivity pathways and a wider, more uneven pore distribution, suggesting that ideal rock samples should possess lower fractal dimensions and higher effective porosity to achieve optimal fluid transmission properties. The methodology and conclusions of this study provide new tools and insights for the quantitative analysis of complex pores in rocks and contribute to the exploration of the fractal transport properties of media within rocks.
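A minimal sketch of the box-counting step described above, applied to a binary pore-voxel array: the denoising and multimodal pore-extraction stages are omitted, a random toy volume stands in for a real digital rock, and the box sizes are arbitrary.

```python
import numpy as np

def box_counting_dimension(voxels, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the 3D box-counting fractal dimension of a binary pore-voxel array."""
    counts = []
    for b in box_sizes:
        # trim so the array tiles evenly, then count boxes containing any pore voxel
        s = [(d // b) * b for d in voxels.shape]
        v = voxels[:s[0], :s[1], :s[2]]
        blocks = v.reshape(s[0] // b, b, s[1] // b, b, s[2] // b, b)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3, 5))))
    # slope of log N(b) versus log(1/b) gives the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

pores = np.random.default_rng(1).random((64, 64, 64)) < 0.2   # toy binary digital rock
print(f"box-counting dimension of the toy volume: {box_counting_dimension(pores):.2f}")
```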
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence is a key objective for any modern manufacturing industry. Compensation of thermally induced positioning errors remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent the heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on the intensity of MT structural deformation. The MT thermal behavior is first modeled using the finite element method and validated against experimentally measured temperature fields obtained with temperature sensors and thermal imaging. The validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations under varied working conditions to explore the relationships between temperature distribution and thermal deformation characteristics and to select the most appropriate temperature-sensitive points for building an empirical model that predicts thermal errors as a function of the MT thermal state. Validation tests using a simplified model based on an artificial neural network confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
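A minimal sketch of the final modeling step: a small neural network that maps readings at a few temperature-sensitive points to a thermally induced positioning error. The sensor count, the synthetic data, and the network size are placeholders, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic stand-in data: 4 temperature-sensitive points -> spindle Z-error (micrometers)
T = rng.uniform(20.0, 45.0, size=(500, 4))
z_err = 0.8 * (T[:, 0] - 20.0) + 0.3 * (T[:, 2] - 20.0) + rng.normal(0.0, 0.5, 500)

T_tr, T_te, y_tr, y_te = train_test_split(T, z_err, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(T_tr, y_tr)
print(f"R^2 on held-out data: {model.score(T_te, y_te):.3f}")
```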
To analyze the relationship between the macro and meso parameters of gas hydrate bearing coal (GHBC) and to calibrate the meso-parameters, numerical tests were conducted in PFC3D to simulate laboratory triaxial compression tests, with the parallel bond model employed as the particle contact constitutive model. First, twenty simulation tests were conducted to quantify the relationships between the macro and meso parameters. Then, nine orthogonal simulation tests were performed with four meso-mechanical parameters at three levels to evaluate the sensitivity of the meso-mechanical parameters. Furthermore, a calibration method for the meso-parameters was proposed. Finally, the contact force chains, the contact force, and the contact number were examined to investigate the effect of saturation on the meso-mechanical behavior of GHBC. The results show that: (1) the elastic modulus increases linearly with the bond stiffness ratio and the friction coefficient, while it increases exponentially with the normal bonding strength and the bonding radius coefficient; the failure strength increases exponentially with the friction coefficient, the normal bonding strength, and the bonding radius coefficient, and remains constant as the bond stiffness ratio increases; (2) the friction coefficient and the bonding radius coefficient are the parameters to which the elastic modulus and the failure strength are most sensitive; (3) the number of force chains, the contact force, and the bond strength between particles increase with hydrate saturation, which leads to a larger failure strength.
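A minimal sketch of the calibration idea, assuming the exponential trend reported above between the normal bonding strength and the macroscopic failure strength: fit the trend from a handful of simulation runs and invert it at the laboratory-measured strength. All numbers are placeholders, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# placeholder PFC3D results: meso normal bonding strength vs. macro failure strength
bond_strength = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # MPa (meso parameter)
failure_strength = np.array([1.8, 2.9, 4.6, 7.1, 11.0])    # MPa (macro response)

def expo(x, a, b):
    """Assumed exponential macro-meso relationship."""
    return a * np.exp(b * x)

(a, b), _ = curve_fit(expo, bond_strength, failure_strength, p0=(1.0, 0.2))
target = 6.5                                                # lab-measured strength, MPa
print(f"calibrated normal bonding strength: {np.log(target / a) / b:.2f} MPa")
```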
With the extensive application of large-scale array antennas, the increasing number of array elements leads to an increasing dimension of the received signals, making it difficult to meet the real-time requirements of direction of arrival (DOA) estimation due to the computational complexity of the algorithms. Traditional subspace algorithms require estimation of the covariance matrix, which has high computational complexity and is prone to producing spurious peaks. In order to reduce the computational complexity of DOA estimation and improve its accuracy for large arrays, this paper proposes a DOA estimation method based on the Krylov subspace and a weighted l_(1)-norm. The method uses multistage Wiener filter (MSWF) iterations to obtain a basis of the Krylov subspace as an estimate of the signal subspace, uses a measurement matrix to reduce the dimensionality of the signal-subspace observations, constructs a weighting matrix, and combines sparse reconstruction to establish a convex optimization problem based on the residual sum of squares and the weighted l_(1)-norm to solve for the target DOAs. Simulation results show that the proposed method has high resolution under large-array conditions, effectively suppresses spurious peaks, reduces computational complexity, and is robust in low signal-to-noise ratio (SNR) environments.
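A single-snapshot, on-grid sketch of the weighted l_(1)-norm reconstruction step (the MSWF subspace estimation and the measurement-matrix dimensionality reduction are omitted); the array geometry, grid, weights, and regularization constant are illustrative assumptions, and cvxpy is used only as a convenient convex solver.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, grid = 16, np.deg2rad(np.arange(-60, 61, 1))                    # 16-element ULA, 1 deg grid
A = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(grid)))     # steering matrix

true = np.deg2rad([-10.0, 12.0])                                   # two assumed sources
y = np.exp(-1j * np.pi * np.outer(np.arange(M), np.sin(true))).sum(axis=1)
y = y + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

w = 1.0 / (np.abs(A.conj().T @ y) + 1e-3)                          # simple data-driven weights
x = cp.Variable(len(grid), complex=True)
objective = cp.sum_squares(A @ x - y) + 2.0 * cp.norm1(cp.multiply(w, x))
cp.Problem(cp.Minimize(objective)).solve()

peaks = np.rad2deg(grid[np.argsort(np.abs(x.value))[-2:]])         # two largest entries
print(f"estimated DOAs (degrees): {np.sort(peaks)}")
```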
NGLY1 Deficiency is an ultra-rare autosomal recessively inherited disorder. Characteristic symptoms include, among others, developmental delays, movement disorders, liver function abnormalities, seizures, and problems with tear formation. Movements are hyperkinetic and may include dysmetric, choreo-athetoid, myoclonic, and dystonic movement elements. To date, there have been no quantitative reports describing the arm movements of individuals with NGLY1 Deficiency. This report provides quantitative information about a series of arm movements performed by an individual with NGLY1 Deficiency and an age-matched neurotypical participant. Three categories of arm movements were tested: 1) open-ended reaches without specific end-point targets; 2) goal-directed reaches that included grasping an object; 3) picking up small objects from a table placed in front of the participants. Arm movement kinematics were obtained with a camera-based motion analysis system, and "initiation" and "maintenance" phases were identified for each movement. The combination of the two phases was labeled a "complete" movement. Three-dimensional analysis techniques were used to quantify the movements, including hand trajectory pathlength, joint motion area, and hand trajectory and joint jerk cost. These techniques were required to fully characterize the movements because the individual with NGLY1 Deficiency was unable to confine movements to the primary plane of progression, instead producing motion across all three planes of movement. The individual with NGLY1 Deficiency was unable to pick up objects from a table or to effectively complete movements requiring crossing the midline. The successfully completed movements were analyzed using the above techniques, and the results of the two participants were compared statistically. Almost all comparisons revealed significant differences between the two participants, with the notable exception of the 3D initiation area as a percentage of the complete movement; for this measure, the statistical tests revealed no significant differences between the two participants, possibly suggesting a common underlying motor control strategy. The 3D techniques used in this report effectively characterized the arm movements of an individual with NGLY1 Deficiency and can be used to provide information for evaluating the effectiveness of genetic, pharmacological, or physical rehabilitation therapies.
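A minimal sketch of two of the kinematic measures named above, hand-trajectory pathlength and a jerk cost, computed from 3D marker coordinates; the 120 Hz capture rate, the jerk-cost normalization, and the toy trajectory are assumptions rather than the study's processing pipeline.

```python
import numpy as np

def pathlength_and_jerk_cost(xyz, dt):
    """Hand-trajectory pathlength and an integrated squared-jerk cost from 3D marker data."""
    pathlength = np.sum(np.linalg.norm(np.diff(xyz, axis=0), axis=1))
    jerk = np.diff(xyz, n=3, axis=0) / dt ** 3                  # third finite difference
    jerk_cost = 0.5 * np.sum(jerk ** 2) * dt                    # 0.5 * integral of ||jerk||^2 dt
    return pathlength, jerk_cost

t = np.arange(0.0, 1.0, 1.0 / 120.0)                            # assumed 120 Hz capture
xyz = np.column_stack([0.3 * t ** 2, 0.1 * np.sin(2 * np.pi * t), 0.02 * t])  # toy reach
print(pathlength_and_jerk_cost(xyz, dt=1.0 / 120.0))
```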
Machining is as old as humanity, and changes in temperature in both the machine's internal and external environments are of great concern because they affect the machine's thermal stability and, thus, its dimensional accuracy. This paper is a continuation of our earlier work, which analyzed the effect of the internal temperature of a machine tool as the machine is put into operation while the external temperature, the machine floor temperature, is varied. Experiments are carried out under controlled conditions to study how machine tool components heat up and how this heating affects the machine's accuracy through thermally induced deviations. Additionally, another angle is added by varying the machine floor temperature. The parameters mentioned above are explored in relation to the overall thermal stability of the machine tool and its dimensional accuracy. A Robodrill CNC machine tool is used. The CNC was first soaked with thermal energy by gradually raising the machine floor temperature to a certain level before putting the machine into operation. The machine was monitored, and analytical methods were deployed to evaluate thermal stability. Secondly, the machine was run idle for some time under a raised floor temperature before it was put into operation. Data were also collected and analyzed. It is observed that machine thermal stability can be achieved in several ways, depending on how the above parameters are combined. In conclusion, this paper reinforces the idea of a machine tool warm-up process in conjunction with a carefully analyzed and established machine floor temperature variation to approximate the machine tool's thermal stability and map its long-term behavior.
The geometry of joints has a significant influence on the mechanical properties of rocks. To simplify the curved joint shapes in rocks, joints are usually treated as straight lines or planes in most laboratory experiments and numerical simulations. In this study, computerized tomography (CT) scanning and photogrammetry were employed to obtain the internal and surface joint structures of a limestone sample, respectively. To describe the joint geometry, edge detection algorithms and a three-dimensional (3D) matrix mapping method were applied to reconstruct CT-based and photogrammetry-based jointed rock models. For comparison, numerical uniaxial compression tests were conducted on an intact rock sample and on a sample with the joint simplified to a plane, using a parallel computing method. The results indicate that the mechanical characteristics and failure process of jointed rocks are significantly affected by the geometry of the joints. The presence of joints reduces the uniaxial compressive strength (UCS), elastic modulus, and released acoustic emission (AE) energy of the rocks by 37%–67%, 21%–24%, and 52%–90%, respectively. Compared with the simplified joint sample, the proposed photogrammetry-based numerical model makes the most of the limited geometric information about the joints. The UCS, accumulative released AE energy, and elastic modulus of the photogrammetry-based sample were found to be very close to those of the CT-based sample. The UCS value of the simplified joint sample (i.e., 38.5 MPa) is much lower than that of the CT-based sample (i.e., 72.3 MPa). Additionally, the accumulative released AE energy observed in the simplified joint sample is 3.899 times lower than that observed in the CT-based sample. CT scanning provides a reliable means to visualize the joints in rocks, which can be used to verify the reliability of photogrammetry techniques. The photogrammetry-based sample enables detailed analysis for estimating the mechanical properties of jointed rocks.
This study presents a numerical analysis of three-dimensional steady laminar flow in a rectangular channel with a 180-degree sharp turn. The Navier-Stokes equations are solved using the finite difference method for Re = 900. Three-dimensional streamlines and limiting streamlines on the wall surfaces are used to analyze the three-dimensional flow characteristics. Topological theory is applied to the limiting streamlines on the inner walls of the channel and to the two-dimensional streamlines at several cross sections. It is also shown that the flow impinges on the end wall of the turn and that a secondary flow is induced by the curvature in the sharp turn.
There is an urgent need to develop optimal solutions for the deformation control of deep high-stress roadways, one of the critical problems in underground engineering. The previously proposed four-dimensional support (hereinafter 4D support), a new support technology, can place the roadway surrounding rock under three-dimensional pressure within a new balanced structure and prevent instability of the surrounding rock in underground engineering. However, the influence of roadway depth and creep deformation on surrounding rock supported by 4D support is still unknown. This study investigated the influence of roadway depth and creep deformation time on the instability of the surrounding rock by analyzing the energy development. The elastic strain energy was analyzed using a program redeveloped in FLAC3D. The numerical simulation results indicate that the combined support mode of 4D roof supports and conventional side supports is highly applicable to the stability control of surrounding rock at roadway depths exceeding 520 m. As the roadway depth increases, 4D support effectively restrains the area and depth of plastic deformation in the surrounding rock. Further, 4D support limits the accumulation range and rate of elastic strain energy as the creep deformation time increases. 4D support can effectively reduce the plastic deformation of the roadway surrounding rock and maintain stability over a long deformation period of 6 months. As confirmed by in situ monitoring results, 4D support is more effective than conventional support for the long-term stability control of surrounding rock.
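The abstract does not state which expression the redeveloped FLAC3D routine evaluates; a commonly used form of the elastic strain energy density in terms of principal stresses, given here only for orientation, is:

```latex
% elastic strain energy density (linear elasticity, principal stresses); E is Young's
% modulus and \nu is Poisson's ratio -- a standard form, not quoted from the paper
U_e = \frac{1}{2E}\left[\sigma_1^{2}+\sigma_2^{2}+\sigma_3^{2}
      -2\nu\left(\sigma_1\sigma_2+\sigma_2\sigma_3+\sigma_3\sigma_1\right)\right]
```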
This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using the subset of the essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to the reduction of both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.
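A minimal sketch of the cosine-similarity screening idea on a random LP: keep only the constraints whose normals are most aligned with the maximization direction, then solve the reduced problem. The weighted-average screen, the keep threshold, and the problem data are illustrative placeholders, not the paper's procedure.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.normal(size=(m, n))
b = rng.uniform(5.0, 10.0, m)
c = -rng.uniform(0.0, 1.0, n)                          # minimize c.x = maximize -c.x

# cosine similarity between each constraint normal and the maximization direction -c
cos = (A @ (-c)) / (np.linalg.norm(A, axis=1) * np.linalg.norm(c))
keep = cos >= np.quantile(cos, 0.75)                   # keep the top 25% of constraints

full = linprog(c, A_ub=A, b_ub=b, bounds=(0, 10.0), method="highs")
reduced = linprog(c, A_ub=A[keep], b_ub=b[keep], bounds=(0, 10.0), method="highs")
# the two optima coincide whenever the kept subset contains all binding constraints
print(round(full.fun, 4), round(reduced.fun, 4))
```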
When discovering the potential of canards flying in a 4-dimensional slow-fast system with a bifurcation parameter, the key notion of "symmetry" plays an important role. The symmetry concerns one parameter on the slow vector field; it should then be determined how to introduce parameters into all of the slow and fast vectors. However, there might be no way to explore another potential in this system, because its geometrical structure is quite different from that of the system with one parameter. Even in this system, the "symmetry" is useful for obtaining the potentials classified by R. Thom. In this paper, a possible way to explore the potential via a change of coordinates is shown. Because the analysis is carried out on a "hyperfinite time line", that is, by using non-standard analysis, it is called a "Hyper Catastrophe". In a slow-fast system that includes a very small parameter, precise analysis is difficult, so it is useful to obtain the orbits as a singular limit. Simulations also face difficulty due to the singularity. Using very small time intervals corresponding to the small parameter, we overcome this difficulty, because the difference equation on a small time interval corresponds to the standard differential equation. These small intervals are defined on a hyperfinite number N, which is nonstandard. As the small parameter and the intervals are linked through 1/N, the simulation can be carried out exactly.
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise, yet most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and its minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, a numerical simulation analysis is performed. The results show that, for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
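A minimal sketch of the hard-thresholding idea: form a generalized sample covariance that accounts for missing entries (NaN) and zero out small off-diagonal entries. The sub-Gaussian noise correction and the rate-optimal threshold choice analyzed in the paper are not reproduced; the threshold here is an arbitrary placeholder.

```python
import numpy as np

def hard_threshold_cov(X, tau):
    """Generalized sample covariance under missing entries (NaN), then hard thresholding:
    off-diagonal entries with |s_ij| <= tau are set to zero, the diagonal is kept."""
    mask = ~np.isnan(X)
    Xz = np.where(mask, X - np.nanmean(X, axis=0), 0.0)
    counts = mask.T.astype(float) @ mask.astype(float)        # pairwise observation counts
    S = (Xz.T @ Xz) / np.maximum(counts, 1.0)                 # generalized sample covariance
    keep = (np.abs(S) > tau) | np.eye(X.shape[1], dtype=bool)
    return np.where(keep, S, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[rng.random(X.shape) < 0.2] = np.nan                         # 20% of entries missing
print(np.round(hard_threshold_cov(X, tau=0.15), 2))
```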
We consider the Hyperverse as a collection of multiverses in a (4 + 1)-dimensional spacetime with gravitational constant G. Multiverses in our model are bouquets of thin shells (with synchronized intrinsic times). If g is the gravitational constant of a shell S and ε is its thickness, then G ~ εg. The physical universe is supposed to be one of those thin shells inside the local bouquet, called the Local Multiverse. Other remarkable objects of the Hyperverse are supposed to be black holes, black lenses, black rings, and (generalized) Black Saturns. In addition, Schwarzschild-de Sitter multiversal nurseries can be hidden inside those Black Saturns, leading to their Bousso-Hawking nucleation. This also suggests that black holes in our physical universe might harbor embedded (2 + 1)-dimensional multiverses, which is compatible with outstanding ideas and results of Bekenstein, Hawking-Vaz, and Corda about "black holes as atoms" and the condensation of matter on "apparent horizons". It allows us to formulate Conjecture 12.1 about the origin of the Local Multiverse. As an alternative model, we examine the spacetime warping of our universe by external universes. This gives data for the accelerated expansion and the cosmological constant Λ that are in agreement with observation, thus opening a possibility for verifying the multiverse model.
An “Eigenstate Adjustment Autonomy” Model, permeated by the Nanosystem’s Fermi Level Pinning along with its rigid Conduction Band Discontinuity, compatible with pertinent Experimental Measurements, is being employed for studying how the Functional Eigenstate of the Two-Dimensional Electron Gas (2DEG) dwelling within the Quantum Well of a typical Semiconductor Nanoheterointerface evolves versus (cryptographically) selectable consecutive Cumulative Photon Dose values. Thus, it is ultimately discussed that the experimentally observed (after a Critical Cumulative Photon Dose) Phenomenon of 2DEG Negative Differential Mobility allows for the Nanosystem to exhibit an Effective Qubit Specific Functionality potentially conducive to (Telecommunication) Quantum Information Registering.