Quantum random number generators adopting single-photon detection with avalanche photodiodes (APDs) have been restricted by the non-negligible dead time of APDs. We propose a new approach based on an APD array to significantly improve the random number generation rate. The method compares the detectors' responses to consecutive optical pulses to generate the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8%, and the potential generation rate with a 32×32 APD array is up to tens of Gbit/s.
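The abstract does not spell out the comparison rule, but a von-Neumann-style pairing is one natural reading: extract one unbiased bit whenever a detector's responses to two consecutive pulses differ. A minimal sketch under that assumption (the click probability and extraction rule here are illustrative, not the paper's exact scheme):

```python
import random

def generate_bits(click_prob, n_pairs, seed=7):
    """Compare a detector's responses to two consecutive optical pulses;
    unequal responses yield one unbiased bit, equal responses are discarded
    (a von-Neumann-style extraction, assumed here for illustration)."""
    rng = random.Random(seed)
    bits = []
    for _ in range(n_pairs):
        first = rng.random() < click_prob    # click on first pulse?
        second = rng.random() < click_prob   # click on second pulse?
        if first != second:                  # unequal pair carries one bit
            bits.append(1 if first else 0)
    return bits

bits = generate_bits(click_prob=0.5, n_pairs=10_000)
efficiency = len(bits) / 10_000  # fraction of pulse pairs yielding a bit
```

With click probability p the expected efficiency of such a pairing is 2p(1−p); the 32.8% reported in the paper would correspond to a click probability away from 0.5.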
The travel time data collection method is used to assist congestion management. The use of traditional sensors (e.g. inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the expensive costs of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection has proven efficient in terms of accuracy, level of detail, and the man-power required for data collection. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially for big projects with large amounts of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for carrying out data post-processing. This is a Visual Basic application that processes the data files obtained in the field and integrates them into Geographic Information Systems (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and also eliminates the need for a second person during data collection.
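As an illustration of the kind of computation such a tool performs (the paper's GPS Calculator is a Visual Basic/GIS application; this sketch only mirrors the core arithmetic), per-segment travel times and speeds can be derived from raw GPS fixes:

```python
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def segment_stats(fixes):
    """fixes: list of (iso_timestamp, lat, lon) tuples in travel order.
    Returns (seconds, metres, m/s) per segment, the core travel-time figures."""
    out = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        dt = (datetime.fromisoformat(t2) - datetime.fromisoformat(t1)).total_seconds()
        d = haversine_m(la1, lo1, la2, lo2)
        out.append((dt, d, d / dt if dt > 0 else 0.0))
    return out
```

Delay would then follow by comparing each segment speed against a free-flow reference speed for that road link.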
When castings become complicated and the demands on the precision of numerical simulation become higher, the numerical data of casting simulation become more massive. On a general personal computer, these massive numerical data may exceed the capacity of available memory, resulting in rendering failure. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive-data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of a casting, as well as thermal stress. The new post-processor also provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor are independent of the size of the relevant files, depending only on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values at the mouse position is appreciable, and the demands of real-time interaction are satisfied.
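The key claim — memory usage tracks the number of surface cells, not the file size — can be sketched as follows. The flat cell ordering and function names are illustrative, not the post-processor's actual implementation:

```python
def surface_cell_count(n):
    """Cells on the surface of an n x n x n cell grid."""
    return n ** 3 - max(n - 2, 0) ** 3

def stream_surface_values(values, n):
    """values: iterable over the n^3 cell scalars in flat (i, j, k) order,
    e.g. produced by chunked reads of a result file. Only surface-cell
    values are retained in memory (the out-of-core idea), so peak memory
    grows with the surface cell count, not with the file size."""
    kept = {}
    for idx, v in enumerate(values):
        i, j, k = idx // (n * n), (idx // n) % n, idx % n
        if 0 in (i, j, k) or n - 1 in (i, j, k):   # cell lies on the boundary
            kept[(i, j, k)] = v
    return kept
```

For a 100³ grid only about 5.9% of the cells are on the surface, which is why rendering cost becomes independent of total file size.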
To improve the ability to detect underwater targets in strong wideband interference environments, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a highly intense and stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depend on the line spectra ratio (LSR). The variation of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time and the filtering bandwidth of different algorithms yields a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are theoretically analyzed. The simulated detection gain demonstrates a good match with the theoretical model. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, makes use of the strong correlation between adjacent frames and can further remove background fluctuation and improve the display effect.
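A minimal stand-in for the line-spectrum idea (not the paper's multi-band beamforming fusion) flags spectral bins whose power exceeds a threshold multiple of the local continuum, i.e. bins with a high line spectra ratio; the window size and threshold are illustrative:

```python
import statistics

def extract_line_spectrum(power, half_window=8, lsr_threshold=3.0):
    """Flag bins whose power exceeds lsr_threshold times the local
    continuum, estimated as the median over a sliding window. Stable,
    intense line components stand out against the wide continuous spectrum."""
    lines = []
    for i, p in enumerate(power):
        lo, hi = max(0, i - half_window), min(len(power), i + half_window + 1)
        continuum = statistics.median(power[lo:hi])   # robust continuum estimate
        if continuum > 0 and p / continuum >= lsr_threshold:
            lines.append(i)
    return lines
```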
Creation of arbitrary features with high resolution is critically important in the fabrication of nano-optoelectronic devices. Here, sub-50 nm surface structuring is achieved directly on Sb2S3 thin films via microsphere-assisted femtosecond laser irradiation in the far field. By varying laser fluence and scanning speed, nano-feature sizes can be flexibly tuned. Such small patterns are attributed to the combined effects of microsphere focusing, two-photon absorption, the top threshold effect, and the high-repetition-rate femtosecond laser-induced incubation effect. The minimum feature size can be reduced to ~30 nm (λ/26) by manipulating the film thickness. Fitting analysis of the ablation width and depth predicts that the feature size can reach ~15 nm at a film thickness of ~10 nm. A nano-grating is fabricated, which demonstrates desirable beam diffraction performance. This nano-scale resolution is highly attractive for next-generation laser nano-lithography in the far field and in ambient air.
Low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms, and subsequent normalization, cumulative histograms are computed. Enhanced grey level values are computed from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram-equalization-based algorithms, and it is observed from the results that PLMHE can boost the image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
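PLMHE's exact power-law and log transformations are not reproduced here, but the general recipe it builds on — partition the histogram, then equalize each part into its own output range so the mean brightness cannot drift far — can be sketched as:

```python
def equalize_two_part(pixels, levels=256):
    """Simplified sketch of partition-based equalization (not PLMHE itself):
    split the grey range at the mean, then equalize each sub-histogram into
    its own half of the output range, limiting mean-brightness shift."""
    mean = sum(pixels) / len(pixels)
    low = sorted(p for p in pixels if p <= mean)
    high = sorted(p for p in pixels if p > mean)

    def cdf_map(part, out_lo, out_hi):
        # Map each grey value by its cumulative rank within the partition.
        mapping, n, seen = {}, len(part), 0
        for v in part:
            seen += 1
            mapping[v] = out_lo + round((out_hi - out_lo) * seen / n)
        return mapping

    m = {**cdf_map(low, 0, int(mean)), **cdf_map(high, int(mean) + 1, levels - 1)}
    return [m[p] for p in pixels]
```

Because dark pixels stay in the lower output band and bright pixels in the upper one, the grey-level ordering and the rough brightness distribution survive the equalization.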
In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, the displacement-based elements cannot always deliver sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOF. Numerical results show that, by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
Cyber-Physical Systems are very vulnerable to sparse sensor attacks, but current protection mechanisms employ linear and deterministic models which cannot detect attacks precisely. Therefore, in this paper, we propose a new non-linear generalized model to describe Cyber-Physical Systems. This model includes unknown multivariable discrete and continuous-time functions and different multiplicative noises to represent the evolution of physical processes and random effects in the physical and computational worlds. Besides, the digitalization stage in hardware devices is represented too. Attackers and the most critical sparse sensor attacks are described through a stochastic process. The reconstruction and protection mechanisms are based on a weighted stochastic model. The error probability in data samples is estimated through different indicators commonly employed in non-linear dynamics (such as the Fourier transform, first-return maps, or the probability density function). A decision algorithm calculates the final reconstructed value considering the previous error probability. An experimental validation based on simulation tools and real deployments is also carried out, and both the performance and the scalability of the new technology are studied. Results prove that the proposed solution protects Cyber-Physical Systems against up to 92% of attacks and perturbations, with a computational delay below 2.5 s. The proposed model shows linear complexity, as recursive or iterative structures are not employed, just algebraic and probabilistic functions. In conclusion, the new model and reconstruction mechanism can successfully protect Cyber-Physical Systems against sparse sensor attacks, even in dense or pervasive deployments and scenarios.
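The decision step can be illustrated with a simple weighted fusion, assuming per-sample error probabilities have already been estimated by the non-linear-dynamics indicators mentioned above. The weighting scheme below is an assumption for illustration, not the paper's exact model:

```python
def reconstruct(samples, error_probs):
    """Weighted stochastic reconstruction sketch: fuse redundant sensor
    samples with weights (1 - estimated error probability), so samples
    judged likely to be attacked or noisy contribute less to the result."""
    weights = [max(0.0, 1.0 - e) for e in error_probs]
    total = sum(weights)
    if total == 0:
        raise ValueError("all samples deemed unreliable")
    return sum(w * s for w, s in zip(weights, samples)) / total
```

A sample flagged with error probability 1.0 is fully excluded; intermediate probabilities shade its influence continuously rather than making a hard accept/reject decision.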
In this paper, we study the existence of transcendental meromorphic solutions of certain delay differential equations, where a(z) is a rational function, the numerator and denominator of the right-hand side are polynomials in w(z) with rational coefficients, and k is a positive integer. Under the assumption that the above equations admit transcendental meromorphic solutions of minimal hyper-type, we derive concrete conditions on the degree of their right-hand sides. In particular, when w(z)=0 is a root of the denominator polynomial, its multiplicity is at most k. Some examples are given to illustrate that our results are accurate.
Magnesium (Mg) and its alloys are emerging as structural materials for the aerospace, automobile, and electronics industries, driven by the imperative of weight reduction. They are also drawing notable attention in the medical industries owing to their biodegradability and a low elastic modulus comparable to that of bone. The ability to manufacture near-net-shape products featuring intricate geometries has sparked huge interest in additive manufacturing (AM) of Mg alloys, reflecting a transformation in the manufacturing sector. However, AM of Mg alloys presents formidable challenges due to inherent properties, particularly susceptibility to oxidation, gas trapping, a high thermal expansion coefficient, and a low solidification temperature. These lead to defects such as porosity, lack of fusion, cracking, delamination, residual stresses, and inhomogeneity, ultimately influencing the mechanical, corrosion, and surface properties of AM Mg alloys. To address these issues, post-processing of AM Mg alloys is often needed to make them suitable for application. The present article reviews all post-processing techniques adapted for AM Mg alloys to date, including heat treatment, hot isostatic pressing, friction stir processing, and surface peening. The utilization of these methods within hybrid AM processes, employing interlayer post-processing, is also discussed. Optimal post-processing conditions are reported, and their influence on the microstructure, mechanical properties, and corrosion properties is detailed. Additionally, future prospects and research directions are proposed.
Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial, not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, we aim to enhance the quality of analysis of the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits retailers' specific goals, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
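One classic post-processing step of the kind such engines automate — pruning a rule when a strictly more general rule with the same consequent is at least as confident — can be sketched as follows; the rule representation is illustrative, not THAPE's:

```python
def prune_redundant(rules):
    """rules: list of (antecedent_items, consequent, confidence) tuples.
    Drop a rule if some rule with a strict subset of its antecedent and
    the same consequent already reaches at least its confidence, since
    the longer rule then adds no predictive value."""
    kept = []
    for ante, cons, conf in rules:
        redundant = any(
            set(a2) < set(ante) and c2 == cons and conf2 >= conf
            for a2, c2, conf2 in rules
        )
        if not redundant:
            kept.append((ante, cons, conf))
    return kept
```

This is only the redundancy-removal half; interestingness ranking and the predictive backtracking described in the abstract would be layered on top.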
Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping from the combined output of the last hidden layer's nodes to the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models, ranging from the naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration among all competitors. Furthermore, the proposed conception to resolve quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
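The non-crossing construction can be illustrated independently of the network: if the output layer produces the lowest quantile plus strictly positive increments (the effect of a lower-triangular weight matrix with positive entries), the quantile sequence cannot cross by construction. A sketch, with exp as an assumed positivity transform:

```python
import math

def non_crossing_quantiles(raw_outputs):
    """Map unconstrained network outputs to quantiles via cumulative sums
    of positive terms. This is the triangular-matrix idea in miniature:
    each quantile equals the previous one plus a positive increment, so
    the returned sequence is strictly increasing (non-crossing)."""
    increments = [math.exp(z) for z in raw_outputs[1:]]  # strictly positive
    q = [raw_outputs[0]]                                 # lowest quantile, unconstrained
    for inc in increments:
        q.append(q[-1] + inc)
    return q

q = non_crossing_quantiles([-1.0, 0.3, -2.0, 1.1])
```

Because monotonicity is guaranteed structurally, the network can be trained with the ordinary pinball (quantile) loss without any crossing penalty.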
The effects of a magnetic dipole on a nonlinear, thermally radiative ferromagnetic liquid flowing over a stretched surface in the presence of Brownian motion and thermophoresis are investigated. By means of a similarity transformation, ordinary differential equations are derived and then solved using a numerical (BVP4C) method. The impact of various parameters on the velocity, temperature, and concentration is presented graphically. It is shown that the nanoparticle properties, in conjunction with the magnetic dipole effect, can increase the thermal conductivity of the engineered nanofluid and, consequently, the heat transfer. Comparison with earlier studies indicates the high accuracy and effectiveness of the numerical approach. An increase in the Brownian motion parameter and the thermophoresis parameter enhances the concentration and the related boundary layer. The skin friction rises when the viscosity parameter is increased. A larger value of the ferromagnetic parameter results in a higher skin friction and, conversely, in a smaller Nusselt number.
Based on high-tide shoreline data extracted from 87 Landsat satellite images from 1986 to 2019, using linear regression rates and a Mann-Kendall (M–K) trend test, this study analyzes the linear characteristics and nonlinear behavior of the medium- to long-term shoreline evolution of Jinghai Bay, eastern Guangdong Province. In particular, shoreline rotation caused by a shore-normal coastal structure is emphasized. The results show that the overall shoreline evolution over the past 30 years is characterized by erosion on the southwest beach, with an average erosion rate of 3.1 m/a, and significant accretion on the northeast beach, with an average accretion rate of 5.6 m/a. Results of the M–K trend test indicate that significant shoreline changes occurred in early 2006, which can be attributed to shore-normal engineering. Prior to that construction, the shorelines were slightly eroded, with an average erosion rate of 0.7 m/a. However, after the shore-normal engineering was completed, the shoreline was characterized by significant erosion (3.2 m/a) on the southwest beach and significant accretion (8.5 m/a) on the northeast beach, indicating that the shore-normal engineering at the updrift headland contributes to clockwise shoreline rotation. Further analysis shows that the clockwise shoreline rotation is promoted not only by longshore sediment transport from southwest to northeast, but also by cross-shore sediment transport. These findings are crucial for beach erosion risk management, coastal disaster zoning, regional sediment budget assessments, and further observation and prediction of beach morphodynamics.
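Both analysis tools named above are standard. A compact sketch of the Mann-Kendall S statistic and a least-squares shoreline change rate, on synthetic inputs rather than the study's data:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: the sign test summed over all pairs.
    Positive S indicates an increasing trend, negative a decreasing one."""
    s = 0
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

def linear_rate(years, positions):
    """Least-squares shoreline change rate (metres per year) from
    per-year cross-shore positions."""
    n = len(years)
    my, mp = sum(years) / n, sum(positions) / n
    num = sum((y - my) * (p - mp) for y, p in zip(years, positions))
    den = sum((y - my) ** 2 for y in years)
    return num / den
```

In the full test, S is normalized by its variance under the no-trend null hypothesis to yield a significance level; only the raw statistic is shown here.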
In this study, the influence of confined concrete models on the response of reinforced concrete structures is investigated at the member and global system levels. Commonly encountered concrete models such as Modified Kent-Park, Saatçioğlu-Razvi, and Mander are considered. Two moment-resisting frames designed according to a pre-modern code are taken into consideration to reflect an example of an RC moment-resisting frame in the current building stock. The building is in an earthquake-prone zone located on Z3 soil type. The inelastic response of the building frame is modelled by considering the plastic hinges formed on each beam and column element for different concrete classes and stirrup spacings. The models are subjected to non-linear static analyses. The differences between confined concrete models are comparatively investigated at both the reinforced concrete member and system levels. Based on the results of the comparative analysis, it is revealed that column behaviour is most influenced by the choice of model, due to axial loads and confinement effects, while beams are less affected; it is also observed that the differences exhibited in the moment-curvature response of column cross-sections do not significantly affect the overall behaviour of the global system. This highlights the critical role of model selection relative to the concrete strength and stirrup spacing of the member.
The 3D reconstruction pipeline uses the Bundle Adjustment algorithm to refine the camera and point parameters. Bundle Adjustment is a compute-intensive algorithm, and many researchers have improved its performance by implementing it on GPUs. In the previous research work, "Improving Accuracy and Computational Burden of Bundle Adjustment Algorithm using GPUs," the authors first improved the algorithm's accuracy, reducing the mean square error using an additional radial distortion parameter and explicitly computed analytical derivatives, and then reduced the computational burden of the Bundle Adjustment algorithm using GPUs. With a naïve implementation of the CUDA code, a speedup of 10× was achieved for the largest dataset of 13,678 cameras, 4,455,747 points, and 28,975,571 projections. In this paper, we present the optimization of the Bundle Adjustment algorithm's CUDA code on GPUs to achieve a higher speedup. We propose a new data memory layout for the parameters in the Bundle Adjustment algorithm, resulting in contiguous memory access. We demonstrate that it improves the memory throughput on the GPUs, thereby improving the overall performance. We also demonstrate an increase in the computational throughput of the algorithm by optimizing the CUDA kernels to utilize the GPU resources effectively. A comparative performance study of explicitly computing an algorithm parameter versus using the Jacobians instead is presented. In the previous work, the Bundle Adjustment algorithm failed to converge for certain datasets because several block matrices of the cameras in the augmented normal equation were rank-deficient.
In this work, we identify the cameras that cause rank-deficient matrices and preprocess the datasets to ensure the convergence of the BA algorithm. Our optimized CUDA implementation achieves convergence of the Bundle Adjustment algorithm in around 22 seconds for the largest dataset, compared to 654 seconds for the sequential implementation, resulting in a speedup of 30×. The optimized CUDA implementation presented in this paper achieves a 3× speedup for the largest dataset compared to the previous naïve CUDA implementation.
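The memory-layout change can be illustrated with a small sketch: repacking per-camera parameter records (array of structures) into one contiguous array per parameter (structure of arrays) is what lets adjacent GPU threads coalesce their loads. The parameter names below are illustrative, not the paper's exact camera model:

```python
def to_structure_of_arrays(cameras):
    """Repack a list of per-camera dicts (array-of-structures layout) into
    one list per parameter (structure-of-arrays layout). On a GPU, thread i
    reading soa['fx'][i] then touches memory contiguous with its warp
    neighbours, improving memory throughput in the BA kernels."""
    keys = ("fx", "fy", "cx", "cy", "k1")  # illustrative parameter names
    return {k: [cam[k] for cam in cameras] for k in keys}

cams = [{"fx": 800.0, "fy": 800.0, "cx": 320.0, "cy": 240.0, "k1": -0.1},
        {"fx": 810.0, "fy": 805.0, "cx": 321.0, "cy": 241.0, "k1": -0.2}]
soa = to_structure_of_arrays(cams)
```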
Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, there are still challenges associated with extracting and processing finger vein patterns, related to image quality, positioning and alignment, skin conditions, security concerns, and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. The four-step process involves local normalization of brightness, image enhancement, segmentation, and cleaning. A novel image enhancement method is used to re-establish the line patterns from the brightness sum of the independent closed-form solutions of the adopted optimization criterion, derived in small windows. In the proposed method, the computational resources are reduced significantly compared to the solution derived when the whole image is processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns is obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image are removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. As the experimental results show, the proposed method performs accurate detection of the vessel network in infrared images of human fingers, applied to both real and artificial images, and can readily be applied in many image enhancement and segmentation applications.
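The final cleaning stage maps to standard morphology. A minimal 3×3 majority filter over a binary image (a simplified stand-in for the paper's dilation-plus-majority pipeline) looks like:

```python
def majority_filter(binary, rounds=1):
    """3x3 majority filter on a binary image (list of rows of 0/1):
    each pixel takes the strict-majority value of its neighbourhood,
    which removes isolated blobs left over from segmentation."""
    h, w = len(binary), len(binary[0])
    for _ in range(rounds):
        out = [row[:] for row in binary]
        for y in range(h):
            for x in range(w):
                ones = total = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            total += 1
                            ones += binary[ny][nx]
                out[y][x] = 1 if ones * 2 > total else 0
        binary = out
    return binary
```

A lone foreground pixel has at most 1 of 9 neighbours set, so it is erased, while the interior of a connected vessel segment survives.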
Through its media logic, which is defined by programmability, pervasiveness, connectivity, and datafication, WeChat establishes a non-linear, interactive, and user-dominated media environment. This media logic enhances the influence of each user within WeChat's narrative landscape and revolutionises traditional media narration methods by enabling users to generate and distribute content. In such an environment, users are able to edit, receive, and send information without constraints of time or distance, which enables delayed interactions that broaden the potential for human communication and dissemination. Additionally, WeChat partially replaces social behaviours and institutions, thereby modifying their original structures and characteristics. As individuals become more accustomed to this media environment, they progressively adjust to the forms that are appropriate for media representation on WeChat, thereby completing the mediatization of culture and society. In this process, individuals become more dependent on media and media logic, with WeChat's influence permeating social and cultural activities. This interaction has a substantial impact on society and culture, thereby aiding in their reconstruction.
The loss of Baryonic Matter through Black Holes from our spatial 3-D Universe into its 4th dimension as Dark Matter is used, along with the principle of Conservation of Angular Momentum, to prove theoretically the accelerated expansion of the 3-D Universe, which has already been confirmed experimentally and was recognized by the 2011 Nobel Prize in Physics. Theoretical calculations can further indicate the true nature of the acceleration: the outward acceleration is due to the rotation of the Universe caused by Dark Energy from the Void; the acceleration is non-linear, initially increasing from zero at a constant rate for a short period of about a million years, then leveling off non-linearly over extended time before the outward acceleration begins to decrease in a non-linear fashion until it is matched by the gravitational attraction of the matter content of 4-D space and the virtual matter in 3-D vacuum space: m = m(4D) + m(Virtual). The rotation of our 3-D Universe will become constant once all 3-D matter has entered 4-D space. As the 3-D Universe tries to expand further, it will be pulled inward by its gravitational attraction and will then keep oscillating about a final radius r_f, while also oscillating at right angles to the radius r_f around a final angular velocity ω_f, until it becomes part of the 4-D Universe. The constant value of the Angular Momentum of our Universe is L = .
The non-linear forced vibration of axially moving viscoelastic beams excited by the vibration of the supporting foundation is investigated. A non-linear partial differential equation governing the transverse motion is derived from the dynamical and constitutive equations and the geometrical relations. By invoking the quasi-static stretch assumption, the partial-differential non-linearity is reduced to an integro-partial-differential one. The method of multiple scales is applied directly to the governing equations with each of the two types of non-linearity. The amplitude of the near- and exact-resonant steady state is analyzed by using the solvability condition that eliminates secular terms. Numerical results are presented to show the contributions of the foundation vibration amplitude, viscoelastic damping, and non-linearity to the response amplitude for the first and second modes.
Funding: Supported by the Chinese Academy of Sciences Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, Shanghai Branch, University of Science and Technology of China, and the National Natural Science Foundation of China under Grant No. 11405172.
文摘The travel time data collection method is used to assist congestion management. The use of traditional sensors (e.g. inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the high costs of installation and maintenance. Application of the Global Positioning System (GPS) in travel time and delay data collection is proven to be efficient in terms of accuracy, level of detail, and required man-power. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially for large projects with high data volumes. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment to carry out data post-processing. This is a Visual Basic application that processes the data files obtained in the field and integrates them into Geographic Information Systems (GIS) for analysis and representation. The results show that this tool obtains similar results to the currently used data post-processing method, reduces the post-processing effort, and also eliminates the need for a second person during data collection.
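The core computation such a post-processing tool performs can be sketched as follows: from a timestamped GPS speed trace, derive the segment travel time and the stopped delay (time spent below a stop-speed threshold). The record format, sampling rate, and threshold are hypothetical, not the tool's actual conventions.

```python
# Hypothetical GPS log: (timestamp_s, speed_m_s) samples at 1 Hz.
# Travel time = last minus first timestamp; stopped delay = time spent
# below a stop threshold (e.g. < 1 m/s), a common definition in delay
# studies.  This is an illustrative sketch, not the GPS Calculator code.
def travel_time_and_delay(samples, stop_speed=1.0):
    times = [t for t, _ in samples]
    travel_time = times[-1] - times[0]
    delay = 0.0
    for (t0, v0), (t1, _) in zip(samples, samples[1:]):
        if v0 < stop_speed:
            delay += t1 - t0
    return travel_time, delay

samples = [(0, 12.0), (1, 8.0), (2, 0.4), (3, 0.0), (4, 6.0), (5, 13.0)]
tt, d = travel_time_and_delay(samples)
```

Automating this per-run arithmetic is precisely what removes the manual step where human errors enter during post-processing.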
基金supported by the New Century Excellent Talents in University (NCET-09-0396), the National Science & Technology Key Projects of Numerical Control (2012ZX04014-031), the Natural Science Foundation of Hubei Province (2011CDB279), and the Foundation for Innovative Research Groups of the Natural Science Foundation of Hubei Province, China (2010CDA067)
文摘When castings become complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulation become massive. On a general personal computer, these data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of casting, as well as thermal stress. The new post-processor also provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor are independent of the size of the relevant files, depending only on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values at the mouse position is appreciable, and the demands of real-time interaction are satisfied.
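The essence of the out-of-core idea can be shown in a few lines: keep the full result set on disk and read only the slice needed for the current view, so memory use is bounded by the chunk size rather than the file size. The file format and field name below are illustrative assumptions, not the post-processor's actual storage scheme.

```python
import os
import struct
import tempfile

# Write a large "simulation result" file of float64 cell values, then
# read back only a small window of it.  Peak memory is proportional to
# the chunk requested, not to the whole file -- the out-of-core premise.
CELLS = 100_000
path = os.path.join(tempfile.mkdtemp(), "temperature.bin")
with open(path, "wb") as f:
    for i in range(CELLS):
        f.write(struct.pack("<d", float(i)))

def read_chunk(path, start, count):
    """Fetch `count` float64 values starting at cell index `start`."""
    with open(path, "rb") as f:
        f.seek(start * 8)          # 8 bytes per float64
        data = f.read(count * 8)
    return list(struct.unpack(f"<{count}d", data))

view = read_chunk(path, 50_000, 4)
```

This also explains why loading time tracks the number of surface cells: only the cells actually rendered need to be fetched from disk.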
基金supported by the National Natural Science Foundation of China (51875535) and the Natural Science Foundation for Young Scientists of Shanxi Province (201701D221017, 201901D211242)
文摘To improve the ability to detect underwater targets in a strong wideband interference environment, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a high-intensity, stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, offering an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depends on the line spectra ratio (LSR) of the line spectrum. The variation of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time, and the filtering bandwidth of different algorithms yields a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are theoretically analyzed. The simulated detection gain matches the theoretical model well. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, exploits the strong correlation between adjacent frames and can further remove background fluctuation and improve the display effect.
基金This work is supported by the Academic Research Fund Tier 2, Ministry of Education, Singapore (MOE2019-T2-2-147). T.C. acknowledges support from the National Key Research and Development Program of China (2019YFA0709100, 2020YFA0714504).
文摘Creation of arbitrary features with high resolution is critically important in the fabrication of nano-optoelectronic devices. Here, sub-50 nm surface structuring is achieved directly on Sb2S3 thin films via microsphere femtosecond laser irradiation in the far field. By varying laser fluence and scanning speed, nano-feature sizes can be flexibly tuned. Such small patterns are attributed to the combined effect of microsphere focusing, two-photon absorption, the top threshold effect, and the high-repetition-rate femtosecond laser-induced incubation effect. The minimum feature size can be reduced down to ~30 nm (λ/26) by manipulating the film thickness. The fitting analysis between the ablation width and depth predicts that the feature size can be brought down to ~15 nm at a film thickness of ~10 nm. A nano-grating is fabricated, which demonstrates desirable beam diffraction performance. This nano-scale resolution would be highly attractive for next-generation laser nano-lithography in the far field and in ambient air.
基金This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/114), Taif University, Taif, Saudi Arabia.
文摘Low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed. Enhanced grey level values are computed from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram equalization based algorithms, and it has been observed from the results that PLMHE can boost the image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
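The brightness-preserving idea behind sub-histogram equalization can be sketched as follows. This is a deliberately simplified relative of PLMHE: the histogram is split at the mean grey level and each half is equalized within its own range, so the mean shifts far less than with plain histogram equalization. The power-law transform and log compression steps of the actual PLMHE are omitted here.

```python
# Simplified bi-histogram equalization in the spirit of PLMHE (not the
# paper's algorithm): split at the mean grey level and equalize each
# sub-histogram within its own output range.
def bi_histogram_equalize(pixels, levels=256):
    mean = sum(pixels) / len(pixels)
    low = sorted(p for p in pixels if p <= mean)
    high = sorted(p for p in pixels if p > mean)

    def build_map(values, lo, hi):
        # Map the cumulative distribution of `values` onto [lo, hi].
        mapping, seen = {}, 0
        for v in values:
            seen += 1
            mapping[v] = lo + (hi - lo) * seen / len(values)
        return mapping

    m = build_map(low, 0, int(mean))
    m.update(build_map(high, int(mean) + 1, levels - 1))
    return [round(m[p]) for p in pixels]

img = [10, 10, 20, 30, 200, 210, 220, 250]   # toy 8-pixel "image"
out = bi_histogram_equalize(img)
```

Because dark pixels can only map into the dark half of the range and bright pixels into the bright half, the output avoids the wholesale brightness shift that plain equalization would cause.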
文摘In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, the displacement-based elements cannot always offer sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with and without drilling DOFs. Numerical results show that by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
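For reference, the Hellinger-Reissner two-field functional underlying such hybrid post-processing can be written in its standard plane-elasticity form (this is the textbook statement, not necessarily the exact discrete form used by the elements above):

```latex
\Pi_{HR}(\boldsymbol{\sigma},\mathbf{u})
  = \int_{\Omega}\left(\boldsymbol{\sigma}:\boldsymbol{\varepsilon}(\mathbf{u})
      - \tfrac{1}{2}\,\boldsymbol{\sigma}:\mathbf{S}:\boldsymbol{\sigma}\right)\mathrm{d}\Omega
  - \int_{\Omega}\mathbf{b}\cdot\mathbf{u}\,\mathrm{d}\Omega
  - \int_{\Gamma_t}\bar{\mathbf{t}}\cdot\mathbf{u}\,\mathrm{d}\Gamma ,
```

where S is the compliance tensor, b the body force, and t̄ the prescribed boundary tractions. Stationarity with respect to the stress field σ enforces the constitutive relation ε(u) = S:σ only weakly, which is what allows an independently assumed stress field to recover better stresses than differentiating the displacement solution.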
基金supported by Comunidad de Madrid within the framework of the Multiannual Agreement with Universidad Politécnica de Madrid to encourage research by young doctors (PRINCE).
文摘Cyber-Physical Systems are very vulnerable to sparse sensor attacks. But current protection mechanisms employ linear and deterministic models which cannot detect attacks precisely. Therefore, in this paper, we propose a new non-linear generalized model to describe Cyber-Physical Systems. This model includes unknown multivariable discrete and continuous-time functions and different multiplicative noises to represent the evolution of physical processes and random effects in the physical and computational worlds. Besides, the digitalization stage in hardware devices is represented too. Attackers and the most critical sparse sensor attacks are described through a stochastic process. The reconstruction and protection mechanisms are based on a weighted stochastic model. The error probability in data samples is estimated through different indicators commonly employed in non-linear dynamics (such as the Fourier transform, first-return maps, or the probability density function). A decision algorithm calculates the final reconstructed value considering the previous error probability. An experimental validation based on simulation tools and real deployments is also carried out. Both the performance and the scalability of the new technology are studied. Results prove that the proposed solution protects Cyber-Physical Systems against up to 92% of attacks and perturbations, with a computational delay below 2.5 s. The proposed model shows linear complexity, as recursive or iterative structures are not employed, just algebraic and probabilistic functions. In conclusion, the new model and reconstruction mechanism can successfully protect Cyber-Physical Systems against sparse sensor attacks, even in dense or pervasive deployments and scenarios.
文摘In this paper, we study the existence of transcendental meromorphic solutions of the delay differential equations , where a(z) is a rational function and are polynomials in w(z) with rational coefficients, and k is a positive integer. Under the assumption that the above equations admit transcendental meromorphic solutions of minimal hyper-type, we derive concrete conditions on the degree of their right-hand sides. In particular, when w(z)=0 is a root of , its multiplicity is at most k. Some examples are given to illustrate that our results are sharp.
文摘Magnesium (Mg) and its alloys are emerging as structural materials for the aerospace, automobile, and electronics industries, driven by the imperative of weight reduction. They are also drawing notable attention in the medical industries owing to their biodegradability and an elastic modulus close to that of bone. The ability to manufacture near-net-shape products featuring intricate geometries has sparked huge interest in additive manufacturing (AM) of Mg alloys, reflecting a transformation in the manufacturing sectors. However, AM of Mg alloys presents formidable challenges due to inherent properties, particularly susceptibility to oxidation, gas trapping, a high thermal expansion coefficient, and a low solidification temperature. These lead to defects such as porosity, lack of fusion, cracking, delamination, residual stresses, and inhomogeneity, ultimately influencing the mechanical, corrosion, and surface properties of AM Mg alloys. To address these issues, post-processing of AM Mg alloys is often needed to make them suitable for application. The present article reviews all post-processing techniques applied to AM Mg alloys to date, including heat treatment, hot isostatic pressing, friction stir processing, and surface peening. The utilization of these methods within hybrid AM processes, employing interlayer post-processing, is also discussed. Optimal post-processing conditions are reported, and their influence on the microstructure, mechanical, and corrosion properties is detailed. Additionally, future prospects and research directions are proposed.
文摘Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analyzing generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study in this paper to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
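One of the standard post-processing steps mentioned above, pruning redundant rules, can be sketched concisely: a rule is dropped when a more general rule (antecedent a subset of its antecedent, same consequent) achieves at least equal confidence. The rule data below are illustrative only; this is not THAPE's actual pruning criterion.

```python
# Prune rule (A -> C) as redundant when some (A' -> C) with A' a proper
# subset of A has confidence >= that of (A -> C).  Rules are
# (antecedent frozenset, consequent, confidence) triples.
def prune_redundant(rules):
    kept = []
    for ant, cons, conf in rules:
        redundant = any(
            a < ant and c == cons and cf >= conf
            for a, c, cf in rules
        )
        if not redundant:
            kept.append((ant, cons, conf))
    return kept

rules = [
    (frozenset({"bread"}), "butter", 0.80),
    (frozenset({"bread", "milk"}), "butter", 0.78),  # redundant: weaker
    (frozenset({"bread", "jam"}), "butter", 0.95),   # kept: adds lift
]
kept = prune_redundant(rules)
```

Filters like this are how a raw set of 11,265 generated rules can be reduced toward the small, interpretable subset a retailer can actually act on.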
基金supported by the National Natural Science Foundation of China (Project No.42375192)the China Meteorological Administration Climate Change Special Program (CMA-CCSP+1 种基金Project No.QBZ202315)support by the Vector Stiftung through the Young Investigator Group"Artificial Intelligence for Probabilistic Weather Forecasting."
文摘Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping between the combined output from the nodes of the last hidden layer and the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models, ranging from the naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration amongst all competitors. Furthermore, the proposed conception to resolve quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
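The non-crossing construction described above can be sketched in a few lines: keep the first raw output free and add strictly positive increments (softplus of the remaining raw outputs), which is equivalent to multiplying positive activations by a lower-triangular matrix of ones, so the quantiles are ordered by construction. This is a minimal sketch of the principle, not the NCQRNN layer itself.

```python
import math

def softplus(x):
    """Strictly positive, smooth: log(1 + e^x) > 0 for all x."""
    return math.log1p(math.exp(x))

def non_crossing_quantiles(raw):
    """Map unconstrained raw outputs r_1..r_K to quantiles
    q_1 <= q_2 <= ... <= q_K via cumulative positive increments."""
    q = [raw[0]]
    for r in raw[1:]:
        q.append(q[-1] + softplus(r))
    return q

raw = [2.0, -3.0, 0.5, -1.0, 4.0]   # arbitrary, unordered raw outputs
q = non_crossing_quantiles(raw)
```

Because every increment is strictly positive, no pair of predicted quantiles can cross, regardless of what the upstream network emits; the network simply learns the increments instead of the levels.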
文摘The effects of a magnetic dipole on a nonlinear thermally radiative ferromagnetic liquid flowing over a stretched surface in the presence of Brownian motion and thermophoresis are investigated. By means of a similarity transformation, ordinary differential equations are derived and solved afterwards using a numerical (the BVP4C) method. The impact of various parameters on the velocity, temperature, and concentration is presented graphically. It is shown that the nanoparticle properties, in conjunction with the magnetic dipole effect, can increase the thermal conductivity of the engineered nanofluid and, consequently, the heat transfer. Comparison with earlier studies indicates high accuracy and effectiveness of the numerical approach. An increase in the Brownian motion parameter and the thermophoresis parameter enhances the concentration and the related boundary layer. The skin friction rises when the viscosity parameter is increased. A larger value of the ferromagnetic parameter results in a higher skin friction and, vice versa, in a smaller Nusselt number.
基金The National Natural Science Foundation of China under contract No. 42071007 and the Natural Science Foundation of Hainan Province under contract Nos 422RC665, 421QN0883, and 423RC553
文摘Based on high-tide shoreline data extracted from 87 Landsat satellite images from 1986 to 2019, as well as using the linear regression rate and a Mann-Kendall (M–K) trend test, this study analyzes the linear characteristics and nonlinear behavior of the medium- to long-term shoreline evolution of Jinghai Bay, eastern Guangdong Province. In particular, shoreline rotation caused by a shore-normal coastal structure is emphasized. The results show that the overall shoreline evolution over the past 30 years is characterized by erosion on the southwest beach, with an average erosion rate of 3.1 m/a, and significant accretion on the northeast beach, with an average accretion rate of 5.6 m/a. Results of the M–K trend test indicate that significant shoreline changes occurred in early 2006, which can be attributed to shore-normal engineering. Prior to that construction, the shorelines were slightly eroded, with an average erosion rate of 0.7 m/a. After the shore-normal engineering was completed, however, the shoreline was characterized by significant erosion (3.2 m/a) on the southwest beach and significant accretion (8.5 m/a) on the northeast beach, indicating that the shore-normal engineering at the updrift headland contributed to clockwise shoreline rotation. Further analysis shows that the clockwise shoreline rotation is promoted not only by longshore sediment transport from southwest to northeast, but also by cross-shore sediment transport. These findings are crucial for beach erosion risk management, coastal disaster zoning, regional sediment budget assessments, and further observation and prediction of beach morphodynamics.
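The Mann-Kendall trend test used above is simple enough to state in full. The following is a minimal implementation without tie correction, applied to an illustrative shoreline-position series (the values are invented, not the Jinghai Bay data):

```python
import math

def mann_kendall(x):
    """Minimal Mann-Kendall trend test (no tie correction).
    Returns the S statistic and the standard normal score Z;
    |Z| > 1.96 indicates a significant monotonic trend at the 5% level."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

# Steadily accreting shoreline positions (m), illustrative values only.
positions = [0, 5, 11, 18, 22, 30, 33, 41, 46, 52]
s, z = mann_kendall(positions)
```

Being rank-based, the test is robust to outliers in the extracted shoreline positions, which is why it pairs well with the linear regression rate for detecting the 2006 change point.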
文摘In this study, the influence of confined concrete models on the response of reinforced concrete structures is investigated at member and global system levels. The commonly encountered concrete models such as Modified Kent-Park, Saatçioğlu-Razvi, and Mander are considered. Two moment-resisting frames designed according to the pre-modern code are taken into consideration to reflect the example of an RC moment-resisting frame in the current building stock. The building is in an earthquake-prone zone located on Z3 Soil Type. The inelastic response of the building frame is modelled by considering the plastic hinges formed on each beam and column element for different concrete classes and stirrup spacings. The models are subjected to non-linear static analyses. The differences between confined concrete models are comparatively investigated at both the reinforced concrete member and system levels. Based on the results of the comparative analysis, it is revealed that the column behaviour is mostly influenced by the choice of model, due to axial loads and confinement effects, while the beams are less affected; it is also observed that the differences exhibited in the moment-curvature response of column cross-sections do not significantly affect the overall behaviour of the global system. This highlights the critical role of model selection relative to the concrete strength and stirrup spacing of the member.
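To make the shape of such confined concrete models concrete, here is a simplified Kent-Park-type stress-strain curve: a parabola up to the confined peak (K·fc at strain 0.002K) followed by a linear descending branch of slope Z, floored at a residual of 0.2K·fc. K and Z are taken as given inputs here; the full model derives them from the stirrup volumetric ratio and spacing, and this sketch is not the exact formulation of any of the three models named above.

```python
# Simplified Kent-Park-type confined concrete stress-strain curve.
# K: strength-enhancement factor from confinement; Z: descending slope.
def confined_stress(eps, fc, K, Z):
    eps0 = 0.002 * K                     # strain at confined peak stress
    if eps <= eps0:
        r = eps / eps0
        return K * fc * (2 * r - r * r)  # ascending parabola
    # linear softening, floored at the 20% residual plateau
    return max(K * fc * (1 - Z * (eps - eps0)), 0.2 * K * fc)

fc, K, Z = 25.0, 1.2, 60.0        # MPa; illustrative confinement values
peak = confined_stress(0.002 * K, fc, K, Z)
residual = confined_stress(0.05, fc, K, Z)
```

Closer stirrup spacing raises K and flattens Z, which is exactly why the column hinges, carrying axial load on the confined core, are so much more sensitive to the model choice than the beam hinges.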
文摘The 3D reconstruction pipeline uses the Bundle Adjustment algorithm to refine the camera and point parameters. The Bundle Adjustment algorithm is a compute-intensive algorithm, and many researchers have improved its performance by implementing the algorithm on GPUs. In the previous research work, "Improving Accuracy and Computational Burden of Bundle Adjustment Algorithm using GPUs," the authors first demonstrated an algorithmic performance improvement of Bundle Adjustment, reducing the mean square error by using an additional radial distortion parameter and explicitly computed analytical derivatives, and then reduced the computational burden of the algorithm using GPUs. With the naïve implementation of the CUDA code, a speedup of 10× for the largest dataset of 13,678 cameras, 4,455,747 points, and 28,975,571 projections was achieved. In this paper, we present the optimization of the Bundle Adjustment algorithm CUDA code on GPUs to achieve a higher speedup. We propose a new data memory layout for the parameters in the Bundle Adjustment algorithm, resulting in contiguous memory access. We demonstrate that it improves the memory throughput on the GPUs, thereby improving the overall performance. We also demonstrate an increase in the computational throughput of the algorithm by optimizing the CUDA kernels to utilize the GPU resources effectively. A comparative performance study of explicitly computing an algorithm parameter versus using the Jacobians instead is presented. In the previous work, the Bundle Adjustment algorithm failed to converge for certain datasets because several block matrices of the cameras in the augmented normal equation were rank-deficient. In this work, we identify the cameras that cause rank-deficient matrices and preprocess the datasets to ensure the convergence of the BA algorithm.
Our optimized CUDA implementation achieves convergence of the Bundle Adjustment algorithm in around 22 seconds for the largest dataset compared to 654 seconds for the sequential implementation, resulting in a speedup of 30×. Our optimized CUDA implementation presented in this paper has achieved a 3× speedup for the largest dataset compared to the previous naïve CUDA implementation.
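The layout change behind the contiguous-memory claim can be illustrated in miniature: converting camera parameters from an array-of-structures (parameters interleaved per camera) to a structure-of-arrays puts each parameter type in one contiguous run, which is what enables coalesced reads on the GPU. The 9-parameter camera block and the field order below are illustrative assumptions, not the paper's actual layout.

```python
from array import array

N_PARAMS = 9          # 3 rotation + 3 translation + focal + 2 distortion

def aos_to_soa(aos, n_params=N_PARAMS):
    """Regroup interleaved per-camera parameters (AoS) into one
    contiguous array per parameter type (SoA)."""
    n_cams = len(aos) // n_params
    return [
        array("d", (aos[c * n_params + p] for c in range(n_cams)))
        for p in range(n_params)
    ]

# Two hypothetical cameras, parameters interleaved per camera (AoS).
aos = array("d", [float(i) for i in range(2 * N_PARAMS)])
soa = aos_to_soa(aos)
# soa[7] now holds the first radial-distortion term of every camera in
# one contiguous run -- the access pattern a CUDA kernel can coalesce.
```

In the AoS layout, a kernel touching only the focal lengths strides through memory with an 8-parameter gap between reads; in the SoA layout the same kernel reads a dense run, which is the memory-throughput gain the paper measures.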
文摘Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, there are still challenges associated with extracting and processing finger vein patterns, related to image quality, positioning and alignment, skin conditions, security concerns, and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. The method involves a four-step process: local normalization of brightness, image enhancement, segmentation, and cleaning. A novel image enhancement method was used to re-establish the line patterns from the brightness sum of the independent closed-form solutions of the adopted optimization criterion derived in small windows. In the proposed method, the computational resources were reduced significantly compared to the solution derived when the whole image was processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns was obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image were removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. The proposed method performs accurate detection of the vessel network in human finger infrared images, as the experimental results show, applied both to real and artificial images, and can readily be applied in many image enhancement and segmentation applications.
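The final cleaning step, majority filtering, is simple to state: in a 3×3 neighbourhood, a binary pixel becomes 1 only if at least 5 of the 9 pixels are 1, which removes isolated blobs and fills pinholes. A minimal sketch (border pixels use their clamped neighbourhood; whether the paper handles borders this way is an assumption):

```python
# 3x3 binary majority filter: out = 1 iff >= 5 of the 9 neighbours
# (including the pixel itself) are 1.
def majority_filter(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ones = sum(
                img[yy][xx]
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))
            )
            out[y][x] = 1 if ones >= 5 else 0
    return out

noisy = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1],   # isolated speck in the corner
]
clean = majority_filter(noisy)
```

The isolated corner speck has only two set pixels in its neighbourhood and is erased, while the solid vessel-like block survives, which is exactly the blob-removal behaviour wanted after thresholding.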
文摘Through its media logic, which is defined by programmability, pervasiveness, connectivity, and datafication, WeChat establishes a non-linear, interactive, and user-dominated media environment. This media logic enhances the influence of each user within WeChat's narrative landscape and revolutionises traditional media narration by enabling users to generate and distribute content. In such an environment, users are able to edit, receive, and send information without constraints of time or distance, which enables delayed interactions that broaden the potential for human communication and dissemination. Additionally, WeChat partially replaces social behaviours and institutions, thereby modifying their original structures and characteristics. As individuals become more accustomed to this media environment, they progressively adjust to the forms that are appropriate for media representation on WeChat, thereby completing the mediatization of culture and society. In this process, individuals are becoming more dependent on media and media logic, with WeChat's influence permeating social and cultural activities through its media logic. This interaction has a substantial impact on society and culture, thereby aiding in their reconstruction.
文摘The loss of Baryonic Matter through Black Holes from our spatial 3-D Universe into its 4th dimension as Dark Matter, is used along with the Conservation of Angular Momentum Principle to prove theoretically the accelerated expansion of the 3-D Universe, as has already been confirmed experimentally being awarded the 2011 Nobel Prize in Physics. Theoretical calculations can estimate further to indicate the true nature of the acceleration;that the outward acceleration is due to the rotation of the Universe caused by Dark Energy from the Void, that the acceleration is non-linear, initially increasing from zero for the short period of about a Million years at a constant rate, and then leveling off non-linearly over extended time before the outward acceleration begins to decrease in a non-linear fashion until it is matched by the gravitational attraction of the matter content of 4D Space and the virtual matter in 3-D Vacuum Space. m = m(4D) + m(Virtual). The rotation of our 3D Universe will become constant once all 3D matter has entered 4D space. As the 3-D Universe tries to expand further it will be pulled inward by its gravitational attraction and will then keep on oscillating about a final radius r<sub>f</sub> while it also keeps on oscillating at right angles to the radius r<sub>f</sub> around final angular velocity ω<sub>f</sub>, until it becomes part of the 4-D Universe. The constant value of the Angular Momentum of our Universe is L = .
基金Project supported by the National Natural Science Foundation of China (No. 10472060), the Natural Science Foundation of Shanghai Municipality (No. 04ZR14058), and the Doctor Start-up Foundation of Shenyang Institute of Aeronautical Engineering (No. 05YB04).
文摘The non-linear forced vibration of axially moving viscoelastic beams excited by the vibration of the supporting foundation is investigated. A non-linear partial-differential equation governing the transverse motion is derived from the dynamical and constitutive equations and the geometrical relations. By invoking the quasi-static stretch assumption, the partial-differential non-linearity is reduced to an integro-partial-differential one. The method of multiple scales is directly applied to the governing equations with the two types of non-linearity, respectively. The amplitude of the near- and exact-resonant steady states is analyzed by use of the solvability condition for eliminating secular terms. Numerical results are presented to show the contributions of foundation vibration amplitude, viscoelastic damping, and non-linearity to the response amplitude for the first and the second mode.