Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8%, and the potential generation rate adopting a 32×32 APD array is up to tens of Gbit/s.
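The abstract does not spell out the exact comparison rule, but a von Neumann-style pairing of consecutive detector responses is a standard way to obtain unbiased bits from biased detections; the sketch below uses a hypothetical mapping to illustrate the principle.

```python
import random

def extract_bits(detections):
    """Von Neumann-style extraction from consecutive detector responses:
    the pair (click, no-click) gives 0, (no-click, click) gives 1, and
    equal pairs are discarded, so the bits are unbiased even when the
    per-pulse detection probability is not 0.5. (Hypothetical mapping;
    the paper does not spell out its exact comparison rule.)"""
    bits = []
    for a, b in zip(detections[0::2], detections[1::2]):
        if a != b:
            bits.append(0 if a else 1)
    return bits

# Simulated APD responses to optical pulses with a biased detection probability of 0.3
random.seed(0)
clicks = [random.random() < 0.3 for _ in range(10_000)]
bits = extract_bits(clicks)
ones = sum(bits) / len(bits)
print(f"{len(bits)} bits, fraction of ones = {ones:.3f}")
```

Discarding equal pairs is what makes the output independent of the detection bias, at the cost of throughput, which is why the abstract reports a generation efficiency rather than a raw rate.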
In order to carry out numerical simulation using geologic structural data obtained from Landmark (a seismic interpretation system), underground geological structures are abstracted into mechanical models which can reflect actual situations and facilitate computation and analysis. Given the importance of model building, further processing methods for traditional seismic interpretation results from Landmark should be studied so that the processed results can be used directly in numerical simulation computations. Through this data conversion procedure, Landmark and FLAC (a widely used stress-analysis package) are seamlessly connected. Thus, the format conversion between the two systems and the pre- and post-processing in simulation computation are realized. A practical application indicates that this method has many advantages, such as simple operation, high accuracy of element subdivision and high speed, which satisfy the actual needs of floor grid cutting.
Travel time data collection is used to assist congestion management. The use of traditional sensors (e.g., inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the expensive costs of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection has proven efficient in terms of accuracy, level of detail and required man-power. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially in big projects with a high amount of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into a Geographic Information System (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and eliminates the need for a second person during data collection.
As castings become more complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulation become more massive. On a general personal computer, these massive numerical data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of casting, as well as thermal stress, and provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor are independent of the size of the relevant files and depend only on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values under the mouse is appreciable, and the demands of real-time interaction are satisfied.
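The core of the out-of-core idea can be sketched with a memory-mapped file: only the pages actually touched are brought into RAM, so memory usage no longer scales with the full file size. This is an illustrative sketch only; the paper's post-processor additionally restricts rendering to the surface cells of the mesh.

```python
import numpy as np, tempfile, os

# Write a simulated temperature field for a 64^3 casting mesh to disk.
path = os.path.join(tempfile.mkdtemp(), "field.dat")
shape = (64, 64, 64)
np.fromfunction(lambda i, j, k: i + j + k, shape, dtype=np.float32).tofile(path)

# Access it out-of-core: np.memmap keeps the data on disk and pages in
# only what is read, here a single boundary plane of the volume.
field = np.memmap(path, dtype=np.float32, mode="r", shape=shape)
surface_slice = np.array(field[0])   # read one boundary plane only
print(surface_slice.shape, float(surface_slice.max()))
```

Reading one plane touches 64×64 float32 values (16 KB) regardless of how large the full volume file is, which mirrors the paper's claim that memory usage depends on the surface cells rather than the file size.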
To improve the ability of detecting underwater targets in strong wideband interference environments, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a highly intense and stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depend on the line spectra ratio (LSR). The variation of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time and the filtering bandwidth of different algorithms yields a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are theoretically analyzed. The simulated detection gain demonstrates a good match with the theoretical model. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake; in fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, makes use of the strong correlation between adjacent frames and can further remove background fluctuation and improve the display effect.
This paper proposes improvements to a low bit rate parametric audio coder with a sinusoid model as its kernel. Firstly, we propose a new method to effectively order and select the perceptually most important sinusoids: the sinusoid which contributes most to the reduction of the overall noise-to-mask ratio (NMR) is chosen. Combined with our improved parametric psychoacoustic model and advanced peak riddling techniques, the number of sinusoids required can be greatly reduced and the coding efficiency greatly enhanced. A lightweight version is also given to reduce the amount of computation with only a small sacrifice of performance. Secondly, we propose two enhancement techniques for sinusoid synthesis: bandwidth enhancement and line enhancement. With little overhead, the effective bandwidth can be extended by one more octave, and the timbre tends to sound much brighter, thicker and more pleasant.
Low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter the inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. The method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed, from which enhanced grey level values are obtained. The performance of the PLMHE algorithm is compared with traditional histogram equalization based algorithms, and the results show that PLMHE can boost image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
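The partition-then-equalize idea behind such bi-histogram methods can be sketched briefly. The sketch below applies a power-law transform, splits the histogram at the mean grey level, and equalizes each sub-histogram independently; it deliberately omits PLMHE's log-compression and dispersion-modification steps, so it is a simplified stand-in rather than the paper's algorithm.

```python
import numpy as np

def plmhe_sketch(img, gamma=0.8):
    """Simplified sketch: power-law transform, split the histogram at the
    mean grey level, equalize each sub-histogram within its own range.
    Keeping each half inside its own range is what limits the mean
    brightness shift relative to plain global equalization."""
    x = (img.astype(np.float64) / 255.0) ** gamma        # power-law step
    x = np.round(x * 255).astype(np.int64)
    m = int(x.mean())                                    # partition point
    out = np.empty_like(x)
    for lo, hi in ((0, m), (m + 1, 255)):
        mask = (x >= lo) & (x <= hi)
        if not mask.any():
            continue
        hist = np.bincount(x[mask], minlength=256)[lo:hi + 1]
        cdf = np.cumsum(hist) / hist.sum()               # sub-histogram CDF
        out[mask] = lo + np.round(cdf[x[mask] - lo] * (hi - lo)).astype(np.int64)
    return out.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(80, 170, size=(64 * 64,))             # low-contrast grey levels
enh = plmhe_sketch(img)
print(img.min(), img.max(), "->", enh.min(), enh.max())
```

Each half of the histogram is stretched over its own sub-range, so the output spans nearly the full dynamic range while low and high grey levels keep their relative order.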
In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, displacement-based elements cannot always provide sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOF. Numerical results show that the proposed method improves the accuracy of the stress solutions of these two displacement-based plane elements.
Pre- and post-selected (PPS) measurement, especially weak PPS measurement, has proved to be a useful tool for measuring extremely tiny physical parameters. However, it is difficult to retain both the attainable highest measurement sensitivity and precision as the parameter to be measured increases. Here, a modulated PPS measurement scheme based on coupling-strength-dependent modulation is presented, in which the highest sensitivity and precision are retained for an arbitrary coupling strength. This idea is demonstrated by comparing the modulated PPS measurement scheme with the standard PPS measurement scheme in the case of an unbalanced input meter. Using the Fisher information metric, we derive the optimal pre- and post-selected states, as well as the optimal coupling-strength-dependent modulation, without any restriction on the coupling strength. We also give a specific strategy for performing the modulated PPS measurement scheme, which may promote practical applications of this scheme in precision metrology.
Background: Despite free tuberculosis (TB) care in Pakistan, patients still bear high costs, which push them into poverty. This study estimated the pre- and post-diagnosis costs households bear for TB care, and investigated coping mechanisms, among adults ≥18 years in Karachi, Pakistan. Methods: We conducted a cross-sectional study comprising 516 TB patients who had completed at least one month of intensive treatment, recruited from four public sector health facilities of two institutes in Karachi, Pakistan. A standardized questionnaire estimating patients' costs was administered. The study outcomes were direct medical and non-medical costs and indirect costs, estimated for the pre-diagnostic phase and the post-diagnostic phase, which includes the diagnostic, treatment and hospitalization phases. A descriptive analysis including mean and standard deviation (±SD), median and interquartile range (IQR), and frequencies and proportions (%) was employed. Results: Of the 516 TB patients, 52.1% were female, with a mean age of 32.4 (±13.7) years. The median costs per patient during the pre-diagnostic, diagnostic, treatment and hospitalization periods were estimated at USD 63.8/PKR 7,377, USD 24/PKR 2,755, USD 10.5/PKR 1,217 and USD 349.0/PKR 40,300, respectively. The total household median cost was estimated at USD 129.2/PKR 14,919 per patient, and the median indirect cost at USD 52.0/PKR 5,950 per patient. At the onset of symptoms, 54.1% of patients consulted private providers in the first place, while 36% attended public healthcare services and 5% and 4.1% went to a dispensary or pharmacy, respectively, as the first point of care. Conclusions: TB patients bear substantial out-of-pocket costs before they are enrolled in publicly funded TB programs. There should be provision of transport and food vouchers, as well as health insurance for in-patient treatment. These findings call for a critical investigation into the existing financial support network for TB patients in Pakistan to reduce the burden.
Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping from the combined output of the last hidden layer's nodes to the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models, ranging from naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration among all competitors. Furthermore, the proposed conception for resolving quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
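The non-decreasing output mapping described above can be sketched in a few lines: accumulating one free value plus strictly positive increments is equivalent to multiplying by a lower-triangular matrix with positive entries, so the resulting quantiles cannot cross. This is an illustration of the layer's mechanics only, not the authors' code.

```python
import numpy as np

# Sketch of the non-crossing output layer: a free first value plus
# strictly positive increments, accumulated with a cumulative sum.
# Whatever the hidden layer produces, the nine quantile outputs are
# monotone by construction. (In NCQRNN this layer sits inside a QRNN
# trained with the pinball loss.)
rng = np.random.default_rng(1)
hidden = rng.normal(size=(5, 9))                    # batch of 5, 9 raw node outputs
increments = np.concatenate(
    [hidden[:, :1], np.exp(hidden[:, 1:])], axis=1  # exp keeps increments > 0
)
quantiles = np.cumsum(increments, axis=1)           # q_1 <= q_2 <= ... <= q_9
print(np.all(np.diff(quantiles, axis=1) > 0))       # monotone across quantiles
```

Because monotonicity is enforced structurally rather than by penalties or post-hoc sorting, the constraint holds exactly for every input, which is what makes the idea portable to other network architectures.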
Magnesium (Mg) and its alloys are emerging as structural materials for the aerospace, automobile and electronics industries, driven by the imperative of weight reduction. They are also drawing notable attention in the medical industries owing to their biodegradability and a low elastic modulus comparable to that of bone. The ability to manufacture near-net-shape products featuring intricate geometries has sparked huge interest in additive manufacturing (AM) of Mg alloys, reflecting a transformation in the manufacturing sectors. However, AM of Mg alloys presents formidable challenges due to inherent properties, particularly susceptibility to oxidation, gas trapping, a high thermal expansion coefficient and a low solidification temperature. These lead to defects such as porosity, lack of fusion, cracking, delamination, residual stresses and inhomogeneity, ultimately influencing the mechanical, corrosion and surface properties of AM Mg alloys. To address these issues, post-processing of AM Mg alloys is often needed to make them suitable for application. The present article reviews all post-processing techniques adapted for AM Mg alloys to date, including heat treatment, hot isostatic pressing, friction stir processing and surface peening. The utilization of these methods within hybrid AM processes, employing interlayer post-processing, is also discussed. Optimal post-processing conditions are reported, and their influence on microstructure, mechanical and corrosion properties is detailed. Additionally, future prospects and research directions are proposed.
Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial, not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both, we aim to enhance the quality of the generated rules: removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits retailers' specific goals, enabling them to gain a better understanding of customer behavior based on actual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness, and successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
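As a concrete illustration of the redundancy-removal step such pipelines perform, the sketch below prunes a rule whenever a subset antecedent predicts the same consequent with at least the same confidence. This is a generic criterion with made-up rules, not THAPE's exact procedure.

```python
# Each rule is (antecedent item set, consequent, confidence); values are invented.
rules = [
    ({"bread"}, "butter", 0.80),
    ({"bread", "milk"}, "butter", 0.78),  # redundant: superset antecedent, lower confidence
    ({"bread", "milk"}, "eggs", 0.60),
]

def prune_redundant(rules):
    """Keep a rule only if no strictly more general rule (subset
    antecedent, same consequent) matches it with >= confidence."""
    kept = []
    for ant, cons, conf in rules:
        redundant = any(
            a < ant and c == cons and cf >= conf for a, c, cf in rules
        )
        if not redundant:
            kept.append((ant, cons, conf))
    return kept

pruned = prune_redundant(rules)
print(len(rules), "->", len(pruned))
```

The more general rule carries the same predictive message with broader applicability, which is why descriptive post-processing drops the longer variant, shrinking rule sets of the 11,265-rule scale mentioned above to something reviewable.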
AIM: To understand the current situation of parental perspectives, knowledge and practices concerning myopia prevention and control for both pre-school and school-aged children. METHODS: This was a cross-sectional survey involving children aged 0 to 15 years and their parents. Participants responded to an online questionnaire by scanning a quick response (QR) code. The questionnaire consisted of 25 tick-box questions and was open from December 22, 2022, to January 5, 2023. The dioptric traits of the children, the visual status and educational background of the parents, the parental perspectives towards myopia and its risks, and the parents' knowledge and practices related to myopia prevention and control were recorded and measured. The Chi-square test and binomial logistic regression were used for statistics. RESULTS: In total, 350 parents responded to the questionnaire. The prevalence and severity of myopia among the surveyed children exhibited a positive correlation with advancing age (P<0.001 and P=0.004, respectively). Nearly half of parents with myopic children considered that myopia did not pose any health threat and could be effectively corrected (P<0.001). Parents holding a master's or doctoral degree demonstrated a better understanding of children's vision standards for each age group (P=0.001), and 31.63% of them had an initial vision screening for their children at the age of 0 to 3 years, while parents with a bachelor's degree (34.04%) or below (32.43%) mainly initiated vision examinations for their children at the age of 4 to 6 years (P=0.05). Parents with a master's or doctoral degree also exhibited more rational practices concerning outdoor time (P=0.048) and sleep time (P=0.044). No other significant discrepancy was observed among the different educational groups in additional conceptions of myopia, such as hyperopia reserve, axial length, and corneal curvature alterations. Most parents preferred conventional interventions, such as enhancing indoor lighting conditions (80.00%) and ensuring appropriate reading posture and distance (71.71%). CONCLUSION: The current status of parental knowledge and practices about myopia prevention and control remains outdated and deficient. The administrative department should implement efficacious and adaptable measures to enhance parental awareness and foster commitment to myopia prevention and control.
Background: Weed infestation in cotton offers severe competition and can cause yield reduction to a large extent. Weeding via cultural practices is time-consuming, tedious and expensive due to the long duration of the cotton crop and regular monsoon rains during cotton production in India. Chemical weed control has been used successfully in cotton in the recent past. However, continuous use of similar herbicides leads to herbicide resistance in weeds, and when sprayed in the field, herbicides not only suppress weeds but leave undesirable residues in the soil that are hazardous to the environment. Therefore, a study was performed in the cotton research area at Chaudhary Charan Singh Haryana Agricultural University, Hisar, Haryana during two consecutive kharif seasons (2020 and 2021) to determine the most suitable and sustainable weed management strategy through the integration of chemical and cultural methods. Results: Mulching with rice straw at 7.5 t·ha^(-1) resulted in significantly higher cotton seed yield (3189 and 3084 kg·ha^(-1)) and better weed control in comparison with no-mulch treatments (2990 and 2904 kg·ha^(-1)) in 2020 and 2021, respectively. Among the weed management levels, the significantly lowest cotton seed yield was recorded in the untreated control (1841 and 1757 kg·ha^(-1) during 2020 and 2021, respectively), while all other treatments were statistically at par with each other during both years of crop experimentation. Conclusion: Mulching with rice straw at 7.5 t·ha^(-1) along with a pre-emergence application of pendimethalin (active ingredient) at 1.5 kg·ha^(-1), followed by one hoeing at 45 days after sowing (DAS) and glyphosate at 2 kg·ha^(-1) (shielded spray) at 90 DAS, is a viable option for effective control of grassy and broadleaved weeds in Bt cotton in north-west India.
Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, there are still challenges associated with extracting and processing finger vein patterns, related to image quality, positioning and alignment, skin conditions, security concerns and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. The four-step process involves local normalization of brightness, image enhancement, segmentation and cleaning. A novel image enhancement method re-establishes the line patterns from the brightness sum of independent closed-form solutions of the adopted optimization criterion, derived in small windows; this reduces the computational resources significantly compared to the solution obtained by processing the whole image. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns is obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image are removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. As the experimental results on both real and artificial images show, the proposed method performs accurate detection of the vessel network in human finger infrared images and can readily be applied in many image enhancement and segmentation applications.
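The four-step structure of the pipeline can be sketched as below. The enhancement and thresholding stages here are generic stand-ins (difference of Gaussians and a percentile threshold) for the paper's windowed closed-form enhancement and local entropy thresholding, so this shows the skeleton of the process rather than the proposed method itself.

```python
import numpy as np
from scipy import ndimage

def segment_lines(img):
    """Four-step skeleton: normalize, enhance, segment, clean.
    Stand-in operators are used for steps 2 and 3 (see lead-in)."""
    img = img.astype(np.float64)
    # 1. local brightness normalization against a heavily smoothed background
    norm = img - ndimage.gaussian_filter(img, sigma=8)
    # 2. line/ridge enhancement (stand-in: difference of Gaussians)
    enh = ndimage.gaussian_filter(norm, 1) - ndimage.gaussian_filter(norm, 3)
    # 3. segmentation: keep the darkest (most concave) pixels
    binary = enh < np.percentile(enh, 20)
    # 4. cleaning: dilation with a line element, then a majority-style filter
    binary = ndimage.binary_dilation(binary, structure=np.ones((1, 5), bool))
    cleaned = ndimage.median_filter(binary.astype(np.uint8), size=3)
    return cleaned.astype(bool)

# Synthetic strongly blurred image with one dark horizontal "vein".
img = np.full((64, 64), 200.0)
img[30:33, :] = 80.0
img = ndimage.gaussian_filter(img, sigma=2)
mask = segment_lines(img)
print(mask.shape, bool(mask[31, 32]))
```

Even with the simplified operators, the dark line survives the blur and is recovered as a connected mask, while the majority-style filter suppresses isolated responses.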
In the present computational fluid dynamics (CFD) community, post-processing is regarded as a procedure to view parameter distributions, detect characteristic structures and reveal physical mechanisms of fluid flow based on computational or experimental results. Field plots by contours, iso-surfaces, streamlines, vectors and others are traditional post-processing techniques. The shock wave, an important and critical flow structure in many aerodynamic problems, can hardly be detected or distinguished in a direct way using these traditional methods, due to possible confusion with other, similar discontinuous flow structures such as slip lines and contact discontinuities. Therefore, methods for the automatic detection of shock waves in post-processing are of great importance for both academic research and engineering applications. In this paper, the current status of methodologies developed for shock wave detection and their implementations in post-processing platforms is reviewed, together with discussions on the advantages and limitations of existing methods and proposals for further studies of shock wave detection. We also develop an advanced post-processing software package with improved shock detection.
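The distinction the abstract draws between shocks and other discontinuities can be illustrated with a toy one-dimensional criterion: density jumps across both a shock and a contact discontinuity, but pressure is (ideally) continuous across a contact, so checking for a simultaneous pressure jump separates the two. This is illustrative only; the surveyed detectors are far more sophisticated and multi-dimensional.

```python
import numpy as np

# Piecewise-constant density with two jumps, but pressure jumps only at
# x = 0.7: the jump at x = 0.4 is a contact discontinuity, the one at
# x = 0.7 is shock-like.
x = np.linspace(0.0, 1.0, 200)
rho = np.where(x < 0.4, 1.0, np.where(x < 0.7, 0.6, 0.3))
p = np.where(x < 0.7, 1.0, 0.4)

drho = np.abs(np.diff(rho))
dp = np.abs(np.diff(p))
candidates = np.where(drho > 0.1)[0]              # a pure density-gradient sensor flags both
shocks = [i for i in candidates if dp[i] > 0.1]   # requiring a pressure jump keeps only the shock
print([round(x[i], 2) for i in candidates], "->", [round(x[i], 2) for i in shocks])
```

A detector built on density gradients alone would report both locations, which is exactly the confusion between shocks and contact-like structures that motivates dedicated detection methods.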
Laser shock processing, implemented by a laser-induced high-pressure plasma that propagates into the sample as a shock wave, is innovatively applied as a post-processing technique on HfO_(2)/SiO_(2) multilayer coatings for the first time. This purely mechanical post-processing considerably promotes the laser-induced damage threshold, which increased by a factor of about 4.6 with appropriate processing parameters. The promotion mechanism is confirmed to be the comprehensive modification of intrinsic defects and mechanical properties, which makes this novel post-processing technique applicable to various types of coatings. Based on experiments, an interaction equation for the plasma pressure is established, which clarifies the existence of a critical pressure and provides a theoretical basis for selecting optimal processing parameters. In addition to further clarifying the underlying damage mechanism, laser shock post-processing provides a promising technique for comprehensively and effectively improving the laser-induced damage resistance of coatings.
In this paper, the super spectral viscosity (SSV) method is developed by introducing a spectrally small amount of high-order regularization which is activated only on high frequencies. The resulting SSV approximation is stable and convergent to the exact entropy solution. A Gegenbauer-Chebyshev post-processing of the SSV solution is proposed to remove the spurious oscillations at the discontinuities and recover accuracy from the spectral approximation. The SSV method is applied to the scalar periodic Burgers equation and the one-dimensional system of Euler equations of gas dynamics. The numerical results exhibit high accuracy and resolution of the exact entropy solution.
A number of processes for post-production treatment of "raw" biochars, including leaching, aeration, grinding or sieving to reduce particle size, and chemical or steam activation, have been suggested as means to enhance biochar effectiveness in agriculture, forestry and environmental restoration. Here, I review studies on post-production processing methods and their effects on biochar physio-chemical properties, and present a meta-analysis of plant growth and yield responses to post-processed vs. raw biochars. Data from 23 studies provide a total of 112 comparisons of responses to processed vs. unprocessed biochars, and 103 comparisons allowing assessment of effects relative to biochar particle size; an additional 8 published studies involving 32 comparisons provide data on the effects of biochar leachates. Overall, post-processed biochars resulted in significantly increased average plant growth responses, 14% above those observed with unprocessed biochar. This overall effect was driven by plant growth responses to reduced biochar particle size and to heating/aeration treatments. The assessment of biochar effects by particle size indicates a peak at a particle size of 0.5-1.0 mm. Biochar leachate treatments showed very high heterogeneity among studies and no average growth benefit. I conclude that physiochemical post-processing of biochar offers substantial additional agronomic benefits compared with the use of unprocessed biochar. Further research on the effects of post-production treatments will be important for maximizing the benefits of biochar utilization for carbon sequestration and system productivity in agriculture, forestry and environmental restoration.
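For readers unfamiliar with how a pooled effect such as the reported +14% is derived, the standard meta-analytic recipe averages per-comparison log response ratios and back-transforms the mean. The numbers below are hypothetical, not data from the reviewed studies.

```python
import math

# Hypothetical (treated, control) plant-growth pairs from four comparisons.
pairs = [(11.2, 10.0), (9.5, 8.8), (14.1, 12.0), (10.2, 9.6)]

# Log response ratio per comparison, averaged, then back-transformed to a
# percentage growth response relative to the unprocessed control.
lrr = [math.log(t / c) for t, c in pairs]
mean_lrr = sum(lrr) / len(lrr)
pct = (math.exp(mean_lrr) - 1) * 100
print(f"mean growth response: {pct:+.1f}%")
```

Working on the log scale keeps ratios above and below 1 symmetric, which is why response-ratio meta-analyses average logs rather than raw percentages (a weighted mean by study variance would be used in practice).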
Funding: Supported by the Chinese Academy of Sciences Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, Shanghai Branch, University of Science and Technology of China, and the National Natural Science Foundation of China under Grant No. 11405172.
Abstract: Quantum random number generators adopting single-photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to significantly improve the generation rate of random numbers. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8%, and the potential generation rate adopting a 32×32 APD array is up to tens of Gbit/s.
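The comparison of a detector's responses to consecutive optical pulses can be illustrated with a von Neumann-style extractor: a bit is kept only when the two outcomes differ, which yields unbiased output even from a biased click probability. This is a minimal stand-alone sketch of the idea, not the authors' exact circuit; the simulated click stream is an assumption for illustration.

```python
import random

def extract_bits(responses):
    """Compare responses to consecutive pulse pairs; keep a bit only
    when they differ: (click, no-click) -> 1, (no-click, click) -> 0.
    `responses` is a sequence of 0/1 detection outcomes (1 = click)."""
    bits = []
    for a, b in zip(responses[0::2], responses[1::2]):
        if a != b:
            bits.append(1 if a == 1 else 0)
    return bits

# A biased detection stream (click probability well below 0.5)
# still yields an unbiased bit sequence after extraction.
random.seed(0)
stream = [1 if random.random() < 0.3 else 0 for _ in range(100000)]
bits = extract_bits(stream)
print(round(sum(bits) / len(bits), 3))
```

The cost of the unbiasing is efficiency: pairs with identical outcomes are discarded, which is why the paper's reported 32.8% bit generation efficiency is a meaningful figure of merit.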
Funding: Projects 50221402, 50490271 and 50025413 supported by the National Natural Science Foundation of China; the National Basic Research Program of China (2009CB219603, 2009CB724601, 2006CB202209 and 2005CB221500); the Key Project of the Ministry of Education (306002); and the Program for Changjiang Scholars and Innovative Research Teams in Universities of MOE (IRT0408).
Abstract: In order to carry out numerical simulation using geologic structural data obtained from Landmark (a seismic interpretation system), underground geological structures are abstracted into mechanical models which reflect actual situations and facilitate computation and analysis. Given the importance of model building, further processing methods for traditional seismic interpretation results from Landmark should be studied, so that the processed results can be used directly in numerical simulation. Through this data conversion procedure, Landmark and FLAC (a widely used stress-analysis code) are seamlessly connected, realizing the format conversion between the two systems and the pre- and post-processing in simulation computation. A practical application indicates that this method has many advantages, such as simple operation, high accuracy of element subdivision and high speed, which satisfy the actual needs of floor grid cutting.
Abstract: Travel time data collection is used to assist congestion management. The use of traditional sensors (e.g., inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the high cost of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection is proven to be efficient in terms of accuracy, level of detail and required man-power. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially for big projects with large amounts of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into Geographic Information Systems (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and eliminates the need for a second person during data collection.
基金supported by the New Century Excellent Talents in University(NCET-09-0396)the National Science&Technology Key Projects of Numerical Control(2012ZX04014-031)+1 种基金the Natural Science Foundation of Hubei Province(2011CDB279)the Foundation for Innovative Research Groups of the Natural Science Foundation of Hubei Province,China(2010CDA067)
Abstract: When castings become complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulation become massive. On a general personal computer, these data may exceed the available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of casting, as well as thermal stress. The new post-processor also provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor depend not on the size of the relevant files but on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values with the mouse is appreciable, and the demands of real-time interaction are satisfied.
Funding: Supported by the National Natural Science Foundation of China (51875535) and the Natural Science Foundation for Young Scientists of Shanxi Province (201701D221017, 201901D211242).
Abstract: To improve the ability to detect underwater targets in a strong wideband-interference environment, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a high-intensity, stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and corresponding power of the line spectrum band depends on the line spectra ratio (LSR). The change laws of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time and the filtering bandwidth of different algorithms yield a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are theoretically analyzed. The simulated detection gain matches the theoretical model well. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, makes use of the strong correlation between adjacent frames and can further remove background fluctuation and improve the display effect.
Abstract: This paper proposes improvements to a low-bit-rate parametric audio coder with a sinusoidal model as its kernel. First, we propose a new method to effectively order and select the perceptually most important sinusoids: the sinusoid which contributes most to the reduction of the overall noise-to-mask ratio (NMR) is chosen. Combined with our improved parametric psychoacoustic model and advanced peak-riddling techniques, the number of sinusoids required can be greatly reduced and the coding efficiency greatly enhanced. A lightweight version is also given to reduce the amount of computation with only a small sacrifice in performance. Second, we propose two enhancement techniques for sinusoid synthesis: bandwidth enhancement and line enhancement. With little overhead, the effective bandwidth can be extended by one more octave; the timbre tends to sound much brighter, thicker and more beautiful.
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/114), Taif University, Taif, Saudi Arabia.
Abstract: Low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast-boosting techniques intolerably alter the inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey-level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed, and enhanced grey-level values are derived from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram-equalization-based algorithms, and the results show that PLMHE can boost image contrast without causing dynamic-range compression, a significant change in mean brightness, or contrast overshoot.
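The core idea of partitioning a histogram and equalizing each part within its own grey-level range, so that the mean brightness is not dragged toward mid-grey, can be sketched with a generic bi-histogram equalization. This is a simplified pure-Python illustration of that family of methods, not the authors' PLMHE algorithm (it omits the power-law and log steps); the sample pixel list is an assumption.

```python
def bi_histogram_equalize(pixels, levels=256):
    """Split the histogram at the mean grey level and equalize each half
    independently within its own sub-range, which limits the mean-brightness
    shift of plain global histogram equalization."""
    mean = sum(pixels) / len(pixels)
    lower = [p for p in pixels if p <= mean]
    upper = [p for p in pixels if p > mean]

    def equalize(sub, lo, hi):
        # Map each grey level to its scaled cumulative frequency in [lo, hi].
        if not sub:
            return {}
        hist = {}
        for p in sub:
            hist[p] = hist.get(p, 0) + 1
        mapping, cum = {}, 0
        for level in sorted(hist):
            cum += hist[level]
            mapping[level] = lo + (hi - lo) * cum / len(sub)
        return mapping

    low_map = equalize(lower, 0, mean)
    high_map = equalize(upper, mean, levels - 1)
    return [low_map[p] if p <= mean else high_map[p] for p in pixels]

img = [10, 10, 12, 40, 200, 210, 250]   # toy 1-D "image"
out = bi_histogram_equalize(img)
print([round(v) for v in out])
```

Because each half is stretched only within its own side of the mean, dark pixels stay dark and bright pixels stay bright, which is the brightness-preservation property PLMHE also targets.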
Abstract: In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity produced by wall holes and the loss of precision induced by using the differential method to derive strains, displacement-based elements cannot always provide accuracy sufficient for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOFs. Numerical results show that, by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11674234 and 11605205); the Fundamental Research Funds for the Central Universities, China (Grant No. 2012017yjsy143); the National Key Research and Development Program of China (Grant No. 2017YFA0305200); the Youth Innovation Promotion Association of the Chinese Academy of Sciences (CAS) (Grant No. 2015317); the Natural Science Foundation of Chongqing, China (Grant Nos. cstc2015jcyjA00021 and cstc2018jcyjAX0656); the Entrepreneurship and Innovation Support Program for Chongqing Overseas Returnees, China (Grant No. cx017134); the Fund of the CAS Key Laboratory of Microscale Magnetic Resonance, China; and the Fund of the CAS Key Laboratory of Quantum Information, China.
Abstract: Pre- and post-selected (PPS) measurement, especially weak PPS measurement, has been proved to be a useful tool for measuring extremely tiny physical parameters. However, it is difficult to retain both the attainable highest measurement sensitivity and precision as the parameter to be measured increases. Here, a modulated PPS measurement scheme based on coupling-strength-dependent modulation is presented, with the highest sensitivity and precision retained for an arbitrary coupling strength. This idea is demonstrated by comparing the modulated PPS measurement scheme with the standard PPS measurement scheme in the case of an unbalanced input meter. By using the Fisher information metric, we derive the optimal pre- and post-selected states, as well as the optimal coupling-strength-dependent modulation, without any restriction on the coupling strength. We also give a specific strategy for performing the modulated PPS measurement scheme, which may promote practical application of this scheme in precision metrology.
Funding: Supported by the Pakistan Health Research Council.
Abstract: Background: Despite free tuberculosis (TB) care in Pakistan, patients still have to bear high costs, which push them into poverty. This study estimated the pre- and post-diagnosis costs households bear for TB care, and investigated coping mechanisms, among adults ≥18 years in Karachi, Pakistan. Methods: We conducted a cross-sectional study comprising 516 TB patients who had completed at least one month of intensive treatment, recruited from four public-sector health facilities of two institutes in Karachi, Pakistan. A standardized questionnaire estimating patients' costs was administered. The study outcomes were direct medical and non-medical costs, and indirect costs. The costs were estimated for the pre-diagnostic and post-diagnostic phases, the latter comprising the diagnostic, treatment and hospitalization phases. A descriptive analysis including mean and standard deviation (±SD), median and interquartile range (IQR), and frequencies and proportions (%) was employed. Results: Of the 516 TB patients, 52.1% were female, with a mean age of 32.4 (±13.7) years. The median costs per patient during the pre-diagnostic, diagnostic, treatment and hospitalization periods were estimated at USD 63.8 (PKR 7,377), USD 24 (PKR 2,755), USD 10.5 (PKR 1,217) and USD 349.0 (PKR 40,300), respectively. The total household median cost was estimated at USD 129.2 (PKR 14,919) per patient, and the median indirect cost at USD 52.0 (PKR 5,950) per patient. At the onset of symptoms, 54.1% of patients preferred and consulted private providers in the first place, while 36% attended public healthcare services, and 5% and 4.1% went to a dispensary or pharmacy, respectively, as a first point of care. Conclusions: TB patients bear substantial out-of-pocket costs before they are enrolled in publicly funded TB programs. There should be provision of transport and food vouchers, as well as health insurance for in-patient treatment. This calls for a critical investigation of the existing financial-support network for TB patients in Pakistan, towards reducing the burden.
Funding: Supported by the National Natural Science Foundation of China (Project No. 42375192); the China Meteorological Administration Climate Change Special Program (CMA-CCSP; Project No. QBZ202315); and the Vector Stiftung through the Young Investigator Group "Artificial Intelligence for Probabilistic Weather Forecasting."
Abstract: Despite the maturity of ensemble numerical weather prediction (NWP), the resulting forecasts are still, more often than not, under-dispersed. As such, forecast calibration tools have become popular. Among those tools, quantile regression (QR) is highly competitive in terms of both flexibility and predictive performance. Nevertheless, a long-standing problem of QR is quantile crossing, which greatly limits the interpretability of QR-calibrated forecasts. On this point, this study proposes a non-crossing quantile regression neural network (NCQRNN) for calibrating ensemble NWP forecasts into a set of reliable quantile forecasts without crossing. The overarching design principle of NCQRNN is to add, on top of the conventional QRNN structure, another hidden layer that imposes a non-decreasing mapping from the combined output of the last hidden layer's nodes to the nodes of the output layer, through a triangular weight matrix with positive entries. The empirical part of the work considers a solar irradiance case study, in which four years of ensemble irradiance forecasts at seven locations, issued by the European Centre for Medium-Range Weather Forecasts, are calibrated via NCQRNN, as well as via an eclectic mix of benchmarking models, ranging from the naïve climatology to state-of-the-art deep-learning and other non-crossing models. Formal and stringent forecast verification suggests that the forecasts post-processed via NCQRNN attain the maximum sharpness subject to calibration among all competitors. Furthermore, the proposed conception to resolve quantile crossing is remarkably simple yet general, and thus has broad applicability, as it can be integrated with many shallow- and deep-learning-based neural networks.
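The non-crossing construction described above can be sketched in a few lines: the first output node gives the lowest quantile directly, and every further node adds a strictly positive increment, so the running sum is monotone and the quantile forecasts cannot cross. This is a simplified stand-alone illustration of the triangular positive-weight idea, not the authors' exact NCQRNN layer; the raw output values are assumptions.

```python
import math

def non_crossing_quantiles(raw_outputs):
    """Map unconstrained network outputs to a non-decreasing set of
    quantile forecasts: node 0 is the lowest quantile, and each later
    node contributes softplus(z) > 0, so the cumulative sum is monotone."""
    softplus = lambda z: math.log1p(math.exp(z))
    q = [raw_outputs[0]]
    for z in raw_outputs[1:]:
        q.append(q[-1] + softplus(z))   # positive increment -> no crossing
    return q

# Unconstrained outputs that, used directly as quantiles, would cross:
raw = [5.0, -1.2, 0.3, -4.0, 2.0]
quantiles = non_crossing_quantiles(raw)
print(quantiles)
```

Because softplus is strictly positive everywhere, monotonicity holds by construction rather than by penalty, which is why such layers can be bolted onto many shallow or deep QR networks.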
Abstract: Magnesium (Mg) and its alloys are emerging as structural materials for the aerospace, automobile and electronics industries, driven by the imperative of weight reduction. They are also drawing notable attention in the medical industries owing to their biodegradability and a low elastic modulus comparable to that of bone. The ability to manufacture near-net-shape products featuring intricate geometries has sparked huge interest in additive manufacturing (AM) of Mg alloys, reflecting a transformation in the manufacturing sectors. However, AM of Mg alloys presents formidable challenges due to inherent properties, particularly susceptibility to oxidation, gas trapping, a high thermal expansion coefficient and a low solidification temperature. These lead to defects such as porosity, lack of fusion, cracking, delamination, residual stresses and inhomogeneity, ultimately influencing the mechanical, corrosion and surface properties of AM Mg alloys. To address these issues, post-processing of AM Mg alloys is often needed to make them suitable for application. The present article reviews all post-processing techniques adapted for AM Mg alloys to date, including heat treatment, hot isostatic pressing, friction stir processing and surface peening. The utilization of these methods within hybrid AM processes, employing interlayer post-processing, is also discussed. Optimal post-processing conditions are reported, and their influence on microstructure, mechanical and corrosion properties is detailed. Additionally, future prospects and research directions are proposed.
Abstract: Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial, not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, we aim to enhance the quality of analyzing the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits retailers' specific goals, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
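One standard descriptive post-processing step mentioned above, removing redundant rules, can be sketched as follows: a rule X → Y is dropped when a more general rule X′ → Y (with X′ a proper subset of X) achieves confidence at least as high. This is a generic illustration of redundancy pruning, not THAPE's full hybrid pipeline; the toy rules are assumptions.

```python
def prune_redundant(rules):
    """Drop rule (X -> Y) if some more general rule (X' -> Y), with X' a
    proper subset of X, has confidence at least as high.
    Each rule is (antecedent_items, consequent_item, confidence)."""
    kept = []
    for ante, cons, conf in rules:
        redundant = any(
            cons2 == cons and set(a2) < set(ante) and conf2 >= conf
            for a2, cons2, conf2 in rules
        )
        if not redundant:
            kept.append((ante, cons, conf))
    return kept

rules = [
    (("bread",), "butter", 0.80),
    (("bread", "milk"), "butter", 0.75),   # weaker than the general rule
    (("bread", "jam"), "butter", 0.90),    # stronger, so it survives
]
print(prune_redundant(rules))
```

Pruning like this shrinks a rule set without losing information, which is exactly the kind of reduction that takes a mined set from thousands of rules down to a business-relevant core.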
Funding: Supported by the National Natural Science Foundation of China (No. 82205196).
Abstract: AIM: To understand the current situation of parental perspectives, knowledge and practices concerning myopia prevention and control for both pre-school and school-aged children. METHODS: This study was a cross-sectional survey that involved children aged 0 to 15 years and their parents. Participants responded to an online questionnaire by scanning a quick response (QR) code. The questionnaire consisted of 25 tick-box questions and was open for responses from December 22, 2022, to January 5, 2023. The dioptric traits of the children, the visual status and educational background of the parents, the parental perspectives towards myopia and its risks, and the parents' knowledge and practices related to myopia prevention and control were recorded and measured. The Chi-square test and binomial logistic regression were used for statistics. RESULTS: In total, 350 parents responded to the questionnaire. The prevalence and severity of myopia among the surveyed children exhibited a positive correlation with advancing age (P<0.001 and P=0.004, respectively). Nearly half of parents with myopic children considered that myopia did not pose any health threat and could be effectively corrected (P<0.001). Parents who held a master's or doctoral degree demonstrated a better understanding of children's vision standards for each age group (P=0.001), and 31.63% of them had initial vision screening performed for their children at the age of 0 to 3 years, while parents with a bachelor's degree (34.04%) and below (32.43%) mainly initiated vision examination for their children at the age of 4 to 6 years (P=0.05). Parents with a master's or doctoral degree also exhibited more rational practices concerning outdoor time (P=0.048) and sleep time (P=0.044). There was no other significant discrepancy among the different educational groups in additional conceptions of myopia, such as hyperopia reserve, axial length, and corneal curvature alterations. Most parents preferred conventional interventions, such as enhancing indoor lighting conditions (80.00%) and ensuring appropriate reading posture and distance (71.71%). CONCLUSION: The current status of parental knowledge and practices about myopia prevention and control remains outdated and deficient. The administrative department should implement efficacious and adaptable measures to enhance parental awareness and foster their commitment to myopia prevention and control.
Abstract: Background: Weed infestation in cotton has been reported to offer severe competition and cause yield reduction to a large extent. Weeding via cultural practices is time-consuming, tedious and expensive due to the long duration of the cotton crop and regular monsoon rains during cotton production in India. Chemical weed control has been successfully utilized in cotton in the recent past. However, continuous use of similar herbicides leads to herbicide resistance in weeds. And when sprayed on the field, herbicides not only suppress weeds but leave undesirable residues in the soil that are hazardous to the environment. Therefore, a study was performed in the cotton research area at Chaudhary Charan Singh Haryana Agricultural University, Hisar, Haryana during two consecutive kharif seasons (2020 and 2021) to determine the most suitable and sustainable weed management strategy through the integration of chemical and cultural methods. Results: Mulching with rice straw at 7.5 t·ha^(-1) resulted in significantly higher cotton seed yield (3,189 and 3,084 kg·ha^(-1)) and better weed control in comparison with no-mulch treatments (2,990 and 2,904 kg·ha^(-1)) in 2020 and 2021, respectively. Among the various weed management levels, the significantly lowest cotton seed yield was recorded in the untreated control (1,841 and 1,757 kg·ha^(-1) during 2020 and 2021, respectively), while all other treatments were statistically at par with each other during both years of crop experimentation. Conclusion: Mulching with rice straw at 7.5 t·ha^(-1), along with a pre-emergence application of pendimethalin (active ingredient) at 1.5 kg·ha^(-1) fb (followed by) one hoeing at 45 days after sowing (DAS) and fb glyphosate at 2 kg·ha^(-1) (shielded spray) at 90 DAS, is a viable option for effective control of grassy and broadleaved weeds in Bt cotton in north-west India.
Abstract: Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, there are still challenges associated with extracting and processing finger vein patterns, related to image quality, positioning and alignment, skin conditions, security concerns and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. The four-step process involves local normalization of brightness, image enhancement, segmentation and cleaning. A novel image enhancement method was used to re-establish the line patterns from the brightness sum of the independent closed-form solutions of the adopted optimization criterion, derived in small windows. In the proposed method, the computational resources were reduced significantly compared to the solution derived when the whole image is processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns is obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image are removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. The proposed method performs accurate detection of the vessel network in human finger infrared images, as the experimental results show, applied to both real and artificial images, and can readily be applied in many image enhancement and segmentation applications.
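The final cleaning step above, a majority filter that eliminates isolated blobs, can be sketched directly: a pixel survives only if a majority of its 3×3 neighbourhood is set. This is a minimal pure-Python illustration of that one step (border pixels treated as 0), not the paper's full pipeline; the toy binary image is an assumption.

```python
def majority_filter(img):
    """3x3 majority filter on a binary image: a pixel becomes 1 only if
    at least 5 of the 9 pixels in its neighbourhood are 1, removing
    isolated foreground blobs left over after thresholding."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w
            )
            out[y][x] = 1 if votes >= 5 else 0
    return out

noisy = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1],   # isolated pixel that should be removed
]
cleaned = majority_filter(noisy)
print(cleaned)
```

Note that a plain majority filter also erodes the edges of genuine structures, which is why the paper pairs it with morphological dilation using a line structuring element to protect thin vessel segments.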
Abstract: In the present computational fluid dynamics (CFD) community, post-processing is regarded as a procedure to view parameter distributions, detect characteristic structures and reveal the physical mechanisms of fluid flow based on computational or experimental results. Field plots by contours, iso-surfaces, streamlines, vectors and others are traditional post-processing techniques. However, the shock wave, an important and critical flow structure in many aerodynamic problems, can hardly be detected or distinguished directly by these traditional methods, due to possible confusion with other, similar discontinuous flow structures such as slip lines and contact discontinuities. Therefore, methods for automatic detection of shock waves in post-processing are of great importance for both academic research and engineering applications. In this paper, the current status of methodologies developed for shock wave detection and their implementations in post-processing platforms is reviewed, together with discussions of the advantages and limitations of the existing methods and proposals for further studies of shock wave detection. We also develop an advanced post-processing software with improved shock detection.
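A simplest-possible shock sensor can be sketched on a 1-D field: flag cell interfaces where the normalized density jump exceeds a threshold. This is a crude illustration of the detection problem, not any specific method from the review; real detectors additionally test for compression (e.g., a pressure rise along the flow direction) precisely to tell shocks apart from contact discontinuities and slip lines, and the sample field is an assumption.

```python
def shock_cells(density, threshold=0.2):
    """Flag candidate shock locations in a 1-D density field: a normalized
    jump |rho[i+1] - rho[i]| / min(rho[i], rho[i+1]) above `threshold`
    marks a strong discontinuity between cells i and i+1."""
    flags = []
    for i in range(len(density) - 1):
        jump = abs(density[i + 1] - density[i]) / min(density[i], density[i + 1])
        flags.append(jump > threshold)
    return flags

# Smooth region, one sharp shock-like jump, then smooth again:
rho = [1.00, 1.01, 1.02, 1.80, 1.81, 1.82]
flags = shock_cells(rho)
print(flags)
```

The weakness of such a jump sensor is exactly the confusion discussed above: a contact discontinuity also produces a density jump, so density alone cannot classify the structure.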
基金the National Natural Science Foundation of China(NSFC)(No.11704285)the Natural Science Foundation of Zhejiang Province(No.LY20E050027)the Wenzhou Science and Technology Plan Projects(No.G20170012).
Abstract: Laser shock processing, implemented by a laser-induced high-pressure plasma that propagates into the sample as a shock wave, is innovatively applied as a post-processing technique on HfO_(2)/SiO_(2) multilayer coatings for the first time. This purely mechanical post-processing provides evidence of a considerable increase of the laser-induced damage threshold, which rose by a factor of about 4.6 with appropriate processing parameters. The promotion mechanism is confirmed to be the comprehensive modification of the intrinsic defects and the mechanical properties, which makes the application of this novel post-processing technique to various types of coatings possible. Based on experiments, an interaction equation for the plasma pressure is established, which clarifies the existence of a critical pressure and provides a theoretical basis for selecting optimal processing parameters. In addition to further clarifying the underlying damage mechanism, laser shock post-processing provides a promising technique to realize comprehensive and effective improvement of the laser-induced damage resistance of coatings.
文摘In this paper, the super spectral viscosity (SSV) method is developed by introducing a spectrally small amount of high order regularization which is only activated on high frequencies. The resulting SSV approximation is stable and convergent to the exact entropy solution. A Gegenbauer-Chebyshev post-processing for the SSV solution is proposed to remove the spurious oscillations at the disconti-nuities and recover accuracy from the spectral approximation. The ssv method is applied to the scahr periodic Burgers equation and the one-dimensional system of Euler equations of gas dynamics. The numerical results exhibit high accuracy and resolution to the exact entropy solution,
Funding: This work was funded by grants from the Natural Sciences and Engineering Research Council of Canada, with additional support from Haliburton Forest and Wild Life Reserve and the Ontario Mining Association.
Abstract: A number of processes for post-production treatment of "raw" biochars, including leaching, aeration, grinding or sieving to reduce particle size, and chemical or steam activation, have been suggested as means to enhance biochar effectiveness in agriculture, forestry and environmental restoration. Here, I review studies on post-production processing methods and their effects on biochar physicochemical properties, and present a meta-analysis of plant growth and yield responses to post-processed vs. "raw" biochars. Data from 23 studies provide a total of 112 comparisons of responses to processed vs. unprocessed biochars, and 103 comparisons allowing assessment of effects relative to biochar particle size; an additional 8 published studies involving 32 comparisons provide data on the effects of biochar leachates. Overall, post-processed biochars resulted in significantly increased average plant growth responses, 14% above those observed with unprocessed biochar. This overall effect was driven by plant growth responses to reduced biochar particle size and to heating/aeration treatments. The assessment of biochar effects by particle size indicates a peak at a particle size of 0.5-1.0 mm. Biochar leachate treatments showed very high heterogeneity among studies and no average growth benefit. I conclude that physicochemical post-processing of biochar offers substantial additional agronomic benefits compared with the use of unprocessed biochar. Further research on post-production treatment effects will be important for biochar utilization to maximize benefits to carbon sequestration and system productivity in agriculture, forestry and environmental restoration.