Funding: Supported by the National Natural Science Foundation of China (NSFC 61501396) and the Science and Technology Research Projects of Colleges and Universities of Hebei Province (QN2015021).
Abstract: Based on a strong inter-diagonal matrix and Taylor series expansions, an oversampling reconstruction method is proposed to calibrate the optical micro-scanning error. The technique obtains regular 2×2 micro-scanning undersampled images from the real, irregular undersampled images, and can then reconstruct a high-spatial-resolution oversampled image. Simulations and experiments show that the proposed technique can reduce the optical micro-scanning error and improve the system's spatial resolution. The algorithm is simple, fast, and has low computational complexity. It can also be applied to other electro-optical imaging systems to improve their spatial resolution and has broad application prospects.
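For context, 2×2 micro-scanning super-resolution works by interleaving four frames captured at half-pixel offsets into one oversampled grid. The sketch below shows only this standard interleaving step, not the paper's inter-diagonal-matrix error calibration; the frame names and shapes are illustrative assumptions.

```python
import numpy as np

def interleave_2x2(f00, f01, f10, f11):
    """Interleave four half-pixel-shifted low-resolution frames into one
    oversampled image (standard 2x2 micro-scanning reconstruction).
    Frame names are hypothetical: f00 is the reference, f01/f10/f11 are
    shifted by half a pixel in x, y, and both directions respectively."""
    h, w = f00.shape
    hi = np.zeros((2 * h, 2 * w), dtype=f00.dtype)
    hi[0::2, 0::2] = f00
    hi[0::2, 1::2] = f01
    hi[1::2, 0::2] = f10
    hi[1::2, 1::2] = f11
    return hi

# usage: four 240x320 frames -> one 480x640 oversampled image
frames = [np.random.rand(240, 320) for _ in range(4)]
print(interleave_2x2(*frames).shape)  # (480, 640)
```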
Abstract: Oversampling is commonly employed in orthogonal frequency division multiplexing (OFDM) systems to improve various performance characteristics. In this paper, we investigate the performance and complexity of one-tap zero-forcing (ZF) and minimum mean-square error (MMSE) equalizers in oversampled OFDM systems. Theoretical analysis and simulation results show that oversampling not only reduces the noise at the equalizer output but also helps mitigate the ill effects of spectral nulls. One-tap equalizers therefore yield improved symbol-error-rate (SER) performance as the oversampling rate increases, but at the expense of increased system bandwidth and modest additional complexity.
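Both equalizers discussed here act on each subcarrier independently. A minimal per-subcarrier sketch of the ZF and MMSE one-tap rules, applied to synthetic data rather than the oversampled system studied in the paper:

```python
import numpy as np

def one_tap_equalize(Y, H, noise_var, method="mmse"):
    """Per-subcarrier one-tap equalization of received OFDM symbols Y,
    given the channel frequency response H (illustrative sketch)."""
    if method == "zf":
        return Y / H  # zero-forcing: noise is amplified near spectral nulls
    # MMSE: regularizes the inversion with the noise variance
    return np.conj(H) * Y / (np.abs(H) ** 2 + noise_var)

# toy example: QPSK symbols through a random frequency-selective channel
rng = np.random.default_rng(0)
N = 64
X = (2 * rng.integers(0, 2, N) - 1 + 1j * (2 * rng.integers(0, 2, N) - 1)) / np.sqrt(2)
H = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
Y = H * X + noise
X_hat = one_tap_equalize(Y, H, noise_var=0.01)  # noise_var matches the 0.1 std above
```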
Funding: Supported by the Discipline Advancement Program of Shanghai Fourth People's Hospital, No. SY-XKZT-2020-2013.
Abstract: BACKGROUND: Postoperative delirium, particularly prevalent in elderly patients after abdominal cancer surgery, presents significant challenges in clinical management. AIM: To develop a synthetic minority oversampling technique (SMOTE)-based model for predicting postoperative delirium in elderly abdominal cancer patients. METHODS: In this retrospective cohort study, we analyzed data from 611 elderly patients who underwent abdominal malignant tumor surgery at our hospital between September 2020 and October 2022. The incidence of postoperative delirium was recorded for 7 d post-surgery. Patients were divided into delirium and non-delirium groups according to whether postoperative delirium occurred. A multivariate logistic regression model was used to identify risk factors and develop a predictive model for postoperative delirium. The SMOTE technique was applied to enhance the model by oversampling the delirium cases. The model's predictive accuracy was then validated. RESULTS: In our study involving 611 elderly patients with abdominal malignant tumors, multivariate logistic regression analysis identified significant risk factors for postoperative delirium. These included the Charlson comorbidity index, American Society of Anesthesiologists classification, history of cerebrovascular disease, surgical duration, perioperative blood transfusion, and postoperative pain score. The incidence of postoperative delirium was 22.91%. The original predictive model (P1) exhibited an area under the receiver operating characteristic curve of 0.862. In comparison, the SMOTE-based logistic early warning model (P2), which utilized the SMOTE oversampling algorithm, showed a slightly lower but comparable area under the curve of 0.856, suggesting no significant difference in performance between the two predictive approaches. CONCLUSION: This study confirms that the SMOTE-enhanced predictive model for postoperative delirium in elderly abdominal tumor patients performs equivalently to the traditional method while effectively addressing data imbalance.
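As a rough sketch of the modelling pipeline described in METHODS (SMOTE oversampling of the delirium cases followed by logistic regression), using the imbalanced-learn and scikit-learn libraries; the feature matrix below is a random placeholder, not the study's clinical variables.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# random placeholder data standing in for the clinical risk factors
rng = np.random.default_rng(42)
X = rng.normal(size=(611, 6))                # e.g. comorbidity index, ASA class, ...
y = (rng.random(611) < 0.23).astype(int)     # ~23% minority (delirium) class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# oversample only the training split, then fit the logistic model
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```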
Abstract: Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proved helpful for the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data are unbalanced, the support vector machine-based Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs a bidirectional long short-term memory (BiLSTM) network for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To confirm the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
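A compact sketch of the two main components named above, SVM-SMOTE rebalancing followed by a BiLSTM classifier trained with the RMSProp optimizer; the data, layer sizes, and single-timestep framing are simplifying assumptions, not the OS-ODLSDC configuration.

```python
import numpy as np
from imblearn.over_sampling import SVMSMOTE
from tensorflow.keras import layers, models

# placeholder imbalanced stream: 20 features per record, ~5% anomalies
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20)).astype("float32")
y = (rng.random(2000) < 0.05).astype(int)

# SVM-SMOTE balances the classes before training
X_bal, y_bal = SVMSMOTE(random_state=0).fit_resample(X, y)

# treat each record as a length-1 sequence so the BiLSTM can consume it
X_seq = X_bal.reshape(-1, 1, X_bal.shape[1])

model = models.Sequential([
    layers.Input(shape=(1, X_bal.shape[1])),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_bal, epochs=3, batch_size=64, verbose=0)
```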
Funding: Project (52161135301) supported by the International Cooperation and Exchange of the National Natural Science Foundation of China; Project (202306370296) supported by the China Scholarship Council.
Abstract: Rockburst is a common geological disaster in underground engineering that seriously threatens the safety of personnel, equipment, and property. Using machine learning models to evaluate rockburst risk is gradually becoming a trend. In this study, integrated algorithms under the Gradient Boosting Decision Tree (GBDT) framework were used to evaluate and classify rockburst intensity. First, a total of 301 rockburst data samples were obtained from a case database, and the data were preprocessed using the synthetic minority over-sampling technique (SMOTE). Then, rockburst evaluation models including GBDT, eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Categorical Features Gradient Boosting (CatBoost) were established, and the optimal hyperparameters of the models were obtained through random grid search and five-fold cross-validation. Afterwards, the optimal hyperparameter configurations were used to fit the evaluation models, which were then analyzed on the test set. To evaluate performance, metrics including accuracy, precision, recall, and F1-score were selected for analysis and comparison with other machine learning models. Finally, the trained models were used to conduct rockburst risk assessment on rock samples from a mine in Shanxi Province, China, providing theoretical guidance for the mine's safe production. The models under the GBDT framework perform well in the evaluation of rockburst levels, and the proposed methods can provide a reliable reference for rockburst risk level analysis and safety management.
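A schematic version of the workflow summarized above (SMOTE preprocessing, a gradient-boosting classifier, and random search with five-fold cross-validation), using scikit-learn's GradientBoostingClassifier as a stand-in for the four boosting libraries compared in the paper; the data and parameter grid are placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

# placeholder rockburst data: 6 rock-mechanics indicators, 4 intensity classes
rng = np.random.default_rng(1)
X = rng.normal(size=(301, 6))
y = rng.integers(0, 4, size=301)

# SMOTE balances the minority intensity classes before model selection
X_bal, y_bal = SMOTE(random_state=1).fit_resample(X, y)

param_dist = {
    "n_estimators": [100, 200, 400],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}
# random search over the grid with five-fold cross-validation
search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=1),
    param_dist, n_iter=10, cv=5, scoring="f1_macro", random_state=1,
)
search.fit(X_bal, y_bal)
print(search.best_params_)
```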
Funding: Funded by the National Natural Science Foundation of China (62006068), the Hebei Natural Science Foundation (A2021402008), the Natural Science Foundation of Scientific Research Projects of Higher Education in Hebei Province (ZD2020185, QN2020188), and the 333 Talent Supported Project of Hebei Province (C20221026).
Abstract: Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationships between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in the training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is used to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Subsequently, fuzzy rules are generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is then computed, and samples of the minority class are synthesized based on the newly generated fuzzy rules, establishing a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets against five other sampling techniques in combination with the support vector machine. The experimental results demonstrate that the proposed method significantly enhances the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in Accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC. These results show that the proposed method can more efficiently improve the classification performance of imbalanced data.
Abstract: Delirium, a complex neurocognitive syndrome, frequently emerges following surgery, presenting diverse manifestations and considerable obstacles, especially among the elderly. This editorial delves into the intricate phenomenon of postoperative delirium (POD), shedding light on a study that explores POD in elderly individuals undergoing abdominal malignancy surgery. The study examines pathophysiology and predictive determinants, offering valuable insights into this challenging clinical scenario. Employing the synthetic minority oversampling technique, a predictive model is developed, incorporating critical risk factors such as the comorbidity index, anesthesia grade, and surgical duration. There is an urgent need for accurate risk factor identification to mitigate POD incidence. While specific to elderly patients with abdominal malignancies, the findings contribute significantly to understanding delirium pathophysiology and prediction. Further research is warranted to establish standardized predictive models for enhanced generalizability.
Abstract: In this editorial, we comment on the article by Hu et al entitled "Predictive modeling for postoperative delirium in elderly patients with abdominal malignancies using synthetic minority oversampling technique". We wanted to draw attention to the general features of postoperative delirium (POD) as well as the areas where there are uncertainties and contradictions. POD can be defined as acute neurocognitive dysfunction that occurs in the first week after surgery. It is a severe postoperative complication, especially for elderly oncology patients. Although the underlying pathophysiological mechanism is not fully understood, various neuroinflammatory mechanisms and neurotransmitters are thought to be involved. Various assessment scales and diagnostic methods have been proposed for the early diagnosis of POD. As delirium is considered a preventable clinical entity in about half of the cases, early prediction models developed with the support of machine learning have recently become a hot scientific topic. Unfortunately, a model with high sensitivity and specificity for the prediction of POD has not yet been reported. This situation underlines that all health personnel who provide care to elderly patients should maintain a high level of awareness of POD throughout the perioperative period.
Funding: We are grateful for financial support from the National Natural Science Foundation of China (62035003, 61775117), the China Postdoctoral Science Foundation (BX2021140), and the Tsinghua University Initiative Scientific Research Program (20193080075).
Abstract: Deep learning offers a novel opportunity to achieve both high-quality and high-speed computer-generated holography (CGH). Current data-driven deep learning algorithms face the challenge that the labeled training datasets limit the training performance and generalization. Model-driven deep learning introduces the diffraction model into the neural network. It eliminates the need for a labeled training dataset and has been extensively applied to hologram generation. However, existing model-driven deep learning algorithms face the problem of insufficient constraints. In this study, we propose a model-driven neural network capable of high-fidelity 4K computer-generated hologram generation, called the 4K Diffraction Model-driven Network (4K-DMDNet). The constraint on the reconstructed images in the frequency domain is strengthened, and a network structure that combines the residual method and the sub-pixel convolution method is built, which effectively enhances the fitting ability of the network for inverse problems. The generalization of the 4K-DMDNet is demonstrated with binary, grayscale, and 3D images. High-quality full-color optical reconstructions of the 4K holograms have been achieved at wavelengths of 450 nm, 520 nm, and 638 nm.
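The two network ingredients named above, residual connections and sub-pixel convolution, can be combined in a small upsampling block. The PyTorch sketch below is a generic illustration of that combination under assumed channel counts, not the actual 4K-DMDNet architecture.

```python
import torch
from torch import nn

class SubPixelUpsample(nn.Module):
    """Residual block followed by sub-pixel convolution (PixelShuffle)
    upsampling: a generic sketch of the two ingredients named in the
    abstract, not the actual 4K-DMDNet architecture."""

    def __init__(self, channels: int = 32, scale: int = 2):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # the conv expands channels by scale**2; PixelShuffle folds them into space
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        x = x + self.residual(x)   # residual connection
        return self.upsample(x)    # spatial resolution multiplied by `scale`

out = SubPixelUpsample()(torch.randn(1, 32, 128, 128))
print(out.shape)  # torch.Size([1, 32, 256, 256])
```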
Funding: Funded by the National Natural Science Foundation of China (Grant No. 41941019) and the State Key Laboratory of Hydroscience and Engineering (Grant No. 2019-KY-03).
Abstract: Real-time prediction of the rock mass class in front of the tunnel face is essential for the adaptive adjustment of tunnel boring machines (TBMs). During the TBM tunnelling process, a large number of operation data are generated, reflecting the interaction between the TBM system and the surrounding rock, and these data can be used to evaluate the rock mass quality. This study proposes a stacking ensemble classifier for real-time prediction of the rock mass classification using TBM operation data. Based on the Songhua River water conveyance project, a total of 7538 TBM tunnelling cycles and the corresponding rock mass classes are obtained after data preprocessing. Then, through a tree-based feature selection method, 10 key TBM operation parameters are selected, and the mean values of the 10 selected features in the stable phase after removing outliers are calculated as the inputs of the classifiers. The preprocessed data are randomly divided into a training set (90%) and a test set (10%) using simple random sampling. Besides the stacking ensemble classifier, seven individual classifiers are established for comparison: support vector machine (SVM), k-nearest neighbours (KNN), random forest (RF), gradient boosting decision tree (GBDT), decision tree (DT), logistic regression (LR), and multilayer perceptron (MLP), where the hyper-parameters of each classifier are optimised using the grid search method. The prediction results show that the stacking ensemble classifier performs better than the individual classifiers and exhibits a more powerful learning and generalisation ability for small and imbalanced samples. Additionally, a relatively balanced training set is obtained by the synthetic minority oversampling technique (SMOTE), and the influence of sample imbalance on the prediction performance is discussed.
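A condensed sketch of the stacking idea described above using scikit-learn's StackingClassifier, with a few of the named base learners and a logistic-regression meta-learner; the synthetic data and hyper-parameters are placeholders rather than the project's TBM features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic stand-in for the 10 selected TBM operation features and rock mass classes
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, weights=[0.5, 0.3, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)

# base learners feed their predictions to a logistic-regression meta-learner
stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(random_state=0)),
        ("gbdt", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```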
Funding: Supported by the National Key Research and Development Program of China (2016YFB0500901), the Natural Science Foundation of Shanghai (18ZR1437200), and the Satellite Mapping Technology and Application National Key Laboratory of the Geographical Information Bureau (KLSMTA-201709).
Abstract: According to the oversampling imaging characteristics, an infrared small target detection method based on deep learning is proposed. A 7-layer deep convolutional neural network (CNN) is designed to automatically extract small target features and suppress clutter in an end-to-end manner. The input of the CNN is an original oversampled image while the output is a clutter-suppressed feature map. The CNN contains only convolution and non-linear operations, and the resolution of the output feature map is the same as that of the input image. The L1-norm loss function is used, and a large amount of training data is generated to train the network effectively. Results show that, compared with several baseline methods, the proposed method improves the signal-to-clutter ratio gain and background suppression factor by 3–4 orders of magnitude and has more powerful target detection performance.
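A minimal PyTorch sketch of such a fully convolutional, resolution-preserving network trained with an L1 loss; the channel widths and kernel sizes are illustrative guesses, not the layer configuration used in the paper.

```python
import torch
from torch import nn

# rough sketch of a 7-layer fully convolutional clutter-suppression network
# trained with an L1 loss; channel widths and kernel sizes are illustrative
channels = [1, 16, 32, 32, 32, 32, 16]
blocks = []
for c_in, c_out in zip(channels[:-1], channels[1:]):
    blocks += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True)]
blocks.append(nn.Conv2d(channels[-1], 1, 3, padding=1))  # 7th conv, no activation
net = nn.Sequential(*blocks)

criterion = nn.L1Loss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# one toy training step: oversampled input image -> clutter-suppressed target map
img = torch.randn(8, 1, 128, 128)
target = torch.zeros(8, 1, 128, 128)   # background-suppressed ground truth
loss = criterion(net(img), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```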
Funding: Supported by the National Natural Science Foundation of China (61571388, 61601398) and the Natural Science Foundation of Hebei Province (F2016203251).
Abstract: Traditional inverse synthetic aperture radar (ISAR) imaging methods for maneuvering targets have low resolution and poor noise-suppression capability. An ISAR imaging method for maneuvering targets based on phase retrieval is proposed, which can provide a high-resolution, focused map of the spatial distribution of scatterers on the target. According to theoretical derivation, the modulus of the raw data from the maneuvering target is not affected by radial motion components in an ISAR imaging system, so the phase retrieval algorithm can be used for ISAR imaging problems. However, the traditional phase retrieval algorithm is not applicable to ISAR imaging under random noise. To solve this problem, an algorithm is put forward based on the range Doppler (RD) algorithm and the oversampling smoothness (OSS) phase retrieval algorithm. The algorithm captures the target information to reduce the influence of the random phase on the ISAR echoes, and then applies OSS for focused imaging based on prior information from the RD algorithm. The simulated results demonstrate the validity of this algorithm, which not only obtains high-resolution images of high-speed maneuvering targets under random noise, but also substantially improves the success rate of the phase retrieval algorithm.
Funding: Funded by the Major Emerging Industrial Projects of Anhui and the Postdoctoral Project from Hefei.
Abstract: Oversampling sigma–delta (Σ–Δ) analog-to-digital converters (ADCs) are currently one of the most widely used architectures for high-resolution ADCs. The rapid development of integrated circuit manufacturing processes has allowed a high resolution to be realized in exchange for speed. Structurally, the Σ–Δ ADC is divided into two parts: a front-end analog modulator and a back-end digital filter. The performance of the front-end analog modulator has a marked influence on the entire Σ–Δ ADC system. In this paper, a 4th-order single-loop switched-capacitor modulator with a CIFB (cascade-of-integrators feed-back) structure is proposed. Based on the chosen modulator architecture, the ASIC circuit is implemented using a Chartered 0.35 μm CMOS process with a chip area of 1.72 × 0.75 mm². The chip operates with a 3.3 V power supply and a power dissipation of 22 mW. According to the results, the performance of the designed modulator is improved compared with a mature industrial chip, and the effective number of bits (ENOB) is almost 18 bits.
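To illustrate the oversampling and noise-shaping principle behind such converters, here is a discrete-time simulation of a first-order sigma-delta modulator in Python. It is only a behavioural toy model, not the 4th-order CIFB switched-capacitor design described in the paper.

```python
import numpy as np

def first_order_sigma_delta(x):
    """Discrete-time first-order sigma-delta modulator:
    y[n] = sign(v[n]),  v[n+1] = v[n] + x[n] - y[n]."""
    v = 0.0
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        y[n] = 1.0 if v >= 0 else -1.0   # 1-bit quantizer
        v += xn - y[n]                   # integrate the quantization error
    return y

# heavily oversampled sine input (signal at fs/256)
fs, n = 1_000_000, 8192
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * (fs / 256) * t)
bits = first_order_sigma_delta(x)

# a crude decimation (block averaging) recovers a multi-bit representation
decimated = bits.reshape(-1, 64).mean(axis=1)
```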
Abstract: Most modern technologies, such as social media, smart cities, and the internet of things (IoT), rely on big data. When big data is used in real-world applications, two data challenges arise: class overlap and class imbalance. When dealing with large datasets, most traditional classifiers get stuck in local optima. As a result, it is necessary to look into new methods for dealing with large data collections. Several solutions have been proposed for overcoming this issue, but the rapid growth of the available data threatens to limit the usefulness of many traditional methods. Methods such as oversampling and undersampling have shown great promise in addressing class imbalance. Among these techniques, the Synthetic Minority Oversampling TechniquE (SMOTE) has produced the best results by generating synthetic samples for the minority class to create a balanced dataset. The issue is that its practical applicability is restricted to problems involving tens of thousands of instances or fewer. In this paper, we propose a parallel method using SMOTE and a MapReduce strategy, which distributes the operation of the algorithm among a group of computational nodes to address the aforementioned problem. Our proposed solution is divided into three stages. The first stage splits the data into different blocks using a mapping function, followed by a pre-processing step for each map block that employs a hybrid SMOTE algorithm for solving the class imbalance problem. On each map block, a decision tree model is constructed. Finally, the decision tree blocks are combined to create a classification model. We used numerous datasets with up to 4 million instances in our experiments to test the proposed scheme's capabilities. As a result, the hybrid SMOTE shows good scalability within the proposed framework, and it also cuts down the processing time.
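The three-stage design (split into blocks, SMOTE plus a decision tree per block, combine the trees) can be sketched on a single machine with Python's multiprocessing as a stand-in for MapReduce. The block count, data, and majority-vote combiner below are simplifying assumptions, not the paper's implementation.

```python
import numpy as np
from multiprocessing import Pool
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier

def map_block(block):
    """Map step: balance one data block with SMOTE and fit a decision tree."""
    X_blk, y_blk = block
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_blk, y_blk)
    return DecisionTreeClassifier(random_state=0).fit(X_bal, y_bal)

def reduce_predict(trees, X):
    """Reduce step: combine the per-block trees by majority vote."""
    votes = np.stack([t.predict(X) for t in trees])
    return (votes.mean(axis=0) >= 0.5).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40_000, 10))
    y = (rng.random(40_000) < 0.1).astype(int)        # imbalanced labels
    blocks = [(X[i::4], y[i::4]) for i in range(4)]   # split into 4 map blocks
    with Pool(4) as pool:
        trees = pool.map(map_block, blocks)
    print(reduce_predict(trees, X[:10]))
```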
Funding: This work was supported by the National Key R&D Program of China (Grant Numbers 2020YFB1005900, 2022YFB3305802).
Abstract: Due to the anonymity of blockchain, frequent security incidents and attacks occur through it, among which the Ponzi scheme smart contract is a classic type of fraud resulting in huge economic losses. Machine learning-based methods are believed to be promising for detecting Ethereum Ponzi schemes. However, there are still some flaws in current research, e.g., insufficient feature extraction of Ponzi scheme smart contracts and the neglect of class imbalance. In addition, there is room for improvement in detection precision. Aiming at the above problems, this paper proposes an Ethereum Ponzi scheme detection scheme through opcode context analysis and the adaptive boosting (AdaBoost) algorithm. Firstly, this paper uses the n-gram algorithm to extract more comprehensive contract opcode features and combines them with contract account features, which helps to improve the feature extraction effect. Meanwhile, adaptive synthetic sampling (ADASYN) is introduced to deal with class-imbalanced data and is integrated with the AdaBoost classifier. Finally, this paper uses the improved AdaBoost classifier for the identification of Ponzi scheme contracts. Experimentally, this paper tests our model on real-world smart contracts and compares it with representative methods in terms of F1-score and precision. Moreover, this article compares and discusses state-of-the-art methods and our method in four aspects: data acquisition, data preprocessing, feature extraction, and classifier design. Both the experiments and the discussion validate the effectiveness of our model.
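A compact sketch of the feature-extraction and classification chain described above: n-gram counts over opcode sequences, ADASYN rebalancing, and an AdaBoost classifier. The opcode strings and class ratio are fabricated placeholders, and the contract account features mentioned in the abstract are omitted.

```python
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer

# fabricated opcode sequences standing in for disassembled contract bytecode
ops = ["PUSH1", "MSTORE", "CALLVALUE", "DUP1", "ISZERO", "SLOAD", "CALLER", "JUMPI"]
rng = np.random.default_rng(0)
opcodes = [" ".join(rng.choice(ops, size=20)) for _ in range(200)]
labels = np.array([0] * 180 + [1] * 20)   # imbalanced: few Ponzi contracts

# n-gram counts over the opcode stream (unigrams up to trigrams)
vec = CountVectorizer(ngram_range=(1, 3), token_pattern=r"\S+")
X = vec.fit_transform(opcodes).toarray()

# ADASYN rebalances the data, AdaBoost performs the final classification
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, labels)
clf = AdaBoostClassifier(random_state=0).fit(X_bal, y_bal)
print(clf.predict(X[:5]))
```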
Abstract: This study aims to develop a low-cost refractometer for measuring the sucrose content of fruit juice, which is an important factor affecting human health. While laboratory-grade refractometers are expensive and unsuitable for personal use, existing low-cost commercial options lack stability and accuracy. To address this gap, we propose a refractometer that replaces the expensive CCD sensor and light source with a conventional LED and a reasonably priced CMOS sensor. By analyzing the output waveform pattern of the CMOS sensor, we achieve high precision with an accuracy of 0.1%, appropriate for personal use. We tested the proposed refractometer by conducting 100 repeated measurements on various fruit juice samples, and the results demonstrate its reliability and consistency. Running on a 48 MHz ARM processor, the algorithm can acquire data within 0.2 seconds. Our low-cost refractometer is suitable for personal health management and small-scale production, providing an affordable and reliable method for measuring the sucrose concentration of fruit juice. It improves upon existing low-cost options by offering better stability and accuracy. This accessible tool has potential applications in optimizing the sucrose content of fruit juice for better health and quality control.
Funding: Supported by the Postgraduate Innovation Funding Project of Hebei Province (CXZZSS2019050) and the Qinhuangdao City Key Research and Development Program Science and Technology Support Project (201801B010).
Abstract: An error correction technique for the micro-scanning instrument of an optical micro-scanning thermal microscope imaging system is proposed. The technique is based on micro-scanning technology combined with the proposed second-order oversampling reconstruction algorithm and a local gradient image reconstruction algorithm. In this paper, we describe the local gradient image reconstruction model, the error correction technique, the down-sampling model, and the error correction principle. An original Lena image and four low-resolution images obtained with the standard half-pixel displacement are used in simulations to verify the effectiveness of the proposed technique, and two groups of low-resolution thermal microscope images collected with the actual thermal microscope imaging system are used for experimental study. Simulations and experiments show that the proposed technique can reduce the optical micro-scanning errors, improve the imaging effect of the system, and improve the system's spatial resolution. It can be applied to other electro-optical imaging systems to improve their resolution.
Funding: This work was supported by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant No. 18KJB510045.
Abstract: This article presents a high-speed third-order continuous-time (CT) sigma-delta analog-to-digital converter (SDADC) based on a voltage-controlled oscillator (VCO), featuring a digital programmable quantizer structure. To improve the overall performance, not only the oversampling technique but also a noise-shaping enhancing technique is used to suppress in-band noise. Due to the intrinsic first-order noise-shaping of the VCO quantizer, the proposed third-order SDADC can ideally realize fourth-order noise-shaping. As a notable advantage, the proposed programmable VCO quantizer is digital-friendly, which simplifies the design process and improves the anti-interference capability of the circuit. A 4-bit programmable VCO quantizer clocked at 2.5 GHz, designed in a 40 nm complementary metal-oxide semiconductor (CMOS) technology, consists of an analog VCO circuit and a digital programmable quantizer, achieving 50.7 dB signal-to-noise ratio (SNR) and 26.9 dB signal-to-noise-and-distortion ratio (SNDR) for a 19 MHz, −3.5 dBFS input signal in a 78 MHz bandwidth (BW). The digital quantizer, which is programmed in the Verilog hardware description language (HDL), consists of two-stage D-flip-flop (DFF)-based registers, XOR gates, and an adder. The presented SDADC adopts the cascade of integrators with feed-forward summation (CIFF) structure with a third-order loop filter, operating at 2.5 GHz and showing a behavioral simulation performance of 92.9 dB SNR over a 78 MHz bandwidth.
Abstract: Learning from imbalanced data is one of the most challenging problems in binary classification, and it has gained more importance in recent years. When the class distribution is imbalanced, classical machine learning algorithms tend to lean strongly towards the majority class and disregard the minority. The accuracy may therefore be high, yet the model cannot recognize data instances in the minority class, leading to many misclassifications. Different methods have been proposed in the literature to handle the imbalance problem, but most are complicated and tend to introduce unnecessary noise. In this paper, we propose a simple oversampling method based on the multivariate Gaussian distribution and K-means clustering, called GK-Means. The new method aims to avoid generating noise and to control imbalance both between and within classes. Various experiments have been carried out with six classifiers and four oversampling methods. Experimental results on different imbalanced datasets show that the proposed GK-Means outperforms the other oversampling methods and improves classification performance as measured by F1-score and Accuracy.
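A rough sketch of the GK-Means idea described above: cluster the minority class with K-means, fit a multivariate Gaussian to each cluster, and sample synthetic points from those Gaussians. The clustering and allocation details are simplified assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def gk_means_oversample(X_min, n_new, n_clusters=3, random_state=0):
    """GK-Means-style oversampling sketch: cluster the minority class with
    K-means, fit a multivariate Gaussian to each cluster, and draw synthetic
    samples from those Gaussians. Allocation details are simplified."""
    rng = np.random.default_rng(random_state)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X_min)
    synthetic = []
    for k in range(n_clusters):
        cluster = X_min[km.labels_ == k]
        size = int(round(n_new * len(cluster) / len(X_min)))  # proportional share
        if size == 0 or len(cluster) < 2:
            continue
        mean = cluster.mean(axis=0)
        cov = np.cov(cluster, rowvar=False) + 1e-6 * np.eye(X_min.shape[1])
        synthetic.append(rng.multivariate_normal(mean, cov, size=size))
    return np.vstack(synthetic)

# usage: augment a toy minority class of 40 samples with 160 synthetic points
X_min = np.random.default_rng(1).normal(size=(40, 5))
print(gk_means_oversample(X_min, n_new=160).shape)
```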