BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are p...BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.展开更多
The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper ...The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper makes an attempt to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region.It examined the effectiveness of random forest(RF),multilayer perceptron(MLP),sequential minimal optimization regression(SMOreg)and bagging ensemble(B-RF,BSMOreg,B-MLP)models.A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training(70%)and testing(30%)datasets.The site-specific influencing factors were selected by employing a multicollinearity test.The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method.The effectiveness of machine learning models was verified through performance assessors.The landslide susceptibility maps were validated by the area under the receiver operating characteristic curves(ROC-AUC),accuracy,precision,recall and F1-score.The key performance metrics and map validation demonstrated that the BRF model(correlation coefficient:0.988,mean absolute error:0.010,root mean square error:0.058,relative absolute error:2.964,ROC-AUC:0.947,accuracy:0.778,precision:0.819,recall:0.917 and F-1 score:0.865)outperformed the single classifiers and other bagging ensemble models for landslide susceptibility.The results show that the largest area was found under the very high susceptibility zone(33.87%),followed by the low(27.30%),high(20.68%)and moderate(18.16%)susceptibility zones.The factors,namely average annual rainfall,slope,lithology,soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility.Soil texture,lineament density and elevation have been attributed to high and moderate susceptibility.Thus,the study calls for devising suitable landslide mitigation measures in the study area.Structural measures,an immediate response system,community participation and coordination among stakeholders may help lessen the detrimental impact of landslides.The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.展开更多
BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in ...BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in integrating complex clinical data.AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients.METHODS Data of patients treated for colorectal cancer(n=2044)at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected.Patients were divided into an experimental group(n=60)and a control group(n=1984)according to unplanned reoperation occurrence.Patients were also divided into a training group and a validation group(7:3 ratio).We used three different machine learning methods to screen characteristic variables.A nomogram was created based on multifactor logistic regression,and the model performance was assessed using receiver operating characteristic curve,calibration curve,Hosmer-Lemeshow test,and decision curve analysis.The risk scores of the two groups were calculated and compared to validate the model.RESULTS More patients in the experimental group were≥60 years old,male,and had a history of hypertension,laparotomy,and hypoproteinemia,compared to the control group.Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation(P<0.05):Prognostic Nutritional Index value,history of laparotomy,hypertension,or stroke,hypoproteinemia,age,tumor-node-metastasis staging,surgical time,gender,and American Society of Anesthesiologists classification.Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility.CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer,which can improve treatment decisions and prognosis.展开更多
Background and Objective The effectiveness of radiofrequency ablation(RFA)in improving long-term survival outcomes for patients with a solitary hepatocellular carcinoma(HCC)measuring 5 cm or less remains uncertain.Thi...Background and Objective The effectiveness of radiofrequency ablation(RFA)in improving long-term survival outcomes for patients with a solitary hepatocellular carcinoma(HCC)measuring 5 cm or less remains uncertain.This study was designed to elucidate the impact of RFA therapy on the survival outcomes of these patients and to construct a prognostic model for patients following RFA.Methods This study was performed using the Surveillance,Epidemiology,and End Results(SEER)database from 2004 to 2017,focusing on patients diagnosed with a solitary HCC lesion≤5 cm in size.We compared the overall survival(OS)and cancer-specific survival(CSS)rates of these patients with those of patients who received hepatectomy,radiotherapy,or chemotherapy or who were part of a blank control group.To enhance the reliability of our findings,we employed stabilized inverse probability treatment weighting(sIPTW)and stratified analyses.Additionally,we conducted a Cox regression analysis to identify prognostic factors.XGBoost models were developed to predict 1-,3-,and 5-year CSS.The XGBoost models were evaluated via receiver operating characteristic(ROC)curves,calibration plots,decision curve analysis(DCA)curves and so on.Results Regardless of whether the data were unadjusted or adjusted for the use of sIPTWs,the 5-year OS(46.7%)and CSS(58.9%)rates were greater in the RFA group than in the radiotherapy(27.1%/35.8%),chemotherapy(32.9%/43.7%),and blank control(18.6%/30.7%)groups,but these rates were lower than those in the hepatectomy group(69.4%/78.9%).Stratified analysis based on age and cirrhosis status revealed that RFA and hepatectomy yielded similar OS and CSS outcomes for patients with cirrhosis aged over 65 years.Age,race,marital status,grade,cirrhosis status,tumor size,and AFP level were selected to construct the XGBoost models based on the training cohort.The areas under the curve(AUCs)for 1,3,and 5 years in the validation cohort were 0.88,0.81,and 0.79,respectively.Calibration plots further demonstrated the consistency between the predicted and actual values in both the training and validation cohorts.Conclusion RFA can improve the survival of patients diagnosed with a solitary HCC lesion≤5 cm.In certain clinical scenarios,RFA achieves survival outcomes comparable to those of hepatectomy.The XGBoost models developed in this study performed admirably in predicting the CSS of patients with solitary HCC tumors smaller than 5 cm following RFA.展开更多
Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a p...Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and CNN. The model is fine-tuned using cross validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates Global Moran’s I for examining global autocorrelation, and local Moran’s I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of the traditional machine learning models. Results indicate superior performance with an R-squared of 0.90, outperforming RF 0.84 and CNN 0.74. This study contributed to a detailed understanding of spatial variations in data considering the geographical information (Longitude & Latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicated that the hybrid yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran’s I index was observed to be 0.10. This study underscores that the hybrid was able to predict correctly the house prices both in clusters and in dispersed areas.展开更多
The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning mode...The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning models have some problems,such as poor nonlinear performance,local optimum and incomplete factors feature extraction.These issues can affect the accuracy of slope stability prediction.Therefore,a deep learning algorithm called Long short-term memory(LSTM)has been innovatively proposed to predict slope stability.Taking the Ganzhou City in China as the study area,the landslide inventory and their characteristics of geotechnical parameters,slope height and slope angle are analyzed.Based on these characteristics,typical soil slopes are constructed using the Geo-Studio software.Five control factors affecting slope stability,including slope height,slope angle,internal friction angle,cohesion and volumetric weight,are selected to form different slope and construct model input variables.Then,the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors.Each slope stability coefficient and its corresponding control factors is a slope sample.As a result,a total of 2160 training samples and 450 testing samples are constructed.These sample sets are imported into LSTM for modelling and compared with the support vector machine(SVM),random forest(RF)and convo-lutional neural network(CNN).The results show that the LSTM overcomes the problem that the commonly used machine learning models have difficulty extracting global features.Furthermore,LSTM has a better prediction performance for slope stability compared to SVM,RF and CNN models.展开更多
To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method propose...To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method proposed by the authors promotes the application of slope units.However,LSP modeling based on these slope units has not been performed.Moreover,the heterogeneity of conditioning factors in slope units is neglected,leading to incomplete input variables of LSP modeling.In this study,the slope units extracted by the MSS method are used to construct LSP modeling,and the heterogeneity of conditioning factors is represented by the internal variations of conditioning factors within slope unit using the descriptive statistics features of mean,standard deviation and range.Thus,slope units-based machine learning models considering internal variations of conditioning factors(variant slope-machine learning)are proposed.The Chongyi County is selected as the case study and is divided into 53,055 slope units.Fifteen original slope unit-based conditioning factors are expanded to 38 slope unit-based conditioning factors through considering their internal variations.Random forest(RF)and multi-layer perceptron(MLP)machine learning models are used to construct variant Slope-RF and Slope-MLP models.Meanwhile,the Slope-RF and Slope-MLP models without considering the internal variations of conditioning factors,and conventional grid units-based machine learning(Grid-RF and MLP)models are built for comparisons through the LSP performance assessments.Results show that the variant Slopemachine learning models have higher LSP performances than Slope-machine learning models;LSP results of variant Slope-machine learning models have stronger directivity and practical application than Grid-machine learning models.It is concluded that slope units extracted by MSS method can be appropriate for LSP modeling,and the heterogeneity of conditioning factors within slope units can more comprehensively reflect the relationships between conditioning factors and landslides.The research results have important reference significance for land use and landslide prevention.展开更多
The use of a CO2 laser system for fabrication of microfluidic chip on polymethyl methacrylate (PMMA) is presented to reduce fabrication cost and time of chip. The grooving process of the laser system and a model for...The use of a CO2 laser system for fabrication of microfluidic chip on polymethyl methacrylate (PMMA) is presented to reduce fabrication cost and time of chip. The grooving process of the laser system and a model for the depth of microchannels are investigated. The relations between the depth of laser-cut channels and the laser beam power, velocity or the number of passes of the beam along the same channel are evaluated. In the experiments, the laser beam power varies from 0 to 50 W, the laser beam scanning velocity varies from 0 to 1 000 mm/s and the passes vary in the range of 1 to 10 times. Based on the principle of conservation of energy, the influence of the laser beam velocity, the laser power and the number of groove passes are examine. Considering the grooving interval energy loss, a modified mathematical model has been obtained and experimental data show good agreement with the theoretical model. This approach provides a simple way of predicting groove depths. The system provides a cost alternative of the other methods and it is especially useful on research work of rnicrofluidic prototyping due to the short cycle time of production.展开更多
Machine learning models were used to improve the accuracy of China Meteorological Administration Multisource Precipitation Analysis System(CMPAS)in complex terrain areas by combining rain gauge precipitation with topo...Machine learning models were used to improve the accuracy of China Meteorological Administration Multisource Precipitation Analysis System(CMPAS)in complex terrain areas by combining rain gauge precipitation with topographic factors like altitude,slope,slope direction,slope variability,surface roughness,and meteorological factors like temperature and wind speed.The results of the correction demonstrated that the ensemble learning method has a considerably corrective effect and the three methods(Random Forest,AdaBoost,and Bagging)adopted in the study had similar results.The mean bias between CMPAS and 85%of automatic weather stations has dropped by more than 30%.The plateau region displays the largest accuracy increase,the winter season shows the greatest error reduction,and decreasing precipitation improves the correction outcome.Additionally,the heavy precipitation process’precision has improved to some degree.For individual stations,the revised CMPAS error fluctuation range is significantly reduced.展开更多
This paper provides a review of predictive analytics for roads,identifying gaps and limitations in current methodologies.It explores the implications of these limitations on accuracy and application,while also discuss...This paper provides a review of predictive analytics for roads,identifying gaps and limitations in current methodologies.It explores the implications of these limitations on accuracy and application,while also discussing how advanced predictive analytics can address these challenges.The article acknowledges the transformative shift brought about by technological advancements and increased computational capabilities.The degradation of pavement surfaces due to increased road users has resulted in safety and comfort issues.Researchers have conducted studies to assess pavement condition and predict future changes in pavement structure.Pavement Management Systems are crucial in developing prediction performance models that estimate pavement condition and degradation severity over time.Machine learning algorithms,artificial neural networks,and regression models have been used,with strengths and weaknesses.Researchers generally agree on their accuracy in estimating pavement condition considering factors like traffic,pavement age,and weather conditions.However,it is important to carefully select an appropriate prediction model to achieve a high-quality prediction performance system.Understanding the strengths and weaknesses of each model enables informed decisions for implementing prediction models that suit specific needs.The advancement of prediction models,coupled with innovative technologies,will contribute to improved pavement management and the overall safety and comfort of road users.展开更多
The exhaust emissions and frequent traffic incidents caused by traffic congestion have affected the operation and development of urban transport systems.Monitoring and accurately forecasting urban traffic operation is...The exhaust emissions and frequent traffic incidents caused by traffic congestion have affected the operation and development of urban transport systems.Monitoring and accurately forecasting urban traffic operation is a critical task to formulate pertinent strategies to alleviate traffic congestion.Compared with traditional short-time traffic prediction,this study proposes a machine learning algorithm-based traffic forecasting model for daily-level peak hour traffic operation status prediction by using abundant historical data of urban traffic performance index(TPI).The study also constructed a multi-dimensional influencing factor set to further investigate the relationship between different factors on the quality of road network operation,including day of week,time period,public holiday,car usage restriction policy,special events,etc.Based on long-term historical TPI data,this research proposed a daily dimensional road network TPI prediction model by using an extreme gradient boosting algorithm(XGBoost).The model validation results show that the model prediction accuracy can reach higher than 90%.Compared with other prediction models,including Bayesian Ridge,Linear Regression,ElatsicNet,SVR,the XGBoost model has a better performance,and proves its superiority in large high-dimensional data sets.The daily dimensional prediction model proposed in this paper has an important application value for predicting traffic status and improving the operation quality of urban road networks.展开更多
N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning m...N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activities and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes: cha and cla, with the properties ε<sub>LUMO+1</sub> (one level above lowest unoccupied molecular orbital energy), d(C<sub>6</sub>-C<sub>5</sub>) (distance between C<sub>6</sub> and C<sub>5</sub> atoms in ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation developed and the chemical intuition enabled the design of sixteen new N-11-azaartemisinins (prediction set), moreover, models built with supervised machine learning methods were applied to this prediction set. The result of this application showed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.展开更多
In the existing landslide susceptibility prediction(LSP)models,the influences of random errors in landslide conditioning factors on LSP are not considered,instead the original conditioning factors are directly taken a...In the existing landslide susceptibility prediction(LSP)models,the influences of random errors in landslide conditioning factors on LSP are not considered,instead the original conditioning factors are directly taken as the model inputs,which brings uncertainties to LSP results.This study aims to reveal the influence rules of the different proportional random errors in conditioning factors on the LSP un-certainties,and further explore a method which can effectively reduce the random errors in conditioning factors.The original conditioning factors are firstly used to construct original factors-based LSP models,and then different random errors of 5%,10%,15% and 20%are added to these original factors for con-structing relevant errors-based LSP models.Secondly,low-pass filter-based LSP models are constructed by eliminating the random errors using low-pass filter method.Thirdly,the Ruijin County of China with 370 landslides and 16 conditioning factors are used as study case.Three typical machine learning models,i.e.multilayer perceptron(MLP),support vector machine(SVM)and random forest(RF),are selected as LSP models.Finally,the LSP uncertainties are discussed and results show that:(1)The low-pass filter can effectively reduce the random errors in conditioning factors to decrease the LSP uncertainties.(2)With the proportions of random errors increasing from 5%to 20%,the LSP uncertainty increases continuously.(3)The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors.(4)The influence degrees of two uncertainty issues,machine learning models and different proportions of random errors,on the LSP modeling are large and basically the same.(5)The Shapley values effectively explain the internal mechanism of machine learning model predicting landslide sus-ceptibility.In conclusion,greater proportion of random errors in conditioning factors results in higher LSP uncertainty,and low-pass filter can effectively reduce these random errors.展开更多
Rapid and accurate acquisition of soil organic matter(SOM)information in cultivated land is important for sustainable agricultural development and carbon balance management.This study proposed a novel approach to pred...Rapid and accurate acquisition of soil organic matter(SOM)information in cultivated land is important for sustainable agricultural development and carbon balance management.This study proposed a novel approach to predict SOM with high accuracy using multiyear synthetic remote sensing variables on a monthly scale.We obtained 12 monthly synthetic Sentinel-2 images covering the study area from 2016 to 2021 through the Google Earth Engine(GEE)platform,and reflectance bands and vegetation indices were extracted from these composite images.Then the random forest(RF),support vector machine(SVM)and gradient boosting regression tree(GBRT)models were tested to investigate the difference in SOM prediction accuracy under different combinations of monthly synthetic variables.Results showed that firstly,all monthly synthetic spectral bands of Sentinel-2 showed a significant correlation with SOM(P<0.05)for the months of January,March,April,October,and November.Secondly,in terms of single-monthly composite variables,the prediction accuracy was relatively poor,with the highest R^(2)value of 0.36 being observed in January.When monthly synthetic environmental variables were grouped in accordance with the four quarters of the year,the first quarter and the fourth quarter showed good performance,and any combination of three quarters was similar in estimation accuracy.The overall best performance was observed when all monthly synthetic variables were incorporated into the models.Thirdly,among the three models compared,the RF model was consistently more accurate than the SVM and GBRT models,achieving an R^(2)value of 0.56.Except for band 12 in December,the importance of the remaining bands did not exhibit significant differences.This research offers a new attempt to map SOM with high accuracy and fine spatial resolution based on monthly synthetic Sentinel-2 images.展开更多
Forest fires are natural disasters that can occur suddenly and can be very damaging,burning thousands of square kilometers.Prevention is better than suppression and prediction models of forest fire occurrence have dev...Forest fires are natural disasters that can occur suddenly and can be very damaging,burning thousands of square kilometers.Prevention is better than suppression and prediction models of forest fire occurrence have developed from the logistic regression model,the geographical weighted logistic regression model,the Lasso regression model,the random forest model,and the support vector machine model based on historical forest fire data from 2000 to 2019 in Jilin Province.The models,along with a distribution map are presented in this paper to provide a theoretical basis for forest fire management in this area.Existing studies show that the prediction accuracies of the two machine learning models are higher than those of the three generalized linear regression models.The accuracies of the random forest model,the support vector machine model,geographical weighted logistic regression model,the Lasso regression model,and logistic model were 88.7%,87.7%,86.0%,85.0%and 84.6%,respectively.Weather is the main factor affecting forest fires,while the impacts of topography factors,human and social-economic factors on fire occurrence were similar.展开更多
Tunnels are vital in connecting crucial transportation hubs as transportation infrastructure evolves.Variations in tunnel design standards and driving conditions across different levels directly impact driver visual p...Tunnels are vital in connecting crucial transportation hubs as transportation infrastructure evolves.Variations in tunnel design standards and driving conditions across different levels directly impact driver visual perception and traffic safety.This study employs a Gaussian hybrid clustering machine learning model to explore driver gaze patterns in highway tunnels and exits.By utilizing contour coefficients,the optimal number of classification clusters is determined.Analysis of driver visual behavior across tunnel levels,focusing on gaze point distribution,gaze duration,and sweep speed,was conducted.Findings indicate freeway tunnel exits exhibit three distinct fixation point categories aligning with Gaussian distribution,while highway tunnels display four such characteristics.Notably,in both tunnel types,65%of driver gaze is concentrated on the near area ahead of their lane.Differences emerge in highway tunnels due to oncoming traffic,leading to 13.47%more fixation points and 0.9%increased fixation time in the right lane compared to regular highway tunnel conditions.Moreover,scanning speeds predominantly fall within the 0.25-0.3 range,accounting for 75.47%and 31.14%of the total sweep speed.展开更多
Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes...Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn’s Randomized SearchCV, this project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy slightly dropped from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.展开更多
Artificial Intelligence (AI) is transforming organizational dynamics, and revolutionizing corporate leadership practices. This research paper delves into the question of how AI influences corporate leadership, examini...Artificial Intelligence (AI) is transforming organizational dynamics, and revolutionizing corporate leadership practices. This research paper delves into the question of how AI influences corporate leadership, examining both its advantages and disadvantages. Positive impacts of AI are evident in communication, feedback systems, tracking mechanisms, and decision-making processes within organizations. AI-powered communication tools, as exemplified by Slack, facilitate seamless collaboration, transcending geographical barriers. Feedback systems, like Adobe’s Performance Management System, employ AI algorithms to provide personalized development opportunities, enhancing employee growth. AI-based tracking systems optimize resource allocation, as exemplified by studies like “AI-Based Tracking Systems: Enhancing Efficiency and Accountability.” Additionally, AI-powered decision support, demonstrated during the COVID-19 pandemic, showcases the capability to navigate complex challenges and maintain resilience. However, AI adoption poses challenges in human resources, potentially leading to job displacement and necessitating upskilling efforts. Managing AI errors becomes crucial, as illustrated by instances like Amazon’s biased recruiting tool. Data privacy concerns also arise, emphasizing the need for robust security measures. The proposed solution suggests leveraging Local Machine Learning Models (LLMs) to address data privacy issues. Approaches such as federated learning, on-device learning, differential privacy, and homomorphic encryption offer promising strategies. By exploring the evolving dynamics of AI and leadership, this research advocates for responsible AI adoption and proposes LLMs as a potential solution, fostering a balanced integration of AI benefits while mitigating associated risks in corporate settings.展开更多
In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the...In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of the nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method whose neighbouring points are optimized can effectively predict the small-time scale traffic measurement data and can reproduce the statistical features of real traffic measurements.展开更多
This paper presents a state-of-the-art review in modeling approach of hardware in the loop simulation(HILS)realization of electric machine drives using commercial real time machines.HILS implementation using digital s...This paper presents a state-of-the-art review in modeling approach of hardware in the loop simulation(HILS)realization of electric machine drives using commercial real time machines.HILS implementation using digital signal processors(DSPs)and field programmable gate array(FPGA)for electric machine drives has been investigated but those methods have drawbacks such as complexity in development and verification.Among various HILS implementation approaches,more efficient development and verification for electric machine drives can be achieved through use of commercial real time machines.As well as implementation of the HILS,accurate modeling of a control target system plays an important role.Therefore,modeling trend in electric machine drives for HILS implementation is needed to be reviewed.This paper provides a background of HILS and commercially available real time machines and characteristics of each real time machine are introduced.Also,recent trends and progress of permanent magnet synchronous machines(PMSMs)modeling are presented for providing more accurate HILS implementation approaches in this paper.展开更多
文摘BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.
文摘The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper makes an attempt to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region.It examined the effectiveness of random forest(RF),multilayer perceptron(MLP),sequential minimal optimization regression(SMOreg)and bagging ensemble(B-RF,BSMOreg,B-MLP)models.A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training(70%)and testing(30%)datasets.The site-specific influencing factors were selected by employing a multicollinearity test.The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method.The effectiveness of machine learning models was verified through performance assessors.The landslide susceptibility maps were validated by the area under the receiver operating characteristic curves(ROC-AUC),accuracy,precision,recall and F1-score.The key performance metrics and map validation demonstrated that the BRF model(correlation coefficient:0.988,mean absolute error:0.010,root mean square error:0.058,relative absolute error:2.964,ROC-AUC:0.947,accuracy:0.778,precision:0.819,recall:0.917 and F-1 score:0.865)outperformed the single classifiers and other bagging ensemble models for landslide susceptibility.The results show that the largest area was found under the very high susceptibility zone(33.87%),followed by the low(27.30%),high(20.68%)and moderate(18.16%)susceptibility zones.The factors,namely average annual rainfall,slope,lithology,soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility.Soil texture,lineament density and elevation have been attributed to high and moderate susceptibility.Thus,the study calls for devising suitable landslide mitigation measures in the study area.Structural measures,an immediate response system,community participation and coordination among stakeholders may help lessen the detrimental impact of landslides.The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
基金This study has been reviewed and approved by the Clinical Research Ethics Committee of Wenzhou Central Hospital and the First Hospital Affiliated to Wenzhou Medical University,No.KY2024-R016.
文摘BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in integrating complex clinical data.AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients.METHODS Data of patients treated for colorectal cancer(n=2044)at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected.Patients were divided into an experimental group(n=60)and a control group(n=1984)according to unplanned reoperation occurrence.Patients were also divided into a training group and a validation group(7:3 ratio).We used three different machine learning methods to screen characteristic variables.A nomogram was created based on multifactor logistic regression,and the model performance was assessed using receiver operating characteristic curve,calibration curve,Hosmer-Lemeshow test,and decision curve analysis.The risk scores of the two groups were calculated and compared to validate the model.RESULTS More patients in the experimental group were≥60 years old,male,and had a history of hypertension,laparotomy,and hypoproteinemia,compared to the control group.Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation(P<0.05):Prognostic Nutritional Index value,history of laparotomy,hypertension,or stroke,hypoproteinemia,age,tumor-node-metastasis staging,surgical time,gender,and American Society of Anesthesiologists classification.Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility.CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer,which can improve treatment decisions and prognosis.
文摘Background and Objective The effectiveness of radiofrequency ablation(RFA)in improving long-term survival outcomes for patients with a solitary hepatocellular carcinoma(HCC)measuring 5 cm or less remains uncertain.This study was designed to elucidate the impact of RFA therapy on the survival outcomes of these patients and to construct a prognostic model for patients following RFA.Methods This study was performed using the Surveillance,Epidemiology,and End Results(SEER)database from 2004 to 2017,focusing on patients diagnosed with a solitary HCC lesion≤5 cm in size.We compared the overall survival(OS)and cancer-specific survival(CSS)rates of these patients with those of patients who received hepatectomy,radiotherapy,or chemotherapy or who were part of a blank control group.To enhance the reliability of our findings,we employed stabilized inverse probability treatment weighting(sIPTW)and stratified analyses.Additionally,we conducted a Cox regression analysis to identify prognostic factors.XGBoost models were developed to predict 1-,3-,and 5-year CSS.The XGBoost models were evaluated via receiver operating characteristic(ROC)curves,calibration plots,decision curve analysis(DCA)curves and so on.Results Regardless of whether the data were unadjusted or adjusted for the use of sIPTWs,the 5-year OS(46.7%)and CSS(58.9%)rates were greater in the RFA group than in the radiotherapy(27.1%/35.8%),chemotherapy(32.9%/43.7%),and blank control(18.6%/30.7%)groups,but these rates were lower than those in the hepatectomy group(69.4%/78.9%).Stratified analysis based on age and cirrhosis status revealed that RFA and hepatectomy yielded similar OS and CSS outcomes for patients with cirrhosis aged over 65 years.Age,race,marital status,grade,cirrhosis status,tumor size,and AFP level were selected to construct the XGBoost models based on the training cohort.The areas under the curve(AUCs)for 1,3,and 5 years in the validation cohort were 0.88,0.81,and 0.79,respectively.Calibration plots further demonstrated the consistency between the predicted and actual values in both the training and validation cohorts.Conclusion RFA can improve the survival of patients diagnosed with a solitary HCC lesion≤5 cm.In certain clinical scenarios,RFA achieves survival outcomes comparable to those of hepatectomy.The XGBoost models developed in this study performed admirably in predicting the CSS of patients with solitary HCC tumors smaller than 5 cm following RFA.
文摘Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and CNN. The model is fine-tuned using cross validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates Global Moran’s I for examining global autocorrelation, and local Moran’s I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of the traditional machine learning models. Results indicate superior performance with an R-squared of 0.90, outperforming RF 0.84 and CNN 0.74. This study contributed to a detailed understanding of spatial variations in data considering the geographical information (Longitude & Latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicated that the hybrid yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran’s I index was observed to be 0.10. This study underscores that the hybrid was able to predict correctly the house prices both in clusters and in dispersed areas.
基金funded by the National Natural Science Foundation of China (41807285)。
文摘The numerical simulation and slope stability prediction are the focus of slope disaster research.Recently,machine learning models are commonly used in the slope stability prediction.However,these machine learning models have some problems,such as poor nonlinear performance,local optimum and incomplete factors feature extraction.These issues can affect the accuracy of slope stability prediction.Therefore,a deep learning algorithm called Long short-term memory(LSTM)has been innovatively proposed to predict slope stability.Taking the Ganzhou City in China as the study area,the landslide inventory and their characteristics of geotechnical parameters,slope height and slope angle are analyzed.Based on these characteristics,typical soil slopes are constructed using the Geo-Studio software.Five control factors affecting slope stability,including slope height,slope angle,internal friction angle,cohesion and volumetric weight,are selected to form different slope and construct model input variables.Then,the limit equilibrium method is used to calculate the stability coefficients of these typical soil slopes under different control factors.Each slope stability coefficient and its corresponding control factors is a slope sample.As a result,a total of 2160 training samples and 450 testing samples are constructed.These sample sets are imported into LSTM for modelling and compared with the support vector machine(SVM),random forest(RF)and convo-lutional neural network(CNN).The results show that the LSTM overcomes the problem that the commonly used machine learning models have difficulty extracting global features.Furthermore,LSTM has a better prediction performance for slope stability compared to SVM,RF and CNN models.
基金funded by the Natural Science Foundation of China(Grant Nos.41807285,41972280 and 52179103).
文摘To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method proposed by the authors promotes the application of slope units.However,LSP modeling based on these slope units has not been performed.Moreover,the heterogeneity of conditioning factors in slope units is neglected,leading to incomplete input variables of LSP modeling.In this study,the slope units extracted by the MSS method are used to construct LSP modeling,and the heterogeneity of conditioning factors is represented by the internal variations of conditioning factors within slope unit using the descriptive statistics features of mean,standard deviation and range.Thus,slope units-based machine learning models considering internal variations of conditioning factors(variant slope-machine learning)are proposed.The Chongyi County is selected as the case study and is divided into 53,055 slope units.Fifteen original slope unit-based conditioning factors are expanded to 38 slope unit-based conditioning factors through considering their internal variations.Random forest(RF)and multi-layer perceptron(MLP)machine learning models are used to construct variant Slope-RF and Slope-MLP models.Meanwhile,the Slope-RF and Slope-MLP models without considering the internal variations of conditioning factors,and conventional grid units-based machine learning(Grid-RF and MLP)models are built for comparisons through the LSP performance assessments.Results show that the variant Slopemachine learning models have higher LSP performances than Slope-machine learning models;LSP results of variant Slope-machine learning models have stronger directivity and practical application than Grid-machine learning models.It is concluded that slope units extracted by MSS method can be appropriate for LSP modeling,and the heterogeneity of conditioning factors within slope units can more comprehensively reflect the relationships between conditioning factors and landslides.The research results have important reference significance for land use and landslide prevention.
基金This project is supported by National Hi-tech Research and Development Program of China (863 Program, No.2002AA421150)Specialized Research Fund for the Doctoral Program of Higher Education of China (No.20030335091).
文摘The use of a CO2 laser system for fabrication of microfluidic chip on polymethyl methacrylate (PMMA) is presented to reduce fabrication cost and time of chip. The grooving process of the laser system and a model for the depth of microchannels are investigated. The relations between the depth of laser-cut channels and the laser beam power, velocity or the number of passes of the beam along the same channel are evaluated. In the experiments, the laser beam power varies from 0 to 50 W, the laser beam scanning velocity varies from 0 to 1 000 mm/s and the passes vary in the range of 1 to 10 times. Based on the principle of conservation of energy, the influence of the laser beam velocity, the laser power and the number of groove passes are examine. Considering the grooving interval energy loss, a modified mathematical model has been obtained and experimental data show good agreement with the theoretical model. This approach provides a simple way of predicting groove depths. The system provides a cost alternative of the other methods and it is especially useful on research work of rnicrofluidic prototyping due to the short cycle time of production.
基金Program of Science and Technology Department of Sichuan Province(2022YFS0541-02)Program of Heavy Rain and Drought-flood Disasters in Plateau and Basin Key Laboratory of Sichuan Province(SCQXKJQN202121)Innovative Development Program of the China Meteorological Administration(CXFZ2021Z007)。
文摘Machine learning models were used to improve the accuracy of China Meteorological Administration Multisource Precipitation Analysis System(CMPAS)in complex terrain areas by combining rain gauge precipitation with topographic factors like altitude,slope,slope direction,slope variability,surface roughness,and meteorological factors like temperature and wind speed.The results of the correction demonstrated that the ensemble learning method has a considerably corrective effect and the three methods(Random Forest,AdaBoost,and Bagging)adopted in the study had similar results.The mean bias between CMPAS and 85%of automatic weather stations has dropped by more than 30%.The plateau region displays the largest accuracy increase,the winter season shows the greatest error reduction,and decreasing precipitation improves the correction outcome.Additionally,the heavy precipitation process’precision has improved to some degree.For individual stations,the revised CMPAS error fluctuation range is significantly reduced.
文摘This paper provides a review of predictive analytics for roads,identifying gaps and limitations in current methodologies.It explores the implications of these limitations on accuracy and application,while also discussing how advanced predictive analytics can address these challenges.The article acknowledges the transformative shift brought about by technological advancements and increased computational capabilities.The degradation of pavement surfaces due to increased road users has resulted in safety and comfort issues.Researchers have conducted studies to assess pavement condition and predict future changes in pavement structure.Pavement Management Systems are crucial in developing prediction performance models that estimate pavement condition and degradation severity over time.Machine learning algorithms,artificial neural networks,and regression models have been used,with strengths and weaknesses.Researchers generally agree on their accuracy in estimating pavement condition considering factors like traffic,pavement age,and weather conditions.However,it is important to carefully select an appropriate prediction model to achieve a high-quality prediction performance system.Understanding the strengths and weaknesses of each model enables informed decisions for implementing prediction models that suit specific needs.The advancement of prediction models,coupled with innovative technologies,will contribute to improved pavement management and the overall safety and comfort of road users.
基金funded by the National Natural Science Foundation of China(NFSC)(No.52072011)。
文摘The exhaust emissions and frequent traffic incidents caused by traffic congestion have affected the operation and development of urban transport systems.Monitoring and accurately forecasting urban traffic operation is a critical task to formulate pertinent strategies to alleviate traffic congestion.Compared with traditional short-time traffic prediction,this study proposes a machine learning algorithm-based traffic forecasting model for daily-level peak hour traffic operation status prediction by using abundant historical data of urban traffic performance index(TPI).The study also constructed a multi-dimensional influencing factor set to further investigate the relationship between different factors on the quality of road network operation,including day of week,time period,public holiday,car usage restriction policy,special events,etc.Based on long-term historical TPI data,this research proposed a daily dimensional road network TPI prediction model by using an extreme gradient boosting algorithm(XGBoost).The model validation results show that the model prediction accuracy can reach higher than 90%.Compared with other prediction models,including Bayesian Ridge,Linear Regression,ElatsicNet,SVR,the XGBoost model has a better performance,and proves its superiority in large high-dimensional data sets.The daily dimensional prediction model proposed in this paper has an important application value for predicting traffic status and improving the operation quality of urban road networks.
文摘N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activities and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes: cha and cla, with the properties ε<sub>LUMO+1</sub> (one level above lowest unoccupied molecular orbital energy), d(C<sub>6</sub>-C<sub>5</sub>) (distance between C<sub>6</sub> and C<sub>5</sub> atoms in ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation developed and the chemical intuition enabled the design of sixteen new N-11-azaartemisinins (prediction set), moreover, models built with supervised machine learning methods were applied to this prediction set. The result of this application showed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.
基金This work is funded by the National Natural Science Foundation of China(Grant Nos.42377164 and 52079062)the National Science Fund for Distinguished Young Scholars of China(Grant No.52222905).
文摘In the existing landslide susceptibility prediction(LSP)models,the influences of random errors in landslide conditioning factors on LSP are not considered,instead the original conditioning factors are directly taken as the model inputs,which brings uncertainties to LSP results.This study aims to reveal the influence rules of the different proportional random errors in conditioning factors on the LSP un-certainties,and further explore a method which can effectively reduce the random errors in conditioning factors.The original conditioning factors are firstly used to construct original factors-based LSP models,and then different random errors of 5%,10%,15% and 20%are added to these original factors for con-structing relevant errors-based LSP models.Secondly,low-pass filter-based LSP models are constructed by eliminating the random errors using low-pass filter method.Thirdly,the Ruijin County of China with 370 landslides and 16 conditioning factors are used as study case.Three typical machine learning models,i.e.multilayer perceptron(MLP),support vector machine(SVM)and random forest(RF),are selected as LSP models.Finally,the LSP uncertainties are discussed and results show that:(1)The low-pass filter can effectively reduce the random errors in conditioning factors to decrease the LSP uncertainties.(2)With the proportions of random errors increasing from 5%to 20%,the LSP uncertainty increases continuously.(3)The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors.(4)The influence degrees of two uncertainty issues,machine learning models and different proportions of random errors,on the LSP modeling are large and basically the same.(5)The Shapley values effectively explain the internal mechanism of machine learning model predicting landslide sus-ceptibility.In conclusion,greater proportion of random errors in conditioning factors results in higher LSP uncertainty,and low-pass filter can effectively reduce these random errors.
基金National Key Research and Development Program of China(2022YFB3903302 and 2021YFC1809104)。
文摘Rapid and accurate acquisition of soil organic matter(SOM)information in cultivated land is important for sustainable agricultural development and carbon balance management.This study proposed a novel approach to predict SOM with high accuracy using multiyear synthetic remote sensing variables on a monthly scale.We obtained 12 monthly synthetic Sentinel-2 images covering the study area from 2016 to 2021 through the Google Earth Engine(GEE)platform,and reflectance bands and vegetation indices were extracted from these composite images.Then the random forest(RF),support vector machine(SVM)and gradient boosting regression tree(GBRT)models were tested to investigate the difference in SOM prediction accuracy under different combinations of monthly synthetic variables.Results showed that firstly,all monthly synthetic spectral bands of Sentinel-2 showed a significant correlation with SOM(P<0.05)for the months of January,March,April,October,and November.Secondly,in terms of single-monthly composite variables,the prediction accuracy was relatively poor,with the highest R^(2)value of 0.36 being observed in January.When monthly synthetic environmental variables were grouped in accordance with the four quarters of the year,the first quarter and the fourth quarter showed good performance,and any combination of three quarters was similar in estimation accuracy.The overall best performance was observed when all monthly synthetic variables were incorporated into the models.Thirdly,among the three models compared,the RF model was consistently more accurate than the SVM and GBRT models,achieving an R^(2)value of 0.56.Except for band 12 in December,the importance of the remaining bands did not exhibit significant differences.This research offers a new attempt to map SOM with high accuracy and fine spatial resolution based on monthly synthetic Sentinel-2 images.
基金This research was funded by the National Natural Science Foundation of China(grant no.32271881).
文摘Forest fires are natural disasters that can occur suddenly and can be very damaging,burning thousands of square kilometers.Prevention is better than suppression and prediction models of forest fire occurrence have developed from the logistic regression model,the geographical weighted logistic regression model,the Lasso regression model,the random forest model,and the support vector machine model based on historical forest fire data from 2000 to 2019 in Jilin Province.The models,along with a distribution map are presented in this paper to provide a theoretical basis for forest fire management in this area.Existing studies show that the prediction accuracies of the two machine learning models are higher than those of the three generalized linear regression models.The accuracies of the random forest model,the support vector machine model,geographical weighted logistic regression model,the Lasso regression model,and logistic model were 88.7%,87.7%,86.0%,85.0%and 84.6%,respectively.Weather is the main factor affecting forest fires,while the impacts of topography factors,human and social-economic factors on fire occurrence were similar.
基金supported by the National Natural Science Foundation of China(52302437)the Cangzhou Science and Technology Plan Project(213101011)+1 种基金the Science and Technology Program Projects of Shandong Provincial Department of Transportation(2024B28)the Doctoral Scientific Research Start-up Foundation of Shandong University of Technology(422049).
文摘Tunnels are vital in connecting crucial transportation hubs as transportation infrastructure evolves.Variations in tunnel design standards and driving conditions across different levels directly impact driver visual perception and traffic safety.This study employs a Gaussian hybrid clustering machine learning model to explore driver gaze patterns in highway tunnels and exits.By utilizing contour coefficients,the optimal number of classification clusters is determined.Analysis of driver visual behavior across tunnel levels,focusing on gaze point distribution,gaze duration,and sweep speed,was conducted.Findings indicate freeway tunnel exits exhibit three distinct fixation point categories aligning with Gaussian distribution,while highway tunnels display four such characteristics.Notably,in both tunnel types,65%of driver gaze is concentrated on the near area ahead of their lane.Differences emerge in highway tunnels due to oncoming traffic,leading to 13.47%more fixation points and 0.9%increased fixation time in the right lane compared to regular highway tunnel conditions.Moreover,scanning speeds predominantly fall within the 0.25-0.3 range,accounting for 75.47%and 31.14%of the total sweep speed.
文摘Hyperparameter tuning is a key step in developing high-performing machine learning models, but searching large hyperparameter spaces requires extensive computation using standard sequential methods. This work analyzes the performance gains from parallel versus sequential hyperparameter optimization. Using scikit-learn’s Randomized SearchCV, this project tuned a Random Forest classifier for fake news detection via randomized grid search. Setting n_jobs to -1 enabled full parallelization across CPU cores. Results show the parallel implementation achieved over 5× faster CPU times and 3× faster total run times compared to sequential tuning. However, test accuracy slightly dropped from 99.26% sequentially to 99.15% with parallelism, indicating a trade-off between evaluation efficiency and model performance. Still, the significant computational gains allow more extensive hyperparameter exploration within reasonable timeframes, outweighing the small accuracy decrease. Further analysis could better quantify this trade-off across different models, tuning techniques, tasks, and hardware.
文摘Artificial Intelligence (AI) is transforming organizational dynamics, and revolutionizing corporate leadership practices. This research paper delves into the question of how AI influences corporate leadership, examining both its advantages and disadvantages. Positive impacts of AI are evident in communication, feedback systems, tracking mechanisms, and decision-making processes within organizations. AI-powered communication tools, as exemplified by Slack, facilitate seamless collaboration, transcending geographical barriers. Feedback systems, like Adobe’s Performance Management System, employ AI algorithms to provide personalized development opportunities, enhancing employee growth. AI-based tracking systems optimize resource allocation, as exemplified by studies like “AI-Based Tracking Systems: Enhancing Efficiency and Accountability.” Additionally, AI-powered decision support, demonstrated during the COVID-19 pandemic, showcases the capability to navigate complex challenges and maintain resilience. However, AI adoption poses challenges in human resources, potentially leading to job displacement and necessitating upskilling efforts. Managing AI errors becomes crucial, as illustrated by instances like Amazon’s biased recruiting tool. Data privacy concerns also arise, emphasizing the need for robust security measures. The proposed solution suggests leveraging Local Machine Learning Models (LLMs) to address data privacy issues. Approaches such as federated learning, on-device learning, differential privacy, and homomorphic encryption offer promising strategies. By exploring the evolving dynamics of AI and leadership, this research advocates for responsible AI adoption and proposes LLMs as a potential solution, fostering a balanced integration of AI benefits while mitigating associated risks in corporate settings.
基金Project supported by the National Natural Science Foundation of China (Grant No 60573065)the Natural Science Foundation of Shandong Province,China (Grant No Y2007G33)the Key Subject Research Foundation of Shandong Province,China(Grant No XTD0708)
文摘In this paper we apply the nonlinear time series analysis method to small-time scale traffic measurement data. The prediction-based method is used to determine the embedding dimension of the traffic data. Based on the reconstructed phase space, the local support vector machine prediction method is used to predict the traffic measurement data, and the BIC-based neighbouring point selection method is used to choose the number of the nearest neighbouring points for the local support vector machine regression model. The experimental results show that the local support vector machine prediction method whose neighbouring points are optimized can effectively predict the small-time scale traffic measurement data and can reproduce the statistical features of real traffic measurements.
基金supported in part by the National Research Foundation of Korea(NRF)grant funded by Korea government(No.2020R1C1C1013260)in part by INHA UNIVERSITY Research Grant.
文摘This paper presents a state-of-the-art review in modeling approach of hardware in the loop simulation(HILS)realization of electric machine drives using commercial real time machines.HILS implementation using digital signal processors(DSPs)and field programmable gate array(FPGA)for electric machine drives has been investigated but those methods have drawbacks such as complexity in development and verification.Among various HILS implementation approaches,more efficient development and verification for electric machine drives can be achieved through use of commercial real time machines.As well as implementation of the HILS,accurate modeling of a control target system plays an important role.Therefore,modeling trend in electric machine drives for HILS implementation is needed to be reviewed.This paper provides a background of HILS and commercially available real time machines and characteristics of each real time machine are introduced.Also,recent trends and progress of permanent magnet synchronous machines(PMSMs)modeling are presented for providing more accurate HILS implementation approaches in this paper.