There exists a seeming lack of empirically determined cargo throughput benchmark models for the privatized West African port terminals, particularly in Nigeria, to serve as target benchmarks which terminal operators and port authorities must drive towards to ensure that the current improvement in port productivity experienced in the post-concession era is sustained. The study was therefore aimed at developing benchmarks for the cargo throughput performances of the five privatized Nigerian ports of Apapa (Lagos), Port-Harcourt, Onne, Warri and Calabar. The benchmark developed for each seaport must be higher than that seaport's pre-privatization cargo throughput performance. This became important following the improvements observed in the cargo throughput performances of the various ports from the year 2006, after the privatization of the ports, and the recent recession faced in the country, which seems to have retarded cargo throughput and other measures of seaport performance in the various Nigerian ports. Using Cp1, CL1, Cw1, Co1, Cc1 to represent the base-year (2006) cargo throughput performances of the Port-Harcourt, Lagos, Warri, Onne and Calabar seaports respectively, and n, d to represent the number of post-privatization years covered in the study and the common difference in cargo throughput performances, the study used a historical design approach in which time series data on cargo throughput performances of the ports, obtained from the Nigerian Ports Authority (NPA) annual statistical reports, were analyzed using the converging and diverging arithmetic series mathematical modeling tool and MATLAB software, to determine benchmark models for ensuring that the improved cargo throughput performances of the various seaports are sustained and remain higher than the pre-privatization cargo throughput performances. The study developed the following cargo throughput benchmark models for each seaport as findings: Lagos port = CL1 + (n-1)d ≥ 15,223,340; Onne port = Co1 + (n-1)d ≥ 15,820,381; Port-Harcourt port = Cp1 + (n-1)d ≥ 28,016,979; Warri = Cw1 + (n-1)d ≥ 4,643,128; Calabar = Cc1 + (n-1)d ≥ 7,963,434. It was recommended that, to improve port revenue, which depends on cargo throughput and vessel call rate, the cargo throughput benchmark models developed for the individual seaports should be used to empirically model the quanta of cargo throughput needed to economically sustain and improve the level of port operations. They should equally influence port marketing drives. This will ensure that the performance of the ports does not recede into the poor performance indices experienced in the pre-privatization era.
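The benchmark models above are arithmetic series of the form C1 + (n-1)d. As a quick illustration, the minimum common difference d needed to reach a benchmark after n years can be computed directly; the base-year figure in the example is hypothetical, and only the 15,223,340 Lagos benchmark comes from the study.

```python
def min_common_difference(base_throughput, benchmark, years):
    """Minimum annual increment d so that base + (years-1)*d >= benchmark
    after the given number of post-privatization years (arithmetic series model)."""
    if years < 2:
        raise ValueError("need at least two years for a common difference")
    return max(0.0, (benchmark - base_throughput) / (years - 1))

def projected_throughput(base_throughput, d, n):
    """Cargo throughput in year n under the arithmetic model C1 + (n-1)d."""
    return base_throughput + (n - 1) * d

# Hypothetical base-year throughput of 10,000,000 tonnes vs. the quoted
# Lagos (Apapa) benchmark over a 10-year horizon.
d = min_common_difference(10_000_000, 15_223_340, 10)
assert projected_throughput(10_000_000, d, 10) >= 15_223_340
```

The same two functions apply to each port's model by substituting that port's base-year value and benchmark.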
With the adoption of the Luanda Declaration at the end of the conference, it was fairly evident that African governments could no longer deny the causal links, or intersection, between the environment and health care for people across the continent.
AIM: To estimate and compare sex-specific screening polypectomy rates to quality benchmarks of 40% in men and 30% in women. METHODS: A prospective cohort study was undertaken of patients aged 50-75, scheduled for colonoscopy, and covered by the Québec universal health insurance plan. Endoscopist and patient questionnaires were used to obtain screening and non-screening colonoscopy indications. Patient self-report was used to obtain history of gastrointestinal conditions/symptoms and prior colonoscopy. Sex-specific polypectomy rates (PRs) and 95%CIs were calculated using Bayesian hierarchical logistic regression. RESULTS: In total, 45 endoscopists and 2134 (mean age = 61, 50% female) of their patients participated. According to patients, screening PRs in males and females were 32.4% (95%CI: 23.8-41.8) and 19.4% (95%CI: 13.1-25.4), respectively. According to endoscopists, screening PRs in males and females were 30.2% (95%CI: 27.0-41.9) and 16.6% (95%CI: 16.3-28.6), respectively. Sex-specific PRs did not meet quality benchmarks at all ages except for: males aged 65-69 (patient screening indication), and males aged 70-74 (endoscopist screening indication). For all patients aged 50-54, none of the CIs included the quality benchmarks. CONCLUSION: Most sex-specific screening PRs in Québec were below quality benchmarks; PRs were especially low for all 50-54 year olds.
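Whether a rate "meets" or "misses" a benchmark here turns on where the benchmark falls relative to the 95%CI. A minimal sketch of that comparison follows; the three-way labels are my own shorthand, not the paper's terminology.

```python
def benchmark_status(ci_low, ci_high, benchmark):
    """Classify a screening polypectomy rate against a quality benchmark
    using the bounds of its 95% CI (all values in percent)."""
    if ci_low >= benchmark:
        return "meets benchmark"
    if ci_high < benchmark:
        return "below benchmark"
    return "inconclusive (CI includes benchmark)"

# Figures from the abstract: male screening PR (patient indication),
# CI 23.8-41.8, vs. the 40% male benchmark -> the CI straddles it.
assert benchmark_status(23.8, 41.8, 40.0).startswith("inconclusive")
# Female screening PR, CI 13.1-25.4, vs. the 30% female benchmark.
assert benchmark_status(13.1, 25.4, 30.0) == "below benchmark"
```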
Helicopter EMS (HEMS) allows patients to be quickly transported to regional cardiac centers, often to receive primary percutaneous coronary intervention (PCI). Since PCI is a time-critical therapy, it is important that patients get to primary PCI as quickly as possible. HEMS crews’ “on-scene” times for trauma patients have been extensively studied, and recent years have seen many efforts to minimize the time required to prepare patients for transport. There has been less attention to interfacility transport “scene times” for HEMS crews at referring hospitals; this includes stabilization times for preparing cardiac patients for loading onto aircraft for HEMS transport to primary PCI. In the absence of guiding evidence, system benchmarking and quality improvement are difficult. Therefore, the current study was undertaken to assess and describe the HEMS crew “on-scene” times, or “patient stabilization times” (PSTs), at referring hospitals for interfacility-transported cardiac patients flown for primary PCI. Descriptive analysis identified a median PST of 19 minutes (interquartile range 15-24), and univariate analyses using Kruskal-Wallis testing found no association between prolonged PST and sending unit type (Emergency Department versus other), off-hours transports, or relatively frequent (at least monthly) use of HEMS (p for all comparisons > 0.64). Outlier PSTs, defined a priori as those exceeding the median by at least a half-hour, were found in 12% of all cases. These data could be useful as a starting point for system planning and benchmarking efforts in regionalized systems of acute cardiac care.
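The descriptive statistics used here (median, IQR, and the a-priori outlier rule of exceeding the median by at least 30 minutes) can be sketched as follows; the quartile method is an assumption, since the paper does not specify one.

```python
import statistics

def pst_summary(times_min, outlier_margin=30):
    """Median, interquartile range, and outlier fraction for patient
    stabilization times (PSTs), in minutes. An outlier exceeds the
    median by at least `outlier_margin` minutes, mirroring the study's
    a-priori definition. Quartiles use Python's 'exclusive' method."""
    q1, med, q3 = statistics.quantiles(times_min, n=4)
    outliers = sum(1 for t in times_min if t - med >= outlier_margin)
    return {"median": med, "iqr": (q1, q3),
            "outlier_fraction": outliers / len(times_min)}
```

Applied to a transport log, the returned dictionary gives exactly the three figures reported in the abstract (median, IQR bounds, and outlier share).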
Pirá is a reading comprehension dataset focused on the ocean, the Brazilian coast, and climate change, built from a collection of scientific abstracts and reports on these topics. This dataset represents a versatile language resource, particularly useful for testing the ability of current machine learning models to acquire expert scientific knowledge. Despite its potential, a detailed set of baselines has not yet been developed for Pirá. By creating these baselines, researchers can more easily utilize Pirá as a resource for testing machine learning models across a wide range of question answering tasks. In this paper, we define six benchmarks over the Pirá dataset, covering closed generative question answering, machine reading comprehension, information retrieval, open question answering, answer triggering, and multiple choice question answering. As part of this effort, we have also produced a curated version of the original dataset, where we fixed a number of grammar issues, repetitions, and other shortcomings. Furthermore, the dataset has been extended in several new directions so as to serve the aforementioned benchmarks: translation of supporting texts from English into Portuguese, classification labels for answerability, automatic paraphrases of questions and answers, and multiple choice candidates. The results described in this paper provide several points of reference for researchers interested in exploring the challenges posed by the Pirá dataset.
Automatic modal identification via automatic interpretation of the stabilization diagram provides a key technique in bridge structural health monitoring. This paper reviews progress in the area of automatic modal identification based on interpreting the stabilization diagram. The whole identification process is divided into four steps, from establishing the stabilization diagram to removing outliers in the identification results. The criteria and algorithms used in each step in the existing studies are carefully summarized and classified. Comparisons between typical methods for cleaning and interpreting the stabilization diagram are also conducted. Real-structure benchmarks used in the existing studies to validate the proposed automatic modal identification methods are also summarized. Based on the review and comparison, the specific ratio method for cleaning the stabilization diagram, the hierarchical clustering method for interpreting the stabilization diagram, and the adjusted boxplot for removing outliers in the identification results are the most suitable methods for each step. The key point of automatic modal identification based on interpreting the stabilization diagram is also discussed, and it is recommended to pay more attention to cleaning the stabilization diagram. Future study of automatic modal identification in situations with very few deployed sensors deserves more attention. This review aims to help researchers and practitioners implement existing automatic modal identification algorithms effectively and develop more suitable and practical methods for civil engineering structures in the future.
This research paper presents a novel optimization method called the Synergistic Swarm Optimization Algorithm (SSOA). The SSOA combines the principles of swarm intelligence and synergistic cooperation to search for optimal solutions efficiently. A synergistic cooperation mechanism is employed, whereby particles exchange information and learn from each other to improve their search behaviors. This cooperation enhances the exploitation of promising regions in the search space while maintaining exploration capabilities. Furthermore, adaptive mechanisms, such as dynamic parameter adjustment and diversification strategies, are incorporated to balance exploration and exploitation. By leveraging the collaborative nature of swarm intelligence and integrating synergistic cooperation, the SSOA method aims to achieve superior convergence speed and solution quality compared to other optimization algorithms. The effectiveness of the proposed SSOA is investigated by solving the 23 benchmark functions and various engineering design problems. The experimental results highlight the effectiveness and potential of the SSOA method in addressing challenging optimization problems, making it a promising tool for a wide range of applications in engineering and beyond. MATLAB codes of SSOA are available at: https://www.mathworks.com/matlabcentral/fileexchange/153466-synergistic-swarm-optimization-algorithm.
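The abstract does not give SSOA's update equations, so the sketch below is only a generic swarm loop with a shared-global-best "cooperation" term; the PSO-style coefficients are assumptions. It is applied to the sphere function, one of the classic 23 benchmark functions mentioned above.

```python
import random

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def swarm_minimize(f, dim=5, n_particles=20, iters=200, seed=0):
    """Minimal swarm minimizer with personal-best and global-best terms.
    A sketch only -- SSOA's actual cooperation and adaptation rules are
    not given in the abstract."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + individual memory + shared ("cooperative") best.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)
```

With these parameters the loop converges close to the sphere optimum, which is the kind of behavior the 23-function suite is used to measure.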
In multimodal multiobjective optimization problems (MMOPs), there are several Pareto optimal solutions corresponding to the identical objective vector. This paper proposes a new differential evolution algorithm to solve MMOPs with higher-dimensional decision variables. Due to the increase in the dimensions of decision variables in real-world MMOPs, it is difficult for current multimodal multiobjective optimization evolutionary algorithms (MMOEAs) to find multiple Pareto optimal solutions. The proposed algorithm adopts a dual-population framework and an improved environmental selection method. It utilizes a convergence archive to help the first population improve the quality of solutions. The improved environmental selection method enables the other population to search the remaining decision space and reserve more Pareto optimal solutions through the information of the first population. The combination of these two strategies helps to effectively balance and enhance convergence and diversity performance. In addition, to study the performance of the proposed algorithm, a novel set of multimodal multiobjective optimization test functions with extensible decision variables is designed. The proposed MMOEA is certified to be effective through comparison with six state-of-the-art MMOEAs on the test functions.
Purpose: Prominent at the intersections of national educational agencies, higher education, and international educational performance assessments are two reform standards: “benchmarks” determining optimal student performance, and “empirical evidence” for determining the quality of reform practices. These two notions are often taken as connecting policy and research to effective changes in many countries. The article examines the historical and cultural principles about educational change and its sciences embedded in these standards by examining the OECD's PISA and the McKinsey & Company reports that draw on PISA's data. Findings/Originality/Value: First, the reports express salvation themes associated with modernity; that is, the promise of a better future through governing the present. The promise is to provide nations with data and models to achieve social equality, economic prosperity, and a participatory democracy. Second, the promise of the future is not descriptive of some present reality but serves to fabricate universal characteristics about society and individuals. The numbers embody social and psychological categories about a desired unity of all students. Third, the “empirical evidence” of the international assessment entails a particular notion of science and “evidence”, one that paradoxically uses the universals in comparing and creating divisions.
There exists a gap between control theory and control practice: on the one hand, not all control methods suggested by researchers are implemented in real systems, and on the other hand, many important industrial problems are not studied in academic research. Benchmark problems can help close this gap and provide many opportunities for members of both the control theory and application communities. The goal is to survey and give pointers to different general control and modeling related benchmark problems that can serve as inspiration for future benchmarks, and then to focus the benchmark coverage specifically on automotive control engineering applications. The paper reflects on how different categories of benchmark designers, benchmark solvers, and third-party users can benefit from providing, solving, and studying benchmark problems. The paper also collects information about several benchmark problems and gives pointers to papers that provide more detailed information about the different problems that have been presented.
An enhancement of the wheel-rail contact model used in a nonlinear vehicle-structure interaction (VSI) methodology for railway applications is presented, in which the detection of the contact points between wheel and rail in the concave region of the tread-flange transition is implemented in a simplified way. After presenting the enhanced formulation, the model is validated with two numerical applications (namely, the Manchester Benchmarks and a hunting stability problem of a suspended wheelset) and one experimental test performed in a test rig at the Railway Technical Research Institute (RTRI) in Japan. Given its finite element (FE) nature, and contrary to most vehicle multibody dynamics commercial software, which cannot account for the infrastructure's flexibility, the proposed VSI model can be easily used in the study of train-bridge systems with any degree of complexity. The validation presented in this work proves the accuracy of the proposed model, making it a suitable tool for dealing with different railway dynamic applications, such as the study of bridge dynamics, train running safety under different scenarios (namely, earthquakes and crosswinds, among others), and passenger riding comfort.
In recent years, artificial intelligence technology has exhibited great potential in seismic signal recognition, setting off a new wave of research. Vast amounts of high-quality labeled data are required to develop and apply artificial intelligence in seismology research. In this study, based on the 2013–2020 seismic cataloging reports of the China Earthquake Networks Center, we constructed an artificial intelligence seismological training dataset (“DiTing”) with the largest known total time length. Data were recorded using broadband and short-period seismometers. The obtained dataset included 2,734,748 three-component waveform traces from 787,010 regional seismic events, the corresponding P- and S-phase arrival time labels, and 641,025 P-wave first-motion polarity labels. All waveforms were sampled at 50 Hz and cut to a time length of 180 s starting from a random number of seconds before the occurrence of an earthquake. Each three-component waveform contained a considerable amount of descriptive information, such as the epicentral distance, back azimuth, and signal-to-noise ratios. The magnitudes of seismic events, epicentral distance, signal-to-noise ratio of P-wave data, and signal-to-noise ratio of S-wave data ranged from 0 to 7.7, 0 to 330 km, –0.05 to 5.31 dB, and –0.05 to 4.73 dB, respectively. The dataset compiled in this study can serve as a high-quality benchmark for machine learning model development and data-driven seismological research on earthquake detection, seismic phase picking, first-motion polarity determination, earthquake magnitude prediction, early warning systems, and strong ground-motion prediction. Such research will further promote the development and application of artificial intelligence in seismology.
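The windowing described above (50 Hz sampling, 180 s traces starting a random number of seconds before the event) can be sketched as below; the upper bound on the random lead time is an assumption, since the abstract does not state it.

```python
import random

SAMPLE_RATE_HZ = 50   # stated sampling rate
WINDOW_S = 180        # stated trace length in seconds

def cut_window(trace, event_index, max_lead_s=30, seed=None):
    """Cut a 180 s window (at 50 Hz) starting a random 1..max_lead_s
    seconds before the event sample, as described for the DiTing traces.
    `max_lead_s` is an assumed bound; `trace` is a flat sample sequence."""
    rng = random.Random(seed)
    lead_s = rng.randint(1, max_lead_s)
    start = max(0, event_index - lead_s * SAMPLE_RATE_HZ)
    return trace[start:start + WINDOW_S * SAMPLE_RATE_HZ]
```

In the real dataset this cut would be applied per component of each three-component trace.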
The Bald Eagle Search algorithm (BES) is an emerging meta-heuristic algorithm. The algorithm simulates the hunting behavior of eagles and obtains an optimal solution through three stages, namely the selection stage, search stage, and swooping stage. However, BES tends to fall into local optima, and its coverage of the search space needs to be improved. To fill this research gap, we propose an improved bald eagle algorithm (CABES) that integrates Cauchy mutation and adaptive optimization to help BES escape local optima. Firstly, CABES introduces a Cauchy mutation strategy to adjust the step size of the selection stage, so as to select a better search range. Secondly, in the search stage, CABES updates the search position update formula with an adaptive weight factor to further promote the local optimization capability of BES. To verify the performance of CABES, the benchmark functions of CEC2017 are used to evaluate the algorithm. The findings of the tests are compared to those of the Particle Swarm Optimization algorithm (PSO), the Whale Optimization Algorithm (WOA), and the Archimedes Optimization Algorithm (AOA). The experimental results show that CABES provides good exploration and exploitation capabilities and is strongly competitive with the tested algorithms. Finally, CABES is applied to four constrained engineering problems and a groundwater engineering model, which further verifies the effectiveness and efficiency of CABES on practical engineering problems.
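A standard Cauchy mutation, the heavy-tailed perturbation CABES applies in the selection stage, can be sampled via the inverse CDF. The per-dimension additive form below is an assumption, as the paper's exact step-size formula is not quoted in the abstract.

```python
import math
import random

def cauchy_mutate(position, scale=1.0, rng=None):
    """Perturb each coordinate with scaled standard Cauchy noise.
    Heavy tails occasionally produce very large steps, which is what
    helps a search escape local optima."""
    rng = rng or random.Random()
    # Inverse-CDF sampling: tan(pi*(u - 0.5)) is standard-Cauchy distributed.
    return [x + scale * math.tan(math.pi * (rng.random() - 0.5))
            for x in position]
```

In a BES-style loop this would replace (or augment) the deterministic step of the selection stage for some fraction of candidates.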
Iron is commonly used as a structural and shielding material in nuclear devices. The accuracy of its nuclear data is critical for the design of nuclear devices. The evaluated data for the ^(56)Fe isotope in the latest version of the CENDL-3.2 library from China were significantly updated. These new data must be tested before they can be used. To test the reliability of these data and assess the shielding effect, a shielding benchmark experiment was conducted with natural Fe spherical samples using a pulsed deuterium–tritium neutron source at the China Institute of Atomic Energy (CIAE). The leakage neutron spectra from the natural spherical iron samples with different thicknesses (4.5, 7.5, and 12 cm) were measured between 0.8 and 16 MeV after interaction with 14 MeV neutrons using the time-of-flight method. The simulation results were obtained by Monte Carlo simulations employing the Fe data from the CENDL-3.2, ENDF/B-VIII.0, and JENDL-5.0 libraries. The measured and simulated leakage neutron spectra and penetration rates were compared, demonstrating that the CENDL-3.2 library performs sufficiently well overall. The simulation results of the other two libraries underestimated scattering at the continuum energy level.
Recently, deep learning has achieved remarkable results in fields that require human cognitive ability, learning ability, and reasoning ability. Activation functions are very important because they give artificial neural networks the ability to learn complex patterns through nonlinearity. Various activation functions are being studied to solve problems such as vanishing gradients and dying nodes that may occur in the deep learning process. However, it takes a lot of time and effort for researchers to adapt existing activation functions to their research. Therefore, in this paper, we propose a universal activation function (UA) so that researchers can easily create and apply various activation functions and improve the performance of neural networks. UA can generate new types of activation functions, as well as functions like traditional activation functions, by properly adjusting three hyperparameters. A well-known Convolutional Neural Network (CNN) and benchmark datasets were used to evaluate the experimental performance of the UA proposed in this study. We compared the performance of artificial neural networks using traditional activation functions with that of networks using the UA. In addition, we evaluated the performance of new activation functions generated by adjusting the hyperparameters of the UA. The experimental evaluation showed that the classification performance of CNNs improved by up to 5% through the UA, although most configurations showed performance similar to that of traditional activation functions.
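The paper's exact UA formula is not given in this abstract, so the following is only an illustrative three-hyperparameter activation family in the same spirit: a swish-style term whose sharpness (`beta`) sweeps it between nearly linear and ReLU-like, plus a linear leak (`gamma`).

```python
import math

def universal_activation(x, alpha=1.0, beta=1.0, gamma=0.0):
    """Illustrative three-hyperparameter activation family (a stand-in;
    not the paper's UA formula). alpha scales a swish-style term
    x*sigmoid(beta*x), beta controls its sharpness (large beta -> ReLU-like),
    and gamma adds a leaky linear component."""
    return alpha * x / (1.0 + math.exp(-beta * x)) + gamma * x
```

For example, `beta=1, gamma=0` gives the familiar swish shape, while a large `beta` makes the function behave almost exactly like ReLU, showing how one parameterized family can cover several traditional activations.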
With the development of artificial intelligence-related technologies such as deep learning, various organizations, including governments, are making efforts to generate and manage big data for use in artificial intelligence. However, it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage. There are many difficulties in introducing the technology in fields that lack the training data needed to apply deep learning. Therefore, this study proposes a mixed contour data augmentation technique, a data augmentation technique using contour images, to solve the problem caused by a lack of data. ResNet, a well-known convolutional neural network (CNN) architecture, and CIFAR-10, a benchmark dataset, are used for experimental performance evaluation to demonstrate the superiority of the proposed method. To show that a large performance improvement can be achieved even with a small training dataset, the ratio of the training dataset was varied over 70%, 50%, and 30% for comparative analysis. As a result of applying the mixed contour data augmentation technique, it was possible to achieve a classification accuracy improvement of up to 4.64%, and high accuracy even with a small dataset. The results on benchmark datasets suggest that the mixed contour data augmentation technique can be applied in various fields.
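As a rough sketch of the idea (the paper's exact contour extractor and mixing rule are not given in the abstract, so both are assumptions here), one can blend an image with a gradient-magnitude "contour" of itself:

```python
import numpy as np

def contour_image(img):
    """Crude contour map via finite-difference gradient magnitude,
    normalized to [0, 1] -- a stand-in for the paper's contour extractor."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    return g / g.max() if g.max() > 0 else g

def mixed_contour_augment(img, lam=0.5):
    """Blend a grayscale image with its contour image; `lam` controls
    how much contour is mixed in (the mixing rule is assumed)."""
    return (1.0 - lam) * img.astype(float) + lam * contour_image(img)
```

A training pipeline would apply this to copies of each image (with varying `lam`) to enlarge a small dataset.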
This article describes the transient models of the neutronics code VITAS that are used for solving time-dependent, pin-resolved neutron transport equations. VITAS uses the stiffness confinement method (SCM) for temporal discretization to transform the transient equation into the corresponding transient eigenvalue problem (TEVP). To solve the pin-resolved TEVP, VITAS uses a heterogeneous variational nodal method (VNM). The spatial flux is approximated at each Cartesian node using finite elements in the x-y plane and orthogonal polynomials along the z-axis. Angular discretization utilizes the even-parity integral approach at the nodes and spherical harmonic expansions at the interfaces. To further lower the computational cost, a predictor-corrector quasi-static SCM (PCQ-SCM) was developed. Within the VNM framework, computational models for the adjoint neutron flux and kinetic parameters are presented. The direct SCM and PCQ-SCM were implemented in VITAS and verified using the two-dimensional (2D) and three-dimensional (3D) exercises of the OECD/NEA C5G7-TD benchmark. In the 2D and 3D problems, the discrepancies between the direct-SCM solver's results and those reported by MPACT and PANDAS-MOC were under 0.97% and 1.57%, respectively. In addition, numerical studies comparing the PCQ-SCM solver to the direct-SCM solver demonstrated that the PCQ-SCM enabled substantially larger time steps, thereby reducing the computational cost 100-fold without compromising numerical accuracy.
Objective The study aimed to estimate the benchmark dose (BMD) of coke oven emissions (COEs) exposure based on mitochondrial damage, with the mitochondrial DNA copy number (mtDNAcn) as a biomarker. Methods A total of 782 subjects were recruited, including 238 controls and 544 exposed workers. The mtDNAcn of peripheral leukocytes was measured by real-time fluorescence-based quantitative polymerase chain reaction. Three BMD approaches were used to calculate the BMD of COEs exposure based on the mitochondrial damage and its 95% lower confidence limit (BMDL). Results The mtDNAcn of the exposure group was lower than that of the control group (0.60 ± 0.29 vs. 1.03 ± 0.31; P < 0.001). A dose-response relationship was found between mtDNAcn damage and COEs. Using the Benchmark Dose Software, the occupational exposure limit (OEL) for COEs exposure in males was 0.00190 mg/m^(3). The OELs for COEs exposure using the BBMD were 0.00170 mg/m^(3) for the total population, 0.00158 mg/m^(3) for males, and 0.00174 mg/m^(3) for females. Using the PROAST approach, the OELs for the total population, males, and females were 0.00184, 0.00178, and 0.00192 mg/m^(3), respectively. Conclusion Based on our conservative estimate, the BMDL of mitochondrial damage caused by COEs is 0.002 mg/m^(3). This value will provide a benchmark for determining possible OELs.
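The BMD concept behind these numbers, the exposure at which the modeled response departs from background by a fixed benchmark response (BMR), can be illustrated with a deliberately simple linear dose-response model; the study itself used BMDS, BBMD, and PROAST model fitting, not this form.

```python
def bmd_linear(background, slope, bmr=0.1):
    """Benchmark dose under a toy linear model y(dose) = background - slope*dose:
    the dose at which the response has fallen by a fraction `bmr` of background.
    Illustrates the BMD definition only; not the study's fitted model."""
    if slope <= 0:
        raise ValueError("slope must be positive for a decreasing response")
    return bmr * background / slope
```

Setting `background - slope*d = background*(1 - bmr)` and solving for `d` gives the returned expression; the BMDL then comes from the lower confidence bound on the fitted curve rather than the point estimate.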
Abstract: With the adoption of the Luanda Declaration at the end of the conference, it was fairly evident that African governments could no longer deny the causal links, or intersection, between the environment and health care for people across the continent.
Funding: Supported by the Canadian Cancer Society, No. 017054, and the Fonds de Recherche Santé Québec, No. 14003.
Abstract: AIM: To estimate and compare sex-specific screening polypectomy rates to quality benchmarks of 40% in men and 30% in women. METHODS: A prospective cohort study was undertaken of patients aged 50-75, scheduled for colonoscopy, and covered by the Québec universal health insurance plan. Endoscopist and patient questionnaires were used to obtain screening and non-screening colonoscopy indications. Patient self-report was used to obtain history of gastrointestinal conditions/symptoms and prior colonoscopy. Sex-specific polypectomy rates (PRs) and 95%CIs were calculated using Bayesian hierarchical logistic regression. RESULTS: In total, 45 endoscopists and 2134 (mean age = 61, 50% female) of their patients participated. According to patients, screening PRs in males and females were 32.4% (95%CI: 23.8-41.8) and 19.4% (95%CI: 13.1-25.4), respectively. According to endoscopists, screening PRs in males and females were 30.2% (95%CI: 27.0-41.9) and 16.6% (95%CI: 16.3-28.6), respectively. Sex-specific PRs did not meet quality benchmarks at all ages except for males aged 65-69 (patient screening indication) and males aged 70-74 (endoscopist screening indication). For all patients aged 50-54, none of the CIs included the quality benchmarks. CONCLUSION: Most sex-specific screening PRs in Québec were below quality benchmarks; PRs were especially low for all 50-54 year olds.
Abstract: Helicopter EMS (HEMS) allows patients to be quickly transported to regional cardiac centers, often to receive primary percutaneous coronary intervention (PCI). Since PCI is a time-critical therapy, it is important that patients get to primary PCI as quickly as possible. HEMS crews' "on-scene" times for trauma patients have been extensively studied, and recent years have seen many efforts to minimize the time required to prepare patients for transport. There has been less attention to interfacility transport "scene times" for HEMS crews at referring hospitals; this includes stabilization times for preparing cardiac patients for loading onto aircraft for HEMS transport to primary PCI. In the absence of guiding evidence, system benchmarking and quality improvement are difficult. Therefore the current study was undertaken to assess and describe the HEMS crew "on-scene" times, or "patient stabilization times" (PSTs), at referring hospitals for interfacility-transported cardiac patients flown for primary PCI. Descriptive analysis identified a PST median of 19 minutes (interquartile range 15-24), and univariate analyses using Kruskal-Wallis testing found no association between prolonged PST and sending unit type (Emergency Department versus other), off-hours transports, or relatively frequent (at least monthly) use of HEMS (p for all comparisons > 0.64). Outlier PSTs, defined a priori as those exceeding the median by at least a half-hour, were found in 12% of all cases. These data could be useful as a starting point for system planning and benchmarking efforts in regionalized systems of acute cardiac care.
Funding: The work was carried out at the Center for Artificial Intelligence (C4AI-USP) with support from the São Paulo Research Foundation (FAPESP grant #2019/07665-4) and from the IBM Corporation. This research was also partially supported by Itaú Unibanco S.A. M.M. José and F. Nakasato have been supported by the Itaú Scholarship Program (PBI) of the Data Science Center (C2D) of the Escola Politécnica da Universidade de São Paulo. We acknowledge support by CAPES, Finance Code 001. A.H.R. Costa and F.G. Cozman were partially supported by CNPq grants 310085/2020-9 and 305753/2022-3, respectively. Paulo Pirozelli was supported by FAPESP grant 2019/26762-0.
Abstract: Pirá is a reading comprehension dataset focused on the ocean, the Brazilian coast, and climate change, built from a collection of scientific abstracts and reports on these topics. This dataset represents a versatile language resource, particularly useful for testing the ability of current machine learning models to acquire expert scientific knowledge. Despite its potential, a detailed set of baselines has not yet been developed for Pirá. By creating these baselines, researchers can more easily utilize Pirá as a resource for testing machine learning models across a wide range of question answering tasks. In this paper, we define six benchmarks over the Pirá dataset, covering closed generative question answering, machine reading comprehension, information retrieval, open question answering, answer triggering, and multiple choice question answering. As part of this effort, we have also produced a curated version of the original dataset, where we fixed a number of grammar issues, repetitions, and other shortcomings. Furthermore, the dataset has been extended in several new directions so as to face the aforementioned benchmarks: translation of supporting texts from English into Portuguese, classification labels for answerability, automatic paraphrases of questions and answers, and multiple choice candidates. The results described in this paper provide several points of reference for researchers interested in exploring the challenges provided by the Pirá dataset.
Funding: Supported by the National Key R&D Program of China (No. 2019YFB1600702) and the National Natural Science Foundation of China (No. 51878059).
Abstract: Automatic modal identification via automatically interpreting the stabilization diagram provides a key technique in bridge structural health monitoring. This paper reviews progress in the area of automatic modal identification based on interpreting the stabilization diagram. The whole identification process is divided into four steps, from establishing the stabilization diagram to removing the outliers in the identification results. The criteria and algorithms used in each step in the existing studies are carefully summarized and classified. Comparisons between typical methods for cleaning and interpreting the stabilization diagram are also conducted. Real structure benchmarks used in the existing studies to validate the proposed automatic modal identification methods are also summarized. Based on the review and comparison, the specific ratio method for cleaning the stabilization diagram, the hierarchical clustering method for interpreting the stabilization diagram, and the adjusted boxplot for removing the outliers in the identification results are the most suitable methods for each step. The key point of automatic modal identification based on interpreting the stabilization diagram has also been discussed, and it is recommended to pay more attention to cleaning the stabilization diagram. Future study of automatic modal identification in situations with very few deployed sensors deserves more attention. This review aims to help researchers and practitioners implement existing automatic modal identification algorithms effectively and develop more suitable and practical methods for civil engineering structures in the future.
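To illustrate the clustering idea behind interpreting a stabilization diagram, the sketch below groups pole frequencies identified at many model orders into candidate physical modes. This is a crude one-pass stand-in for the hierarchical clustering the review discusses, not any specific reviewed algorithm; the tolerance and frequency values are placeholders.

```python
def cluster_poles(freqs, tol=0.05):
    """Group sorted pole frequencies whose relative spacing is within tol.
    A simplified stand-in for hierarchical clustering of stabilization-diagram
    poles: each cluster of nearby poles is summarized by its mean frequency."""
    clusters = []
    for f in sorted(freqs):
        if clusters and abs(f - clusters[-1][-1]) / clusters[-1][-1] <= tol:
            clusters[-1].append(f)   # close to the previous pole: same mode
        else:
            clusters.append([f])     # far away: start a new candidate mode
    return [sum(c) / len(c) for c in clusters]

# Poles identified at several model orders, scattered around two true modes:
modes = cluster_poles([1.50, 1.52, 1.49, 3.98, 4.02, 4.00])
print(len(modes))  # → 2
```

In a real pipeline, damping ratios and mode shapes would also enter the distance measure, and outlier clusters would then be pruned (e.g. with the adjusted boxplot mentioned above).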
Funding: Supported by King Saud University, Riyadh, Saudi Arabia, through the Researchers Supporting Program, Number (RSPD2023R704).
Abstract: This research paper presents a novel optimization method called the Synergistic Swarm Optimization Algorithm (SSOA). The SSOA combines the principles of swarm intelligence and synergistic cooperation to search for optimal solutions efficiently. A synergistic cooperation mechanism is employed, where particles exchange information and learn from each other to improve their search behaviors. This cooperation enhances the exploitation of promising regions in the search space while maintaining exploration capabilities. Furthermore, adaptive mechanisms, such as dynamic parameter adjustment and diversification strategies, are incorporated to balance exploration and exploitation. By leveraging the collaborative nature of swarm intelligence and integrating synergistic cooperation, the SSOA method aims to achieve superior convergence speed and solution quality compared to other optimization algorithms. The effectiveness of the proposed SSOA is investigated by solving the 23 benchmark functions and various engineering design problems. The experimental results highlight the effectiveness and potential of the SSOA method in addressing challenging optimization problems, making it a promising tool for a wide range of applications in engineering and beyond. MATLAB code for SSOA is available at: https://www.mathworks.com/matlabcentral/fileexchange/153466-synergistic-swarm-optimization-algorithm.
Funding: Supported in part by the National Natural Science Foundation of China (62106230, U23A20340, 62376253, 62176238), the China Postdoctoral Science Foundation (2023M743185), and the Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Open Foundation (BDIC-2023-A-007).
Abstract: In multimodal multiobjective optimization problems (MMOPs), there are several Pareto optimal solutions corresponding to the identical objective vector. This paper proposes a new differential evolution algorithm to solve MMOPs with higher-dimensional decision variables. Due to the increase in the dimensions of decision variables in real-world MMOPs, it is difficult for current multimodal multiobjective optimization evolutionary algorithms (MMOEAs) to find multiple Pareto optimal solutions. The proposed algorithm adopts a dual-population framework and an improved environmental selection method. It utilizes a convergence archive to help the first population improve the quality of solutions. The improved environmental selection method enables the other population to search the remaining decision space and reserve more Pareto optimal solutions through the information of the first population. The combination of these two strategies helps to effectively balance and enhance convergence and diversity performance. In addition, to study the performance of the proposed algorithm, a novel set of multimodal multiobjective optimization test functions with extensible decision variables is designed. The proposed MMOEA is certified to be effective through comparison with six state-of-the-art MMOEAs on the test functions.
Abstract: Purpose - Prominent at the intersections of national educational agencies, higher education, and international educational performance assessments are two reform standards: "benchmarks" determining optimal student performance, and "empirical evidence" for determining the quality of reform practices. These two notions are often taken as connecting policy and research to effective changes in many countries. The article examines the historical and cultural principles about educational change and its sciences embedded in these standards by examining the OECD's PISA and the McKinsey & Company reports that draw on PISA's data. Findings/Originality/Value - First, the reports express salvation themes associated with modernity; that is, the promise of a better future through governing the present. The promise is to provide nations with data and models to achieve social equality, economic prosperity, and a participatory democracy. Second, the promise of the future is not descriptive of some present reality but serves to fabricate universal characteristics about society and individuals. The numbers embody social and psychological categories about a desired unity of all students. Third, the "empirical evidence" of the international assessment entails a particular notion of science and "evidence", one that paradoxically uses the universals in comparing and creating divisions.
Abstract: There exists a gap between control theory and control practice: not all control methods suggested by researchers are implemented in real systems and, on the other hand, many important industrial problems are not studied in academic research. Benchmark problems can help close this gap and provide many opportunities for members of both the control theory and application communities. The goal is to survey and give pointers to different general control and modeling related benchmark problems that can serve as inspiration for future benchmarks, and then to focus the benchmark coverage specifically on automotive control engineering applications. In the paper, reflections are given on how different categories of benchmark designers, benchmark solvers, and third-party users can benefit from providing, solving, and studying benchmark problems. The paper also collects information about several benchmark problems and gives pointers to papers that give more detailed information about different problems that have been presented.
Funding: Base Funding UIDB/04708/2020 and Programmatic Funding UIDP/04708/2020 of CONSTRUCT - Instituto de I&D em Estruturas e Construções, funded by national funds through the FCT/MCTES (PIDDAC). Grant No. 2020.00305.CEECIND from the Stimulus of Scientific Employment, Individual Support (CEECIND), 3rd Edition, provided by FCT - Fundação para a Ciência e a Tecnologia.
Abstract: An enhancement in the wheel-rail contact model used in a nonlinear vehicle-structure interaction (VSI) methodology for railway applications is presented, in which the detection of the contact points between wheel and rail in the concave region of the tread-flange transition is implemented in a simplified way. After presenting the enhanced formulation, the model is validated with two numerical applications (namely, the Manchester Benchmarks and a hunting stability problem of a suspended wheelset) and one experimental test performed on a test rig from the Railway Technical Research Institute (RTRI) in Japan. Given its finite element (FE) nature, and contrary to most vehicle multibody dynamics commercial software that cannot account for the infrastructure's flexibility, the proposed VSI model can be easily used in the study of train-bridge systems with any degree of complexity. The validation presented in this work proves the accuracy of the proposed model, making it a suitable tool for dealing with different railway dynamic applications, such as the study of bridge dynamics, train running safety under different scenarios (namely, earthquakes and crosswinds, among others), and passenger riding comfort.
Funding: Supported by the National Natural Science Foundation of China (Nos. 41804047 and 42111540260), the Fundamental Research Funds of the Institute of Geophysics, China Earthquake Administration (No. DQJB19A0114), and the Key Research Program of the Institute of Geology and Geophysics, Chinese Academy of Sciences (No. IGGCAS-201904).
Abstract: In recent years, artificial intelligence technology has exhibited great potential in seismic signal recognition, setting off a new wave of research. Vast amounts of high-quality labeled data are required to develop and apply artificial intelligence in seismology research. In this study, based on the 2013-2020 seismic cataloging reports of the China Earthquake Networks Center, we constructed an artificial intelligence seismological training dataset ("DiTing") with the largest known total time length. Data were recorded using broadband and short-period seismometers. The obtained dataset included 2,734,748 three-component waveform traces from 787,010 regional seismic events, the corresponding P- and S-phase arrival time labels, and 641,025 P-wave first-motion polarity labels. All waveforms were sampled at 50 Hz and cut to a time length of 180 s starting from a random number of seconds before the occurrence of an earthquake. Each three-component waveform contained a considerable amount of descriptive information, such as the epicentral distance, back azimuth, and signal-to-noise ratios. The magnitudes of seismic events, epicentral distance, signal-to-noise ratio of P-wave data, and signal-to-noise ratio of S-wave data ranged from 0 to 7.7, 0 to 330 km, -0.05 to 5.31 dB, and -0.05 to 4.73 dB, respectively. The dataset compiled in this study can serve as a high-quality benchmark for machine learning model development and data-driven seismological research on earthquake detection, seismic phase picking, first-motion polarity determination, earthquake magnitude prediction, early warning systems, and strong ground-motion prediction. Such research will further promote the development and application of artificial intelligence in seismology.
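The windowing described above (50 Hz sampling, 180 s traces cut starting a random number of seconds before the event) can be sketched as follows. The synthetic continuous record and the slicing function are illustrative only, not the DiTing preparation pipeline itself.

```python
import numpy as np

FS = 50          # sampling rate in Hz, as stated in the abstract
WINDOW_S = 180   # trace length in seconds, as stated in the abstract

def cut_trace(waveform: np.ndarray, event_idx: int, pre_s: int) -> np.ndarray:
    """Cut a 180 s window starting pre_s seconds before the event sample."""
    start = event_idx - pre_s * FS
    return waveform[start:start + WINDOW_S * FS]

# One hour of synthetic single-channel data with an "event" at 2400 s:
rng = np.random.default_rng(0)
continuous = rng.standard_normal(3600 * FS)
trace = cut_trace(continuous, event_idx=2400 * FS, pre_s=int(rng.integers(5, 30)))
print(trace.shape)  # → (9000,) i.e. 180 s at 50 Hz
```

The randomized pre-event offset prevents a model from learning that the phase arrival always sits at a fixed sample position.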
Funding: Supported by the Project of Key Science and Technology of Henan Province (No. 202102310259) and the Henan Province University Scientific and Technological Innovation Team (No. 18IRTSTHN009).
Abstract: The Bald Eagle Search algorithm (BES) is an emerging meta-heuristic algorithm. The algorithm simulates the hunting behavior of eagles and obtains an optimal solution through three stages, namely the selection stage, the search stage, and the swooping stage. However, BES tends to fall into local optima, and its coverage of the search space needs to be improved. To fill this research gap, we propose an improved bald eagle algorithm (CABES) that integrates Cauchy mutation and adaptive optimization to help BES escape local optima. Firstly, CABES introduces the Cauchy mutation strategy to adjust the step size of the selection stage, so as to select a better search range. Secondly, in the search stage, CABES updates the search position update formula with an adaptive weight factor to further promote the local optimization capability of BES. To verify the performance of CABES, the benchmark functions of CEC2017 are used to evaluate the algorithm. The findings of the tests are compared to those of the Particle Swarm Optimization algorithm (PSO), the Whale Optimization Algorithm (WOA), and the Archimedes Optimization Algorithm (AOA). The experimental results show that CABES provides good exploration and exploitation capabilities and is strongly competitive among the tested algorithms. Finally, CABES is applied to four constrained engineering problems and a groundwater engineering model, which further verifies the effectiveness and efficiency of CABES on practical engineering problems.
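The Cauchy mutation idea (perturbing a step size with a heavy-tailed draw so that occasional large jumps let the search escape local optima) can be sketched as below. This is an illustrative stand-in; the paper's exact CABES update formula is not reproduced here.

```python
import math
import random

def cauchy_sample(x0: float = 0.0, gamma: float = 1.0) -> float:
    """Draw from a Cauchy(x0, gamma) distribution via inverse-CDF sampling."""
    return x0 + gamma * math.tan(math.pi * (random.random() - 0.5))

def mutate_step(step: float, gamma: float = 0.5) -> float:
    """Heavy-tailed perturbation of a search step size (illustrative, not the
    exact CABES update). Most draws barely change the step, but the Cauchy
    distribution's fat tails occasionally produce large jumps."""
    return step * (1.0 + gamma * cauchy_sample())
```

Compared with a Gaussian perturbation, the Cauchy draw has no finite variance, which is precisely what makes the rare long-range moves possible.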
Funding: Supported by the National Natural Science Foundation of China (No. 11775311).
Abstract: Iron is commonly used as a structural and shielding material in nuclear devices. The accuracy of its nuclear data is critical for the design of nuclear devices. The evaluated data for the ^(56)Fe isotope in the latest version of the CENDL-3.2 library from China were significantly updated. These new data must be tested before they can be used. To test the reliability of these data and assess the shielding effect, a shielding benchmark experiment was conducted with natural Fe spherical samples using a pulsed deuterium-tritium neutron source at the China Institute of Atomic Energy (CIAE). The leakage neutron spectra from the natural spherical iron samples with different thicknesses (4.5, 7.5, and 12 cm) were measured between 0.8 and 16 MeV after interaction with 14 MeV neutrons, using the time-of-flight method. The simulation results were obtained by Monte Carlo simulations employing the Fe data from the CENDL-3.2, ENDF/B-VIII.0, and JENDL-5.0 libraries. The measured and simulated leakage neutron spectra and penetration rates were compared, demonstrating that the CENDL-3.2 library performs sufficiently well overall. The simulation results of the other two libraries were underestimated for scattering at the continuum energy level.
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1F1A1062953).
Abstract: Recently, deep learning has achieved remarkable results in fields that require human cognitive ability, learning ability, and reasoning ability. Activation functions are very important because they give artificial neural networks the ability to learn complex patterns through nonlinearity. Various activation functions are being studied to solve problems such as vanishing gradients and dying nodes that may occur in the deep learning process. However, it takes a lot of time and effort for researchers to adapt existing activation functions to their research. Therefore, in this paper, we propose a universal activation function (UA) so that researchers can easily create and apply various activation functions and improve the performance of neural networks. The UA can generate new types of activation functions, as well as functions like traditional activation functions, by properly adjusting three hyperparameters. The well-known Convolutional Neural Network (CNN) and a benchmark dataset were used to evaluate the experimental performance of the UA proposed in this study. We compared the performance of an artificial neural network with a traditional activation function against an artificial neural network with the UA. In addition, we evaluated the performance of new activation functions generated by adjusting the hyperparameters of the UA. The experimental results showed that the classification performance of CNNs improved by up to 5% through the UA, although most configurations showed performance similar to the traditional activation functions.
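The abstract does not give the UA's closed form, so the three-hyperparameter family below is purely hypothetical. It only illustrates the general idea of one parameterized function that reduces to familiar activations for particular settings of its parameters (here, an identity-like form and a Swish/SiLU-like form).

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def ua(x: float, a: float, b: float, c: float) -> float:
    """Hypothetical three-hyperparameter activation family (illustrative only,
    NOT the paper's actual UA): a*x + b*x*sigmoid(c*x).
    a=1, b=0 recovers the identity; a=0, b=1, c=1 gives SiLU/Swish-like behavior."""
    return a * x + b * x * sigmoid(c * x)

print(ua(2.0, 1.0, 0.0, 1.0))  # → 2.0 (identity setting)
```

The practical point such a family illustrates is that hyperparameter search over (a, b, c) can explore many activation shapes without hand-coding each one.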
Abstract: With the development of artificial intelligence-related technologies such as deep learning, various organizations, including governments, are making efforts to generate and manage big data for use in artificial intelligence. However, it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage. Many problems arise when introducing the technology in fields that lack the training data necessary to apply deep learning. Therefore, this study proposes a mixed contour data augmentation technique, a data augmentation technique using contour images, to solve the problem caused by a lack of data. ResNet, a well-known convolutional neural network (CNN) architecture, and CIFAR-10, a benchmark dataset, are used for experimental performance evaluation to demonstrate the superiority of the proposed method. To show that a high performance improvement can be achieved even with a small training dataset, the training dataset ratio was varied over 70%, 50%, and 30% for comparative analysis. As a result of applying the mixed contour data augmentation technique, it was possible to achieve a classification accuracy improvement of up to 4.64% and high accuracy even with a small dataset. In addition, having demonstrated its excellence on benchmark datasets, the mixed contour data augmentation technique is expected to be applicable in various fields.
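A minimal NumPy sketch of the general idea of blending an image with its contour (edge) map follows. The finite-difference edge detector and the mixing weight are illustrative choices, not the paper's exact mixed contour procedure.

```python
import numpy as np

def contour_map(img: np.ndarray) -> np.ndarray:
    """Crude contour image via finite-difference gradient magnitude,
    normalized to [0, 1] (a stand-in for a proper edge detector)."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    edges = gx + gy
    return edges / (edges.max() + 1e-8)

def mixed_contour(img: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend the original image with its contour map (illustrative sketch
    of contour-based augmentation, not the paper's exact method)."""
    return (1 - alpha) * img + alpha * contour_map(img)

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)  # synthetic gradient image
aug = mixed_contour(img, alpha=0.3)
print(aug.shape)  # → (64, 64)
```

Each augmented sample keeps the original's label, so one labeled image yields several training views and partially offsets a small dataset.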
Funding: Supported by the National Natural Science Foundation of China (Nos. 12175138 and U20B2011) and the Young Talent Project of the China National Nuclear Corporation.
Abstract: This article describes the transient models of the neutronics code VITAS that are used for solving time-dependent, pin-resolved neutron transport equations. VITAS uses the stiffness confinement method (SCM) for temporal discretization to transform the transient equation into the corresponding transient eigenvalue problem (TEVP). To solve the pin-resolved TEVP, VITAS uses a heterogeneous variational nodal method (VNM). The spatial flux is approximated at each Cartesian node using finite elements in the x-y plane and orthogonal polynomials along the z-axis. Angular discretization utilizes the even-parity integral approach at the nodes and spherical harmonic expansions at the interfaces. To further lower the computational cost, a predictor-corrector quasi-static SCM (PCQ-SCM) was developed. Within the VNM framework, computational models for the adjoint neutron flux and kinetic parameters are presented. The direct-SCM and PCQ-SCM were implemented in VITAS and verified using the two-dimensional (2D) and three-dimensional (3D) exercises of the OECD/NEA C5G7-TD benchmark. In the 2D and 3D problems, the discrepancy between the direct-SCM solver's results and those reported by MPACT and PANDAS-MOC was under 0.97% and 1.57%, respectively. In addition, numerical studies comparing the PCQ-SCM solver to the direct-SCM solver demonstrated that the PCQ-SCM enabled substantially larger time steps, thereby reducing the computational cost 100-fold without compromising numerical accuracy.
Funding: Supported by the National Natural Science Foundation of China [grant numbers NSFC 81872597 and 81001239].
Abstract: Objective The study aimed to estimate the benchmark dose (BMD) of coke oven emissions (COEs) exposure based on mitochondrial damage, with the mitochondrial DNA copy number (mtDNAcn) as a biomarker. Methods A total of 782 subjects were recruited, including 238 controls and 544 exposed workers. The mtDNAcn of peripheral leukocytes was detected through real-time fluorescence-based quantitative polymerase chain reaction. Three BMD approaches were used to calculate the BMD of COEs exposure based on mitochondrial damage and its 95% confidence lower limit (BMDL). Results The mtDNAcn of the exposure group was lower than that of the control group (0.60±0.29 vs. 1.03±0.31; P<0.001). A dose-response relationship was shown between mtDNAcn damage and COEs. Using the Benchmark Dose Software, the occupational exposure limit (OEL) for COEs exposure in males was 0.00190 mg/m^(3). The OELs for COEs exposure using the BBMD were 0.00170 mg/m^(3) for the total population, 0.00158 mg/m^(3) for males, and 0.00174 mg/m^(3) for females. In PROAST (possible risk obtained from animal studies), the OELs for the total population, males, and females were 0.00184, 0.00178, and 0.00192 mg/m^(3), respectively. Conclusion Based on our conservative estimate, the BMDL of mitochondrial damage caused by COEs is 0.002 mg/m^(3). This value will provide a benchmark for determining possible OELs.