Direction-of-arrival (DoA) estimation is an active research area in signal processing. To address DoA estimation without prior knowledge of the number of signal sources and multipath components in millimeter-wave systems, a multi-task deep residual shrinkage network (MTDRSN) and a transfer-learning-based convolutional neural network (TCNN), together named MDTCNet, are proposed. The sample covariance matrix of the received signal is used as the input to the proposed network. A DRSN-based multi-task classification model is first introduced to estimate the numbers of signal sources and multipath components simultaneously. The DoAs of multiple signals and paths are then estimated by a regression model: the proposed CNN performs DoA estimation given the predicted numbers of sources and paths. Furthermore, model-based transfer learning is introduced into the regression model, so the TCNN inherits part of the network parameters of the optimized model already obtained by the CNN. A series of experiments shows that the MDTCNet-based DoA estimation method accurately predicts the numbers of signal sources and multipath components over a range of signal-to-noise ratios. Remarkably, the proposed method achieves a lower root mean square error than several existing deep learning-based and traditional methods.
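The sample covariance matrix used as the network input above can be formed directly from array snapshots. A minimal NumPy sketch follows; the array geometry, angle, and snapshot counts are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance matrix R = (1/T) * X X^H for M x T snapshots X."""
    M, T = X.shape
    return (X @ X.conj().T) / T

# Illustrative scenario: 8-element half-wavelength ULA, 200 snapshots,
# one far-field source arriving from an assumed 20-degree angle.
rng = np.random.default_rng(0)
M, T = 8, 200
theta = np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))  # steering vector
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)  # source signal
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = np.outer(a, s) + noise

R = sample_covariance(X)
print(R.shape)  # (M, M) Hermitian matrix fed to the network
```

In practice R (or its real/imaginary parts stacked as channels) is what a covariance-input network consumes.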
Although hydrofluoric acid (HF) surface treatment is known to enhance the joining of metals with polymers, there is limited information on its effect on the joining of AZ31 alloy and carbon-fiber-reinforced plastics (CFRPs) through laser-assisted metal and plastic direct joining (LAMP). This study uses the LAMP technique to produce AZ31-CFRP joints. The joining process involves as-received AZ31, HF-pretreated AZ31, and thermally oxidized HF-pretreated AZ31 alloy sheets. Furthermore, the bonding strength of joints prepared with thermally oxidized AZ31 alloy sheets is examined to ascertain the combined effect of HF treatment and thermal oxidation on bonding strength. The microstructures, surface chemical interactions, and mechanical performance of the joints are investigated under tensile shear loading. Various factors, such as bubble formation, CFRP resin decomposition, and mechanical interlocking, considerably affect joint strength. Additionally, surface chemical interactions between the active species on the metal parts and the polar amide and carbonyl groups of the polymer play a significant role in improving joint strength. Joints prepared with surface-pretreated AZ31 alloy sheets show significant improvements in bonding strength.
Purpose – Material selection, driven by wide and often conflicting objectives, is an important and sometimes difficult problem in materials engineering. In this context, multi-criteria decision-making (MCDM) methodologies are effective, since an MCDM approach can cater to the criteria of material selection simultaneously. More firms are now concerned with increasing their productivity using mathematical tools. To fill a gap in the previous literature, this research recommends an integrated MCDM and bi-objective mathematical model for material selection. In addition, by using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), the inherent ambiguities of decision-makers in paired evaluations are considered, and a bi-objective mathematical model is constructed for determining the best item to purchase.
Design/methodology/approach – The entropy perspective is implemented to evaluate the weight parameters, while the TOPSIS technique is used to determine the best and worst intermediate pipe materials for an automotive exhaust system. The intermediate pipes join the components of the exhaust system. The materials usually used to manufacture intermediate pipes are SUS 436LM, SUS 430, SUS 304, SUS 436L, SUH 409L, SUS 441L, and SUS 439L. These materials are evaluated against five criteria: (1) tensile strength (TS), (2) yield strength (YS), (3) hardness (H), (4) elongation (E), and (5) cost (C); the first two were chosen because they strongly influence the behavior, performance, and cost of the intermediate pipes. The criteria weights are calculated objectively through the entropy method to obtain an unbiased assessment: this essentially measures the quantity of information each criterion contributes, indicating the criteria's relative importance. Subsequently, the materials are ranked with TOPSIS by measuring each material's distance from an ideal solution to determine the best alternative. This study aims to fill the information gap in selecting the most suitable material for exhaust intermediate pipes.
Findings – The decision matrix presented in Table 3 was normalized through Equation 5, as shown in Table 5, and multiplied by the criteria weights β_j; the resulting weighted normalized matrix V_ij is presented in Table 6. The ideal best and worst values were ascertained by employing Equation 7. The selection involves four basic stages, i.e., translation of criteria, screening, ranking, and search; the ranking was done through the TOPSIS method and the criteria weights were obtained by the entropy method. The results show that SUS 309, SUS 432L, and SUS 436LM are the top three materials the optimal intermediate pipe design should consider. For future work, it is suggested to include more alternatives and criteria. Comparisons can also be made using other MCDM techniques such as ELimination and Choice Expressing Reality (ELECTRE), Decision-Making Trial and Evaluation Laboratory (DEMATEL), and Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE).
Originality/value – The results provide important conclusions for material selection in this targeted application, verifying the use of the combined entropy-TOPSIS methodology for difficult engineering decisions in materials engineering that demand superior capacity and performance as well as cost-efficiency across engineering designs.
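The entropy-weighting and TOPSIS steps described above can be sketched as follows. The decision matrix values below are made-up placeholders (not the paper's Table 3), and the benefit/cost split simply treats the mechanical criteria as benefits and cost as a cost:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from the Shannon entropy of the
    column-normalized decision matrix (lower entropy -> higher weight)."""
    P = X / X.sum(axis=0)
    m = X.shape[0]
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    d = 1.0 - E                                # degree of divergence
    return d / d.sum()

def topsis(X, w, benefit):
    """Closeness of each alternative to the ideal solution.
    benefit[j] is True for criteria to maximize, False to minimize."""
    V = w * X / np.sqrt((X ** 2).sum(axis=0))  # weighted vector normalization
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)        # higher = closer to ideal

# Placeholder data: 4 alternatives x 5 criteria (TS, YS, H, E, C).
X = np.array([[520., 310., 160., 28., 4.1],
              [480., 290., 150., 30., 3.5],
              [600., 350., 170., 25., 5.2],
              [450., 270., 140., 33., 3.0]])
w = entropy_weights(X)
scores = topsis(X, w, benefit=np.array([True, True, True, True, False]))
print(np.argsort(-scores))                    # best-to-worst ranking
```

The closed-form weights make the ranking reproducible without any subjective pairwise judgments.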
By pushing computation, caching, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth-generation (5G) and future sixth-generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, a single MEC paradigm cannot effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, using a variety of collaborative mechanisms to provide the best possible computation services for UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in AGC-MEC and their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed placement strategy. Finally, we highlight several potential research directions for AGC-MEC.
The volume of research is increasing at a rapid rate, producing massive text corpora. Because of these enormous corpora, we are drowning in data while starving for information. Recent research has therefore employed various text mining approaches to extract information from such corpora. These approaches extract meaningful and precise phrases that effectively describe the text's information; the extracted phrases are commonly termed keyphrases. Keyphrases are used to determine trends across different fields of study, and they can also reveal spatiotemporal trends in various research fields. The progress of a research field can be better revealed through spatiotemporal bibliographic trend analysis, so an effective spatiotemporal trend extraction mechanism is required to disclose, for example, textile research trends of particular regions during a specific period. This study collected a diversified dataset of textile research from 2011-2019 across different countries to determine research trends; the data was collected from various open-access journals. The study then determined spatiotemporal trends using quality phrase mining. It also examined the research collaborations of different countries in a particular research subject, as collaborations among countries' researchers show an impact on the import and export of those countries. A visualization approach is incorporated to communicate the results more clearly.
Mental workload plays a vital role in cognitive impairment. Such impairment refers to a person's difficulty in remembering, receiving new information, learning new things, concentrating, or making decisions, in ways that seriously affect everyday life. In this paper, an EEG workload analysis based on the simultaneous capacity (SIMKAP) experiment is presented, using 45 subjects for multitasking mental workload estimation together with subject-wise attention loss calculation and short-term memory loss measurement. Using an open-access preprocessed EEG dataset, the discrete wavelet transform (DWT) was utilized for feature extraction, and the minimum redundancy maximum relevance (mRMR) technique was used to select the most relevant features. Wavelet decomposition was used to decompose the EEG signals into five sub-bands, and fourteen statistical features were calculated from each sub-band signal to form a 5 × 14 feature window. The Neural Network (Narrow) classification algorithm was used to classify the dataset into low and high workload conditions, and comparisons were made with other machine learning models. The results show a classifier accuracy of 86.7%, precision of 84.4%, F1 score of 86.33%, and recall of 88.37%, surpassing state-of-the-art methodologies in the literature. This prediction is expected to greatly facilitate improved assessment of memory and attention loss impairments.
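The five-sub-band decomposition and per-band statistics described above can be illustrated with a hand-rolled Haar wavelet (a simple stand-in for the paper's unspecified mother wavelet; the signal and the five sample statistics are placeholders, not the actual EEG data or the full fourteen-feature set):

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) - len(x) % 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def five_subbands(signal, levels=4):
    """A 4-level decomposition yields five sub-bands: [A4, D4, D3, D2, D1]."""
    details = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    return [a] + details[::-1]

def band_features(band):
    """A few per-band statistical features (the paper uses fourteen)."""
    return [band.mean(), band.std(), np.abs(band).max(),
            np.median(band), (band ** 2).sum()]

rng = np.random.default_rng(1)
eeg = rng.standard_normal(512)          # placeholder for one EEG channel epoch
bands = five_subbands(eeg)
features = np.array([band_features(b) for b in bands])
print(features.shape)                   # (bands, features) window per epoch
```

Because the Haar transform is orthonormal, the five sub-bands together conserve the signal's energy, which makes the per-band energy feature directly interpretable.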
Quality of Maternal, Neonatal and Child Health (MNCH) care is an important aspect of ensuring healthy outcomes and the survival of mothers and children. To maintain the quality of health services provided, organizations and other stakeholders in maternal and child health recommend regular quality measurement. Quality indicators are the key components of the quality measurement process. However, the literature shows neither an indicator selection process nor a set of quality indicators that is universally accepted. This lack results in the establishment of a new indicator selection process and a new set of quality indicators whenever the need for quality measurement arises, adding extra steps that encumber the quality measurement process. This study therefore aims to establish a set of quality indicators from the broad set recommended by the World Health Organization (WHO). The study deployed a machine learning technique, specifically a random forest classifier, to select important indicators for quality measurement. Twenty-nine indicators were identified as important features, and among those, eight indicators, namely maternal mortality ratio, stillbirth rate, delivery at a health facility, deliveries assisted by skilled attendants, proportion of breech deliveries, normal delivery rate, born-before-arrival rate, and antenatal care visit coverage, were identified as the most important for quality measurement.
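Indicator selection by random-forest importance, as described above, can be sketched like this. The synthetic data stands in for the WHO indicator dataset, and the dependence of the outcome on two columns is an assumption made purely so the example has a known answer:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n, p = 400, 12                               # placeholder: 12 candidate indicators
X = rng.standard_normal((n, p))
# Make the (synthetic) quality outcome depend on indicators 0 and 3 only.
y = (X[:, 0] + 2.0 * X[:, 3] + 0.3 * rng.standard_normal(n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = clf.feature_importances_        # impurity-based, sums to 1
top = np.argsort(-importance)
print(top[:4])                               # informative indicators dominate
```

Ranking `feature_importances_` and keeping the top-k columns is the standard way to reduce a broad indicator set to the most informative subset.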
Antennas are an indispensable element of wireless networks. For long-distance wireless communication, antenna gain needs to be very high (highly directive) because the signal loses a great deal of strength as it travels over long distances; this is especially true for military missile, radar, and satellite systems. Antenna arrays are commonly employed to focus electromagnetic waves in a certain direction, which cannot be achieved well with a single-element antenna. The goal of this study is to design a high-gain rectangular microstrip 2 × 1 array antenna using ADS Momentum. The design uses RT-Duroid 5880 as the substrate, with a dielectric constant of 2.2, a substrate height of 1.588 mm, and a loss tangent of 0.001; RT-Duroid is a good choice of dielectric material for achieving efficient gain and return loss characteristics in the proposed array. The designed array consists of two rectangular patches with a resonance frequency of 3.3 GHz, excited by microstrip feed lines of 13 mm length and 4.8 mm width. These transmission lines match the impedance of the patches, which helps obtain better antenna characteristics. At the 3.3 GHz resonance frequency, the proposed array has a directivity of 10.50 dB and a maximum gain of 9.90 dB in the S-band. The S-parameters, 3D radiation pattern, directivity, gain, and efficiency of the constructed array are all obtained in ADS Momentum.
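Given the substrate parameters above, a first-cut estimate of the patch dimensions can be computed from the standard transmission-line model of a rectangular microstrip patch. This is a textbook sizing sketch, not the dimensions optimized in ADS Momentum:

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Rectangular microstrip patch width W and length L from the
    standard transmission-line model (all lengths in meters)."""
    W = C / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    # Effective permittivity accounts for fringing fields in air.
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
    # Length extension due to fringing at each radiating edge.
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    L = C / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL
    return W, L

# RT-Duroid 5880: eps_r = 2.2, h = 1.588 mm; resonance at 3.3 GHz.
W, L = patch_dimensions(3.3e9, 2.2, 1.588e-3)
print(f"W = {W * 1e3:.1f} mm, L = {L * 1e3:.1f} mm")
```

A full-wave solver such as ADS Momentum would then refine these starting dimensions for the feed-line loading and mutual coupling between the two patches.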
At present, water pollution has become an important factor affecting and restricting national and regional economic development. Total phosphorus is one of the main sources of water pollution and eutrophication, so predicting total phosphorus in water quality has good research significance. This paper selects total phosphorus and turbidity data for analysis by crawling data from a water quality monitoring platform. By constructing an attribute-object mapping relationship, the correlation between the two indicators was analyzed and used to predict future data. Firstly, the monthly and daily mean concentrations of total phosphorus and turbidity were calculated after cleaning outliers, and the correlation between them was analyzed. Secondly, the correlation coefficients at different times and frequencies were used to predict the values for the next five days, and the data trend was visualized with Python. Finally, the real values were compared with the predicted values, and the results showed that the correlation between total phosphorus and turbidity is useful in predicting water quality.
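The correlation-then-predict idea above can be sketched in a few lines of NumPy. The daily mean values below are invented placeholders (not the platform's data), and the linear fit is one simple way to turn a strong correlation into a forecast:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x, y)[0, 1]

# Placeholder daily means: turbidity (NTU) and total phosphorus (mg/L).
turbidity = np.array([12.1, 15.3, 9.8, 20.4, 18.7, 11.2, 14.9])
phosphorus = np.array([0.081, 0.102, 0.066, 0.139, 0.125, 0.074, 0.098])

r = pearson_r(turbidity, phosphorus)
# Simple correlation-based predictor: linear fit of TP on turbidity.
slope, intercept = np.polyfit(turbidity, phosphorus, 1)
next_turbidity = 16.0                      # assumed forecast turbidity value
predicted_tp = slope * next_turbidity + intercept
print(f"r = {r:.3f}, predicted TP = {predicted_tp:.3f} mg/L")
```

A high |r| justifies using one indicator to anticipate the other; with weak correlation, the fitted line would carry little predictive value.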
Smart manufacturing is a process that optimizes factory performance and production quality by utilizing various technologies, including the Internet of Things (IoT) and artificial intelligence (AI). Quality control is an important part of today's smart manufacturing process, effectively reducing costs and enhancing operational efficiency. As industrial technology becomes more advanced, identifying and classifying defects has become an essential element of ensuring product quality during manufacturing. In this study, we introduce a CNN model for classifying defects on hot-rolled steel strip surfaces using hybrid deep learning techniques, incorporating a global average pooling (GAP) layer and a machine learning-based SVM classifier, with the aim of enhancing accuracy. Features are first extracted by the VGG19 convolutional blocks; after processing through the GAP layer, the extracted features are fed to the SVM classifier for classification. For this purpose, we collected images from publicly available datasets, including the Xsteel surface defect dataset (XSDD) and the NEU surface defect (NEU-CLS) dataset, and employed offline data augmentation techniques to balance and enlarge the datasets. Experiments show that the proposed methodology achieves the highest metric scores, with 99.79% accuracy, 99.80% precision, 99.79% recall, and a 99.79% F1-score on the NEU-CLS dataset, and 99.64% accuracy, 99.65% precision, 99.63% recall, and a 99.64% F1-score on the XSDD dataset. A comparison with the most recent studies shows that the proposed methodology achieves superior results.
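The GAP-then-SVM stage of the hybrid pipeline above can be sketched as follows. Random arrays with a small class shift stand in for real VGG19 feature maps, and the two-class setup is a simplification of the multi-class defect problem:

```python
import numpy as np
from sklearn.svm import SVC

def global_average_pooling(feature_maps):
    """GAP: average each channel's H x W map to a single value.
    feature_maps shape (N, H, W, C) -> features shape (N, C)."""
    return feature_maps.mean(axis=(1, 2))

# Placeholder for VGG19 block outputs: 60 images, 7x7 maps, 512 channels.
rng = np.random.default_rng(7)
labels = rng.integers(0, 2, size=60)       # two defect classes for brevity
maps = rng.standard_normal((60, 7, 7, 512))
maps[labels == 1] += 0.2                   # make the classes separable

feats = global_average_pooling(maps)       # (60, 512) feature vectors
svm = SVC(kernel="rbf").fit(feats[:40], labels[:40])
print(svm.score(feats[40:], labels[40:]))  # held-out accuracy
```

Replacing a dense softmax head with GAP plus an SVM keeps the classifier small and often improves generalization on modest defect datasets.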
Image steganography is one of the prominent technologies in data hiding standards. Steganographic system performance mostly depends on the embedding strategy, whose goal is to embed strictly confidential information into images without causing perceptible changes to the original image. Randomization strategies in data embedding techniques may utilize random domains, pixels, or regions of interest for concealing secrets in a cover image, preventing the information from being discovered by an attacker. An appropriate embedding technique can achieve a fair balance between embedding capacity and stego-image imperceptibility, but doing so is challenging. A systematic approach with a standard methodology is used to carry out this study. The review concentrates on the critical examination of several embedding strategies, incorporating experimental results of state-of-the-art methods and emphasizing the robustness, security, payload capacity, and visual quality metrics of the stego images. The fundamental ideas of steganography are presented, along with a viewpoint that sets this work apart from previous surveys by highlighting research gaps, important problems, and difficulties. It also discusses suggested directions for future study to advance and investigate uncharted territory in image steganography.
Using unmanned aerial vehicles (UAVs) as aerial base stations to provide communication services for ground users is a flexible and cost-effective paradigm in B5G. In addition, dynamic resource allocation and multi-connectivity can be adopted to further harness the potential of UAVs for improving communication capacity, in which case the interference among users becomes a pivotal obstacle requiring effective solutions. To this end, we investigate the joint UAV-user association, channel allocation, and transmission power control (J-UACAPC) problem in a multi-connectivity-enabled UAV network with constrained backhaul links, where each UAV determines the reusable channels and transmission power to serve its selected ground users. The goal is to mitigate co-channel interference while maximizing long-term system utility. The problem is modeled as a cooperative stochastic game with a hybrid discrete-continuous action space, and a multi-agent hybrid deep reinforcement learning (MAHDRL) algorithm is proposed to address it. Extensive simulation results demonstrate the effectiveness of the proposed algorithm and show that it achieves higher system utility than the baseline methods.
In Saudi Arabia, drones are increasingly used in sensitive domains such as the military, health, and agriculture. Typically, drone cameras capture aerial images of objects and convert them into crucial data, alongside collecting data from distributed sensors supplemented by location data. Interception of the data sent from the drone to the station can lead to substantial threats, so highly confidential protection methods must be employed. This paper introduces a novel steganography approach called the Shuffling Steganography Approach (SSA). SSA encompasses five fundamental stages and three proposed algorithms, designed to enhance security through strategic encryption and data hiding techniques. Notably, the method offers increased resistance to brute-force attacks by employing predefined patterns across a wide array of images, complicating unauthorized access. The initial stages involve encrypting, dividing, and disassembling the data. A small portion of the encrypted data is concealed within text (Algorithm 1) in the third stage; the parts are then merged and mixed (Algorithm 2); and finally, the composed text is hidden within an image (Algorithm 3). Through meticulous investigation and comparative analysis with existing methodologies, the proposed approach demonstrates superiority across various pertinent criteria, including robustness, secret message capacity, resistance to multiple attacks, and multilingual support.
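The final hide-in-image stage of such a pipeline is typically some form of least-significant-bit (LSB) embedding. The sketch below is a generic LSB illustration over raw pixel bytes, not the SSA algorithm itself, and the "cover image" is just a synthetic byte string:

```python
def hide(pixels, message):
    """Hide message bytes in the least-significant bits of pixel bytes.
    A 4-byte big-endian length header precedes the payload."""
    data = len(message).to_bytes(4, "big") + message
    bits = []
    for byte in data:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("cover too small for payload")
    out = bytearray(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b           # overwrite only the LSB
    return bytes(out)

def reveal(pixels):
    """Recover the hidden bytes from the LSBs."""
    def read(n, off):
        val = 0
        for i in range(n * 8):
            val = (val << 1) | (pixels[off + i] & 1)
        return val.to_bytes(n, "big")
    length = int.from_bytes(read(4, 0), "big")
    return read(length, 32)

cover = bytes(range(256)) * 40                 # placeholder 'image' bytes
stego = hide(cover, b"secret")
print(reveal(stego))                           # b'secret'
```

Each cover byte changes by at most one unit, which is what keeps the stego image visually indistinguishable from the original; SSA layers encryption and shuffling on top of this kind of embedding.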
Federated learning is an innovative machine learning technique that addresses centralized data storage issues while maintaining privacy and security. It constructs machine learning models using datasets spread across several data centers, including medical facilities, clinical research facilities, Internet of Things devices, and even mobile devices. The main goal of federated learning is to build robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information, reducing the risk of data loss, privacy breaches, or data exposure. The application of federated learning in the healthcare industry holds significant promise because of the wealth of data generated from sources such as patient records, medical imaging, wearable devices, and clinical research surveys. This research conducts a systematic evaluation and highlights essential issues for the selection and implementation of federated learning approaches in healthcare. It evaluates the effectiveness of federated learning strategies in the healthcare field and offers a systematic analysis of federated learning in the healthcare domain, including the evaluation metrics employed. In addition, this study highlights the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
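The aggregation step at the heart of most federated learning systems is federated averaging (FedAvg): clients train locally and the server averages their parameters, weighted by local dataset size. A minimal sketch, with three hypothetical hospitals and a tiny two-parameter "model" standing in for real local training:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client parameter lists,
    weighting each client by its local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three hypothetical hospitals, each holding locally trained parameters.
clients = [
    [np.array([1.0, 2.0]), np.array([0.5])],
    [np.array([3.0, 0.0]), np.array([1.5])],
    [np.array([2.0, 4.0]), np.array([1.0])],
]
sizes = [100, 300, 100]                    # local dataset sizes
global_model = fedavg(clients, sizes)
print(global_model)
```

Only parameters travel to the server; the raw patient records never leave each site, which is exactly the privacy property the abstract describes.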
Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge is that existing studies do not predict the intensity of depression in social media texts but only perform binary classification of depression; moreover, noisy data makes it difficult to detect true depression in social media text. This study begins by collecting relevant tweets and generating a corpus of 210,000 public tweets using Twitter's public application programming interfaces (APIs). A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags, reducing noise in the corpus. Furthermore, an algorithm is developed to annotate the data into three depression classes, 'Mild,' 'Moderate,' and 'Severe,' based on the International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model is then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning, which significantly increases depression classification performance to an 84% F1 score and 90% accuracy compared with the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost performance by combining several other classifiers and assigning weights to the individual models according to their individual performance. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89%, 5% higher than FastText alone, and an accuracy of 93%, 3% higher than FastText alone. The proposed model better captures the contextual features of the relatively small sample classes and aids early detection of depression intensity from tweets.
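The weighted soft voting step described above combines per-model class probabilities with weights proportional to each model's validation performance. A minimal sketch; the probability matrices and weights below are placeholders, not the actual FastText and baseline outputs:

```python
import numpy as np

def weighted_soft_vote(prob_sets, weights):
    """Combine per-classifier probability matrices (n_samples x n_classes)
    with normalized weights, then take the argmax class per sample."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    stacked = np.stack(prob_sets)              # (n_models, n_samples, n_classes)
    combined = np.tensordot(w, stacked, axes=1)
    return combined.argmax(axis=1), combined

# Placeholder probabilities from three models over Mild/Moderate/Severe.
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.4, 0.4, 0.2], [0.1, 0.3, 0.6]])
p3 = np.array([[0.5, 0.2, 0.3], [0.2, 0.6, 0.2]])
weights = [0.84, 0.78, 0.80]                   # e.g. validation F1 of each model

labels, probs = weighted_soft_vote([p1, p2, p3], weights)
print(labels)
```

Because stronger models get proportionally larger say, the ensemble can outvote a single weak model on ambiguous samples while still preserving valid probability rows.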
Digital image forgery (DIF) is a prevalent issue in the modern age, where malicious actors manipulate images for various purposes, including deception and misinformation. Detecting such forgeries is critical for maintaining the integrity of digital content. This thesis explores the use of modified error level analysis (ELA) in combination with a convolutional neural network (CNN), as well as a feedforward neural network (FNN), to detect digital image forgeries. Additionally, incorporating explainable artificial intelligence (XAI) into this research provided insights into the models' decision-making processes. The study trains and tests the models on the CASIA2 dataset, covering both authentic and forged images. The CNN model is trained and evaluated, and explainable AI (SHapley Additive exPlanations, SHAP) is incorporated to explain the model's predictions; the FNN model is trained, evaluated, and explained in the same way. The results reveal that the proposed CNN-based approach is the most effective at detecting image forgeries and provides valuable explanations for decision interpretability.
The growing interest of Ivorian companies in sports celebrities has led us to conduct research on sports stars as a means of brand communication in Côte d'Ivoire. This work, based on a semiological reading, reveals several persuasive strategies used by advertisers to seduce consumers: the use of sports celebrities as spokespersons for brands and, above all, as promoters of products because of their youth, elegance, and fame. These promotions are most often made from the competition venues where the athletes perform. Because of the population's low literacy level, advertisers resort to simple language, among other techniques.
In present-day society, train tunnels are extensively used as a means of transportation. To ensure safety, streamlined train operations, and uninterrupted internet access inside train tunnels, reliable wave propagation modeling is required. We experimented with and measured wave propagation models in a 1674 m long straight train tunnel in South Korea. The measured path loss and received signal strength were modeled with the close-in (CI), floating-intercept (FI), CI with frequency-weighted path loss exponent (CIF), and alpha-beta-gamma (ABG) models, where the model parameters were determined using minimum mean square error (MMSE) methods. The measured path loss and the CI, FI, CIF, and ABG model-derived path loss were plotted, and the model closest to the measurements was identified through investigation. Based on the measured results, every model exhibited a comparatively low path loss exponent (PLE, n < 2) inside the tunnel. We also determined the possible deviation of the path loss (the shadow factor) through a Gaussian distribution, assuming a zero mean and calculating the standard deviation of the random error variables. The FI model outperformed all the examined models, as it yielded path loss closest to the measured datasets as well as the minimum standard deviation of the shadow factor.
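The MMSE fitting described above has a closed form for the CI model. The sketch below fits the path loss exponent n and the shadow-factor standard deviation to synthetic data; the carrier frequency, distances, and true PLE of 1.8 are assumptions chosen to mimic the tunnel's waveguiding (n < 2), not the measured dataset:

```python
import numpy as np

def fit_ci_model(d, pl_measured, f_hz, d0=1.0):
    """Fit the close-in (CI) path loss model
        PL(d) = FSPL(f, d0) + 10 n log10(d / d0) + X_sigma
    via MMSE: closed-form PLE n and shadow-factor std sigma (dB)."""
    c = 3e8
    fspl = 20 * np.log10(4 * np.pi * d0 * f_hz / c)   # free-space loss at d0
    A = pl_measured - fspl
    B = 10 * np.log10(d / d0)
    n = np.sum(A * B) / np.sum(B * B)                 # MMSE path loss exponent
    sigma = np.std(A - n * B)                         # residual std = shadow factor
    return n, sigma

# Synthetic tunnel-like data with true PLE 1.8 and 2 dB shadowing.
rng = np.random.default_rng(3)
f = 3.7e9                                             # assumed carrier, Hz
d = np.linspace(10, 1600, 200)                        # Tx-Rx distances, m
fspl0 = 20 * np.log10(4 * np.pi * 1.0 * f / 3e8)
pl = fspl0 + 10 * 1.8 * np.log10(d) + rng.normal(0, 2.0, d.size)

n, sigma = fit_ci_model(d, pl, f)
print(f"n = {n:.2f}, sigma = {sigma:.2f} dB")
```

The FI model differs only in letting the intercept float as a second fitted parameter instead of anchoring it to the free-space loss at d0, which is why it can track measurements slightly more closely.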
The Internet of Things (IoT) has numerous applications in every domain, e.g., smart cities, where it provides intelligent services for sustainability. Next-generation IoT networks are expected to be densely deployed in resource-constrained and lossy environments. Densely deployed nodes producing radically heterogeneous traffic cause congestion and collisions in the network. At the medium access control (MAC) layer, mitigating channel collisions is still one of the main challenges of future IoT networks. Similarly, the standardized network layer uses a ranking mechanism based on hop counts and expected transmission counts (ETX), which often fails to adapt to the dynamic and lossy environment and degrades performance; the ranking mechanism also requires large control overheads to update rank information. Resource-constrained IoT devices operating in a low-power and lossy network (LLN) environment need an efficient solution to these problems. Reinforcement learning (RL) algorithms such as Q-learning have recently been utilized to solve learning problems in LLN devices such as sensors. Thus, in this paper, an RL-based optimization of dense LLN IoT devices with heavy heterogeneous traffic is devised. The proposed protocol learns collision information from the MAC layer and makes intelligent decisions at the network layer; it also enhances the operation of the trickle timer algorithm. A Q-learning model is employed to adaptively learn the channel collision probability and network layer ranking states with an accumulated reward function. In simulations using Contiki 3.0 Cooja, the proposed intelligent scheme achieves a lower packet loss ratio, improves throughput, produces lower control overheads, and consumes less energy than other state-of-the-art mechanisms.
Funding: funded by the Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center.
Abstract: Direction-of-arrival (DoA) estimation is one of the most active research areas in signal processing. To address the challenge of DoA estimation without prior knowledge of the number of signal sources and multipath components in a millimeter-wave system, a multi-task deep residual shrinkage network (MTDRSN) and a transfer-learning-based convolutional neural network (TCNN), together termed MDTCNet, are proposed. The sample covariance matrix of the received signal is used as the input to the proposed network. A DRSN-based multi-task classification model is first introduced to estimate the number of signal sources and the number of multipath components simultaneously. The DoAs of multiple signals and paths are then estimated by a regression model: the proposed CNN performs DoA estimation given the predicted numbers of sources and paths. Furthermore, model-based transfer learning is introduced into the regression model; the TCNN inherits part of the network parameters of the optimized model already obtained by the CNN. A series of experimental results shows that the MDTCNet-based DoA estimation method accurately predicts the numbers of signal sources and multipath components over a range of signal-to-noise ratios. Remarkably, the proposed method achieves a lower root-mean-square error than several existing deep-learning-based and traditional methods.
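The network input described above is the sample covariance matrix of the received snapshots. A minimal NumPy sketch of that computation follows; the array size, snapshot count, and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance matrix of received snapshots.

    X : (M, N) complex array -- M antenna elements, N snapshots.
    Returns the (M, M) Hermitian matrix R = X X^H / N.
    """
    M, N = X.shape
    return X @ X.conj().T / N

# Toy example: an 8-element array observing 200 snapshots of unit-power noise.
rng = np.random.default_rng(0)
X = (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))) / np.sqrt(2)
R = sample_covariance(X)   # (8, 8) Hermitian matrix fed to the network
```

In practice the real and imaginary parts of R (or its upper triangle) would be flattened into the network's input tensor.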
Funding: supported by the Nano & Material Technology Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (RS-2023-00234757).
Abstract: Although hydrofluoric acid (HF) surface treatment is known to enhance the joining of metals with polymers, there is limited information on its effect on the joining of AZ31 alloy and carbon-fiber-reinforced plastics (CFRPs) through laser-assisted metal and plastic direct joining (LAMP). This study uses the LAMP technique to produce AZ31-CFRP joints. The joining process involves as-received AZ31, HF-pretreated AZ31, and thermally oxidized HF-pretreated AZ31 alloy sheets. Furthermore, the bonding strength of joints prepared with thermally oxidized AZ31 alloy sheets is examined to ascertain the combined effect of HF treatment and thermal oxidation on bonding strength. The microstructures, surface chemical interactions, and mechanical performance of the joints are investigated under tensile shear loading. Various factors, such as bubble formation, CFRP resin decomposition, and mechanical interlocking, considerably affect joint strength. Additionally, surface chemical interactions between the active species on the metal parts and the polar amide and carbonyl groups of the polymer play a significant role in improving joint strength. Joints prepared with surface-pretreated AZ31 alloy sheets show significant improvements in bonding strength.
Abstract: Purpose - Material selection, driven by wide and often conflicting objectives, is an important and sometimes difficult problem in material engineering. In this context, multi-criteria decision-making (MCDM) methodologies are effective: an MCDM approach is needed to address the criteria of material selection simultaneously. More firms are now concerned with increasing their productivity using mathematical tools. To fill a gap in the previous literature, this research recommends an integrated MCDM and bi-objective mathematical model for material selection. In addition, by using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), the inherent ambiguities of decision-makers in paired evaluations are considered, and a bi-objective mathematical model is constructed for determining the best item to purchase. Design/methodology/approach - The entropy method is implemented to evaluate the weight parameters, while the TOPSIS technique is used to determine the best and worst intermediate-pipe materials for an automotive exhaust system. The intermediate pipes are used to join the components of the exhaust system. The materials usually used to manufacture intermediate pipe are SUS 436LM, SUS 430, SUS 304, SUS 436L, SUH 409L, SUS 441L, and SUS 439L. These candidate materials are evaluated on five criteria: tensile strength (TS), yield strength (YS), hardness (H), elongation (E), and cost (C). The first two criteria were chosen because they strongly influence the behavior of the exhaust intermediate pipes, their performance, and their cost. A hybrid methodology combining entropy-based criteria weighting with TOPSIS-based alternative ranking is pursued to identify the optimal material for the engineered application, helping to fill the information gap in selecting the most suitable material for exhaust intermediate pipes. The criteria weights are calculated objectively through the entropy method to give an unbiased assessment: the method essentially measures the quantity of information each criterion contributes, better indicating the relative importance of the criteria. Subsequently, the materials are ranked with TOPSIS in terms of their relative performance by measuring each material's distance from an ideal solution to determine the best alternative. The results show that SUS 309, SUS 432L, and SUS 436LM are the first three materials that the optimal design of the exhaust intermediate pipe should consider. Findings - The decision matrix presented in Table 3 was normalized through Equation 5, as shown in Table 5, and the matrix was multiplied by the weighting criteria β_j. The obtained weighted normalized matrix V_ij is presented in Table 6, and the ideal, worst, and best values were ascertained by employing Equation 7. This study is based on the selection of material for the development of the intermediate pipe using MCDM, and it involves four basic stages: translation of criteria, screening, ranking, and search. The selection was done through the TOPSIS method, and the criteria weights were obtained by the entropy method. The results showed that the top three materials are SUS 309, SUS 432L, and SUS 436LM, respectively. For future work, it is suggested to consider more alternatives and criteria. Comparisons can also be made using other MCDM techniques such as Elimination and Choice Expressing Reality (ELECTRE), Decision-Making Trial and Evaluation Laboratory (DEMATEL), and Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). Originality/value - The results provide important conclusions for material selection in this targeted application, verifying that the combined entropy-TOPSIS methodology suits difficult engineering decisions in material engineering that demand superior capability together with better performance and cost-efficiency across various engineering designs.
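The entropy-weighting and TOPSIS-ranking steps described above can be sketched in a few lines. This is a generic implementation of the two standard methods applied to a made-up toy decision matrix, not the paper's data:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights from Shannon entropy.

    X : (m alternatives, n criteria), all entries positive.
    """
    P = X / X.sum(axis=0)                         # column-normalized proportions
    m = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()

def topsis(X, w, benefit):
    """TOPSIS closeness scores; benefit[j] is True for larger-is-better criteria."""
    V = w * X / np.sqrt((X ** 2).sum(axis=0))     # weighted normalized matrix
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    s_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    s_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return s_worst / (s_best + s_worst)           # in [0, 1]; higher is better

# Toy matrix: 3 materials x 3 criteria (TS and E are benefits, C is a cost).
X = np.array([[520., 25., 3.1],
              [480., 30., 2.4],
              [610., 20., 4.0]])
w = entropy_weights(X)
score = topsis(X, w, benefit=np.array([True, True, False]))
ranking = np.argsort(-score)   # indices of materials, best first
```

The same two functions scale directly to the paper's seven materials and five criteria once the decision matrix is filled in.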
Funding: supported in part by the National Natural Science Foundation of China under Grants 62171465, 62072303, 62272223, and U22A2031.
Abstract: By pushing computation, caching, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth-generation (5G) and future sixth-generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, through a variety of collaborative mechanisms, to provide the best possible computation services to UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in the AGC-MEC as well as their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy. Finally, we highlight several potential research directions for the AGC-MEC.
Abstract: The volume of published research grows at a rapid rate, producing massive text corpora. Owing to these enormous corpora, we are drowning in data while starving for information. Recent research has therefore employed text mining approaches to extract information from such corpora. These approaches extract meaningful and precise phrases that effectively describe a text's information, commonly termed keyphrases. These keyphrases are then employed to determine trends in different fields of study, and can also be used to determine spatiotemporal trends in various research fields. In this research, the progress of a research field is revealed through spatiotemporal bibliographic trend analysis: an effective spatiotemporal trend extraction mechanism is required to disclose textile research trends of particular regions during a specific period. This study collected a diversified dataset of textile research from 2011-2019 and from different countries to determine the research trends; the data were collected from various open-access journals. The study then extracted spatiotemporal trends using quality phrase mining, and also focused on finding the research collaborations of different countries in a particular research subject. The research collaborations among countries' researchers show their impact on the imports and exports of those countries. A visualization approach is also incorporated to present the results more clearly.
Abstract: Mental workload plays a vital role in cognitive impairment. The impairment refers to a person's difficulty in remembering, receiving new information, learning new things, concentrating, or making decisions that seriously affect everyday life. In this paper, a simultaneous-capacity (SIMKAP) experiment-based EEG workload analysis is presented using 45 subjects for multitasking mental workload estimation, with subject-wise attention loss calculation as well as short-term memory loss measurement. Using an open-access preprocessed EEG dataset, the discrete wavelet transform (DWT) was utilized for feature extraction, and the minimum redundancy maximum relevance (MRMR) technique was used to select the most relevant features. Wavelet decomposition was used to decompose the EEG signals into five sub-bands. Fourteen statistical features were calculated from each sub-band signal to form a 5 × 14 feature window. The Neural Network (Narrow) classification algorithm was used to classify the dataset into low and high workload conditions, and a comparison was made with several other machine learning models. The results show a classifier accuracy of 86.7%, precision of 84.4%, F1-score of 86.33%, and recall of 88.37%, surpassing state-of-the-art methodologies in the literature. This prediction is expected to greatly facilitate improved assessment of memory and attention loss impairments.
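The sub-band feature extraction described above can be illustrated with a hand-rolled wavelet decomposition. This sketch uses a Haar filter and four illustrative statistics per band, rather than the paper's exact mother wavelet and fourteen features:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]               # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def subband_features(signal, levels=4):
    """4-level decomposition -> 5 sub-bands; a few statistics per band."""
    bands, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        bands.append(d)                    # detail band at this level
    bands.append(a)                        # final approximation band
    return np.array([[b.mean(), b.std(), np.abs(b).max(), np.median(b)]
                     for b in bands])      # shape (5, n_stats)

# Toy 512-sample "EEG" segment of white noise.
rng = np.random.default_rng(1)
feats = subband_features(rng.standard_normal(512))   # (5, 4) feature window
```

Extending the per-band statistics list to fourteen entries reproduces the 5 × 14 window shape used in the paper.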
Abstract: Quality of Maternal, Neonatal and Child Health (MNCH) care is an important aspect of ensuring healthy outcomes and the survival of mothers and children. To maintain quality in the health services provided, organizations and other stakeholders in maternal and child health recommend regular quality measurement. Quality indicators are the key components of the quality measurement process. However, the literature shows neither an indicator selection process nor a set of quality indicators for quality measurement that is universally accepted. This lack results in the establishment of a variety of quality indicator selection processes and several sets of quality indicators whenever the need for quality measurement arises, adding extra steps that encumber the quality measurement process. This study therefore aims to establish a set of quality indicators from the broad set of quality indicators recommended by the World Health Organization (WHO). The study deployed a machine learning technique, specifically a random forest classifier, to select important indicators for quality measurement. Twenty-nine indicators were identified as important features, and among those, eight indicators, namely maternal mortality ratio, stillbirth rate, delivery at a health facility, deliveries assisted by skilled attendants, proportional breech delivery, normal delivery rate, born-before-arrival rate, and antenatal care visit coverage, were identified as the most important indicators for quality measurement.
Abstract: Antennas are an indispensable element in wireless networks. For long-distance wireless communication, antenna gains need to be very strong (highly directive) because the signal from the antenna loses a lot of strength as it travels over long distances. This is true in the military with missile, radar, and satellite systems, among others. Antenna arrays are commonly employed to focus electromagnetic waves in a certain direction, which cannot be achieved perfectly with a single-element antenna. The goal of this study is to design a rectangular microstrip high-gain 2 × 1 array antenna using ADS Momentum. This microstrip patch array design uses RT/duroid 5880 as the substrate, with a dielectric constant of 2.2, a substrate height of 1.588 mm, and a loss tangent of 0.001. To achieve efficient gain and return loss characteristics for the proposed array antenna, RT/duroid is a good choice of dielectric material. The designed array antenna is made up of two rectangular patches, which have a resonance frequency of 3.3 GHz. These rectangular patches are excited by microstrip feed lines 13 mm long and 4.8 mm wide. The impedance of the patches is perfectly matched by these transmission lines, which helps to obtain better antenna characteristics. At the 3.3 GHz resonance frequency, the proposed antenna array has a directivity of 10.50 dB and a maximum gain of 9.90 dB in the S-band. The S-parameters, 3D radiation pattern, directivity, gain, and efficiency of the constructed array antenna are all available in ADS Momentum.
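The patch dimensions implied above follow from the standard transmission-line design equations for a rectangular microstrip patch. A sketch using the stated substrate (εr = 2.2, h = 1.588 mm) and 3.3 GHz resonance, independent of the paper's actual ADS layout:

```python
import math

def patch_dimensions(f0_hz, eps_r, h_m):
    """Rectangular patch width W and length L from the transmission-line model."""
    c = 3e8
    # Patch width for efficient radiation.
    W = c / (2 * f0_hz) * math.sqrt(2 / (eps_r + 1))
    # Effective dielectric constant (accounts for fringing into air).
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / W) ** -0.5
    # Fringing-field length extension (Hammerstad formula).
    dL = 0.412 * h_m * ((eps_eff + 0.3) * (W / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (W / h_m + 0.8))
    # Physical length: half a guided wavelength minus the fringing extension.
    L = c / (2 * f0_hz * math.sqrt(eps_eff)) - 2 * dL
    return W, L

# Substrate and frequency from the study: RT/duroid 5880 at 3.3 GHz.
W, L = patch_dimensions(3.3e9, 2.2, 1.588e-3)
W_mm, L_mm = W * 1e3, L * 1e3   # roughly 36 mm x 30 mm
```

These closed-form values are only a starting point; the final layout is tuned in the full-wave simulator.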
Funding: supported by the National Natural Science Foundation of China (No. 51775185), the Natural Science Foundation of Hunan Province (No. 2022JJ90013), the Intelligent Environmental Monitoring Technology Hunan Provincial Joint Training Base for Graduate Students in the Integration of Industry and Education, Hunan Normal University University-Industry Cooperation, and the 2011 Collaborative Innovation Center for Development and Utilization of Finance and Economics Big Data Property, Universities of Hunan Province, Open Project (Grant Number 20181901CRP04).
Abstract: At present, water pollution has become an important factor affecting and restricting national and regional economic development. Total phosphorus is one of the main sources of water pollution and eutrophication, so the prediction of total phosphorus in water quality is of good research significance. This paper selects total phosphorus and turbidity data for analysis by crawling the data of a water quality monitoring platform. By constructing an attribute-object mapping relationship, the correlation between the two indicators was analyzed and used to predict future data. Firstly, the monthly mean and daily mean concentrations of total phosphorus and turbidity were calculated after cleaning outliers, and the correlation between them was analyzed. Secondly, the correlation coefficients at different times and frequencies were used to predict the values for the next five days, and the data trend was visualized with Python. Finally, the real values were compared with the predicted values, and the results showed that the correlation between total phosphorus and turbidity is useful in predicting water quality.
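The correlation step described above can be illustrated with a Pearson coefficient and a least-squares trend line. The data values below are hypothetical stand-ins, not readings from the monitoring platform:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical daily means: turbidity (NTU) and total phosphorus (mg/L).
turbidity  = np.array([3.1, 4.0, 5.2, 6.8, 7.5, 9.1])
phosphorus = np.array([0.02, 0.03, 0.04, 0.05, 0.06, 0.07])

r = pearson_r(turbidity, phosphorus)          # strength of the relationship
# Least-squares line TP = a * turbidity + b, usable for a short-range forecast.
a, b = np.polyfit(turbidity, phosphorus, 1)
forecast = a * 10.0 + b                       # predicted TP at turbidity 10 NTU
```

A high r justifies using the better-instrumented turbidity series as a proxy predictor for total phosphorus.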
Funding: supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2022R1I1A3063493).
Abstract: Smart manufacturing is a process that optimizes factory performance and production quality by utilizing various technologies, including the Internet of Things (IoT) and artificial intelligence (AI). Quality control is an important part of today's smart manufacturing process, effectively reducing costs and enhancing operational efficiency. As technology in the industry becomes more advanced, identifying and classifying defects has become an essential element in ensuring product quality during the manufacturing process. In this study, we introduce a CNN model for classifying defects on hot-rolled steel strip surfaces using hybrid deep learning techniques, incorporating a global average pooling (GAP) layer and a machine-learning-based SVM classifier, with the aim of enhancing accuracy. Initially, features are extracted by the VGG19 convolutional block. Then, after processing through the GAP layer, the extracted features are fed to the SVM classifier for classification. For this purpose, we collected images from publicly available datasets, including the X-steel surface defect dataset (XSDD) and the NEU surface defect (NEU-CLS) dataset, and employed offline data augmentation techniques to balance and increase the size of the datasets. The experiments show that the proposed methodology achieves the highest metric scores, with 99.79% accuracy, 99.80% precision, 99.79% recall, and a 99.79% F1-score on the NEU-CLS dataset. Similarly, it achieves 99.64% accuracy, 99.65% precision, 99.63% recall, and a 99.64% F1-score on the XSDD dataset. A comparison of the proposed methodology with recent studies shows that it achieves superior results.
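The GAP step described above can be shown in isolation: it collapses each channel of a convolutional feature block to a single scalar, producing the fixed-length vector handed to the SVM. A minimal sketch, where the 7 × 7 × 512 shape is an illustrative VGG-style size rather than the paper's exact tensor:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse an (H, W, C) convolutional feature block to a length-C vector
    by averaging each channel over its spatial dimensions -- the GAP step
    that feeds the downstream SVM classifier."""
    return feature_maps.mean(axis=(0, 1))

# Toy VGG-style feature block: 7x7 spatial grid, 512 channels.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 512))
vec = global_average_pool(fmap)   # shape (512,): one scalar per channel
```

Compared with flattening, GAP keeps the descriptor length independent of the input image size and discards spatial position, which suits a whole-image defect-class decision.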
Funding: funded by the Ministry of Higher Education (MOHE) through the Fundamental Research Grant Scheme (FRGS) under Grant Number FRGS/1/2020/ICT01/UKM/02/4, and by Universiti Kebangsaan Malaysia for open-access publication.
Abstract: Image steganography is one of the prominent technologies in data hiding standards. Steganographic system performance mostly depends on the embedding strategy, whose goal is to embed strictly confidential information into images without causing perceptible changes in the original image. The randomization strategies in data embedding techniques may utilize random domains, pixels, or regions of interest for concealing secrets in a cover image, preventing the information from being discovered by an attacker. Implementing an appropriate embedding technique can achieve a fair balance between embedding capacity and stego-image imperceptibility, but this is challenging. A systematic approach with a standard methodology is used to carry out this study. This review concentrates on the critical examination of several embedding strategies, incorporating experimental results from state-of-the-art methods and emphasizing the robustness, security, payload capacity, and visual quality metrics of the stego images. The fundamental ideas of steganography are presented, along with a unique viewpoint that sets this work apart from previous surveys by highlighting research gaps, important problems, and difficulties. Additionally, it offers a discussion of suggested directions for future study to advance and investigate uncharted territory in image steganography.
Funding: supported in part by the National Natural Science Foundation of China (Grant Nos. 61971365, 61871339, 62171392), the Digital Fujian Province Key Laboratory of IoT Communication, Architecture and Safety Technology (Grant No. 2010499), the State Key Program of the National Natural Science Foundation of China (Grant No. 61731012), and the Natural Science Foundation of Fujian Province of China (No. 2021J01004).
Abstract: Using unmanned aerial vehicles (UAVs) as aerial base stations to provide communication services for ground users is a flexible and cost-effective paradigm in B5G. Besides, dynamic resource allocation and multi-connectivity can be adopted to further harness the potential of UAVs in improving communication capacity; in such situations, the interference among users becomes a pivotal constraint requiring effective solutions. To this end, we investigate the joint UAV-user association, channel allocation, and transmission power control (J-UACAPC) problem in a multi-connectivity-enabled UAV network with constrained backhaul links, where each UAV can determine the reusable channels and transmission power to serve the selected ground users. The goal is to mitigate co-channel interference while maximizing long-term system utility. The problem is modeled as a cooperative stochastic game with a hybrid discrete-continuous action space, and a multi-agent hybrid deep reinforcement learning (MAHDRL) algorithm is proposed to address it. Extensive simulation results demonstrate the effectiveness of the proposed algorithm and show that it achieves higher system utility than the baseline methods.
Funding: funded by the Research Deanship of the Islamic University of Madinah under grant number 966.
Abstract: In Saudi Arabia, drones are increasingly used in sensitive domains such as military, health, and agriculture, to name a few. Typically, drone cameras capture aerial images of objects and convert them into crucial data, alongside collecting data from distributed sensors supplemented by location data. The interception of the data sent from the drone to the station can lead to substantial threats. To address this issue, highly confidential protection methods must be employed. This paper introduces a novel steganography approach called the Shuffling Steganography Approach (SSA). SSA comprises five fundamental stages and three proposed algorithms, designed to enhance security through strategic encryption and data-hiding techniques. Notably, this method offers advanced resistance to brute-force attacks by employing predefined patterns across a wide array of images, complicating unauthorized access. The initial stage involves encrypting, dividing, and disassembling the data. A small portion of the encrypted data is concealed within text (Algorithm 1) in the third stage. Subsequently, the parts are merged and mixed (Algorithm 2), and finally, the composed text is hidden within an image (Algorithm 3). Through meticulous investigation and comparative analysis with existing methodologies, the proposed approach demonstrates superiority across various pertinent criteria, including robustness, secret message size capacity, resistance to multiple attacks, and multilingual support.
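The final "hide text within an image" stage can be illustrated with a generic least-significant-bit (LSB) scheme. This is a textbook LSB sketch, not the SSA algorithms themselves; the shuffling and encryption stages are deliberately omitted:

```python
import numpy as np

def lsb_embed(pixels, message):
    """Hide UTF-8 bytes in the least-significant bits of a flat uint8 pixel array.
    Each pixel carries one bit, so the visual change is at most +/-1 per pixel."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    if bits.size > pixels.size:
        raise ValueError("cover image too small for the message")
    out = pixels.copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return out

def lsb_extract(pixels, n_chars):
    """Recover n_chars bytes from the LSBs of the first n_chars*8 pixels."""
    bits = pixels[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

# Toy cover "image": 4096 random grayscale pixels.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=4096, dtype=np.uint8)
stego = lsb_embed(cover, "secret")
recovered = lsb_extract(stego, 6)
```

A scheme like SSA adds value precisely by not embedding sequentially as this sketch does: shuffling and encrypting first denies an attacker the predictable bit layout exploited above.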
Funding: supported by a research fund from Chosun University, 2023.
Abstract: Federated learning is an innovative machine learning technique that deals with centralized data storage issues while maintaining privacy and security. It involves constructing machine learning models using datasets spread across several data centers, including medical facilities, clinical research facilities, Internet of Things devices, and even mobile devices. The main goal of federated learning is to develop robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information, reducing the risk of data loss, privacy breaches, or data exposure. The application of federated learning in the healthcare industry holds significant promise due to the wealth of data generated from various sources, such as patient records, medical imaging, wearable devices, and clinical research surveys. This research conducts a systematic evaluation and highlights essential issues for the selection and implementation of federated learning approaches in healthcare, assessing the effectiveness of federated learning strategies in the field. It offers a systematic analysis of federated learning in the healthcare domain, encompassing the evaluation metrics employed. In addition, this study highlights the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
Abstract: Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including the early prediction of psychological disorders and stress in individuals or the general public. A major challenge in predicting depression from social media posts is that existing studies do not focus on predicting the intensity of depression in social media texts but rather perform only binary classification of depression; moreover, noisy data make it difficult to predict the true depression in social media text. This study begins by collecting relevant tweets and generating a corpus of 210,000 public tweets using the Twitter public application programming interfaces (APIs). A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus. Furthermore, an algorithm is developed to annotate the data into three depression classes, 'Mild,' 'Moderate,' and 'Severe,' based on the International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model is then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model, which significantly increases the depression classification performance to an 84% F1-score and 90% accuracy compared to the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost the model's performance by combining several other classifiers and assigning weights to the individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1-score of 89%, 5% higher than FastText alone, and an accuracy of 93%, 3% higher than FastText alone. The proposed model better captures the contextual features of the relatively small sample class and aids in the early detection of depression intensity from tweets with impactful performance.
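The weighted soft voting described above can be sketched generically: each classifier's per-class probability vector is averaged with a weight proportional to its validation performance. The probability vectors and weights below are made up for illustration and are not the paper's models or scores:

```python
import numpy as np

def weighted_soft_vote(prob_list, weights):
    """Weighted soft voting over per-class probability vectors.

    prob_list : list of (n_samples, n_classes) arrays, one per classifier.
    weights   : per-classifier scores (e.g. validation F1); normalized here.
    Returns (predicted labels, averaged probabilities).
    """
    w = np.asarray(weights, float)
    w = w / w.sum()                             # normalize weights to sum to 1
    stacked = np.stack(prob_list)               # (n_models, n_samples, n_classes)
    avg = np.tensordot(w, stacked, axes=1)      # weighted average of probabilities
    return avg.argmax(axis=1), avg

# Three hypothetical classifiers scoring one tweet on Mild/Moderate/Severe.
p1 = np.array([[0.2, 0.5, 0.3]])
p2 = np.array([[0.1, 0.3, 0.6]])
p3 = np.array([[0.3, 0.4, 0.3]])
labels, probs = weighted_soft_vote([p1, p2, p3], weights=[0.89, 0.80, 0.75])
```

Soft voting preserves each model's confidence rather than just its vote, which is why weighting by per-model performance can lift the ensemble above its best member.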
Abstract: Digital image forgery (DIF) is a prevalent issue in the modern age, where malicious actors manipulate images for various purposes, including deception and misinformation. Detecting such forgeries is a critical task for maintaining the integrity of digital content. This thesis explores the use of modified Error Level Analysis (ELA) in combination with a convolutional neural network (CNN), as well as a feedforward neural network (FNN) model, to detect digital image forgeries. Additionally, the incorporation of explainable artificial intelligence (XAI) provides insights into the models' decision-making processes. The study trains and tests the models on the CASIA2 dataset, covering both authentic and forged images. The CNN model is trained and evaluated, and explainable AI (SHapley Additive exPlanations, SHAP) is incorporated to explain the model's predictions. Similarly, the FNN model is trained and evaluated, and XAI (SHAP) is incorporated to explain its predictions. The results reveal that the proposed approach using the CNN model is the most effective in detecting image forgeries and provides valuable explanations for decision interpretability.
Abstract: The growing interest of Ivorian companies in sports celebrities has led us to conduct research on sports stars as a means of brand communication in Côte d'Ivoire. This work, based on a semiological reading, reveals several persuasive strategies used by advertisers to seduce consumers: the use of sports celebrities as spokespersons for brands and, above all, as promoters of products because of their youth, elegance, and fame. These promotions are most often staged in the competition venues where the celebrities perform. Due to the low literacy level of the population, simple language is favored, among other strategies.
Abstract: In present-day society, train tunnels are extensively used as a means of transportation. Therefore, to ensure safety, streamlined train operations, and uninterrupted internet access inside train tunnels, reliable wave propagation modeling is required. We experimented with and measured wave propagation models in a 1674 m long straight train tunnel in South Korea. The measured path loss and received signal strength were modeled with the close-in (CI), floating-intercept (FI), CI with a frequency-weighted path loss exponent (CIF), and alpha-beta-gamma (ABG) models, where the model parameters were determined using minimum mean square error (MMSE) methods. The measured path loss and the CI, FI, CIF, and ABG model-derived path losses were plotted in graphs, and the model closest to the measured path loss was identified through investigation. Based on the measured results, it was observed that every model had a comparatively low (n < 2) path loss exponent (PLE) inside the tunnel. We also determined the path loss component's possible deviation (shadow factor) through a Gaussian distribution with zero mean and a standard deviation calculated from the random error variables. The FI model outperformed all the examined models, as it yielded a path loss closest to the measured datasets as well as the minimum standard deviation of the shadow factor.
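The MMSE parameter fitting mentioned above has a closed form for the CI model, since the only free parameter is the path loss exponent n. A sketch with synthetic data; the 28 GHz carrier and the true PLE of 1.8 are assumptions for illustration, not the paper's measurement setup:

```python
import numpy as np

def fit_ci_ple(d, pl_meas, f_ghz):
    """MMSE fit of the CI-model path loss exponent n.

    CI model: PL(d) = FSPL(f, 1 m) + 10 n log10(d / 1 m) + X_sigma.
    The least-squares n has a closed form; sigma is the shadow-factor
    standard deviation, i.e. the RMS of the residuals.
    """
    fspl_1m = 32.4 + 20 * np.log10(f_ghz)    # free-space path loss at 1 m, dB
    a = pl_meas - fspl_1m                    # excess loss over the 1 m anchor
    b = 10 * np.log10(d)
    n = (a @ b) / (b @ b)                    # closed-form least-squares slope
    sigma = np.sqrt(np.mean((a - n * b) ** 2))
    return n, sigma

# Synthetic tunnel-like data with true n = 1.8 (below free space, as in tunnels)
# and 2 dB of Gaussian shadowing.
rng = np.random.default_rng(0)
d = np.linspace(10, 1600, 200)               # distances along the tunnel, m
pl = 32.4 + 20 * np.log10(28) + 18.0 * np.log10(d) + rng.normal(0, 2, d.size)
n, sigma = fit_ci_ple(d, pl, f_ghz=28)       # n near 1.8, sigma near 2 dB
```

The FI, CIF, and ABG fits follow the same MMSE idea but with two or three free parameters, solved as small linear least-squares systems.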
Funding: supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (No. 2018R1A2B6002399).
Abstract: The Internet of Things (IoT) has numerous applications in every domain, e.g., smart cities, where it provides intelligent services for sustainable cities. Next-generation IoT networks are expected to be densely deployed in resource-constrained and lossy environments. The densely deployed nodes produce radically heterogeneous traffic patterns, causing congestion and collisions in the network. At the medium access control (MAC) layer, mitigating channel collisions is still one of the main challenges of future IoT networks. Similarly, the standardized network layer uses a ranking mechanism based on hop counts and expected transmission counts (ETX), which often does not adapt to the dynamic and lossy environment and impacts performance. The ranking mechanism also requires large control overheads to update rank information. Resource-constrained IoT devices operating in a low-power and lossy network (LLN) environment need an efficient solution to handle these problems. Reinforcement learning (RL) algorithms such as Q-learning have recently been utilized to solve learning problems in LLN devices such as sensors. Thus, in this paper, an RL-based optimization of dense LLN IoT devices with heavy heterogeneous traffic is devised. The proposed protocol learns the collision information from the MAC layer and makes intelligent decisions at the network layer; it also enhances the operation of the trickle timer algorithm. A Q-learning model is employed to adaptively learn the channel collision probability and network-layer ranking states with an accumulated reward function. Based on a simulation using Contiki 3.0 Cooja, the proposed intelligent scheme achieves a lower packet loss ratio, improves throughput, produces lower control overheads, and consumes less energy than other state-of-the-art mechanisms.
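The tabular Q-learning update at the heart of the scheme above can be sketched as follows. The states and actions here are hypothetical stand-ins for the protocol's collision and ranking states, not its actual state space:

```python
def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[s_next].values())   # greedy value of the next state
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

# Hypothetical two-state model: states = observed collision level,
# actions = whether to keep or switch the preferred parent/rank.
Q = {"low":  {"keep": 0.0, "switch": 0.0},
     "high": {"keep": 0.0, "switch": 0.0}}

# Switching under high collision led to a low-collision state: reward it.
q_update(Q, "high", "switch", reward=1.0, s_next="low")
```

Because the table is tiny and the update is a single multiply-add, this kind of learner fits the memory and energy budget of LLN-class sensor nodes.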