Nuclear magnetic resonance imaging of breasts often presents complex backgrounds. Breast tumors exhibit varying sizes, uneven intensity, and indistinct boundaries. These characteristics can lead to challenges such as low accuracy and incorrect segmentation during tumor segmentation. Thus, we propose a two-stage breast tumor segmentation method leveraging multi-scale features and boundary attention mechanisms. Initially, the breast region of interest is extracted to isolate the breast area from surrounding tissues and organs. Subsequently, we devise a fusion network incorporating multi-scale features and boundary attention mechanisms for breast tumor segmentation. We incorporate multi-scale parallel dilated convolution modules into the network, enhancing its capability to segment tumors of various sizes through multi-scale convolution and novel fusion techniques. Additionally, attention and boundary detection modules are included to augment the network's capacity to locate tumors by capturing nonlocal dependencies in both the spatial and channel domains. Furthermore, a hybrid loss function with boundary weighting is employed to address sample class imbalance and to enhance the network's boundary preservation through an additional loss term. The method was evaluated on breast data from 207 patients at Ruijin Hospital, yielding a 6.64% increase in Dice similarity coefficient over the benchmark U-Net. Experimental results demonstrate the superiority of the method over other segmentation techniques, with fewer model parameters.
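The reported 6.64% gain is measured with the Dice similarity coefficient. As a reference, a minimal sketch of how Dice is computed for a pair of binary segmentation masks (plain Python, not the paper's implementation):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks of equal shape.

    pred/target: nested lists (rows) of 0/1 values; eps guards against
    division by zero when both masks are empty.
    """
    p = [v for row in pred for v in row]
    t = [v for row in target for v in row]
    intersection = sum(a * b for a, b in zip(p, t))
    return (2.0 * intersection + eps) / (sum(p) + sum(t) + eps)

# Toy masks: 2 of 3 foreground pixels overlap -> Dice = 2*2 / (3+3) ~ 0.667
print(dice_coefficient([[0, 1, 1], [0, 1, 0]], [[0, 1, 0], [0, 1, 1]]))
```

Segmentation papers typically report Dice because it weights the (small) foreground region rather than the dominant background, which is also why it pairs naturally with the class-imbalance loss described above.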
First mirror (FM) cleaning using radio frequency (RF) plasma has been proposed to recover FM reflectivity in nuclear fusion reactors such as the International Thermonuclear Experimental Reactor (ITER). To investigate the influence of simultaneous cleaning of two mirrors on cleaning efficiency and uniformity, single-mirror and dual-mirror cleaning experiments were conducted using RF capacitively coupled plasma in the laboratory. For the test and simultaneous cleaning of two mirrors, the FM and the second mirror (SM), both measuring 110 mm × 80 mm, were placed inside the first mirror unit (FMU). Each was composed of 16 mirror samples with dimensions of 27.5 mm × 20 mm. These mirror samples consist of a titanium-zirconium-molybdenum alloy substrate, a 500 nm Mo intermediate layer, and a 30 nm Al₂O₃ surface coating as a proxy for Be impurities. The cleaning of a single first mirror (SFM) and the simultaneous cleaning of the FM and SM (DFM and DSM) lasted 9 h using Ar plasma at a pressure of 1 Pa. With a self-bias of −140 V, the total reflectivity of mirror samples on the DSM did not fully recover and varied with location. With a self-bias of −300 V, the total reflectivity of mirror samples on the SFM and DFM was fully recovered, and energy dispersive spectrometry demonstrated that the Al₂O₃ coating had been completely removed from these mirror samples. However, the mass loss of each mirror sample on the SFM and DFM before and after cleaning varied with its location: mass loss was higher for mirror samples in the corners and lower for those in the center. Compared with single-mirror cleaning, the simultaneous cleaning of two mirrors reduced the difference between the highest and lowest mass loss. Furthermore, the mass loss for the mirror samples of the DFM facing the DSM was increased. This indicates that mirror samples cleaned face to face in the FMU simultaneously can influence each other, highlighting the need for special attention in future studies.
This paper introduces the integration of the Social Group Optimization (SGO) algorithm to enhance the accuracy of software cost estimation using the Constructive Cost Model (COCOMO). COCOMO's fixed coefficients often limit its adaptability, as they do not account for variations across organizations. By fine-tuning these parameters with SGO, we aim to improve estimation accuracy. We train and validate our SGO-enhanced model using historical project data, evaluating its performance with metrics such as the mean magnitude of relative error (MMRE) and Manhattan distance (MD). Experimental results show that SGO optimization significantly improves the predictive accuracy of software cost models, offering valuable insights for project managers and practitioners in the field. However, the approach's effectiveness may vary depending on the quality and quantity of available historical data, and its scalability across diverse project types and sizes remains a key consideration for future research.
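For context, basic COCOMO estimates effort as a·KLOC^b, where a and b are exactly the fixed coefficients an optimizer such as SGO would re-tune, and MMRE is the error metric named above. A hedged sketch under the classic organic-mode defaults, not the paper's actual model:

```python
def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO effort in person-months; the defaults are the classic
    organic-mode coefficients that an optimizer like SGO would re-tune."""
    return a * kloc ** b

def mmre(actual, predicted):
    """Mean magnitude of relative error between actual and predicted efforts."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# A 10 KLOC organic-mode project under the default coefficients:
print(round(cocomo_effort(10.0), 2))   # ~26.9 person-months
```

Fitting (a, b) to an organization's own completed projects by minimizing MMRE is the kind of objective a population-based optimizer like SGO searches over.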
Redundancy, correlation, feature irrelevance, and missing samples are just a few of the problems that make it difficult to analyze software defect data. Additionally, it can be challenging to maintain an even distribution of data for defective and non-defective software; in most experimental situations, data from the non-defective class predominate in the dataset. The objective of this review study is to demonstrate the effectiveness of combining ensemble learning and feature selection in improving the performance of defect classification. Besides the successful feature selection approach, a novel variant of the ensemble learning technique is analyzed to address the challenges of feature redundancy and data imbalance, providing robustness in the classification process. To overcome these problems and lessen their impact on fault classification performance, the authors carefully integrate effective feature selection with ensemble learning models. Forward selection demonstrates that a significant area under the receiver operating characteristic (ROC) curve can be attributed to only a small subset of features. The greedy forward selection (GFS) technique outperformed Pearson's correlation method when feature selection techniques were evaluated on the datasets. Ensemble learners, such as random forests (RF) and the proposed average probability ensemble (APE), demonstrate greater resistance to the impact of weak features than weighted support vector machines (W-SVMs) and extreme learning machines (ELMs). Furthermore, on the NASA and Java datasets, the enhanced APE model, which combines the GFS technique with the APE model, achieved remarkably high area under the ROC curve, approaching a value of 1.0 and indicating exceptional performance. This review emphasizes the importance of meticulously selecting attributes in a software dataset to accurately classify defective components. In addition, the suggested ensemble learning model successfully addressed the aforementioned problems with software data and produced outstanding classification performance.
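Greedy forward selection of the kind described above can be sketched generically: starting from an empty set, repeatedly add the feature that most improves a subset-quality score (cross-validated area under the ROC curve in the study's setting), stopping when no candidate helps. The score function below is an illustrative stand-in, not the paper's evaluator:

```python
def greedy_forward_selection(features, score_fn, max_features):
    """Add one feature at a time, always the one maximizing score_fn(subset);
    stop early when no remaining candidate improves the current score."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < max_features:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no feature improves the subset score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: reward two hypothetically informative metrics, penalize subset size.
useful = {"loc", "cyclomatic_complexity"}
score = lambda subset: len(set(subset) & useful) - 0.01 * len(subset)
print(greedy_forward_selection(
    ["loc", "churn", "cyclomatic_complexity", "authors"], score, 4))
```

Note the wrapper-style cost: each step re-scores every remaining candidate, which is why such selection is usually paired with a fast base learner.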
As an introductory course for the emerging major of big data management and application, "Introduction to Big Data" has not yet formed a curriculum standard and implementation plan that are widely accepted and used. To this end, we discuss some of our explorations and attempts in the construction and teaching of big data courses for this major, from the perspectives of course planning, course implementation, and course summary. Interviews with students and questionnaire feedback show that students are highly satisfied with many of the teaching measures and programs currently adopted.
In this paper, an overall scheme of the task management system of the ternary optical computer (TOC) is proposed, and its software architecture chart is given. The function and implementation of each module in the system are described in general. In addition, a prototype of the TOC task management system is implemented according to this scheme, and the feasibility, rationality, and completeness of the scheme are verified by running and testing the prototype.
Reconfiguration is the key to producing an applicable ternary optical computer (TOC). The method used to implement the reconfiguration function determines whether a TOC can step into applied fields. In this work, a design of the reconfiguration circuit based on a field programmable gate array (FPGA) is proposed, and the structure of the entire hardware system is discussed.
The division operation is relatively infrequent in traditional applications, but it is increasingly indispensable and important in many modern applications. In this paper, the implementation of modified signed-digit (MSD) floating-point division using the Newton-Raphson method on a ternary optical computer (TOC) is studied. Since MSD floating-point addition is carry-free and the digit width of a TOC is large, it is easy to handle sufficiently wide data and to transform the division operation into multiplication and addition operations. Using data scanning and truncation, the problem of digit expansion is effectively solved within the error limit. The division achieves good results with high efficiency, and a worked instance of MSD floating-point division shows that the method is feasible.
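The reduction of division to multiplication and addition rests on the Newton-Raphson reciprocal iteration x ← x(2 − dx), which converges quadratically to 1/d. A sketch in ordinary floating point (standing in for the TOC's carry-free MSD arithmetic), with the usual scaling of the divisor into [0.5, 1):

```python
def nr_divide(a, d, iterations=6):
    """Compute a/d via the Newton-Raphson reciprocal iteration, using only
    multiplication and addition once the initial estimate is fixed."""
    assert d != 0
    sign = -1.0 if d < 0 else 1.0
    d = abs(d)
    # Scale d into [0.5, 1) by powers of two so the standard seed converges.
    shift = 0
    while d >= 1.0:
        d /= 2.0
        shift += 1
    while d < 0.5:
        d *= 2.0
        shift -= 1
    x = 48.0 / 17.0 - 32.0 / 17.0 * d   # classic initial estimate on [0.5, 1)
    for _ in range(iterations):
        x = x * (2.0 - d * x)           # error squares on every step
    return sign * a * x / (2.0 ** shift)

print(nr_divide(10.0, 3.0))   # ~3.3333333333333335
```

Because the iteration squares the error each step, a handful of passes suffice, which is what makes the multiply-and-add formulation attractive on hardware with fast carry-free addition.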
A large number of Web APIs have been released as services in mobile communications, but the service provided by a single Web API is usually limited. To enrich the services in mobile communications, developers combine Web APIs to develop new services known as mashups. The emergence of mashups greatly increases the number of services in mobile communications, especially in mobile networks and the Internet of Things (IoT), and has encouraged companies and individuals to develop even more mashups, leading to a dramatic increase in their number. This trend brings with it big data, such as the massive text data from the mashups themselves and continually generated usage data. Thus, determining the most suitable mashups from such big data has become a challenging problem. In this paper, we propose a mashup recommendation framework for big data in mobile networks and the IoT. The framework is driven by machine learning techniques, including neural embedding, clustering, and matrix factorization. We employ neural embedding to learn distributed representations of mashups and use cluster analysis to learn the relationships among them. We also develop a novel Joint Matrix Factorization (JMF) model to complete the mashup recommendation task, for which we design a new objective function and an optimization algorithm. We then crawl a real-world large mashup dataset and perform experiments. The experimental results demonstrate that our framework achieves high accuracy in mashup recommendation and outperforms all compared baselines.
For Printed Circuit Board (PCB) surface defect detection, traditional methods mostly rely on template matching-based reference methods and manual inspection, which suffer from low detection efficiency, large errors in defect identification and localization, and low versatility. To further meet the PCB industry's requirements for high detection accuracy, real-time performance, and interactivity in actual production, we improve the You-Only-Look-Once (YOLOv4) defect detection method to train on and detect six types of small PCB defects. First, the original Cross Stage Partial Darknet53 (CSPDarknet53) backbone network is preserved for PCB defect feature extraction. Second, the original multi-layer cascade fusion is replaced with a single feature-layer structure, largely avoiding the uneven distribution of prior anchor box sizes in PCB defect detection. Then, the K-means++ clustering method is used to accurately cluster the anchor boxes to the sizes required for defect detection, further improving the recognition and localization of small PCB defects. Finally, the improved YOLOv4 defect detection model is compared with multiple algorithms on a PCB dataset. The experimental results show that the improved model reaches an average detection accuracy of 99.34%, with better detection capability and lower miss and false detection rates for PCB defects than similar defect detection algorithms.
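K-means++ differs from plain k-means only in its seeding: each new cluster centre is drawn with probability proportional to its squared distance from the nearest centre chosen so far, which is what spreads the anchor-box sizes across the scale range. A minimal sketch of that seeding step over (width, height) pairs, illustrative rather than the paper's implementation:

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    """k-means++ seeding over (w, h) box sizes: the next centre is sampled
    with probability proportional to squared distance to its nearest
    existing centre."""
    rng = rng or random.Random(0)
    centres = [rng.choice(points)]
    while len(centres) < k:
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centres)
              for px, py in points]
        r, acc = rng.random() * sum(d2), 0.0
        for point, weight in zip(points, d2):
            acc += weight
            if acc >= r:
                centres.append(point)
                break
    return centres

# Box sizes forming two well-separated groups; the second seed is
# overwhelmingly likely to land in the group the first seed missed.
boxes = [(10, 12), (11, 10), (95, 90), (98, 92)]
print(kmeans_pp_seeds(boxes, 2))
```

After seeding, ordinary k-means refinement proceeds as usual; anchor-box work often swaps the Euclidean distance for an IoU-based one, a detail omitted here.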
The earliest and most accurate detection of the pathological manifestations of hepatic diseases ensures effective treatment and thus positive prognostic outcomes. In clinical settings, screening and determining the extent of a pathology are prominent factors in preparing remedial agents and administering appropriate therapeutic procedures. Moreover, for a patient undergoing liver resection, a realistic preoperative simulation of the subject-specific anatomy and physiology also plays a vital part in conducting initial assessments, making surgical decisions during the procedure, and anticipating postoperative results. Conventionally, various medical imaging modalities, e.g., computed tomography, magnetic resonance imaging, and positron emission tomography, have been employed to assist in these tasks. In fact, several standardized procedures, such as lesion detection and liver segmentation, are incorporated into prominent commercial software packages. Thus far, however, most integrated software as a medical device involves tedious interactions from the physician, such as manual delineation and empirical adjustments for a given patient. With the rapid progress in digital health, especially medical image analysis, a wide range of computer algorithms have been proposed to facilitate these procedures. They include pattern recognition of the liver, its periphery, and lesions, as well as pre- and postoperative simulations. Prior to clinical adoption, however, software must conform to regulatory requirements set by the governing agency, for instance, valid clinical association and analytical and clinical validation. Therefore, this paper provides a detailed account and discussion of the state-of-the-art methods for liver image analysis, visualization, and simulation in the literature. Emphasis is placed upon their concepts, algorithmic classifications, merits, limitations, clinical considerations, and future research trends.
The fingerprinting-based approach using the wireless local area network (WLAN) is widely used for indoor localization. However, constructing the fingerprint database is quite time-consuming, and updating it in real time is difficult, especially when the position of an access point (AP) or a wall changes. An indoor localization approach that has low implementation cost, excellent real-time performance, and high localization accuracy, and that fully considers complex indoor environment factors, is preferred in location-based service (LBS) applications. In this paper, we propose a fine-grained grid computing (FGGC) model to achieve decimeter-level localization accuracy. Reference points (RPs) are generated in the grid by the FGGC model. Then, the received signal strength (RSS) values at each RP are calculated with attenuation factors such as the frequency band, the three-dimensional propagation distance, and walls in complex environments. As a result, the fingerprint database can be established automatically without manual measurement, and the efficiency and cost of building it with the FGGC model are superior to previous methods. The proposed approach, which estimates the position step by step from an approximate grid location to a fine-grained location, achieves high real-time performance and localization accuracy simultaneously. The mean error of the proposed model is 0.36 m, far lower than that of previous approaches. Thus, the proposed model is feasible for improving the efficiency and accuracy of Wi-Fi indoor localization, and it maintains high accuracy with fast running speed even with a large grid size. The results indicate that the proposed method is also suitable for precision marketing, indoor navigation, and emergency rescue.
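The RSS prediction at each reference point can be sketched with a log-distance path-loss model: free-space loss at 1 m for the chosen frequency, distance decay with a path-loss exponent, and a fixed penalty per intervening wall. The exponent and per-wall loss below are illustrative assumptions, not the paper's calibrated attenuation factors:

```python
import math

def predicted_rss(tx_power_dbm, distance_m, walls=0,
                  freq_mhz=2400.0, path_loss_exp=3.0, wall_loss_db=5.0):
    """Predicted RSS (dBm) at a reference point under a log-distance model
    with per-wall attenuation; the parameter values here are assumptions."""
    # Free-space path loss at 1 m (d in metres, f in MHz).
    fspl_1m = 20.0 * math.log10(freq_mhz) - 27.55
    path_loss = fspl_1m + 10.0 * path_loss_exp * math.log10(max(distance_m, 1e-3))
    return tx_power_dbm - path_loss - walls * wall_loss_db

# Signal weakens with distance and with each wall crossed:
print(predicted_rss(20.0, 5.0))            # open space, 5 m from the AP
print(predicted_rss(20.0, 5.0, walls=2))   # same distance, two walls
```

Evaluating such a model at every grid RP is what lets a fingerprint database be generated without a site survey, and regenerated cheaply when an AP or wall moves.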
With the emergence of the artificial intelligence era, robots of all kinds are increasingly used in agricultural production. However, studies on the robot task assignment problem in agriculture, which is closely related to the cost and efficiency of a smart farm, are limited. Therefore, a Multi-Weeding-Robot Task Assignment (MWRTA) problem is addressed in this paper to minimize the maximum completion time and residual herbicide. A mathematical model is set up, and a Multi-Objective Teaching-Learning-Based Optimization (MOTLBO) algorithm is presented to solve the problem. In the MOTLBO algorithm, a heuristic-based initialization, comprising an improved Nawaz-Enscore-Ham (NEH) heuristic and a maximum-load-based heuristic, is used to generate an initial population of high quality and diversity. An effective teaching-learning-based optimization process is designed with a dynamic grouping mechanism and a redefined individual updating rule, and a multi-neighborhood local search strategy balances the exploitation and exploration of the algorithm. Finally, a comprehensive experiment is conducted to compare the proposed algorithm with several state-of-the-art algorithms from the literature. Experimental results demonstrate the significant superiority of the proposed algorithm for the problem under consideration.
Optical image-based ship detection can ensure the safety of ships and promote orderly management of ships in offshore waters. Current deep learning research on optical image-based ship detection mainly focuses on improving one-stage detectors for real-time detection, but sacrifices detection accuracy. To solve this problem, we present a hybrid ship detection framework named EfficientShip. Its core parts are DLA-backboned object location (DBOL) and Cascade-RCNN-guided object classification (CROC). The DBOL is responsible for finding potential ship objects, and the CROC categorizes them. We also design a pixel-spatial-level data augmentation (PSDA) to reduce the risk of detection-model overfitting. We compare the proposed EfficientShip with state-of-the-art (SOTA) methods on a ship detection dataset called SeaShips. Experiments show that our framework achieves 99.63% mAP at 45 fps, which is much better than eight SOTA approaches in detection accuracy and also meets the requirements of real-time application scenarios.
Electric power training is essential for ensuring the safety and reliability of the power system. In this study, we introduce a novel Abnormal Action Recognition (AAR) system that utilizes a Lightweight Pose Estimation Network (LPEN) to efficiently and effectively detect abnormal fall-down and trespass incidents in electric power training scenarios. The LPEN, comprising three stages (MobileNet, an initial stage, and a refinement stage), is employed to swiftly extract image features, detect human key points, and refine them for accurate analysis. Subsequently, a Pose-aware Action Analysis Module (PAAM) captures the positional coordinates of human skeletal points in each frame. Finally, an Abnormal Action Inference Module (AAIM) evaluates whether abnormal fall-down or unauthorized trespass behavior is occurring. For fall-down recognition, three criteria are considered: falling speed, the main angles of skeletal points, and the person's bounding box. To identify unauthorized trespass, emphasis is placed on the position of the ankles. Extensive experiments validate the effectiveness and efficiency of the proposed system in ensuring the safety and reliability of electric power training.
The railway switch machine is essential for maintaining the safety and punctuality of train operations. A data-driven fault diagnosis scheme for railway switch machines, using a tensor machine and multi-representation monitoring data, is developed herein. Unlike existing methods, this approach takes into account the spatial information of the time-series monitoring data, aligning with the domain expertise of on-site manual monitoring. Moreover, a multi-sensor fusion tensor machine is designed to overcome the limitation of insufficient information in single-signal data. First, the one-dimensional signal data are preprocessed and transformed into two-dimensional images. Afterward, the fusion feature tensor is created from the images of the three-phase currents using the CANDECOMP/PARAFAC (CP) decomposition method. Then, a tensor learning-based model is built using the extracted fusion feature tensor. The developed fault diagnosis scheme is validated on a field three-phase current dataset. The experiments indicate that the developed scheme outperforms the current approach, particularly in terms of recall, precision, and F1-score.
In a cloud environment, outsourced graph data is widely used by companies, enterprises, medical institutions, and others. Data owners and users can save costs and improve efficiency by storing large amounts of graph data on cloud servers. However, servers on cloud platforms are subject to subjective or objective attacks, which leave the outsourced graph data in an insecure state, and privacy protection has become an important obstacle to data sharing and usage. How to query outsourced graph data safely and effectively has therefore become a research focus. Adjacency query is a basic and frequently used operation on graphs, and supporting multi-keyword fuzzy search at the same time effectively extends the query range and capability. This work protects the privacy of outsourced graph data through encryption, studies the problem of multi-keyword fuzzy adjacency query, and puts forward a solution. In our scheme, we use Bloom filters and an encryption mechanism to build a secure index and query tokens, and adjacency queries are executed on the cloud server through these indexes and tokens. The proposed scheme is proved secure by formal analysis, and its performance and effectiveness are illustrated by experimental analysis. The results of this work provide solid theoretical and technical support for the further popularization and application of encrypted graph data processing technology.
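The index construction rests on Bloom filters: each vertex's neighbor keywords are hashed into a bit array, giving constant-size membership tests with no false negatives (false positives are possible, which is also what makes the structure tolerant of fuzzy matching). A minimal unkeyed sketch, not the paper's secure construction:

```python
import hashlib

class BloomFilter:
    """m-bit Bloom filter with k hash positions per item."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k positions by salting a single cryptographic hash.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Index the neighbors of one vertex, then test adjacency:
neighbors_of_v1 = BloomFilter()
for v in ("v2", "v7", "v9"):
    neighbors_of_v1.add(v)
print("v7" in neighbors_of_v1)   # True: a Bloom filter never misses a member
```

In the encrypted setting, the salts would be replaced by secret keyed hashes so the server can match query tokens against the index without learning the underlying keywords.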
Visual Question Answering (VQA) has sparked widespread interest as a crucial task in integrating vision and language. VQA primarily uses attention mechanisms to associate relevant visual regions with input questions in order to answer them effectively. Detection-based features extracted by an object detection network capture the visual attention distribution over predetermined detection boxes and provide object-level insights for answering questions about foreground objects. However, they cannot answer questions about background content outside the detection boxes due to the lack of fine-grained details, which is the advantage of grid-based features. In this paper, we propose a Dual-Level Feature Embedding (DLFE) network, which effectively integrates grid-based and detection-based image features in a unified architecture to realize the complementary advantages of both. Specifically, in DLFE, a novel Dual-Level Self-Attention (DLSA) module is first proposed to mine the intrinsic properties of the two features, with Positional Relation Attention (PRA) designed to model position information. Then, we propose Feature Fusion Attention (FFA) to address the semantic noise caused by fusing the two features and construct an alignment graph to enhance and align the grid and detection features. Finally, we use co-attention to learn the interactive features of the image and question and to answer questions more accurately. Our method improves significantly over the baseline, increasing accuracy from 66.01% to 70.63% on the test-std set of VQA 1.0 and from 66.24% to 70.91% on the test-std set of VQA 2.0.
Tactile paving is a specialized road facility for ensuring the safe travel of people with visual impairment. In practice, however, tactile paving presents many problems: some paving is seriously damaged, and obstacles accumulate on it. How to help visually impaired people recognize and locate obstacles on tactile paving is a problem worth studying. In this paper, image recognition technology is used to recognize pictures of tactile paving with obstacles, and an attention mechanism is used to optimize samples and improve recognition accuracy.
To assure quality and control the process in the development of aircraft collaborative design software, a maturity assessment model is proposed. A requirements-design house of quality is designed to evaluate the maturity degree of the solution, and the evaluation results help to manage and control the development process. Furthermore, a fuzzy evaluation method based on minimum deviation is proposed to deal with fuzzy information. The quantitative evaluation of the maturity degree is calculated by optimizing the semantic discount factor for minimum deviation. Finally, the model is illustrated and analyzed through an example study of aircraft collaborative design software.
Funding: supported by the National Natural Science Foundation of China under Grant No. 61172167 and the Science Fund Project of Heilongjiang Province (LH2020F035).
Funding: supported by the National Key R&D Project of China (No. 2022YFE03030000); the National Natural Science Foundation of China (Nos. 11975269, 12275306 and 12075279); the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. 2022452); the Anhui Provincial Natural Science Foundation (No. 2208085J40); the CASHIPS Director's Fund (Nos. YZJJQY202302 and BJPY2023B03); and the Comprehensive Research Facility for Fusion Technology Program of China (No. 2018-000052-73-01-001228).
Abstract: This paper introduces the integration of the Social Group Optimization (SGO) algorithm to enhance the accuracy of software cost estimation using the Constructive Cost Model (COCOMO). COCOMO's fixed coefficients often limit its adaptability, as they do not account for variations across organizations. By fine-tuning these parameters with SGO, we aim to improve estimation accuracy. We train and validate our SGO-enhanced model using historical project data, evaluating its performance with metrics such as the mean magnitude of relative error (MMRE) and Manhattan distance (MD). Experimental results show that SGO optimization significantly improves the predictive accuracy of software cost models, offering valuable insights for project managers and practitioners in the field. However, the approach's effectiveness may vary depending on the quality and quantity of available historical data, and its scalability across diverse project types and sizes remains a key consideration for future research.
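To make the optimization target concrete, the following sketch shows the basic COCOMO effort formula with tunable coefficients and the MMRE objective a metaheuristic such as SGO would minimize. The project data and the "tuned" coefficient pair here are illustrative assumptions, not values from the paper.

```python
# Sketch: basic COCOMO with tunable coefficients (a, b) and the MMRE
# objective that a metaheuristic like SGO would minimize. The sample
# project data below is hypothetical.

def cocomo_effort(kloc, a, b):
    """Basic COCOMO: effort in person-months = a * KLOC^b."""
    return a * kloc ** b

def mmre(projects, a, b):
    """Mean Magnitude of Relative Error over (kloc, actual_effort) pairs."""
    errors = [abs(actual - cocomo_effort(kloc, a, b)) / actual
              for kloc, actual in projects]
    return sum(errors) / len(errors)

# Hypothetical historical projects: (size in KLOC, actual effort in PM).
projects = [(10, 26), (50, 145), (100, 302)]

# Standard "organic-mode" coefficients vs. a candidate pair an optimizer
# might propose; lower MMRE is better.
baseline = mmre(projects, a=2.4, b=1.05)
tuned = mmre(projects, a=2.55, b=1.02)
print(baseline, tuned)
```

An optimizer such as SGO would repeatedly propose (a, b) candidates and keep the pair with the lowest MMRE on the training projects.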
Abstract: Redundancy, correlation, feature irrelevance, and missing samples are just a few of the problems that make software defect data difficult to analyze. It can also be challenging to maintain an even distribution of data for defective and non-defective software: in most experimental situations, the non-defective class dominates the dataset. The objective of this review study is to demonstrate the effectiveness of combining ensemble learning and feature selection in improving the performance of defect classification. Besides a successful feature selection approach, a novel variant of the ensemble learning technique is analyzed to address the challenges of feature redundancy and data imbalance, providing robustness in the classification process. To overcome these problems and lessen their impact on fault classification performance, the authors carefully integrate effective feature selection with ensemble learning models. Forward selection demonstrates that a significant area under the receiver operating characteristic (ROC) curve can be attributed to only a small subset of features. The greedy forward selection (GFS) technique outperformed Pearson's correlation method when evaluating feature selection techniques on the datasets. Ensemble learners such as random forests (RF) and the proposed average probability ensemble (APE) demonstrate greater resistance to the impact of weak features than weighted support vector machines (W-SVMs) and extreme learning machines (ELM). Furthermore, on the NASA and Java datasets, the enhanced average probability ensemble model, which incorporates the greedy forward selection technique into the average probability ensemble model, achieved a remarkably high area under the ROC curve, approaching a value of 1.0 and indicating exceptional performance. This review emphasizes the importance of meticulously selecting attributes in a software dataset to accurately classify defective components. In addition, the suggested ensemble learning model successfully addressed the aforementioned problems with software data and produced outstanding classification performance.
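The greedy forward selection idea above can be sketched as follows. The scoring function stands in for the AUC of a classifier trained on the candidate subset; the toy weights and feature names are illustrative assumptions only.

```python
# Sketch of greedy forward feature selection (GFS): repeatedly add the
# single feature that most improves the score, stopping when no candidate
# helps. score_fn stands in for a real model's ROC AUC on a subset.

def greedy_forward_selection(features, score_fn, max_features=None):
    """Iteratively add the feature that most improves the score."""
    selected, best_score = [], float("-inf")
    remaining = list(features)
    while remaining and (max_features is None or len(selected) < max_features):
        candidate_scores = [(score_fn(selected + [f]), f) for f in remaining]
        new_score, best_f = max(candidate_scores)
        if new_score <= best_score:
            break  # no remaining feature improves the current subset
        selected.append(best_f)
        remaining.remove(best_f)
        best_score = new_score
    return selected, best_score

# Toy scorer: pretend features "a" and "c" are informative, others add noise.
weights = {"a": 0.30, "b": -0.05, "c": 0.20, "d": -0.02}
score = lambda subset: 0.5 + sum(weights[f] for f in subset)

subset, auc = greedy_forward_selection(["a", "b", "c", "d"], score)
print(subset, auc)  # only the informative features are selected
```

This mirrors the review's observation that a small subset of features can account for most of the achievable AUC.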
Abstract: As an introductory course for the emerging major of big data management and application, "Introduction to Big Data" has not yet formed a curriculum standard and implementation plan that is widely accepted and used. To this end, we discuss some of our explorations and attempts in the construction and teaching of big data courses for this major from the perspectives of course planning, course implementation, and course summary. Interviews with students and questionnaire feedback show that students are highly satisfied with some of the teaching measures and programs currently adopted.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61073049), the PhD Programs Foundation of the Ministry of Education of China (Grant No. 20093108110016), and the Shanghai Leading Academic Discipline Project (Grant No. J50103).
Abstract: In this paper an overall scheme of the task management system of the ternary optical computer (TOC) is proposed, and the software architecture chart is given. The function and implementation of each module in the system are described in general. In addition, a prototype of the TOC task management system is implemented according to the aforementioned scheme, and the feasibility, rationality, and completeness of the scheme are verified by running and testing the prototype.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61073049), the Shanghai Leading Academic Discipline Project (Grant No. J50103), and the Doctorate Foundation of the Education Ministry of China (Grant No. 20093108110016).
Abstract: Reconfiguration is the key to producing an applicable ternary optical computer (TOC). The method used to implement the reconfiguration function determines whether a TOC can step into applied fields. In this work, a design of the reconfiguration circuit based on a field-programmable gate array (FPGA) is proposed, and the structure of the entire hardware system is discussed.
Funding: Project supported by the Shanghai Leading Academic Discipline Project (Grant No. J50103) and the National Natural Science Foundation of China (Grant No. 61073049).
Abstract: The division operation is relatively infrequent in traditional applications, but it is increasingly indispensable and important in many modern applications. In this paper, the implementation of modified signed-digit (MSD) floating-point division using the Newton-Raphson method on a ternary optical computer (TOC) is studied. Since MSD floating-point addition is carry-free and the digit width of the TOC system is large, it is easy to handle sufficiently wide data and to transform the division operation into multiplication and addition operations. Using data scanning and truncation, the problem of digit expansion is effectively solved within the error limit. The division achieves good results with high efficiency. A worked instance of MSD floating-point division shows that the method is feasible.
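The Newton-Raphson division scheme referred to above can be sketched in ordinary binary floating point rather than the paper's MSD optical representation: a/d is computed as a times the reciprocal of d, where the reciprocal is refined by an iteration that uses only multiplication and addition. The scaling strategy and initial estimate below are the textbook choices, not necessarily those used on the TOC.

```python
# Sketch: division via Newton-Raphson reciprocal refinement, using only
# the multiply/add operations that map well onto an MSD adder. The
# iteration x_{k+1} = x_k * (2 - d * x_k) converges quadratically to 1/d.

def nr_divide(a, d, iterations=6):
    """Approximate a/d by refining an estimate of 1/d."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    sign = -1.0 if d < 0 else 1.0
    d = abs(d)
    # Scale d into [0.5, 1) so the fixed initial guess converges.
    shift = 0
    while d >= 1.0:
        d /= 2.0
        shift += 1
    while d < 0.5:
        d *= 2.0
        shift -= 1
    x = 48.0 / 17.0 - 32.0 / 17.0 * d   # classic linear initial estimate
    for _ in range(iterations):
        x = x * (2.0 - d * x)           # error roughly squares each step
    return sign * a * x / 2.0 ** shift

print(nr_divide(355.0, 113.0))  # close to 355/113 ≈ 3.1415929
```

Because the error squares at every step, a handful of iterations reaches machine precision, which is why the method suits wide-digit carry-free arithmetic.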
Funding: Supported by the National Key R&D Program of China (No. 2021YFF0901002), the National Natural Science Foundation of China (No. 61802291), the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK199900299012-025), and the Fundamental Research Funds for the Central Universities (No. JB210311).
Abstract: A large number of Web APIs have been released as services in mobile communications, but the service provided by a single Web API is usually limited. To enrich the services in mobile communications, developers combine Web APIs to create new services known as mashups. The emergence of mashups greatly increases the number of services in mobile communications, especially in mobile networks and the Internet of Things (IoT), and has encouraged companies and individuals to develop even more mashups, leading to a dramatic increase in their number. This trend brings with it big data, such as the massive text data from the mashups themselves and continually generated usage data. Thus, how to determine the most suitable mashups from big data has become a challenging problem. In this paper, we propose a mashup recommendation framework for big data in mobile networks and the IoT. The proposed framework is driven by machine learning techniques, including neural embedding, clustering, and matrix factorization. We employ neural embedding to learn distributed representations of mashups and propose cluster analysis to learn the relationships among them. We also develop a novel Joint Matrix Factorization (JMF) model to complete the mashup recommendation task, for which we design a new objective function and an optimization algorithm. We then crawl a real-world large mashup dataset and perform experiments. The experimental results demonstrate that our framework achieves high accuracy in mashup recommendation and outperforms all compared baselines.
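As background for the JMF model above, the following is a minimal sketch of plain matrix factorization for recommendation, the building block that JMF extends with joint similarity terms. The rating triples, dimensions, and hyperparameters are illustrative assumptions; the paper's actual objective and optimizer differ.

```python
# Minimal matrix-factorization sketch: learn user factors U and item
# factors V by stochastic gradient descent so that U[u] . V[i] fits the
# observed ratings. JMF would add further coupled terms to this objective.
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=400):
    random.seed(0)
    U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:          # observed (user, item, rating)
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            err = r - pred
            for f in range(k):           # gradient step with L2 penalty
                uf, vf = U[u][f], V[i][f]
                U[u][f] += lr * (err * vf - reg * uf)
                V[i][f] += lr * (err * uf - reg * vf)
    return U, V

# Hypothetical usage data: (user, item, rating) triples.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (2, 1, 2.0)]
U, V = factorize(ratings, n_users=3, n_items=2)
pred = sum(U[1][f] * V[0][f] for f in range(2))
print(round(pred, 2))  # approaches the observed rating 4.0
```

Unobserved (user, item) cells of the reconstructed matrix then serve as recommendation scores.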
Funding: This work was funded by the Natural Science Research Project of Higher Education Institutions in Jiangsu Province (No. 20KJA520007). Min Zhang receives the grant; the sponsor's website is http://jyt.jiangsu.gov.cn/.
Abstract: For printed circuit board (PCB) surface defect detection, traditional methods mostly rely on template matching against a reference and on manual inspection, which suffer from low detection efficiency, large errors in defect identification and localization, and poor versatility. To better meet the PCB industry's practical requirements for detection accuracy, real-time performance, and interactivity, in the current work we improve the You Only Look Once (YOLOv4) defect detection method to train on and detect six types of small PCB defects. First, the original Cross Stage Partial Darknet53 (CSPDarknet53) backbone network is preserved for extracting PCB defect feature information. Second, the original multi-layer cascade fusion is replaced with a single feature-layer structure, largely avoiding the problem of unevenly distributed prior anchor box sizes in PCB defect detection. Then, the K-means++ clustering method is used to accurately cluster the anchor boxes to the sizes required for defect detection, further improving the recognition and localization of small PCB defects. Finally, the improved YOLOv4 defect detection model is compared with multiple algorithms on a PCB dataset. The experimental results show that the improved model reaches an average detection accuracy of 99.34%, with better detection capability and lower miss and false detection rates for PCB defects than comparable defect detection algorithms.
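The anchor-box clustering step above can be sketched as K-means++ over the (width, height) pairs of labeled defect boxes, so the resulting anchors match the dataset's typical defect sizes. The box data below is hypothetical; YOLO implementations often cluster with an IoU-based distance, while this sketch uses plain Euclidean distance for brevity.

```python
# Sketch: K-means++ clustering of (width, height) box dimensions to derive
# detection anchors. Seeding picks each new center with probability
# proportional to its squared distance from the nearest chosen center,
# then standard Lloyd iterations refine the centers.
import random

def kmeans_pp(boxes, k, iters=20, seed=0):
    rng = random.Random(seed)
    dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centers = [rng.choice(boxes)]
    while len(centers) < k:              # K-means++ seeding
        d2 = [min(dist2(b, c) for c in centers) for b in boxes]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for b, d in zip(boxes, d2):
            acc += d
            if acc >= r:
                centers.append(b)
                break
    for _ in range(iters):               # Lloyd refinement
        groups = [[] for _ in range(k)]
        for b in boxes:
            groups[min(range(k), key=lambda i: dist2(b, centers[i]))].append(b)
        centers = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

# Hypothetical defect boxes (w, h) in pixels: small, medium, large groups.
boxes = [(8, 9), (9, 8), (10, 10), (24, 22), (25, 26), (60, 58), (62, 61)]
anchors = sorted(kmeans_pp(boxes, k=3))
print(anchors)
```

The sorted centers would then be assigned as the prior anchor sizes of the detection head.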
Abstract: The earliest and most accurate detection of the pathological manifestations of hepatic diseases ensures effective treatment and thus positive prognostic outcomes. In clinical settings, screening and determining the extent of a pathology are prominent factors in preparing remedial agents and administering appropriate therapeutic procedures. Moreover, for a patient undergoing liver resection, a realistic preoperative simulation of the subject-specific anatomy and physiology also plays a vital part in conducting initial assessments, making surgical decisions during the procedure, and anticipating postoperative results. Conventionally, various medical imaging modalities, e.g., computed tomography, magnetic resonance imaging, and positron emission tomography, have been employed to assist in these tasks. In fact, several standardized procedures, such as lesion detection and liver segmentation, are also incorporated into prominent commercial software packages. Thus far, most integrated software offered as a medical device involves tedious interaction from the physician, such as manual delineation and empirical adjustment, for each given patient. With the rapid progress in digital health approaches, especially medical image analysis, a wide range of computer algorithms have been proposed to facilitate these procedures. They include pattern recognition of the liver, its periphery, and lesions, as well as pre- and postoperative simulations. Prior to clinical adoption, however, software must conform to the regulatory requirements set by the governing agency, for instance, valid clinical association and analytical and clinical validation. Therefore, this paper provides a detailed account and discussion of the state-of-the-art methods for liver image analysis, visualization, and simulation in the literature. Emphasis is placed upon their concepts, algorithmic classifications, merits, limitations, clinical considerations, and future research trends.
基金the Open Project of Sichuan Provincial Key Laboratory of Philosophy and Social Science for Language Intelligence in Special Education under Grant No.YYZN-2023-4the Ph.D.Fund of Chengdu Technological University under Grant No.2020RC002.
Abstract: The fingerprinting-based approach using the wireless local area network (WLAN) is widely used for indoor localization. However, constructing the fingerprint database is quite time-consuming; in particular, when the position of an access point (AP) or a wall changes, updating the fingerprint database in real time is difficult. An indoor localization approach with low implementation cost, excellent real-time performance, and high localization accuracy that fully considers complex indoor environmental factors is preferred in location-based service (LBS) applications. In this paper, we propose a fine-grained grid computing (FGGC) model to achieve decimeter-level localization accuracy. Reference points (RPs) are generated in the grid by the FGGC model. Then, the received signal strength (RSS) values at each RP are calculated from attenuation factors such as the frequency band, the three-dimensional propagation distance, and the walls in complex environments. As a result, the fingerprint database can be established automatically without manual measurement, and the FGGC model builds it with better efficiency and lower cost than previous methods. The proposed approach, which estimates the position step by step from an approximate grid location to a fine-grained location, achieves higher real-time performance and localization accuracy simultaneously. The mean error of the proposed model is 0.36 m, far lower than that of previous approaches, and the model maintains high accuracy with fast running speed even under a large grid size. The results indicate that the proposed method is also suitable for precision marketing, indoor navigation, and emergency rescue.
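The idea of generating fingerprints from a propagation model rather than site surveys can be sketched with the standard log-distance path-loss model plus a fixed per-wall penalty. All constants here (reference RSS, path-loss exponent, wall attenuation) are assumptions for illustration, not the FGGC model's calibrated values.

```python
# Sketch: computing RSS fingerprints at grid reference points from a
# log-distance path-loss model with per-wall attenuation, in the spirit
# of model-generated (survey-free) fingerprint databases.
import math

def rss_dbm(ap, point, tx_dbm=-30.0, n=2.8, wall_db=5.0, walls=0):
    """RSS via the log-distance model.

    tx_dbm: RSS at 1 m from the AP; n: path-loss exponent;
    walls: number of walls crossed, each costing wall_db dB.
    """
    d = max(math.dist(ap, point), 1.0)  # 3-D distance, clamped at 1 m
    return tx_dbm - 10.0 * n * math.log10(d) - wall_db * walls

# Build fingerprints on a 1 m grid (receiver height 1 m) for one AP
# mounted at 2.5 m; a real database would sum contributions of all APs.
ap = (0.0, 0.0, 2.5)
fingerprints = {
    (x, y): rss_dbm(ap, (float(x), float(y), 1.0))
    for x in range(5) for y in range(5)
}
print(fingerprints[(4, 4)])
```

Online localization then matches a measured RSS vector against these precomputed grid fingerprints, first coarsely and then at finer granularity.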
Funding: Supported by the National Natural Science Foundation of China (Nos. 62273221 and 61973203), the Program of Shanghai Academic/Technology Research Leader (No. 21XD1401000), and the Shanghai Key Laboratory of Power Station Automation Technology.
Abstract: With the emergence of the artificial intelligence era, robots of all kinds are increasingly used in agricultural production. However, studies on the robot task assignment problem in agriculture, which is closely related to the cost and efficiency of a smart farm, are limited. Therefore, a Multi-Weeding-Robot Task Assignment (MWRTA) problem is addressed in this paper to minimize the maximum completion time and residual herbicide. A mathematical model is set up, and a Multi-Objective Teaching-Learning-Based Optimization (MOTLBO) algorithm is presented to solve the problem. In the MOTLBO algorithm, a heuristic-based initialization, comprising an improved Nawaz-Enscore-Ham (NEH) heuristic and a maximum-load-based heuristic, is used to generate an initial population of high quality and diversity. An effective teaching-learning-based optimization process is designed with a dynamic grouping mechanism and a redefined individual updating rule, and a multi-neighborhood-based local search strategy balances the exploitation and exploration of the algorithm. Finally, a comprehensive experiment compares the proposed algorithm with several state-of-the-art algorithms from the literature. Experimental results demonstrate the significant superiority of the proposed algorithm for the problem under consideration.
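The NEH heuristic mentioned above can be sketched in its classic flow-shop form: order jobs by decreasing total processing time, then insert each job at the position that minimizes the makespan of the partial schedule. The paper adapts NEH to weeding-robot task assignment; this is the textbook variant on hypothetical data.

```python
# Sketch: classic NEH constructive heuristic for flow-shop makespan.
# p[job][machine] holds processing times; makespan() is the standard
# completion-time recursion over a job sequence.

def makespan(seq, p):
    """Completion time of the last job on the last machine."""
    m = len(p[0])
    finish = [0.0] * m
    for j in seq:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k > 0 else 0.0)
            finish[k] = start + p[j][k]
    return finish[-1]

def neh(p):
    # Step 1: jobs in decreasing order of total processing time.
    order = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in order:
        # Step 2: try j in every slot, keep the best partial schedule.
        seq = min(
            (seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
            key=lambda s: makespan(s, p),
        )
    return seq, makespan(seq, p)

# Hypothetical instance: 4 jobs on 2 machines.
p = [[3, 4], [2, 7], [6, 2], [4, 4]]
seq, cmax = neh(p)
print(seq, cmax)
```

A maximum-load-based variant for task assignment would analogously insert tasks onto the robot whose current load is smallest.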
Funding: This work was supported by the Outstanding Youth Science and Technology Innovation Team Project of Colleges and Universities in Hubei Province (Grant No. T201923), the Key Science and Technology Project of Jingmen (Grant Nos. 2021ZDYF024 and 2022ZDYF019), the LIAS Pioneering Partnerships Award, UK (Grant No. P202ED10), the Data Science Enhancement Fund, UK (Grant No. P202RE237), and the Cultivation Project of Jingchu University of Technology (Grant No. PY201904).
Abstract: Optical-image-based ship detection can ensure the safety of ships and promote the orderly management of ships in offshore waters. Current deep learning research on optical-image-based ship detection mainly focuses on improving one-stage detectors for real-time operation, but this sacrifices detection accuracy. To solve this problem, we present a hybrid ship detection framework named EfficientShip. Its core parts are DLA-backboned object location (DBOL) and CascadeRCNN-guided object classification (CROC): DBOL is responsible for finding potential ship objects, and CROC categorizes them. We also design a pixel-spatial-level data augmentation (PSDA) to reduce the risk of detection model overfitting. We compare the proposed EfficientShip with state-of-the-art (SOTA) approaches on a ship detection dataset called SeaShips. Experiments show that our framework achieves 99.63% mAP at 45 fps, much better than eight SOTA approaches in detection accuracy, while also meeting the requirements of real-time application scenarios.
Funding: Supported by the Natural Science Foundation of Jiangsu Province (No. BK20230696).
Abstract: Electric power training is essential for ensuring the safety and reliability of the system. In this study, we introduce a novel Abnormal Action Recognition (AAR) system that utilizes a Lightweight Pose Estimation Network (LPEN) to efficiently and effectively detect abnormal fall-down and trespass incidents in electric power training scenarios. The LPEN network, comprising three stages (MobileNet, Initial Stage, and Refinement Stage), is employed to swiftly extract image features, detect human key points, and refine them for accurate analysis. Subsequently, a Pose-aware Action Analysis Module (PAAM) captures the positional coordinates of human skeletal points in each frame. Finally, an Abnormal Action Inference Module (AAIM) evaluates whether abnormal fall-down or unauthorized trespass behavior is occurring. For fall-down recognition, three criteria are considered: falling speed, the main angles of skeletal points, and the person's bounding box. To identify unauthorized trespass, emphasis is placed on the position of the ankles. Extensive experiments validate the effectiveness and efficiency of the proposed system in ensuring the safety and reliability of electric power training.
Funding: Supported by the National Key Research and Development Program of China under Grant 2022YFB4300504-4 and the HKRGC Research Impact Fund under Grant R5020-18.
Abstract: The railway switch machine is essential for maintaining the safety and punctuality of train operations. A data-driven fault diagnosis scheme for the railway switch machine using tensor machines and multi-representation monitoring data is developed herein. Unlike existing methods, this approach takes into account the spatial information of the time-series monitoring data, aligning with the domain expertise of on-site manual monitoring. In addition, a multi-sensor fusion tensor machine is designed to overcome the limited information of single-signal data. First, the one-dimensional signal data are preprocessed and transformed into two-dimensional images. Afterward, the fusion feature tensor is created from the images of the three-phase currents using the CANDECOMP/PARAFAC (CP) decomposition method. Then, a tensor learning-based model is built on the extracted fusion feature tensor. The developed fault diagnosis scheme is validated with a field three-phase current dataset. The experiments indicate that the developed scheme outperforms the current approach, particularly in terms of recall, precision, and F1-score.
Funding: This research was supported in part by the Natural Science Foundation of China (Nos. 62262033, 61962029, 61762055, 62062045, and 62362042), the Jiangxi Provincial Natural Science Foundation of China (Nos. 20224BAB202012, 20202ACBL202005, and 20202BAB212006), the Science and Technology Research Project of the Jiangxi Education Department (Nos. GJJ211815, GJJ2201914, and GJJ201832), the Hubei Natural Science Foundation Innovation and Development Joint Fund Project (No. 2022CFD101), the Xiangyang High-Tech Key Science and Technology Plan Project (No. 2022ABH006848), the Hubei Superior and Distinctive Discipline Group of "New Energy Vehicle and Smart Transportation", the Project of Zhejiang Institute of Mechanical & Electrical Engineering, and the Jiangxi Provincial Social Science Foundation of China (No. 23GL52D).
Abstract: In cloud environments, outsourced graph data is widely used by companies, enterprises, medical institutions, and others. Data owners and users can save costs and improve efficiency by storing large amounts of graph data on cloud servers. However, servers on cloud platforms are subject to various subjective and objective attacks, which leave outsourced graph data in an insecure state, and privacy protection has become an important obstacle to data sharing and usage. How to query outsourced graph data safely and effectively has therefore become a research focus. The adjacency query is a basic and frequently used graph operation, and supporting multi-keyword fuzzy search at the same time effectively extends its query range and capability. This work proposes to protect the privacy of outsourced graph data by encryption, mainly studies the problem of multi-keyword fuzzy adjacency queries, and puts forward a solution. In our scheme, we use a Bloom filter and an encryption mechanism to build a secure index and query tokens, and adjacency queries are executed on the cloud server through the index and tokens. The proposed scheme is proved secure by formal analysis, and its performance and effectiveness are illustrated by experimental analysis. The results of this work provide solid theoretical and technical support for the further popularization and application of encrypted graph data processing.
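The Bloom-filter building block of the secure index above can be sketched as follows: keywords of a vertex's adjacency list are hashed into a bit array, and a query probes the same positions. The actual scheme additionally encrypts the index and tokens; this shows only the plaintext membership test, with hash count and filter size chosen arbitrarily.

```python
# Sketch: a Bloom filter as the membership structure behind a searchable
# adjacency index. k independent positions are derived per item by
# salting SHA-256; membership checks probe the same k bit positions.
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # No false negatives; false positives occur with probability
        # roughly (1 - e^(-k*n/m))^k for n inserted items.
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# Index the neighbor identifiers of a hypothetical vertex v1.
index_v1 = BloomFilter()
for neighbor in ["v2", "v7", "v9"]:
    index_v1.add(neighbor)

print(index_v1.might_contain("v7"), index_v1.might_contain("v5"))
```

In the encrypted setting, the filter contents and the probed positions are what the index and the query token would carry in protected form.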
Abstract: Visual Question Answering (VQA) has sparked widespread interest as a crucial task in integrating vision and language. VQA primarily uses attention mechanisms to associate relevant visual regions with input questions and thereby answer them effectively. Detection-based features extracted by an object detection network capture the visual attention distribution over predetermined detection frames and provide object-level insights that help answer questions about foreground objects. However, they cannot answer questions about background forms outside the detection boxes, owing to a lack of fine-grained detail, which is the advantage of grid-based features. In this paper, we propose a Dual-Level Feature Embedding (DLFE) network, which effectively integrates grid-based and detection-based image features in a unified architecture to realize the complementary advantages of both. Specifically, in DLFE, a novel Dual-Level Self-Attention (DLSA) module is first proposed to mine the intrinsic properties of the two features, where Positional Relation Attention (PRA) is designed to model position information. We then propose Feature Fusion Attention (FFA) to address the semantic noise caused by fusing the two features and construct an alignment graph to enhance and align the grid and detection features. Finally, we use co-attention to learn interactive features of the image and question and answer questions more accurately. Our method significantly improves on the baseline, increasing accuracy from 66.01% to 70.63% on the test-std set of VQA 1.0 and from 66.24% to 70.91% on the test-std set of VQA 2.0.
Funding: Supported by the Jiangsu Province College Student Innovation Training Program (Project No. 20221127684Y) and the Talent Startup Project of Nanjing Institute of Technology (Project No. YKJ202117).
Abstract: Tactile paving is a specialized road facility for ensuring the safe travel of people with visual impairment. In practice, however, tactile paving presents many problems: some of it is seriously damaged, and obstacles accumulate on it. How to help visually impaired people recognize and locate obstacles on tactile paving is a problem worth studying. In this paper, image recognition technology is used to recognize pictures of tactile paving with obstacles, and an attention mechanism is used to optimize samples and improve recognition accuracy.
Funding: Supported by the National Natural Science Foundation for Youth of China (61802174), the Natural Science Foundation for Youth of Jiangsu Province (BK20181016), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (18KJB520019), and the Scientific Research Foundation of Nanjing Institute of Technology of China (YKJ201614).
Abstract: To assure quality and control the process in the development of aircraft collaborative design software, a maturity assessment model is proposed. A requirements-design house of quality is constructed to evaluate the maturity degree of the solution, and the evaluation results help to manage and control the development process. Furthermore, a fuzzy evaluation method based on minimum deviation is proposed to deal with fuzzy information: the quantitative maturity degree is calculated by optimizing the semantic discount factor for minimum deviation. Finally, the model is illustrated and analyzed through an example study of aircraft collaborative design software.