Abstract: The concepts of complementary cofactor pairs, normal double-graphs, and feasible torn vertex sets are introduced. Using them, a decomposition theorem for the first-order cofactor C(Y) is derived. Combining this theorem with the modified double-graph method, a new decomposition analysis, the modified double-graph decomposition analysis, is presented for finding symbolic network functions. Its advantages are that the resulting symbolic expressions are compact and contain no cancellation terms, and that sign evaluation is very simple.
Funding: Supported by the National Natural Science Foundation of China (No. 40574059) and the Ministry of Education (No. NCET-04-0904).
Abstract: In this paper we discuss neural network-based matrix effect correction in energy-dispersive X-ray fluorescence (EDXRF) analysis, with a detailed algorithm for classifying the samples. The method corrects the matrix effect effectively by classifying the samples automatically, and the influence of X-ray absorption and enhancement by the major elements of the samples is reduced. Experiments on complex matrix effect correction in EDXRF analysis of samples from Pangang showed improved accuracy of the elemental analysis results.
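The abstract does not spell out the classification step; the following is a minimal illustrative sketch (not the authors' algorithm) of one plausible scheme: group samples by a nearest-centroid rule on major-element concentrations, then apply a per-class linear calibration. All data, class labels, and calibration coefficients below are hypothetical.

```python
import numpy as np

# Hypothetical training data: rows are samples, columns are major-element
# concentrations (e.g. Fe, Ti, V); labels are known matrix classes.
X_train = np.array([[60.0, 10.0, 0.3],
                    [58.0, 11.0, 0.4],
                    [30.0, 25.0, 1.0],
                    [28.0, 27.0, 1.1]])
y_train = np.array([0, 0, 1, 1])

# Class centroids: mean composition of each matrix class.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    """Assign a sample to the matrix class with the nearest centroid."""
    d = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(d))

# Per-class linear calibration (slope, intercept), fitted elsewhere; once a
# sample is classified, the matching calibration corrects its raw intensity.
calib = {0: (1.05, -0.2), 1: (0.92, 0.1)}

sample = np.array([59.0, 10.5, 0.35])
cls = classify(sample)
slope, intercept = calib[cls]
corrected = slope * 12.0 + intercept  # 12.0: hypothetical raw intensity
```

Automatic classification of this kind lets each calibration be fitted within a narrow composition range, which is where the matrix-effect reduction comes from.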
Abstract: The organizational risk analyzer (ORA) tool system is selected to study the network of East Turkistan terrorists. The relationships among its personnel, knowledge, resources, and task entities are represented by the meta-matrix in ORA, which is used to analyze the risks and vulnerabilities of the organizational structure quantitatively and to obtain the key vulnerabilities and risks of the organization. A case study in this system shows that recognizing the core members of the terrorist organization first, and eliminating them when striking the organization, is an effective shortcut to destroying the terrorist network. It is vital to ensure effective use of resources and to control the risks of terrorist attacks.
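ORA computes a range of network measures from the meta-matrix; as a loose illustration of the idea of recognizing core members, here is a minimal sketch of degree centrality on a hypothetical person-to-person adjacency matrix (this is not ORA's implementation).

```python
import numpy as np

# Hypothetical person-by-person communication network (symmetric adjacency).
A = np.array([[0, 1, 1, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 1, 0]])

# Degree centrality: fraction of the other actors each actor is tied to.
n = A.shape[0]
degree_centrality = A.sum(axis=1) / (n - 1)

# The actor with the highest centrality is a candidate "core" member.
core = int(np.argmax(degree_centrality))
```

Removing the highest-centrality actor disconnects the most ties, which is the quantitative intuition behind targeting core members first.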
Funding: Supported by HKRGC GRF 12306616, 12200317, 12300218 and 12300519, and HKU Grant 104005583.
Abstract: Tensor robust principal component analysis has received substantial attention in various fields. Most existing methods, which normally rely on tensor nuclear norm minimization, incur a high computational cost due to multiple singular value decompositions at each iteration. To overcome this drawback, we propose a scalable and efficient method, named parallel active subspace decomposition, which divides the unfolding along each mode of the tensor into a column-wise orthonormal matrix (the active subspace) and another small matrix, in parallel. This transformation leads to a nonconvex optimization problem in which the scale of the nuclear norm minimization is generally much smaller than in the original problem. We solve the optimization problem by an alternating direction method of multipliers and show that the iterates converge within the given stopping criterion and that the convergent solution is close to the globally optimal solution within the prescribed bound. Experimental results demonstrate that the proposed model performs better than state-of-the-art methods.
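The splitting of each unfolding into a column-orthonormal factor and a small matrix can be illustrated with a thin QR factorization; the sketch below shows only that step on a toy tensor, not the paper's full ADMM scheme, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))   # a small third-order tensor

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# For each mode, factor the unfolding X = Q @ R with Q column-orthonormal;
# the modes are independent, so in practice this loop can run in parallel.
factors = []
for mode in range(T.ndim):
    X = unfold(T, mode)
    Q, R = np.linalg.qr(X)           # thin QR: Q is column-orthonormal
    factors.append((Q, R))

Q0, R0 = factors[0]
recon_err = np.linalg.norm(unfold(T, 0) - Q0 @ R0)
orth_err = np.linalg.norm(Q0.T @ Q0 - np.eye(Q0.shape[1]))
```

Any nuclear-norm step then operates on the small factor rather than the full unfolding, which is the source of the claimed speedup.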
Funding: Supported by the General Research Fund from the Research Grants Council of Hong Kong (Project No. CUHK4180/10E) and the National Basic Research Program of China (973 Program) (No. 2009CB825404).
Abstract: One paper in a preceding issue of this journal introduced Bayesian Ying-Yang (BYY) harmony learning from the perspective of problem solving, parameter learning, and model selection. In a complementary role, this paper provides further insights from another perspective: a co-dimensional matrix pair (co-dim matrix pair for short) forms a building unit, and a hierarchy of such building units sets up the BYY system. BYY harmony learning is re-examined by exploring the nature of a co-dim matrix pair, which leads to improved learning performance with refined model selection criteria and a modified mechanism that coordinates automatic model selection and sparse learning. Besides updating typical algorithms of factor analysis (FA), binary FA (BFA), binary matrix factorization (BMF), and nonnegative matrix factorization (NMF) to share such a mechanism, we are also led to (a) a new parametrization that embeds a de-noising nature into Gaussian mixtures and local FA (LFA); (b) an alternative formulation of graph-Laplacian-based linear manifold learning; (c) a co-decomposition of data and covariance for learning regularization and data integration; and (d) a co-dim matrix pair based generalization of temporal FA and the state space model. Moreover, with the help of a co-dim matrix pair in Hadamard product, we are led to a semi-supervised formulation for regression analysis and a semi-blind learning formulation for temporal FA and the state space model. Furthermore, these advances provide new tools for network biology studies, including learning transcriptional regulatory networks, Protein-Protein Interaction network alignment, and network integration.
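Of the algorithms listed, NMF is the easiest to sketch; below are the classical Lee-Seung multiplicative updates in numpy, shown purely as background (the paper's BYY-harmony variants modify this mechanism and are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((6, 8))          # nonnegative data matrix
k = 3                           # factorization rank
W = rng.random((6, k)) + 0.1    # positive initialization
H = rng.random((k, 8)) + 0.1

eps = 1e-9                      # guards against division by zero
for _ in range(200):
    # Lee-Seung multiplicative updates keep W and H nonnegative
    # while monotonically decreasing the Frobenius reconstruction error.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form is what makes nonnegativity self-maintaining: each factor is rescaled elementwise rather than moved along a signed gradient.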
Funding: Co-supported by the National Defense Outstanding Youth Science Foundation, China (No. 2018-JCJQZQ-053) and the Natural Science Foundation of Jiangsu Province, China (No. BK20220911).
Abstract: The architecture strategy of the Unmanned Aerial Vehicle (UAV) pneumatic launch system should continue to evolve to adapt to complex and variable operating environments. Architecture representation, decomposition perspective, and cluster analysis play a vital role in the early phase of system architecture development. So that the system exhibits the anticipated and desirable intrinsic functional properties, this paper puts forward an architecture decomposition method based on the Object-Process Methodology (OPM) and the Design Structure Matrix (DSM). The OPM is used to model the UAV launch process formally, and a matrix representation of the architecture of the pneumatic launch system is established. After extending the definition and operations of the DSM, and applying the Idicula-Gutierrez-Thebeau Algorithm plus (IGTA+) clustering algorithm, the transformation of the pneumatic launch system architecture from a process decomposition to a function decomposition is demonstrated. The analysis shows that the architecture decomposition of the pneumatic launch system meets the functional requirements of the stakeholders.
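The IGTA+ clustering used in the paper is not reproduced here; as a minimal illustration of DSM-based grouping, the sketch below symmetrizes a hypothetical binary DSM and clusters its elements by connected components.

```python
import numpy as np

# Hypothetical binary DSM: entry (i, j) = 1 means element i depends on j.
dsm = np.array([[0, 1, 0, 0, 0],
                [1, 0, 0, 0, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 1, 0, 1],
                [0, 0, 1, 1, 0]])

# Symmetrize: treat any interaction as an undirected tie, then group
# elements into clusters as connected components (depth-first search).
sym = ((dsm + dsm.T) > 0).astype(int)
n = sym.shape[0]
seen, clusters = set(), []
for start in range(n):
    if start in seen:
        continue
    stack, comp = [start], []
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        comp.append(v)
        stack.extend(int(u) for u in np.flatnonzero(sym[v]))
    clusters.append(sorted(comp))
```

Real DSM clustering algorithms such as IGTA+ additionally trade off intra-cluster coordination cost against cluster size; connected components only capture the limiting case where any interaction forces co-membership.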
基金Project (No. PIFI-2012 U. de Gto.) supported by the Secretariat of Public Education (SEP), Mexico
Abstract: Structural health monitoring (SHM) is a relevant topic for civil systems and involves monitoring, data processing, and interpretation to evaluate the condition of a structure in order to detect damage. In real structures, two or more sites or types of damage can be present at the same time. It has been shown that one kind of damaged condition can interfere with the detection of another, leading to an incorrect assessment of the structure's condition. Identifying combined damage in structures still represents a challenge for condition monitoring, because the reliable identification of a combined damaged condition is a difficult task. Thus, this work presents a fusion of methodologies in which the wavelet-packet method and the empirical mode decomposition (EMD) method are combined with artificial neural networks (ANNs) for the automated, online identification and location of single or multiple combined damage in a scaled model of a five-bay truss-type structure. Results showed that the proposed methodology is very efficient and reliable for identifying and locating the three kinds of damage, as well as their combinations. Therefore, this methodology could be applied to the detection and location of damage in real truss-type structures, which would help to improve the characteristics and life span of real structures.
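As a rough illustration of the wavelet-based feature step (the actual work uses deeper wavelet packets plus EMD), here is a one-level Haar decomposition in numpy whose band energies could serve as damage-sensitive ANN inputs; the signal is synthetic.

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar transform: approximation + detail."""
    x = np.asarray(x, dtype=float)
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

# Hypothetical vibration signal: a slow oscillation plus a fast ripple
# standing in for a damage-induced high-frequency component.
t = np.arange(256)
signal = np.sin(2 * np.pi * t / 64) + 0.2 * np.sin(2 * np.pi * t / 4)

approx, detail = haar_level(signal)
# Band energies are simple damage-sensitive features for an ANN.
features = np.array([np.sum(approx ** 2), np.sum(detail ** 2)])
total = np.sum(signal ** 2)
```

Because the transform is orthonormal, the band energies partition the signal energy exactly, so a shift of energy toward the detail band is directly interpretable.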
Funding: This study was supported by the Young Beijing Scholars Program and the Beijing Agricultural Forestry Academy Foundation (QNJJ202218).
Abstract: This study established back-propagation neural networks (BPNNs) for evaluating the freshness of bighead carp (Hypophthalmichthys nobilis) heads during chilled storage via fluorescence spectroscopy using an excitation-emission matrix (EEM). The total volatile basic nitrogen (TVB-N) and total aerobic count (TAC) of the fish increased markedly during storage at 0, 4, 8, 12, and 16°C, while sensory scores decreased with increasing storage time. The EEM fluorescence intensity was measured, and its change was correlated with the freshness indicators of the samples. Three characteristic components of the EEM data were extracted by parallel factor analysis, and the two freshness indicators were used to construct the EEM-BPNNs model. The results demonstrated that the relative errors of the EEM-BPNNs model for TVB-N and TAC were less than 14%. This result indicated that the EEM-BPNNs model could determine the freshness of fish in cold chains in a rapid and nondestructive way.
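A BPNN of the kind described can be sketched as a one-hidden-layer network trained by backpropagation; the toy example below maps three hypothetical PARAFAC component scores to a synthetic freshness indicator and is not fitted to the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs: three PARAFAC component scores per sample; the target
# is a synthetic "TVB-N-like" freshness indicator (all values invented).
X = rng.random((40, 3))
y = (X @ np.array([2.0, -1.0, 0.5]))[:, None]

# One hidden tanh layer with a linear output, trained by batch gradient
# descent; the weight updates below are the backpropagation equations.
W1 = rng.standard_normal((3, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

mse_init = float(np.mean((forward(X)[1] - y) ** 2))
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                       # output-layer error signal
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1] - y) ** 2))
```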
Abstract: In recent years, aluminum-matrix composites (AMCs) have been widely used to replace cast iron in the aerospace and automotive industries. Machining of these composite materials requires a better understanding of the cutting process with regard to accuracy and efficiency. This study addresses the modeling of the machinability of self-lubricated aluminum/alumina/graphite hybrid composites synthesized by the powder metallurgy method. Multiple regression analysis (MRA) and artificial neural networks (ANNs) were used to investigate the influence of several parameters on the thrust force and torque in the drilling of self-lubricated hybrid composite materials. The models were identified by using cutting speed, feed, and volume fraction of the reinforcement particles as input data and the thrust force and torque as output data. The two prediction methods were compared for prediction accuracy. ANNs showed better predictability than MRA due to the nonlinear nature of ANNs. The statistical analysis, together with the artificial neural network results, showed that Al2O3, Gr, and cutting feed (f) were the most significant parameters in the drilling process, while spindle speed appeared insignificant. Since the spindle speed was insignificant, it can be set either at the highest value to obtain a high material removal rate or at the lowest value to prolong tool life, depending on the application.
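The MRA step amounts to ordinary least squares on the three inputs; a minimal numpy sketch on synthetic drilling data (all coefficients and ranges invented) is:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical drilling data: cutting speed, feed, and reinforcement volume
# fraction as inputs; thrust force as the response (noisy linear model).
n = 50
speed = rng.uniform(500, 3000, n)
feed = rng.uniform(0.05, 0.3, n)
vol = rng.uniform(0.0, 0.1, n)
thrust = 50 + 0.002 * speed + 400 * feed + 300 * vol + rng.normal(0, 1, n)

# Multiple regression: least squares for [intercept, b_speed, b_feed, b_vol].
A = np.column_stack([np.ones(n), speed, feed, vol])
coef, *_ = np.linalg.lstsq(A, thrust, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((thrust - pred) ** 2) / np.sum((thrust - np.mean(thrust)) ** 2)
```

On a truly nonlinear process the linear fit underperforms, which is the gap the ANN model closes in the study.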
Funding: This research was funded by the National Natural Science Foundation of China (Nos. 71762010, 62262019, 62162025, 61966013, 12162012), the Hainan Provincial Natural Science Foundation of China (Nos. 823RC488, 623RC481, 620RC603, 621QN241, 620RC602, 121RC536), the Haikou Science and Technology Plan Project of China (No. 2022-016), and a project supported by the Education Department of Hainan Province (No. Hnky2021-23).
Abstract: Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% on the DR detection and evaluation tests. Our approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
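The majority-voting combination of the three classifiers reduces to a column-wise vote over their predictions; a minimal sketch with hypothetical outputs:

```python
import numpy as np

# Hypothetical per-classifier predictions (0 = no VTDR, 1 = VTDR) for five
# test images from three base classifiers (SVM-RBF, DT, KNN stand-ins).
preds = np.array([[1, 1, 0, 0, 1],    # classifier A
                  [1, 0, 0, 1, 1],    # classifier B
                  [0, 1, 0, 1, 1]])   # classifier C

# Majority vote: a sample is positive when at least two of three agree.
votes = preds.sum(axis=0)
final = (votes >= 2).astype(int)
```

With an odd number of voters there are no ties, and a single erratic classifier cannot flip a sample on which the other two agree.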
Abstract: Although singular spectral analysis (SSA) is a powerful tool for analysing time series of different physical processes, the processing of large geophysical data sets requires more time and is computationally expensive. In particular, for the singular value decomposition (SVD) of a large trajectory matrix, the processing units require huge memory and a high-performance computing system. In the present work, we propose an alternative scheme based on windowed singular spectral analysis (WSSA), which is robust for analysing long data sets without losing any valuable low-frequency information contained in the data. The scheme reduces the floating-point operations in the SVD computations, as the trajectory matrix is small in windowed processing. To test its efficiency, the authors applied the proposed method to two geophysical data sets: a climatic record with 30,000 data points and a seismic reflection trace with 8,000 data points. The authors have shown that, without distorting any physical information, the low-frequency contents of the data are well preserved after windowed processing in both cases.
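A minimal numpy sketch of the windowed scheme: apply basic SSA (trajectory matrix, SVD, diagonal averaging) to fixed-length segments so each SVD stays small; the window lengths and test signal below are our own choices, not the paper's.

```python
import numpy as np

def ssa_reconstruct(x, L, r=2):
    """Reconstruct the top-r SSA components of series x with window L."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of x.
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r approximation
    # Diagonal averaging (Hankelization) maps Xr back to a series.
    recon = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        recon[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return recon / counts

# Windowed SSA: process a long series in segments so each SVD stays small.
rng = np.random.default_rng(4)
t = np.arange(600)
clean = np.sin(2 * np.pi * t / 100)
series = clean + 0.1 * rng.standard_normal(600)

seg = 200                                    # outer window of the scheme
denoised = np.concatenate([ssa_reconstruct(series[i:i + seg], L=50)
                           for i in range(0, len(series), seg)])
```

Each segment's trajectory matrix is 50 by 151 instead of the full-series size, which is where the floating-point savings come from; the low-frequency oscillation survives because it is captured within every window.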