The concept of WALKING on structures is proposed, and a partial ordering between a structure and a query structure (substructure) is also defined by means of WALKING. Based on these concepts, the authors develop a Heuristic-Backtracking Algorithm (HBA) for structural matching with high performance. The last part of the paper discusses applications of HBA in molecular graphics, synthetic planning, spectrum simulation, and the representation and recognition of general structures.
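To make the matching idea concrete, the sketch below shows a generic backtracking substructure matcher on labeled graphs with a simple degree-based heuristic for ordering query atoms. It only illustrates the heuristic-plus-backtracking idea; it is not the paper's HBA, the WALKING-based partial ordering is not reproduced, and all names and toy fragments are hypothetical.

```python
# A minimal, generic backtracking substructure matcher on labeled graphs.
# NOT the paper's HBA; it only illustrates "order candidates by a heuristic,
# extend a partial mapping, backtrack on failure".

def match_substructure(query, target):
    """query/target: dicts {node: (label, set_of_neighbors)}. Returns one mapping or None."""
    # Heuristic: try the most constrained (highest-degree) query nodes first.
    order = sorted(query, key=lambda n: -len(query[n][1]))

    def feasible(qn, tn, mapping):
        if query[qn][0] != target[tn][0]:          # labels must agree
            return False
        for qnb in query[qn][1]:                   # mapped query neighbors must stay adjacent
            if qnb in mapping and mapping[qnb] not in target[tn][1]:
                return False
        return True

    def extend(i, mapping, used):
        if i == len(order):
            return dict(mapping)
        qn = order[i]
        for tn in target:
            if tn not in used and feasible(qn, tn, mapping):
                mapping[qn] = tn
                used.add(tn)
                found = extend(i + 1, mapping, used)
                if found is not None:
                    return found
                del mapping[qn]                    # backtrack
                used.discard(tn)
        return None

    return extend(0, {}, set())

# Example: match a hypothetical C-O query fragment against a C-C-O chain.
q = {0: ('C', {1}), 1: ('O', {0})}
t = {0: ('C', {1}), 1: ('C', {0, 2}), 2: ('O', {1})}
print(match_substructure(q, t))   # e.g. {0: 1, 1: 2}
```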
The firework algorithm (FWA) is a novel swarm intelligence-based method recently proposed for the optimization of multi-parameter, nonlinear functions. Numerical waveform inversion experiments using a synthetic model show that the FWA performs well in both solution quality and efficiency. In this study we apply the FWA to crustal velocity structure inversion using regional seismic waveform data from central Gansu on the northeastern margin of the Qinghai-Tibet plateau. Seismograms recorded from the moment magnitude (MW) 5.4 Minxian earthquake enable obtaining an average crustal velocity model for this region. We initially carried out a series of FWA robustness tests in regional waveform inversion using the same earthquake and station positions across the study region, inverting two velocity structure models, with and without a low-velocity crustal layer; the accuracy of our average inversion results and their standard deviations reveal the advantages of the FWA for the inversion of regional seismic waveforms. We then applied the FWA across our study area using three-component waveform data recorded by nine broadband permanent seismic stations with epicentral distances ranging between 146 and 437 km. These inversion results show that the average thickness of the crust in this region is 46.75 km, while the thicknesses of the sedimentary layer and the upper, middle, and lower crust are 3.15, 15.69, 13.08, and 14.83 km, respectively. Results also show that the P-wave velocities of these layers and the upper mantle are 4.47, 6.07, 6.12, 6.87, and 8.18 km/s, respectively.
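For readers unfamiliar with the optimizer, the following is a simplified sketch of a fireworks-style search loop under the canonical FWA assumptions (better fireworks spawn more sparks with smaller explosion amplitudes). The spark-selection step is deliberately simplified, and the objective is a toy stand-in, not the waveform misfit used in the study.

```python
import numpy as np

# Simplified fireworks-algorithm (FWA) sketch for minimizing f over a box.
# Assumes the canonical scheme: better (lower-misfit) fireworks get more sparks
# with smaller explosion amplitudes. Selection is simplified (best + random).

def fwa_minimize(f, lo, hi, n_fireworks=5, n_sparks=30, max_amp=40.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(n_fireworks, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in pop])
        worst, best = fit.max(), fit.min()
        eps = 1e-12
        # More sparks and smaller amplitude for better fireworks.
        n_i = np.maximum(1, (n_sparks * (worst - fit + eps) / ((worst - fit).sum() + eps)).astype(int))
        amp = max_amp * (fit - best + eps) / ((fit - best).sum() + eps)
        sparks = [pop[fit.argmin()]]                      # always keep the current best
        for x, k, a in zip(pop, n_i, amp):
            for _ in range(k):
                mask = rng.random(dim) < 0.5              # displace a random subset of dimensions
                s = x + mask * a * rng.uniform(-1, 1, dim)
                sparks.append(np.clip(s, lo, hi))
        sparks = np.array(sparks)
        sfit = np.array([f(x) for x in sparks])
        keep = [sfit.argmin()]                            # elitism
        keep += list(rng.choice(len(sparks), n_fireworks - 1, replace=False))
        pop = sparks[keep]
    fit = np.array([f(x) for x in pop])
    return pop[fit.argmin()], fit.min()

# Toy usage on a 4-parameter misfit-like function (sphere), not a real waveform misfit.
best_x, best_f = fwa_minimize(lambda x: float(np.sum(x ** 2)),
                              lo=np.array([-5.0] * 4), hi=np.array([5.0] * 4))
print(best_x, best_f)
```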
Purpose: This study introduces an algorithm to construct tag trees that can be used as a user-friendly navigation tool for knowledge sharing and retrieval, addressing two issues of previous studies, i.e. semantic drift and structural skew. Design/methodology/approach: Inspired by generality-based methods, this study builds tag trees from a co-occurrence tag network and uses the h-degree as a node generality metric. The proposed algorithm is characterized by the following four features: (1) the ancestors should be more representative than the descendants, (2) the semantic meaning along ancestor-descendant paths needs to be coherent, (3) the children of one parent are collectively exhaustive and mutually exclusive in describing their parent, and (4) tags are roughly evenly distributed to their upper-level parents to avoid structural skew. Findings: The proposed algorithm has been compared with a well-established solution, the Heymann Tag Tree (HTT). Experimental results using a social tag dataset showed that the proposed algorithm with its default condition outperformed HTT in precision based on Open Directory Project (ODP) classification. It has been verified that the h-degree can be applied as a better node generality metric than degree centrality. Research limitations: A thorough investigation into the evaluation methodology is needed, including user studies and a set of metrics for evaluating semantic coherence and navigation performance. Practical implications: The algorithm will benefit the use of digital resources by generating a flexible domain knowledge structure that is easy to navigate. It could be used to manage multiple resource collections even without social annotations, since tags can be keywords created by authors or experts, as well as automatically extracted from text. Originality/value: Few previous studies paid attention to whether tagging systems are easy for users to navigate. The contributions of this study are twofold: (1) an algorithm was developed to construct tag trees with consideration given to both semantic coherence and structural balance, and (2) the effectiveness of a node generality metric, the h-degree, was investigated in a tag co-occurrence network.
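As a concrete illustration of the node generality metric, the snippet below computes the h-degree of a tag in a weighted co-occurrence network under its usual definition (the largest h such that the node has at least h links, each of weight at least h). The tag counts are hypothetical.

```python
# Sketch of the h-degree: for a node in a weighted (co-occurrence) network, the
# h-degree is the largest h such that the node has at least h links of weight >= h.

def h_degree(weights):
    """weights: list of link weights incident to one node."""
    w = sorted(weights, reverse=True)
    h = 0
    while h < len(w) and w[h] >= h + 1:
        h += 1
    return h

# Co-occurrence counts of tag "python" with its neighbour tags (hypothetical data).
cooccurrence = {"programming": 12, "django": 5, "snake": 1, "scripting": 3, "web": 2}
print(h_degree(list(cooccurrence.values())))   # 3: three links each with weight >= 3
```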
"Data Structure and Algorithm",which is an important major subject in computer science,has a lot of problems in teaching activity.This paper introduces and analyzes the situation and problems in this course ..."Data Structure and Algorithm",which is an important major subject in computer science,has a lot of problems in teaching activity.This paper introduces and analyzes the situation and problems in this course study.A "programming factory" method is then brought out which is indeed a practice-oriented platform of the teachingstudy process.Good results are obtained by this creative method.展开更多
The probability-based covering algorithm (PBCA) is a new algorithm based on probability distribution. It decides, by voting, the class of tested samples on the border of the coverage area, based on the probability of the training samples. When using the original covering algorithm (CA), many tested samples located on the border of the coverage cannot be classified by the spherical neighborhoods obtained. The network structure of PBCA is a mixed structure composed of both a feed-forward network and a feedback network. By adding some heterogeneous samples and enlarging the coverage radius, it is possible to decrease the number of rejected samples and improve recognition accuracy. Computer experiments indicate that the algorithm improves learning precision and achieves reasonably good results in text classification.
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, and offer workable methods for identifying and mitigating these threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, and explores the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Our proposed latent-based framework also proves to be sensitive in detecting malicious data poisoning attacks in the context of streaming data.
With a more complex pore structure system than clastic rocks, carbonate rocks have not yet been well described by existing conventional rock physics models with respect to pore structure variation and its influence on elastic rock properties. We start with a discussion and analysis of carbonate rock pore structure using rock slices. Then, under appropriate assumptions, we introduce a new approach to modeling carbonate rocks and construct a pore structure algorithm to identify pore structure mutation, based on the Gassmann equation and the Eshelby-Walsh ellipsoidal inclusion crack theory. Finally, we compute a single well's porosity using this new approach with full-wave log data, compare it with the prediction of the traditional method, and simultaneously invert for reservoir parameters. The results reveal that pore structure can significantly influence the rocks' elastic properties, and the predicted porosity error of the new modeling approach is merely 0.74%. Therefore, the approach we introduce can effectively decrease the prediction error of reservoir parameters.
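For reference, the Gassmann fluid-substitution relation that such modeling builds on can be written in its standard textbook form (this is the generic statement, not the paper's pore-structure extension):

```latex
% Standard Gassmann fluid substitution.
\[
K_{\mathrm{sat}} \;=\; K_{\mathrm{dry}} \;+\;
\frac{\left(1 - K_{\mathrm{dry}}/K_{\mathrm{min}}\right)^{2}}
     {\dfrac{\phi}{K_{\mathrm{fl}}} + \dfrac{1-\phi}{K_{\mathrm{min}}} - \dfrac{K_{\mathrm{dry}}}{K_{\mathrm{min}}^{2}}},
\qquad
\mu_{\mathrm{sat}} \;=\; \mu_{\mathrm{dry}} .
\]
```

Here K_dry, K_min, and K_fl are the dry-frame, mineral, and pore-fluid bulk moduli, φ is porosity, and the shear modulus is unaffected by the fluid.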
Using the double-difference relocation algorithm, we relocated the 20 April 2013 Lushan, Sichuan, earthquake (Ms 7.0) and its 4,567 aftershocks recorded between 20 April and 3 May 2013. Our results show that most aftershocks are relocated at depths between 10 and 20 km, but some large aftershocks were relocated around 30 km depth and small events extended upward to near the surface. Vertical cross sections illustrate a shovel-shaped fault plane with a dip angle that varies from southwest to northeast along the fault. Furthermore, the dip angle of the fault plane is smaller around the mainshock than in the surrounding areas along the fault. These results suggest that strong earthquakes may be generated more easily where the dip angle of the fault is small, which is somewhat similar to the genesis of the 2008 Wenchuan earthquake. The Lushan mainshock is underlain by seismically anomalous layers with low-Vp, low-Vs, and high-Poisson's-ratio anomalies, possibly suggesting that fluid-filled fractured rock matrices significantly reduce the effective normal stress on the fault plane and bring about brittle failure. The seismic gap between the Lushan and Wenchuan aftershocks is suspected to be vulnerable to future seismic risk at greater depths.
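For context, the double-difference data that such relocations minimize take the standard Waldhauser-Ellsworth form, stated here for reference rather than quoted from the paper:

```latex
\[
dr_{k}^{ij} \;=\; \bigl(t_{k}^{i}-t_{k}^{j}\bigr)^{\mathrm{obs}}
            \;-\; \bigl(t_{k}^{i}-t_{k}^{j}\bigr)^{\mathrm{cal}}
            \;=\; \frac{\partial t_{k}^{i}}{\partial \mathbf{m}^{i}}\,\Delta\mathbf{m}^{i}
            \;-\; \frac{\partial t_{k}^{j}}{\partial \mathbf{m}^{j}}\,\Delta\mathbf{m}^{j},
\]
```

where t_k^i is the travel time from event i to station k and m^i = (x, y, z, τ) are its hypocentral coordinates and origin time; differencing the event pair at a common station cancels much of the shared path and station effect.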
With increasingly more smart cameras deployed in infrastructure and commercial buildings, 3D reconstruction can quickly obtain city information and improve the efficiency of government services. Images collected in outdoor hazy environments are prone to color distortion and low contrast; thus, the desired visual effect cannot be achieved and the difficulty of target detection is increased. Artificial intelligence (AI) solutions provide great help for dehazing images, since they can automatically identify patterns or monitor the environment. Therefore, we propose a 3D reconstruction method for dehazed images for smart cities based on deep learning. First, we propose a fine transmission image deep convolutional regression network (FT-DCRN) dehazing algorithm that uses the fine transmission image and the atmospheric light value to compute the dehazed image. The DCRN is used to obtain the coarse transmission image, which can not only expand the receptive field of the network but also retain the features that maintain the nonlinearity of the overall network. The fine transmission image is obtained by refining the coarse transmission image with a guided filter. The atmospheric light value is estimated according to the position and brightness of pixels in the original hazy image. Second, we use the dehazed images generated by the FT-DCRN dehazing algorithm for 3D reconstruction. An advanced relaxed iterative fine matching algorithm based on structure from motion (ARI-SFM) is proposed. The ARI-SFM algorithm, which obtains fine matching corner pairs and reduces the number of iterations, establishes an accurate one-to-one matching corner relationship. The experimental results show that our FT-DCRN dehazing algorithm improves accuracy compared to other representative algorithms. In addition, the ARI-SFM algorithm guarantees precision and improves efficiency.
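The sketch below shows only the final recovery step shared by transmission-based dehazing methods: inverting the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) once a transmission map and an atmospheric light value are available. It assumes those estimates are supplied externally (e.g., by a network); it is not the FT-DCRN itself, and the arrays are toy data.

```python
import numpy as np

# Invert the atmospheric scattering model given an estimated transmission map t
# and atmospheric light A:  J(x) = (I(x) - A) / max(t(x), t_min) + A.

def recover_dehazed(hazy, transmission, atmospheric_light, t_min=0.1):
    """hazy: HxWx3 float image in [0,1]; transmission: HxW map; atmospheric_light: length-3 vector."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # lower bound avoids division blow-up
    J = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(J, 0.0, 1.0)

# Toy usage with a synthetic hazy image and a constant transmission estimate.
rng = np.random.default_rng(0)
hazy = rng.uniform(0.4, 0.9, size=(4, 4, 3))
t_map = np.full((4, 4), 0.6)
A = np.array([0.95, 0.95, 0.95])
print(recover_dehazed(hazy, t_map, A).shape)   # (4, 4, 3)
```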
This paper proposes a real-time, low-complexity implementation of a low-density parity-check (LDPC) decoder for satellite communication on an FPGA platform. By adopting a (2048, 4096) irregular quasi-cyclic (QC) LDPC code, the proposed partly parallel decoding structure balances the complexity between the check node unit (CNU) and the variable node unit (VNU) based on the min-sum (MS) algorithm, thereby using fewer Slice resources and achieving superior clock performance. Moreover, as a lookup table (LUT) is used to locate the node messages stored in a time-shared memory unit, the design is simple to reuse and saves a large amount of storage resources. Implementation results on a Xilinx FPGA chip illustrate that, compared with the conventional structure, the proposed scheme achieves at least 28.6% and 8% cost reductions in RAM and Slices, respectively. The clock frequency is also increased to 280 MHz without decoding performance deterioration or convergence speed reduction.
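As background for the CNU/VNU split, the following is a minimal floating-point version of the min-sum check-node update; the fixed-point FPGA datapath, LUT addressing, and memory time-sharing of the paper are not reproduced.

```python
import numpy as np

# Min-sum check-node update: the message from a check node to variable v is the
# product of the signs and the minimum of the magnitudes of the incoming
# variable-to-check messages from all *other* connected variables.

def check_node_update(incoming):
    """incoming: array of variable-to-check LLR messages for one check node.
    Returns the check-to-variable messages, one per connected variable."""
    incoming = np.asarray(incoming, dtype=float)
    signs = np.sign(incoming)
    mags = np.abs(incoming)
    out = np.empty_like(incoming)
    total_sign = np.prod(signs)
    for v in range(len(incoming)):
        others = np.delete(mags, v)
        out[v] = (total_sign * signs[v]) * others.min()   # sign over "others", min magnitude
    return out

print(check_node_update([2.5, -0.8, 1.2]))   # [-0.8,  1.2, -0.8]
```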
The variable structure control (VSC) theory is applied to an electro-hydraulic servo system here. The VSC control law is derived using the Lyapunov method and pole placement. To eliminate the chattering phenomenon, a saturation function is adopted. The proposed VSC approach is fairly robust to load disturbance and system parameter variation. Since distortion, including phase lag and amplitude attenuation, occurs in the system's sinusoidal response, an amplitude and phase control (APC) algorithm, based on an Adaline neural network and the LMS algorithm, is developed for distortion cancellation. The APC controller is simple and can be adjusted online, thus giving accurate tracking.
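The snippet below sketches the Adaline/LMS building block that such an amplitude-and-phase controller rests on: a linear combiner whose two weights (the sine and cosine components, i.e., amplitude and phase) are adapted online by the LMS rule. The gains and signals are illustrative, not the paper's tuned controller.

```python
import numpy as np

# Adaline/LMS sketch: weights of a linear combiner are adjusted online by
# w <- w + 2*mu*e*x to drive the tracking error toward zero.

def lms_step(w, x, desired, mu=0.01):
    y = np.dot(w, x)            # Adaline output: linear combination of inputs
    e = desired - y             # tracking error
    return w + 2.0 * mu * e * x, y, e

# Track a sinusoid from its quadrature components (amplitude/phase = weights on sin/cos).
t = np.arange(2000) * 1e-3
desired = 1.5 * np.sin(2 * np.pi * 5 * t + 0.4)
w = np.zeros(2)
for k in range(len(t)):
    x = np.array([np.sin(2 * np.pi * 5 * t[k]), np.cos(2 * np.pi * 5 * t[k])])
    w, y, e = lms_step(w, x, desired[k], mu=0.05)
print(w)   # converges toward [1.5*cos(0.4), 1.5*sin(0.4)] ≈ [1.38, 0.58]
```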
A ship operates in an extremely complex environment, in which waves and winds are assumed to be stochastic excitations; moreover, the propeller, main engine, and mechanical equipment can also induce harmonic responses. To reduce structural vibration, it is important to obtain the modal parameters of a ship. However, traditional modal parameter identification methods are not suitable because the excitation information is difficult to obtain. The natural excitation technique-eigensystem realization algorithm (NExT-ERA) is an operational modal identification method that extracts modal parameters from the response signals alone, based on the assumption that the input to the structure is pure white noise. Hence, it is necessary to study the influence of harmonic excitations when applying the NExT-ERA method to a ship structure. Ship model experiments under ambient excitation indicate that the modal parameters are identified successfully only when the harmonic frequencies are not too close to the modal frequencies.
MatBase is a prototype data and knowledge base management expert intelligent system based on the Relational, Entity-Relationship, and (Elementary) Mathematical Data Models. Dyadic relationships are quite common in data modeling. Besides their relational-type constraints, they often exhibit mathematical properties that are not covered by the Relational Data Model. This paper presents and discusses the MatBase algorithm that assists database designers in discovering all non-relational constraints associated with such relationships, as well as its algorithm for enforcing them, thus providing a significantly higher degree of data quality.
RNA secondary structure has become the most exploitable feature for ab initio detection of non-coding RNA (ncRNA) genes from genome sequences. Previous work has used Minimum Free Energy (MFE) based methods to identify ncRNAs by measuring sequence fold stability and certainty. However, these methods yield variable performance across different ncRNA species. Designing novel, reliable structural measures will help to develop effective ncRNA gene-finding tools. This paper introduces a new RNA structural measure based on a novel RNA secondary structure ensemble constrained by characteristics of native RNA tertiary structures. The new method achieves a performance leap over previous structure-based methods. Test results on standard ncRNA datasets (benchmarks) demonstrate that this method can effectively separate most ncRNA families from genome backgrounds.
In order to decrease the number of design variables and improve the efficiency of composite structure optimal design, a single-level composite structure optimization method based on a tapered model is presented. Compared with the conventional multi-level composite structure optimization method, this single-level method has many advantages. First, by using a distance variable and a ply-group variable, the number of design variables is decreased considerably and is independent of the density of sub-regions, which makes the single-level method very suitable for large-scale composite structures. Second, it is very convenient to optimize laminate thickness and stacking sequence at the same level, which probably improves the quality of the optimal result. Third, ply continuity can be guaranteed between sub-regions in the single-level method, which could reduce stress concentration and manufacturing difficulty. An example of a composite wing is used to demonstrate the advantages and competence of the proposed single-level method.
We propose a k-d tree variant that is resilient to a pre-described number of memory corruptions while still using only linear space. While the data structure is of independent interest, we demonstrate its use in the context of high-radiation environments. Our experimental evaluation demonstrates that the resulting approach leads to a significantly higher resiliency rate compared to previous results. This is especially the case for large-scale multi-spectral satellite data, which renders the proposed approach well-suited to operate aboard today's satellites.
Generating function methods have been applied successfully to generalized Hamiltonian systems with constant or invertible Poisson-structure matrices. In this paper, we extend these results and present generating function methods that preserve the Poisson structure for generalized Hamiltonian systems with general variable Poisson-structure matrices. In particular, some of the resulting Poisson schemes are applied efficiently to dynamical systems that can be written as generalized Hamiltonian systems (such as generalized Lotka-Volterra systems, the Robbins equations, and so on).
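For orientation, a generalized Hamiltonian (Poisson) system with a variable structure matrix takes the standard form below; the constant canonical case recovers the classical Hamiltonian system, while the paper's schemes target general state-dependent B(z):

```latex
\[
\dot{z} \;=\; B(z)\,\nabla H(z), \qquad B(z)^{\mathsf{T}} = -B(z),
\]
\[
\sum_{l}\left( B_{il}\,\frac{\partial B_{jk}}{\partial z_{l}}
             + B_{jl}\,\frac{\partial B_{ki}}{\partial z_{l}}
             + B_{kl}\,\frac{\partial B_{ij}}{\partial z_{l}} \right) \;=\; 0
\quad\text{(Jacobi identity)}.
\]
```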
In this paper, a novel symplectic conservative perturbation series expansion method is proposed to investigate the dynamic response of linear Hamiltonian systems subject to perturbations, which mainly originate from parameter dispersion and measurement errors. Taking the perturbations into account, the perturbed system is regarded as a modification of the nominal system. By combining the perturbation series expansion method with the deterministic linear Hamiltonian system, the solution to the perturbed system is expressed as an asymptotic series in a small parameter, and a series of Hamiltonian canonical equations for predicting the dynamic response is derived. The response of the perturbed system is then obtained by solving these Hamiltonian canonical equations using symplectic difference schemes. The symplectic conservation of the proposed method is demonstrated mathematically, indicating that the method preserves the characteristic property of the system. The performance of the proposed method is evaluated on three examples and compared with the Runge-Kutta algorithm. Numerical examples illustrate the superiority of the proposed method in accuracy and stability, especially its symplectic conservation when solving linear Hamiltonian systems with perturbations, and its applicability to structural dynamic response estimation.
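As a reminder of what a symplectic difference scheme looks like in the canonical case, the implicit midpoint rule below is symplectic for any step size h, and for a linear Hamiltonian it reduces to a Cayley-transform update. This is stated for general reference; the paper combines such schemes with its perturbation series.

```latex
\[
z_{n+1} \;=\; z_{n} + h\,J\,\nabla H\!\left(\tfrac{1}{2}\bigl(z_{n}+z_{n+1}\bigr)\right),
\qquad
J=\begin{pmatrix}0 & I\\ -I & 0\end{pmatrix},
\]
\[
H(z)=\tfrac{1}{2}\,z^{\mathsf{T}}Sz
\;\Longrightarrow\;
z_{n+1} \;=\; \Bigl(I-\tfrac{h}{2}JS\Bigr)^{-1}\Bigl(I+\tfrac{h}{2}JS\Bigr)z_{n}.
\]
```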
The soil-rock mixture (SRM) is highly heterogeneous. Before carrying out numerical analysis, a structure model must be generated. A reliable way to obtain such a structure is to generate a random aggregate structure based on random sequential addition (RSA). The classical RSA is neither efficient nor robust, since valid positions to place new inclusions are found by trial, which involves repetitive overlap tests. In this paper, the entrance-block algorithm between blocks A and B (EAB) is combined with a background mesh to redesign RSA so that permissible positions to place new inclusions can be predicted, resulting in a dramatic improvement in efficiency and robustness.
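For contrast with the proposed prediction-based placement, the sketch below is the classical trial-and-error RSA it improves on: circular inclusions are dropped at random and rejected whenever they overlap an existing one. The entrance-block (EAB) and background-mesh machinery of the paper is not reproduced; box size and radii are toy values.

```python
import random

# Classical RSA (random sequential addition) for circular "rock" inclusions in a 2-D box:
# draw candidate positions at random and reject any that overlap a placed inclusion.

def rsa_circles(box_w, box_h, radii, max_trials=10000, seed=1):
    random.seed(seed)
    placed = []                                   # list of (x, y, r)
    for r in sorted(radii, reverse=True):         # place large inclusions first
        for _ in range(max_trials):
            x = random.uniform(r, box_w - r)
            y = random.uniform(r, box_h - r)
            # Overlap test against every placed circle (the expensive step RSA repeats).
            if all((x - px) ** 2 + (y - py) ** 2 >= (r + pr) ** 2 for px, py, pr in placed):
                placed.append((x, y, r))
                break
        else:
            raise RuntimeError(f"could not place inclusion of radius {r}")
    return placed

print(len(rsa_circles(100.0, 100.0, [8, 6, 6, 5, 4, 4, 3, 3, 2, 2])))   # 10
```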
This paper investigates how to maintain an efficient dynamic ordered set of bit strings, which is an important problem in the field of information search and information processing. Generally, a dynamic ordered set is required to support five essential operations: search, insertion, deletion, max-value retrieval, and next-larger-value retrieval. Building on previous research, we present an advanced data structure named the rich binary tree (RBT), which follows both the binary-search-tree property and the digital-search-tree property. In addition, every key K keeps the most significant difference bit (MSDB) between itself and the next larger value among K's ancestors, as well as between itself and the next smaller one among its ancestors. With the new data structure, we can maintain a dynamic ordered set in O(L) time. Since computers represent objects in binary, our method has great potential in applications. In fact, RBT can be viewed as a general-purpose data structure for problems concerning order, such as searching, sorting, and maintaining a priority queue. For example, when RBT is applied to sorting, we obtain an algorithm that is linear in the number of keys and performs far better than quick-sort; moreover, unlike quick-sort, RBT supports constant-time dynamic insertion/deletion.
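A small illustration of the MSDB bookkeeping mentioned above: for two fixed-width keys, the most significant difference bit is simply the highest set bit of their XOR. This sketch shows only that primitive, not the full RBT structure.

```python
# MSDB of two width-bit keys: the position of the highest-order bit where they differ,
# i.e. the top set bit of their XOR. Positions are counted from the most significant end.

def msdb(a: int, b: int, width: int) -> int:
    """Return the 0-based index (from the most significant bit) of the first differing bit,
    or -1 if the two width-bit keys are equal."""
    diff = a ^ b
    if diff == 0:
        return -1
    return width - diff.bit_length()

# 8-bit keys 0b10110100 and 0b10111100 first differ at bit index 4 from the left.
print(msdb(0b10110100, 0b10111100, width=8))   # 4
```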