Journal Articles
53 articles found
1. Mathematical Modeling Algorithms for Creating New Materials with Desired Properties Using Nano-Hierarchical Structures
Authors: Olga Hachay, Andrey Khachay, Oleg Khachay. Advances in Materials Physics and Chemistry, 2019, No. 11, pp. 211-217.
In the enormous and still poorly mastered gap between the macro level, where well-developed continuum theories of continuous media and engineering methods of calculation and design operate, and the atomic level, subordinate to the laws of quantum mechanics, there is an extensive meso-hierarchical level of the structure of matter. At this level, previously unprecedented products and technologies can be artificially created. Nanotechnology is a qualitatively new strategy in technology: it creates objects in exactly the opposite way, building large objects from small ones [1]. We have developed a new method for modeling acoustic monitoring of a layered-block elastic medium with several inclusions of various physical and mechanical hierarchical structures [2]. An iterative process is developed for solving the direct problem for the case of three hierarchical inclusions of the l-th, m-th, and s-th ranks, based on 2D integro-differential equations. The degree of hierarchy of inclusions is determined by the values of their ranks, which may differ; the first rank is associated with the atomic structure, and the following ranks are associated with increasing geometric sizes that contain inclusions of lower ranks and sizes. Hierarchical inclusions are located in different layers one above the other: the upper one is abnormally plastic, the second abnormally elastic, and the third abnormally dense. The degree of filling with inclusions of each rank differs among the three hierarchical inclusions. Modeling is carried out from smaller inclusions to larger ones; as a result, it becomes possible to determine the necessary parameters of the formed material from acoustic monitoring data.
Keywords: nano hierarchic objects, algorithms of modeling, integro-differential equations, iterative process, block layered medium, combined hierarchical
2. Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language
Authors: Muhammad Aminur RAHAMAN, Mahmood JASIM, Md. Haider ALI, Md. HASANUZZAMAN. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, No. 3, pp. 45-64.
Because traditional hand-sign segmentation and classification algorithms are used, and because of the many diversities of the Bangla language, including joint letters and dependent vowels, and the representation of 51 Bangla written characters by only 36 hand-signs, continuous hand-sign-spelled Bangla sign language (BdSL) recognition is challenging. This paper presents a Bangla language modeling algorithm for automatic recognition of hand-sign-spelled Bangla sign language, which consists of two phases. The first phase is designed for hand-sign classification, and the second phase is designed for the Bangla language modeling algorithm (BLMA) for automatic recognition of hand-sign-spelled Bangla sign language. In the first phase, we propose a two-step classifier for hand-sign classification using the normalized outer boundary vector (NOBV) and the window-grid vector (WGV), calculating the maximum inter-correlation coefficient (ICC) between the test feature vector and pre-trained feature vectors. The system first classifies hand-signs using the NOBV. If the classification score does not satisfy a specific threshold, another classifier based on the WGV is used. The system is trained using 5,200 images and tested using another (5,200 × 6) images of 52 hand-signs from 10 signers in 6 different challenging environments, achieving a mean classification accuracy of 95.83% with a computational cost of 39.972 milliseconds per frame. In the second phase, we propose the Bangla language modeling algorithm (BLMA), which discovers all "hidden characters" based on "recognized characters" from the 52 hand-signs of BdSL to form any Bangla words, composite numerals, and sentences in BdSL with no training, based only on the result of the first phase. To the best of our knowledge, the proposed system is the first in BdSL designed for automatic recognition of hand-sign-spelled BdSL for a large lexicon. The system is tested for the BLMA using 500 hand-sign-spelled words, 100 composite numerals, and 80 sentences in BdSL, achieving mean accuracies of 93.50%, 95.50%, and 90.50% respectively.
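The two-step classification logic described in this abstract — match with the NOBV first, fall back to the WGV when the best correlation score misses the threshold — can be sketched as follows. This is a hedged illustration under assumed names (`icc`, `classify`, the 0.9 threshold); the feature extraction from images is elided, and plain vectors stand in for the NOBV/WGV features.

```python
def icc(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def classify(nobv, wgv, nobv_templates, wgv_templates, threshold=0.9):
    """Step 1: match the NOBV feature against templates; step 2: if the
    best score misses the threshold, fall back to WGV matching."""
    best = max(nobv_templates, key=lambda lbl: icc(nobv, nobv_templates[lbl]))
    if icc(nobv, nobv_templates[best]) >= threshold:
        return best
    return max(wgv_templates, key=lambda lbl: icc(wgv, wgv_templates[lbl]))
```

The threshold trades speed for accuracy: a high value routes ambiguous signs to the second, presumably costlier, feature.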
Keywords: Bangla sign language (BdSL), hand-sign classification, Bangla language modeling rules (BLMR), Bangla language modeling algorithm (BLMA)
3. RECONFIGURABLE PRODUCTION LINE MODELING AND SCHEDULING USING PETRI NETS AND GENETIC ALGORITHM (Cited by 8)
Authors: XIE Nan, LI Aiping. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2006, No. 3, pp. 362-367.
In response to production capacity and functionality variations, a genetic algorithm (GA) embedded with deterministic timed Petri nets (DTPN) is proposed to solve the scheduling problem of a reconfigurable production line (RPL). Basic DTPN modules are presented to model the corresponding variable structures in the RPL, and the scheduling model of the whole RPL is then constructed. In the scheduling algorithm, firing sequences of the Petri net model are used as chromosomes, so the selection, crossover, and mutation operators deal not with the elements of the problem space but with the elements of the Petri net model. Accordingly, all the algorithms for GA operations embedded with the Petri net model are proposed. Moreover, a new weighted single-objective optimization based on reconfiguration cost and earliness/tardiness (E/T) is used. The results of scheduling a DC motor RPL suggest that the presented DTPN-GA scheduling algorithm has a significant impact on RPL scheduling and provides obvious improvements over conventional scheduling methods in practice: it meets due dates, minimizes reconfiguration cost, and enhances cost effectiveness.
Keywords: reconfigurable production line, deterministic timed Petri nets (DTPN), modeling, scheduling, genetic algorithm (GA)
4. On the E-Valuation of Certain E-Business Strategies on Firm Performance by Adaptive Algorithmic Modeling: An Alternative Strategic Managerial Approach
Authors: Alexandra Lipitakis, Evangelia A.E.C. Lipitakis. Computer Technology and Application, 2012, No. 1, pp. 38-46.
This paper describes an innovative adaptive algorithmic modeling approach for solving a wide class of e-business and strategic management problems under uncertainty. The proposed methodology is based on basic ideas and concepts of four interrelated key fields: computing science, applied mathematics, management science, and economic science. Furthermore, the fundamental scientific concepts of adaptability and uncertainty are shown to play a critical role in the (near) optimal solution of a class of complex e-business/services and strategic management problems. Two characteristic case studies, namely measuring e-business performance under certain environmental pressures and organizational constraints, and describing the relationships between technology, innovation, and firm performance, are considered as effective applications of the proposed adaptive algorithmic modeling approach. A theoretical time-dependent model for the evaluation of firm e-business performance is also proposed.
Keywords: adaptive algorithms, algorithmic modeling, e-business problems, e-service strategy management methodologies, hybrid algorithmic modeling, strategy management (SM) methodologies, time-dependent performance evaluation model
5. Research on a Fog Computing Architecture and BP Algorithm Application for Medical Big Data
Authors: Baoling Qin. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 255-267.
Although the Internet of Things has been widely applied, problems of cloud computing remain in the collection, processing, analysis, and storage of digital smart-medical Big Data, especially the low efficiency of medical diagnosis. With the wide application of the Internet of Things and Big Data in the medical field, medical Big Data is increasing in geometric magnitude, resulting in cloud service overload, insufficient storage, communication delay, and network congestion. To solve these medical and network problems, a medical-big-data-oriented fog computing architecture and a BP algorithm application are proposed, and their structural advantages and characteristics are studied. This architecture enables the medical Big Data generated by medical edge devices and the existing data in the cloud service center to be calculated, compared, and analyzed at fog nodes through the Internet of Things. The diagnosis results are designed to reduce business processing delay and improve the diagnosis effect. Considering the weak computing power of each edge device, the artificial-intelligence BP neural network algorithm is used in the core computing model of the medical diagnosis system to improve the system's computing power, enhance intelligence-aided medical decision-making, and improve clinical diagnosis and treatment efficiency. In the application process, combined with the characteristics of medical Big Data technology, through fog architecture design and Big Data technology integration, we study the processing and analysis of heterogeneous data in the medical diagnosis system in the context of the Internet of Things. The results are promising: the medical platform network is smooth, data storage space is sufficient, data processing and analysis are fast, and the diagnosis effect is remarkable, making the system a good assistant to doctors. It not only effectively addresses low clinical diagnosis and treatment efficiency and quality, but also reduces patients' waiting time, eases doctor-patient conflict, and improves medical service quality and management.
Keywords: medical big data, IoT, fog computing, distributed computing, BP algorithm model
6. A New Three-Dimensional (3D) Printing Prepress Algorithm for Simulation of Planned Surgery for Congenital Heart Disease
Authors: Vitaliy Suvorov, Olga Loboda, Maria Balakina, Igor Kulczycki. Congenital Heart Disease (SCIE), 2023, No. 5, pp. 491-505.
Background: Three-dimensional printing technology may become a key factor in transforming clinical practice and significantly improving treatment outcomes. The introduction of this technique into pediatric cardiac surgery will allow us to study features of the anatomy and spatial relations of a defect and to simulate the optimal surgical repair on a printed model in every individual case. Methods: We performed a prospective cohort study which included 29 children with congenital heart defects. The hearts and great vessels were modeled and printed. Measurements of the same cardiac areas were taken in the same planes and points on multislice computed tomography images (group 1) and on printed 3D models of the hearts (group 2). Pre-printing treatment of the multislice computed tomography data and 3D model preparation were performed according to a newly developed algorithm. Results: The measurements taken on the 3D-printed cardiac models and the tomographic images did not differ significantly, which allowed us to conclude that the models were highly accurate and informative. The new algorithm greatly simplifies and speeds up the preparation of a 3D model for printing, while maintaining high accuracy and level of detail. Conclusions: The 3D-printed models provide an accurate preoperative assessment of the anatomy of a defect in each case. The new algorithm has several important advantages over other available programs. It enables the development of customized preliminary plans for surgical repair of each specific complex congenital heart disease, predicts possible issues, determines the optimal surgical tactics, and significantly improves surgical outcomes.
Keywords: 3D printing, imaging in cardiac surgery, congenital heart disease, modelling in cardiac surgery, pediatric cardiology, algorithmic modelling of the heart, medical imaging, 3D modelling
7. Space-Time Chaos Filtering for the Incoherent Paradigm for 6G Wireless System Design from Theoretic Perspective
Authors: Valeri Kontorovich. Communications and Network, 2024, No. 3, pp. 74-89.
The following material is devoted to the generalization of chaos modeling to random fields in communication channels and its application to space-time filtering for the incoherent paradigm; that is the purpose of this research. The approach presented hereafter is based on the "Markovian" trend in the modeling of random fields, and it is applied for the first time to chaos field modeling through the well-known concept of random "treatment" of deterministic dynamic systems, first presented by A. Kolmogorov, M. Born, etc. The material presents the generalized Stratonovich-Kushner equations (SKE) for the optimum filtering of chaotic models of random fields and their simplified quasi-optimum solutions. In addition, the application of multi-moment algorithms for quasi-optimum solutions is considered, and it is shown that for scenarios in which the covariation interval of the input random field is less than the distance between the antenna elements, the gain of the space-time algorithms over their "time" analogies is significant. This is the general result presented in the following.
Keywords: chaotic fields, variation (functional) derivatives, quasi-optimum algorithms for chaotic models
8. Nonlinear Model Algorithmic Control of a pH Neutralization Process (Cited by 11)
Authors: 邹志云, 于蒙, 王志甄, 刘兴红, 郭宇晴, 张风波, 郭宁. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2013, No. 4, pp. 395-400.
Control of pH neutralization processes is challenging in the chemical process industry because of their inherent strong nonlinearity. In this paper, the model algorithmic control (MAC) strategy is extended to nonlinear processes using a Hammerstein model that consists of a static nonlinear polynomial function followed in series by a linear impulse-response dynamic element. A new nonlinear Hammerstein MAC algorithm (named NLH-MAC) is presented in detail. Simulation results for the control of a pH neutralization process show that NLH-MAC gives better control performance than linear MAC and the commonly used industrial nonlinear proportional-integral-derivative (PID) controller. Further simulation experiments demonstrate that NLH-MAC not only gives good control response, but also possesses good stability and robustness even with large modeling errors.
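The Hammerstein structure named in this abstract — a static polynomial nonlinearity in series with a linear impulse-response element — is simple to state in code. A minimal sketch; the coefficients below are purely illustrative and are not identified from a pH process.

```python
def hammerstein_output(u_seq, poly_coeffs, impulse_response):
    """Hammerstein model: a static polynomial block followed by a
    linear FIR (impulse-response) dynamic block."""
    # Static nonlinear block: v(k) = c0 + c1*u(k) + c2*u(k)^2 + ...
    v = [sum(c * u ** i for i, c in enumerate(poly_coeffs)) for u in u_seq]
    # Linear dynamic block: y(k) = sum_j h(j) * v(k - j)
    y = []
    for k in range(len(v)):
        y.append(sum(h * v[k - j]
                     for j, h in enumerate(impulse_response) if k - j >= 0))
    return y
```

Separating the two blocks this way is what lets a MAC-style predictive controller invert the static nonlinearity and then work with the purely linear dynamics.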
Keywords: model algorithmic control, nonlinear model predictive control, Hammerstein model, pH neutralization process, control simulation
9. Adaptive learning algorithm based on mixture Gaussian background (Cited by 9)
Authors: Zha Yufei, Bi Duyan. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2007, No. 2, pp. 369-376.
The key problem of the adaptive mixture background model is that the parameters must adaptively change according to the input data. To address this problem, a new method is proposed. Firstly, the recursive equations are inferred based on the maximum likelihood rule. Secondly, the forgetting factor and learning rate factor are redefined, and still more general formulations are obtained by analyzing their practical functions. Lastly, the convergence of the proposed algorithm is proved, ensuring that the estimate converges to a local maximum of the data likelihood function according to stochastic approximation theory. The experiments show that the proposed learning algorithm outperforms earlier methods in both convergence rate and accuracy.
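The recursive, learning-rate-driven parameter updates this abstract refers to can be illustrated on a single Gaussian component. This is a hedged sketch using a standard exponential-forgetting update; the paper's redefined forgetting and learning-rate factors are not reproduced here.

```python
def update_gaussian(mean, var, x, rho):
    """One recursive update of a background Gaussian component:
    mean and variance track the input x with learning rate rho (0 < rho < 1).
    A small rho adapts slowly but resists foreground outliers."""
    new_mean = (1 - rho) * mean + rho * x
    new_var = (1 - rho) * var + rho * (x - new_mean) ** 2
    return new_mean, new_var
```

Fed a stationary background value, the component's mean converges to it and its variance shrinks toward the observation noise level, which is the behavior the convergence proof above formalizes.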
Keywords: mixture Gaussian model, background model, learning algorithm
10. Model algorithm control using neural networks for input delayed nonlinear control system (Cited by 2)
Authors: Yuanliang Zhang, Kil To Chong. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2015, No. 1, pp. 142-150.
The performance of the model algorithm control method depends partially on the accuracy of the system's model. It is difficult to obtain a good model of a nonlinear system, especially when the nonlinearity is high. Neural networks have the ability to "learn" the characteristics of a system through nonlinear mapping, representing nonlinear functions as well as their inverse functions. This paper presents a model algorithm control method using neural networks for nonlinear time-delay systems. Two neural networks are used in the control scheme: one is trained as the model of the nonlinear time-delay system, and the other produces the control inputs. The neural networks are combined with the model algorithm control method to control the nonlinear time-delay systems. Three examples are used to illustrate the proposed control method. The simulation results show that the proposed control method performs well for nonlinear time-delay systems.
Keywords: model algorithm control, neural network, nonlinear system, time delay
11. Multiple model tracking algorithms based on neural network and multiple process noise soft switching (Cited by 2)
Authors: Nie Xiaohua. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, No. 6, pp. 1227-1232.
A multiple model tracking algorithm based on a neural network and multiple-process-noise soft-switching for maneuvering targets is presented. In this algorithm, the "current" statistical model and the neural network run in parallel. The neural network algorithm is used to modify the adaptive noise filtering algorithm based on the mean value and variance of the "current" statistical model for maneuvering targets, and the multiple model tracking algorithm with multiple-process switching is then used to improve the precision of tracking maneuvering targets. The modified algorithm is proved effective by simulation.
Keywords: maneuvering target, current statistical model, neural network, multiple model algorithm
12. Fast Parallel Algorithm for Slicing STL Based on Pipeline (Cited by 4)
Authors: MA Xulong, LIN Feng, YAO Bo. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2016, No. 3, pp. 549-555.
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two remarkable factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship which agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel speedup method. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study shows the excellent performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
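The pipeline mode described here can be sketched with two stages in separate threads: while stage 2 post-processes layer k, stage 1 is already computing layer k+1. In this hedged sketch the actual slicing stages (plane-triangle intersection, contour linking) are stubbed as plain functions passed in by the caller.

```python
import threading
import queue

def pipeline(layers, stage1, stage2):
    """Two-stage pipeline over a bounded queue: the producer runs stage1
    per layer while the consumer runs stage2 on finished items."""
    q = queue.Queue(maxsize=4)  # bounded: keeps the stages in lock-step
    results = []

    def producer():
        for layer in layers:
            q.put(stage1(layer))
        q.put(None)  # sentinel: no more layers

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            results.append(stage2(item))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

Because the FIFO queue preserves layer order with a single consumer, the output stays in slicing order without any extra bookkeeping, one reason the pipeline form is attractive over unordered data parallelism.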
Keywords: additive manufacturing, STL model, slicing algorithm, data parallel, pipeline parallel
13. Nonlinear model predictive control based on support vector machine and genetic algorithm (Cited by 5)
Authors: 冯凯, 卢建刚, 陈金水. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2015, No. 12, pp. 2048-2052.
This paper presents a nonlinear model predictive control (NMPC) approach based on the support vector machine (SVM) and the genetic algorithm (GA) for multiple-input multiple-output (MIMO) nonlinear systems. An individual SVM is used to approximate each output of the controlled plant. The model is then used in the MPC scheme to predict the outputs of the controlled plant. The optimal control sequence is calculated using a GA with an elite-preserve strategy. Simulation results for a typical MIMO nonlinear system show that this method has good set-point tracking and disturbance rejection.
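The GA with an elite-preserve strategy mentioned above can be sketched as follows. Population size, rates, and the quadratic test cost are illustrative assumptions rather than values from the paper; in the NMPC setting, `cost` would be the predicted tracking error of a candidate control sequence.

```python
import random

def ga_minimize(cost, length, pop_size=30, gens=40, elite=2):
    """Minimal GA with elite preservation: the best `elite` individuals
    survive unchanged each generation, so the best cost never worsens."""
    random.seed(1)  # deterministic for this sketch
    pop = [[random.uniform(-1, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        nxt = [ind[:] for ind in pop[:elite]]          # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)        # select among fittest
            cut = random.randrange(1, length) if length > 1 else 1
            child = p1[:cut] + p2[cut:]                # one-point crossover
            if random.random() < 0.3:                  # mutation
                i = random.randrange(length)
                child[i] += random.gauss(0, 0.1)
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

Elitism is what gives the method its monotonicity: the best candidate found so far can only be replaced by a strictly better one.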
Keywords: support vector machine, genetic algorithm, nonlinear model predictive control, neural network modeling
14. MODELING, VALIDATION AND OPTIMAL DESIGN OF THE CLAMPING FORCE CONTROL VALVE USED IN CONTINUOUSLY VARIABLE TRANSMISSION (Cited by 4)
Authors: ZHOU Yunshan, LIU Jin'gang, CAI Yuanchun, ZOU Naiwei. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2008, No. 4, pp. 51-55.
The associated dynamic performance of the clamping force control valve used in continuously variable transmission (CVT) is optimized. Firstly, the structure and working principle of the valve are analyzed, and a dynamic model is then set up by means of mechanism analysis. To check the validity of the modeling method, a prototype workpiece of the valve is manufactured for comparison testing, and its simulation result follows the experimental result quite well. An associated performance index is founded considering response time, overshoot, and energy saving, and five structural parameters are selected and adjusted to derive the optimal associated performance index. The optimization problem is solved by the genetic algorithm (GA) with the necessary constraints. Finally, the properties of the optimized valve are compared with those of the prototype workpiece, and the results prove that the dynamic performance indexes of the optimized valve are much better than those of the prototype workpiece.
Keywords: dynamic modeling, optimal design, genetic algorithm, clamping force control valve, continuously variable transmission (CVT)
15. Optimization of the catch bench design using a genetic algorithm (Cited by 1)
Authors: Ruvin Wijesinghe Dakshith, Greg You. International Journal of Mining Science and Technology (SCIE, EI, CSCD), 2016, No. 6, pp. 1011-1016.
Rockfalls are one of the hazards that may be associated with open pit mining. The majority of rockfalls occur due to the existing conditions of slopes, such as back break, fractures, and joints. Constructing a berm on the catch bench is a popular method for the mitigation of rockfall hazards in open pit mining. The width of the catch bench and the height of the berm play a major role in open pit bench design. However, no systematic method is currently available to optimize the size of these parameters. This study proposes a novel methodology which calculates the optimum catch bench width by integrating a rockfall simulation model and a genetic algorithm into a simulation-optimization model. The proposed methodology is useful for determining the minimum catch bench width, or the maximum overall slope angle, ensuring a sufficient factor of safety of the slope while maximizing the overall profitability of the open pit mine.
Keywords: catch bench width, RocFall, genetic algorithm, simulation-optimization model
16. Dynamic airspace sectorization via improved genetic algorithm (Cited by 6)
Authors: Yangzhou Chen, Hong Bi, Defu Zhang, Zhuoxi Song. Journal of Modern Transportation, 2013, No. 2, pp. 117-124.
This paper deals with the dynamic airspace sectorization (DAS) problem using an improved genetic algorithm (iGA). A graph model is first constructed to represent the airspace's static structure. The DAS problem is then formulated as a graph-partitioning problem to balance the sector workload under the premise of ensuring safety. In the iGA, multiple populations and hybrid coding are applied to determine the optimal sector number and airspace sectorization. The sector constraints are well satisfied by the improved genetic operators and protect zones. This method is validated by application to the airspace of North China in terms of three indexes: the sector balancing index, the coordination workload index, and the sector average flight time index. The improvement is obvious: during peak-hour traffic, the sector balancing index is reduced by 16.5%, the coordination workload index is reduced by 11.2%, and the sector average flight time index is increased by 11.4%.
Keywords: dynamic airspace sectorization (DAS), improved genetic algorithm (iGA), graph model, multiple populations, hybrid coding, sector constraints
17. Method for improving RLS algorithms
Authors: LI Tian-shu, TIAN Kai, LI Wen-xiu. Journal of Marine Science and Application, 2007, No. 3, pp. 68-70.
The recursive least-squares (RLS) algorithm has been extensively used in adaptive identification, prediction, filtering, and many other fields. This paper proposes adding a second-difference term to the standard recursive formula to create a novel method for improving tracing capabilities. Test results show that this can greatly improve the convergence capability of RLS algorithms.
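For context, one update of the standard RLS recursion that the paper modifies looks as follows. The proposed second-difference term is deliberately not reproduced here, since its exact form is specific to the paper; this sketch shows only the textbook recursion it is added to.

```python
def rls_step(theta, P, x, d, lam=0.99):
    """One standard RLS update with forgetting factor lam:
    gain k = P x / (lam + x'Px), theta += k * (d - theta'x),
    P = (P - k x'P) / lam. Lists stand in for vectors/matrices."""
    n = len(x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(xi * pi for xi, pi in zip(x, Px))
    k = [pi / denom for pi in Px]                 # gain vector
    err = d - sum(t * xi for t, xi in zip(theta, x))
    theta = [t + ki * err for t, ki in zip(theta, k)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)]
         for i in range(n)]                       # covariance update
    return theta, P
```

With lam = 1 this is growing-window RLS; a forgetting factor below 1 is what gives the algorithm the tracking behavior the paper aims to sharpen.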
Keywords: adaptive model, algorithms, RLS, tracing capabilities
18. Collusion detector based on G-N algorithm for trust model
Authors: Lin Zhang, Na Yin, Jingwen Liu, Ruchuan Wang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2016, No. 4, pp. 926-935.
In the open network environment, malicious attacks on trust models have become increasingly serious. Compared with single-node attacks, collusion attacks do more harm to the trust model. To solve this problem, a collusion detector based on the G-N algorithm for the trust evaluation model is proposed for the open Internet environment. By analyzing the behavioral characteristics of collusion groups, the concept of flatting is defined, and the G-N community mining algorithm is used to divide suspicious communities. On this basis, a collusion community detection method is proposed based on the breaking strength of suspicious communities. Simulation results show that the model has high accuracy in identifying collusion nodes, and thus effectively defends against their malicious attacks.
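The Girvan-Newman (G-N) step at the core of the detector — repeatedly removing the edge with the highest betweenness so the graph falls apart into communities — can be sketched as follows. This is a simplified illustration that counts one BFS shortest path per node pair rather than computing full Brandes-style betweenness.

```python
from collections import deque

def edge_betweenness(adj):
    """Approximate edge betweenness: for each source, take one BFS
    shortest-path tree and count how often each edge is traversed."""
    count = {}
    nodes = list(adj)
    for s in nodes:
        parent = {s: None}
        q = deque([s])
        while q:                       # BFS from s
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        for t in nodes:                # walk each path back to s
            if t == s or t not in parent:
                continue
            v = t
            while parent[v] is not None:
                e = frozenset((v, parent[v]))
                count[e] = count.get(e, 0) + 1
                v = parent[v]
    return count

def split_once(adj):
    """One Girvan-Newman step: delete the highest-betweenness edge."""
    bet = edge_betweenness(adj)
    u, v = tuple(max(bet, key=bet.get))
    adj[u].remove(v)
    adj[v].remove(u)
    return u, v
```

On a graph of two tight groups joined by a bridge, every inter-group shortest path crosses the bridge, so the bridge is removed first, which is exactly how dense collusion cliques get separated from the rest of the trust network.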
Keywords: trust model, collusion detector, G-N algorithm
19. A Class of Generalized Approximate Inverse Solvers for Unsymmetric Linear Systems of Irregular Structure Based on Adaptive Algorithmic Modelling for Solving Complex Computational Problems in Three Space Dimensions
Authors: Anastasia-Dimitra Lipitakis. Applied Mathematics, 2016, No. 11, pp. 1225-1240.
A class of general inverse matrix techniques based on adaptive algorithmic modelling methodologies is derived, yielding iterative methods for solving unsymmetric linear systems of irregular structure arising in complex computational problems in three space dimensions. The proposed class of approximate inverses is chosen as the basis for yielding systems on which classic and preconditioned iterative methods are explicitly applied. Optimized versions of the proposed approximate inverse are presented using special storage (k-sweep) techniques, leading to economical forms of the approximate inverses. The application of the adaptive algorithmic methodologies to a characteristic nonlinear boundary value problem is discussed, and numerical results are given.
Keywords: adaptive algorithms, algorithmic modelling, approximate inverse, incomplete LU factorization, approximate decomposition, unsymmetric linear systems, preconditioned iterative methods, systems of irregular structure
20. Dislocation parameters of Gonghe earthquake jointly inferred by using genetic algorithms and least squares method
Authors: 王文萍, 王庆良. Acta Seismologica Sinica (English Edition) (EI, CSCD), 1999, No. 3, pp. 314-320.
The Second Crustal Deformation Monitoring Center, China Seismological Bureau, has detected a marked uplift associated with the Gonghe Ms=7.0 earthquake of April 26, 1990, in Qinghai Province. From the observed vertical deformations, and using a rectangular uniform-slip model in a homogeneous elastic half-space, we first employ genetic algorithms (GA) to infer an approximate global optimal solution, and then use the least squares method to obtain a more accurate global optimal solution by taking the approximate GA solution as the initial parameters for least squares. The inversion results show that the causative fault of the Gonghe Ms=7.0 earthquake is a right-lateral reverse fault with strike NW60°, dipping SW at 37°; the coseismic fracture length, width, and slip are 37 km, 6 km, and 2.7 m respectively. The combination of GA and least squares is an effective joint inversion method, which can not only escape the local optima of least squares but also overcome the slow convergence of GA near the global optimal solution.
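The two-stage scheme described here — a global GA search followed by least-squares refinement started from the GA solution — can be sketched for a one-parameter model. In this hedged sketch, plain random sampling stands in for the GA, the local stage is a scalar Gauss-Newton iteration with numerical derivatives, and the model function is purely illustrative (not a dislocation model).

```python
import random

def two_stage_invert(model, xs, obs, bounds, polish_iters=30):
    """Stage 1: coarse global search over `bounds` (stand-in for the GA);
    stage 2: Gauss-Newton least-squares polish started from stage 1."""
    def sse(a):
        return sum((model(a, x) - y) ** 2 for x, y in zip(xs, obs))

    # Stage 1: global exploration escapes the local optima that would
    # trap a purely local least-squares fit.
    random.seed(0)
    lo, hi = bounds
    a = min((random.uniform(lo, hi) for _ in range(200)), key=sse)

    # Stage 2: local refinement converges fast near the global optimum,
    # where the GA alone would be slow.
    h = 1e-6
    for _ in range(polish_iters):
        J = [(model(a + h, x) - model(a, x)) / h for x in xs]  # Jacobian
        r = [model(a, x) - y for x, y in zip(xs, obs)]          # residuals
        JTJ = sum(j * j for j in J)
        if JTJ == 0:
            break
        a -= sum(j * ri for j, ri in zip(J, r)) / JTJ           # GN step
    return a
```

The division of labor mirrors the abstract's conclusion: the global stage supplies a good initial parameter set, and least squares supplies the fast final convergence.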
Keywords: genetic algorithms, least squares method, Gonghe earthquake, dislocation model