Journal Articles
5,885 articles found
1. Dynamic access task scheduling of LEO constellation based on space-based distributed computing
Authors: LIU Wei, JIN Yifeng, ZHANG Lei, GAO Zihe, TAO Ying. Journal of Systems Engineering and Electronics, SCIE CSCD, 2024, No. 4, pp. 842-854 (13 pages)
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not practical because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, algorithm mathematical model, trigger strategy, and distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1,500 tasks in 14 s with a success rate above 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
Keywords: beam resource allocation; distributed computing; low Earth orbit (LEO) constellation; spacecraft; access task scheduling
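As a rough illustration of the kind of genetic algorithm the abstract describes, the sketch below evolves a two-dimensional individual (beam × time-slot grid) with a single-parent mutation step. The problem size, fitness function, and mutation rate are assumptions for illustration, not the paper's actual on-board model.

```python
import random

N_BEAMS, N_SLOTS, N_TASKS = 4, 16, 40   # assumed problem size
POP, GENS = 30, 200

def random_individual():
    # 2-D individual: grid[beam][slot] = task id or -1 (idle)
    grid = [[-1] * N_SLOTS for _ in range(N_BEAMS)]
    for task in range(N_TASKS):
        b, s = random.randrange(N_BEAMS), random.randrange(N_SLOTS)
        grid[b][s] = task
    return grid

def fitness(grid):
    # assumed objective: number of distinct tasks actually scheduled
    return len({t for row in grid for t in row if t >= 0})

def mutate(parent):
    # single-parent inheritance: copy the parent, then perturb a few cells
    child = [row[:] for row in parent]
    for _ in range(3):
        b, s = random.randrange(N_BEAMS), random.randrange(N_SLOTS)
        child[b][s] = random.randrange(N_TASKS) if random.random() < 0.8 else -1
    return child

pop = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]

print("best fitness:", fitness(max(pop, key=fitness)), "of", N_TASKS, "tasks")
```

Because each child depends on a single parent, the mutation loop is the part that could be farmed out to separate on-board devices.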
2. Parallel Computing Based Solution for Reliability-constrained Distribution Network Planning
Authors: Yaqi Sun, Wenchuan Wu, Yi Lin, Hai Huang, Hao Chen. Journal of Modern Power Systems and Clean Energy, SCIE EI CSCD, 2024, No. 4, pp. 1147-1158 (12 pages)
The main goal of distribution network (DN) expansion planning is essentially to achieve minimal investment constrained by specified reliability requirements. The reliability-constrained distribution network planning (RcDNP) problem can be cast as an instance of mixed-integer linear programming (MILP), which involves an ultra-heavy computation burden, especially for large-scale DNs. In this paper, we propose a parallel computing based solution method for the RcDNP problem. The RcDNP is decomposed into a backbone grid problem and several lateral grid problems with coordination. Then, a parallelizable augmented Lagrangian algorithm with an acceleration method is developed to solve the coordinated planning problems. The lateral grid problems are solved in parallel through coordination with the backbone grid planning problem. Gauss-Seidel iteration is adopted on the subset of the convex hull of the feasible region constructed by decomposition. Under mild conditions, the optimality and convergence of the proposed method are verified. Numerical tests show that the proposed method can significantly reduce the solution time and make the RcDNP applicable to real-world problems.
Keywords: distribution network expansion planning; reliability; parallel computing
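The decomposition-plus-coordination idea can be illustrated on a toy problem: two quadratic subproblems (a "backbone" and one "lateral" grid) share a boundary variable and are coordinated Gauss-Seidel style under an augmented Lagrangian. The costs and coupling below are invented for illustration; the paper's RcDNP subproblems are MILPs, not scalars.

```python
# Toy Gauss-Seidel augmented-Lagrangian coordination between a "backbone"
# problem and one "lateral" problem sharing a boundary variable.
rho = 10.0          # penalty weight
lam = 0.0           # Lagrange multiplier for the coupling x_b == x_l
x_b, x_l = 0.0, 0.0

def argmin_backbone(x_l, lam):
    # minimize (x_b - 3)^2 + lam*x_b + rho/2*(x_b - x_l)^2  (closed form)
    return (6.0 - lam + rho * x_l) / (2 + rho)

def argmin_lateral(x_b, lam):
    # minimize 2*(x_l - 1)^2 - lam*x_l + rho/2*(x_b - x_l)^2  (closed form)
    return (4.0 + lam + rho * x_b) / (4 + rho)

for _ in range(50):
    x_b = argmin_backbone(x_l, lam)   # backbone step
    x_l = argmin_lateral(x_b, lam)    # lateral step (would run in parallel per feeder)
    lam += rho * (x_b - x_l)          # dual update
print(f"coupled solution: x_b={x_b:.3f}, x_l={x_l:.3f}")
```

With several lateral problems, each lateral step is independent given the backbone value, which is what makes the scheme parallelizable.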
3. L_(1)-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
Authors: Chuandong Qin, Yu Cao, Liqun Meng. Computers, Materials & Continua, SCIE EI, 2024, No. 5, pp. 1975-1994 (20 pages)
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L_(1)-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM. It avoids the issue of non-differentiability at the zero point encountered by the traditional hinge loss function during gradient descent optimization. Secondly, the L_(1) regularization method is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum and distributed adaptive PGD with momentum (DPGD) are proposed and applied to the L_(1)-Smooth SVM. Distributed computing is crucial in large-scale data analysis, with its value manifested in extending algorithms to distributed clusters, thus enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L_(1) regularization on the parameters, it exhibits significantly accelerated convergence speed. From the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants have achieved cutting-edge accuracy and efficiency in brain tumor detection. From pre-trained models, both PGD and DPGD outperform other models, boasting an accuracy of 95.21%.
Keywords: support vector machine; proximal stochastic gradient descent; brain tumor detection; distributed computing
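A minimal sketch of the single-machine building block, a proximal stochastic gradient step with momentum on a smoothed hinge loss with L_(1) regularization, is shown below. The particular quadratic smoothing, step sizes, and synthetic data are assumptions; the distributed Spark layer described in the paper is omitted.

```python
import numpy as np

def smooth_hinge(z):
    # quadratically smoothed hinge loss (differentiable at the former kink);
    # this particular smoothing is an assumption, not necessarily the paper's
    return np.where(z >= 1, 0.0, np.where(z <= 0, 0.5 - z, 0.5 * (1 - z) ** 2))

def smooth_hinge_grad(z):
    return np.where(z >= 1, 0.0, np.where(z <= 0, -1.0, z - 1.0))

def soft_threshold(w, t):
    # proximal operator of the L1 norm
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def proximal_sgd_momentum(X, y, lam=0.01, lr=0.1, beta=0.9, epochs=20, batch=32, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        for idx in rng.permutation(n).reshape(-1, batch):
            Xb, yb = X[idx], y[idx]
            z = yb * (Xb @ w)
            g = (Xb * (yb * smooth_hinge_grad(z))[:, None]).mean(axis=0)
            v = beta * v + g                             # momentum on the smooth part
            w = soft_threshold(w - lr * v, lr * lam)     # proximal (L1) step
    return w

# tiny synthetic two-class problem just to exercise the routine
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 10)); y = np.sign(X[:, 0] + 0.1 * rng.normal(size=256))
w = proximal_sgd_momentum(X, y)
print("nonzero weights:", int((w != 0).sum()), "train acc:", float((np.sign(X @ w) == y).mean()))
```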
4. Hybrid Hierarchical Particle Swarm Optimization with Evolutionary Artificial Bee Colony Algorithm for Task Scheduling in Cloud Computing
Authors: Shasha Zhao, Huanwen Yan, Qifeng Lin, Xiangnan Feng, He Chen, Dengyin Zhang. Computers, Materials & Continua, SCIE EI, 2024, No. 1, pp. 1135-1156 (22 pages)
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance may be the challenges for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) has been proposed, which hybridizes our presented Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both the HPSO and the EABC algorithm. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in terms of stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by the ablation experiments.
Keywords: cloud computing; distributed processing; evolutionary artificial bee colony algorithm; hierarchical particle swarm optimization; load balancing
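As a point of reference for the scheduling objective, the sketch below runs a plain continuous-encoded PSO that maps particle positions to task-to-VM assignments and minimizes makespan. It is not the paper's HPSO-EABC hybrid; the task lengths, VM speeds, and PSO coefficients are invented.

```python
import random

# Toy cloud-scheduling setup: task lengths and VM speeds are invented numbers.
TASKS = [random.uniform(10, 100) for _ in range(30)]
VM_SPEED = [1.0, 1.5, 2.0, 3.0]
N_VMS, N_PARTICLES, ITERS = len(VM_SPEED), 20, 200

def makespan(assign):
    # completion time of the busiest VM (lower is better)
    load = [0.0] * N_VMS
    for t, vm in zip(TASKS, assign):
        load[vm] += t / VM_SPEED[vm]
    return max(load)

def decode(pos):
    # continuous position -> discrete VM index per task
    return [int(abs(x)) % N_VMS for x in pos]

# standard PSO update on the continuous encoding
particles = [[random.uniform(0, N_VMS) for _ in TASKS] for _ in range(N_PARTICLES)]
velocity = [[0.0] * len(TASKS) for _ in range(N_PARTICLES)]
pbest = [p[:] for p in particles]
gbest = min(pbest, key=lambda p: makespan(decode(p)))[:]

for _ in range(ITERS):
    for i, p in enumerate(particles):
        for d in range(len(TASKS)):
            r1, r2 = random.random(), random.random()
            velocity[i][d] = (0.7 * velocity[i][d]
                              + 1.5 * r1 * (pbest[i][d] - p[d])
                              + 1.5 * r2 * (gbest[d] - p[d]))
            p[d] += velocity[i][d]
        if makespan(decode(p)) < makespan(decode(pbest[i])):
            pbest[i] = p[:]
    gbest = min(pbest, key=lambda q: makespan(decode(q)))[:]

print("best makespan:", round(makespan(decode(gbest)), 2))
```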
5. A Noise Reduction Method for Multiple Signals Combining Computed Order Tracking Based on Chirplet Path Pursuit and Distributed Compressed Sensing
Authors: Guangfei Jia, Fengwei Guo, Zhe Wu, Suxiao Cui, Jiajun Yang. Structural Durability & Health Monitoring, EI, 2023, No. 5, pp. 383-405 (23 pages)
With the development of multi-signal monitoring technology, research on multiple signal analysis and processing has become a hot subject. Mechanical equipment often works under variable working conditions, and the acquired vibration signals are often non-stationary and nonlinear, which makes them difficult to process with traditional analysis methods. In order to solve the noise reduction problem of multiple signals under variable speed, a COT-DCS method combining Computed Order Tracking (COT) based on Chirplet Path Pursuit (CPP) and Distributed Compressed Sensing (DCS) is proposed. Firstly, the instantaneous frequency (IF) is extracted by CPP, and the speed is obtained by fitting. Then, the speed is used for equal-angle sampling of the time-domain signals, and angle-domain signals are obtained by COT without a tachometer to eliminate the non-stationarity; the angle-domain signals are then compressed and reconstructed by DCS to achieve noise reduction of multiple signals. The accuracy of the CPP method is verified on simulated and experimental signals and compared with some existing IF extraction methods. The COT method also shows good signal stabilization ability through simulation and experiment. Finally, combined with a comparative test against two other algorithms and four noise reduction effect indicators, the COT-DCS method based on CPP combines the advantages of the two algorithms and has better noise reduction effect and stability. It is shown that this method is an effective multi-signal noise reduction method.
Keywords: gearbox fault diagnosis; chirplet path pursuit; computed order tracking; distributed compressed sensing
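The computed order tracking step, resampling a time-domain signal at equal shaft-angle increments from an instantaneous-frequency profile, can be sketched as follows. The linear speed ramp and the single order-3 component are assumptions standing in for the CPP-extracted IF, not the paper's test data.

```python
import numpy as np

# Toy computed-order-tracking step: resample a time-domain signal at equal
# shaft-angle increments, given an instantaneous frequency profile.
fs, T = 2000.0, 4.0
t = np.arange(0.0, T, 1.0 / fs)
f_inst = 10.0 + 5.0 * t                      # assumed IF (Hz), e.g. fitted from CPP
phase = 2 * np.pi * np.cumsum(f_inst) / fs   # shaft angle in radians
x = np.sin(3 * phase) + 0.3 * np.random.default_rng(0).normal(size=t.size)  # order-3 component + noise

# equal-angle sampling grid and interpolation from time to angle domain
samples_per_rev = 64
theta = np.arange(phase[0], phase[-1], 2 * np.pi / samples_per_rev)
x_angle = np.interp(theta, phase, x)

# in the angle domain the order-3 component sits at a fixed "order" bin
spectrum = np.abs(np.fft.rfft(x_angle * np.hanning(x_angle.size)))
orders = np.fft.rfftfreq(x_angle.size, d=1.0 / samples_per_rev)
print("dominant order ~", orders[spectrum.argmax()])
```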
6. The Climate Impact of Large-Scale Distributors: Acting on the Supply Chain
Authors: Daniele Pernigotti, Arianna Bertoni. Journal of Environmental Science and Engineering (A), 2023, No. 2, pp. 53-58 (6 pages)
This work aims to analyse the actions that companies working in large-scale distribution carry out along their value chains to minimise impacts on climate change. Companies operating in this field are aware that it is less effective to act directly on their core processes and that they need to involve the upstream value chain in their carbon reduction strategy. These businesses, in fact, need to focus on indirect GHG (Greenhouse Gas) emissions and depend on how suppliers manage their impacts. In this sector, virtuous companies collaborate with their suppliers on a common path of quantifying and cutting those impacts together. This aspect is particularly relevant in the case of large-scale retailers. However, the process is not immediate, since the supply chain is usually very dense and diverse, for instance adopting various approaches that do not always coincide. In any case, the key aspect is mapping these suppliers: one of the tools most used for this purpose is the survey, a quick instrument able to reach hundreds of suppliers at the same time and receive a fast, standardized response, which can easily be processed to form a comprehensive and harmonized mapping of the results as the first step toward the subsequent implementation of mitigation strategies.
Keywords: climate change; suppliers; value chain; sustainability; large-scale distribution
7. A Distributed Framework for Large-scale Protein-protein Interaction Data Analysis and Prediction Using MapReduce (Cited by 2)
Authors: Lun Hu, Shicheng Yang, Xin Luo, Huaqiang Yuan, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2022, No. 1, pp. 160-172 (13 pages)
Protein-protein interactions are of great significance for humans to understand the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework by reimplementing one of the state-of-the-art algorithms, i.e., CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, its procedure is modified to follow the MapReduce framework so that the prediction task is carried out distributively. A series of extensive experiments has been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. Experimental results demonstrate that the proposed framework can considerably improve computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
Keywords: distributed computing; large-scale prediction; machine learning; MapReduce; protein-protein interaction (PPI)
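The map/reduce shape of such a framework can be sketched as below, with Python's multiprocessing standing in for a Hadoop/Spark cluster and a simple k-mer overlap score standing in for CoFex's actual feature model; all sequences and the threshold are invented.

```python
from multiprocessing import Pool
from collections import defaultdict

# Minimal map/reduce-shaped skeleton for scoring protein pairs in parallel.
PROTEINS = {
    "P1": "MKTAYIAKQR", "P2": "MKTAYLAKQR", "P3": "GAVLIPFYWS", "P4": "MKTAYIAKQQ",
}
PAIRS = [("P1", "P2"), ("P1", "P3"), ("P2", "P4"), ("P3", "P4")]

def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def map_pair(pair):
    # map step: emit (pair, similarity score) for one candidate interaction
    a, b = pair
    ka, kb = kmers(PROTEINS[a]), kmers(PROTEINS[b])
    return pair, len(ka & kb) / len(ka | kb)

def reduce_scores(mapped, threshold=0.3):
    # reduce step: collect scores and keep predicted interactions
    predictions = defaultdict(float)
    for pair, score in mapped:
        predictions[pair] = score
    return {p: s for p, s in predictions.items() if s >= threshold}

if __name__ == "__main__":
    with Pool(processes=2) as pool:          # stand-in for a MapReduce cluster
        mapped = pool.map(map_pair, PAIRS)
    print(reduce_scores(mapped))
```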
8. A Public Blockchain Consensus Mechanism for Fault-Tolerant Distributed Computing in LEO Satellite Communications (Cited by 2)
Authors: Zhen Zhang, Bing Guo, Lidong Zhu, Yan Shen, Chaoxia Qin, Chengjie Li. China Communications, SCIE CSCD, 2022, No. 7, pp. 110-123 (14 pages)
In LEO (Low Earth Orbit) satellite communication systems, the satellite network is made up of a large number of satellites, and the dynamically changing network environment affects the results of distributed computing. In order to improve the fault tolerance rate, a novel public blockchain consensus mechanism that applies a distributed computing architecture in a public network is proposed. Redundant calculation on the blockchain ensures the credibility of the results, and the transactions carrying the calculation results of a task are stored distributively, in sequence, in Directed Acyclic Graphs (DAG). The transactions issued by nodes are connected to form a net, which can quickly provide node reputation evaluation that does not rely on third parties. Simulations show that our proposed blockchain has the following advantages: 1. the task processing speed of the blockchain can be close to that of the fastest node in the entire blockchain; 2. when the tasks' arrival time intervals and demanded working nodes (WNs) meet certain conditions, the network can tolerate more than 50% of malicious devices; 3. whether the number of nodes in the blockchain is increased or reduced, the network can remain robust by adjusting the tasks' arrival time interval and demanded WNs.
Keywords: distributed computing; public blockchain network; consensus mechanism; credibility; fault tolerance
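The redundancy-plus-reputation idea can be sketched as follows: the same task is executed by several working nodes, the majority result is accepted, and reputations are updated. The reputation rule and the faulty-node model are invented for illustration, and the DAG transaction structure is not modeled.

```python
from collections import Counter

# Sketch of redundant computation with majority acceptance and reputation update.
def run_task(task, nodes, reputation):
    results = {name: fn(task) for name, fn in nodes.items()}
    accepted, votes = Counter(results.values()).most_common(1)[0]
    for name, value in results.items():
        reputation[name] += 1 if value == accepted else -2   # penalize disagreement
    return accepted, votes

honest = lambda x: x * x
faulty = lambda x: x * x + 1            # a malicious or faulty device

nodes = {"sat-A": honest, "sat-B": honest, "sat-C": faulty, "sat-D": honest}
reputation = {name: 0 for name in nodes}
for task in range(5):
    result, votes = run_task(task, nodes, reputation)
    print(f"task {task}: accepted {result} with {votes}/{len(nodes)} votes")
print("reputation:", reputation)
```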
9. A Distributed Computing Framework Based on Lightweight Variance Reduction Method to Accelerate Machine Learning Training on Blockchain (Cited by 1)
Authors: Zhen Huang, Feng Liu, Mingxing Tang, Jinyan Qiu, Yuxing Peng. China Communications, SCIE CSCD, 2020, No. 9, pp. 77-89 (13 pages)
To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solver algorithms converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method, which is a lightweight, low-overhead, and parallelized scheme for the model training process. To validate the claims, we have conducted several experiments on multiple classical datasets. Results show that our proposed computing framework can steadily accelerate the training process of the solver in either local mode or distributed mode.
Keywords: machine learning; optimization algorithm; blockchain; distributed computing; variance reduction
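The variance-reduction ingredient can be illustrated with an SVRG-style gradient estimator on logistic regression. The sketch uses plain gradient steps rather than the paper's L-BFGS directions, and the data and step sizes are assumptions.

```python
import numpy as np

# SVRG-style variance-reduced gradient steps on logistic regression.
def grad_i(w, x, y):
    # gradient of log(1 + exp(-y * w.x)) for one sample
    return -y * x / (1.0 + np.exp(y * (x @ w)))

def svrg(X, y, lr=0.5, outer=10, inner=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(outer):
        w_snap = w.copy()
        full_grad = np.mean([grad_i(w_snap, X[i], y[i]) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced stochastic gradient: per-sample difference plus full gradient
            g = grad_i(w, X[i], y[i]) - grad_i(w_snap, X[i], y[i]) + full_grad
            w -= lr * g
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5)); y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]))
w = svrg(X, y)
print("train accuracy:", float((np.sign(X @ w) == y).mean()))
```

In an L-BFGS variant, the variance-reduced gradient g would feed the two-loop recursion instead of being applied directly.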
10. Wireless distributed computing for cyclostationary feature detection (Cited by 1)
Authors: Mohammed I.M. Alfaqawi, Jalel Chebil, Mohamed Hadi Habaebi, Dinesh Datla. Digital Communications and Networks, SCIE, 2016, No. 1, pp. 46-55 (10 pages)
Recently, the wireless distributed computing (WDC) concept has emerged, promising manifold improvements to current wireless technologies. Despite the various expected benefits of this concept, significant drawbacks have been addressed in the open literature. One of WDC's key challenges is the impact of wireless channel quality on the load of distributed computations. Therefore, this research investigates the wireless channel impact on WDC performance when the latter is applied to spectrum sensing in cognitive radio (CR) technology. However, a trade-off is found between accuracy and computational complexity in spectrum sensing approaches: increasing the accuracy of these approaches is accompanied by an increase in computational complexity, which results in greater power consumption and processing time. A novel WDC scheme for the cyclostationary feature detection spectrum sensing approach is proposed in this paper and thoroughly investigated. The benefits of the proposed scheme are firstly presented. Then, the impact of the wireless channel on the proposed scheme is addressed considering two scenarios. In the first scenario, workload matrices are distributed over the wireless channel …
Keywords: cognitive radio; spectrum sensing; cyclostationary feature detection; FFT time smoothing algorithms; wireless distributed computing
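A much-simplified cyclostationary feature detector is sketched below: the conjugate cyclic autocorrelation at lag zero is estimated block by block (the per-block work is the kind of load a WDC scheme would distribute across devices) and then averaged. The BPSK-like signal and all parameters are assumptions for illustration.

```python
import numpy as np

# Simplified cyclostationary feature detection via block-wise time smoothing.
fs, fc, n = 8000.0, 1000.0, 8192
rng = np.random.default_rng(0)
t = np.arange(n) / fs
symbols = np.repeat(rng.choice([-1.0, 1.0], size=n // 32), 32)   # BPSK-like envelope
x = symbols * np.cos(2 * np.pi * fc * t) + 0.8 * rng.normal(size=n)

def cyclic_feature(x_block, t_block, alpha):
    # time-smoothed estimate of E[x(t)^2 * exp(-j*2*pi*alpha*t)]
    return np.mean(x_block ** 2 * np.exp(-2j * np.pi * alpha * t_block))

alphas = np.arange(100.0, 3000.0, 25.0)
blocks = 8                                                    # candidate work units
features = np.zeros(alphas.size, dtype=complex)
for xb, tb in zip(np.split(x, blocks), np.split(t, blocks)):  # distributable loop
    features += np.array([cyclic_feature(xb, tb, a) for a in alphas]) / blocks

peak = alphas[np.abs(features).argmax()]
print(f"strongest cyclic feature near alpha = {peak:.0f} Hz (expect 2*fc = {2*fc:.0f} Hz)")
```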
11. Distributed Optimal Control for Traffic Networks with Fog Computing
Authors: Yijie Wang, Lei Wang, Saeed Amir, Qing-Guo Wang. China Communications, SCIE CSCD, 2019, No. 10, pp. 202-213 (12 pages)
This paper presents a distributed optimization strategy for large-scale traffic networks based on fog computing. Different from the traditional cloud-based centralized optimization strategy, the fog-based distributed optimization strategy distributes its computing tasks to individual sub-processors, thus significantly reducing computation time. A traffic model is built and a series of communication rules between subsystems is set to ensure that the entire transportation network can be globally optimized while each subsystem achieves its local optimization. Finally, this paper numerically simulates the operation of the traffic network by mixed-integer programming and compares the advantages and disadvantages of the two optimization strategies.
Keywords: fog computing; traffic network; distributed optimization; distributed control
12. DCCS: A General-Purpose Distributed Cryptographic Computing System
Authors: JIANG Zhonghua, LIN Dongdai, XU Lin, LIN Lei. Wuhan University Journal of Natural Sciences, CAS, 2007, No. 1, pp. 46-50 (5 pages)
Distributed cryptographic computing systems play an important role since cryptographic computing is extremely computation-intensive. However, no general cryptographic computing system is available. Grid technology can provide efficient computational support for cryptographic applications. Therefore, a general-purpose grid-based distributed computing system called DCCS is put forward in this paper. The architecture of DCCS is briefly described first. The policy of task division adopted in DCCS is then presented. The method to manage subtasks is further discussed in detail. Furthermore, the building and execution process of a computing job is described. Finally, the details of the DCCS implementation under Globus Toolkit 4 are illustrated.
Keywords: cryptography; distributed computing; execution plan; computational grid
13. Short-circuit Analysis in Large-scale Distribution Systems With High Penetration of Distributed Generators
Authors: Luka V. Strezoski, Marija D. Prica. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2017, No. 2, pp. 243-251 (9 pages)
In this paper, a short-circuit computation (SCC) procedure for large-scale distribution systems with high penetration of distributed generators based on contemporary technologies is proposed. The procedure is suitable for real-time calculations. Modeling of modern distributed generators differs from the modeling of traditional synchronous and induction generators. Hence, SCC procedures founded on the presumption of distribution systems with only traditional generators are not suitable for today's systems. In the work presented in this paper, the improved backward/forward sweep (IBFS) procedure is used to compute the state of the system under a short circuit. Computation results show that the IBFS procedure is much more robust than previous SCC procedures, as it takes into account all distribution system elements, including modern distributed generators.
Keywords: distributed generation (DG); distribution system; distribution management system (DMS); short-circuit computation
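For context, the textbook backward/forward sweep on a small radial feeder looks like the sketch below. It is the basic load-flow version, not the paper's improved (IBFS) short-circuit procedure, and the impedances and loads are invented per-unit numbers.

```python
# Minimal backward/forward sweep on a 3-bus radial feeder (per-unit, balanced).
V_source = 1.0 + 0j
z = [0.02 + 0.06j, 0.03 + 0.08j]          # branch impedances: bus0->1, bus1->2
s_load = [0.5 + 0.2j, 0.3 + 0.1j]         # complex loads at bus 1 and bus 2

V = [V_source, V_source, V_source]        # flat start
for _ in range(20):
    # backward sweep: accumulate branch currents from the feeder end
    i_load = [(s_load[k] / V[k + 1]).conjugate() for k in range(2)]
    i_branch = [i_load[0] + i_load[1], i_load[1]]
    # forward sweep: update voltages from the source down the feeder
    V[1] = V[0] - z[0] * i_branch[0]
    V[2] = V[1] - z[1] * i_branch[1]

for k, v in enumerate(V):
    print(f"bus {k}: |V| = {abs(v):.4f} p.u.")
```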
14. Dynamic Allocation Strategy Based on Pre-allocation and Agent to Implement Ada95's Distributed Computing
Authors: Zhu Fu-xi, Fu Jian-ming, Wu Chan-le, Cao Zheng (School of Computer, Wuhan University, Wuhan 430072, Hubei, China). Wuhan University Journal of Natural Sciences, CAS, 2003, No. 04A, pp. 1061-1064 (4 pages)
This paper discusses a model of how agents are applied to implement the distributed computing of Ada95 and presents a dynamic allocation strategy for distributed computing based on pre-allocation and agents. The aim of this strategy is to realize dynamic equilibrium allocation.
Keywords: distributed computing; Ada95; agent; equilibrium allocation
15. A Java-Based Multi-tier Distributed Object Enterprise Computing Model
Authors: 李春林, 李腊元. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2001, No. 4, pp. 85-90 (6 pages)
In this paper, we adopt the Java platform to achieve a multi-tier distributed object enterprise computing model which provides an open, flexible, robust and cross-platform standard for enterprise applications of the new generation. In addition to this model, we define remote server objects as session or entity objects according to their roles in a distributed application server, which separates information details from business operations for software reuse. A web store system is implemented using this multi-tier distributed object enterprise computing model.
Keywords: distributed object computing; remote method invocation (RMI); Java Servlet
16. Planning Management of Multiagent-based Distributed Open Computing Environment Model
Authors: 何炎祥. High Technology Letters, EI CAS, 1998, No. 1, pp. 57-61 (5 pages)
This paper examines planning management problems in a Multiagent-based Distributed Open Computing Environment Model (MDOCEM). First the meaning of planning management in MDOCEM is introduced, then a formal method to describe the associated task partition problems is presented, and a heuristic algorithm which gives an approximate optimum solution is given. Finally the task coordination and integration of execution results are discussed.
Keywords: distributed open computing environment; agent; task partition; planning management
17. A Cloud Computing Perspective for Distributed Routing in Vehicular Environments
Authors: Smitha Shivshankar, Abbas Jamalipour. ZTE Communications, 2016, No. 3, pp. 36-44 (9 pages)
Vehicular networks have been envisioned to provide us with numerous interesting services such as dissemination of real-time safety warnings and commercial advertisements via car-to-car communication. However, efficient routing is a research challenge due to the highly dynamic nature of these networks. Nevertheless, the availability of connections imposes an additional constraint. Our earlier work in the area of efficient dissemination integrates the advantages of middleware operations with multicast routing to design a framework for distributed routing in vehicular networks. Cloud computing makes use of pools of physical computing resources to meet the requirements of such highly dynamic networks. The proposed solution in this paper applies the principles of cloud computing to our existing framework. The routing protocol works at the network layer for the formation of clouds in specific geographic regions. Simulation results present the efficiency of the model in terms of service discovery, download time and the queuing delay at the controller nodes.
Keywords: cloud computing; distributed routing; vehicular networks
18. A Distributed LRTCO Algorithm in Large-Scale DVE Multimedia Systems
Authors: Hangjun Zhou, Guang Sun, Sha Fu, Wangdong Jiang, Tingting Xie, Danqing Duan. Computers, Materials & Continua, SCIE EI, 2018, No. 7, pp. 73-89 (17 pages)
In large-scale Distributed Virtual Environment (DVE) multimedia systems, one of the key challenges is to preserve causal order delivery of messages distributedly and in real time. Most of the existing causal order control approaches with real-time constraints use vector time as causal control information, which is closely coupled with system scale. As the scale expands, each message is attached with a large amount of control information, which introduces too much network transmission overhead to maintain real-time causal order delivery. In this article, a novel Lightweight Real-Time Causal Order (LRTCO) algorithm is proposed for large-scale DVE multimedia systems. LRTCO predicts and compares the network transmission times of messages so as to select the proper causal control information, the amount of which is dynamically adapted to network latency variations and is unconcerned with system scale. The control information in LRTCO is effective in preserving causal order delivery of messages and lightweight enough to maintain the real-time property of DVE systems. Experimental results demonstrate that LRTCO incurs low transmission overhead and communication bandwidth, reduces causal order violations efficiently, and improves the scalability of DVE systems.
Keywords: distributed computing; distributed virtual environment; multimedia system; causality violation; causal order delivery; real time
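For contrast with vector-time approaches, a generic causal-order delivery buffer is sketched below: a message is held until its declared predecessors have been delivered. LRTCO itself carries much lighter control information derived from predicted transmission times; the explicit dependency lists here are a simplification for illustration only.

```python
# Generic causal-order delivery buffer: a message is delivered only after all
# messages it causally depends on have been delivered.
class CausalReceiver:
    def __init__(self):
        self.delivered = set()
        self.pending = []          # messages waiting for their dependencies

    def receive(self, msg_id, deps, payload):
        self.pending.append((msg_id, set(deps), payload))
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for item in list(self.pending):
                msg_id, deps, payload = item
                if deps <= self.delivered:             # all predecessors delivered
                    print(f"deliver {msg_id}: {payload}")
                    self.delivered.add(msg_id)
                    self.pending.remove(item)
                    progress = True

r = CausalReceiver()
r.receive("m2", ["m1"], "avatar moved")    # arrives early, buffered
r.receive("m1", [], "avatar spawned")      # releases m2 once delivered
```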
19. Antenna selection based on large-scale fading for distributed MIMO systems
Authors: 施荣华, Yuan Zexi, Dong Jian, Lei Wentai, Peng Chunhua. High Technology Letters, EI CAS, 2016, No. 3, pp. 233-240 (8 pages)
An antenna selection algorithm based on large-scale fading between the transmitter and receiver is proposed for uplink receive antenna selection in distributed multiple-input multiple-output (D-MIMO) systems. By utilizing radio access unit (RAU) selection based on large-scale fading, the proposed algorithm enormously decreases the computational complexity. Based on the characteristics of distributed systems, an improved particle swarm optimization (PSO) is proposed for the antenna selection after the RAU selection. In order to better apply the improved PSO algorithm to antenna selection, a general form of the channel capacity is transformed into a binary expression by analyzing the capacity formula. The proposed algorithm can make full use of the advantages of D-MIMO systems and achieve near-optimal performance in terms of channel capacity with low computational complexity.
Keywords: distributed MIMO systems; antenna selection; particle swarm optimization; large-scale fading
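The two-stage idea can be sketched as follows: RAUs are pre-selected by large-scale fading, and a capacity-maximizing antenna subset is then chosen. A greedy search and real-valued channels stand in for the paper's improved binary PSO and the actual channel model; all sizes and statistics are assumed.

```python
import numpy as np

# Two-stage sketch: (1) pre-select RAUs by large-scale fading,
# (2) pick receive antennas maximizing log-det capacity.
rng = np.random.default_rng(0)
n_rau, ant_per_rau, n_tx = 6, 4, 2
large_scale = rng.uniform(0.05, 1.0, size=n_rau)              # path-loss/shadowing gains

# stage 1: keep the RAUs with the strongest large-scale fading
kept_raus = np.argsort(large_scale)[-3:]
H = np.vstack([np.sqrt(large_scale[r]) * rng.normal(size=(ant_per_rau, n_tx))
               for r in kept_raus])                           # candidate rows = antennas

def capacity(rows, snr=10.0):
    Hs = H[list(rows)]
    return np.log2(np.linalg.det(np.eye(n_tx) + (snr / n_tx) * Hs.T @ Hs)).real

# stage 2: greedy antenna selection (stand-in for the binary PSO)
selected, budget = [], 4
while len(selected) < budget:
    best = max((i for i in range(H.shape[0]) if i not in selected),
               key=lambda i: capacity(selected + [i]))
    selected.append(best)
print("selected antenna rows:", selected, "capacity:", round(capacity(selected), 2))
```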
20. Regularized focusing inversion for large-scale gravity data based on GPU parallel computing
Authors: WANG Haoran, DING Yidan, LI Feida, LI Jing. Global Geology, 2019, No. 3, pp. 179-187 (9 pages)
Processing large-scale 3-D gravity data is an important topic in the geophysics field. Many existing inversion methods lack the capability to process massive data and the capacity for practical application. This study proposes the application of GPU parallel processing technology to the focusing inversion method, aiming at improving the inversion accuracy while speeding up calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfer, access restrictions and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up the calculation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-thread CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape.
Keywords: large-scale gravity data; GPU parallel computing; CUDA; equivalent geometric trellis; focusing inversion
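The computation being accelerated is essentially a dense sensitivity-matrix product. The sketch below builds a toy point-mass gravity kernel in NumPy; swapping NumPy for CuPy would move the same matvec onto a CUDA GPU. The kernel, cell sizes, and anomalous block are assumptions, not the paper's equivalent geometric-trellis storage.

```python
import numpy as np
# import cupy as cp   # the same code runs on the GPU if np is replaced by cupy

# Toy gravity forward model: sensitivity matrix G maps cell densities to
# surface anomalies; sizes are kept tiny for illustration.
GAMMA = 6.674e-11
nx, ny, nz = 10, 10, 5                      # model cells
obs = np.stack(np.meshgrid(np.linspace(0, 90, 10), np.linspace(0, 90, 10)), -1).reshape(-1, 2)

cells = np.array([(ix * 10 + 5, iy * 10 + 5, iz * 10 + 5)
                  for ix in range(nx) for iy in range(ny) for iz in range(nz)], dtype=float)
dv = 10.0 ** 3                               # cell volume (m^3)

# vertical-component kernel g_z = GAMMA * dv * z / r^3 for every (station, cell) pair
dx = obs[:, 0, None] - cells[None, :, 0]
dy = obs[:, 1, None] - cells[None, :, 1]
dz = cells[None, :, 2]                       # stations at the surface (z = 0)
r3 = (dx ** 2 + dy ** 2 + dz ** 2) ** 1.5
G = GAMMA * dv * dz / r3                     # sensitivity matrix (n_obs x n_cells)

density = np.zeros(cells.shape[0]); density[200:260] = 500.0   # an assumed anomalous block
anomaly = G @ density                        # the matvec a GPU (e.g. CuPy/CUDA) would accelerate
print("G shape:", G.shape, "max anomaly:", anomaly.max(), "m/s^2")
```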