Abstract: Network storage increases the capacity and scalability of a storage system, improves data availability, and enables data sharing among clients. As developing network technology narrows the performance gap between disk and network, however, mismatched policies and access patterns can significantly reduce network storage performance. The data placement strategy is therefore an important factor in overall system performance. In this paper, two file assignment algorithms are presented. One is Greedy partition, which aims at load balance across all NADs (Network Attached Disks). The other is Sort partition, which tries to minimize the variance of service time within each NAD. We also compare the performance of the two algorithms in a practical environment. Our experimental results show that when the size distribution (load characteristics) of the assigned files is narrower and the files are larger, Sort partition provides consistently better response times than the Greedy algorithm. However, when the size range of the assigned files is wider, with more small files and higher access rates, the Greedy algorithm outperforms Sort partition in the off-line case.
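The two heuristics contrasted in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function names and the use of a per-file service-time list are assumptions.

```python
def greedy_partition(service_times, n_disks):
    """Greedy partition: assign each file to the currently
    least-loaded disk, balancing total load across disks."""
    disks = [[] for _ in range(n_disks)]
    loads = [0.0] * n_disks
    for i, t in enumerate(service_times):
        d = loads.index(min(loads))  # least-loaded disk so far
        disks[d].append(i)
        loads[d] += t
    return disks

def sort_partition(service_times, n_disks):
    """Sort partition: sort files by service time, then cut the
    sorted list into contiguous runs so each disk holds files of
    similar size (minimizing service-time variance per disk)."""
    order = sorted(range(len(service_times)), key=lambda i: service_times[i])
    k, r = divmod(len(order), n_disks)
    disks, start = [], 0
    for d in range(n_disks):
        size = k + (1 if d < r else 0)
        disks.append(order[start:start + size])
        start += size
    return disks
```

The trade-off the experiments report follows directly from these shapes: Sort partition groups similar files per disk (low variance, good when sizes cluster), while Greedy tracks load imbalance request by request (robust when sizes are mixed).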
Abstract: With the rapid development of Internet technology and the rapid growth of multimedia information, people place higher demands on the security, reliability, stability, and efficiency of multimedia file transmission. Given the current state of networks, security problems are widespread, so ensuring the safe transmission of files is an important research topic. To improve the security of database information, classification and archival storage should also be handled well. Multimedia files must be heavily compressed so that they can be transmitted efficiently over traditional data communication networks. The rapid development of computer networks has made people's lives increasingly convenient; in using them, it is necessary to actively strengthen security controls on file transmission and take appropriate measures to avoid transmission security risks.
Funding: Supported by the National Natural Science Foundation of China (No. 60672124) and the National High Technology Research and Development Program of China (No. 2007AA01Z221).
Abstract: In order to improve the performance of peer-to-peer file sharing systems in mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and a corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence file transfer success rate and efficiency. Whereas min-hops based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preferences and network requirements with flexible balancing rules. Furthermore, it is independent of the underlying resource discovery protocol, which allows for scalability. The simulation results show that with the AOC based peer selection algorithm, system performance is much better than with the min-hops scheme: the file transfer success rate improves by more than 50% and transfer time is reduced by at least 20%.
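The core idea of coordinating several factors instead of hop count alone can be sketched as a weighted score over normalized peer attributes. The function name, the factor names, and the linear combination are illustrative assumptions; the paper's fuzzy combination rules are more elaborate.

```python
def aoc_select(candidates, weights):
    """Pick the candidate peer with the best weighted score.
    Each candidate is a dict of factor -> value in [0, 1], where
    higher is better (e.g. bandwidth, stability, closeness = 1/hops).
    A min-hops scheme would use only the closeness factor."""
    def score(peer):
        return sum(weights[f] * peer[f] for f in weights)
    return max(candidates, key=score)
```

Under such a rule, a slightly more distant peer with much better bandwidth and stability beats the nearest peer, which is the behavior the min-hops criterion cannot express.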
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (2011AA1569).
Abstract: Satellite networking communications in navigation satellite systems and space-based deep space exploration feature long delays and high bit error rates (BER). By analyzing the advantages and disadvantages of the Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), a new improved repeated-sending file delivery protocol (RSFDP), based on adaptive repeated sending, is put forward to achieve efficient and reliable file transmission. Based on an estimate of the BER of the transmission link, RSFDP repeatedly sends the lost protocol data units (PDUs) during the retransmission stage to improve the success rate and reduce retransmission time. Theoretical analyses and Opnet simulation results indicate that RSFDP offers significant performance gains over CFDP on links with long delay and high BER. Implementation results on a spaceborne field-programmable gate array (FPGA) platform show the applicability of the proposed algorithm.
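The "adaptive" part of adaptive repeated sending can be illustrated with a small calculation: given the link BER, derive the per-PDU loss probability and choose the smallest number of repeated copies that drives the residual loss below a target. The function names, the independent-bit-error assumption, and the target value are assumptions for illustration, not RSFDP's actual formulas.

```python
import math

def pdu_loss_prob(ber, pdu_bits):
    """Probability that a PDU of pdu_bits suffers at least one bit
    error, assuming independent bit errors at rate `ber`."""
    return 1.0 - (1.0 - ber) ** pdu_bits

def copies_needed(ber, pdu_bits, target=1e-3):
    """Smallest number of repeated copies n such that the chance
    that ALL n copies of a retransmitted PDU are lost falls below
    `target`; higher BER -> more copies per retransmission round."""
    p = pdu_loss_prob(ber, pdu_bits)
    if p <= 0.0:
        return 1
    n = math.ceil(math.log(target) / math.log(p))
    return max(1, n)
```

On a long-delay link each saved retransmission round avoids a full round-trip wait, which is why paying bandwidth for extra copies can still reduce total delivery time.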
Funding: This work is supported by the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.201714), the Weihai Science and Technology Development Program (2016DXGJMS15), and the Key Research and Development Program of Shandong Province (2017GGX90103).
Abstract: In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed, according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure prefetching accuracy. With massive numbers of small files, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a prefetching model for massive files based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. Firstly, we propose a file clustering algorithm based on temporal and spatial locality to reduce computational complexity. Secondly, we define cache transactions according to file occurrence in the cache, instead of using time-offset-distance based methods, to extract file block features accurately. Lastly, we propose a file access prediction algorithm based on an LSTM neural network that predicts the files most likely to be accessed next. Experiments show that, compared with the traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
Abstract: A new routing algorithm for peer-to-peer file sharing systems with routing indices was proposed, in which a node forwards a query to the neighbors that are more likely to have answers, based on its statistics. The proposed algorithm was tested by creating a P2P simulator and varying the input parameters, and was compared with search algorithms using flooding (FLD) and random walk (RW). The results show that with the proposed design, queries are routed effectively, network flows are reduced remarkably, and the peer-to-peer file sharing system gains good scalability.
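The forwarding decision behind a routing index can be sketched as: each node keeps per-neighbor statistics of how many answers each neighbor's direction has yielded per topic, and sends the query toward the best-scoring neighbor instead of flooding. The data layout and function name below are illustrative assumptions.

```python
def best_neighbor(routing_index, topic):
    """Forward a query toward the neighbor whose routing index
    promises the most answers for `topic`.
    routing_index: {neighbor: {topic: answer_count}}.
    Flooding would instead send to every neighbor."""
    return max(routing_index, key=lambda n: routing_index[n].get(topic, 0))
```

Sending each query to one (or a few) statistically promising neighbors rather than all of them is exactly where the reported reduction in network flows comes from.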
Abstract: This project was designated as Meritorious in the Mathematical Contest in Modeling (MCM'94). We were required to solve the problem of finding the best schedule for a file transfer network in order to make the makespan the smallest. Three situations with
Funding: This work was supported by the National Key Research & Development Plan of China under Grant 2016QY05X1000, the National Natural Science Foundation of China under Grant No. 61771166, and the CERNET Innovation Project (NGII20170412).
Abstract: The rapid development of Internet of Things (IoT) technology has made previously unavailable data available, and applications can take advantage of device data that people can visualize, explore, and build complex analyses on. As the size of the network and the number of users continue to increase, network requests tend to aggregate on a small number of network resources, resulting in an uneven load. Real-time, highly reliable network file distribution technology is therefore of great importance in the IoT. This paper studies real-time, highly reliable file distribution for large-scale networks: it surveys current file distribution technology, proposes a file distribution model, and proposes a corresponding load balancing method based on that model. Experiments show that the system achieves real-time, highly reliable network transmission.
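The abstract does not specify its balancing rule, so as one plausible sketch of spreading aggregated requests across file servers, a dispatcher can pick the replica with the lowest load-to-capacity ratio. The name, the server tuple layout, and the ratio rule are all assumptions for illustration.

```python
def pick_server(servers):
    """Choose the replica server with the lowest load-to-capacity
    ratio, so requests stop piling onto a few hot resources.
    servers: {name: (current_load, capacity)}."""
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])
```

A weighted rule like this (rather than plain round-robin) matters when servers have heterogeneous capacity, which is typical of large-scale IoT deployments.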
Abstract: The increase in issue width and instruction window size in modern processors demands an increase in the size of register files, as well as in their number of ports. Bigger register files imply higher power consumption and longer access delays. Models that assist in estimating the size and timing of the register file early in the design cycle are critical to the time budget allocated to a processor design and to its performance. In this work, we discuss a Radial Basis Function (RBF) Artificial Neural Network (ANN) model for predicting the delay and area of standard cell register files designed using optimized Synopsys DesignWare components and a UMC 130 nm library. The ANN model predictions were compared against experimental results (obtained using detailed simulation) and a nonlinear regression-based model; the ANN model is very accurate and outperformed the nonlinear model on several statistical measures. Using the trained ANN model, a parametric study was carried out on the effect of the number of words in the file (D), the number of bits per word (W), and the total number of read and write ports (P) on the latency and area of standard cell register files.
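The modeling idea can be sketched with a minimal exact-interpolation RBF: one Gaussian centre per training sample, weights obtained by solving a linear system, then prediction as a weighted sum of kernels over (D, W, P). This is a toy version under assumed hyperparameters, not the paper's trained network, and the sample design points and latencies below are invented.

```python
import numpy as np

def rbf_fit(X, y, gamma=1.0):
    """Fit an exact-interpolation RBF model: Gaussian kernel matrix
    over the training inputs, weights from a linear solve."""
    X = np.asarray(X, float)
    G = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(G, np.asarray(y, float))
    return X, w

def rbf_predict(model, x, gamma=1.0):
    """Predict at point x as a kernel-weighted sum over the centres."""
    centers, w = model
    phi = np.exp(-gamma * ((np.asarray(x, float) - centers) ** 2).sum(-1))
    return float(phi @ w)
```

In practice one would normalize D, W, and P and tune gamma by cross-validation; exact interpolation is shown here only because it makes the mechanics visible in a few lines.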
Abstract: The evolution of telecommunications has enabled broadband services based mainly on fiber optic backbone networks. The operation and maintenance of these optical networks relies on supervision platforms that generate alarms, which can be archived as log files. But analyzing the alarms in the log files is a laborious and difficult task that requires a degree of expertise from engineers. Identifying failures and their root causes can be time consuming and impact the quality of service, network availability, and the service level agreements signed between the operator and its customers. It is therefore important to study the possible classifications of alarms and to use machine learning algorithms for alarm correlation, in order to determine the root causes of problems faster. We conducted a case study on one of the operators in Cameroon, which runs an optical backbone based on SDH and WDM technologies, with data collected from 2016-03-28 to 2022-09-01 comprising 7201 rows and 18 columns. In this paper, we classify alarms according to different criteria and use two unsupervised learning algorithms, K-Means and DBSCAN, to establish correlations between alarms in order to identify the root causes of problems and reduce troubleshooting time. To this end, log files were exploited to obtain the root causes of the alarms, and K-Means and DBSCAN were then evaluated for their performance and their ability to identify the root causes of alarms in an optical network.
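The clustering step applied to the alarm table can be sketched with a minimal K-Means over numeric alarm features (the feature encoding shown is hypothetical; the paper's 18 columns and its DBSCAN run are not reproduced here).

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-Means: pick k initial centres from the data, then
    alternate assigning points to the nearest centre and moving each
    centre to the mean of its assigned points."""
    pts = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        dists = ((pts[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers
```

Alarms landing in the same cluster (e.g. same segment, overlapping time window) are correlation candidates; an engineer then inspects one representative per cluster instead of every alarm, which is where the troubleshooting-time reduction comes from. Unlike K-Means, DBSCAN needs no preset k and leaves sparse outlier alarms unclustered, which is why the paper evaluates both.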
Abstract: Traditional network file systems struggle to meet the I/O demands of high-performance computing systems. Lustre, a global parallel file system based on object storage, can effectively solve the scalability, availability, and performance problems of traditional file systems. This paper introduces the structure and advantages of the Lustre file system, benchmarks NFS over Lustre, and compares the results with the performance of the native Lustre file system, the NFS network file system, and the local-disk Ext3 file system. The reasons for the performance differences are analyzed and a feasible solution is proposed.