The composition of base oils affects the performance of lubricants made from them. This paper proposes a hybrid model based on gradient-boosted decision trees (GBDT) to analyze the effect of different ratios of the KN4010, PAO40, and PriEco3000 components in a composite base-oil system on the performance of lubricants. The study was conducted under small laboratory-sample conditions, and a data-expansion method using the Gaussian copula function was proposed to improve the prediction ability of the hybrid model. The study also compared four optimization algorithms, the slime mould algorithm (SMA), genetic algorithm (GA), whale optimization algorithm (WOA), and seagull optimization algorithm (SOA), for predicting the kinematic viscosity at 40°C, kinematic viscosity at 100°C, viscosity index, and oxidation induction time of the lubricant. The results showed that the Gaussian copula data-expansion method improved the prediction ability of the hybrid model in the small-sample case. The SOA-GBDT hybrid model converged fastest and gave the best predictions, with coefficients of determination (R²) for the four lubricant indicators reaching 0.98, 0.99, 0.96, and 0.96, respectively. Thus, the model significantly reduces prediction error and has good predictive ability.
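The copula-based data-expansion step can be sketched as follows: map each column of the small sample to standard-normal scores through its empirical CDF, estimate the correlation of those scores, draw correlated normals, and invert them back through the empirical marginals. This is a minimal illustration of the general Gaussian-copula technique, not the authors' implementation; the function name and details are ours.

```python
import numpy as np
from scipy import stats

def gaussian_copula_augment(X, n_new, seed=0):
    """Generate n_new synthetic rows that preserve each column's marginal
    distribution and the rank-correlation structure of the sample X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Map each column to standard-normal scores via its empirical CDF.
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    u = ranks / (n + 1)                      # uniform pseudo-observations in (0, 1)
    z = stats.norm.ppf(u)                    # normal scores
    corr = np.corrcoef(z, rowvar=False)      # Gaussian-copula correlation matrix
    # Sample correlated normals, then invert back through the marginals.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = stats.norm.cdf(z_new)
    X_new = np.empty((n_new, d))
    for j in range(d):                       # empirical quantile transform per column
        X_new[:, j] = np.quantile(X[:, j], u_new[:, j])
    return X_new
```

Because the synthetic rows are drawn from the empirical quantiles, they stay within the observed range of each feature, which keeps the augmented training set physically plausible for the downstream GBDT model.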
Cloud storage is widely used by large companies to store vast amounts of data and files, offering flexibility, financial savings, and security. However, information theft poses significant threats, potentially leading to poor performance and privacy breaches. Blockchain-based cognitive computing can help protect and maintain information security and privacy in cloud platforms, ensuring businesses can focus on business development. To ensure data security in cloud platforms, this research proposes a blockchain-based Hybridized Data Driven Cognitive Computing (HD2C) model. The proposed HD2C framework addresses breaches of the private information of mixed participants of the Internet of Things (IoT) in the cloud. HD2C is developed by combining Federated Learning (FL) with a Blockchain consensus algorithm to connect smart contracts with Proof of Authority (PoA). The "data island" problem can be solved by FL's emphasis on privacy and fast processing, while Blockchain provides a decentralized incentive structure that is impervious to poisoning. FL with Blockchain allows quick consensus through smart member selection and verification. The HD2C paradigm significantly improves the computational processing efficiency of intelligent manufacturing. Extensive analysis results derived from IIoT datasets confirm the superiority of HD2C. Compared with other consensus algorithms, the foundational cost of Blockchain PoA is significant. The accuracy and memory-utilization evaluations indicate the overall benefits of the system: compared with the values 0.004 and 0.04, the value 0.4 achieves good accuracy. According to the experimental results, the number of transactions per second has minimal impact on memory requirements. The findings of this study resulted in the development of a brand-new IIoT framework based on blockchain technology.
1 Introduction Information technology has been playing an ever-increasing role in geoscience. Sophisticated database platforms are essential for the storage, analysis, and exchange of geological Big Data (Feblowitz, 2013; Zhang et al., 2016; Teng et al., 2016; Tian and Li, 2018). The United States has built an information-sharing platform for state-owned scientific data as a national strategy.
Since web-based GIS processes large amounts of spatial geographic information over the internet, the efficiency of spatial data query processing and transmission should be improved. This paper presents two efficient methods for this purpose: division transmission and progressive transmission. In the division transmission method, a map is divided into several parts, called "tiles", and only the requested tiles are transmitted to a client. In the progressive transmission method, a map is split into several phase views based on the significance of its vertices, and the server produces a target object and transmits it progressively when a client requests that spatial object. To realize these methods, this paper proposes the "tile division" and "priority order estimation" algorithms together with the corresponding data-transmission strategies. Compared with traditional methods such as whole-map transmission and layer transmission, the web-based GIS data transmission proposed in this paper increases data-transmission efficiency by a large margin.
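The tile-division idea can be illustrated with a minimal sketch: partition the map's bounding box into an nx-by-ny grid and group features by tile, so a server can answer a client request with only the tiles it asks for. This is a schematic illustration under our own naming, not the paper's algorithm.

```python
def tile_of(x, y, bbox, nx, ny):
    """Return the (col, row) tile index of a point within bbox for an nx-by-ny grid.
    Points on the far edge are clamped into the last tile."""
    minx, miny, maxx, maxy = bbox
    col = min(int((x - minx) / (maxx - minx) * nx), nx - 1)
    row = min(int((y - miny) / (maxy - miny) * ny), ny - 1)
    return col, row

def divide_into_tiles(points, bbox, nx, ny):
    """Group point features by tile so a server can transmit only the
    tiles a client actually requests, instead of the whole map."""
    tiles = {}
    for (x, y) in points:
        tiles.setdefault(tile_of(x, y, bbox, nx, ny), []).append((x, y))
    return tiles
```

A client viewport then maps to a small set of (col, row) keys, and the server ships just those lists, which is the source of the bandwidth savings over whole-map transmission.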
Sensor nodes in a wireless sensor network (WSN) are typically powered by batteries, so their energy is constrained. Our design goal is to efficiently utilize the energy of each sensor node to extend its lifetime, and thereby prolong the lifetime of the whole WSN. In this paper, we propose a path-based data aggregation scheme (PBDAS) for grid-based wireless sensor networks. To extend the lifetime of a WSN, we construct a grid infrastructure by partitioning the whole sensor field into a grid of cells. Each cell has a head responsible for aggregating its own data with the data sensed by the other nodes in the same cell and then transmitting the result. To transmit the data to the base station (BS) efficiently and rapidly, we link the cell heads to form a chain. Each cell head on the chain takes turns serving as the chain leader responsible for transmitting data to the BS. Aggregated data moves from head to head along the chain, and finally the chain leader transmits it to the BS. In PBDAS, only the cell heads need to transmit data toward the BS, so the number of transmissions to the BS decreases substantially. Moreover, the cell heads and chain leader are designated in turn according to their energy levels, so the energy depletion of nodes is evenly distributed. Simulation results show that the proposed PBDAS extends the lifetime of sensor nodes, and thus the lifetime of the whole network.
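The energy-based head/leader designation rule can be sketched as follows, assuming each node reports a residual-energy level: the highest-energy node in each cell becomes that cell's head, and the highest-energy head becomes the chain leader that uplinks to the BS. The function name and data layout are illustrative, not from the paper.

```python
def elect_heads_and_leader(cells):
    """cells: dict mapping cell_id -> list of (node_id, residual_energy).
    Returns the per-cell heads and the cell whose head acts as chain leader.
    Re-running this each round rotates the roles as energy levels change."""
    heads = {cid: max(nodes, key=lambda n: n[1]) for cid, nodes in cells.items()}
    leader_cell = max(heads, key=lambda cid: heads[cid][1])
    return heads, leader_cell
```

Because the election is recomputed each round from current energy levels, no single node is drained by permanently holding the head or leader role, which is the load-balancing behaviour PBDAS relies on.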
To improve the accuracy of multivariate prediction for solar arrays and address the coupling between periodic fluctuations and growth trends in solar-array telemetry parameters, a multivariate prediction algorithm based on an STL-Prophet-Informer model is proposed. The algorithm first applies seasonal and trend decomposition using loess (STL) to decompose each solar-array parameter into trend, seasonal, and residual components. Prophet, which performs well on trending data, then predicts the trend component, while the Informer model predicts the seasonal and residual components; the component predictions are summed to obtain the overall parameter forecast. A case study on actual telemetry data from a satellite solar array shows that the error metrics of the proposed algorithm are markedly lower than those of a standalone Informer model, an LSTM model, and other baselines. Applying this combined model to multivariate solar-array parameter prediction improves prediction accuracy and enhances autonomous satellite operation.
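The decomposition step can be illustrated with a simple additive trend/seasonal/residual split in the spirit of STL: a centred moving average for the trend and per-phase means for the seasonal component. This is a hedged sketch for intuition only; a production system would use a full STL implementation such as `statsmodels.tsa.seasonal.STL`.

```python
import numpy as np

def decompose(y, period):
    """Additive split y = trend + seasonal + residual.
    Trend: centred moving average (edges padded by edge values).
    Seasonal: mean of the detrended series at each phase of the period."""
    n = len(y)
    k = period
    pad = np.pad(y, (k // 2, k - 1 - k // 2), mode="edge")
    trend = np.convolve(pad, np.ones(k) / k, mode="valid")
    detrended = y - trend
    seasonal = np.array([detrended[i::k].mean() for i in range(k)])
    seasonal -= seasonal.mean()              # centre the seasonal pattern
    seasonal_full = np.tile(seasonal, n // k + 1)[:n]
    resid = y - trend - seasonal_full
    return trend, seasonal_full, resid
```

Each component can then be forecast by the model best suited to it (trend by Prophet, seasonal and residual by Informer in the paper's scheme), and the forecasts summed back together.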
The traditional threat score based on fixed thresholds for precipitation verification is sensitive to intensity forecast bias. In this study, the neighborhood precipitation threat score is modified by defining the thresholds in terms of percentiles of the overall precipitation instead of fixed threshold values, which reduces the impact of intensity forecast bias on the calculated threat score. The method is tested with forecasts of a tropical storm that re-intensified after making landfall and caused heavy flooding. The forecasts are produced with and without radar data assimilation. The forecast assimilating both radial velocity and reflectivity produces precipitation patterns that better match observations but has a large positive intensity bias. When fixed thresholds are used, the neighborhood threat scores fail to reward forecasts that match the observed pattern well, because of this large intensity bias. In contrast, the percentile-based neighborhood method yields the highest score for the forecast with the best pattern match and the smallest position error. The percentile-based method also yields scores that are more consistent with object-based verification, which is less sensitive to intensity bias, demonstrating the potential value of percentile-based verification.
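The percentile-thresholding idea can be sketched as follows: binarize forecast and observation at their own percentiles before computing the threat score, so that a uniform intensity bias cancels out. This is a minimal sketch of the basic (non-neighborhood) version with our own function name, not the paper's full neighborhood method.

```python
import numpy as np

def percentile_threat_score(fcst, obs, pct):
    """Threat score (critical success index) with per-field percentile
    thresholds: each field is binarised at its own pct-th percentile,
    removing the effect of an overall intensity bias."""
    hit_f = fcst >= np.percentile(fcst, pct)
    hit_o = obs >= np.percentile(obs, pct)
    hits = np.sum(hit_f & hit_o)
    misses = np.sum(~hit_f & hit_o)
    false_alarms = np.sum(hit_f & ~hit_o)
    return hits / (hits + misses + false_alarms)
```

A forecast that doubles every observed value has a perfect spatial pattern; a fixed threshold would flood it with false alarms, while the percentile version scores it 1.0.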
Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts of anytime and anyplace to connectivity for anything. The proliferation of IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption, as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that a device consumes electrical energy even when it is not in use for its primary function. Many research communities have started addressing the storage capability (e.g., cache memory) of smart devices using the concept of Named Data Networking (NDN) to achieve a more energy-efficient communication model. In NDN, memory or buffer overflow is a common challenge: when the internal memory of a node exceeds its limit, data with the highest degree of freshness may not be accommodated, and the entire scenario behaves like a traditional network. In such cases, intermediate nodes do not perform data caching that would guarantee the highest degree of freshness. With periodic updates sent from data producers, data consumers demand up-to-date information at the least energy cost. Consequently, there is a challenge in maintaining the tradeoff between freshness and energy consumption during publisher-subscriber interaction. In our work, we propose an architecture that overcomes the cache-strategy issue through a smart caching algorithm that improves memory management and data freshness. The smart caching strategy updates the data at precise intervals while taking garbage data into consideration. It is also observed from experiments that data redundancy can easily be avoided by ignoring/dropping data packets carrying information that is not of interest to other participating nodes in the network, ultimately optimizing the tradeoff between freshness and the energy required.
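The freshness-aware caching policy described above can be sketched as a bounded content store in which each entry carries a freshness lifetime: stale ("garbage") entries are purged first, and when the store is still full the entry closest to expiry is evicted. This is our own minimal illustration, not the paper's smart caching algorithm.

```python
import time

class FreshnessCache:
    """Bounded NDN-style content store with per-entry freshness lifetimes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # name -> (data, expiry_time)

    def put(self, name, data, freshness_s, now=None):
        now = time.monotonic() if now is None else now
        # Purge stale (garbage) entries before considering eviction.
        self.store = {k: v for k, v in self.store.items() if v[1] > now}
        if len(self.store) >= self.capacity and name not in self.store:
            # Evict the entry closest to expiry (least remaining freshness).
            oldest = min(self.store, key=lambda k: self.store[k][1])
            del self.store[oldest]
        self.store[name] = (data, now + freshness_s)

    def get(self, name, now=None):
        """Return cached data only while it is still fresh; drop it otherwise."""
        now = time.monotonic() if now is None else now
        entry = self.store.get(name)
        if entry and entry[1] > now:
            return entry[0]
        self.store.pop(name, None)
        return None
```

Serving only entries that are still within their lifetime is what lets intermediate nodes answer consumer interests without forwarding them to the producer, saving the transmission energy the abstract discusses.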
Wireless mobile ad hoc sensor networks have excellent potential for moving and monitoring disaster-area networks on a real-time basis. Recent challenges faced in Mobile Ad Hoc Networks (MANETs) include scalability, localization, heterogeneous networks, self-organization, and self-sufficient operation. Against this background, the current study focuses on specially designed communication-link establishment for high connection stability in wireless mobile sensor networks, especially in disaster-area networks. Existing protocols focus on location-dependent communications and use networks based on the typical Internet Protocol (IP) architecture. However, IP-based communications have limitations such as inefficient bandwidth utilization, high processing overhead, low transfer speeds, and excessive memory consumption. To overcome these challenges, the number of neighbors (node density) is minimized and high-mobility nodes (node speed) are avoided. The proposed Geographic Drone Based Route Optimization (GDRO) method reduces the overall overhead to a considerable level in an efficient manner and significantly improves performance by identifying the disaster region. The drone communicates with an anchor node periodically and shares information with it, thereby introducing a drone-based disaster network in the area. Geographic routing is a promising approach to enhance routing efficiency in MANETs. The algorithm reaches the anchor (target) node with the help of Geographical Graph-Based Mapping (GGM). The Global Positioning System (GPS) is enabled on the mobile network of the anchor node, which regularly broadcasts its location information to aid localization. In the first step, a node searches for the local and remote anticipated Expected Transmission Count (ETX), thereby calculating the estimated distance. Received Signal Strength Indicator (RSSI) results are stored in the local memory of the node. The node then calculates the least remote anticipated ETX, the link loss rate, and the information for the new location. A freeway heuristic algorithm improves the data speed and efficiency and determines the path for the optimization problem. In comparison with other models, the proposed method yields more efficient communication, increases throughput, and reduces end-to-end delay, energy consumption, and packet loss in disaster-area networks.
Freebase is a large collaborative knowledge base and database of general, structured information for public use. Its structured data has been harvested from many sources, including individual, user-submitted wiki contributions. Its aim is to create a global resource so that people (and machines) can access common information more effectively; this information is mostly available in English. In this research work, we develop a technique for creating a Freebase for the Bengali language. The number of Bengali articles on the internet is growing day by day, so a structured data store in Bengali has become necessary. It consists of different types of concepts (topics) and relationships between those topics. These cover areas such as popular culture (e.g., films, music, books, sports, television), location information (restaurants, geolocations, businesses), scholarly information (linguistics, biology, astronomy), birthplaces (of poets, politicians, actors, actresses), and general knowledge (Wikipedia). Such a resource will be very helpful for relation extraction and other Natural Language Processing (NLP) work on the Bengali language. In this work, we identify the technique for creating the Bengali Freebase and build a collection of Bengali data. We apply the SPARQL query language to extract information from sources such as Wikidata, which is typically stored in RDF (Resource Description Framework) triple format.
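The extraction step can be illustrated on RDF data directly: an N-Triples snippet (one serialisation used by Wikidata dumps) is scanned for `rdfs:label` literals carrying the `@bn` language tag. A real pipeline would more likely issue a SPARQL query with `FILTER(LANG(?label) = "bn")` against the Wikidata endpoint; the sketch below is a self-contained stand-in for that step.

```python
import re

# subject, predicate, object of one N-Triples statement
TRIPLE = re.compile(r'<([^>]*)>\s+<([^>]*)>\s+(.+?)\s*\.\s*$')

def bengali_labels(ntriples):
    """Extract (subject, label) pairs whose rdfs:label literal carries
    the @bn language tag from an N-Triples document."""
    out = []
    for line in ntriples.splitlines():
        m = TRIPLE.match(line.strip())
        if not m:
            continue
        subj, pred, obj = m.groups()
        lit = re.match(r'"(.*)"@bn$', obj)
        if pred.endswith("rdf-schema#label") and lit:
            out.append((subj, lit.group(1)))
    return out
```

Filtering by language tag at extraction time is what turns a multilingual dump into a Bengali-only triple store suitable for downstream NLP work.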
Various code-development platforms, such as the ATHENA framework [1] of the ATLAS [2] experiment, encounter lengthy compilation/linking times. To improve this situation, the IRIS Development Platform was built as a software-development framework acting as compiler, cross-project linker, and data fetcher, which allows hot-swaps in order to compare various versions of software under test. The flexibility fostered by IRIS allowed the modular exchange of software libraries among developers, making it a powerful development tool. The IRIS platform used ROOT ntuples [3] as input data; however, a new data model is sought, in line with the facilities offered by IRIS. The schematic of a possible new data structure, as a user-implemented object-oriented database, is presented.
A field-programmable gate array (FPGA) based high-speed broadband data-acquisition system is designed. The system supports simultaneous dual-channel acquisition. The maximum sampling rate is 500 MSa/s and the bandwidth is 200 MHz, which addresses the problems of acquiring and processing large-bandwidth, high-speed signals. The data-acquisition system is currently used successfully in broadband receiver test systems.
The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Thanks to the absence of atmospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm) and perform a sky survey of selected areas, which cannot be done from Earth. We can find characteristic changes in celestial brightness with time by analyzing image data from the MUVT, and deduce the radiation mechanism and physical properties of these celestial objects after comparison with a physical model. To explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
This paper is concerned with the synchronization of delayed neural networks via sampled-data control. A new technique, namely the free-matrix-based time-dependent discontinuous Lyapunov functional approach, is adopted in constructing the Lyapunov functional, which takes advantage of the sampling characteristic of the sawtooth input delay. Based on this discontinuous Lyapunov functional, some less conservative synchronization criteria are established to ensure that the slave system is synchronized with the master system. The desired sampled-data controller can be obtained through the use of the linear matrix inequality (LMI) technique. Finally, two numerical examples are provided to demonstrate the effectiveness and the improvements of the proposed methods.
Funding: This work was financially supported by the Beijing Natural Science Foundation (Grant 2232066) and the Open Project Foundation of the State Key Laboratory of Solid Lubrication (Grant LSL-2212).
Funding: Supported by the National Natural Science Foundation of China (61304079, 61125306, 61034002), the Open Research Project from SKLMCCS (20120106), the Fundamental Research Funds for the Central Universities (FRF-TP-13-018A), and the China Postdoctoral Science Foundation (2013M530527).
A novel optimal tracking control method for a class of discrete-time systems with actuator saturation and unknown dynamics is proposed in this paper. The scheme is based on an iterative adaptive dynamic programming (ADP) algorithm. To implement the control scheme, a data-based identifier is first constructed for the unknown system dynamics. By introducing the M-network, an explicit formula for the steady-state control is obtained. To eliminate the effect of actuator saturation, a nonquadratic performance functional is introduced, and an iterative ADP algorithm with a convergence analysis is then established to obtain the optimal tracking control solution. To realize the optimal control method, neural networks are used to build the data-based identifier, compute the performance index function, approximate the optimal control policy, and solve for the steady-state control, respectively. Simulation examples are provided to verify the effectiveness of the presented optimal tracking control scheme.
Funding: Granted by the National Science & Technology Major Projects of China (Grant No. 2016ZX05033).
Funding: Supported by the NSC under Grant Nos. NSC-101-2221-E-239-032 and NSC-102-2221-E-239-020.
Funding: Primarily supported by the National 973 Fundamental Research Program of China (Grant No. 2013CB430103) and by the Department of Transportation Federal Aviation Administration (Grant No. NA17RJ1227) through the National Oceanic and Atmospheric Administration; also supported by the National Science Foundation of China (Grant No. 41405100) and the Fundamental Research Funds for the Central Universities (Grant No. 20620140343).
文摘The traditional threat score based on fixed thresholds for precipitation verification is sensitive to intensity forecast bias. In this study, the neighborhood precipitation threat score is modified by defining the thresholds in terms of the percentiles of overall precipitation instead of fixed threshold values. The impact of intensity forecast bias on the calculated threat score is reduced. The method is tested with the forecasts of a tropical storm that re-intensified after making landfall and caused heavy flooding. The forecasts are produced with and without radar data assimilation. The forecast with assimilation of both radial velocity and reflectivity produce precipitation patterns that better match observations but have large positive intensity bias. When using fixed thresholds, the neighborhood threat scores fail to yield high scores for forecasts that have good pattern match with observations, due to large intensity bias. In contrast, the percentile-based neighborhood method yields the highest score for the forecast with the best pattern match and the smallest position error. The percentile-based method also yields scores that are more consistent with object-based verifications, which are less sensitive to intensity bias, demonstrating the potential value of percentile-based verification.
文摘Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts, anytime and anyplace to the connectivity for anything. The proliferation in IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that the device is consuming electrical energy even when it is not in use for its primary function. Many researchers’ communities have started addressing storage ability like cache memory of smart devices using the concept called—Named Data Networking (NDN) to achieve better energy efficient communication model. In NDN, memory or buffer overflow is the common challenge especially when internal memory of node exceeds its limit and data with highest degree of freshness may not be accommodated and entire scenarios behaves like a traditional network. In such case, Data Caching is not performed by intermediate nodes to guarantee highest degree of freshness. On the periodical updates sent from data producers, it is exceedingly demanded that data consumers must get up to date information at cost of lease energy. Consequently, there is challenge in maintaining tradeoff between freshness energy consumption during Publisher-Subscriber interaction. In our work, we proposed the architecture to overcome cache strategy issue by Smart Caching Algorithm for improvement in memory management and data freshness. The smart caching strategy updates the data at precise interval by keeping garbage data into consideration. It is also observed from experiment that data redundancy can be easily obtained by ignoring/dropping data packets for the information which is not of interest by other participating nodes in network, ultimately leading to optimizing tradeoff between freshness and energy required.
Abstract: Wireless mobile ad hoc sensor networks have excellent potential for moving through and monitoring disaster-area networks on a real-time basis. Recent challenges in Mobile Ad Hoc Networks (MANETs) include scalability, localization, heterogeneous networks, self-organization, and self-sufficient operation. Against this background, the current study focuses on specially designed communication-link establishment for high connection stability in wireless mobile sensor networks, especially in disaster-area networks. Existing protocols focus on location-dependent communications and use networks based on the typically used Internet Protocol (IP) architecture. However, IP-based communications have limitations such as inefficient bandwidth utilization, high processing overhead, low transfer speeds, and excessive memory consumption. To overcome these challenges, the number of neighbors (node density) is minimized and high-mobility nodes (node speed) are avoided. The proposed Geographic Drone Based Route Optimization (GDRO) method reduces the entire overhead to a considerable level in an efficient manner and significantly improves overall performance by identifying the disaster region. The drone communicates with an anchor node periodically and shares information with it, so as to introduce a drone-based disaster network in the area. Geographic routing is a promising approach to enhancing routing efficiency in MANETs. The algorithm helps reach the anchor (target) node with the help of Geographical Graph-Based Mapping (GGM). The Global Positioning System (GPS) is enabled on the mobile network of the anchor node, which regularly broadcasts its location information to aid localization. In the first step, a node searches for the local and remote anticipated Expected Transmission Count (ETX), thereby calculating the estimated distance; Received Signal Strength Indicator (RSSI) results are stored in the node's local memory. The node then calculates the least remote anticipated ETX, the link loss rate, and the information for the new location. A freeway heuristic algorithm improves the data speed and efficiency and determines the path for the optimization problem. In comparison with other models, the proposed method yielded efficient communication, increased throughput, and reduced end-to-end delay, energy consumption, and packet loss in disaster-area networks.
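The ETX-plus-geography idea described above can be sketched in a few lines: compute the standard ETX metric from link delivery ratios, then greedily pick the neighbor that offers the lowest link cost per unit of geographic progress toward the anchor. The scoring rule is a hypothetical illustration, not the paper's exact GDRO/GGM formulation.

```python
import math

def etx(forward_delivery, reverse_delivery):
    """Expected Transmission Count of a link, from its measured
    forward and reverse delivery ratios (standard ETX definition)."""
    return 1.0 / (forward_delivery * reverse_delivery)

def next_hop(node_pos, anchor_pos, neighbors):
    """Greedy geographic forwarding sketch: among neighbors that make
    positive progress toward the anchor, pick the one with the lowest
    ETX cost per unit of progress. `neighbors` is a list of
    (name, (x, y), forward_delivery, reverse_delivery) tuples."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    here = dist(node_pos, anchor_pos)
    best, best_score = None, float("inf")
    for name, pos, fwd, rev in neighbors:
        progress = here - dist(pos, anchor_pos)
        if progress <= 0:
            continue  # skip neighbors that move away from the anchor
        score = etx(fwd, rev) / progress  # transmissions per unit progress
        if score < best_score:
            best, best_score = name, score
    return best  # None if no neighbor makes progress (a routing void)
```

A real protocol would also fall back to a recovery mode when `next_hop` returns `None`, since pure greedy forwarding can dead-end at a local minimum.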
Abstract: Freebase is a large collaborative knowledge base and database of general, structured information for public use. Its structured data has been harvested from many sources, including individual, user-submitted wiki contributions. Its aim is to create a global resource so that people (and machines) can access common information more effectively; that information is mostly available in English. In this research work, we have developed a technique for building a Freebase for the Bengali language. The number of Bengali articles on the internet is growing day by day, so a structured data store in Bengali has become necessary. It consists of different types of concepts (topics) and relationships between those topics. These cover areas such as popular culture (e.g., films, music, books, sports, television), location information (restaurants, geolocations, businesses), scholarly information (linguistics, biology, astronomy), birthplaces (of poets, politicians, actors, and actresses), and general knowledge (Wikipedia). Such a resource will be helpful for relation extraction and other Natural Language Processing (NLP) work on the Bengali language. In this work, we identified a technique for creating the Bengali Freebase and assembled a collection of Bengali data. We applied the SPARQL query language to extract information from natural-language (Bengali) sources such as Wikidata, which is typically in RDF (Resource Description Framework) triple format.
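The RDF triple format mentioned above can be illustrated with a minimal in-memory sketch: facts stored as (subject, predicate, object) triples and retrieved by a wildcard pattern match, which is the operation a SPARQL variable performs against a real endpoint such as Wikidata. The Bengali subjects and the property names below are invented placeholders, not actual Wikidata identifiers.

```python
# Toy triple store standing in for an RDF graph; each entry is a
# (subject, predicate, object) triple, as in Wikidata's data model.
triples = [
    ("রবীন্দ্রনাথ ঠাকুর", "occupation", "poet"),
    ("রবীন্দ্রনাথ ঠাকুর", "birthplace", "Kolkata"),
    ("সত্যজিৎ রায়", "occupation", "film director"),
]

def match(pattern):
    """Return all triples matching a (s, p, o) pattern; None acts as a
    wildcard, playing the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to the SPARQL query:  SELECT ?s WHERE { ?s ex:occupation "poet" }
poets = [s for s, _, _ in match((None, "occupation", "poet"))]
```

Against live Wikidata the same query would go through the SPARQL endpoint with real entity (Q-) and property (P-) identifiers rather than plain strings.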
Abstract: Various code-development platforms, such as the ATHENA Framework [1] of the ATLAS [2] experiment, encounter lengthy compilation/linking times. To improve this situation, the IRIS Development Platform was built as a software development framework acting as compiler, cross-project linker, and data fetcher, which allows hot swaps in order to compare various versions of software under test. The flexibility fostered by IRIS allowed modular exchange of software libraries among developers, making it a powerful development tool. The IRIS platform used ROOT n-tuples [3] as input data; however, a new data model is sought, in line with the facilities offered by IRIS. A schematic of a possible new data structure, implemented as a user-defined object-oriented database, is presented.
Abstract: A field-programmable gate array (FPGA) based high-speed broadband data-acquisition system is designed. The system supports simultaneous dual-channel acquisition, with a maximum sampling rate of 500 MSa/s and a bandwidth of 200 MHz, which addresses the problems of acquiring and processing large-bandwidth, high-speed signals. At present, the data-acquisition system is successfully used in broadband receiver test systems.
Abstract: The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no atmospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm) and perform a sky survey of selected areas, which cannot be done from Earth. By analyzing image data from the MUVT, we can find characteristic changes in celestial brightness with time, and deduce the radiation mechanism and physical properties of these celestial objects after comparison with a physical model. In order to explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
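Image preprocessing of the kind mentioned above typically includes dark-frame subtraction and flat-field correction. The sketch below shows these two standard CCD calibration steps on plain nested lists; it is a generic illustration, as the actual MUVT Level 2A/2B pipeline details are not reproduced here.

```python
def calibrate(raw, dark, flat):
    """Basic CCD calibration sketch: subtract the dark frame (sensor
    offset/thermal signal), then divide by the flat field (pixel-to-pixel
    sensitivity). Frames are equally sized 2-D lists of numbers."""
    corrected = []
    for raw_row, dark_row, flat_row in zip(raw, dark, flat):
        corrected.append([
            (r - d) / f if f != 0 else 0.0  # guard against dead flat pixels
            for r, d, f in zip(raw_row, dark_row, flat_row)
        ])
    return corrected
```

A production pipeline would operate on arrays (e.g., FITS images) and add steps such as bad-pixel masking and astrometric calibration on top of these two corrections.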
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61304064), the Scientific Research Fund of the Hunan Provincial Education Department, China (Grant Nos. 15B067 and 16C0475), and a Discovery Grant from the Australian Research Council.
Abstract: This paper is concerned with the synchronization of delayed neural networks via sampled-data control. A new technique, namely the free-matrix-based time-dependent discontinuous Lyapunov functional approach, is adopted in constructing the Lyapunov functional; it takes advantage of the sampling characteristic of the sawtooth input delay. Based on this discontinuous Lyapunov functional, some less conservative synchronization criteria are established to ensure that the slave system is synchronous with the master system. The desired sampled-data controller can be obtained through the use of the linear matrix inequality (LMI) technique. Finally, two numerical examples are provided to demonstrate the effectiveness and the improvements of the proposed methods.
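For reference, the master-slave setting described above can be sketched in its generic textbook form (the paper's exact system matrices and functional are not reproduced here):

```latex
% Master and slave delayed neural networks (generic form)
\dot{x}(t) = -C x(t) + A f(x(t)) + B f(x(t-\tau(t))) + J, \\
\dot{y}(t) = -C y(t) + A f(y(t)) + B f(y(t-\tau(t))) + J + u(t),
% Synchronization error and sampled-data controller
e(t) = y(t) - x(t), \qquad
u(t) = K e(t_k), \quad t_k \le t < t_{k+1},
% The zero-order hold induces the sawtooth input delay
d(t) = t - t_k, \qquad 0 \le d(t) < t_{k+1} - t_k .
```

The controller holds $K e(t_k)$ constant between sampling instants, which is what produces the sawtooth delay $d(t)$ the discontinuous Lyapunov functional exploits; the gain $K$ is then obtained by solving the LMIs derived from that functional.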