Journal literature: 543,609 articles found
1. Data Secure Storage Mechanism for IIoT Based on Blockchain (Cited by: 1)
Authors: Jin Wang, Guoshu Huang, R. Simon Sherratt, Ding Huang, Jia Ni. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4029-4048 (20 pages).
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly mitigate these inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT. PVC improves the efficiency of traditional VC in the process of commitment and opening. Finally, this paper uses PVC to build a blockchain-based IIoT data secure storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
Keywords: blockchain, IIoT, data storage, cryptographic commitment
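The efficiency argument above hinges on Merkle proofs growing with the amount of stored data. As a rough illustration (not the paper's PVC construction), the following Python sketch builds a binary Merkle tree over hashed IIoT records and shows that the membership proof grows logarithmically with the number of leaves; vector commitments aim for constant-size openings instead. The record names are made up for the example.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root_and_proof(leaves, index):
    """Return the Merkle root and the membership proof for leaves[index]."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd-sized levels
            level.append(level[-1])
        proof.append(level[index ^ 1])  # sibling hash needed by the verifier
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

# Proof size grows logarithmically with the number of stored items.
for n in (8, 64, 512):
    data = [f"iiot-record-{i}".encode() for i in range(n)]
    _, proof = merkle_root_and_proof(data, 3)
    print(n, "leaves ->", len(proof), "hashes per proof")
```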
2. Prediction of Lubricant Physicochemical Properties Based on Gaussian Copula Data Expansion
Authors: Feng Xin, Yang Rui, Xie Peiyuan, Xia Yanqiu. China Petroleum Processing & Petrochemical Technology (SCIE, CAS, CSCD), 2024, Issue 1, pp. 161-174 (14 pages).
The composition of base oils affects the performance of lubricants made from them. This paper proposes a hybrid model based on gradient-boosted decision trees (GBDT) to analyze the effect of different ratios of the KN4010, PAO40, and PriEco3000 components in a composite base oil system on the performance of lubricants. The study was conducted under small laboratory sample conditions, and a data expansion method using the Gaussian copula function was proposed to improve the prediction ability of the hybrid model. The study also compared four optimization algorithms, the slime mould algorithm (SMA), genetic algorithm (GA), whale optimization algorithm (WOA), and seagull optimization algorithm (SOA), to predict the kinematic viscosity at 40 °C, kinematic viscosity at 100 °C, viscosity index, and oxidation induction time of the lubricant. The results showed that the Gaussian copula data expansion method improved the prediction ability of the hybrid model in the case of small samples. The SOA-GBDT hybrid model had the fastest convergence speed and the best prediction effect, with determination coefficients (R²) for the four lubricant indicators reaching 0.98, 0.99, 0.96 and 0.96, respectively. Thus, this model can significantly reduce the prediction error and has good prediction ability.
Keywords: base oil, data augmentation, machine learning, performance prediction, seagull algorithm
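The Gaussian-copula expansion described above can be sketched with NumPy/SciPy: map each feature to normal scores through its empirical CDF, fit the correlation of those scores, sample new correlated scores, and map them back through empirical quantiles. This is a generic copula-resampling sketch under assumptions about the paper's procedure; the column semantics in the example are hypothetical.

```python
import numpy as np
from scipy import stats

def gaussian_copula_expand(X: np.ndarray, n_new: int, seed: int = 0) -> np.ndarray:
    """Generate synthetic rows that preserve the marginals and rank correlation of X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # 1) Map each column to standard-normal scores through its empirical CDF.
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    z = stats.norm.ppf(ranks / (n + 1.0))
    # 2) Fit the Gaussian copula: correlation matrix of the normal scores.
    corr = np.corrcoef(z, rowvar=False)
    # 3) Sample new normal scores and push them back through empirical quantiles.
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_new)
    u_new = stats.norm.cdf(z_new)
    return np.column_stack([np.quantile(X[:, j], u_new[:, j]) for j in range(d)])

# Example: expand a small formulation table (columns are hypothetical, e.g. component
# ratios such as KN4010 / PAO40 / PriEco3000 plus one measured property).
X = np.random.default_rng(1).random((30, 4))
X_aug = gaussian_copula_expand(X, n_new=200)
print(X_aug.shape)   # (200, 4)
```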
3. A phenology-based vegetation index for improving ratoon rice mapping using harmonized Landsat and Sentinel-2 data (Cited by: 1)
Authors: Yunping Chen, Jie Hu, Zhiwen Cai, Jingya Yang, Wei Zhou, Qiong Hu, Cong Wang, Liangzhi You, Baodong Xu. Journal of Integrative Agriculture (SCIE, CAS, CSCD), 2024, Issue 4, pp. 1164-1178 (15 pages).
Ratoon rice, which refers to a second harvest of rice obtained from the regenerated tillers originating from the stubble of the first harvested crop, plays an important role in both food security and agroecology while requiring minimal agricultural inputs. However, accurately identifying ratoon rice crops is challenging due to the similarity of its spectral features with other rice cropping systems (e.g., double rice). Moreover, images with a high spatiotemporal resolution are essential since ratoon rice is generally cultivated in fragmented croplands within regions that frequently exhibit cloudy and rainy weather. In this study, taking Qichun County in Hubei Province, China as an example, we developed a new phenology-based ratoon rice vegetation index (PRVI) for ratoon rice mapping at a 30 m spatial resolution using a robust time series generated from Harmonized Landsat and Sentinel-2 (HLS) images. The PRVI, which incorporates the red, near-infrared, and shortwave infrared 1 bands, was developed based on the analysis of spectro-phenological separability and feature selection. Based on actual field samples, the performance of the PRVI for ratoon rice mapping was carefully evaluated by comparing it to several vegetation indices, including the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI) and land surface water index (LSWI). The results suggested that the PRVI could sufficiently capture the specific characteristics of ratoon rice, leading to a favorable separability between ratoon rice and other land cover types. Furthermore, the PRVI showed the best performance for identifying ratoon rice in the phenological phases characterized by grain filling and harvesting to tillering of the ratoon crop (GHS-TS2), indicating that only a few images are required to obtain an accurate ratoon rice map. Finally, the PRVI performed better than NDVI, EVI, LSWI and their combination at the GHS-TS2 stages, with producer's accuracy and user's accuracy of 92.22% and 89.30%, respectively. These results demonstrate that the proposed PRVI based on HLS data can effectively identify ratoon rice in fragmented croplands at crucial phenological stages, which is promising for identifying the earliest timing of ratoon rice planting and can provide a fundamental dataset for crop management activities.
Keywords: ratoon rice, phenology-based ratoon rice vegetation index (PRVI), phenological phase, feature selection, Harmonized Landsat Sentinel-2 data
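The abstract does not give the PRVI formula, but the baseline indices it is compared against are standard. A minimal sketch of NDVI, EVI and LSWI computed from HLS-style surface reflectance (the band values below are illustrative only):

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Standard EVI coefficients: G=2.5, C1=6, C2=7.5, L=1
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir1):
    return (nir - swir1) / (nir + swir1)

# Toy surface-reflectance values for a single pixel (illustrative only).
blue, red, nir, swir1 = 0.04, 0.05, 0.35, 0.18
print(round(ndvi(nir, red), 3), round(evi(nir, red, blue), 3), round(lswi(nir, swir1), 3))
```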
4. Big Data Access Control Mechanism Based on Two-Layer Permission Decision Structure
Authors: Aodi Liu, Na Wang, Xuehui Du, Dibin Shan, Xiangyu Wu, Wenjuan Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1705-1726 (22 pages).
Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing the access control policy. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model. Business attributes are introduced in the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute the general permission decision algorithm based on logical calculation and the business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network, respectively. The general permission decision algorithm is used to implement accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network is used to calculate the similarity of the business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, the complex and diverse access control management requirements of big data can be satisfied while considering both the security and availability of resources. Experimental results show that the proposed mechanism is effective and reliable. In summary, it can efficiently support the secure sharing of big data resources.
Keywords: big data, access control, data security, BiLSTM
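A minimal sketch of the two-layer decision idea: an exact logical match on inherent attributes, followed by a fuzzy match on business attributes. Cosine similarity over plain numeric vectors stands in for the paper's BiLSTM similarity model, and the attribute names and threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    subject: dict      # inherent attributes of the requester
    resource: dict     # inherent attributes of the resource
    business: list     # business-attribute vector (embedded text in the paper's setting)

def general_decision(req: Request, policy: dict) -> bool:
    """Layer 1: exact logical match on inherent attributes."""
    return all(req.subject.get(k) == v for k, v in policy["subject"].items()) and \
           all(req.resource.get(k) == v for k, v in policy["resource"].items())

def business_decision(req: Request, policy: dict, threshold: float = 0.8) -> bool:
    """Layer 2: fuzzy match on business attributes.
    Cosine similarity stands in for the BiLSTM similarity model."""
    a, b = req.business, policy["business"]
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return norm > 0 and dot / norm >= threshold

policy = {"subject": {"dept": "analytics"}, "resource": {"level": "internal"},
          "business": [0.2, 0.9, 0.4]}
req = Request({"dept": "analytics"}, {"level": "internal"}, [0.25, 0.85, 0.5])
print(general_decision(req, policy) and business_decision(req, policy))  # True -> permit
```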
5. Redundant Data Detection and Deletion to Meet Privacy Protection Requirements in Blockchain-Based Edge Computing Environment
Authors: Zhang Lejun, Peng Minghui, Su Shen, Wang Weizheng, Jin Zilong, Su Yansen, Chen Huiling, Guo Ran, Sergey Gataullin. China Communications (SCIE, CSCD), 2024, Issue 3, pp. 149-159 (11 pages).
With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets the privacy protection requirements. By scanning the ciphertext, it is determined whether each sub-message of the data in the edge node meets the requirements of hot data. This has the same effect as a zero-knowledge proof and does not reveal the privacy of users. In addition, for redundant sub-data that does not meet the requirements of hot data, our paper proposes a redundant data deletion scheme that preserves the dynamic integrity of the data. We use a Content Extraction Signature (CES) to generate the signature of the remaining hot data after the redundant data is deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
Keywords: blockchain, data integrity, edge computing, privacy protection, redundant data
6. Traffic Flow Prediction with Heterogeneous Spatiotemporal Data Based on a Hybrid Deep Learning Model Using Attention-Mechanism
Authors: Jing-Doo Wang, Chayadi Oktomy Noto Susanto. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 8, pp. 1711-1728 (18 pages).
A significant obstacle in intelligent transportation systems (ITS) is the capacity to predict traffic flow. Recent advancements in deep neural networks have enabled the development of models that represent traffic flow accurately. However, accurately predicting traffic flow at the individual road level is extremely difficult due to the complex interplay of spatial and temporal factors. This paper proposes a technique for predicting short-term traffic flow data using an architecture that utilizes convolutional bidirectional long short-term memory (Conv-BiLSTM) with attention mechanisms. Prior studies neglected to include data pertaining to factors such as holidays, weather conditions, and vehicle types, which are interconnected and significantly impact the accuracy of forecast outcomes. In addition, this research incorporates recurring monthly periodic pattern data, which significantly enhances the accuracy of forecast outcomes. The experimental findings demonstrate a performance improvement of 21.68% when incorporating the vehicle type feature.
Keywords: traffic flow prediction, spatiotemporal data, heterogeneous data, Conv-BiLSTM, data-centric, intra-data
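A minimal PyTorch sketch in the spirit of the described architecture: a 1-D convolution over the feature dimension, a bidirectional LSTM over time, additive attention pooling, and a regression head. Layer sizes, the number of exogenous features, and the window length are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConvBiLSTMAttention(nn.Module):
    """Minimal Conv-BiLSTM with attention for short-term traffic flow prediction."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.bilstm(z)                    # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)             # weighted temporal pooling
        return self.head(context).squeeze(-1)    # predicted flow

# 12 past time steps, 6 exogenous features (e.g., flow, weather, holiday, vehicle type).
model = ConvBiLSTMAttention(n_features=6)
y_hat = model(torch.randn(8, 12, 6))
print(y_hat.shape)                               # torch.Size([8])
```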
7. Hadoop-based secure storage solution for big data in cloud computing environment
Authors: Shaopeng Guan, Conghui Zhang, Yilin Wang, Wenqing Liu. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 1, pp. 227-236 (10 pages).
In order to address the problems of a single encryption algorithm, such as low encryption efficiency and unreliable metadata for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms, and use the Zookeeper distributed coordination mechanism to coordinate each node and achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data, and adopt a homomorphic encryption algorithm to encrypt data that needs to be computed on. To accelerate the encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize fault tolerance of the server. The improved encryption algorithm integrates the dual-channel storage mode, and the encrypted storage efficiency improves by 27.6% on average.
Keywords: big data security, data encryption, Hadoop, parallel encrypted storage, Zookeeper
8. A Power Data Anomaly Detection Model Based on Deep Learning with Adaptive Feature Fusion
Authors: Xiu Liu, Liang Gu, Xin Gong, Long An, Xurui Gao, Juying Wu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4045-4061 (17 pages).
With the popularisation of intelligent power, power devices have different shapes, numbers and specifications. This means that power data exhibits distributional variability, and the model learning process cannot sufficiently extract data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. Aiming at the distributional variability of power data, this paper develops a sliding window-based data adjustment method for the model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy, but also reduces the amount of parameter computation during feature matching and improves detection speed.
Keywords: data alignment, dimension reduction, feature fusion, data anomaly detection, deep learning
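A rough sketch of the preprocessing side described above: cut the multivariate power series into sliding windows and reduce their dimensionality before detection. IsolationForest is used here only as a stand-in detector so the snippet runs end to end; the paper's detector is a deep network with adaptive feature fusion, and the channel count and injected anomaly are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

def sliding_windows(series: np.ndarray, width: int, step: int) -> np.ndarray:
    """Cut a multivariate power series into fixed-width windows (one row per window)."""
    idx = range(0, len(series) - width + 1, step)
    return np.stack([series[i:i + width].ravel() for i in idx])

rng = np.random.default_rng(0)
series = rng.normal(size=(1000, 4))          # 4 hypothetical power-device channels
series[700:710] += 6.0                       # injected anomaly segment

X = sliding_windows(series, width=24, step=1)
X_low = PCA(n_components=8).fit_transform(X)  # dimension reduction before detection
scores = IsolationForest(random_state=0).fit(X_low).decision_function(X_low)
print("most anomalous window starts near index", int(np.argmin(scores)))
```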
9. Sec-Auditor: A Blockchain-Based Data Auditing Solution for Ensuring Integrity and Semantic Correctness
Authors: Guodong Han, Hecheng Li. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2121-2137 (17 pages).
Currently, there is a growing trend among users to store their data in the cloud. However, the cloud is vulnerable to persistent data corruption risks arising from equipment failures and hacker attacks. Additionally, when users perform file operations, the semantic integrity of the data can be compromised. Ensuring both data integrity and semantic correctness has become a critical issue that requires attention. We introduce a pioneering solution called Sec-Auditor, the first of its kind with the ability to verify data integrity and semantic correctness simultaneously, while maintaining a constant communication cost independent of the audited data volume. Sec-Auditor also supports public auditing, enabling anyone with access to public information to conduct data audits. This feature makes Sec-Auditor highly adaptable to open data environments, such as the cloud. In Sec-Auditor, users are assigned specific rules that are utilized to verify the semantic accuracy of the data. Furthermore, users are given the flexibility to update their own rules as needed. We conduct in-depth analyses of the correctness and security of Sec-Auditor. We also compare several important security attributes with existing schemes, demonstrating the superior properties of Sec-Auditor. Evaluation results demonstrate that even for time-consuming file upload operations, our solution is more efficient than the comparison scheme.
Keywords: provable data possession, public auditing, cloud storage, data integrity, semantic correctness
10. Research on Enhanced Contraband Dataset ACXray Based on ETL
Authors: Xueping Song, Jianming Yang, Shuyu Zhang, Jicun Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4551-4572 (22 pages).
To address the shortage of public datasets for customs X-ray images of contraband and the difficulties in deploying trained models in engineering applications, a method has been proposed that employs the Extract-Transform-Load (ETL) approach to create an X-ray dataset of contraband items. Initially, X-ray scatter image data is collected and cleaned. Using Kafka message queues and the Elasticsearch (ES) distributed search engine, the data is transmitted in real time to cloud servers. Subsequently, contraband data is annotated using a combination of neural networks and manual methods to improve annotation efficiency, and a mean hash algorithm is implemented for quick image retrieval. The method of integrating targets with backgrounds enhances the X-ray contraband image data, increasing the number of positive samples. Finally, an Airport Customs X-ray dataset (ACXray) compatible with customs business scenarios has been constructed, featuring an increased number of positive contraband samples. Experimental tests using three datasets to train the Mask Region-based Convolutional Neural Network (Mask R-CNN) algorithm, tested on 400 real customs images, revealed that the recognition accuracy of algorithms trained with Security Inspection X-ray (SIXray) and Occluded Prohibited Items X-ray (OPIXray) decreased by 16.3% and 15.1%, respectively, while the accuracy of the algorithm trained on the ACXray dataset was almost unaffected. This indicates that the ACXray-trained algorithm possesses strong generalization capabilities and is more suitable for customs detection scenarios.
Keywords: X-ray, contraband, ETL, data enhancement, dataset
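The mean (average) hash mentioned for quick image retrieval can be sketched in a few lines: downscale to an 8x8 grayscale thumbnail, threshold at the mean intensity, and pack the bits; near-duplicate frames then differ by a small Hamming distance. This is the generic aHash recipe, not necessarily the exact variant used in the paper.

```python
import numpy as np
from PIL import Image

def mean_hash(img: Image.Image, size: int = 8) -> int:
    """Average (mean) hash: downscale, grayscale, threshold at the mean, pack bits."""
    g = np.asarray(img.convert("L").resize((size, size)), dtype=np.float32)
    bits = (g > g.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Duplicate or near-duplicate X-ray frames should give a small Hamming distance.
img1 = Image.fromarray(np.random.default_rng(0).integers(0, 255, (256, 256), dtype=np.uint8))
img2 = img1.rotate(1)   # slightly perturbed copy
print(hamming(mean_hash(img1), mean_hash(img2)))
```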
11. A General Framework for Intelligent IoT Data Acquisition and Sharing in an Untrusted Environment Based on Blockchain
Authors: Lu Yin, Xue Yongtao, Li Qingyuan, Wu Luocheng, Li Taosen, Yang Peipei, Zhu Hongbo. China Communications (SCIE, CSCD), 2024, Issue 3, pp. 137-148 (12 pages).
Traditional IoT systems suffer from high equipment management costs and difficulty in trustworthy data sharing caused by centralization. Blockchain provides a feasible research direction to solve these problems. The main challenge at this stage is to integrate the blockchain with resource-constrained IoT devices and ensure that the data of the IoT system is credible. We provide a general framework for intelligent IoT data acquisition and sharing in an untrusted environment based on the blockchain, where gateways become Oracles. A distributed Oracle network based on a Byzantine Fault Tolerant algorithm is used to provide trusted data to the blockchain, making intelligent IoT data trustworthy. An aggregation contract is deployed to collect data from the various Oracles and share the credible data with all on-chain users. We also propose a gateway data aggregation scheme based on a REST API event publishing/subscribing mechanism that uses SQL to achieve flexible data aggregation. The experimental results show that the proposed scheme can alleviate the problem of limited performance of IoT equipment, make data reliable, and meet the diverse data needs on the chain.
Keywords: blockchain, data sharing, Internet of Things, oracle
12. Improved Data Stream Clustering Method: Incorporating KD-Tree for Typicality and Eccentricity-Based Approach
Authors: Dayu Xu, Jiaming Lu, Xuyao Zhang, Hongtao Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2557-2573 (17 pages).
Data stream clustering is integral to contemporary big data applications. However, addressing the ongoing influx of data streams efficiently and accurately remains a primary challenge in current research. This paper aims to elevate the efficiency and precision of data stream clustering. Leveraging the TEDA (Typicality and Eccentricity Data Analysis) algorithm as a foundation, we introduce improvements by integrating a nearest neighbor search algorithm to enhance both the efficiency and accuracy of the algorithm. The original TEDA algorithm, grounded in the concept of "typicality and eccentricity data analytics", is an evolving and recursive method that requires no prior knowledge. While the algorithm autonomously creates and merges clusters as new data arrives, its efficiency is significantly hindered by the need to traverse all existing clusters upon the arrival of each new data point. This work presents the NS-TEDA (Neighbor Search Based Typicality and Eccentricity Data Analysis) algorithm, which incorporates a KD-Tree (K-Dimensional Tree) integrated with a Scapegoat Tree. This ensures that new data points interact solely with clusters in very close proximity, which significantly enhances algorithm efficiency while preventing a single data point from joining too many clusters and mitigating, to some extent, the merging of clusters with high overlap. We apply the NS-TEDA algorithm to several well-known datasets, comparing its performance with other data stream clustering algorithms and the original TEDA algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy, and its runtime exhibits almost linear dependence on the volume of data, making it more suitable for large-scale data stream analysis research.
Keywords: data stream clustering, TEDA, KD-tree, scapegoat tree
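The core speed-up claimed above is that a new point should only touch nearby clusters. Below is a minimal sketch using SciPy's cKDTree over cluster centers; the TEDA typicality/eccentricity updates and the scapegoat-tree rebalancing are omitted, and since cKDTree is static, a streaming implementation would rebuild it periodically as centers change.

```python
import numpy as np
from scipy.spatial import cKDTree

class NearestClusterIndex:
    """Keep cluster centers in a KD-tree so a new point only touches nearby clusters."""
    def __init__(self, centers: np.ndarray):
        self.centers = centers
        self.tree = cKDTree(centers)

    def candidates(self, x: np.ndarray, radius: float):
        """Indices of clusters close enough to be considered for update by point x."""
        return self.tree.query_ball_point(x, r=radius)

rng = np.random.default_rng(0)
centers = rng.normal(size=(500, 3))             # 500 existing cluster centers
index = NearestClusterIndex(centers)
x = rng.normal(size=3)                          # newly arrived stream point
near = index.candidates(x, radius=0.5)
print(f"point interacts with {len(near)} of {len(centers)} clusters")
```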
13. A Fair and Trusted Trading Scheme for Medical Data Based on Smart Contracts
Authors: Xiaohui Yang, Kun Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1843-1859 (17 pages).
Data is regarded as a valuable asset, and sharing data is a prerequisite for fully exploiting its value. However, current medical data sharing schemes lack a fair incentive mechanism, and the authenticity of data cannot be guaranteed, resulting in low enthusiasm among participants. A fair and trusted medical data trading scheme based on smart contracts is proposed, which aims to encourage participants to be honest and improve their enthusiasm for participation. The scheme uses zero-knowledge range proofs for trusted verification, verifies the authenticity of the patient's data and specific attributes of the data before the transaction, and realizes privacy protection. At the same time, the game pricing strategy selects the best revenue strategy for all parties involved and realizes fairness and incentives in the transaction price. The smart contract is used to complete the verification and game bargaining process, and the blockchain is used as a distributed ledger to record the medical data transaction process to prevent data tampering and transaction denial. Finally, by deploying smart contracts on the Ethereum test network and conducting experiments and theoretical calculations, it is shown that the transaction scheme achieves trusted verification and fair bargaining while ensuring privacy protection in a decentralized environment. The experimental results show that the model improves the credibility and fairness of medical data transactions, maximizes social benefits, encourages more patients and medical institutions to participate in the circulation of medical data, and more fully taps the potential value of medical data.
Keywords: blockchain, data transactions, zero-knowledge proof, game pricing
14. Cross-Domain Bilateral Access Control on Blockchain-Cloud Based Data Trading System
Authors: Youngho Park, Su Jin Shin, Sang Uk Shin. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 671-688 (18 pages).
Data trading enables data owners and data requesters to sell and purchase data. With the emergence of blockchain technology, research on blockchain-based data trading systems is receiving a lot of attention. In particular, to reduce on-chain storage costs, a novel paradigm of blockchain and cloud fusion has been widely considered as a promising data trading platform. Moreover, the fact that data can be used for commercial purposes will encourage users and organizations from various fields to participate in the data marketplace. In the data marketplace, it is a challenge to trade data securely outsourced to an external cloud in a way that restricts access to the data only to authorized users across multiple domains. In this paper, we propose a cross-domain bilateral access control protocol for blockchain-cloud based data trading systems. We consider a system model that consists of domain authorities, data senders, data receivers, a blockchain layer, and a cloud provider. The proposed protocol enables access control and source identification of the outsourced data by leveraging identity-based cryptographic techniques. In the proposed protocol, the outsourced data of the sender is encrypted under the target receiver's identity, and the cloud provider performs policy-match verification on the authorization tags of the sender and receiver generated by the identity-based signature scheme. Therefore, data trading can be achieved only if the identities of the data sender and receiver simultaneously meet the policies specified by each other. To demonstrate efficiency, we evaluate the performance of the proposed protocol and compare it with existing studies.
Keywords: bilateral access control, blockchain, data sharing, policy-match
15. FADSF: A Data Sharing Model for Intelligent Connected Vehicles Based on Blockchain Technology
Authors: Yan Sun, Caiyun Liu, Jun Li, Yitong Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2351-2362 (12 pages).
With the development of technology, the connected vehicle has been upgraded from a traditional transport vehicle to an information terminal and energy storage terminal. The data of ICVs (intelligent connected vehicles) is the key to organically maximizing their efficiency. However, in the context of increasingly strict global data security supervision and compliance, numerous problems, including complex types of connected vehicle data, poor data collaboration between the IT (information technology) and OT (operation technology) domains, differing data format standards, lack of shared trust sources, difficulty in ensuring the quality of shared data, lack of data control rights, and difficulty in defining data ownership, make vehicle data sharing face many obstacles, and data islands are widespread. This study proposes FADSF (Fuzzy Anonymous Data Share Frame), an automobile data sharing scheme based on blockchain. The data holder publishes the shared data information and forms the corresponding label storage on the blockchain. The data demander browses the data directory information to select and purchase data assets and verify them. Meanwhile, this paper designs a data structure, the Data Discrimination Bloom Filter (DDBF), to support complaints about illegal data. When the number of data complaints reaches a threshold, the audit traceability contract is triggered to punish the illegal data publisher, aiming to improve data quality and maintain a healthy data sharing ecology. The scheme is tested on Ethereum to demonstrate its feasibility, efficiency and security.
Keywords: blockchain, connected vehicles, data sharing, smart contracts, credible traceability
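A plain Bloom filter sketch to illustrate the membership structure underlying the proposed DDBF; the paper's data-discrimination logic and complaint-threshold handling are not reproduced here, and the item encoding is hypothetical.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter; the paper's DDBF adds data-discrimination logic on top."""
    def __init__(self, m_bits: int = 4096, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add(b"complaint:dataset-42")
print(bf.might_contain(b"complaint:dataset-42"), bf.might_contain(b"complaint:dataset-99"))
```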
16. Accurate method based on data filtering for quantitative multi-element analysis of soils using CF-LIBS
Authors: 韩伟伟, 孙对兄, 张国鼎, 董光辉, 崔小娜, 申金成, 王浩亮, 张登红, 董晨钟, 苏茂根. Plasma Science and Technology (SCIE, EI, CAS, CSCD), 2024, Issue 6, pp. 149-158 (10 pages).
To obtain more stable spectral data for accurate quantitative multi-element analysis, especially for large-area in-situ detection of elements in soils, we propose a method for multi-element quantitative analysis of soils using calibration-free laser-induced breakdown spectroscopy (CF-LIBS) based on data filtering. In this study, we analyze a standard soil sample doped with two heavy metal elements, Cu and Cd, with a specific focus on the Cu I 324.75 nm line for filtering the experimental data of multiple sample sets. After data filtering, the relative standard deviation for Cu decreased from 30% to 10%, and the limits of detection (LOD) for Cu and Cd decreased by 5% and 4%, respectively. Through CF-LIBS, a quantitative analysis was conducted to determine the relative content of elements in soils. Using Cu as a reference, the concentration of Cd was accurately calculated. The results show that, after data filtering, the average relative error for Cd decreases from 11% to 5%, indicating the effectiveness of data filtering in improving the accuracy of quantitative analysis. Moreover, the content of Si, Fe and other elements can be accurately calculated using this method. To further correct the calculation, the results for Cd were used to provide a more precise calculation. This approach is of great importance for large-area in-situ detection of heavy metals and trace elements in soil, as well as for rapid and accurate quantitative analysis.
Keywords: laser-induced breakdown spectroscopy, soil, data filtering, quantitative analysis, multi-element
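A toy sketch of line-intensity-based data filtering: keep only shots whose Cu I 324.75 nm intensity lies close to the median, then compare the relative standard deviation before and after. The tolerance and the simulated intensities are illustrative assumptions, not the paper's filtering criterion.

```python
import numpy as np

def relative_std(x: np.ndarray) -> float:
    """Relative standard deviation (RSD) in percent."""
    return float(np.std(x, ddof=1) / np.mean(x) * 100.0)

def filter_by_reference_line(intensities: np.ndarray, tol: float = 0.15) -> np.ndarray:
    """Keep shots whose Cu I 324.75 nm intensity is within +/- tol of the median.
    The tolerance value is illustrative, not the paper's criterion."""
    med = np.median(intensities)
    return np.abs(intensities - med) <= tol * med

rng = np.random.default_rng(2)
cu_line = rng.normal(1000.0, 300.0, size=200)   # simulated shot-to-shot Cu I intensities
mask = filter_by_reference_line(cu_line)
print(f"RSD before: {relative_std(cu_line):.1f}%  after: {relative_std(cu_line[mask]):.1f}%")
```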
17. Data augmentation method for insulators based on Cycle-GAN
Authors: Run Ye, Azzedine Boukerche, Xiao-Song Yu, Cheng Zhang, Bin Yan, Xiao-Jia Zhou. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2024, Issue 2, pp. 36-47 (12 pages).
Data augmentation is an important task that uses existing data to expand datasets. Using generative adversarial network technology to realize data augmentation has the advantages of high-quality generated samples, simple training, and fewer restrictions on the number of generated samples. However, in the field of transmission line insulator images, freely synthesized samples are prone to fuzzy backgrounds and disordered main insulator features. To solve these problems, this paper uses the cycle generative adversarial network (Cycle-GAN), used for domain conversion among generative adversarial networks, as the initial framework, and uses self-attention and channel attention mechanisms to assist the conversion and realize the mutual conversion of different insulator samples. An attention module with prior knowledge is used to build the generative adversarial network, and a generative adversarial network (GAN) model with locally controllable generation is built to realize the directional generation of insulator belt defect samples. The experimental results show that the samples obtained by this method improve on a number of quality indicators and are of excellent quality, which provides a reference for the data expansion of insulator images.
Keywords: data expansion, deep learning, generative adversarial network, insulator
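A compact PyTorch sketch of the cycle-consistency and identity terms that drive unpaired domain conversion in Cycle-GAN training; the generators here are single-layer placeholders, and the attention modules and adversarial losses from the paper are omitted.

```python
import torch
import torch.nn as nn

# Placeholder generators; real Cycle-GAN generators are ResNet- or U-Net-style CNNs.
G_AB = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # normal insulator -> defective style
G_BA = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # defective -> normal style

l1 = nn.L1Loss()

def cycle_losses(real_a, real_b, lam_cycle=10.0, lam_id=5.0):
    """Cycle-consistency and identity terms used when training unpaired translators."""
    fake_b, fake_a = G_AB(real_a), G_BA(real_b)
    rec_a, rec_b = G_BA(fake_b), G_AB(fake_a)
    loss_cycle = l1(rec_a, real_a) + l1(rec_b, real_b)
    loss_id = l1(G_BA(real_a), real_a) + l1(G_AB(real_b), real_b)
    return lam_cycle * loss_cycle + lam_id * loss_id   # adversarial terms omitted

a = torch.randn(2, 3, 64, 64)   # batch of "normal insulator" crops
b = torch.randn(2, 3, 64, 64)   # batch of "defective insulator" crops
print(cycle_losses(a, b).item())
```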
18. Image segmentation of exfoliated two-dimensional materials by generative adversarial network-based data augmentation
Authors: 程晓昱, 解晨雪, 刘宇伦, 白瑞雪, 肖南海, 任琰博, 张喜林, 马惠, 蒋崇云. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 112-117 (6 pages).
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. The DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Keywords: two-dimensional materials, deep learning, data augmentation, generative adversarial networks
19. An Innovative Deep Architecture for Flight Safety Risk Assessment Based on Time Series Data
Authors: Hong Sun, Fangquan Yang, Peiwen Zhang, Yang Jiao, Yunxiang Zhao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 3, pp. 2549-2569 (21 pages).
With the deepening integration of aviation safety and artificial intelligence, research on the combination of risk assessment and artificial intelligence is particularly important in the field of risk management, but finding an efficient and accurate risk assessment algorithm has become a challenge for the civil aviation industry. Therefore, an improved risk assessment algorithm (PS-AE-LSTM) based on a long short-term memory network (LSTM) with an autoencoder (AE) is proposed to address the problem that existing supervised deep learning algorithms in flight safety cannot adequately handle the quality of risk-level labels. Firstly, based on the normal distribution characteristics of flight data, a probability severity (PS) model is established to enhance the quality of the risk assessment labels. Secondly, an autoencoder is introduced to reconstruct the flight parameter data and improve data quality. Finally, utilizing the time-series nature of flight data, a long short-term memory network is used to classify the risk level and improve the accuracy of risk assessment. A risk assessment experiment was then conducted on a fleet landing-phase dataset using the PS-AE-LSTM algorithm to assess the risk level associated with aircraft hard landing events. The results show that the proposed algorithm achieves an accuracy of 86.45% compared with seven baseline models and has excellent risk assessment capability.
Keywords: safety engineering, risk assessment, time series data, autoencoder, LSTM
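A minimal PyTorch sketch in the spirit of AE + LSTM risk classification: an autoencoder reconstructs each time step's flight parameters, and an LSTM classifies the reconstructed sequence into risk levels. Parameter counts, sequence length, and the number of risk levels are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AERiskLSTM(nn.Module):
    """Autoencoder cleans per-step flight parameters; LSTM classifies the sequence."""
    def __init__(self, n_params: int = 16, latent: int = 8, n_levels: int = 3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_params, latent), nn.ReLU())
        self.dec = nn.Linear(latent, n_params)
        self.lstm = nn.LSTM(n_params, 32, batch_first=True)
        self.cls = nn.Linear(32, n_levels)

    def forward(self, x):                      # x: (batch, time, n_params)
        x_rec = self.dec(self.enc(x))          # reconstructed flight parameters
        _, (h_n, _) = self.lstm(x_rec)
        logits = self.cls(h_n[-1])             # risk-level logits
        return logits, x_rec

model = AERiskLSTM()
logits, x_rec = model(torch.randn(4, 120, 16))   # e.g., 120 samples of the landing phase
print(logits.shape, x_rec.shape)                 # torch.Size([4, 3]) torch.Size([4, 120, 16])
```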
20. Quantitative Analysis of Seeing with Height and Time at Muztagh-Ata Site Based on ERA5 Database
Authors: Xiao-Qi Wu, Cun-Ying Xiao, Ali Esamdin, Jing Xu, Ze-Wei Wang, Luo Xiao. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, Issue 1, pp. 87-95 (9 pages).
Seeing is an important index for evaluating the quality of an astronomical site. To quantitatively estimate seeing at the Muztagh-Ata site as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site. Seeing decays with height fastest in fall 2021 and most slowly in summer, and the seeing condition is better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec; the maximum is 1.21 arcsec in August and the minimum is 0.66 arcsec in October. The median seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and approximately biannual variations with the same phase as temperature and wind speed, indicating that the temporal variation of seeing is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies.
Keywords: site testing, atmospheric effects, methods: data analysis, telescopes, Earth
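For reference, seeing is commonly derived from an integrated Cn² profile through the Fried parameter, r0 = [0.423 k² ∫Cn²(h) dh]^(-3/5) with k = 2π/λ, and ε ≈ 0.98 λ / r0. The sketch below assumes an integrated Cn² profile is already available; in the paper Cn² would first be modeled from ERA5 meteorological profiles, which is not reproduced here, and the profile values used are illustrative only.

```python
import numpy as np

def seeing_arcsec(cn2, heights_m, wavelength=500e-9):
    """Seeing (FWHM, arcsec) from a Cn^2 profile via the Fried parameter r0."""
    k = 2.0 * np.pi / wavelength
    # Trapezoidal integration of Cn^2 over height.
    j = float(np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(heights_m)))
    r0 = (0.423 * k**2 * j) ** (-3.0 / 5.0)          # Fried parameter in metres
    return np.degrees(0.98 * wavelength / r0) * 3600.0

# Synthetic Cn^2 profile decaying with height above the 12 m level (illustrative only).
h = np.linspace(12.0, 20000.0, 400)
cn2 = 1e-16 * np.exp(-h / 1500.0)
print(f"seeing ~ {seeing_arcsec(cn2, h):.2f} arcsec")
```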