Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62266030 and 61863025), the International S&T Cooperation Projects of Gansu Province (Grant No. 144WCGA166), the Longyuan Young Innovation Talents program, and the Doctoral Foundation of LUT.
Abstract: This paper first estimated the infectious capacity of COVID-19 based on time-series data of confirmed cases in multiple countries. It then introduced a method to infer the cross-regional spread speed of COVID-19, which took the gross domestic product (GDP) of each region as one of the factors affecting the spread speed and studied the relationship between GDP and the infection density of each region (China's mainland, the United States, and EU countries). The method also considered the geographic distance between regions and studied its effect on the spread speed of COVID-19; the analysis showed that the probability of mutual infection between two regions decreases with increasing geographic distance. On this basis, the paper proposed an epidemic disease spread index based on GDP and geographic distance to quantify the spread speed of COVID-19 in a region. The results showed a strong correlation between a region's epidemic disease spread index and its number of confirmed cases. This finding provides practical suggestions for epidemic control: strengthening control measures in regions with a higher epidemic disease spread index can effectively contain the spread of epidemics.
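The abstract does not give the exact form of the index, so the snippet below is only a hypothetical, gravity-style illustration of how a GDP- and distance-based spread index could be assembled (interaction grows with GDP and shrinks with distance); the functional form, the decay exponent, and all numbers are assumptions, not the paper's formula.

```python
def spread_index(gdp_i, neighbours, alpha=2.0):
    """Hypothetical gravity-style index for one region: the interaction with
    each other region grows with the product of the two GDPs and decays with
    geographic distance raised to an assumed exponent alpha."""
    return sum(gdp_i * gdp_j / (d_ij ** alpha) for gdp_j, d_ij in neighbours)

# Region with GDP 500 (arbitrary units), two neighbours at 300 km and 1200 km.
print(round(spread_index(500.0, [(2000.0, 300.0), (800.0, 1200.0)]), 3))
```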
Funding: Supported by the Fundamental Research Funds for the Central Universities (FRF-TP20-062A1) and the Guangdong Basic and Applied Basic Research Foundation (2021A1515110070).
Abstract: This paper presents a software turbo decoder on graphics processing units (GPU). Unlike previous works, the proposed decoding architecture for turbo codes mainly focuses on the Consultative Committee for Space Data Systems (CCSDS) standard. However, the information frame lengths of the CCSDS turbo codes are not suitable for a flexible sub-frame parallelism design. To mitigate this issue, we propose a padding method that inserts several bits before the information frame header. To obtain low latency and high resource utilization, two levels of intra-frame parallelism and an efficient data structure are employed. The presented Max-Log-MAP decoder can also decode Long Term Evolution (LTE) turbo codes with only small modifications. At 10 iterations on an NVIDIA RTX3070, the proposed CCSDS turbo decoder achieves throughputs of about 150 Mbps and 50 Mbps for code rates 1/6 and 1/2, respectively.
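To make the padding idea concrete, here is a minimal Python sketch. The sub-frame length of 64 and the use of zero-valued filler bits are illustrative assumptions; the abstract only states that bits are inserted before the information frame header so that sub-frame parallelism becomes flexible.

```python
def pad_info_frame(info_bits, sub_frame_len):
    """Prepend filler bits so the frame length becomes a multiple of the
    sub-frame length, enabling even sub-frame parallelism on the GPU.
    Zero filler bits and prepending (rather than appending) are assumptions."""
    remainder = len(info_bits) % sub_frame_len
    n_pad = 0 if remainder == 0 else sub_frame_len - remainder
    return [0] * n_pad + list(info_bits), n_pad


# Example: a CCSDS information frame of 1784 bits split into assumed 64-bit sub-frames.
padded, n_pad = pad_info_frame([1] * 1784, 64)
print(n_pad, len(padded))  # 8 padding bits -> 1792 = 28 * 64
```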
Funding: This work is supported by the NSFC (Nos. 61772280, 61772454), the Changzhou Sci&Tech Program (No. CJ20179027), and the PAPD fund from NUIST. Prof. Jin Wang is the corresponding author.
Abstract: Since the British National Archives put forward the concept of digital continuity in 2007, several developed countries have worked out digital continuity action plans. However, technologies to guarantee digital continuity are still lacking. This paper first analyzes the requirements of digital continuity guarantee for electronic records based on data quality theory and points out the necessity of data quality guarantee for electronic records. It then converts the digital continuity guarantee of electronic records into ensuring their consistency, completeness, and timeliness, and constructs the first technology framework of digital continuity guarantee for electronic records. Finally, temporal functional dependency technology is utilized to build the first integration method to ensure the consistency, completeness, and timeliness of electronic records.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 71704016, 71331008, 71402010), the Natural Science Foundation of Hunan Province (Grant No. 2017JJ2267), the Educational Economy and Financial Research Base of Hunan Province (Grant No. 13JCJA2), and the Project of the China Scholarship Council for Overseas Studies (201508430121, 201208430233).
Abstract: In the big data environment, enterprises must constantly assimilate big data knowledge and private knowledge through multiple knowledge transfers to maintain their competitive advantage. The timing of knowledge transfer is one of the most important aspects of improving knowledge transfer efficiency. Based on an analysis of the complex characteristics of knowledge transfer in the big data environment, multiple knowledge transfers can be divided into two categories: the simultaneous transfer of various types of knowledge, and multiple knowledge transfers at different time points. Taking into consideration influential factors such as knowledge type, knowledge structure, knowledge absorptive capacity, knowledge update rate, discount rate, market share, the profit contribution of each type of knowledge, transfer costs, and product life cycle, time optimization models of multiple knowledge transfers in the big data environment are presented by maximizing the total discounted expected profits (DEPs) of an enterprise. Simulation experiments have been performed to verify the validity of the models, which can help enterprises determine the optimal time of multiple knowledge transfers in the big data environment.
Funding: Supported by the NSFC (61772454) and the Researchers Supporting Project No. RSP-2020/102, King Saud University, Riyadh, Saudi Arabia; also funded by the National Key Research and Development Program of China (2019YFC1511000).
Abstract: As the number of sensor network application scenarios continues to grow, the security problems inherent in this approach have become obstacles that hinder its wide application; nevertheless, it has attracted increasing attention from industry and academia. The blockchain is based on a distributed network and has the characteristics of non-tampering and traceability of block data, so it is naturally able to solve the security problems of sensor networks. Accordingly, this paper first analyzes the security risks associated with data storage in sensor networks, then proposes using blockchain technology to ensure that data storage in sensor networks is secure. In the traditional blockchain, the data layer uses a Merkle hash tree to store data; however, the Merkle hash tree cannot provide non-membership proofs, which makes it unable to resist attacks by malicious nodes in the network. To solve this problem, this paper utilizes a cryptographic accumulator rather than a Merkle hash tree to provide both membership and non-membership proofs. Moreover, the number of elements in existing accumulators is limited and unable to meet the blockchain's expansion requirements. This paper therefore proposes a new type of unbounded accumulator and provides its definition and security model. Finally, this paper constructs an unbounded accumulator scheme using bilinear pairings and analyzes its performance.
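To make the limitation concrete, the sketch below shows a generic Merkle hash tree (not the paper's accumulator construction) producing a compact membership proof. Proving that an element is absent requires extra machinery such as sorted leaves or an accumulator, which is exactly the gap the paper targets.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle_tree(leaves):
    """Return a list of levels, from leaf hashes up to the root."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def membership_proof(levels, index):
    """Collect the sibling hashes along the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1              # the sibling shares the same parent
        proof.append((level[sibling], sibling % 2))
        index //= 2
    return proof

def verify_membership(root, leaf, proof):
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

data = [b"sensor-reading-%d" % i for i in range(7)]
levels = build_merkle_tree(data)
root = levels[-1][0]
proof = membership_proof(levels, 3)
print(verify_membership(root, data[3], proof))   # True
# There is no analogous short proof that b"forged-reading" is absent.
```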
Funding: Supported by the Shanghai Philosophy and Social Science Planning Project (2017ECK004).
Abstract: An Intelligent Transportation System (ITS) is essential for the effective identification of vulnerable units in the transport network and for its stable operation. It is also necessary to establish an urban transport network vulnerability assessment model with solutions based on the Internet of Things (IoT). Previous research on vulnerability has not considered the congestion effect at peak times in the urban road network. The cascading failure of links or nodes is captured by an IoT monitoring system, which can collect data from a wireless sensor network in the transport environment. The IoT monitoring system collects wireless data via Vehicle-to-Infrastructure (V2I) channels to simulate key segments and their failure probability. The topological structure vulnerability index and the traffic function vulnerability index of the road network are then extracted from the vulnerability factors. The two indices are standardized by calculating the relative change rate, and a comprehensive index of the consequences of a road network unit being in a failure state is obtained. By combining the failure probability of a road network unit with this comprehensive consequence index, the comprehensive vulnerability of the road network can be evaluated by a risk calculation formula. In short, IoT-based solutions to the new vulnerability assessment can help road network planning and traffic management departments achieve ITS goals.
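The abstract does not give the exact formulas, so the sketch below only illustrates the general recipe it describes: standardize the two indices by relative change rate, combine them into a consequence index, and weight by failure probability. The equal weights and the product form are assumptions.

```python
def relative_change_rate(value_after, value_before):
    """Standardize an index as its relative change when the unit fails."""
    return (value_after - value_before) / value_before

def comprehensive_vulnerability(failure_prob,
                                topo_before, topo_after,
                                traffic_before, traffic_after,
                                w_topo=0.5, w_traffic=0.5):
    # Consequence of the failure, combining the two standardized indices.
    # Equal weights are an illustrative assumption.
    consequence = (w_topo * relative_change_rate(topo_after, topo_before)
                   + w_traffic * relative_change_rate(traffic_after, traffic_before))
    # Risk-style formula: probability of failure times its consequence.
    return failure_prob * consequence

# Example: a segment whose failure raises average path length by 20%
# and travel time by 35%, with a 10% failure probability at peak hour.
print(comprehensive_vulnerability(0.10, 1.00, 1.20, 30.0, 40.5))
```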
Funding: This work is supported by the Sichuan Science and Technology Program (2019YFG0212), the China Postdoctoral Science Foundation (2019M653401), and the Sichuan Science and Technology Program (2018GZ0184).
Abstract: In this paper, we investigate the performance of a secondary transmission scheme based on the Markov ON-OFF state of primary users in underlay cognitive radio networks. We propose a flexible secondary cooperative transmission scheme with an interference cancellation technique according to the ON-OFF status of the primary transmitter. For maximal ratio combining (MRC) at the destination, we derive exact closed-form expressions of the outage probability in different situations. The numerical simulation results also reveal that the proposed scheme improves secondary transmission performance compared with the traditional mechanism in terms of secondary outage probability and energy efficiency.
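As background for the outage metric used here, the short Monte Carlo sketch below estimates the outage probability of MRC over independent Rayleigh-fading branches. It is a generic illustration rather than the paper's closed-form derivation, and the branch count, average SNR, and rate threshold are arbitrary assumptions.

```python
import numpy as np

def mrc_outage_probability(n_branches=2, avg_snr_db=10.0, rate_bps_hz=1.0,
                           n_trials=200_000, seed=0):
    """Monte Carlo estimate of P[log2(1 + sum_k SNR_k) < rate] under
    i.i.d. Rayleigh fading, where MRC adds the per-branch SNRs."""
    rng = np.random.default_rng(seed)
    avg_snr = 10.0 ** (avg_snr_db / 10.0)
    # |h|^2 for Rayleigh fading is exponentially distributed with unit mean.
    channel_gains = rng.exponential(scale=1.0, size=(n_trials, n_branches))
    combined_snr = avg_snr * channel_gains.sum(axis=1)
    capacity = np.log2(1.0 + combined_snr)
    return np.mean(capacity < rate_bps_hz)

print(mrc_outage_probability())              # outage with 2-branch MRC
print(mrc_outage_probability(n_branches=1))  # higher outage without diversity
```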
Funding: Supported by the Fundamental Research Funds for the Central Universities (FRF-TP20-062A1) and the Guangdong Basic and Applied Basic Research Foundation (2021A1515110070).
Abstract: Belief propagation (BP) decoding outputs soft information and can be naturally used in iterative receivers. BP list (BPL) decoding provides error-correction performance comparable to successive cancellation list (SCL) decoding. In this paper, we first introduce an enhanced code construction scheme for BPL decoding to improve its error-correction capability. Then, a GPU-based BPL decoder that adopts the new code construction is presented. Finally, the proposed BPL decoder is tested on NVIDIA RTX3070 and GTX1060 GPUs. Experimental results show that the presented BPL decoder with an early-termination criterion achieves above 1 Gbps throughput on the RTX3070 for the (1024, 512) code with 32 lists under good channel conditions.
Funding: Fundamental Research Funds for the Central Universities (Grant No. FRF-TP-19-006A3).
Abstract: As a common and high-risk type of disease, heart disease seriously threatens people's health. At the same time, in the era of the Internet of Things (IoT), smart medical devices have strong practical significance for medical workers and patients because of their ability to assist in the diagnosis of diseases. Therefore, research on real-time diagnosis and classification algorithms for arrhythmia can help improve diagnostic efficiency. In this paper, we design an automatic arrhythmia classification algorithm based on a Convolutional Neural Network (CNN) and an Encoder-Decoder model. The model uses Long Short-Term Memory (LSTM) to capture the influence of time-series features on the classification results, and it is trained and tested on the MIT-BIH arrhythmia database. In addition, Generative Adversarial Networks (GAN) are adopted as a data equalization method to address the data imbalance problem. The simulation results show that, for inter-patient arrhythmia classification, the hybrid model combining the CNN and the Encoder-Decoder model has the best classification accuracy, reaching 94.05%. In particular, it performs better on supraventricular ectopic beats (class S) and fusion beats (class F).
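The abstract does not specify the architecture details, so the PyTorch sketch below is only a minimal illustration of the kind of CNN-plus-LSTM pipeline described: convolutional feature extraction over a single-lead ECG window, an LSTM over the resulting sequence, and a classification head. The layer sizes, the four-class output, and the 360-sample window are assumptions.

```python
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    """Toy CNN + LSTM arrhythmia classifier: Conv1d blocks extract local
    morphology features, an LSTM models their temporal order, and a linear
    head predicts a beat class. All sizes are illustrative assumptions."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 1, 360), one second at 360 Hz
        f = self.features(x)           # (batch, 32, 90)
        f = f.transpose(1, 2)          # (batch, 90, 32) as an LSTM sequence
        _, (h_n, _) = self.lstm(f)     # final hidden state summarizes the beat
        return self.head(h_n[-1])      # (batch, n_classes) logits

model = CnnLstmClassifier()
logits = model(torch.randn(8, 1, 360))
print(logits.shape)                    # torch.Size([8, 4])
```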
Funding: This work is supported by the Intelligent Manufacturing Standardization Program of the Ministry of Industry and Information Technology (No. 2016ZXFB01001).
Abstract: Classification of skin lesions is a complex identification challenge. Due to the wide variety of skin lesions, doctors need to spend a lot of time and effort judging lesion images magnified through a dermatoscope, so algorithm-assisted diagnosis of pathological images is receiving more and more attention. With the development of deep learning, the field of image recognition has made great progress, and convolutional neural network models recognize images better than traditional image recognition technology. In this work, we classify seven kinds of lesion images using various deep learning models and methods; common convolutional neural network models for image classification include ResNet, DenseNet, and SENet. We fine-tune these models with a multi-layer perceptron head when training the skin lesion model, apply data expansion based on multiple cropping on the validation and test sets, and use an ensemble of five models for the final results. The experimental results show that the approach improves the sensitivity of skin lesion diagnosis.
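To illustrate the test-time recipe described (multi-crop data expansion plus a five-model ensemble), here is a minimal sketch. Averaging softmax outputs over crops and models is a common choice and an assumption here, since the abstract does not say how predictions are fused.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(models, crops):
    """Average class probabilities over every (model, crop) pair.

    `models` is a list of callables mapping an image crop to logits over the
    7 lesion classes; `crops` is a list of cropped views of one test image."""
    probs = [softmax(m(c)) for m in models for c in crops]
    return np.mean(probs, axis=0)          # fused 7-class probability vector

# Toy stand-ins for five fine-tuned CNNs and five crops of one image.
rng = np.random.default_rng(0)
models = [lambda c, w=rng.normal(size=(16, 7)): c @ w for _ in range(5)]
crops = [rng.normal(size=16) for _ in range(5)]
fused = ensemble_predict(models, crops)
print(fused.round(3), fused.argmax())
```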
Funding: This work was supported by China's National Natural Science Foundation (Nos. 62072249, 62072056). Jin Wang and Yongjun Ren received the grant; the sponsor's website is https://www.nsfc.gov.cn/. This work was also funded by the Researchers Supporting Project No. RSP-2021/102, King Saud University, Riyadh, Saudi Arabia.
Abstract: Since transactions in blockchain are based on public ledger verification, this raises security concerns about privacy protection. Verifying whole transactions on the chain also causes data to accumulate on the chain, resulting in low block verification efficiency. In order to improve the efficiency and privacy protection of block data verification, this paper proposes an efficient block verification mechanism with privacy protection based on zero-knowledge proofs (ZKP), which not only protects the privacy of users but also improves the speed of block data verification. There is no need to put the whole transaction on the chain when verifying block data: it suffices to generate the ZKP and root hash from the transaction information and save them to the smart contract for verification. The ZKP verification in the smart contract realizes privacy protection for the transaction and efficient verification of the block. When the data is validated, the buffer accepts the complete transaction, updates the transaction status in the cloud database, and packages it onto the chain. Thus, the ZKP strengthens the privacy protection ability of the blockchain, and the smart contract saves the time cost of block verification.
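The sketch below only illustrates the commit-then-verify split the abstract describes: the full transactions stay off-chain, a compact root hash is stored, and the "contract" re-derives and compares the commitment from claimed transaction hashes. The real scheme additionally attaches a zero-knowledge proof (e.g., produced by a zk-SNARK toolchain), which this toy hash-only version does not implement, and the folding function here is a simplification rather than the paper's construction.

```python
import hashlib, json

def tx_hash(tx: dict) -> str:
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def root_hash(tx_hashes):
    """Fold a list of transaction hashes into a single root commitment.
    A real deployment would use a Merkle tree plus a ZKP; the simple
    iterated hash here only keeps the example short."""
    acc = ""
    for th in sorted(tx_hashes):
        acc = hashlib.sha256((acc + th).encode()).hexdigest()
    return acc

# Off-chain: the block producer publishes only the commitment.
block_txs = [{"from": "A", "to": "B", "amount": 5},
             {"from": "C", "to": "D", "amount": 7}]
commitment = root_hash([tx_hash(t) for t in block_txs])

# "Smart contract" side: re-derive the commitment from claimed hashes and compare.
def contract_verify(claimed_tx_hashes, stored_commitment):
    return root_hash(claimed_tx_hashes) == stored_commitment

print(contract_verify([tx_hash(t) for t in block_txs], commitment))  # True
```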
Funding: The Researchers Supporting Project Number (RSP2023R 509), King Saud University, Riyadh, Saudi Arabia. This work was supported in part by the Higher Education Sprout Project from the Ministry of Education (MOE) and the National Science and Technology Council, Taiwan (109-2628-E-224-001-MY3), and in part by Isuzu Optics Corporation. Dr. Shih-Yu Chen is the corresponding author.
Abstract: The medical community has growing concern about lung cancer analysis. Manual segmentation of lung cancers by medical experts is time-consuming and needs to be automated. The research study's objective is to diagnose lung tumors at an early stage to extend the lives of humans using deep learning techniques. A Computer-Aided Diagnostic (CAD) system aids in the diagnosis and shortens the time necessary to detect the tumor. The application of Deep Neural Networks (DNN) has also been shown to be an excellent and effective method in classification and segmentation tasks. This research aims to separate lung cancers from Magnetic Resonance Imaging (MRI) images with threshold segmentation. The Honey hook process categorizes lung cancer based on characteristics retrieved using several classifiers. Following this principle, the work presents a solution for image compression utilizing a Deep Wave Auto-Encoder (DWAE). The combination of the two approaches significantly reduces the overall size of the feature set required for any future classification performed using a DNN. The proposed DWAE-DNN image classifier is applied to a lung imaging dataset with a Radial Basis Function (RBF) classifier. The study reported promising results with an accuracy of 97.34%, whereas the Decision Tree (DT) classifier has an accuracy of 94.24%. The proposed approach (DWAE-DNN) classifies the images as either malignant or normal with an accuracy of 98.67%. Beyond accuracy, the work also uses benchmark metrics such as specificity, sensitivity, and precision to evaluate the efficiency of the network. The investigation finds that the DT classifier provides the maximum performance in the DWAE-DNN, based on the network's performance on image testing, as shown by the data acquired by the classifiers themselves.
Abstract: Carbon monoxide (CO) is harmful to our health and can even cause death. The main source of CO is automobile exhaust. Therefore, this article takes CO as the emission factor and establishes an evaluation model, which provides an important basis for highway construction project design, traffic management, environmental pollution control, energy saving, and environmental evaluation. In contrast to the traditional method, which calculates road traffic volume through an air emissions model according to the total amount of air pollution control, this paper builds an emission diffusion model that calculates road traffic volume from road exhaust density. First, the paper measures CO emissions by testing 435 multifunction detectors on typical Shanghai roads and compares the results with the national control standard. According to the automobile exhaust emission standard, the extreme values of the traffic volume on the road are calculated. Finally, the model's reasonableness and accuracy are validated through a case study, and the results show that the evaluation model is of great practical significance.
Abstract: In this paper, we study the state-dependent interference channel, where the Rayleigh channel is non-causally known at the cognitive network. We propose an active secondary transmission mechanism with an interference cancellation technique according to the ON-OFF status of the primary network. The secondary transmission mechanism is divided into four cases according to the active state of the primary user in the two time slots. For these interference cases, numerical results are provided to show that the active interference cancellation mechanism significantly reduces the secondary transmission performance in terms of secondary outage probability and energy efficiency.
基金supported by China’s National Natural Science Foundation(Nos.62072249,62072056)This work is also funded by the National Science Foundation of Hunan Province(2020JJ2029).
Abstract: With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly remedy the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT; PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out comparative experiments. This mechanism can greatly reduce communication overhead and make rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
Funding: Funded by the National Natural Science Foundation of China (62072056, 62172058), the Researchers Supporting Project Number (RSP2023R102), King Saud University, Riyadh, Saudi Arabia, the Hunan Provincial Key Research and Development Program (2022SK2107, 2022GK2019), the Natural Science Foundation of Hunan Province (2023JJ30054), the Foundation of the State Key Laboratory of Public Big Data (PBD2021-15), the Young Doctor Innovation Program of Zhejiang Shuren University (2019QC30), and the Postgraduate Scientific Research Innovation Project of Hunan Province (CX20220940, CX20220941).
Abstract: Blockchain can realize the reliable storage of large amounts of data that are chronologically related and verifiable within the system. The technology has been widely used and has developed rapidly in big data systems across various fields, and an increasing number of users are participating in application systems that use blockchain as their underlying architecture. As the number of transactions and the capital involved in blockchain grow, ensuring information security becomes imperative, and verifying the security and privacy of transactional information has emerged as a critical challenge. Blockchain-based verification methods can effectively eliminate the need for centralized third-party organizations. However, the efficiency of nodes in storing and verifying blockchain data faces unprecedented challenges. To address this issue, this paper introduces an efficient verification scheme for transaction security. It first presents a node evaluation module to estimate the activity level of user nodes participating in transactions, accompanied by a probabilistic analysis of all transactions. It then optimizes the conventional transaction organization form, introduces a heterogeneous Merkle tree storage structure, and designs algorithms for constructing these heterogeneous trees. Theoretical analyses and simulation experiments demonstrate the superior performance of this scheme: when verifying the same number of transactions, the heterogeneous Merkle tree transmits less data and is more efficient than traditional methods. The findings indicate that the heterogeneous Merkle tree structure is suitable for various blockchain applications, including the Internet of Things, and that the scheme can markedly enhance the efficiency of information verification and bolster the security of distributed systems.
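The abstract does not give the paper's construction algorithm, so the sketch below shows only one plausible way to realize the stated idea: build an unbalanced, Huffman-style Merkle tree in which transactions with higher estimated activity sit closer to the root, so their membership proofs are shorter on average. The activity weights and the greedy pairing rule are assumptions.

```python
import hashlib, heapq

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class Node:
    def __init__(self, weight, digest, left=None, right=None):
        self.weight, self.digest, self.left, self.right = weight, digest, left, right
    def __lt__(self, other):
        return self.weight < other.weight

def build_weighted_merkle(txs_with_activity):
    """Huffman-style unbalanced Merkle tree: repeatedly merge the two
    lightest subtrees, so frequently verified transactions end up shallow."""
    heap = [Node(activity, h(tx)) for tx, activity in txs_with_activity]
    heapq.heapify(heap)
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, Node(a.weight + b.weight, h(a.digest + b.digest), a, b))
    return heap[0]

def depth_of(root, digest, d=0):
    if root is None:
        return None
    if root.left is None and root.right is None:
        return d if root.digest == digest else None
    return depth_of(root.left, digest, d + 1) or depth_of(root.right, digest, d + 1)

txs = [(b"hot-tx", 90), (b"warm-tx", 8), (b"cold-tx-1", 1), (b"cold-tx-2", 1)]
root = build_weighted_merkle(txs)
print(depth_of(root, h(b"hot-tx")), depth_of(root, h(b"cold-tx-1")))  # e.g. 1 3
```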
Funding: This research work was supported by the National Natural Science Foundation of China (61772454, 61811530332). Professor Gwang-jun Kim is the corresponding author.
Abstract: Wireless Sensor Networks (WSNs) are large-scale, high-density networks that typically have overlapping coverage areas. In addition, a random deployment of sensor nodes cannot fully guarantee coverage of the sensing area, which leads to coverage holes in WSNs. Thus, coverage control plays an important role in WSNs. To alleviate unnecessary energy wastage and improve network performance, we consider both energy efficiency and coverage rate for WSNs. In this paper, we present a novel coverage control algorithm based on Particle Swarm Optimization (PSO). First, the sensor nodes are randomly deployed in a target area and remain static after deployment. Then, the whole network is partitioned into grids, and we calculate each grid's coverage rate and energy consumption. Finally, each sensor node's sensing radius is adjusted according to the coverage rate and energy consumption of its grid. Simulation results show that our algorithm can effectively improve the coverage rate and reduce energy consumption.
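A minimal sketch of the grid bookkeeping step follows (the PSO search itself and the radius-update rule are omitted). The grid resolution, the disk sensing model, and the energy proxy proportional to the squared radius are assumptions, since the abstract does not specify them.

```python
import numpy as np

def grid_coverage_and_energy(sensors, radii, area=100.0, n_grid=10, samples=20):
    """For each grid cell, estimate the fraction of sample points covered by at
    least one sensor disk, and sum a simple r^2 energy proxy for sensors whose
    centers lie in that cell."""
    cell = area / n_grid
    coverage = np.zeros((n_grid, n_grid))
    energy = np.zeros((n_grid, n_grid))
    for gx in range(n_grid):
        for gy in range(n_grid):
            xs = np.random.uniform(gx * cell, (gx + 1) * cell, samples)
            ys = np.random.uniform(gy * cell, (gy + 1) * cell, samples)
            pts = np.stack([xs, ys], axis=1)                     # (samples, 2)
            d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
            coverage[gx, gy] = np.mean((d <= radii[None, :]).any(axis=1))
    in_x = (sensors[:, 0] // cell).astype(int).clip(0, n_grid - 1)
    in_y = (sensors[:, 1] // cell).astype(int).clip(0, n_grid - 1)
    np.add.at(energy, (in_x, in_y), radii ** 2)                  # energy ~ r^2
    return coverage, energy

sensors = np.random.uniform(0, 100, size=(30, 2))
radii = np.full(30, 12.0)
cov, en = grid_coverage_and_energy(sensors, radii)
print(cov.mean().round(3), en.sum().round(1))
```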
Funding: This work is supported by the National Natural Science Foundation of China (Nos. 61771154, 61603239, 61772454, 6171101570).
Abstract: Deep Learning (DL) is such a powerful tool that we have seen tremendous success in areas such as Computer Vision, Speech Recognition, and Natural Language Processing. Since Automated Modulation Classification (AMC) is an important part of Cognitive Radio Networks, we explore DL's potential in solving the signal modulation recognition problem. It cannot be overlooked that DL models are complex, which makes them prone to over-fitting. A DL model requires a lot of training data to combat over-fitting, but adding high-quality labels to training data manually is not always cheap or accessible, especially in real-time systems, which may encounter unprecedented data. Semi-supervised learning is a way to exploit unlabeled data effectively to reduce over-fitting in DL. In this paper, we extend Generative Adversarial Networks (GANs) to semi-supervised learning and show that this method can be used to create a more data-efficient classifier.
Funding: This research work is supported by the National Natural Science Foundation of China (Nos. 61772454, 6171101570, 61602059), the Hunan Provincial Natural Science Foundation of China (No. 2017JJ3334), the Research Foundation of the Education Bureau of Hunan Province, China (No. 16C0045), and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR). Professor Jin Wang is the corresponding author.
Abstract: Recently, many researchers have concentrated on using neural networks to learn features for Distant Supervised Relation Extraction (DSRE). These approaches generally use a softmax classifier with cross-entropy loss, which inevitably brings the noise of the artificial class NA into the classification process. To address this shortcoming, a classifier with ranking loss is employed for DSRE. Uniformly randomly selecting a relation or heuristically selecting the one with the highest score among all incorrect relations are two common methods for generating a negative class in the ranking loss function. However, the majority of the generated negative classes can be easily discriminated from the positive class and contribute little to the training. Inspired by Generative Adversarial Networks (GANs), we use a neural network as the negative class generator to assist the training of our desired model, which acts as the discriminator in the GAN. Through the alternating optimization of the generator and the discriminator, the generator learns to produce more and more discriminable negative classes, and the discriminator has to become better as well. This framework is independent of the concrete form of the generator and discriminator. In this paper, we use a two-layer fully-connected neural network as the generator and Piecewise Convolutional Neural Networks (PCNNs) as the discriminator. Experimental results show that our proposed GAN-based method is effective and performs better than state-of-the-art methods.
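To make the ranking-loss setup concrete, here is a small sketch of a pairwise margin ranking loss in which the negative relation is proposed by a generator network rather than sampled at random. The margin value, the hinge form of the loss, the random tensors standing in for PCNN sentence features, and the argmax selection rule are all assumptions, since the abstract does not give the exact formulation.

```python
import torch
import torch.nn.functional as F

def ranking_loss(pos_score, neg_score, margin=1.0):
    """Hinge-style pairwise ranking loss: push the score of the correct
    relation above the score of the (generated) negative relation by `margin`."""
    return F.relu(margin - pos_score + neg_score).mean()

# Toy discriminator/generator over a batch of 4 sentence representations.
torch.manual_seed(0)
sentence_repr = torch.randn(4, 32)                 # stand-in for PCNN features
discriminator = torch.nn.Linear(32, 10)            # scores for 10 relation types
generator = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 10))

scores = discriminator(sentence_repr)              # (4, 10)
gold = torch.tensor([2, 5, 5, 9])
pos_score = scores.gather(1, gold[:, None]).squeeze(1)

# The generator proposes which incorrect relation to use as the hard negative.
gen_logits = generator(sentence_repr).scatter(1, gold[:, None], float("-inf"))
neg_idx = gen_logits.argmax(dim=1)                 # hardest negative per sentence
neg_score = scores.gather(1, neg_idx[:, None]).squeeze(1)

loss = ranking_loss(pos_score, neg_score)
print(loss.item())
# In the full method, generator and discriminator are updated alternately.
```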
Funding: This work is supported, in part, by the National Natural Science Foundation of China under grant numbers U1536206, U1405254, 61772283, 61602253, 61672294, and 61502242; in part, by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20150925 and BK20151530; in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; and in part, by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China.
Abstract: The aim of information hiding is to embed a secret message in a normal cover medium such as an image, video, voice, or text, and then to transmit the secret message through the transmission of the cover medium. The secret message should not be damaged during the processing of the cover medium. In order to ensure the invisibility of the secret message, complex texture objects should be chosen for embedding the information. In this paper, an approach that maps multiple steganographic algorithms to complex texture objects is presented for hiding secret messages. First, complex texture regions are selected based on an object detection algorithm. Second, three different steganographic methods are used to hide the secret message in the selected block regions. Experimental results show that the approach enhances security and robustness.
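As a toy illustration of the selection-then-embed idea, the sketch below scores image blocks by local variance as a texture proxy and embeds bits into the most textured block with plain LSB substitution. The variance criterion and LSB embedding are stand-ins chosen for brevity, not the object-detection-based selection or the three steganographic methods used in the paper.

```python
import numpy as np

def most_textured_block(img, block=16):
    """Return the top-left corner of the block with the highest pixel variance."""
    best, corner = -1.0, (0, 0)
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            v = img[y:y + block, x:x + block].var()
            if v > best:
                best, corner = v, (y, x)
    return corner

def embed_lsb(img, bits, corner, block=16):
    """Write one bit into the least significant bit of each pixel of the block."""
    y, x = corner
    out = img.copy()
    region = out[y:y + block, x:x + block].reshape(-1)
    region[:len(bits)] = (region[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    out[y:y + block, x:x + block] = region.reshape(block, block)
    return out

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret, most_textured_block(cover))
print((stego != cover).sum() <= len(secret))   # only a few LSBs change
```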