Journal Articles
3,000 articles found
1. Large-scale spatial data visualization method based on augmented reality
Authors: Xiaoning QIAO, Wenming XIE, Xiaodong PENG, Guangyun LI, Dalin LI, Yingyi GUO, Jingyi REN. 《虚拟现实与智能硬件(中英文)》 (Virtual Reality & Intelligent Hardware), EI, 2024, Issue 2, pp. 132-147 (16 pages)
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract. These data are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process is optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: The results of sampling, feature extraction, and uniform visualization of detection data of complex types, long duration spans, and uneven spatial distributions were obtained. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
Keywords: large-scale spatial data analysis, visual analysis technology, augmented reality, 3D reconstruction, space environment
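The tone-mapping step the abstract describes can be illustrated with a plain histogram-equalization sketch (the paper's statistical variant is not specified; the equal-weight empirical-CDF mapping and function name below are assumptions):

```python
import bisect

def equalize_tone(values, levels=256):
    """Map scalar attribute values to display levels via histogram equalization:
    each value's output level is proportional to the empirical CDF at that value,
    so equally many samples land in each part of the display range."""
    sorted_vals = sorted(values)
    n = len(values)
    out = []
    for v in values:
        rank = bisect.bisect_right(sorted_vals, v)  # number of samples <= v
        out.append(round((levels - 1) * rank / n))
    return out
```

Skewed attribute distributions (common in long-span detection data) are spread over the full display range instead of crowding into a few levels.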
2. Numerical and theoretical study of large-scale failure of strata overlying sublevel caving mines with steeply dipping discontinuities
Authors: Kaizong Xia, Zhiwei Si, Congxin Chen, Xiaoshuang Li, Junpeng Zou, Jiahao Yuan. International Journal of Minerals, Metallurgy and Materials, SCIE EI CAS CSCD, 2024, Issue 8, pp. 1799-1815 (17 pages)
The deformation and fracture evolution mechanisms of the strata overlying mines mined using sublevel caving were studied via numerical simulations. Moreover, an expression for the normal force acting on the side face of a steeply dipping superimposed cantilever beam in the surrounding rock was deduced based on limit equilibrium theory. The results show the following: (1) surface displacement above metal mines with steeply dipping discontinuities shows significant step characteristics, and (2) the behavior of the strata as they fail exhibits superimposition characteristics. Generally, failure first occurs in certain superimposed strata slightly far from the goaf. Subsequently, with the constant downward excavation of the orebody, the superimposed strata become damaged both upwards away from and downwards toward the goaf. This process continues until the deep part of the steeply dipping superimposed strata forms a large-scale deep fracture plane that connects with the goaf. The deep fracture plane generally makes an angle of 12°-20° with the normal to the steeply dipping discontinuities. The effect of the constant outward transfer of strata movement due to the constant outward failure of the superimposed strata in the metal mines with steeply dipping discontinuities causes the scope of the strata movement in these mines to be larger than expected. The strata in the metal mines with steeply dipping discontinuities mainly show flexural toppling failure. However, the steeply dipping structural strata near the goaf mainly exhibit shear slipping failure, in which case the mechanical model used to describe them can be simplified by treating them as steeply dipping superimposed cantilever beams. By taking the steeply dipping superimposed cantilever beam that first experiences failure as the key stratum, the failure scope of the strata (and criteria for the stability of metal mines with steeply dipping discontinuities mined using sublevel caving) can be obtained via iterative computations from the key stratum, moving downward toward and upwards away from the goaf.
Keywords: sublevel caving mines, universal distinct element code (UDEC) numerical approach, large-scale ground movement, steeply dipping superimposed cantilever beam, toppling failure
3. Galaxy Interactions in Filaments and Sheets: Effects of the Large-scale Structures Versus the Local Density
Authors: Apashanka Das, Biswajit Pandey, Suman Sarkar. Research in Astronomy and Astrophysics, SCIE CAS CSCD, 2023, Issue 2, pp. 197-204 (8 pages)
Major interactions are known to trigger star formation in galaxies and alter their color. We study the major interactions in filaments and sheets using SDSS data to understand the influence of large-scale environments on galaxy interactions. We identify the galaxies in filaments and sheets using the local dimension and also find the major pairs residing in these environments. The star formation rate (SFR) and color of the interacting galaxies as a function of pair separation are separately analyzed in filaments and sheets. The analysis is repeated for three volume-limited samples covering different magnitude ranges. The major pairs residing in the filaments show a significantly higher SFR and bluer color than those residing in the sheets up to a projected pair separation of ~50 kpc. We observe a complete reversal of this behavior for both the SFR and color of the galaxy pairs having a projected separation larger than 50 kpc. Some earlier studies report that galaxy pairs align with the filament axis. Such alignment inside filaments indicates anisotropic accretion that may cause these differences. We do not observe these trends in the brighter galaxy samples. The pairs in filaments and sheets from the brighter galaxy samples trace relatively denser regions in these environments. The absence of these trends in the brighter samples may be explained by the dominant effect of the local density over the effects of the large-scale environment.
Keywords: methods: statistical; methods: data analysis; galaxies: evolution; galaxies: interactions; (cosmology:) large-scale structure of universe
4. Bayesian model averaging (BMA) for nuclear data evaluation
Authors: E. Alhassan, D. Rochman, G. Schnabel, A.J. Koning. Nuclear Science and Techniques, SCIE EI CAS CSCD, 2024, Issue 11, pp. 193-218 (26 pages)
To ensure agreement between theoretical calculations and experimental data, parameters of selected nuclear physics models are perturbed and fine-tuned in nuclear data evaluations. This approach assumes that the chosen set of models accurately represents the 'true' distribution of the considered observables. Furthermore, the models are chosen globally, indicating their applicability across the entire energy range of interest. However, this approach overlooks uncertainties inherent in the models themselves. In this work, we propose that instead of globally selecting a winning model set and proceeding with it as if it were the 'true' model set, we instead take a weighted average over multiple models within a Bayesian model averaging (BMA) framework, each weighted by its posterior probability. The method involves executing a set of TALYS calculations by randomly varying multiple nuclear physics models and their parameters to yield a vector of calculated observables. Next, computed likelihood function values at each incident energy point were combined with the prior distributions to obtain updated posterior distributions for selected cross sections and the elastic angular distributions. As the cross sections and elastic angular distributions were updated locally on a per-energy-point basis, the approach typically results in discontinuities or 'kinks' in the cross-section curves, and these were addressed using spline interpolation. The proposed BMA method was applied to the evaluation of proton-induced reactions on ^(58)Ni between 1 and 100 MeV. The results demonstrated a favorable comparison with experimental data as well as with the TENDL-2023 evaluation.
Keywords: Bayesian model averaging (BMA), nuclear data, nuclear reaction models, model parameters, TALYS code system, covariances
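The BMA weighting the abstract describes can be sketched generically: each model's posterior weight is proportional to its prior times its likelihood, and the models' predictions are averaged under those weights. This is a toy illustration of the averaging step only, not the TALYS-based procedure; the function name and the uniform default prior are assumptions:

```python
import math

def bma_average(predictions, log_likelihoods, log_priors=None):
    """Posterior-weighted model average: w_k proportional to prior_k * L_k."""
    k = len(predictions)
    if log_priors is None:
        log_priors = [0.0] * k  # uniform prior over the candidate models
    log_post = [lp + ll for lp, ll in zip(log_priors, log_likelihoods)]
    m = max(log_post)  # subtract the max (log-sum-exp) for numerical stability
    weights = [math.exp(x - m) for x in log_post]
    total = sum(weights)
    weights = [w / total for w in weights]
    avg = sum(w * p for w, p in zip(weights, predictions))
    return avg, weights
```

Applied per incident-energy point, such local averaging is what produces the "kinks" the abstract mentions, since the weights can change abruptly between neighboring energies.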
5. Regularized focusing inversion for large-scale gravity data based on GPU parallel computing
Authors: WANG Haoran, DING Yidan, LI Feida, LI Jing. Global Geology, 2019, Issue 3, pp. 179-187 (9 pages)
Processing large-scale 3-D gravity data is an important topic in geophysics. Many existing inversion methods lack the capacity to process massive data and to be applied in practice. This study proposes the application of GPU parallel processing technology to the focusing inversion method, aiming at improving the inversion accuracy while speeding up calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfer, access restrictions, and instruction restrictions, as well as by latency hiding, greatly reduces memory usage, speeds up the calculation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-thread CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of theoretical inversion methods restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape.
Keywords: large-scale gravity data, GPU parallel computing, CUDA, equivalent geometric trellis, focusing inversion
6. Trend Analysis of Large-Scale Twitter Data Based on Witnesses during a Hazardous Event: A Case Study on California Wildfire Evacuation
Authors: Syed A. Morshed, Khandakar Mamun Ahmed, Kamar Amine, Kazi Ashraf Moinuddin. World Journal of Engineering and Technology, 2021, Issue 2, pp. 229-239 (11 pages)
Social media data created a paradigm shift in assessing situational awareness during natural disasters or emergencies such as wildfires, hurricanes, and tropical storms. Twitter, as an emerging data source, is an effective and innovative digital platform for observing trends from the perspective of social media users who are direct or indirect witnesses of the calamitous event. This paper aims to collect and analyze Twitter data related to the recent wildfire in California to perform a trend analysis by classifying firsthand and credible information from Twitter users. This work investigates tweets on the recent wildfire in California and classifies them based on witnesses into two types: 1) direct witnesses and 2) indirect witnesses. The collected and analyzed information can be useful for law enforcement agencies and humanitarian organizations for communication and verification of situational awareness during wildfire hazards. Trend analysis is an aggregated approach that includes sentiment analysis and topic modeling performed through domain-expert manual annotation and machine learning. Trend analysis ultimately builds a fine-grained analysis to assess evacuation routes and provide valuable information to firsthand emergency responders.
Keywords: wildfire, evacuation, Twitter, large-scale data, topic model, sentiment analysis, trend analysis
7. Quantitative Comparative Study of the Performance of Lossless Compression Methods Based on a Text Data Model
Authors: Namogo Silué, Sié Ouattara, Mouhamadou Dosso, Alain Clément. Open Journal of Applied Sciences, 2024, Issue 7, pp. 1944-1962 (19 pages)
Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited alongside lossy compression techniques for images and videos, generally in a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data that we constructed according to a deliberate model, the results show that these methods, presented in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
Keywords: arithmetic coding, BWT, compression ratio, comparative study, compression techniques, Shannon-Fano, Huffman, lossless compression, LZW, performance, redundancy, RLE, text data, Tunstall
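As a concrete reference point for the techniques compared above, here is a minimal RLE encoder/decoder with the usual compression-ratio measure. This is a generic textbook sketch in Python, not the authors' Matlab scripts, and the 2-bytes-per-pair cost model is an assumption:

```python
def rle_encode(text):
    """Run-length encode a string as [symbol, count] pairs."""
    out = []
    for ch in text:
        if out and out[-1][0] == ch:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([ch, 1])  # start a new run
    return out

def rle_decode(pairs):
    """Invert rle_encode exactly (lossless)."""
    return "".join(ch * n for ch, n in pairs)

def compression_ratio(original, pairs):
    # assumed cost model: 1 byte per symbol plus 1 byte per run count
    return len(original) / (2 * len(pairs))
```

On a text with a repeating pattern, such as the purposive dataset the paper constructs, long runs make the ratio climb; on text with no repeated adjacent symbols the "compressed" form is twice the original, which is exactly the sequencing dependence the abstract notes.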
8. Research on Big Data Coding System Based on the Classification of Artificial Materials and Mechanical Equipment in Construction Engineering
Authors: Zuguo Bai, Sheng Han, Zhan Wei, Jinping Chu. Journal of Architectural Research and Development, 2024, Issue 6, pp. 40-50 (11 pages)
By analyzing and comparing the current application status and the advantages and disadvantages of domestic and foreign classification coding systems for artificial materials and mechanical equipment, and conducting a comparative study of the existing coding system standards in different regions of the country, a coding data model suitable for big data research needs is proposed based on the current national standard for the classification coding of artificial materials and mechanical equipment. This model achieves a horizontal connection of characteristics and a vertical penetration of attribute values for construction materials and machinery through forward automatic coding calculation and reverse automatic decoding. This coding scheme and calculation model can also establish a database file for the coding and unit price of construction materials and machinery, forming a complete big data model for construction material coding unit prices. This provides foundational support for calculating and analyzing big data related to construction material unit prices, real-time information prices, market prices, and various comprehensive prices, thus contributing to the formation of cost-related big data.
Keywords: data of labor, materials, equipment, classification, big data coding system
9. Semi-supervised Affinity Propagation Clustering Based on Subtractive Clustering for Large-Scale Data Sets
Authors: Qi Zhu, Huifu Zhang, Quanqin Yang. 《国际计算机前沿大会会议论文集》, 2015, Issue 1, pp. 76-77 (2 pages)
Faced with a growing number of large-scale data sets, the affinity propagation clustering algorithm must build a similarity matrix during its calculation, which brings huge storage and computation costs. Therefore, this paper proposes an improved affinity propagation clustering algorithm. First, subtractive clustering is added, using the density values of the data points to obtain initial cluster points. Then, the similarity distances between the initial cluster points are calculated and, drawing on the idea of semi-supervised clustering, pairwise restriction information is added to construct a sparse similarity matrix. Finally, AP clustering is conducted on the cluster representative points until a suitable cluster division is reached. Experimental results show that the algorithm greatly reduces computation, reduces the storage required for the similarity matrix, and outperforms the original algorithm in clustering effect and processing speed.
Keywords: subtractive clustering, initial cluster, affinity propagation clustering, semi-supervised clustering, large-scale data sets
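The subtractive-clustering step used above to seed initial cluster points can be sketched with the standard density (mountain) function, where each point's potential is a sum of Gaussian-decayed distances to all other points; the neighborhood radius `ra` and the exact seeding loop the authors use are assumptions here:

```python
import math

def subtractive_density(points, ra=1.0):
    """Density potential of each point per subtractive clustering:
    D_i = sum_j exp(-4 * ||p_i - p_j||^2 / ra^2).
    The point with the highest potential is the first cluster-center candidate."""
    alpha = 4.0 / (ra * ra)
    dens = []
    for p in points:
        d = sum(math.exp(-alpha * sum((a - b) ** 2 for a, b in zip(p, q)))
                for q in points)
        dens.append(d)
    return dens
```

Points inside a dense group accumulate high potential while isolated points stay near 1, so taking the argmax (and then suppressing its neighborhood) yields the initial cluster points that AP is run on.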
10. An Efficient Test Data Compression Technique Based on Codes
Authors: FANG Jianping, HAO Yue, LIU Hongxia, LI Kang. Journal of Semiconductors, EI, CAS, CSCD, PKU Core, 2005, Issue 11, pp. 2062-2068 (7 pages)
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors that influence test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
Keywords: test data compression, unspecified bits assignment, system-on-a-chip test, hybrid run-length codes
11. Power Splitting Based SWIPT in Network-Coded Two-Way Networks with Data Rate Fairness: An Information-Theoretic Perspective (cited 2 times)
Authors: Ke Xiong, Yu Zhang, Yueyun Chen, Xiaofei Di. China Communications, SCIE CSCD, 2016, Issue 12, pp. 107-119 (13 pages)
This paper investigates simultaneous wireless information and power transfer (SWIPT) for a network-coded two-way relay network from an information-theoretic perspective, where two sources exchange information via an SWIPT-aware energy harvesting (EH) relay. We present a power splitting (PS)-based two-way relaying (PS-TWR) protocol employing the PS receiver architecture. To explore the system sum rate limit with data rate fairness, an optimization problem under a total power constraint is formulated. Then, explicit solutions are derived for the problem. Numerical results show that, due to the path loss effect on energy transfer, with the same total available power, PS-TWR loses some system performance compared with traditional non-EH two-way relaying, where at relatively low and relatively high signal-to-noise ratio (SNR) the performance loss is relatively small. Another observation is that, in the relatively high SNR regime, PS-TWR outperforms time switching-based two-way relaying (TS-TWR), while in the relatively low SNR regime TS-TWR outperforms PS-TWR. It is also shown that, with individual available power at the two sources, PS-TWR outperforms TS-TWR in both the relatively low and high SNR regimes.
Keywords: two-way relay, energy harvesting, wireless power transfer, data rate fairness, network coding
12. Research of Methods for Lost Data Reconstruction in Erasure Codes over Binary Fields (cited 2 times)
Authors: Dan Tang. Journal of Electronic Science and Technology, CAS CSCD, 2016, Issue 1, pp. 43-48 (6 pages)
In the process of encoding and decoding, erasure codes over binary fields, which need only AND and XOR operations and therefore have high computational efficiency, are widely used in various fields of information technology. A matrix decoding method is proposed in this paper. The method is a universal data reconstruction scheme for erasure codes over binary fields. Besides a pre-judgment of whether errors can be recovered, the method can rebuild sectors of lost data on a fault-tolerant storage system constructed with erasure codes after disk errors. The data reconstruction process of the new method has simple and clear steps, so it is amenable to implementation in computer code. Moreover, it can be applied easily to other non-binary fields, so the method is expected to find extensive application in the future.
Keywords: binary fields, data reconstruction, decoding, erasure codes
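For intuition, the simplest erasure code over a binary field is a single XOR parity block: a lost block is rebuilt by XOR-ing the surviving blocks with the parity. This is a minimal illustration of the XOR-only arithmetic the abstract mentions, not the paper's matrix decoding method:

```python
def make_parity(blocks):
    """Parity block = XOR of all data blocks (tolerates one erasure)."""
    parity = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving, parity):
    """Rebuild the single lost block: XOR the survivors into the parity.
    Works because x ^ x = 0, so all surviving blocks cancel out."""
    lost = bytearray(parity)
    for blk in surviving:
        for i, b in enumerate(blk):
            lost[i] ^= b
    return bytes(lost)
```

General erasure codes extend this idea to a generator matrix over GF(2), which is what makes a matrix-based pre-judgment of recoverability possible.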
13. Application of Adaptive Coded Modulation Technology in UAV Data Link
Authors: Rui Xue, Deting Hu, Tielin Zhu. International Journal of Communications, Network and System Sciences, 2017, Issue 5, pp. 181-190 (10 pages)
The UAV data link is an important part of a UAV communication system, through which the UAV communicates with warships. However, the constant coding and modulation scheme that a UAV adopts does not make full use of the channel capacity when the UAV communicates with warships in a good channel environment. In order to improve channel capacity and spectral efficiency, adaptive coded modulation technology is studied. Based on a maritime channel model, SNR estimation technology, and adaptive threshold determination technology, a simulation of UAV data link communication is carried out in this paper. Theoretical analysis and simulation results show that, according to changes in the maritime channel state, the UAV can dynamically adjust the adaptive coded modulation scheme while meeting the target bit error rate (BER); the maximum amount of data transferred is three times that of non-adaptive systems.
Keywords: UAV data link, adaptive coded modulation, maritime channel, SNR estimation, target bit error rate
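Threshold-based scheme selection, the core of adaptive coded modulation, can be sketched as a lookup against the estimated SNR. The table entries below are hypothetical placeholders, since real thresholds are derived from target-BER curves for the actual maritime channel:

```python
def select_mcs(snr_db, table=None):
    """Pick the highest-rate coding/modulation scheme whose SNR threshold is met."""
    # hypothetical (threshold_dB, scheme, bits/symbol) table, sorted ascending
    if table is None:
        table = [(5.0, "BPSK 1/2", 0.5), (10.0, "QPSK 1/2", 1.0),
                 (15.0, "16QAM 1/2", 2.0), (20.0, "64QAM 3/4", 4.5)]
    chosen = None
    for thr, scheme, rate in table:
        if snr_db >= thr:
            chosen = (scheme, rate)  # keep upgrading while thresholds are met
    return chosen  # None: channel too poor for any scheme at the target BER
```

As the estimated channel SNR improves, the link steps up to denser constellations and weaker coding, which is how the adaptive system transfers more data than a fixed-scheme one while still meeting the target BER.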
14. System-on-Chip Test Data Compression Based on Split-Data Variable Length (SDV) Code
Authors: J. Robert Theivadas, V. Ranganathan, J. Raja Paul Perinbam. Circuits and Systems, 2016, Issue 8, pp. 1213-1223 (11 pages)
System-on-a-chip designs with intellectual property cores need a large volume of data for testing. The large volume of test data requires long testing time and large test data memory. Therefore, new techniques are needed to optimize the test data volume, decrease the testing time, and overcome the ATE memory limitation for SoC designs. This paper presents a new compression method for testing intellectual property core-based system-on-chip designs. The proposed method is based on new split-data variable length (SDV) codes that are designed using split options along with identification bits in a string of test data. This paper analyses the reduction of test data volume, testing time, run time, and size of memory required in ATE, and the improvement of compression ratio. Experimental results for ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods with the best compression ratio for test data compression. The decompression architecture for SDV codes is also presented for decoding the compressed bits. The proposed scheme shows that SDV codes can accommodate any variation in the input test data stream.
Keywords: test data compression, SDV codes, SOC, ATE, benchmark circuits
15. Proof of Activity Protocol for IoMT Data Security
Authors: R. Rajadevi, K. Venkatachalam, Mehedi Masud, Mohammed A. AlZain, Mohamed Abouhawwash. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 1, pp. 339-350 (12 pages)
The Internet of Medical Things (IoMT) comprises online devices that sense and transmit medical data from users to physicians within a time interval. In recent years, IoMT has rapidly grown in the medical field to provide healthcare services without physical appearance. With the use of sensors, IoMT applications are used in healthcare management. In such applications, one of the most important factors is data security, given that transmission over the network may cause obtrusion. For data security in IoMT systems, blockchain is used due to its numerous blocks for secure data storage. In this study, a blockchain-assisted secure data management framework (BSDMF) and a Proof of Activity (PoA) protocol using a malicious code detection algorithm are used in the proposed data security scheme for the healthcare system. The main aim is to enhance data security over the networks. The PoA protocol provides high data security, as shown in the literature review. By replacing the malicious node from the block, PoA can provide high security for medical data in the blockchain. Comparison with existing systems shows that the proposed simulation with the BSD-malicious code detection algorithm achieves a higher accuracy ratio, precision ratio, security, and efficiency and lower response time for blockchain-enabled healthcare systems.
Keywords: blockchain, IoMT, malicious code detection, security, secure data management framework, data management, PoA
16. HCS: Expanding H-Code RAID 6 without Recalculating Parity Blocks in Big Data Circumstance
Authors: Shiying Xia, Yu Mao, Minsheng Tan, Weipeng Jing. 《国际计算机前沿大会会议论文集》, 2015, Issue 1, pp. 20-22 (3 pages)
This paper introduces HCS, a new RAID 6 expansion method for big data environments. HCS expands H-Code RAID 6. Two key techniques are used to avoid recalculating parity blocks: the first is anti-diagonal data block selection, and the other is horizontal data migration. These two techniques ensure that the data blocks are retained in the same verification zones, that is, the horizontal verification zone and the anti-diagonal verification zone. Experimental results showed that, compared with SDM, which is also a fast expansion method, HCS reduces expansion time by 3.6% and improves performance by 4.62% under four traces.
Keywords: big data, H-Code, RAID expanding, horizontal coding, anti-diagonal parity, horizontal parity
17. Secret Data-Driven Carrier-Free Secret Sharing Scheme Based on Error Correction Blocks of QR Codes
Authors: Song Wan, Yuliang Lu, Xuehu Yan, Hanlin Liu, Longdan Tan. 《国际计算机前沿大会会议论文集》, 2017, Issue 1, pp. 56-57 (2 pages)
In this paper, a novel secret data-driven, carrier-free (semi-structural formula) visual secret sharing (VSS) scheme with (2,2) threshold, based on the error correction blocks of QR codes, is investigated. The proposed scheme searches, according to the secret image, for two QR codes that can be altered to satisfy the secret sharing modules within the error correction mechanism from large datasets of QR codes, thereby embedding the secret image into QR codes by carrier-free secret sharing. The size of the secret image is the same as, or closest to, the region from the coordinate (7,7) to the lower right corner of the QR codes. In this way, we can find the QR code combination that maximizes the embedded secret information, driven by the secret data and based on big data search. Each output share is a valid QR code that can be decoded correctly by a QR code reader, which may reduce the likelihood of attracting the attention of potential attackers. The proposed scheme can reveal the secret image visually, with both stacking and XOR decryption. The secret image can be recovered by the human visual system (HVS) without any computation, based on stacking. On the other hand, if a light-weight computation device is available, the secret image can be revealed losslessly based on the XOR operation. In addition, QR codes can assist alignment for VSS recovery. The experimental results show the effectiveness of our scheme.
Keywords: visual secret sharing, QR code, error correction blocks, carrier-free, big data, data-driven, multiple decryptions
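The XOR decryption path of a (2,2) scheme can be sketched in a few lines: one share is uniformly random, the other is the secret XOR-ed with it, and neither share alone reveals anything. This shows only the sharing algebra, not the paper's embedding of shares into QR error correction blocks:

```python
import os

def share_secret(secret: bytes):
    """(2,2) XOR secret sharing: share1 is random, share2 = secret XOR share1."""
    share1 = os.urandom(len(secret))
    share2 = bytes(a ^ b for a, b in zip(secret, share1))
    return share1, share2

def recover(share1: bytes, share2: bytes) -> bytes:
    """XOR decryption: share1 XOR share2 = secret (lossless recovery)."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Stacking-based recovery in VSS corresponds to the weaker OR operation, which is why it needs no computation but loses contrast, while XOR recovery is lossless.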
18. A Data Matrix code recognition method for test tube-rack systems based on Mask R-CNN
Authors: LIU Shijian, LIN Jinjia, CHEN Zican, ZOU Zheng. 《福建工程学院学报》 (Journal of Fujian University of Technology), CAS, 2023, Issue 4, pp. 378-384 (7 pages)
In the input images of an automated test tube-rack system, Data Matrix (DM) codes appear as multiple small targets, and the images suffer from blur and severe edge interference, so traditional methods struggle to achieve good recognition results. To address this, a deep learning-based Data Matrix code recognition method, DeepDMCode, is proposed. Built on the Mask R-CNN model, it augments the training data through content-differentiated data synthesis with synchronized automatic annotation, improving the model's learning ability. Based on the model's segmentation results, a rotation correction method is proposed to ensure that standard decoding libraries can decode the DM codes. Experiments on data with a resolution of 1600×1200 and a rack capacity of 96 show that, because the method recovers the code boundary information as fully as possible in the early code-localization stage, its accuracy reaches 0.92 (mIoU), and the average time to recognize all DM codes in a single image is 5.2 s, outperforming mainstream industrial baseline algorithms such as YOLO, SegNet, and CenterNet.
Keywords: test tube-rack system, Mask R-CNN, Data Matrix code, artificial data synthesis, laboratory automation
19. A deep malicious code visualization classification method based on Ngram-TFIDF
Authors: WANG Jinwei, CHEN Zhengjia, XIE Xue, LUO Xiangyang, MA Bin. 《通信学报》 (Journal on Communications), EI, CSCD, PKU Core, 2024, Issue 6, pp. 160-175 (16 pages)
As the scale and variety of malicious code keep growing, traditional malicious code analysis methods, which rely on manually extracted features, have become time-consuming and error-prone and are no longer suitable. To improve detection efficiency and accuracy, a deep malicious code visualization classification method based on Ngram-TFIDF is proposed. Malicious code datasets are processed by combining N-gram and TF-IDF techniques and converted into grayscale images. Then, CBAM is introduced and the number of dense blocks is adjusted to build a DenseNet88_CBAM network model for grayscale image classification. Experimental results show that the proposed method improves accuracy by 1.11% in malicious code family classification and by 9.28% in type classification, achieving superior classification performance.
Keywords: deep learning, data visualization, malicious code detection and classification
20. A GeoSOT-based coding method for cultural tourism data
Authors: MIAO Ru, YUAN Huan, ZHOU Ke, ZHANG Yanna, YANG Yang. 《科学技术与工程》 (Science Technology and Engineering), PKU Core, 2024, Issue 2, pp. 472-479 (8 pages)
Cultural tourism resource data are diverse in type and complex in format, making storage and management difficult, while the need to preserve, protect, and develop cultural tourism resources is pressing; a method for organizing and managing these data is therefore urgently needed. Based on GeoSOT grid subdivision theory, a unified organization and management method for multi-source cultural tourism resource data is proposed, with a detailed description of the data overview, the subdivision scheme, the coding method, the retrieval mechanism, and the retrieval results. A cultural tourism resource database is designed in which GeoSOT codes organize and manage various types of data. The results show that GeoSOT grid coding can effectively organize and manage cultural tourism resource data, improve the efficiency of data retrieval and access, and meet the management needs of cultural tourism resource data.
Keywords: cultural tourism resource data, cultural tourism resource coding, organization and management, subdivision model, GeoSOT coding
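The grid-subdivision idea can be illustrated with a simplified quadtree encoder that halves the latitude and longitude ranges at each level; codes of nearby objects share prefixes, which is what makes grid codes efficient retrieval keys. This is a didactic sketch only, not the actual GeoSOT standard, which defines its own extended grid and code format:

```python
def quad_code(lat, lon, level):
    """Encode a point into a quadtree cell code of the given depth.
    Each digit packs one N/S bisection bit and one E/W bisection bit."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    code = ""
    for _ in range(level):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        quad = 0
        if lat >= lat_mid:            # northern half -> bit 2
            quad |= 2
            lat_lo = lat_mid
        else:
            lat_hi = lat_mid
        if lon >= lon_mid:            # eastern half -> bit 1
            quad |= 1
            lon_lo = lon_mid
        else:
            lon_hi = lon_mid
        code += str(quad)
    return code
```

The prefix property means a database can answer "everything inside this cell" with a simple code-prefix range scan, rather than a geometric intersection test per record.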