Journal Articles
1,441 articles found.
1. Search Processes in the Exploration of Complex Data under Different Display Conditions
Authors: Charles Tatum, David Dickason. Journal of Data Analysis and Information Processing, 2021, No. 2, pp. 51-62 (12 pages).
The study investigated user experience, display complexity, display type (tables versus graphs), and task difficulty as variables affecting the user's ability to navigate through complex visual data. A total of 64 participants, 39 undergraduate students (novice users) and 25 graduate students (intermediate-level users), took part in the study. The experiment used a 2 × 2 × 2 × 3 mixed design with two between-subject variables (display complexity, user experience) and two within-subject variables (display format, question difficulty). The results indicated that response time was superior for graphs (relative to tables), especially when the questions were difficult. The intermediate users seemed to adopt more extensive search strategies than novices, as revealed by an analysis of the number of changes they made to the display prior to answering questions. It was concluded that designers of data displays should consider the (a) type of display, (b) difficulty of the task, and (c) expertise level of the user to obtain optimal levels of performance.
Keywords: computer users; data displays; data visualization; data tables; data graphs; visual search; data complexity; visual displays; visual data
2. Data complexity-based batch sanitization method against poison in distributed learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks (SCIE, CSCD), 2024, No. 2, pp. 416-428 (13 pages).
The security of Federated Learning (FL) / Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply in real-time applications due to their tedious process and heavy computation, we propose a new supervised batch detection method for poison that can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison and can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
Keywords: distributed machine learning security; federated learning; data poisoning attacks; data sanitization; batch detection; data complexity
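The abstract does not spell out which complexity features the detector uses, so the sketch below stands in a neighborhood label-disagreement measure (in the spirit of the classic N3 overlap measure) as the batch-level feature and trains an off-the-shelf detector on batches labeled clean or poisoned. All names and the feature choice are assumptions, not the paper's method.

```python
# Minimal sketch of batch-level poison detection driven by a
# data-complexity feature; feature choice is an illustrative assumption.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier

def batch_complexity(X, y, k=5):
    """Summary stats of the fraction of k nearest neighbors whose
    label disagrees (high disagreement suggests label-flip poison)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    disagree = (y[idx[:, 1:]] != y[:, None]).mean(axis=1)
    return np.array([disagree.mean(), disagree.std(), disagree.max()])

def fit_detector(batches, labels):
    """batches: list of (X, y); labels: 0 = clean batch, 1 = poisoned."""
    feats = np.vstack([batch_complexity(X, y) for X, y in batches])
    return RandomForestClassifier(n_estimators=100).fit(feats, labels)
```

A trained detector of this kind can screen incoming batches before local model training, which matches the pre-training sanitization step the abstract describes.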
3. Linear mixed-effects model for longitudinal complex data with diversified characteristics (Cited by 2)
Authors: Zhichao Wang, Huiwen Wang, Shanshan Wang, Shan Lu, Gilbert Saporta. Journal of Management Science and Engineering, 2020, No. 2, pp. 105-124 (20 pages).
The increasing richness of data encourages a comprehensive understanding of economic and financial activities, where variables of interest may include not only scalar (point-like) indicators, but also functional (curve-like) and compositional (pie-like) ones. In many research topics, the variables are also chronologically collected across individuals, which falls into the paradigm of longitudinal analysis. The complicated nature of the data, however, increases the difficulty of modeling these variables under the classic longitudinal framework. In this study, we investigate the linear mixed-effects model (LMM) for such complex data. Different types of variables are first consistently represented using the corresponding basis expansions so that the classic LMM can then be conducted on them, which generalizes the theoretical framework of LMM to complex data analysis. A number of simulation studies indicate the feasibility and effectiveness of the proposed model. We further illustrate its practical utility in a real data study on the Chinese stock market and show that the proposed method can enhance the performance and interpretability of the regression for complex data with diversified characteristics.
Keywords: longitudinal complex data; linear mixed-effects model; compositional data analysis; functional data analysis; Chinese stock market; online investors' sentiment
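A minimal sketch of the core idea, assuming a curve-like covariate has already been projected onto a small basis so that a classic LMM can be fit on the basis scores with statsmodels. The data, basis dimension, and variable names are illustrative, not the paper's.

```python
# Sketch: fit a classic linear mixed-effects model on basis-expansion
# scores of a functional covariate. Names and sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_obs, n_basis = 30, 8, 4

# Fake longitudinal data: each row is one subject-time observation whose
# functional covariate is represented by n_basis expansion coefficients.
cols = [f"b{j}" for j in range(n_basis)]
df = pd.DataFrame(rng.normal(size=(n_subj * n_obs, n_basis)), columns=cols)
df["subject"] = np.repeat(np.arange(n_subj), n_obs)
df["y"] = df[cols].sum(axis=1) + rng.normal(size=len(df))

# Random intercept per subject; fixed effects on the basis scores.
model = smf.mixedlm("y ~ b0 + b1 + b2 + b3", df, groups=df["subject"])
print(model.fit().summary())
```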
4. Reversible Data Hiding Algorithm in Encrypted Images Based on Adaptive Median Edge Detection and Ciphertext-Policy Attribute-Based Encryption
Authors: Zongbao Jiang, Minqing Zhang, Weina Dong, Chao Jiang, Fuqiang Di. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1123-1155 (33 pages).
With the rapid advancement of cloud computing technology, reversible data hiding in encrypted images (RDH-EI) has developed into an important field of study concentrated on safeguarding privacy in distributed cloud environments. However, existing algorithms often suffer from low embedding capacities and are inadequate for complex data access scenarios. To address these challenges, this paper proposes a novel reversible data hiding algorithm in encrypted images based on adaptive median edge detection (AMED) and ciphertext-policy attribute-based encryption (CP-ABE). The proposed algorithm enhances conventional median edge detection (MED) by incorporating dynamic variables to improve pixel prediction accuracy. The carrier image is subsequently reconstructed using Huffman coding. Encrypted image generation is then achieved by encrypting the image based on system user attributes and data access rights, with hierarchical embedding of the group's secret data seamlessly integrated during the encryption process using the CP-ABE scheme. Ultimately, the encrypted image is transmitted to the data hider, enabling independent embedding of the secret data and resulting in the marked encrypted image. This approach allows only the receiver to extract the authorized group's secret data, thereby enabling fine-grained, controlled access. Test results indicate that, in contrast to current algorithms, the method introduced here considerably improves the embedding rate while preserving lossless image recovery. Specifically, the average maximum embedding rates for the (3,4)-threshold and (6,6)-threshold schemes reach 5.7853 bits per pixel (bpp) and 7.7781 bpp, respectively, across the BOSSbase, BOW-2, and USD databases. Furthermore, the algorithm supports permission granting and joint decryption. The paper also conducts a comprehensive examination of the algorithm's robustness using metrics such as image correlation, information entropy, and number of pixel change rate (NPCR), confirming its high level of security. Overall, the algorithm can be applied in multi-user, multi-level cloud service environments to realize secure storage of carrier images and secret data.
Keywords: ciphertext-policy attribute-based encryption; complex data access structure; reversible data hiding; large embedding space
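AMED builds on the conventional median edge detector used in JPEG-LS; the baseline MED prediction rule is standard and sketched below, while the paper's adaptive dynamic variables are not reproduced here.

```python
# The conventional MED predictor that AMED extends: predict pixel x from
# its left (a), upper (b), and upper-left (c) neighbors.
def med_predict(a: int, b: int, c: int) -> int:
    if c >= max(a, b):
        return min(a, b)   # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

# The resulting prediction errors are what the RDH-EI scheme compresses
# (here, with Huffman coding) to free embedding room before encryption.
```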
5. Interference in Complex CDMA-OFDM/OQAM for Better Performance at Low SNR
Author: Chrislin Martial Lélé. International Journal of Communications, Network and System Sciences, 2024, No. 8, pp. 113-128 (16 pages).
This article is about orthogonal frequency-division multiplexing with quadrature amplitude modulation combined with code division multiple access for complex data transmission. It presents a method that uses two interfering subsets in order to improve the performance of the transmission scheme. The idea is to spread some data coherently across two different codes belonging to the two different subsets involved in complex OFDM/QAM with code division multiple access. This improves the useful signal level at the receiving side and therefore improves the decoding process, especially at low signal-to-noise ratio. However, the procedure implies some interference with other codes, creating a certain noise that is noticeable at high signal-to-noise ratio.
Keywords: CDMA; OFDM/OQAM; complex data
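A loose sketch of the coherent two-code spreading idea, assuming Walsh-Hadamard codes and an arbitrary split into two subsets; the paper's exact code construction and power normalization may differ.

```python
# Spread one complex symbol coherently over two codes, one from each
# (assumed) subset. The subset split and 1/sqrt(2) gain are assumptions.
import numpy as np
from scipy.linalg import hadamard

L = 8                       # spreading length
H = hadamard(L)             # orthogonal code set, entries +/-1
c1, c2 = H[1], H[2]         # one code from each subset

d = 1 + 1j                                 # complex data symbol
tx = d * (c1 + c2) / np.sqrt(2)            # coherent spread over both codes

# Despreading with either code recovers d / sqrt(2); combining the two
# branches restores d, boosting the useful signal level at low SNR at
# the cost of cross-code interference visible at high SNR.
print(tx @ c1 / L, tx @ c2 / L)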
6. Source complexity of the 2016 MW7.8 Kaikoura (New Zealand) earthquake revealed from teleseismic and InSAR data (Cited by 4)
Authors: HaiLin Du, Xu Zhang, LiSheng Xu, WanPeng Feng, Lei Yi, Peng Li. Earth and Planetary Physics, 2018, No. 4, pp. 310-326 (17 pages).
On November 13, 2016, an MW7.8 earthquake struck Kaikoura in the South Island of New Zealand. By means of back-projection of array recordings, ASTFs-analysis of global seismic recordings, and joint inversion of global seismic data and co-seismic InSAR data, we investigated the complexity of the earthquake source. The results show that the 2016 MW7.8 Kaikoura earthquake ruptured for about 100 s unilaterally from south to northeast (~N28°–33°E), producing a rupture area about 160 km long and about 50 km wide and releasing a scalar moment of 1.01×10^21 N·m. In particular, the rupture area consisted of two slip asperities: one close to the initial rupture point with a maximal slip of ~6.9 m, and the other far away in the northeast with a maximal slip of ~9.3 m. The first asperity slipped for about 65 s, and the second one started 40 s after the first had initiated; the two slipped simultaneously for about 25 s. Furthermore, the first had nearly pure thrust slip while the second had both thrust and strike slip. Interestingly, the rupture velocity was not constant, and the whole process may be divided into 5 stages in which the velocities were estimated to be 1.4 km/s, 0 km/s, 2.1 km/s, 0 km/s and 1.1 km/s, respectively. The high-frequency sources were distributed nearly along the lower edge of the rupture area, high-frequency radiation mainly occurred at the launching of the asperities, and it seemed that no high-frequency energy was radiated when the rupturing was about to stop.
Keywords: 2016 MW7.8 Kaikoura earthquake; back-projection of array recordings; ASTFs-analysis of global recordings; joint inversion of teleseismic and InSAR data; complexity of source
7. Data Driven Uncertainty Evaluation for Complex Engineered System Design (Cited by 1)
Authors: Liu Boyuan, Huang Shuangxi, Fan Wenhui, Xiao Tianyuan, James Humann, Lai Yuyang, Jin Yan. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2016, No. 5, pp. 889-900 (12 pages).
Complex engineered systems are often difficult to analyze and design due to the tangled interdependencies among their subsystems and components. Conventional design methods often need exact modeling or accurate structure decomposition, which limits their practical application. The rapid expansion of data makes utilizing data to guide and improve system design indispensable in practical engineering. In this paper, a data-driven uncertainty evaluation approach is proposed to support the design of complex engineered systems. The core of the approach is a data-mining based uncertainty evaluation method that predicts the uncertainty level of a specific system design by analyzing association relations along different system attributes and synthesizing the information entropy of the covered attribute areas; a quantitative measure of system uncertainty can be obtained accordingly. Monte Carlo simulation is introduced to obtain the uncertainty extrema, and the possible data distributions under different situations are discussed in detail. The uncertainty values can be normalized using the simulation results and then used to evaluate different system designs. A prototype system is established, and two case studies have been carried out. The case of an inverted pendulum system validates the effectiveness of the proposed method, and the case of an oil sump design shows the practicability when two or more design plans need to be compared. This research can be used to evaluate the uncertainty of complex engineered systems relying completely on data, and is ideally suited for plan selection and performance analysis in system design.
Keywords: complex engineered system design; uncertainty; data-driven evaluation; Monte Carlo simulation
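A loose sketch of the entropy-plus-Monte-Carlo normalization idea, assuming a single scalar attribute and histogram-based entropy; the paper's association-rule analysis across multiple attributes is not reproduced, and all names are illustrative.

```python
# Score a design's data by coverage entropy, normalized against a
# Monte Carlo estimate of the maximum-entropy (uniform) case.
import numpy as np

def entropy_bits(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def uncertainty_score(samples, bins=10, n_mc=200, rng=None):
    rng = rng or np.random.default_rng()
    h = entropy_bits(np.histogram(samples, bins=bins)[0])
    h_max = max(entropy_bits(np.histogram(rng.uniform(size=len(samples)),
                                          bins=bins)[0])
                for _ in range(n_mc))
    return h / h_max   # ~1: spread-out (uncertain); ~0: concentrated

print(uncertainty_score(np.random.default_rng(1).normal(size=500)))
```

Normalized scores of this kind are comparable across candidate designs, which is the plan-selection use the abstract describes.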
8. Analysis of Complex Correlated Interval-Censored HIV Data from Population Based Survey
Authors: Khangelani Zuma, Goitseone Mafoko. Open Journal of Statistics, 2015, No. 2, pp. 120-126 (7 pages).
In studies of HIV, interval-censored data occur naturally. HIV infection time is not usually known exactly, only that it occurred before the survey, within some time interval, or had not occurred at the time of the survey. Infections are often clustered within geographical areas such as enumerator areas (EAs), thus inducing unobserved frailty. In this paper we consider an approach for estimating parameters when infection time is unknown and assumed correlated within an EA, where dependency is modeled as frailties, assuming a normal distribution for the frailties and a Weibull distribution for the baseline hazards. The data came from a household-based population survey that used a multi-stage stratified sample design to randomly select 23,275 interviewed individuals from 10,584 households, of whom 15,851 were further tested for HIV (crude prevalence = 9.1%). A further test conducted among those that tested HIV positive found 181 (12.5%) recently infected. Results show a high degree of heterogeneity in HIV distribution between EAs, translating to a modest correlation of 0.198. Intervention strategies should target geographical areas that contribute disproportionately to the HIV epidemic. Further research needs to identify such hot spot areas and understand what factors make them prone to HIV.
Keywords: complex correlated interval-censored HIV data; population-based survey
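A compact sketch of one cluster's likelihood contribution under the stated model (Weibull baseline hazard, shared normal frailty per EA, interval censoring), with the frailty integrated out by Gauss-Hermite quadrature; parameter names are illustrative.

```python
# One EA's interval-censored likelihood with a shared normal frailty.
import numpy as np
from numpy.polynomial.hermite import hermgauss

def weibull_surv(t, lam, rho, eta):
    """S(t | frailty), with log-linear predictor eta (covariates + frailty)."""
    return np.exp(-lam * t**rho * np.exp(eta))

def cluster_loglik(L, R, xb, lam, rho, sigma, n_gh=20):
    """L, R: interval bounds per subject (R = inf if right-censored)."""
    nodes, weights = hermgauss(n_gh)
    total = 0.0
    for z, w in zip(nodes, weights):
        u = np.sqrt(2) * sigma * z          # frailty value at this node
        contrib = weibull_surv(L, lam, rho, xb + u) \
                  - weibull_surv(R, lam, rho, xb + u)
        total += w / np.sqrt(np.pi) * np.prod(contrib)
    return np.log(total)

# Example: two subjects in one EA, infection known only within (L, R].
L_b, R_b = np.array([0.5, 1.0]), np.array([2.0, np.inf])
print(cluster_loglik(L_b, R_b, np.array([0.3, -0.1]),
                     lam=0.1, rho=1.2, sigma=0.5))
```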
9. Baddeleyite from Large Complex Deposits: Significance for Archean-Paleozoic Plume Processes in the Arctic Region (NE Fennoscandian Shield) Based on U-Pb (ID-TIMS) and LA-ICP-MS Data (Cited by 1)
Authors: Tamara Bayanova, Viktor Subbotin, Svetlana Drogobuzhskaya, Anatoliy Nikolaev, Ekaterina Steshenko. Open Journal of Geology, 2019, No. 8, pp. 474-496 (23 pages).
Baddeleyite is an important mineral geochronometer. It is valued in U-Pb (ID-TIMS) geochronology even more than zircon because of its magmatic origin, whereas zircon can be metamorphic, hydrothermal, or occur as xenocrysts. Detailed mineralogical (BSE, KL, etc.) research on baddeleyite started in the Fennoscandian Shield in the 1990s. The mineral was first extracted from the Paleozoic Kovdor deposit, the second-biggest baddeleyite deposit in the world after Phalaborwa (2.1 Ga), South Africa, and was successfully introduced into U-Pb systematics. This study provides new U-Pb and LA-ICP-MS data on Archean Ti-Mgt and BIF deposits, Paleoproterozoic layered PGE intrusions with Pt-Pd and Cu-Ni reefs, and Paleozoic complex deposits (baddeleyite, apatite, foscorite ores, etc.) in the NE Fennoscandian Shield. Data on REE concentrations in baddeleyite and on the closure temperature of the U-Pb system are also provided. It is shown that baddeleyite plays an important role in the geological history of the Earth, in particular in the break-up of supercontinents.
Keywords: baddeleyite; PGE; U-Pb isotope data; geochronology; Paleoproterozoic PGE layered intrusions; complex deposits; Paleozoic; Fennoscandian Shield
10. Multilevel Modeling of Binary Outcomes with Three-Level Complex Health Survey Data
Authors: Shafquat Rozi, Sadia Mahmud, Gillian Lancaster, Wilbur Hadden, Gregory Pappas. Open Journal of Epidemiology, 2017, No. 1, pp. 27-43 (17 pages).
Complex survey designs often involve unequal selection probabilities of clusters or units within clusters. When estimating models for complex survey data, scaled weights are incorporated into the likelihood, producing a pseudo likelihood. In a 3-level weighted analysis for a binary outcome, we implemented two methods for scaling the sampling weights in the National Health Survey of Pakistan (NHSP). For the NHSP, with health care utilization as a binary outcome, we found age, gender, household (HH) goods, urban/rural status, community development index, province, and marital status to be significant predictors of health care utilization (p-value < 0.05). The variance of the random intercepts using scaling method 1 is estimated as 0.0961 (standard error 0.0339) at the PSU level and 0.2726 (standard error 0.0995) at the household level. Both estimates are significantly different from zero (p-value < 0.05) and indicate considerable heterogeneity in health care utilization with respect to households and PSUs. The results of the NHSP data analysis showed that all three analyses, weighted (two scaling methods) and unweighted, converged to almost identical results with few exceptions, possibly because of the large number of 3rd- and 2nd-level clusters and the relatively small ICC. We performed a simulation study to assess the effect of varying prevalence and intra-class correlation coefficients (ICCs) on the bias of fixed-effect parameters and variance components in a multilevel pseudo maximum likelihood (weighted) analysis. The simulation results showed that the performance of the scaled weighted estimators is satisfactory for both scaling methods. Incorporating simulation into the analysis of complex multilevel surveys allows the integrity of the results to be tested and is recommended as good practice.
Keywords: health care utilization; complex health survey with sampling weights; simulations for complex surveys; pseudo likelihood; three-level data
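The two weight-scaling rules compared here are, to our understanding, the standard ones from the multilevel pseudo-likelihood literature (often attributed to Pfeffermann et al., 1998); a sketch for one cluster's level-1 weights, with the method-1/method-2 labeling an assumption:

```python
# Two standard scalings of within-cluster sampling weights.
import numpy as np

def scale_method_1(w):
    """Scale so the weights sum to the cluster sample size n_j."""
    return w * len(w) / w.sum()

def scale_method_2(w):
    """Scale so the weights sum to the 'effective' cluster sample size."""
    return w * w.sum() / (w ** 2).sum()

w = np.array([1.2, 0.8, 2.5, 1.0])   # illustrative within-cluster weights
print(scale_method_1(w), scale_method_2(w))
```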
11. A Complexity Analysis and Entropy for Different Data Compression Algorithms on Text Files
Authors: Mohammad Hjouj Btoush, Ziad E. Dawahdeh. Journal of Computer and Communications, 2018, No. 1, pp. 301-315 (15 pages).
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test those algorithms on files of different sizes and conclude that LZW is the best across all compression scales we tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression is still an important research topic with many needed applications and uses. We therefore suggest continuing research in this field, trying to combine two techniques to reach a better one, or using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with good techniques like Huffman, to reach better results.
Keywords: text files; data compression; Huffman coding; LZW; Hamming; entropy; complexity
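For reference, the zeroth-order entropy against which compressor output is judged can be computed directly; a minimal sketch:

```python
# Zeroth-order byte entropy of a file, in bits per byte: the lower bound
# on lossless coding that entropy-based comparisons of LZW/Huffman use.
import math
from collections import Counter

def file_entropy(path: str) -> float:
    with open(path, "rb") as f:
        data = f.read()
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A file of one repeated byte gives H = 0; uniformly random bytes
# approach H = 8, leaving no room for lossless compression.
```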
12. Pinning sampled-data synchronization for complex networks with probabilistic coupling delay
Authors: Wang Jian'an, Nie Ruixing, Sun Zhiyi. Chinese Physics B (SCIE, EI, CAS, CSCD), 2014, No. 5, pp. 172-179 (8 pages).
We deal with the problem of pinning sampled-data synchronization for a complex network with probabilistic time-varying coupling delay. The sampling period considered here is assumed to be less than a given bound. Without using the Kronecker product, a new synchronization error system is constructed by using the property of the random variable and the input delay approach. Based on Lyapunov theory, a delay-dependent pinning sampled-data synchronization criterion is derived in terms of linear matrix inequalities (LMIs) that can be solved effectively with the MATLAB LMI toolbox. Numerical examples are provided to demonstrate the effectiveness of the proposed scheme.
Keywords: complex network; probabilistic time-varying coupling delay; sampled-data synchronization; pinning control
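The criterion itself is a set of delay-dependent LMIs; as the smallest possible illustration of LMI feasibility checking outside MATLAB, the sketch below solves a plain Lyapunov LMI with cvxpy. The paper's actual LMIs involve additional delay-dependent decision matrices not shown here.

```python
# Feasibility of a Lyapunov LMI: find P > 0 with A^T P + P A < 0.
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])        # illustrative error-system matrix
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)       # 'optimal' => LMI feasible => stable
```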
13. A Survey of Ensemble Classification Methods for Imbalanced Data Streams
Authors: Zhu Shineng, Han Meng, Yang Shurong, Dai Zhenlong, Yang Wenyan, Ding Jian. Computer Engineering and Applications (Peking University Core), 2025, No. 2, pp. 59-72 (14 pages).
In real-world scenarios, learning from data streams faces class imbalance: with too few training examples, learning algorithms fail to recognize minority-class samples. To survey the state of research and open challenges in ensemble classification of imbalanced data streams, this paper reviews the recent literature, analyzing and summarizing ensemble methods from two angles: decision rules based on weighting, selection, and voting, and learning modes based on cost-sensitive, active, and incremental learning. The performance of algorithms evaluated on the same datasets is compared. For imbalance in different types of complex data streams, ensemble classification algorithms are summarized along four dimensions (concept drift, multi-class, noise, and class overlap), and the time complexity of classic algorithms is analyzed. Finally, directions for future ensemble strategies are proposed for imbalance in dynamic data streams, data streams with missing information, multi-label data streams, and uncertain data streams.
Keywords: imbalanced data streams; ensemble classification; decision rules; learning modes; complex data streams
14. Demand Analysis for Logistics Professionals Based on Complex Networks
Authors: Li Shuangyan, Zhang Dezhi, Zhang Wan, Zhang Yachong, Liu Yongmin. Logistics Sci-Tech, 2025, No. 1, pp. 52-54 (3 pages).
Based on 98,213 job postings collected from three recruitment sites (51job, Zhaopin, and ChinaHR), this study uses complex-network and text-mining methods to construct position and skill entities, mine position-skill associations, and visualize them, in order to characterize demand for logistics professionals and clarify the skill requirements of key positions, providing a reference for universities training logistics talent. The study finds that the intelligent transformation and upgrading of the logistics industry is changing enterprises' and institutions' demand for logistics professionals: "logistics + data" positions are increasing, and the skills emphasized are shifting from traditional operational skills to intelligent-tool skills. In response to these changes, the paper offers three development recommendations to advance logistics education and supply enterprises with high-quality, highly skilled professionals.
Keywords: recruitment data; complex networks; enterprise logistics positions; position-skill analysis
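A toy sketch of the position-skill mining step, assuming postings have already been reduced to skill sets; the real study works over 98,213 postings and a curated skill vocabulary, both of which are stand-ins here.

```python
# Build a skill co-occurrence network from postings and rank skills by
# weighted degree. The postings and skill names are illustrative.
import itertools
import networkx as nx

postings = [
    {"Python", "SQL", "warehouse management"},
    {"Python", "data analysis", "SQL"},
    {"warehouse management", "forklift"},
]

G = nx.Graph()
for skills in postings:
    for a, b in itertools.combinations(sorted(skills), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

ranking = sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])
print(ranking[:5])   # "logistics + data" skills surface at the top
```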
15. The GeoDatabase Data Model and Its Geometric Network in Topological Analysis Applications (Cited by 17)
Authors: Shao Yongshe, Li Jing. Engineering of Surveying and Mapping (CSCD), 2005, No. 1, pp. 17-19 (3 pages).
This paper describes the GeoDatabase data model in the ArcInfo software and its characteristics, introduces the geometric network of the GeoDatabase data model, and, based on the characteristics of that geometric network, analyzes its application to topological analysis in geographic information systems.
Keywords: geometric network; data model; ArcInfo software; topological analysis; geographic information systems
16. Class-Imbalanced Pathological Voice Detection Based on Data Augmentation and Complex Feature Optimization
Authors: Wu Yaqin, Zhang Jiaqing, Zhang Tao. Applied Acoustics (Peking University Core), 2025, No. 1, pp. 234-244 (11 pages).
Aiming to improve multi-class classification accuracy for pathological voices, this paper builds a class-imbalanced pathological voice detection system based on data augmentation and complex feature optimization. First, 32 acoustic features are analyzed and grouped into time-domain and frequency-domain features. Second, an improved synthetic minority oversampling technique (SMOTE) is used to augment and balance the dataset. Then, an efficient correlation-based feature selection algorithm combined with boxplot analysis fuses and optimizes the multi-dimensional acoustic features, comprehensively evaluating each feature's discriminative power. Finally, the classification performance of different feature combinations is analyzed and validated with a random forest classifier. The results show that the proposed fused and optimized feature set (To, Fatr, Jita, sAPQ, vAm, NHR) performs excellently under a random forest classifier in classifying four pathologies (vocal nodules, polyps, edema, and paralysis), achieving 88.6% accuracy, 88.4% recall, an 88.4% F1 score, and an AUC of 0.997.
Keywords: pathological voice; data augmentation; complex features; efficient correlation-based feature selection; boxplot
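A runnable sketch of the pipeline's final stage, substituting standard SMOTE for the paper's improved variant and synthetic data for the voice dataset; the six selected feature names (To, Fatr, Jita, sAPQ, vAm, NHR) are the paper's, everything else is illustrative.

```python
# SMOTE oversampling inside a cross-validated random forest pipeline.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for the voice data: 4 imbalanced classes, 6 fused features.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=4, weights=[0.5, 0.25, 0.15, 0.1],
                           n_clusters_per_class=1, random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(k_neighbors=5, random_state=0)),  # oversample minorities
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
print(cross_val_score(pipe, X, y, cv=5, scoring="f1_macro").mean())
```

Placing SMOTE inside the pipeline ensures resampling happens only on each training fold, avoiding the leakage that fitting SMOTE on the full dataset would cause.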
17. Research on the Application of Magnetic Prospecting in Mineral Resource Surveys of Structurally Complex Geological Areas
Author: Lan Chaoge. World Nonferrous Metals, 2025, No. 1, pp. 37-39 (3 pages).
In mineral resource surveys of structurally complex geological areas, magnetic anomalies form under the influence of multiple geological factors, and the application of magnetic prospecting faces technical challenges in high-precision measurement, data processing, and interpretation. This paper discusses the basic principles, measurement methods, and data-processing techniques of magnetic prospecting, and analyzes the characteristics of magnetic anomalies and their interference factors in structurally complex areas. It proposes that joint magnetic-gravity surveying can improve exploration precision, and shows that magnetic prospecting is effective in locating ore bodies, estimating reserves, and revealing geological structure. Under complex geological conditions, magnetic prospecting still faces problems of error sources, difficulty in interpreting magnetic anomalies, and data optimization. Inversion and interpretation methods constrained by the geological background, together with innovative joint inversion of multiple physical fields, are effective ways to improve the precision of magnetic prospecting.
Keywords: magnetic prospecting; structurally complex geological areas; magnetic anomalies; data processing; joint exploration
18. Research on a Searchable Encryption Algorithm for Database Information Based on an Improved Blockchain
Authors: Jing Hai, Wu Jinguo, Yuan Jiajun. Electronic Design Engineering, 2025, No. 2, pp. 145-148, 153 (5 pages).
To eliminate the single point of failure and data-tampering risks of traditional databases and improve data security, a new searchable encryption algorithm for database information is studied based on an improved blockchain. A database information storage scheme is designed; interaction information is matched using improved blockchain techniques, different data-sharing intervals are transformed, and internal data-matching tasks are executed to achieve information matching. The search time complexity is computed and the total size of the encrypted labels searched is determined; from the leakage function at reconstruction, integer samples are obtained and re-encryption key data are computed, with keys generated from the results to complete encryption. Experiments show that with this algorithm the data packet loss rate is no more than 0.05% and the post-encryption tampering rate is below 0.5%, so database security is markedly improved.
Keywords: improved blockchain; database information; data matching; searchable encryption; time complexity
19. Hold the Drones: Fostering the Development of Big Data Paradigms through Regulatory Frameworks (Cited by 1)
Authors: Robert Spousta, Steve Chan. Journal of Communication and Computer (Chinese-English Edition), 2015, No. 3, pp. 135-145 (11 pages).
Keywords: unmanned aircraft systems; data paradigms; frameworks; regulation; aircraft systems; historical lessons; unmanned aerial vehicles
20. Empirical topological investigation of practical supply chains based on complex networks
Authors: Liao Hao, Shen Jing, Wu Xingtong, Chen Bokui, Zhou Mingyang. Chinese Physics B (SCIE, EI, CAS, CSCD), 2017, No. 11, pp. 144-150 (7 pages).
Industrial supply chain networks capture the circulation of social resources, dominating the stability and efficiency of the industrial system. In this paper, we provide an empirical study of the topology of a smartphone supply chain network constructed from open online data. Our experimental results show that the smartphone supply chain network has the small-world feature with a scale-free degree distribution, in which a few high-degree nodes play a key role in its function and can effectively reduce communication cost. We also detect the community structure to find the basic functional units. The analysis shows that information communication between nodes is crucial to improving resource utilization, and that attention should be paid to global resource configuration in such electronic production management.
Keywords: China; supply chain networks; complex networks; data science; network science
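The reported signatures (small-world, scale-free, community structure) can be checked with networkx; the sketch below uses a synthetic scale-free graph as a stand-in for the scraped supply chain edge list.

```python
# Topological checks: clustering, path length, degree tail, communities.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.barabasi_albert_graph(500, 2, seed=1)  # stand-in scale-free network

degrees = [d for _, d in G.degree()]
print("mean clustering:  ", nx.average_clustering(G))
print("avg shortest path:", nx.average_shortest_path_length(G))
print("max / mean degree:", max(degrees), sum(degrees) / len(degrees))

# Community detection approximates the paper's "basic functional units".
print("communities:", len(greedy_modularity_communities(G)))
```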