Journal Articles
2,001 articles found
1. Regulatory role of the ARF guanine nucleotide exchange factors BIGs in Golgi-associated vesicular transport
Authors: 林思思, 林巍, 王莹, 周春, 李翠限, 沈晓燕. 《中山大学学报(医学科学版)》 (Journal of Sun Yat-sen University, Medical Sciences), CAS, CSCD, PKU Core, 2012, No. 3, pp. 311-315 (5 pages)
[Objective] To investigate the functions of the ARF guanine nucleotide exchange factors BIG1 and BIG2 in Golgi-associated vesicular transport. [Methods] siRNA interference sequences were transfected into cells using liposomes, and transfection efficiency was assessed by western blotting. The expression and distribution of BIGs proteins in HeLa cells were examined by immunofluorescence staining. HeLa cells were incubated with Alexa568-labeled transferrin to monitor transferrin-associated endosome recycling, and cells were transfected with the VSVG-YFP viral plasmid to track the transport of nascent protein from the endoplasmic reticulum through the Golgi to the plasma membrane. [Results] The siRNA knockdown efficiency for both BIG1 and BIG2 exceeded 70%, with good specificity. After knockdown of BIGs, the intracellular TGN230 structure became loose, appearing as short fragments or puncta. Knockdown of BIG2 caused intracellular accumulation of transferrin, and simultaneous knockdown of BIG1 further aggravated this accumulation. Knockdown of BIG1 and/or BIG2 inhibited the transport of nascent protein from the endoplasmic reticulum to the Golgi and the cell surface. [Conclusion] BIGs proteins localize mainly to the trans-Golgi network and are essential for maintaining its structural integrity; both participate in regulating Golgi-associated vesicular transport and act synergistically.
Keywords: RNAi, BIGs, trans-Golgi network, vesicular transport, VSVG-YFP
2. Reliability evaluation of IGBT power module on electric vehicle using big data (cited: 1)
Authors: Li Liu, Lei Tang, Huaping Jiang, Fanyi Wei, Zonghua Li, Changhong Du, Qianlei Peng, Guocheng Lu. Journal of Semiconductors, EI CAS CSCD, 2024, No. 5, pp. 50-60 (11 pages)
There are challenges to the reliability evaluation of insulated gate bipolar transistors (IGBT) on electric vehicles, such as junction temperature measurement and limited computational and storage resources. In this paper, a junction temperature estimation approach based on a neural network without additional cost is proposed, and the lifetime calculation for IGBT using electric vehicle big data is performed. The direct current (DC) voltage, operation current, switching frequency, negative thermal coefficient thermistor (NTC) temperature, and IGBT lifetime are the inputs, and the junction temperature (T_j) is the output. With the rainflow counting method, the classified irregular temperatures are brought into the life model to obtain the failure cycles. The fatigue accumulation method is then used to calculate the IGBT lifetime. To work around the limited computational and storage resources of electric vehicle controllers, the IGBT lifetime calculation runs on a big data platform. The lifetime is then transmitted wirelessly to electric vehicles as input for the neural network. Thus the junction temperature of IGBT under long-term operating conditions can be accurately estimated. A test platform of the motor controller combined with the vehicle big data server is built for the IGBT accelerated aging test. Subsequently, the IGBT lifetime predictions are derived from the junction temperature estimation by the neural network method and the thermal network method. The experiment shows that the lifetime prediction based on a neural network with big data demonstrates a higher accuracy than that of the thermal network, which improves the reliability evaluation of the system.
Keywords: IGBT, junction temperature, neural network, electric vehicles, big data
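The lifetime-calculation chain in this abstract (rainflow counting of the junction-temperature history, a cycles-to-failure life model, then Miner-style fatigue accumulation) can be illustrated compactly. A minimal sketch, assuming the open-source `rainflow` Python package and a hypothetical Coffin-Manson-type life model whose constants are placeholders, not the paper's fitted coefficients:

```python
# Sketch of the rainflow + fatigue-accumulation step (illustrative only).
# Assumes the third-party `rainflow` package; the life-model constants
# A and n below are placeholders, not the coefficients used in the paper.
import numpy as np
import rainflow

def cycles_to_failure(delta_tj: float, A: float = 3e14, n: float = -5.0) -> float:
    """Hypothetical Coffin-Manson-type life model: Nf = A * dTj^n."""
    return A * max(delta_tj, 1e-6) ** n

def consumed_lifetime(tj_history: np.ndarray) -> float:
    """Accumulate fatigue damage (Miner's rule) over a Tj time series."""
    damage = 0.0
    # count_cycles returns (temperature swing, cycle count) pairs
    for delta_tj, count in rainflow.count_cycles(tj_history):
        damage += count / cycles_to_failure(delta_tj)
    return damage  # failure is predicted when accumulated damage reaches 1.0

# Example: an irregular junction-temperature profile from the estimator
tj = np.array([40, 95, 55, 110, 60, 120, 50, 100, 45], dtype=float)
print(f"consumed lifetime fraction: {consumed_lifetime(tj):.2e}")
```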
3. Hadoop-based secure storage solution for big data in cloud computing environment (cited: 1)
Authors: Shaopeng Guan, Conghui Zhang, Yilin Wang, Wenqing Liu. Digital Communications and Networks, SCIE CSCD, 2024, No. 1, pp. 227-236 (10 pages)
In order to address the problems of a single encryption algorithm, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms and use the Zookeeper distributed coordination mechanism to coordinate each node to achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data and adopt a homomorphic encryption algorithm to encrypt data that needs to be calculated. To accelerate the encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize the fault tolerance of the server. The improved encryption algorithm integrates the dual-channel storage mode, and the encryption storage efficiency improves by 27.6% on average.
Keywords: Big data security, data encryption, Hadoop, parallel encrypted storage, Zookeeper
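The dual-thread encryption mode described above can be approximated as follows. This is a minimal sketch that uses Python's `ThreadPoolExecutor` and the `cryptography` package's Fernet cipher as a stand-in for the paper's improved ECC and homomorphic algorithms, which are not public:

```python
# Illustrative dual-thread encryption of data blocks before HDFS writes.
# Fernet stands in for the paper's improved ECC/homomorphic ciphers.
from concurrent.futures import ThreadPoolExecutor
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_block(block: bytes) -> bytes:
    return cipher.encrypt(block)

def encrypt_file(data: bytes, block_size: int = 64 * 1024) -> list[bytes]:
    """Split data into blocks and encrypt them on two worker threads."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=2) as pool:  # the "dual-thread" mode
        return list(pool.map(encrypt_block, blocks))

encrypted = encrypt_file(b"x" * 300_000)
print(len(encrypted), "encrypted blocks ready for the HDFS control module")
```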
4. Study of primordial deuterium abundance in Big Bang nucleosynthesis (cited: 1)
Authors: Zhi-Lin Shen, Jian-Jun He. Nuclear Science and Techniques, SCIE EI CAS CSCD, 2024, No. 3, pp. 208-215 (8 pages)
Big Bang nucleosynthesis (BBN) theory predicts the primordial abundances of the light elements ²H (referred to as deuterium, or D for short), ³He, ⁴He, and ⁷Li produced in the early universe. Among these, deuterium, the first nuclide produced by BBN, is a key primordial material for subsequent reactions. To date, the uncertainty in the predicted deuterium abundance (D/H) remains larger than the observational precision. In this study, the Monte Carlo simulation code PRIMAT was used to investigate the sensitivity of 11 important BBN reactions to deuterium abundance. We found that the reaction-rate uncertainties of the four reactions d(d,n)³He, d(d,p)t, d(p,γ)³He, and p(n,γ)d had the largest influence on the calculated D/H uncertainty. Currently, the calculated D/H uncertainty cannot reach observational precision even with the recent precise LUNA d(p,γ)³He rate. From the nuclear physics aspect, there is still room to largely reduce the reaction-rate uncertainties; hence, further measurements of the important reactions involved in BBN are still necessary. A photodisintegration experiment will be conducted at the Shanghai Laser Electron Gamma Source Facility to precisely study the deuterium production reaction p(n,γ)d.
Keywords: Big Bang nucleosynthesis, abundance of deuterium, reaction cross section, reaction rate, Monte Carlo method
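The Monte Carlo sensitivity analysis can be mimicked with a toy propagation exercise. A minimal sketch, assuming linearized sensitivities of D/H to each reaction rate; the coefficients and uncertainties below are illustrative placeholders, not PRIMAT outputs:

```python
# Toy Monte Carlo propagation of reaction-rate uncertainties to D/H.
# Sensitivity coefficients and uncertainties are illustrative placeholders;
# the paper derives the real values with the PRIMAT code.
import numpy as np

rng = np.random.default_rng(0)
# reaction: (sensitivity dln(D/H)/dln(rate), fractional rate uncertainty)
reactions = {
    "d(d,n)3He": (-0.54, 0.01),
    "d(d,p)t":   (-0.46, 0.01),
    "d(p,g)3He": (-0.31, 0.02),
    "p(n,g)d":   ( 0.20, 0.01),
}
dh_central = 2.5e-5  # illustrative central D/H value

samples = np.full(100_000, np.log(dh_central))
for sens, sigma in reactions.values():
    # sample each rate lognormally and propagate linearly in log space
    samples += sens * rng.normal(0.0, sigma, samples.size)

dh = np.exp(samples)
print(f"D/H = {dh.mean():.3e} +/- {dh.std():.1e}")
```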
5. Exploring deep learning for landslide mapping: A comprehensive review (cited: 1)
Authors: Zhi-qiang Yang, Wen-wen Qi, Chong Xu, Xiao-yi Shao. China Geology, CAS CSCD, 2024, No. 2, pp. 330-350 (21 pages)
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning. Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors. Recent advancements in high-resolution satellite imagery, coupled with the rapid development of artificial intelligence, particularly data-driven deep learning algorithms (DL) such as convolutional neural networks (CNN), have provided rich feature indicators for landslide mapping, overcoming previous limitations. In this review paper, 77 representative DL-based landslide detection methods applied in various environments over the past seven years were examined. This study analyzed the structures of different DL networks, discussed five main application scenarios, and assessed both the advancements and limitations of DL in geological hazard analysis. The results indicated that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence, with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization. Finally, we explored the hindrances of DL in landslide hazard research based on the above research content. Challenges such as black-box operations and sample dependence persist, warranting further theoretical research and future application of DL in landslide detection.
Keywords: Landslide mapping, quantitative hazard assessment, deep learning, artificial intelligence, neural network, big data, geological hazard survey engineering
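Since U-Net-based structures are highlighted as the dominant architecture, a stripped-down PyTorch encoder-decoder with a single skip connection conveys the core idea. This is a sketch, not a network from any of the 77 reviewed papers:

```python
# Minimal U-Net-style network for binary landslide masks (illustrative).
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_block(3, 16)            # encoder level
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)           # bottleneck
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)           # decoder after skip concat
        self.head = nn.Conv2d(16, 1, 1)         # landslide / background

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        d = self.dec(torch.cat([u, e], dim=1))  # the U-Net skip connection
        return self.head(d)                     # per-pixel logits

logits = TinyUNet()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 1, 128, 128])
```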
6. Predictive value of the BIG score for early brain function in children with moderate-to-severe traumatic brain injury undergoing decompressive craniectomy
Authors: 徐静静, 党红星. 《临床医学进展》 (Advances in Clinical Medicine), 2024, No. 4, pp. 2631-2640 (10 pages)
Objective: To investigate the predictive value of the BIG score (composed of the Glasgow Coma Scale score, international normalized ratio, and base excess) for early brain-function outcomes in children with moderate-to-severe traumatic brain injury (TBI) undergoing decompressive craniectomy (DC). Methods: All children with moderate-to-severe TBI treated with DC at our hospital from March 2014 to July 2023 were retrospectively analyzed. Using the Pediatric Cerebral Performance Category (PCPC) at discharge as the outcome, patients were divided into a good-prognosis group (PCPC 1-2) and a poor-prognosis group (PCPC 3-6). Clinical information was extracted from medical records, and logistic regression was used to evaluate the predictive value of the BIG score. Results: Fifty-five children with moderate-to-severe TBI treated with DC were included; 25 had good brain function at discharge and 30 had poor outcomes (including 9 deaths). A high BIG score at admission (p < 0.001), poor pupillary light reflex (p = 0.027), hemorrhagic shock (p = 0.042), multiple trauma (p = 0.043), cerebral edema (p = 0.007), hyperglycemia (p = 0.042), and hyperlactatemia (p = 0.029) were all associated with poor brain function at discharge. Logistic regression showed that a high BIG score at admission was an independent risk factor for poor brain function at discharge. ROC curve analysis identified an optimal BIG score threshold of 17.5, giving a sensitivity of 66.7% and a specificity of 88.0% for predicting poor outcome. Conclusion: The overall proportion of poor brain function at discharge among children with moderate-to-severe TBI undergoing DC was 54.5%. The BIG score at admission predicts early brain-function outcomes at discharge in these children with high sensitivity and specificity.
Keywords: traumatic brain injury, decompressive craniectomy, BIG score, children, prognosis
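The scoring and threshold logic from the abstract can be sketched as follows. The commonly published BIG score formula (base deficit + 2.5 * INR + (15 - GCS)) is assumed here; consult the paper for its exact specification:

```python
# Sketch of the BIG score and the paper's ROC-derived threshold.
# Assumes the commonly published formula: base deficit + 2.5*INR + (15 - GCS),
# where base deficit is the negative of base excess.
def big_score(base_deficit: float, inr: float, gcs: int) -> float:
    return base_deficit + 2.5 * inr + (15 - gcs)

def predict_poor_outcome(score: float, threshold: float = 17.5) -> bool:
    # 17.5 is the optimal cut-off reported in the abstract
    # (sensitivity 66.7%, specificity 88.0%)
    return score >= threshold

s = big_score(base_deficit=8.0, inr=1.6, gcs=6)
print(f"BIG score = {s:.1f}, poor outcome predicted: {predict_poor_outcome(s)}")
```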
7. Analysis and Modeling of Mobile Phone Activity Data Using Interactive Cyber-Physical Social System
Authors: Farhan Amin, Gyu Sang Choi. Computers, Materials & Continua, SCIE EI, 2024, No. 9, pp. 3507-3521 (15 pages)
Mobile networks possess significant information and thus are considered a gold mine for the researcher's community. The call detail records (CDR) of a mobile network are used to identify the network's efficacy and the mobile user's behavior. It is evident from the recent literature that cyber-physical systems (CPS) have been used in the analytics and modeling of telecom data. In addition, CPS is used to provide valuable services in smart cities. In general, a typical telecom company has millions of subscribers and thus generates massive amounts of data. From this aspect, data storage, analysis, and processing are the key concerns. To solve these issues, herein we propose a multilevel cyber-physical social system (CPSS) for the analysis and modeling of large internet data. Our proposed multilevel system has three levels, and each level has a specific functionality. Initially, raw CDR data are collected at the first level, where data preprocessing, cleaning, and error-removal operations are performed. In the second level, data processing, reduction, integration, and storage are performed, and the suggested internet activity record measures are applied. Our proposed system initially constructs a graph and then performs network analysis. Thus, the proposed CPSS system accurately identifies different areas of peak internet usage in a city (Milan city). Our research is helpful for network operators to plan effective network configuration, management, and optimization of resources.
Keywords: Cyber-physical social systems, big data, cyber-physical systems, pervasive computing, smart city, big data management techniques
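The third-level step, constructing a graph from processed CDR activity and running network analysis on it, can be sketched with `networkx`. The record schema and toy values below are assumptions, not the Milan dataset's actual fields:

```python
# Sketch: build an activity graph from processed CDR rows, then rank areas.
# The (cell_from, cell_to, traffic) schema is an assumption for illustration.
import networkx as nx

cdr_rows = [  # (origin grid cell, destination grid cell, internet traffic)
    ("cell_01", "cell_02", 120.0),
    ("cell_02", "cell_03", 340.0),
    ("cell_01", "cell_03", 80.0),
    ("cell_03", "cell_04", 510.0),
]

g = nx.DiGraph()
for src, dst, traffic in cdr_rows:
    g.add_edge(src, dst, weight=traffic)

# Weighted degree highlights areas of peak internet usage in the city
peak = sorted(g.degree(weight="weight"), key=lambda kv: kv[1], reverse=True)
print("busiest areas:", peak[:2])
```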
8. Design and implementation of low-cost geomagnetic field monitoring equipment for high-density deployment
Authors: Sun Lu-Qiang, Bai Xian-Fu, Kang Jian, Zeng Ning, Zhu Hong, Zhang Ming-Dong. Applied Geophysics, SCIE CSCD, 2024, No. 3, pp. 505-512, 618 (9 pages)
The observation of geomagnetic field variations is an important approach to studying earthquake precursors. Since 1987, the China Earthquake Administration has explored this seismomagnetic relationship, in particular by studying local magnetic field anomalies over the Chinese mainland for earthquake prediction. Through years of research on the seismomagnetic relationship, earthquake prediction experts have concluded that the compressive magnetic effect, the tectonic magnetic effect, the electric magnetic fluid effect, and other factors contribute to pre-earthquake magnetic anomalies. However, these involve only small magnetic field changes, which are difficult to relate to the abnormal changes of the extremely large magnetic field in regions with extreme earthquakes. The high cost of professional geomagnetic equipment limits large-scale deployment, making it difficult to capture strong magnetic field changes before an earthquake. The Tianjin Earthquake Agency has therefore developed low-cost geomagnetic field observation equipment through the Beijing-Tianjin-Hebei geomagnetic equipment test project. The new system was used to test the availability of the equipment and to derive findings based on big data.
Keywords: geomagnetic field, earthquake prediction, low cost, high density, big data
9. Leveraging the potential of big genomic and phenotypic data for genome-wide association mapping in wheat
Authors: Moritz Lell, Yusheng Zhao, Jochen C. Reif. The Crop Journal, SCIE CSCD, 2024, No. 3, pp. 803-813 (11 pages)
Genome-wide association mapping studies (GWAS) based on Big Data are a potential approach to improve marker-assisted selection in plant breeding. The number of available phenotypic and genomic data sets in which medium-sized populations of several hundred individuals have been studied is rapidly increasing. Combining these data and using them in GWAS could increase both the power of QTL discovery and the accuracy of estimation of the underlying genetic effects, but this is hindered by data heterogeneity and lack of interoperability. In this study, we used genomic and phenotypic data sets focusing on Central European winter wheat populations evaluated for heading date. We explored strategies for integrating these data and, subsequently, the resulting potential for GWAS. Establishing interoperability between data sets was greatly aided by some overlapping genotypes and a linear relationship between the different phenotyping protocols, resulting in high-quality integrated phenotypic data. In this context, genomic prediction proved to be a suitable tool to study the relevance of interactions between genotypes and experimental series, which was low in our case. Contrary to expectations, fewer associations between markers and traits were found in the larger combined data than in the individual experimental series. However, the predictive power based on the marker-trait associations of the integrated data set was higher across data sets. Therefore, the results show that the integration of medium-sized data sets into Big Data is an approach to increase the power to detect QTL in GWAS. The results encourage further efforts to standardize and share data in the plant breeding community.
Keywords: Big Data, genome-wide association study, data integration, genomic prediction, wheat
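The key integration step, exploiting overlapping genotypes and the linear relationship between phenotyping protocols, amounts to calibrating one phenotype series onto another's scale. A minimal sketch with hypothetical heading-date values:

```python
# Sketch: calibrate heading-date scores from series B onto series A's scale
# using genotypes phenotyped in both series (values are hypothetical).
import numpy as np

overlap_a = np.array([152.0, 155.5, 149.0, 158.0])  # series A, days
overlap_b = np.array([50.0, 53.2, 47.1, 55.9])      # series B, protocol score

# fit the linear relationship between the two phenotyping protocols
slope, intercept = np.polyfit(overlap_b, overlap_a, deg=1)

def to_series_a_scale(b_scores: np.ndarray) -> np.ndarray:
    return slope * b_scores + intercept

new_b = np.array([49.0, 54.5])
print("integrated phenotypes:", to_series_a_scale(new_b).round(1))
```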
10. Urbanity mapping reveals the complexity, diffuseness, diversity, and connectivity of urbanized areas
Authors: Dawa Zhaxi, Weiqi Zhou, Steward T.A. Pickett, Chengmeng Guo, Yang Yao. Geography and Sustainability, CSCD, 2024, No. 3, pp. 357-369 (13 pages)
There are urgent calls for new approaches to map the global urban conditions of complexity, diffuseness, diversity, and connectivity. However, existing methods mostly focus on mapping urbanized areas as biophysical entities. Here, based on the continuum-of-urbanity framework, we developed an approach for cross-scale urbanity mapping from town to city and urban megaregion at different spatial resolutions using the Google Earth Engine. This approach was built on multi-source remote sensing data, Points of Interest and OpenStreetMap (POIs-OSM) big data, and a random forest regression model. The approach is scale-independent and revealed significant spatial variations in urbanity, underscoring differences in urbanization patterns across megaregions and between urban and rural areas. Urbanity was observed transcending traditional urban boundaries, diffusing into rural settlements within non-urban locales. The finding of urbanity in rural communities far from urban areas challenges the gradient theory of urban-rural development and distribution. By mapping livelihoods, lifestyles, and connectivity simultaneously, urbanity maps present a more comprehensive characterization of the complexity, diffuseness, diversity, and connectivity of urbanized areas than land cover or population density alone. This helps enhance the understanding of urbanization beyond biophysical form. The approach can provide a multifaceted understanding of urbanization, and thereby insights into urban and regional sustainability.
Keywords: Continuum of urbanity, big data, mapping, spatial regression, multiscale
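The mapping core, a random forest regression from multi-source features to an urbanity score, can be sketched with scikit-learn. The feature set and training values are illustrative assumptions, not the paper's predictors:

```python
# Sketch: random-forest regression of an urbanity score per 500 m grid cell.
# Features (night-time light, built-up fraction, POI density) are assumed
# stand-ins for the multi-source predictors used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_train = np.array([  # [night light, built-up fraction, POI density]
    [55.0, 0.80, 120.0],
    [12.0, 0.20, 8.0],
    [30.0, 0.55, 45.0],
    [2.0, 0.05, 1.0],
])
y_train = np.array([0.92, 0.31, 0.63, 0.04])  # urbanity in [0, 1]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("predicted urbanity:", model.predict([[40.0, 0.6, 70.0]]))
```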
11. A multi-feature-based intelligent redundancy elimination scheme for cloud-assisted health systems
Authors: Ling Xiao, Beiji Zou, Xiaoyan Kui, Chengzhang Zhu, Wensheng Zhang, Xuebing Yang, Bob Zhang. CAAI Transactions on Intelligence Technology, SCIE EI, 2024, No. 2, pp. 491-510 (20 pages)
Redundancy elimination techniques are extensively investigated to reduce storage overheads for cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique to deduplication to further remove the redundancy of similar blocks, but our observations indicate that this does not hold when data have sparse duplicate blocks. In addition, there are many overlapped deltas in the resemblance detection process of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries abundant non-similar blocks, resulting in inefficient system throughput. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. The similarity feature and the temporal locality feature are excavated to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression ratios. Moreover, a clustering method based on block relationships and a feature index strategy based on bloom filters reduce IO overheads and improve system throughput. Experiments demonstrate that the proposed method, compared to the state-of-the-art method, improves the compression ratio and system throughput by 9.68% and 50%, respectively.
Keywords: big data, cloud computing, compression, data compression, medical applications, performance evaluation
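The bloom-filter-based feature index that screens out non-similar blocks before the resemblance lookup can be sketched as follows; the bit-array size and hash count are arbitrary illustration choices:

```python
# Sketch: a bloom filter screening block features before resemblance lookup,
# so clearly non-similar blocks skip the expensive index query.
import hashlib

class BloomFilter:
    def __init__(self, size: int = 1 << 16, hashes: int = 3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item: bytes):
        for i in range(self.hashes):
            digest = hashlib.sha256(i.to_bytes(1, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

index = BloomFilter()
index.add(b"feature:super-fingerprint-42")
# only blocks whose features may be indexed proceed to resemblance detection
print(index.might_contain(b"feature:super-fingerprint-42"))  # True
print(index.might_contain(b"feature:unrelated"))             # False (w.h.p.)
```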
12. Big Data Access Control Mechanism Based on Two-Layer Permission Decision Structure
Authors: Aodi Liu, Na Wang, Xuehui Du, Dibin Shan, Xiangyu Wu, Wenjuan Wang. Computers, Materials & Continua, SCIE EI, 2024, No. 4, pp. 1705-1726 (22 pages)
Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing the access control policy. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model. Business attributes are introduced in the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute the general permission decision algorithm based on logical calculation and the business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network, respectively. The general permission decision algorithm is used to implement accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network is used to calculate the similarity of the business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, the complex and diverse big data access control management requirements can be satisfied by considering both the security and the availability of resources. Experimental results show that the proposed mechanism is effective and reliable. In summary, it can efficiently support the secure sharing of big data resources.
Keywords: Big data, access control, data security, BiLSTM
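The two-layer decision structure can be rendered in miniature: an exact logical check over inherent attributes, then a fuzzy check over business attributes. Cosine similarity over toy vectors stands in for the paper's BiLSTM similarity model:

```python
# Sketch: two-layer permission decision. Layer 1 is exact logical matching
# on inherent attributes; layer 2 is fuzzy matching on business attributes
# (cosine similarity here stands in for the paper's BiLSTM model).
import numpy as np

def general_decision(subject: dict, policy: dict) -> bool:
    """Layer 1: every policy attribute must match exactly."""
    return all(subject.get(k) == v for k, v in policy.items())

def business_decision(subj_vec: np.ndarray, res_vec: np.ndarray,
                      threshold: float = 0.8) -> bool:
    """Layer 2: fuzzy decision from business-attribute similarity."""
    sim = subj_vec @ res_vec / (np.linalg.norm(subj_vec) * np.linalg.norm(res_vec))
    return sim >= threshold

subject = {"role": "analyst", "department": "risk"}
policy = {"role": "analyst", "department": "risk"}
subj_business = np.array([0.9, 0.2, 0.7])   # toy business-attribute embedding
res_business = np.array([0.8, 0.3, 0.75])

granted = general_decision(subject, policy) and business_decision(subj_business, res_business)
print("access granted:", granted)
```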
13. Exploring impacts of COVID-19 on spatial and temporal patterns of visitors to Canadian Rocky Mountain National Parks from social media big data
Authors: Dehui Christina Geng, Amy Li, Jieyu Zhang, Howie W. Harshaw, Christopher Gaston, Wanli Wu, Guangyu Wang. Journal of Forestry Research, SCIE EI CAS CSCD, 2024, No. 4, pp. 13-33 (21 pages)
COVID-19 posed challenges for global tourism management. Changes in visitor temporal and spatial patterns and their associated determinants pre- and peri-pandemic in Canadian Rocky Mountain National Parks are analyzed. Data were collected through social media programming and analyzed using spatiotemporal analysis and a geographically weighted regression (GWR) model. Results highlight that COVID-19 significantly changed park visitation patterns: visitors tended to explore more remote areas peri-pandemic. The GWR model also indicated that distance to nearby trails was a significant influence on visitor density. Our results indicate that the pandemic influenced the temporal and spatial imbalance of tourism. This research presents a novel approach using combined social media big data which can be extended to the field of tourism management, and it has important implications for managing visitor patterns and allocating resources efficiently to satisfy the multiple objectives of park management.
Keywords: Tourism management, social media big data, national parks, COVID-19, geographically weighted regression
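A geographically weighted regression re-estimates a local linear model at each location, down-weighting distant observations. A compact numpy sketch with a Gaussian kernel; the coordinates and visitor densities are toy values, not the study's data:

```python
# Sketch: geographically weighted regression with a Gaussian kernel.
# Each location gets its own coefficients from distance-weighted least squares.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
X = np.column_stack([np.ones(4), [0.5, 1.2, 0.8, 3.0]])  # intercept, trail dist
y = np.array([9.0, 6.5, 7.8, 1.2])                       # visitor density

def gwr_coefficients(at: np.ndarray, bandwidth: float = 1.0) -> np.ndarray:
    d = np.linalg.norm(coords - at, axis=1)
    w = np.exp(-(d / bandwidth) ** 2 / 2)       # Gaussian spatial kernel
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

for pt in coords:
    b0, b1 = gwr_coefficients(pt)
    print(f"at {pt}: intercept={b0:.2f}, trail-distance effect={b1:.2f}")
```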
14. Geographic big data reveal the spatial patterns of built environment material stocks in major Chinese cities
Authors: Zhou Huang, Yi Bao, Ruichang Mao, Han Wang, Ganmin Yin, Lin Wan, Houji Qi, Qiaoxuan Li, Hongzhao Tang, Qiance Liu, Linna Li, Bailang Yu, Qinghua Guo, Yu Liu, Huadong Guo, Gang Liu. Engineering, SCIE EI CAS CSCD, 2024, No. 3, pp. 143-153 (11 pages)
The patterns of material accumulation in buildings and infrastructure accompanying rapid urbanization offer an important, yet hitherto largely missing, stock perspective for facilitating urban system engineering and informing urban resources, waste, and climate strategies. However, our existing knowledge of the patterns of built environment stocks across and particularly within cities is limited, largely owing to the lack of sufficiently high-spatial-resolution data. This study leveraged multi-source big geodata, machine learning, and bottom-up stock accounting to characterize the built environment stocks of 50 cities in China at a fine-grained 500 m level. The per capita built environment stock of many cities (261 tonnes per capita on average) is close to that of western cities, despite considerable disparities across cities owing to their varying socioeconomic, geomorphological, and urban form characteristics. This is mainly owing to the construction boom and the building- and infrastructure-driven economy of China in the past decades. China's urban expansion tends to be more "vertical" (with high-rise buildings) than "horizontal" (with expanded road networks). It trades skylines for space and reflects a concentration-dispersion-concentration pathway for spatialized built environment stock development within cities in China. These results shed light on future urbanization in developing cities, inform spatial planning, and support circular and low-carbon transitions in cities.
Keywords: Urban system engineering, built environment stock, spatial pattern, urban sustainability, big geodata
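The bottom-up stock accounting step multiplies mapped building and infrastructure quantities by material-intensity coefficients. A minimal sketch for one grid cell, with hypothetical intensities; the paper's coefficients vary by structure type and region:

```python
# Sketch: bottom-up built-environment stock for one 500 m grid cell.
# Material-intensity coefficients are hypothetical placeholders.
floor_area_m2 = {"residential_highrise": 85_000, "commercial": 20_000}
road_length_m = 3_200

material_intensity_t_per_m2 = {   # tonnes of material per m2 of floor area
    "residential_highrise": {"concrete": 1.6, "steel": 0.06, "brick": 0.25},
    "commercial": {"concrete": 1.9, "steel": 0.09, "brick": 0.10},
}
road_intensity_t_per_m = {"asphalt": 0.9, "gravel": 2.4}  # per metre of road

stock: dict[str, float] = {}
for btype, area in floor_area_m2.items():
    for material, mi in material_intensity_t_per_m2[btype].items():
        stock[material] = stock.get(material, 0.0) + area * mi
for material, mi in road_intensity_t_per_m.items():
    stock[material] = stock.get(material, 0.0) + road_length_m * mi

print({m: round(t) for m, t in stock.items()})  # tonnes per grid cell
```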
15. A review on edge analytics: Issues, challenges, opportunities, promises, future directions, and applications
Authors: Sabuzima Nayak, Ripon Patgiri, Lilapati Waikhom, Arif Ahmed. Digital Communications and Networks, SCIE CSCD, 2024, No. 3, pp. 783-804 (22 pages)
Edge technology aims to bring cloud resources (specifically, computation, storage, and network) into close proximity to the edge devices, i.e., the smart devices where the data are produced and consumed. Embedding computing and applications in edge devices has led to two new concepts in edge technology: edge computing and edge analytics. Edge analytics uses techniques or algorithms to analyse the data generated by the edge devices. With the emergence of edge analytics, the edge devices have become a complete set. Currently, however, edge analytics is unable to provide full support to the analytic techniques: edge devices cannot execute advanced and sophisticated analytic algorithms owing to various constraints such as a limited power supply, small memory size, and limited resources. This article aims to provide a detailed discussion on edge analytics. The key contributions of the paper are as follows: a clear explanation distinguishing the three concepts of edge technology (edge devices, edge computing, and edge analytics), along with their issues. In addition, the article discusses the implementation of edge analytics to solve many problems and applications in various areas such as retail, agriculture, industry, and healthcare. Moreover, state-of-the-art research papers on edge analytics are rigorously reviewed to explore the existing issues, emerging challenges, research opportunities and directions, and applications.
Keywords: Edge analytics, edge computing, edge devices, big data, sensor, artificial intelligence, machine learning, smart technology, healthcare
16. An Innovative K-Anonymity Privacy-Preserving Algorithm to Improve Data Availability in the Context of Big Data
Authors: Linlin Yuan, Tiantian Zhang, Yuling Chen, Yuxiang Yang, Huang Li. Computers, Materials & Continua, SCIE EI, 2024, No. 4, pp. 1561-1579 (19 pages)
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective privacy-preserving algorithm of low computational complexity that can safeguard users' privacy by anonymizing big data. However, the algorithm currently suffers from the problem of focusing only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. On this basis, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we set weights for each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the information loss smaller than in the original table while guaranteeing privacy, based on common artificial intelligence algorithms, i.e., a greedy algorithm and a 2-means clustering algorithm. We also improve the 2-means clustering algorithm by designing a mean-center method to select the initial centers of mass. Meanwhile, we design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss.
Keywords: Blockchain, big data, K-anonymity, 2-means clustering, greedy algorithm, mean-center method
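The improved 2-means initialization, selecting initial centers via a mean-center method rather than at random, can be sketched as follows. The specific rule used here (the records farthest from the global mean on opposite sides) is one plausible reading of "mean-center" and is flagged as an assumption:

```python
# Sketch of 2-means clustering with a mean-center initialization for
# K-anonymity grouping. The exact center rule is an assumption: we take
# the records farthest from the global mean on opposite sides of it.
import numpy as np

def mean_center_init(X: np.ndarray) -> np.ndarray:
    mean = X.mean(axis=0)
    d = np.linalg.norm(X - mean, axis=1)
    far_pos = X[d.argmax()]
    side = (X - mean) @ (far_pos - mean)        # projection sign vs. far point
    far_neg = X[np.where(side < 0, d, -np.inf).argmax()]
    return np.stack([far_pos, far_neg])

def two_means(X: np.ndarray, iters: int = 20) -> np.ndarray:
    centers = mean_center_init(X)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

X = np.array([[25, 50_000], [27, 52_000], [61, 98_000], [58, 91_000.0]])
print(two_means(X))  # two candidate anonymization groups over the records
```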
17. Big Data Application Simulation Platform Design for Onboard Distributed Processing of LEO Mega-Constellation Networks
Authors: Zhang Zhikai, Gu Shushi, Zhang Qinyu, Xue Jiayin. China Communications, SCIE CSCD, 2024, No. 7, pp. 334-345 (12 pages)
Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning, and other big data services desirably need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with highly dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable that can simulate the network environment of LMCNs and put BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped as application nodes through software defined network (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency which fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs deteriorate computing and storage job throughput, which can be alleviated by the utilization of erasure codes and data-flow scheduling of worker nodes.
Keywords: big data application, Hadoop, LEO mega-constellation, multidimensional simulation, onboard distributed processing
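The reported periodic fluctuation of end-to-end latency with constellation movement can be reproduced with a toy propagation-delay model between a fixed ground node and one satellite on a circular orbit. Parameters are illustrative, not Starlink values, and visibility windows are ignored:

```python
# Toy model: propagation delay between a ground station and one LEO
# satellite on a circular orbit, showing the periodic latency swing.
# (Ignores Earth blockage and inter-satellite routing; illustrative only.)
import numpy as np

R_EARTH = 6371e3          # m
ALT = 550e3               # m, illustrative LEO altitude
C = 3e8                   # m/s, speed of light
period = 5730.0           # s, roughly the orbital period at ~550 km

t = np.linspace(0.0, 2 * period, 400)
theta = 2 * np.pi * t / period                 # satellite orbital phase
sat = (R_EARTH + ALT) * np.stack([np.cos(theta), np.sin(theta)])
ground = np.array([[R_EARTH], [0.0]])          # fixed ground node (2D slice)

dist = np.linalg.norm(sat - ground, axis=0)
latency_ms = dist / C * 1e3
print(f"one-way latency swings {latency_ms.min():.1f}-{latency_ms.max():.1f} ms")
```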
18. An intelligent prediction model of epidemic characters based on multi-feature
Authors: Xiaoying Wang, Chunmei Li, Yilei Wang, Lin Yin, Qilin Zhou, Rui Zheng, Qingwu Wu, Yuqi Zhou, Min Dai. CAAI Transactions on Intelligence Technology, SCIE EI, 2024, No. 3, pp. 595-607 (13 pages)
The epidemic characters of Omicron (e.g., large-scale transmission) are significantly different from those of the initial variants of COVID-19. The data generated by large-scale transmission are important for predicting the trend of epidemic characters. However, the results of current prediction models are inaccurate since they are not closely combined with the actual situation of Omicron transmission. In consequence, these inaccurate results have negative impacts on the manufacturing and service industries, for example, the production of masks and the recovery of the tourism industry. The authors have studied the epidemic characters in two ways, namely investigation and prediction. First, a large amount of data was collected by utilising the Baidu index, and a questionnaire survey concerning epidemic characters was conducted. Second, the β-SEIDR model is established, where the population is classified as Susceptible, Exposed, Infected, Dead, and β-Recovered persons, to intelligently predict the epidemic characters of COVID-19. Note that β-Recovered denotes that Recovered persons may become Susceptible persons again with probability β. The simulation results show that the model can accurately predict the epidemic characters.
Keywords: artificial intelligence, big data, data analysis, evaluation, feature extraction, intelligent information processing, medical applications
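A minimal numerical sketch of the β-SEIDR compartments, in which Recovered individuals return to Susceptible with probability β. All rate parameters are illustrative placeholders, not the paper's fitted values:

```python
# Sketch: beta-SEIDR dynamics where Recovered can revert to Susceptible
# with rate beta. All parameters are illustrative, not fitted values.
import numpy as np
from scipy.integrate import odeint

def seidr(y, t, contact, incubation, recovery, death, beta):
    S, E, I, D, R = y
    N = S + E + I + R                      # living population
    dS = -contact * S * I / N + beta * R   # re-susceptibility from R
    dE = contact * S * I / N - incubation * E
    dI = incubation * E - (recovery + death) * I
    dD = death * I
    dR = recovery * I - beta * R
    return [dS, dE, dI, dD, dR]

y0 = [0.99, 0.005, 0.005, 0.0, 0.0]
t = np.linspace(0, 200, 2001)
sol = odeint(seidr, y0, t, args=(0.45, 1 / 3, 1 / 7, 0.001, 0.01))
S, E, I, D, R = sol.T
print(f"peak infected fraction: {I.max():.3f} on day {t[I.argmax()]:.0f}")
```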
19. Smart horticulture as an emerging interdisciplinary field combining novel solutions: Past development, current challenges, and future perspectives
Authors: Moran Zhang, Yutong Han, Dongping Li, Shengyong Xu, Yuan Huang. Horticultural Plant Journal, SCIE CAS CSCD, 2024, No. 6, pp. 1257-1273 (17 pages)
Horticultural products such as fruits, vegetables, and tea offer a range of important nutrients such as protein, carbohydrates, vitamins, and lipids. However, present yield and quality do not meet the demands created by rapid population growth, global climate change, the decline in horticultural practitioners, poor automation, and epidemic diseases such as COVID-19. In this context, smart horticulture is expected to greatly improve land output rates, resource-use efficiency, and productivity, all of which should facilitate the sustainable development of the horticulture industry. Emerging technologies, such as artificial intelligence, big data, the Internet of Things, and cloud computing, play an important role. This paper reviews past developments and current challenges, offering future perspectives for horticultural chain management. We expect that the horticulture industry will benefit from integration with smart technologies. This requires the use of novel solutions to build a new advanced system encompassing smart breeding, smart cultivation, smart transportation, and smart sales. Finally, a new development approach combining precise perception, smart operation, and smart control should be instituted in the horticulture industry. Within 30 years, we expect that the industry will embrace mechanical, automatic, and informational production to transform into a smart industry.
Keywords: Smart horticulture, horticultural crops, emerging technologies, artificial intelligence, big data, the Internet of Things
20. Privacy-Preserving Federated Deep Learning Diagnostic Method for Multi-Stage Diseases
Authors: Jinbo Yang, Hai Huang, Lailai Yin, Jiaxing Qu, Wanjuan Xie. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, No. 6, pp. 3085-3099 (15 pages)
Diagnosing multi-stage diseases typically requires doctors to consider multiple data sources, including clinical symptoms, physical signs, biochemical test results, imaging findings, pathological examination data, and even genetic data. When applying machine learning modeling to predict and diagnose multi-stage diseases, several challenges need to be addressed. Firstly, the model needs to handle multimodal data, as the data used by doctors for diagnosis include image data, natural language data, and structured data. Secondly, the privacy of patients' data needs to be protected, as these data contain the most sensitive and private information. Lastly, considering the practicality of the model, the computational requirements should not be too high. To address these challenges, this paper proposes a privacy-preserving federated deep learning diagnostic method for multi-stage diseases. This method improves the forward and backward propagation processes of deep neural network modeling algorithms and introduces a homomorphic encryption step to design a federated modeling algorithm without the need for an arbiter. It also utilizes dedicated integrated circuits to implement the Paillier algorithm in hardware, providing accelerated support for homomorphic encryption in modeling. Finally, this paper designs and conducts experiments to evaluate the proposed solution. The experimental results show that, in privacy-preserving federated deep learning diagnostic modeling, the method achieves the same modeling performance as ordinary modeling without privacy protection and has a higher modeling speed compared to similar algorithms.
Keywords: Vertical federation, homomorphic encryption, deep neural network, intelligent diagnosis, machine learning and big data
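The homomorphic-encryption step that lets gradient contributions be aggregated without an arbiter can be illustrated with the open-source `phe` (python-paillier) library, standing in for the paper's dedicated hardware Paillier implementation:

```python
# Sketch: additively homomorphic aggregation of gradients with Paillier,
# using the `phe` (python-paillier) library as a software stand-in for
# the paper's hardware implementation on dedicated integrated circuits.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# two parties encrypt their local gradient contributions
grad_a = [0.12, -0.05, 0.33]
grad_b = [0.08, 0.11, -0.02]
enc_a = [public_key.encrypt(g) for g in grad_a]
enc_b = [public_key.encrypt(g) for g in grad_b]

# ciphertexts add without decryption: Enc(x) + Enc(y) = Enc(x + y)
enc_sum = [ca + cb for ca, cb in zip(enc_a, enc_b)]

aggregated = [private_key.decrypt(c) for c in enc_sum]
print("aggregated gradients:", [round(g, 2) for g in aggregated])
```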