Journal Articles: 2,035 articles found
1. Hypothesis of Primary Particles and the Creation of the Big Bang and Other Universes (Cited by: 3)
Author: Slobodan Spremo. Journal of Modern Physics, 2019, No. 13, pp. 1532-1547 (16 pages)
In this paper, we have presented a new approach to the dynamics of hypothetical primary particles, moving at speeds greater than the speed of light in a vacuum within their flat spacetime, which explains why they have not been detected so far. By introducing a new factor, we have linked the space-time coordinates of primary particles within different inertial frames of reference. We have shown that transformations of coordinates for primary particles with respect to different inertial frames of reference, based on this factor, constitute the Lorentz transformations. Utilizing this factor, we have set the foundations of primary particle dynamics. The results obtained for the dynamic properties of these particles are in accordance with the fundamental laws of physics, and we expect them to be experimentally verifiable. Likewise, due to their dynamic properties, we have concluded that the Big Bang could have occurred during a mutual collision of the primary particles, with a sudden speed decrease of some of these particles to a speed slightly greater than the speed of light in a vacuum, which would release an enormous amount of energy. Created in such a manner, our Universe would possess a limit on the maximum speed of energy-mass transfer, the speed of light in a vacuum, which we will show after introducing the dynamic properties of these particles. Similarly, we have concluded that the creation of other universes, possessing a different maximum speed of energy-mass transfer, occurred during the collision of these particles as well, only by means of deceleration of some of these particles to a speed slightly greater than the maximum speed of energy-mass transfer in that particular universe.
Keywords: Big Bang; flat spacetime; Lorentz transformations
2. Reliability evaluation of IGBT power module on electric vehicle using big data (Cited by: 1)
Authors: Li Liu, Lei Tang, Huaping Jiang, Fanyi Wei, Zonghua Li, Changhong Du, Qianlei Peng, Guocheng Lu. Journal of Semiconductors (EI, CAS, CSCD), 2024, No. 5, pp. 50-60 (11 pages)
There are challenges to the reliability evaluation for insulated gate bipolar transistors (IGBT) on electric vehicles, such as junction temperature measurement and limited computational and storage resources. In this paper, a junction temperature estimation approach based on a neural network without additional cost is proposed and the lifetime calculation for IGBT using electric vehicle big data is performed. The direct current (DC) voltage, operation current, switching frequency, negative thermal coefficient thermistor (NTC) temperature and IGBT lifetime are the inputs, and the junction temperature (T_j) is the output. With the rain flow counting method, the classified irregular temperatures are brought into the life model for the failure cycles. The fatigue accumulation method is then used to calculate the IGBT lifetime. To cope with the limited computational and storage resources of electric vehicle controllers, the IGBT lifetime calculation is run on a big data platform. The lifetime is then transmitted wirelessly to electric vehicles as input for the neural network. Thus the junction temperature of IGBT under long-term operating conditions can be accurately estimated. A test platform of the motor controller combined with the vehicle big data server is built for the IGBT accelerated aging test. Subsequently, the IGBT lifetime predictions are derived from the junction temperature estimation by the neural network method and the thermal network method. The experiment shows that the lifetime prediction based on a neural network with big data demonstrates a higher accuracy than that of the thermal network, which improves the reliability evaluation of the system.
Keywords: IGBT; junction temperature; neural network; electric vehicles; big data
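As a rough illustration of the cycle-counting-plus-fatigue-accumulation step this abstract describes, the sketch below applies a Coffin-Manson-Arrhenius life model and Miner's linear damage rule to pre-classified thermal cycles. The model constants and cycle data are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative thermal cycles after rainflow classification:
# (delta_Tj in K, mean Tj in K, number of occurrences per mission profile)
cycles = [(40.0, 350.0, 1.2e5), (60.0, 360.0, 3.0e4), (80.0, 370.0, 5.0e3)]

# Assumed Coffin-Manson-Arrhenius life model: Nf = A * dT^(-n) * exp(Ea / (k * Tm))
A, n = 3.0e5, 5.0          # illustrative fitting constants
EA_OVER_K = 7.0e3          # activation energy over Boltzmann constant, in K (assumed)

def cycles_to_failure(d_t, t_mean):
    """Predicted number of cycles to failure for one cycle class."""
    return A * d_t ** (-n) * np.exp(EA_OVER_K / t_mean)

# Miner's rule: accumulated damage is the sum of n_i / Nf_i over all cycle classes.
damage = sum(n_i / cycles_to_failure(d_t, t_m) for d_t, t_m, n_i in cycles)
print(f"accumulated damage per mission profile: {damage:.3e}")
print(f"estimated lifetime: {1.0 / damage:.1f} mission profiles")
```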
3. Hadoop-based secure storage solution for big data in cloud computing environment (Cited by: 1)
Authors: Shaopeng Guan, Conghui Zhang, Yilin Wang, Wenqing Liu. Digital Communications and Networks (SCIE, CSCD), 2024, No. 1, pp. 227-236 (10 pages)
In order to address the problems of the single encryption algorithm, such as low encryption efficiency and unreliable metadata for static data storage of big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine HDFS federation and HDFS high-availability mechanisms, and use the Zookeeper distributed coordination mechanism to coordinate each node to achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data, and adopt a homomorphic encryption algorithm to encrypt data that needs to be calculated. To accelerate the encryption, we adopt the dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize the fault tolerance of the server. The improved encryption algorithm integrates the dual-channel storage mode, and the encryption storage efficiency improves by 27.6% on average.
Keywords: big data security; data encryption; Hadoop; parallel encrypted storage; Zookeeper
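A minimal sketch of splitting block encryption across two worker threads, in the spirit of the dual-thread encryption mode mentioned above. Fernet symmetric encryption stands in for the paper's improved ECC and homomorphic schemes, which are not reproduced here; block size and payload are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_block(block: bytes) -> bytes:
    # Each worker thread encrypts one data block independently.
    return cipher.encrypt(block)

# Illustrative plaintext split into fixed-size blocks (HDFS-style chunks, size assumed).
data = b"example payload " * 4096
block_size = 16 * 1024
blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]

with ThreadPoolExecutor(max_workers=2) as pool:   # two encryption threads
    encrypted_blocks = list(pool.map(encrypt_block, blocks))

print(f"{len(encrypted_blocks)} blocks encrypted")
```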
4. Study of primordial deuterium abundance in Big Bang nucleosynthesis (Cited by: 1)
Authors: Zhi-Lin Shen, Jian-Jun He. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 208-215 (8 pages)
Big Bang nucleosynthesis (BBN) theory predicts the primordial abundances of the light elements ^2H (referred to as deuterium, or D for short), ^3He, ^4He, and ^7Li produced in the early universe. Among these, deuterium, the first nuclide produced by BBN, is a key primordial material for subsequent reactions. To date, the uncertainty in predicted deuterium abundance (D/H) remains larger than the observational precision. In this study, the Monte Carlo simulation code PRIMAT was used to investigate the sensitivity of 11 important BBN reactions to deuterium abundance. We found that the reaction rate uncertainties of the four reactions d(d,n)^3He, d(d,p)t, d(p,γ)^3He, and p(n,γ)d had the largest influence on the calculated D/H uncertainty. Currently, the calculated D/H uncertainty cannot reach observational precision even with the recent LUNA precise d(p,γ)^3He rate. From the nuclear physics aspect, there is still room to largely reduce the reaction-rate uncertainties; hence, further measurements of the important reactions involved in BBN are still necessary. A photodisintegration experiment will be conducted at the Shanghai Laser Electron Gamma Source Facility to precisely study the deuterium production reaction of p(n,γ)d.
Keywords: Big Bang nucleosynthesis; abundance of deuterium; reaction cross section; reaction rate; Monte Carlo method
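A toy Monte Carlo propagation of reaction-rate uncertainties into a predicted abundance, illustrating the kind of sensitivity study the abstract describes. This is not PRIMAT: the linearized sensitivity coefficients, 1-sigma rate uncertainties, and central D/H value below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Assumed logarithmic sensitivities s_i = d ln(D/H) / d ln(rate_i) and fractional rate errors.
reactions = {
    "d(d,n)3He": {"sens": -0.55, "sigma": 0.01},
    "d(d,p)t":   {"sens": -0.45, "sigma": 0.01},
    "d(p,g)3He": {"sens": -0.30, "sigma": 0.02},
    "p(n,g)d":   {"sens": +0.20, "sigma": 0.01},
}
dh_central = 2.5e-5  # illustrative central D/H value

# Sample each rate as a lognormal factor and combine via the linearized sensitivities.
log_shift = np.zeros(N)
for r in reactions.values():
    factors = rng.lognormal(mean=0.0, sigma=r["sigma"], size=N)
    log_shift += r["sens"] * np.log(factors)

dh_samples = dh_central * np.exp(log_shift)
print(f"D/H = {dh_samples.mean():.3e} +/- {dh_samples.std():.1e}")
```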
5. Predictive Value of the BIG Score for Early Brain Function in Children with Moderate-to-Severe Traumatic Brain Injury Undergoing Decompressive Craniectomy
Authors: 徐静静 (Xu Jingjing), 党红星 (Dang Hongxing). 《临床医学进展》 (Advances in Clinical Medicine), 2024, No. 4, pp. 2631-2640 (10 pages)
Objective: To investigate the predictive value of the BIG score (composed of the Glasgow Coma Scale score, international normalized ratio, and base excess) for early brain-function outcome in children with moderate-to-severe traumatic brain injury (TBI) undergoing decompressive craniectomy (DC). Methods: All children with moderate-to-severe TBI treated with DC at our hospital from March 2014 to July 2023 were retrospectively analyzed. Using the Pediatric Cerebral Performance Category (PCPC) at discharge as the outcome, patients were divided into a good-outcome group (PCPC 1-2) and a poor-outcome group (PCPC 3-6). Clinical information was extracted from medical records, and logistic regression was used to evaluate the predictive value of the BIG score. Results: A total of 55 children with moderate-to-severe TBI treated with DC were included; 25 had good brain function at discharge and 30 had poor outcomes (including 9 deaths). A high BIG score at admission (p < 0.001), poor pupillary light reflex (p = 0.027), hemorrhagic shock (p = 0.042), multiple trauma (p = 0.043), cerebral edema (p = 0.007), hyperglycemia (p = 0.042), and hyperlactatemia (p = 0.029) were all associated with poor brain function at discharge. Logistic regression showed that a high BIG score at admission was an independent risk factor for poor brain function at discharge. ROC curve analysis identified an optimal BIG score threshold of 17.5, which predicted poor outcome with a sensitivity of 66.7% and a specificity of 88.0%. Conclusion: Overall, 54.5% of children with moderate-to-severe TBI who underwent DC had poor brain function at discharge. The BIG score at admission can predict early brain-function outcome at discharge in these children, with relatively high sensitivity and specificity.
Keywords: traumatic brain injury; decompressive craniectomy; BIG score; children; prognosis
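A minimal sketch of the analysis pipeline described above: logistic regression of outcome on an admission score, followed by ROC analysis with the Youden index to pick an optimal cut-off. The data here are synthetic; the study's patient data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
n = 55
big_score = rng.normal(loc=16, scale=5, size=n).clip(0, 30)
# Synthetic outcome: a higher score raises the probability of poor outcome (label 1).
poor_outcome = (rng.random(n) < 1 / (1 + np.exp(-(big_score - 17) / 3))).astype(int)

# Logistic regression of outcome on the admission score.
X = big_score.reshape(-1, 1)
model = LogisticRegression().fit(X, poor_outcome)
print("odds ratio per score point:", round(float(np.exp(model.coef_[0][0])), 2))

# ROC analysis on the raw score; Youden index J = sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(poor_outcome, big_score)
best = np.argmax(tpr - fpr)
print(f"AUC = {roc_auc_score(poor_outcome, big_score):.2f}")
print(f"optimal cut-off ~ {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```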
6. Big Brothers in Shanghai Enterprises
Author: Li Zhen. China's Foreign Trade, 2008, No. 16 (1 page)
In recent years, the state-owned economy in Shanghai has been going through steady growth. At the same time, as Shanghai's economic reform deepens, the growth of non-public sectors has also maintained a robust momentum. Specifically, the proportion of the state-owned economy in Shanghai has decreased from 55% in 2000 to 47.9% in 2006, while the proportion of the non-public sectors has increased from 28.6% in 2000 to 44.1% in 2006. Based on the ratio decrease of Shanghai's state-owned economy in the who...
Keywords: Big Brothers in Shanghai Enterprises
7. The Worlds on the Other Side of the Big Bang
Authors: Avas Khugaev, Eugeniya Bibaeva. Journal of Applied Mathematics and Physics, 2023, No. 1, pp. 276-302 (27 pages)
Taking the Big Bang as an established fact, the question inevitably arises about what exactly caused it, in what environment it could have happened and what happened before it. The developed approach allows us to shed light on many raised questions and to establish what universal laws and structures formed what happened before the Big Bang, to understand its cause and the dynamic processes that led to it. This required a radical revision of many views, giving them a new meaning and content. This approach has led to a consistent and conceptually new understanding of these phenomena, which allowed us to correctly formulate questions to which there are still no clear answers. Based on this formulation of the problem, we came to new ideas about the nature of Dark energy, Dark matter and the region of their birth, and formulated and described the mechanism of the formation of worlds and their hierarchy on the other side of the Big Bang and the mechanism of this explosion itself. The Primary Parent Particle was introduced into the concept, which was the basis of everything and is the carrier of the fundamental Primary space introduced by us, which had at least two phase states. This particle consists of Beginnings united in the form of Borromeo rings. This made it possible to calculate the structure and primary spectrum of elementary particles that arose on the other side of the Big Bang, the mechanisms of their formation and the resulting fundamental interactions that lead to the existence of vortices before the Big Bang; the mechanisms of the birth of multiple universes and much more are also considered. The concept of the “cosmic genetic code” is introduced, and the characteristics and mechanism of its formation before the Big Bang are presented.
Keywords: “Dirac Sea”; Big Bang; Primary space; Primary Parent Particle; “swaddled triads”; materiality; worlds; DNA of “seeds of Creation”; Borromeo rings; “clumps” of Dark energy
8. Another Big Step Forward in the Reform of China's Banking System
China's Foreign Trade, 1999, No. 6, pp. 15-16 (2 pages)
Keywords: Another Big Step Forward in the Reform of China's Banking System
9. Research on Tensor Multi-Clustering Distributed Incremental Updating Method for Big Data
Authors: Hongjun Zhang, Zeyu Zhang, Yilong Ruan, Hao Ye, Peng Li, Desheng Shi. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1409-1432 (24 pages)
The scale and complexity of big data are growing continuously, posing severe challenges to traditional data processing methods, especially in the field of clustering analysis. To address this issue, this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update (BDTMCDIncreUpdate), which combines distributed computing, storage technology, and incremental update techniques to provide an efficient and effective means for clustering analysis. Firstly, the original dataset is divided into multiple sub-blocks, and distributed computing resources are utilized to process the sub-blocks in parallel, enhancing efficiency. Then, initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results. When new data arrives, incremental update technology is employed to update the core tensor and factor matrix, ensuring that the clustering model can adapt to changes in data. Finally, by combining the updated core tensor and factor matrix with historical computational results, refined clustering results are obtained, achieving real-time adaptation to dynamic data. Through experimental simulation on the Aminer dataset, the BDTMCDIncreUpdate method has demonstrated outstanding performance in terms of accuracy (ACC) and normalized mutual information (NMI) metrics, achieving an accuracy rate of 90% and an NMI score of 0.85, which outperforms existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios. Therefore, the BDTMCDIncreUpdate method offers an innovative solution to the field of big data analysis, integrating distributed computing, incremental updates, and tensor-based multi-clustering techniques. It not only improves the efficiency and scalability in processing large-scale high-dimensional datasets but also has been validated for its effectiveness and accuracy through experiments. This method shows great potential in real-world applications where dynamic data growth is common, and it is of significant importance for advancing the development of data analysis technology.
Keywords: tensor; incremental update; distributed; clustering processing; big data
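A much-simplified stand-in for the incremental-update idea in this abstract: cluster an initial block, then fold newly arriving blocks into the existing centroids by weighted averaging instead of re-clustering from scratch. The paper's tensor machinery (core tensor and factor matrices) is not reproduced here; data, K, and initialization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def assign(points, centroids):
    # Index of the nearest centroid for each point.
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def incremental_update(centroids, counts, new_block):
    """Fold one new data block into existing centroids (weighted running mean)."""
    labels = assign(new_block, centroids)
    for k in range(len(centroids)):
        pts = new_block[labels == k]
        if len(pts):
            total = counts[k] + len(pts)
            centroids[k] = (centroids[k] * counts[k] + pts.sum(axis=0)) / total
            counts[k] = total
    return centroids, counts

# Initial block: pick K random points as starting centroids (illustrative initialization).
K, dim = 3, 4
initial = rng.normal(size=(200, dim))
centroids = initial[rng.choice(len(initial), K, replace=False)].copy()
counts = np.zeros(K)
centroids, counts = incremental_update(centroids, counts, initial)

# Stream of new blocks arriving over time.
for _ in range(5):
    block = rng.normal(size=(100, dim))
    centroids, counts = incremental_update(centroids, counts, block)

print("points per cluster:", counts.astype(int))
```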
10. Omics big data for crop improvement: Opportunities and challenges
Authors: Naresh Vasupalli, Javaid Akhter Bhat, Priyanka Jain, Tanu Sri, Md Aminul Islam, S. M. Shivaraj, Sunil Kumar Singh, Rupesh Deshmukh, Humira Sonah, Xinchun Lin. The Crop Journal (SCIE, CSCD), 2024, No. 6, pp. 1517-1532 (16 pages)
The application of advanced omics technologies in plant science has generated an enormous dataset of sequences, expression profiles, and phenotypic traits, collectively termed “big data” for their significant volume, diversity, and rapid pace of accumulation. Despite extensive data generation, the process of analyzing and interpreting big data remains complex and challenging. Big data analyses will help identify genes and uncover different mechanisms controlling various agronomic traits in crop plants. The insights gained from big data will assist scientists in developing strategies for crop improvement. Although the big data generated from crop plants opens a world of possibilities, realizing its full potential requires enhancement in computational capacity and advances in machine learning (ML) or deep learning (DL) approaches. The present review discusses the applications of genomics, transcriptomics, proteomics, metabolomics, epigenetics, and phenomics “big data” in crop improvement. Furthermore, we discuss the potential application of artificial intelligence to genomic selection. Additionally, the article outlines the crucial role of big data in precise genetic engineering and understanding plant stress tolerance. We also highlight the challenges associated with big data storage, analyses, visualization and sharing, and emphasize the need for robust solutions to harness these invaluable resources for crop improvement.
Keywords: big data; GWAS; WGRS; qQTL; TWAS; systems biology; CRISPR/Cas9
11. Leveraging the potential of big genomic and phenotypic data for genome-wide association mapping in wheat
Authors: Moritz Lell, Yusheng Zhao, Jochen C. Reif. The Crop Journal (SCIE, CSCD), 2024, No. 3, pp. 803-813 (11 pages)
Genome-wide association mapping studies (GWAS) based on Big Data are a potential approach to improve marker-assisted selection in plant breeding. The number of available phenotypic and genomic data sets in which medium-sized populations of several hundred individuals have been studied is rapidly increasing. Combining these data and using them in GWAS could increase both the power of QTL discovery and the accuracy of estimation of underlying genetic effects, but is hindered by data heterogeneity and lack of interoperability. In this study, we used genomic and phenotypic data sets focusing on Central European winter wheat populations evaluated for heading date. We explored strategies for integrating these data and subsequently the resulting potential for GWAS. Establishing interoperability between data sets was greatly aided by some overlapping genotypes and a linear relationship between the different phenotyping protocols, resulting in high-quality integrated phenotypic data. In this context, genomic prediction proved to be a suitable tool to study the relevance of interactions between genotypes and experimental series, which was low in our case. Contrary to expectations, fewer associations between markers and traits were found in the larger combined data than in the individual experimental series. However, the predictive power based on the marker-trait associations of the integrated data set was higher across data sets. Therefore, the results show that the integration of medium-sized to Big Data is an approach to increase the power to detect QTL in GWAS. The results encourage further efforts to standardize and share data in the plant breeding community.
Keywords: Big Data; genome-wide association study; data integration; genomic prediction; wheat
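A bare-bones marker-trait association scan of the kind GWAS builds on: regress the phenotype on each marker separately and record the p-value. The genotypes and phenotype below are synthetic; real GWAS (including this study) additionally corrects for population structure and kinship, which this sketch omits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_ind, n_markers = 300, 500

# Synthetic biallelic markers coded 0/1/2 and a phenotype driven by marker 42.
genotypes = rng.integers(0, 3, size=(n_ind, n_markers)).astype(float)
phenotype = 2.0 * genotypes[:, 42] + rng.normal(scale=3.0, size=n_ind)

# Single-marker linear regression p-values.
pvals = np.array([
    stats.linregress(genotypes[:, j], phenotype).pvalue
    for j in range(n_markers)
])

# Bonferroni threshold as a simple multiple-testing correction.
threshold = 0.05 / n_markers
hits = np.where(pvals < threshold)[0]
print("significant markers:", hits)
```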
12. Big Data Access Control Mechanism Based on Two-Layer Permission Decision Structure
Authors: Aodi Liu, Na Wang, Xuehui Du, Dibin Shan, Xiangyu Wu, Wenjuan Wang. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 1705-1726 (22 pages)
Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing the access control policy. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model. Business attributes are introduced in the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute the general permission decision algorithm based on logical calculation and the business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network, respectively. The general permission decision algorithm is used to implement accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network is used to calculate the similarity of the business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, the complex and diverse big data access control management requirements can be satisfied by considering the security and availability of resources. Experimental results show that the proposed mechanism is effective and reliable. In summary, it can efficiently support the secure sharing of big data resources.
Keywords: big data; access control; data security; BiLSTM
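A toy two-layer permission decision in the spirit of this abstract: layer 1 is an exact attribute-based (ABAC-style) rule check, layer 2 is a fuzzy check that compares business attributes by similarity. The paper scores that similarity with a BiLSTM; a fixed embedding table and cosine similarity stand in for it here, and all attribute names, embeddings, and the threshold are illustrative assumptions.

```python
import numpy as np

def general_decision(subject: dict, resource: dict) -> bool:
    # Layer 1: exact logical constraints on inherent attributes.
    return (subject["department"] == resource["owner_department"]
            and subject["clearance"] >= resource["sensitivity"])

# Assumed embeddings of business-attribute phrases (stand-in for learned BiLSTM outputs).
EMBEDDINGS = {
    "quarterly sales report": np.array([0.9, 0.1, 0.2]),
    "sales analytics":        np.array([0.8, 0.2, 0.3]),
    "payroll records":        np.array([0.1, 0.9, 0.1]),
}

def business_decision(subject_task: str, resource_tag: str, threshold: float = 0.9) -> bool:
    # Layer 2: fuzzy decision based on business-attribute similarity.
    a, b = EMBEDDINGS[subject_task], EMBEDDINGS[resource_tag]
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold

subject = {"department": "sales", "clearance": 3, "task": "sales analytics"}
resource = {"owner_department": "sales", "sensitivity": 2, "tag": "quarterly sales report"}

allowed = (general_decision(subject, resource)
           and business_decision(subject["task"], resource["tag"]))
print("access granted" if allowed else "access denied")
```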
13. Exploring impacts of COVID-19 on spatial and temporal patterns of visitors to Canadian Rocky Mountain National Parks from social media big data
Authors: Dehui Christina Geng, Amy Li, Jieyu Zhang, Howie W. Harshaw, Christopher Gaston, Wanli Wu, Guangyu Wang. Journal of Forestry Research (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 13-33 (21 pages)
COVID-19 posed challenges for global tourism management. Changes in visitor temporal and spatial patterns and their associated determinants pre- and peri-pandemic in Canadian Rocky Mountain National Parks are analyzed. Data was collected through social media programming and analyzed using spatiotemporal analysis and a geographically weighted regression (GWR) model. Results highlight that COVID-19 significantly changed park visitation patterns. Visitors tended to explore more remote areas peri-pandemic. The GWR model also indicated distance to nearby trails was a significant influence on visitor density. Our results indicate that the pandemic influenced tourism temporal and spatial imbalance. This research presents a novel approach using combined social media big data which can be extended to the field of tourism management, and has important implications to manage visitor patterns and to allocate resources efficiently to satisfy multiple objectives of park management.
Keywords: tourism management; social media big data; national parks; COVID-19; geographically weighted regression
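A compact illustration of geographically weighted regression (GWR) as referenced above: at each location, fit a weighted least-squares regression in which nearby observations receive larger weights via a Gaussian kernel. The data and the bandwidth are synthetic assumptions; production GWR tooling (e.g., the mgwr package) also selects the bandwidth automatically.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))            # visitor-cell locations
dist_to_trail = rng.uniform(0, 5, size=n)           # explanatory variable
# Synthetic visitor density whose sensitivity to trail distance varies over space.
local_slope = -1.0 - 0.2 * coords[:, 0]
density = 10 + local_slope * dist_to_trail + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), dist_to_trail])    # intercept + predictor
bandwidth = 2.0                                     # assumed Gaussian kernel bandwidth

def gwr_coefficients(i: int) -> np.ndarray:
    """Local weighted least-squares fit centred on observation i."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ density)

betas = np.array([gwr_coefficients(i) for i in range(n)])
print("local slope range:", betas[:, 1].min().round(2), "to", betas[:, 1].max().round(2))
```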
14. An Innovative K-Anonymity Privacy-Preserving Algorithm to Improve Data Availability in the Context of Big Data
Authors: Linlin Yuan, Tiantian Zhang, Yuling Chen, Yuxiang Yang, Huang Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 1561-1579 (19 pages)
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time, privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective and low-computational-complexity privacy-preserving algorithm that can safeguard users' privacy by anonymizing big data. However, the algorithm currently suffers from the problem of focusing only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identification attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data, while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identification attributes have different impacts on sensitive attributes, we set weights for each quasi-identification attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the loss of information smaller than in the original table while guaranteeing privacy based on common artificial intelligence algorithms, i.e., the greedy algorithm and the 2-means clustering algorithm. Second, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial center of mass. Meanwhile, we design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss.
Keywords: blockchain; big data; K-anonymity; 2-means clustering; greedy algorithm; mean-center method
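A small sketch of two building blocks mentioned above: checking k-anonymity of a generalized table and scoring it with a weighted, range-based information loss over the quasi-identifiers. The records, generalization rules, and attribute weights are illustrative assumptions, not the paper's exact loss function.

```python
from collections import Counter

# Records as (age, zip, disease); age and zip are quasi-identifiers, disease is sensitive.
original = [(34, "47677", "flu"), (36, "47678", "flu"),
            (51, "47905", "cancer"), (55, "47909", "cold")]

def generalize(rec):
    age, zip_code, disease = rec
    # Generalize age to a decade band and zip to a 3-digit prefix.
    band = (age // 10) * 10
    return (f"{band}-{band + 9}", zip_code[:3] + "**", disease)

def is_k_anonymous(table, k):
    # Equivalence classes over the quasi-identifiers must each contain >= k records.
    groups = Counter((age, z) for age, z, _ in table)
    return all(count >= k for count in groups.values())

def information_loss(weights=(0.7, 0.3)):
    # Weighted loss: fraction of each quasi-identifier's detail removed by generalization,
    # weighted by the attribute's assumed influence on the sensitive attribute.
    ages = [r[0] for r in original]
    age_loss = 10 / (max(ages) - min(ages) + 1)   # decade band width vs. original age spread
    zip_loss = 2 / 5                              # masked digits / total zip digits
    return weights[0] * age_loss + weights[1] * zip_loss

anonymized = [generalize(r) for r in original]
print("2-anonymous:", is_k_anonymous(anonymized, k=2))
print("weighted information loss:", round(information_loss(), 3))
```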
15. Big Data Application Simulation Platform Design for Onboard Distributed Processing of LEO Mega-Constellation Networks
Authors: Zhang Zhikai, Gu Shushi, Zhang Qinyu, Xue Jiayin. China Communications (SCIE, CSCD), 2024, No. 7, pp. 334-345 (12 pages)
Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning and other big data services desirably need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable and low-latency links between worker nodes. However, LMCNs with high-dynamic nodes and long-distance links cannot provide the above conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable that can simulate the network environment of LMCNs and put BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation for thousands of satellite nodes, which are mapped as application nodes through software defined network (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency which fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs deteriorate the computing and storage job throughput, which can be alleviated by the utilization of erasure codes and data flow scheduling of worker nodes.
Keywords: big data application; Hadoop; LEO mega-constellation; multidimensional simulation; onboard distributed processing
16. Standard Framework Construction of Technology and Equipment for Big Data in Crop Phenomics
Authors: Weiliang Wen, Shenghao Gu, Ying Zhang, Wanneng Yang, Xinyu Guo. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 11, pp. 175-184 (10 pages)
Crop phenomics has rapidly progressed in recent years due to the growing need for crop functional genomics, digital breeding, and smart cultivation. Despite this advancement, the lack of standards for the creation and usage of crop phenomics technology and equipment has become a bottleneck, limiting the industry's high-quality development. This paper begins with an overview of the crop phenotyping industry and presents an industrial mapping of technology and equipment for big data in crop phenomics. It analyzes the necessity and current state of constructing a standard framework for crop phenotyping. Furthermore, this paper proposes the intended organizational structure and goals of the standard framework. It details the essentials of the standard framework in the research and development of hardware and equipment, data acquisition, and the storage and management of crop phenotyping data. Finally, it discusses promoting the construction and evaluation of the standard framework, aiming to provide ideas for developing a high-quality standard framework for crop phenotyping.
Keywords: crop phenomics; big data; phenotyping technology and equipment; standard framework; industrial mapping
17. Data-Driven Decision-Making for Bank Target Marketing Using Supervised Learning Classifiers on Imbalanced Big Data
Authors: Fahim Nasir, Abdulghani Ali Ahmed, Mehmet Sabir Kiraz, Iryna Yevseyeva, Mubarak Saif. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1703-1728 (26 pages)
Integrating machine learning and data mining is crucial for processing big data and extracting valuable insights to enhance decision-making. However, imbalanced target variables within big data present technical challenges that hinder the performance of supervised learning classifiers on key evaluation metrics, limiting their overall effectiveness. This study presents a comprehensive review of both common and recently developed supervised learning classifiers (SLCs) and evaluates their performance in data-driven decision-making. The evaluation uses various metrics, with a particular focus on the harmonic mean score (F-1 score) on an imbalanced real-world bank target marketing dataset. The findings indicate that grid-search random forest and random-search random forest excel in precision and area under the curve, while Extreme Gradient Boosting (XGBoost) outperforms other traditional classifiers in terms of F-1 score. Employing oversampling methods to address the imbalanced data shows significant performance improvement in XGBoost, delivering superior results across all metrics, particularly when using the SMOTE variant known as the BorderlineSMOTE2 technique. The study identifies several key factors for effectively addressing the challenges of supervised learning with imbalanced datasets. These factors include the importance of selecting appropriate datasets for training and testing, choosing the right classifiers, employing effective techniques for processing and handling imbalanced datasets, and identifying suitable metrics for performance evaluation. Additionally, these factors entail the utilisation of effective exploratory data analysis in conjunction with visualisation techniques to yield insights conducive to data-driven decision-making.
Keywords: big data; machine learning; data mining; data visualization; label encoding; imbalanced dataset; sampling techniques
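A short sketch of the oversampling-plus-boosting pipeline highlighted in this abstract: BorderlineSMOTE (the "borderline-2" variant) to rebalance the training data, then an XGBoost classifier evaluated with the F1 score. Synthetic data from make_classification stands in for the bank marketing dataset, and the hyperparameters are arbitrary; assumes scikit-learn, imbalanced-learn and xgboost are installed.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from imblearn.over_sampling import BorderlineSMOTE
from xgboost import XGBClassifier

# Imbalanced synthetic dataset (10% positive class) as a stand-in for the real data.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split to avoid leaking synthetic samples into the test set.
sampler = BorderlineSMOTE(kind="borderline-2", random_state=0)
X_res, y_res = sampler.fit_resample(X_train, y_train)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_res, y_res)

print("F1 on held-out test set:", round(f1_score(y_test, model.predict(X_test)), 3))
```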
18. Evaluation of a software positioning tool to support SMEs in adoption of big data analytics
Authors: Matthew Willetts, Anthony S. Atkins. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2024, No. 1, pp. 13-24 (12 pages)
Big data analytics has been widely adopted by large companies to achieve measurable benefits including increased profitability, customer demand forecasting, cheaper development of products, and improved stock control. Small and medium sized enterprises (SMEs) are the backbone of the global economy, comprising 90% of businesses worldwide. However, only 10% of SMEs have adopted big data analytics despite the competitive advantage they could achieve. Previous research has analysed the barriers to adoption, and a strategic framework has been developed to help SMEs adopt big data analytics. The framework was converted into a scoring tool which has been applied to multiple case studies of SMEs in the UK. This paper documents the process of evaluating the framework based on structured feedback from a focus group composed of experienced practitioners. The results of the evaluation are presented with a discussion, and the paper concludes with recommendations to improve the scoring tool based on the proposed framework. The research demonstrates that this positioning tool is beneficial for SMEs to achieve competitive advantages by increasing the application of business intelligence and big data analytics.
Keywords: big data analytics; evaluation; small and medium sized enterprises (SMEs); strategic framework
19. Big data challenge for monitoring quality in higher education institutions using business intelligence dashboards
Authors: Ali Sorour, Anthony S. Atkins. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2024, No. 1, pp. 25-41 (17 pages)
As big data becomes an apparent challenge to handle when building a business intelligence (BI) system, there is a motivation to handle this challenging issue in higher education institutions (HEIs). Monitoring quality in HEIs encompasses handling huge amounts of data coming from different sources. This paper reviews big data and analyses the cases from the literature regarding quality assurance (QA) in HEIs. It also outlines a framework that can address the big data challenge in HEIs to handle QA monitoring using BI dashboards, and a prototype dashboard is presented in this paper. The dashboard was developed using a utilisation tool to monitor QA in HEIs to provide visual representations of big data. The prototype dashboard enables stakeholders to monitor compliance with QA standards while addressing the big data challenge associated with the substantial volume of data managed by HEIs' QA systems. This paper also outlines how the developed system integrates big data from social media into the monitoring dashboard.
Keywords: big data; business intelligence (BI); dashboards; higher education (HE); quality assurance (QA); social media
20. Tech Consumption Trends: Big in 2008 -- Other: VoIP, solid-state drives, digital broadcasting, green electronics
《数码》 (Digital), 2008, No. 1, pp. 66-69 (4 pages)
Solid-state drives are gradually winning favor; online office work; Skype calls.
Keywords: VoIP; digital broadcasting; trends; electronic products; green and environmentally friendly; hard drives; big; consumption