L2 reading is not only an important channel for people to obtain information and knowledge, but also the main way for people to learn a foreign language. Reading information processing can be divided into controlled processing and automatic processing. Controlled information processing is a conscious, resource-intensive processing mode, while automatic information processing is an unconscious, automatic processing mode. This study investigates the characteristics and interactivity of controlled and automatic information processing in L2 reading, and explores the roles of controlled and automatic information processing strategies in improving L2 reading ability. The findings are as follows: (a) controlled and automatic information processing are interactive in L2 reading; and (b) the use of controlled and automatic information processing strategies benefits the reading ability of L2 learners. This study has theoretical and practical value for improving the efficiency of L2 reading teaching and learning.
The advent of the big data era has presented unprecedented challenges to remedies for personal information infringement in areas such as damage assessment, proof of causation, determination of illegality, fault assessment, and liability. Traditional tort law cannot respond robustly to these challenges, which severely hinders human rights protection in the digital society. The dynamic system theory represents a third path between fixed constitutive elements and general clauses. It overcomes the rigidity of the "all-or-nothing" legal-effect evaluation mechanism of the "element-effect" model while avoiding the uncertainty of the general clause model, and it can effectively enhance the flexibility of the legal system in responding to social changes. In light of this, it is necessary to construct a dynamic foundational evaluation framework for personal information infringement under the guidance of the dynamic system theory. By relying on the dynamic interplay of the foundational evaluation elements, this framework can achieve a flexible evaluation of both the constitutive elements and the legal effects of liability for personal information infringement. Through this approach, the crisis of personal information infringement in the era of big data can be mitigated, and the realization of personal information rights as digital human rights can be promoted.
Objective To study how to improve and perfect the information platform and processing mechanism for drug shortages in China. Methods Relevant policies were retrieved from the official websites of the FDA, the European Medicines Agency (EMA), Health Canada (HC), and the National Health Commission, and the experience of the United States, the European Union, and Canada in building drug shortage information platforms and processing mechanisms was summarized for reference in China. Results and Conclusion China has initially established a processing mechanism for drug shortages, but platform construction should be improved, and the disclosure of drug shortage information varies from province to province. China should improve its drug shortage information platform, strengthen information disclosure and communication, enrich the processing tools and measures available after a drug shortage occurs, and strengthen cooperation with relevant associations and other non-governmental bodies.
The presence of numerous uncertainties in hybrid decision information systems (HDISs) renders attribute reduction a formidable task. Currently available attribute reduction algorithms, including those based on Pawlak attribute importance, the Skowron discernibility matrix, and information entropy, struggle to manage multiple uncertainties in HDISs simultaneously, such as the precise measurement of disparities between nominal attribute values, and attributes with fuzzy boundaries or abnormal values. To address these issues, this paper studies attribute reduction within HDISs. First, a novel metric based on the decision attribute, named the supervised distance, is introduced; it effectively quantifies the differences between nominal attribute values. Then, based on this metric, a novel fuzzy relationship is defined from the perspective of "feedback on parity of attribute values to attribute sets", which serves as a valuable tool for handling abnormal attribute values. Furthermore, leveraging this fuzzy relationship, the fuzzy conditional information entropy is defined to address the challenges posed by fuzzy attributes: it quantifies the uncertainty associated with fuzzy attribute values, providing a robust framework for handling fuzzy information in hybrid information systems. Finally, an attribute reduction algorithm based on the fuzzy conditional information entropy is presented. Experimental results on 12 datasets show that the average reduction rate of the proposed algorithm reaches 84.04%, and that classification accuracy improves by 3.91% compared with the original datasets and by an average of 11.25% compared with nine state-of-the-art reduction algorithms. These results indicate that the algorithm is highly effective in managing the intricate uncertainties inherent in hybrid data.
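The entropy-driven reduction loop summarized above can be sketched in a generic form. The Gaussian similarity kernel, the entropy expression, and the stopping threshold below are common textbook choices standing in for the paper's supervised distance and its exact fuzzy conditional information entropy, which the abstract does not specify:

```python
import numpy as np

def fuzzy_similarity(col, sigma=0.2):
    # Gaussian-kernel fuzzy similarity between numeric attribute values.
    d = np.abs(col[:, None] - col[None, :])
    return np.exp(-(d / sigma) ** 2)

def conditional_entropy(R, y):
    # One common form of fuzzy conditional entropy H(D|B): for each sample,
    # compare the fuzzy cardinality of its neighborhood with the part that
    # shares its decision label (not the paper's exact definition).
    same = (y[:, None] == y[None, :]).astype(float)
    num = (R * same).sum(axis=1)
    den = R.sum(axis=1)
    return -np.mean(np.log(num / den))

def greedy_reduct(X, y, eps=1e-3):
    # Forward greedy selection: add the attribute that lowers the
    # conditional entropy the most, stop when the gain falls below eps.
    remaining, reduct = list(range(X.shape[1])), []
    R = np.ones((len(y), len(y)))          # neutral relation to start
    h_prev = conditional_entropy(R, y)
    while remaining:
        scores = []
        for a in remaining:
            Ra = np.minimum(R, fuzzy_similarity(X[:, a]))
            scores.append((conditional_entropy(Ra, y), a))
        h_new, best = min(scores)
        if h_prev - h_new < eps:
            break
        R = np.minimum(R, fuzzy_similarity(X[:, best]))
        reduct.append(best)
        remaining.remove(best)
        h_prev = h_new
    return reduct
```

On a toy numeric table, the loop selects the one attribute that cleanly separates the decision classes and then stops, since the second attribute adds no measurable gain.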
This paper uses Gagne's information processing theory to analyze the listening process, so as to provide a pedagogical model that helps L2 learners solve listening problems and to cast some light on listening teaching.
Vocabulary acquisition is an intricate process closely related to memory. In cognitive psychology, a large number of studies on the memory system have been conducted based on information processing theory, placing great value on second language learners' cognitive processes. This study probes into second language vocabulary acquisition from the perspective of information processing theory, in the hope of helping learners acquire vocabulary more scientifically and efficiently.
The "Cantonese Cuisine Master" project is an important policy proposed by China to inherit Cantonese cuisine culture, promote employment, and achieve targeted poverty reduction and rural revitalization. Confronted with demands for more diverse education, the education system faces an essential opportunity and task: how to construct high-quality online courses and pursue higher-quality "Cantonese Cuisine Master" projects in line with the new era. Based on instructional media theory and information processing theory, this paper clarifies the demand, dilemmas, and development strategy of online course construction for culinary majors, and explores its construction and practice through the example of "A Bite of Teochew Cuisine," a Guangdong first-class course.
The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper fills this gap by introducing the concept of "data components." It proposes a graph-theoretic representation model that gives a clear mathematical definition and demonstrates the advantages of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
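As a concrete anchor for the information measurement model, Shannon entropy over a component's value distribution can be computed in a few lines. This is the generic formula, not the paper's exact definition for data components:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical value distribution of
    one data component -- a generic stand-in for the paper's measure."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant-valued component carries zero entropy, while a component uniform over 2^k distinct values carries k bits, which is the sense in which richer components are "more informative."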
Hyperspectral images typically have high spectral resolution but low spatial resolution, which limits the reliability and accuracy of subsequent applications such as remote sensing classification and mineral identification. Traditional methods based on deep convolutional neural networks extract and fuse spectral and spatial features indiscriminately, making it difficult to exploit the differentiated information across adjacent spectral channels. We therefore propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address these problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. On the CAVE test set at a scaling factor of ×4, the proposed network achieves a peak signal-to-noise ratio of 37.310 dB, a spectral angle mapping of 3.525, and a structural similarity of 0.9438. Extensive experiments on the Harvard and Foster datasets further demonstrate the model's potential for hyperspectral super-resolution reconstruction.
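The three reported metrics are standard in hyperspectral super-resolution. A minimal computation of two of them, PSNR and spectral angle mapping (SAM), for cubes shaped (height, width, bands) might look like this; the function names and the per-pixel mean-angle convention for SAM are this sketch's choices:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    # Peak signal-to-noise ratio (dB) over the whole hyperspectral cube.
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def sam_degrees(ref, rec, eps=1e-12):
    # Mean spectral angle (degrees) between per-pixel spectra;
    # cubes are (height, width, bands).
    a = ref.reshape(-1, ref.shape[-1])
    b = rec.reshape(-1, rec.shape[-1])
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()
```

A perfect reconstruction gives an unbounded PSNR and a SAM of zero, so higher PSNR and lower SAM indicate better spatial and spectral fidelity, respectively.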
The research aims to improve the performance of image recognition methods based on descriptions in the form of sets of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information content criterion. The informativeness of an etalon descriptor is estimated by the difference between the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced descriptions of the etalons. Several practical classifier models with different options for establishing the correspondence between object and etalon descriptors are considered. Experimental modeling of the proposed methods is reported for a database of museum jewelry images. The test sample is formed from images both inside and outside the etalon database, with geometric transformations of scale and rotation applied in the field of view. The practical problem of setting the vote-count threshold on which the classification decision is based has also been researched. Modeling revealed that descriptions can be reduced tenfold with full preservation of classification accuracy, while a twentyfold reduction leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. Reduction by the informativeness criterion thus yields the most significant subset of features for classification while guaranteeing a decent level of accuracy.
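The informativeness criterion, the difference between the closest distance to foreign descriptors and the closest distance within the descriptor's own description, can be sketched directly. The Euclidean metric and the `keep_ratio` interface are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def informativeness(own, others):
    """Score each descriptor of one etalon: distance to the nearest
    foreign descriptor minus distance to the nearest descriptor of the
    same etalon (excluding itself). Higher = more discriminative."""
    scores = []
    for i, x in enumerate(own):
        rest = np.delete(own, i, axis=0)
        d_own = np.linalg.norm(rest - x, axis=1).min()
        d_other = np.linalg.norm(others - x, axis=1).min()
        scores.append(d_other - d_own)
    return np.array(scores)

def reduce_etalon(own, others, keep_ratio=0.1):
    # Keep only the top fraction of descriptors by informativeness,
    # mirroring the tenfold reduction reported in the experiments.
    k = max(1, int(len(own) * keep_ratio))
    idx = np.argsort(informativeness(own, others))[::-1][:k]
    return own[idx]
```

Descriptors that sit close to foreign descriptions get low (even negative) scores and are pruned first, which is why a substantial reduction can leave classification accuracy intact.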
Aim To develop an information processing system with real-time processing capability and a well-designed user interface for an optoelectronic countermeasure general measuring system. Methods The A/D board and the multifunctional board communicating with each instrument were designed, and data collection and processing were realized by selecting an appropriate software platform. Results Simulation results show that the information processing system operates correctly and dependably; the measuring rules, interactive interface, and data handling method were all accepted by the user. Conclusion The design approach based on the mixed platform takes advantage of the two operating systems, achieving the desired performance in both real-time processing and a friendly user interface.
As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been notable growth in research activity. This rapid uptake reflects increasing interest in the field and raises critical questions about ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part-of-Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal applications. We employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and select relevant studies. Our review reveals ChatGPT's significant potential for enhancing various NLP tasks. Its adaptability in information extraction, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as complementary mechanisms, empowering users to guide the model and enhance overall accuracy. Despite this promising potential, challenges persist: the performance of ChatGPT needs to be tested on more extensive datasets and diverse data structures, and its limitations in handling domain-specific language, together with the need for fine-tuning in specific applications, highlight the importance of further investigation.
Organizations can increase data security and operational efficiency by connecting Salesforce with Identity and Access Management (IAM) systems such as Saviynt. This study examines in detail the shift towards IAM software that this integration encourages, as well as potential drawbacks such as excessive provisioning and implementation issues. Using secondary thematic analysis and qualitative research as evidence, the study illuminates good practices and emphasizes the importance of constant monitoring. The findings indicate that Saviynt is a viable solution and provide detailed guidance for firms seeking a smooth and secure integration path.
In the field of target recognition based on temporal-spatial information fusion, evidence theory has received extensive attention. To achieve accurate and efficient target recognition with evidence theory, an adaptive temporal-spatial information fusion model is proposed. Firstly, an adaptive evaluation and correction mechanism is constructed from the evidence distance and Deng entropy, which realizes credibility discrimination and adaptive correction of the spatial evidence. Secondly, a credibility decay operator is introduced to obtain the dynamic credibility of temporal evidence. Finally, the sequential combination of temporal-spatial evidence is achieved by Shafer's discounting criterion and Dempster's combination rule. Simulation results show that the proposed method not only accounts for the dynamic and sequential characteristics of temporal-spatial evidence combination, but also has a strong capability for processing conflicting information, providing a new reference for the field of temporal-spatial information fusion.
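The final combination step names two standard operations, Shafer's discounting and Dempster's rule. They can be sketched as follows for mass functions stored as dicts keyed by `frozenset` focal elements; the adaptive correction and credibility decay operators from the abstract are not reproduced here:

```python
def discount(m, alpha, frame):
    """Shafer's discounting: scale every mass by the reliability factor
    alpha and move the remaining 1 - alpha onto the whole frame."""
    out = {A: alpha * v for A, v in m.items()}
    theta = frozenset(frame)
    out[theta] = out.get(theta, 0.0) + (1.0 - alpha)
    return out

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions: multiply masses of
    intersecting focal elements and renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

Sequential temporal fusion then amounts to discounting each incoming piece of evidence by its (possibly decayed) credibility before folding it in with `dempster_combine`.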
This study investigated the groundwater quality and health risks associated with informal e-waste processing in the Alaba International Market in Lagos, Nigeria. Twenty-two groundwater samples were collected from hand-dug wells in the market area and analyzed for physicochemical properties and heavy metal concentrations. The results showed that the groundwater quality was poor, with high levels of heavy metals, including cadmium, lead, and chromium. The health index (HI) for children and adults was above the tolerable threshold levels, indicating a potential health risk to the population. Principal component analysis and hierarchical cluster analysis were used to identify the sources of metals in groundwater, and the results showed that informal e-waste processing was a significant source of contamination. The study highlights the need for effective management strategies to mitigate the potential health risks associated with informal e-waste processing and ensure public health and environmental safety.
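The health index mentioned above is conventionally computed as a sum of hazard quotients, each a chronic daily intake divided by the metal's oral reference dose (the standard US EPA form for non-carcinogenic risk). The sketch below assumes ingestion exposure only; every parameter value used with it would have to come from site data, and the variable names are this sketch's own:

```python
def chronic_daily_intake(c, ir, ef, ed, bw, at):
    """Chronic daily intake (mg/kg-day) via ingestion:
    c  = metal concentration in water (mg/L)
    ir = ingestion rate (L/day), ef = exposure frequency (days/year)
    ed = exposure duration (years), bw = body weight (kg)
    at = averaging time (days)."""
    return (c * ir * ef * ed) / (bw * at)

def hazard_index(concentrations, rfd, **exposure):
    # HI = sum over metals of HQ = CDI / RfD; HI > 1 is conventionally
    # read as a potential non-carcinogenic health risk.
    return sum(chronic_daily_intake(c, **exposure) / rfd[m]
               for m, c in concentrations.items())
```

With child exposure parameters, even sub-mg/L cadmium levels can push the hazard quotient toward the threshold, which is consistent with the elevated HI values the study reports.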
Maritime radar and automatic identification systems (AIS), essential auxiliary equipment for navigation safety in the shipping industry, play significant roles in maritime safety supervision. In practical applications, however, the information obtained by a single device is limited, and the information from maritime radar and AIS messages must be integrated to achieve better recognition. In this study, D-S evidence theory is used to fuse two kinds of heterogeneous information: maritime radar images and AIS messages. Firstly, the radar image and AIS message are processed to obtain the targets of interest in the same coordinate system. Then, the coordinate position and heading of the targets are chosen as the indicators for judging target similarity. Finally, an information fusion method based on D-S evidence theory is proposed to match the radar target and the AIS target of the same ship. The effectiveness of the proposed method has been validated and evaluated through several experiments, which prove that the method is practical in maritime safety supervision.
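A simplified version of the association step, judging target similarity by coordinate position and heading, can be sketched as a greedy gated matcher. The gate thresholds and the product similarity score are this sketch's assumptions, standing in for the D-S fusion stage described in the abstract:

```python
import math

def match_targets(radar, ais, max_dist=200.0, max_heading=30.0):
    """Greedily associate radar tracks with AIS targets once both are in
    the same coordinate system. Each target is a dict with x, y (metres)
    and heading (degrees). Returns (radar_index, ais_index) pairs."""
    pairs, used = [], set()
    for i, r in enumerate(radar):
        best, best_s = None, -1.0
        for j, a in enumerate(ais):
            if j in used:
                continue
            d = math.hypot(r["x"] - a["x"], r["y"] - a["y"])
            # smallest signed angular difference, wrapped to [-180, 180]
            dh = abs((r["heading"] - a["heading"] + 180) % 360 - 180)
            if d > max_dist or dh > max_heading:
                continue  # gate out implausible pairings
            s = (1 - d / max_dist) * (1 - dh / max_heading)
            if s > best_s:
                best, best_s = j, s
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

In the paper's full method, the position and heading similarities would instead be converted into basic probability assignments and combined under Dempster's rule rather than multiplied directly.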
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, traditional probability theory is hard to apply when processing the samples and assessing their degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of these two quantities, together with their characteristics and algorithms, are introduced. Related problems, including the algorithm for the estimated value, the standard deviation, and the acceptance and rejection criteria for samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weighting algorithm and to compare different samples. Several examples demonstrate the application of the proposed approach and show that it is feasible and effective without requiring the probability distribution of small samples.
This paper takes "Research on Education and Teaching Theory and Practice in the Information Age" as its main research object and discusses the importance and influence of educational and teaching reform and innovation in the information age. First, it introduces the background of education in the information age, including the characteristics of the era and the current state of educational development. Second, it summarizes the main content of the book "Research on the Theory and Practice of Education and Teaching in the Information Age." Last, it discusses the key points of educational and teaching reform in the information age, in order to provide effective guidelines and support for future reform practice.
Landfilling is one of the most effective and responsible ways to dispose of municipal solid waste (MSW). Identifying landfill sites, however, is a challenging and complex undertaking because it depends on social, environmental, technical, economic, and legal issues. This study aims to map sites environmentally suitable for a landfill in Butuan City, Philippines. With reference to the policy requirements of DENR Section I, the Landfill Site Identification Criteria and Screening Guidelines of the National Solid Waste Management Commission, the integration of a Geographic Information System (GIS) model builder and the Analytical Hierarchy Process (AHP) was used to address the challenges of landfill site suitability analysis. The generated sanitary landfill suitability map showed that Barangay Tungao (1131.42967 ha) and Florida (518.48 ha) satisfy the three main criteria, namely economic, environmental, and physical, and are highly suitable landfill locations in Butuan City. It is recommended that a geotechnical evaluation be conducted, involving rigorous geological and hydrogeological assessment that combines site investigation and laboratory techniques. In addition, further specific social, ecological, climatic, and economic factors need to be considered, including impacts on humans, flora, fauna, soil, water, air, climate, and landscape.
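The AHP side of the GIS-AHP integration reduces to deriving criteria weights from a pairwise-comparison matrix and checking its consistency. A common sketch uses the geometric-mean method and Saaty's consistency ratio; the example matrix is illustrative, and the actual criteria and judgments for Butuan City are not reproduced here:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise-comparison matrix via the
    geometric-mean (row) method; entries follow Saaty's 1-9 scale."""
    A = np.asarray(pairwise, float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])
    return gm / gm.sum()

def consistency_ratio(pairwise):
    # CR = CI / RI, with CI = (lambda_max - n) / (n - 1) and RI taken
    # from Saaty's random-index table. CR < 0.1 is conventionally
    # acceptable; larger values call for revising the judgments.
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    w = ahp_weights(A)
    lam = (A @ w / w).mean()          # estimate of lambda_max
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    if ri == 0.0:
        return 0.0
    return ((lam - n) / (n - 1)) / ri
```

The resulting weights are then applied to the reclassified GIS criterion layers (for example via weighted overlay in the model builder) to produce the suitability map.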
In a measurement system, new representation methods are necessary to maintain uncertainty and to provide more powerful abilities for reasoning and for transformation between numerical and symbolic systems. A grey measurement system is discussed from the point of view of intelligent sensors and incomplete information processing, in comparison with numerical and symbolized measurement systems. Methods of grey representation and information processing are proposed for data collection and reasoning. As a case study, multi-ultrasonic sensor systems are used to verify the effectiveness of the proposed methods.
Funding: "Application of the Dynamic System Theory in the Determination of Infringement Liability for Immaterial Personality Rights in the Civil Code" (Project Approval Number 2022MFXH006), a project of the young scholar research program of the Civil Law Society of CLS in 2022.
Funding: Anhui Province Natural Science Research Project of Colleges and Universities (2023AH040321); Excellent Scientific Research and Innovation Team of Anhui Colleges (2022AH010098).
Abstract: The presence of numerous uncertainties in hybrid decision information systems (HDISs) renders attribute reduction a formidable task. Currently available attribute reduction algorithms, including those based on Pawlak attribute importance, the Skowron discernibility matrix, and information entropy, struggle to manage multiple uncertainties simultaneously in HDISs, such as the precise measurement of disparities between nominal attribute values, and attributes with fuzzy boundaries or abnormal values. To address these issues, this paper studies attribute reduction within HDISs. First, a novel metric based on the decision attribute, termed the supervised distance, is introduced to accurately measure the differences between nominal attribute values. Then, based on this metric, a novel fuzzy relationship is defined from the perspective of "feedback on parity of attribute values to attribute sets"; this relationship serves as a valuable tool for handling abnormal attribute values. Furthermore, leveraging the new fuzzy relationship, the fuzzy conditional information entropy is defined to address the challenges posed by fuzzy attributes: it quantifies the uncertainty associated with fuzzy attribute values, thereby providing a robust framework for handling fuzzy information in hybrid information systems. Finally, an attribute reduction algorithm utilizing the fuzzy conditional information entropy is presented. Experimental results on 12 datasets show that the average reduction rate of the algorithm reaches 84.04%, and that classification accuracy is improved by 3.91% over the original dataset and by an average of 11.25% over 9 state-of-the-art reduction algorithms. A comprehensive analysis of these results indicates that the algorithm is highly effective in managing the intricate uncertainties inherent in hybrid data.
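The paper's fuzzy conditional information entropy jointly handles nominal, fuzzy, and abnormal values; as a rough illustration of the entropy-based reduction idea alone, here is a minimal sketch of classical conditional-entropy attribute reduction with greedy forward selection. All names and the toy data are hypothetical; this is not the authors' algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of decision labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(rows, attrs, labels):
    """H(D | attrs): decision entropy within each equivalence class
    induced by the chosen attribute subset."""
    groups = {}
    for row, y in zip(rows, labels):
        key = tuple(row[a] for a in attrs)
        groups.setdefault(key, []).append(y)
    n = len(labels)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def reduce_attributes(rows, all_attrs, labels):
    """Greedy forward selection: repeatedly add the attribute that lowers
    H(D | reduct) most, until the full-attribute entropy is matched."""
    target = conditional_entropy(rows, all_attrs, labels)
    reduct = []
    while conditional_entropy(rows, reduct, labels) > target + 1e-12:
        best = min((a for a in all_attrs if a not in reduct),
                   key=lambda a: conditional_entropy(rows, reduct + [a], labels))
        reduct.append(best)
    return reduct
```

On a toy table where attribute `a` alone determines the decision, the reduct is `['a']`, i.e. attribute `b` is dropped.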
Abstract: This paper uses Gagné's information processing theory to analyze the listening process, providing a pedagogical model that helps L2 learners solve listening problems and shedding light on listening teaching.
Abstract: Vocabulary acquisition is an intricate process closely related to memory. In cognitive psychology, a large number of studies on the memory system have been conducted on the basis of information processing theory, placing great value on second language learners' cognitive processes. This study probes into second language vocabulary acquisition from the perspective of information processing theory, in the hope of helping learners acquire vocabulary more scientifically and efficiently.
Funding: A research result of "A Bite of Teochew Cuisine" of the Guangdong Quality Project (Open Online Course); "The Creation of Excellent Science Popularization Works for Chinese Molecular Cooking Micro-course" of the Guangdong Science and Technology Program (Project No.: 2019A141405059).
Abstract: The "Cantonese Cuisine Master" project is an important policy proposed by China to pass on Cantonese cuisine culture, promote employment, and achieve targeted poverty reduction and rural revitalization. Confronted with demands for more diverse education, considering how to construct high-quality online courses and pursue higher-quality "Cantonese Cuisine Master" projects in line with the new era is an essential opportunity and task for the education system. Based on instructional media theory and information processing theory, this paper clarifies the demand, dilemmas, and development strategy of online course construction for culinary majors, and explores its construction and practice through the example of "A Bite of Teochew Cuisine," a Guangdong first-class course.
Funding: Supported by the EU H2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement (Project DEEP, Grant number: 101109045); the National Key R&D Program of China (Grant number 2018YFB1800804); the National Natural Science Foundation of China (Nos. NSFC 61925105 and 62171257); the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute; and the Fundamental Research Funds for the Central Universities, China (No. FRF-NP-20-03).
Abstract: The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive framework for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this gap by introducing the concept of "data components." It proposes a graph-theoretic representation model with a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that calculates the information entropy of data components and establishes their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
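The abstract does not spell out the measurement model, so the following is only one plausible reading: a "data component" as a small graph over co-occurring fields, with informational value taken as the sum of per-field Shannon entropies. The class name, edge convention, and value formula are all hypothetical.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Entropy (bits) of the empirical distribution of one data column."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

class DataComponent:
    """Toy graph-theoretic data component: fields are nodes, and an edge
    records that two fields co-occur in the same record (illustrative only)."""
    def __init__(self, records):
        self.records = records
        self.fields = sorted({k for r in records for k in r})
        self.edges = {(a, b) for r in records
                      for a in r for b in r if a < b}

    def information_content(self):
        """Sum of per-field entropies -- a hypothetical stand-in for the
        paper's 'informational value' of a component."""
        return sum(shannon_entropy([r.get(f) for r in self.records])
                   for f in self.fields)
```

A pricing mechanism could then rank components by `information_content()`, charging more for components whose fields carry more entropy.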
Funding: The National Natural Science Foundation of China (Nos. 61471263, 61872267, and U21B2024); the Natural Science Foundation of Tianjin, China (No. 16JCZDJC31100); the Tianjin University Innovation Foundation (No. 2021XZC0024).
Abstract: Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, however, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. We therefore propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address these problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. Applying the proposed network to the CAVE test set shows that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525, and 0.9438, respectively. Extensive experiments on the Harvard and Foster datasets further demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
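The quality metrics reported above are standard in hyperspectral super-resolution work; as a minimal sketch (function names are my own, and this is not the MIIUSR code), PSNR and the spectral angle can be computed for an (H, W, bands) cube as follows:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference cube and a
    reconstructed cube of the same shape."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def sam(ref, rec, eps=1e-12):
    """Mean spectral angle (degrees) between per-pixel spectra."""
    r = ref.reshape(-1, ref.shape[-1])
    x = rec.reshape(-1, rec.shape[-1])
    cos = np.sum(r * x, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(x, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Note that SAM is invariant to per-pixel spectral scaling (a brightened spectrum has angle 0 to the original), which is why it complements PSNR.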
Funding: This research was funded by Prince Sattam bin Abdulaziz University (Project Number PSAU/2023/01/25387).
Abstract: This research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information content criterion. The informativeness of an etalon descriptor is estimated by the difference between the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object with the reduced descriptions of the etalons. Several practical classifier models with different options for establishing the correspondence between object and etalon descriptors are considered. Experimental modeling of the proposed methods is presented for a database of museum jewelry images. The test sample is formed as a set of images from the etalon database and outside it, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the vote-count threshold on which a classification decision is based has been researched. Modeling revealed the practical possibility of a tenfold reduction of descriptions with full preservation of classification accuracy; a twentyfold reduction led to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. Reduction by the informativeness criterion confirmed the possibility of obtaining the most significant subset of features for classification, which guarantees a decent level of accuracy.
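The informativeness criterion described above, the nearest distance to other etalons minus the nearest distance within the descriptor's own etalon, can be sketched as follows. This is a reconstruction from the abstract, not the authors' implementation; names and the toy data are hypothetical.

```python
import numpy as np

def informativeness(etalons):
    """For each descriptor of each etalon: score = (nearest Euclidean
    distance to a descriptor of another etalon) - (nearest distance to
    another descriptor of the same etalon). Larger = more discriminative."""
    scores = []
    for i, own in enumerate(etalons):
        others = np.vstack([e for j, e in enumerate(etalons) if j != i])
        s = []
        for k, d in enumerate(own):
            rest = np.delete(own, k, axis=0)
            d_own = np.min(np.linalg.norm(rest - d, axis=1)) if len(rest) else 0.0
            d_other = np.min(np.linalg.norm(others - d, axis=1))
            s.append(d_other - d_own)
        scores.append(np.array(s))
    return scores

def reduce_etalon(descriptors, scores, keep):
    """Keep only the `keep` most informative descriptors of one etalon."""
    idx = np.argsort(scores)[::-1][:keep]
    return descriptors[np.sort(idx)]
```

Applying `reduce_etalon` with a small `keep` is the tenfold or twentyfold reduction the experiments measure; matching then runs against the reduced sets only, which is where the proportional speed-up comes from.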
Abstract: Aim: To develop an information processing system with real-time processing capability and an attractive user interface for the optoelectronic antagonism general measuring system. Methods: The A/D board and the multifunctional board communicating with each instrument were designed, and data collection and processing were realized by selecting an appropriate software platform. Results: Simulation results show that the information processing system operates correctly and dependably; the measuring rules, interactive interface, and data handling method were all accepted by the user. Conclusion: The design approach based on the mixed platform takes advantage of the two operating systems, achieving the desired performance in both real-time processing and a friendly, attractive user interface.
Abstract: As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been notable growth in research activity. This rapid uptake reflects increasing interest in the field and raises critical questions about ChatGPT's applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part-of-Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT's adaptability, limitations, and optimal application. We employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and select relevant studies. Our review reveals ChatGPT's significant potential for enhancing various NLP tasks. Its adaptability in information extraction, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT's flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as complementary mechanisms, empowering users to guide the model and enhance overall accuracy. Despite this promising potential, challenges persist: ChatGPT's performance needs to be tested on more extensive datasets and diverse data structures, and its limitations in handling domain-specific language, together with the need for fine-tuning in specific applications, highlight the importance of further investigation.
Abstract: Organizations may increase data security and operational efficiency by connecting Salesforce with Identity and Access Management (IAM) systems such as Saviynt. This study examines in depth the shift toward IAM software that this integration encourages, as well as potential drawbacks such as excessive provisioning and implementation issues. Using secondary theme analysis and qualitative research as evidence, the study illuminates best practices and emphasizes the importance of constant monitoring. The findings indicate that Saviynt is a viable solution and provide detailed guidance for firms seeking a smooth and secure integration path.
Funding: The National Natural Science Foundation of China (No. 61976080); the Key Project on Research and Practice of Henan University Graduate Education and Teaching Reform (YJSJG2023XJ006); the Key Research and Development Projects of Henan Province (231111212500); the Henan University Graduate Education Innovation and Quality Improvement Program (SYLKC2023016).
Abstract: In the field of target recognition based on temporal-spatial information fusion, evidence theory has received extensive attention. To achieve accurate and efficient target recognition with evidence theory, an adaptive temporal-spatial information fusion model is proposed. First, an adaptive evaluation correction mechanism is constructed from the evidence distance and Deng entropy, which realizes credibility discrimination and adaptive correction of spatial evidence. Second, a credibility decay operator is introduced to obtain the dynamic credibility of temporal evidence. Finally, the sequential combination of temporal-spatial evidence is achieved through Shafer's discount criterion and Dempster's combination rule. Simulation results show that the proposed method not only considers the dynamic and sequential characteristics of temporal-spatial evidence combination but also has strong conflict-information processing capability, providing a new reference for the field of temporal-spatial information fusion.
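Shafer's discounting and Dempster's combination rule used above are standard; a minimal generic sketch (mass functions as dicts mapping frozensets of hypotheses to masses; this illustrates the rules, not the paper's adaptive model):

```python
from itertools import product

def discount(m, alpha):
    """Shafer discounting: scale each focal mass by the reliability alpha
    and move the remainder to full ignorance. The frame is approximated
    here by the union of the focal elements present."""
    theta = frozenset().union(*m)
    out = {A: alpha * v for A, v in m.items()}
    out[theta] = out.get(theta, 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict normalization."""
    joint = {}
    conflict = 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            joint[C] = joint.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # mass on empty intersections
    k = 1.0 - conflict
    return {A: v / k for A, v in joint.items()}
```

A sequential temporal-spatial scheme in the spirit of the abstract would discount each new evidence source by its (dynamic) credibility before folding it in with `combine`.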
Abstract: This study investigated the groundwater quality and health risks associated with informal e-waste processing in the Alaba International Market in Lagos, Nigeria. Twenty-two groundwater samples were collected from hand-dug wells in the market area and analyzed for physicochemical properties and heavy metal concentrations. The results showed that the groundwater quality was poor, with high levels of heavy metals, including cadmium, lead, and chromium. The health index (HI) for children and adults was above the tolerable threshold levels, indicating a potential health risk to the population. Principal component analysis and hierarchical cluster analysis were used to identify the sources of metals in groundwater, and the results showed that informal e-waste processing was a significant source of contamination. The study highlights the need for effective management strategies to mitigate the potential health risks associated with informal e-waste processing and ensure public health and environmental safety.
Abstract: Maritime radar and automatic identification systems (AIS), essential auxiliary equipment for navigation safety in the shipping industry, play significant roles in maritime safety supervision. In practical applications, however, the information obtained by a single device is limited, and integrating the information from maritime radar and AIS messages is necessary to achieve better recognition. In this study, D-S evidence theory is used to fuse two kinds of heterogeneous information: maritime radar images and AIS messages. First, the radar image and AIS message are processed to obtain the targets of interest in the same coordinate system. Then, the coordinate position and heading of targets are chosen as the indicators for judging target similarity. Finally, a D-S evidence theory-based information fusion method is proposed to match the radar target and the AIS target of the same ship. The effectiveness of the proposed method has been validated and evaluated through several experiments, which prove that the method is practical in maritime safety supervision.
Abstract: Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, traditional probability theory cannot easily be used to process the samples and assess their degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of these two quantities, with their characteristics and algorithms, are introduced. Related problems, including the algorithm for the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare different samples. Several examples demonstrate the application of the proposed approach and show that it is feasible and effective, with no requirement on the probability distribution of small samples.
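The paper's grey distance information quantities are its own construction, but they build on grey relational theory; as background, a minimal sketch of Deng's grey relational grade, the standard similarity measure of that theory (the function and the toy sequences are illustrative, not the paper's method):

```python
def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference sequence and a
    comparison sequence; rho is the distinguishing coefficient.
    Returns 1.0 for identical sequences, smaller values otherwise."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:
        return 1.0  # identical sequences
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Because the grade needs only the sequences themselves, no distributional assumption, measures of this family suit small-sample settings where a probability distribution cannot be estimated.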
Abstract: This paper takes "Research on Education and Teaching Theory and Practice in the Information Age" as its main research object and discusses the importance and influence of education and teaching reform and innovation in the information age. First, it introduces the background of education in the information age, including the characteristics of the era and the current state of educational development. Second, it summarizes the main content of the book "Research on the Theory and Practice of Education and Teaching in the Information Age." Lastly, the key points of education and teaching reform in the information age are discussed in order to provide effective guidelines and support for future reform practice.
Abstract: Landfilling is one of the most effective and responsible ways to dispose of municipal solid waste (MSW). Identifying landfill sites, however, is a challenging and complex undertaking because it depends on social, environmental, technical, economic, and legal issues. This study aims to map sites environmentally suitable for locating a landfill in Butuan City, Philippines. With reference to the policy requirements of DENR Section I, Landfill Site Identification Criteria and Screening Guidelines of the National Solid Waste Management Commission, the integration of a Geographic Information System (GIS) model builder and the Analytical Hierarchy Process (AHP) was used to address the challenges of landfill site suitability analysis. Based on the generated sanitary landfill suitability map, Barangay Tungao (1131.42967 ha) and Florida (518.48 ha) met the three main criteria, namely economic, environmental, and physical, and are highly suitable landfill site locations in Butuan City. It is recommended that a geotechnical evaluation be conducted, involving rigorous geological and hydrogeological assessment employing a combination of site investigation and laboratory techniques. In addition, further specific social, ecological, climatic, and economic factors need to be considered, including the impact on humans, flora, fauna, soil, water, air, climate, and landscape.
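The AHP step assigns weights to the siting criteria from a pairwise-comparison matrix; a minimal generic sketch of the principal-eigenvector method with Saaty's consistency ratio (not the study's actual criteria or judgment values):

```python
import numpy as np

# Saaty's random-index values for the consistency ratio, n = 1..5
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(M):
    """Criterion weights from a reciprocal pairwise-comparison matrix M
    via its principal eigenvector, plus Saaty's consistency ratio CR
    (CR < 0.1 is the usual acceptability threshold)."""
    M = np.asarray(M, dtype=float)
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w = w / w.sum()                      # normalize to sum to 1
    n = M.shape[0]
    ci = (vals[i].real - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RI[n] if RI.get(n) else 0.0
    return w, cr
```

In a GIS workflow, the resulting weights multiply the reclassified criterion rasters (e.g. distance to water, slope, land use) before summing them into the suitability map.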
Funding: The National Natural Science Foundation of China (60703083, 60575033).
Abstract: In a measurement system, new representation methods are necessary to maintain uncertainty and to provide more powerful support for reasoning and for transformation between numerical and symbolic systems. A grey measurement system is discussed from the point of view of intelligent sensors and incomplete information processing, in comparison with numerical and symbolized measurement systems. Methods of grey representation and information processing are proposed for data collection and reasoning. As a case study, multi-ultrasonic-sensor systems are demonstrated to verify the effectiveness of the proposed methods.