Traditional methods for selecting models in experimental data analysis are susceptible to researcher bias, hindering exploration of alternative explanations and potentially leading to overfitting. The Finite Information Quantity (FIQ) approach offers a novel solution by acknowledging the inherent limitations in information processing capacity of physical systems. This framework facilitates the development of objective criteria for model selection (comparative uncertainty) and paves the way for a more comprehensive understanding of phenomena through exploring diverse explanations. This work presents a detailed comparison of the FIQ approach with ten established model selection methods, highlighting the advantages and limitations of each. We demonstrate the potential of FIQ to enhance the objectivity and robustness of scientific inquiry through three practical examples: selecting appropriate models for measuring fundamental constants, sound velocity, and underwater electrical discharges. Further research is warranted to explore the full applicability of FIQ across various scientific disciplines.
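The comparison above sets FIQ against ten established model selection methods. As a point of reference only, here is a minimal sketch of one classical criterion from that family (AIC, with BIC alongside) applied to two candidate polynomial fits; the synthetic data, the candidate models, and the helper function names are illustrative assumptions, not material from the paper.

```python
# Minimal illustration of a classical model-selection criterion (AIC/BIC),
# one of the established methods the FIQ framework is compared against.
# The data set and candidate models are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=x.size)  # synthetic "measurements"

def aic_bic(y, y_hat, k):
    """AIC and BIC for a Gaussian error model with k fitted parameters."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    log_lik = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    return 2 * k - 2 * log_lik, k * np.log(n) - 2 * log_lik

for degree in (1, 5):                      # simple vs. deliberately overparameterized fit
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    aic, bic = aic_bic(y, y_hat, k=degree + 1)
    print(f"degree {degree}: AIC={aic:.1f}  BIC={bic:.1f}")
```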
When building a model of a physical phenomenon or process, scientists face an inevitable compromise between the simplicity of the model (its qualitative-quantitative set of variables) and its accuracy. For hundreds of years, the visual simplicity of a law testified to the genius and depth of the physical thinking of the scientist who proposed it. Currently, the desire for a deeper physical understanding of the surrounding world and of newly discovered physical phenomena motivates researchers to increase the number of variables considered in a model. This direction leads to an increased probability of choosing an inaccurate or even erroneous model. This study describes a method for estimating the limit of measurement accuracy that takes into account the model-building stage in terms of the storage, transmission, processing, and use of information by the observer. This limit, due to the finite amount of information stored in the model, makes it possible to select the optimal number of variables for the best reproduction of the observed object and to calculate exact values of the threshold discrepancy between the model and the phenomenon under study in measurement theory. We consider two examples: measurement of the speed of sound and measurement of physical constants.
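One standard way to tie a measurement interval to an information quantity is the Hartley relation: a quantity known to lie in a range S and resolved to an uncertainty Δ can take roughly S/Δ distinguishable values, i.e. it carries about log2(S/Δ) bits. The sketch below uses only this textbook relation, together with an assumed even split of a fixed bit budget among m variables and illustrative numbers; it is not the paper's own formulation of the accuracy limit.

```python
# Illustrative sketch only: Hartley-style bit count of a measurement interval,
# and how a fixed information budget shared among m variables limits the
# relative uncertainty achievable per variable (even split assumed).
import math

def bits_of_measurement(interval_S, uncertainty_delta):
    """Bits needed to distinguish S/delta levels of one measured quantity."""
    return math.log2(interval_S / uncertainty_delta)

def per_variable_uncertainty(total_bits, m_variables):
    """Relative uncertainty delta/S per variable for an even bit split."""
    return 2.0 ** (-(total_bits / m_variables))

print(bits_of_measurement(340.0, 0.5))           # about 9.4 bits for one quantity
for m in (1, 2, 4, 8):
    print(m, per_variable_uncertainty(32, m))    # coarser accuracy as m grows
```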
Using the additivity of information quantity and the principle of equivalence of information quantity, this paper derives general conversion formulae for the information-theoretic conversion (synthesis) method applied to systems consisting of different success-failure model units. Based on the fundamental method of unit reliability assessment, general models for the approximate lower limits of system reliability are given. Finally, the paper illustrates the application of the assessment method with examples; the assessment results are neither conservative nor radical, and are very satisfactory. The assessment method can be extended to systems with fixed reliability structural models.
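The conversion formulae build on the standard assessment of a single success-failure unit. As a hedged sketch of that unit-level building block only (not the paper's system-level conversion), the exact one-sided lower confidence limit on a binomial unit's reliability can be computed from the Beta distribution; the function name and test numbers are illustrative.

```python
# Sketch of the standard lower confidence limit for one success-failure
# (binomial) unit, the unit-level assessment the conversion builds on.
# This is the exact one-sided (Clopper-Pearson) bound, not the paper's
# system-level conversion formulae.
from scipy.stats import beta

def unit_reliability_lower_limit(n_tests, n_failures, confidence=0.90):
    """One-sided lower confidence limit on reliability R for n tests, f failures."""
    successes = n_tests - n_failures
    alpha = 1.0 - confidence
    if successes == 0:
        return 0.0
    return beta.ppf(alpha, successes, n_failures + 1)

print(unit_reliability_lower_limit(50, 1, confidence=0.90))
```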
An assessment is proposed of the quantity of information present in the intrinsic noise of electronic elements (EE), the informational noise entropy, using the probability distribution derived from the noise spectra of real elements. It is shown that, in contrast to the commonly used differential entropy of continuous signals, the informational noise entropy defines the quantity of qualitative information related to the features of the element's structure. The proposed quantitative assessment of information can also be used to calculate the information contained in the intrinsic noise of other technical systems.
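As a loose illustration of the general pattern the abstract describes (a probability distribution formed from a noise spectrum, then a Shannon entropy computed over it), the sketch below normalizes a synthetic power spectrum into a distribution and evaluates its entropy in bits. The paper's own definition of informational noise entropy may differ; everything here, including the synthetic noise, is an assumption for illustration.

```python
# Hedged illustration only: treat a (synthetic) noise power spectrum as a
# probability distribution over frequency bins and compute its Shannon
# entropy in bits. The paper's own "informational noise entropy" may be
# defined differently; this only shows the spectrum-to-entropy pattern.
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)                      # stand-in for measured element noise
psd = np.abs(np.fft.rfft(noise)) ** 2              # crude power spectrum estimate
p = psd / psd.sum()                                # normalize to a distribution
entropy_bits = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"spectral entropy: {entropy_bits:.2f} bits over {p.size} bins")
```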
Electroencephalography (EEG) helps to analyze the neuronal activity of the human brain in the form of electrical signals with high temporal resolution in the millisecond range. To extract clean clinical information from EEG signals, it is essential to remove unwanted artifacts that arise from different causes, including during acquisition. In this work, the authors consider EEG signals contaminated with electrocardiogram (ECG) artifacts, which occur mostly in cardiac patients. The clean EEG is taken from the openly available Mendeley database, whereas the ECG signal is collected from the Physionet database to create artifacts in the EEG signal and verify the proposed algorithm. Because the artifactual signal is non-linear and non-stationary, the Random Vector Functional Link Network (RVFLN) model is used in this case. Machine learning has taken a leading role in every field of current research, and RVFLN is one such approach. To demonstrate its adaptive nature, the model is designed with clean EEG as the reference and artifactual EEG as the input. The peaks of the ECG signal are evaluated for artifact estimation, as their amplitude is higher than that of the EEG signal. To update the weights and reduce the error, an exponentially weighted Recursive Least Squares (RLS) algorithm is used to design the adaptive filter with the novel RVFLN model. Random vectors are used in this model with a radial basis function to satisfy the required signal experimentation. The results are excellent in terms of Mean Square Error (MSE), Normalized Mean Square Error (NMSE), Relative Error (RE), Gain in Signal to Artifact Ratio (GSAR), Signal to Noise Ratio (SNR), Information Quantity (IQ), and Improvement in Normalized Power Spectrum (INPS). The proposed method is also compared with earlier methods to show its efficacy.
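The adaptive filter described above is driven by the exponentially weighted RLS update. The sketch below shows that standard update in isolation, with a generic feature matrix standing in for the RVFLN random radial-basis expansion, which is not reproduced here; the forgetting factor, initialization constant, and toy data are illustrative assumptions.

```python
# Standard exponentially weighted Recursive Least Squares (RLS) update,
# the adaptive-filtering core the abstract builds on. The RVFLN random
# radial-basis feature expansion used in the paper is not reproduced here;
# each row of X stands for whatever feature vector feeds the filter.
import numpy as np

def rls_filter(X, d, lam=0.995, delta=1e2):
    """X: (N, M) feature vectors, d: (N,) desired signal. Returns estimates."""
    n_samples, m = X.shape
    w = np.zeros(m)
    P = np.eye(m) * delta                 # inverse correlation matrix estimate
    y = np.zeros(n_samples)
    for n in range(n_samples):
        x = X[n]
        y[n] = w @ x
        e = d[n] - y[n]                   # a priori error
        k = P @ x / (lam + x @ P @ x)     # gain vector
        w = w + k * e                     # weight update
        P = (P - np.outer(k, x) @ P) / lam
    return y

# Toy usage: track a fixed linear relation observed in noise.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
d = X @ np.array([0.5, -1.0, 0.3, 0.8]) + 0.01 * rng.normal(size=500)
est = rls_filter(X, d)
print(f"final-sample error: {abs(d[-1] - est[-1]):.4f}")
```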
The reliability assessment between the unit and system levels (two levels) is the most important content in the multi-level reliability synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity and the principle of equivalence of information quantity, an entropy method of data information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived from the principle of information quantity equivalence. General models for the entropy-method synthesis assessment of approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. Applications of the entropy method are discussed by way of practical examples. Compared with traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.
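The entropy conversion is anchored in the standard assessment of a single exponential unit. As a sketch of that building block only (not the entropy conversion formulae themselves), the one-sided lower confidence limit on MTBF from a time-censored test with total test time T and r failures follows from the chi-square distribution; the numbers in the example are illustrative.

```python
# Sketch of the standard lower confidence limit for one exponential unit
# (time-censored test with total test time T and r failures), the unit-level
# assessment the entropy conversion builds on. The system-level entropy
# conversion formulae themselves are not reproduced here.
from scipy.stats import chi2

def mtbf_lower_limit(total_test_time, failures, confidence=0.90):
    """One-sided lower confidence limit on MTBF (theta) for an exponential unit."""
    return 2.0 * total_test_time / chi2.ppf(confidence, 2 * failures + 2)

print(mtbf_lower_limit(total_test_time=10_000.0, failures=2, confidence=0.90))
```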
Csiszár's strong coding theorem for discrete memoryless sources is generalized to arbitrarily varying sources. We also determine the asymptotic error exponent for arbitrarily varying sources.
In this essay, a mathematical model of information communication in the teaching process is presented. The value of Shannon's quantity of information in the teaching process is defined, so that the problem of using quantity of information to evaluate the teaching process can be solved. By calculating the channel capacity and considering its value, the maximum possible quantity of information transmitted in the teaching process can be obtained. Thus, evaluation of teaching efficiency and of students' learning potential is realized.
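As a generic illustration of the channel-capacity calculation the abstract refers to, the sketch below computes the capacity of a binary symmetric channel, C = 1 - H(p); the teaching-process channel modeled in the paper is assumed to be more elaborate, so this only shows the basic pattern.

```python
# Generic illustration of a channel-capacity calculation; the teaching-process
# channel in the paper is more elaborate. This shows the basic pattern for a
# binary symmetric channel with crossover probability p.
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity in bits per channel use of a binary symmetric channel."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.05, 0.1, 0.5):
    print(f"p={p:.2f}  C={bsc_capacity(p):.3f} bits/use")
```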