Abstract: In this paper, a strong limit theorem on gambling strategies for binary Bernoulli sequences, the so-called irregularity theorem, is extended to random selection for dependent m-valued random variables, using a new method: differentiability on nets. Furthermore, by allowing the selection function to take values in the finite interval [-M, M], the concept of random selection is generalized.
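For orientation, here is a sketch of the classical Bernoulli setup behind the irregularity theorem; the notation is illustrative and not necessarily the paper's. Selection functions f_k, measurable with respect to the past, decide whether trial k is bet on, and along any such selection the success frequency is unchanged.

```latex
% Classical gambling-system (irregularity) statement, sketched:
% f_k is measurable w.r.t. X_1,...,X_{k-1} and takes values in {0,1}.
\[
  \sigma_n = \sum_{k=1}^{n} f_k(X_1,\dots,X_{k-1}), \qquad
  \frac{1}{\sigma_n} \sum_{k=1}^{n} f_k(X_1,\dots,X_{k-1})\, X_k
  \;\xrightarrow[n\to\infty]{\text{a.s.}}\; p
  \quad \text{on } \{\sigma_n \to \infty\},
\]
% i.e., for a Bernoulli(p) sequence the relative frequency of successes
% along any admissible selection still converges to p.
```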
Abstract: Random pixel selection is one of the image steganography methods that has achieved significant success in enhancing the robustness of hidden data. This property makes it difficult for steganalysts' powerful data extraction tools to detect the hidden data and ensures high-quality stego image generation. However, using a seed key to generate non-repeated sequential numbers takes a long time because it requires specific mathematical equations. In addition, these numbers may cluster in certain ranges. Hiding data in these clustered pixels reduces the image quality, which steganalysis tools can detect. Therefore, this paper proposes a data structure that safeguards the steganographic model data and maintains the quality of the stego image. This paper employs the Adelson-Velsky and Landis (AVL) tree data structure to implement the randomized pixel selection technique for data concealment. The AVL tree algorithm provides several advantages for image steganography. Firstly, it ensures balanced tree structures, which leads to efficient data retrieval and insertion operations. Secondly, the self-balancing nature of AVL trees minimizes clustering by maintaining an even distribution of pixels, thereby preserving the stego image quality. The data structure employs the pixel indicator technique for Red, Green, and Blue (RGB) channel extraction. The green channel serves as the foundation for building a balanced binary tree. First, the sender identifies the colored cover image and the secret data. The sender uses the two least significant bits (2-LSB) of the RGB channels to conceal the data's size and associated information. The next step is to create a balanced binary tree based on the green channel. Using the channel pixel indicator on the LSB of the green channel, bits can be concealed in the 2-LSB of the red or blue channel. The first four levels of the data structure tree mask the data size, while subsequent levels conceal the remaining bits of the secret data. After embedding the bits in the binary tree level by level, the model restores the AVL tree to create the stego image. Ultimately, the receiver receives this stego image through the public channel, enabling secret data recovery without stego or crypto keys. This method ensures that the stego image appears unsuspicious to potential attackers. Without the extraction algorithm, a third party cannot extract the original secret information from an intercepted stego image. Experimental results showed high levels of imperceptibility and security.
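The following is a minimal sketch of the general idea, not the paper's exact embedding scheme: a height-balanced BST over pixel indices keyed by green-channel value (built by bisecting sorted keys, which gives the same balance guarantee AVL rotations maintain), whose level-order traversal yields a non-clustered embedding order. The function names are illustrative.

```python
# Hedged sketch: balanced-tree-driven pixel ordering for LSB embedding.
from collections import deque

def build_balanced(pixels):
    """Build a height-balanced BST from (green_value, index) pairs.

    Sorting then recursively taking the midpoint yields a balanced tree,
    standing in here for AVL insertion.
    """
    pixels = sorted(pixels)
    def rec(lo, hi):
        if lo >= hi:
            return None
        mid = (lo + hi) // 2
        key, idx = pixels[mid]
        return (key, idx, rec(lo, mid), rec(mid + 1, hi))
    return rec(0, len(pixels))

def level_order_indices(root):
    """Yield pixel indices level by level; used as the embedding order."""
    q = deque([root])
    while q:
        node = q.popleft()
        if node is None:
            continue
        _, idx, left, right = node
        yield idx
        q.append(left)
        q.append(right)

# Usage: greens holds the green-channel values of a (toy) cover image.
greens = [37, 200, 15, 90, 121, 64, 250, 3]
tree = build_balanced(list(zip(greens, range(len(greens)))))
order = list(level_order_indices(tree))  # embed secret bits in this order
```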
Abstract: Moran and Wright–Fisher processes are probably the best-known models for studying the evolution of a population under various environmental effects. Our object of study is the Simpson index, which measures the level of diversity of the population, one of the key parameters for ecologists who study, for example, forest dynamics. Following ecological motivations, we consider here the case where there are various species whose fitness and immigration parameters are random processes (and thus evolve in time). The Simpson index is difficult to evaluate when the population is large, except in the neutral (no selection) case, because it has no closed formula. Our approach relies on the large population limit in the "weak" selection case, giving a procedure that approximates, with a controlled rate, the expectation of the Simpson index at a fixed time. We also study the long time behavior (invariant measure and speed of convergence towards equilibrium) of the Wright–Fisher process in a simplified setting, which gives a full picture of the approximation of the expectation of the Simpson index.
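For reference, the textbook definition of the Simpson index for a population of m species at frequencies p_1, ..., p_m (the paper's exact normalization may differ):

```latex
% Simpson index: probability that two individuals drawn at random
% (with replacement) belong to the same species.
\[
  \lambda \;=\; \sum_{i=1}^{m} p_i^{2},
  \qquad p_i \ge 0,\quad \sum_{i=1}^{m} p_i = 1.
\]
% Diversity is often reported as 1 - lambda (the Gini--Simpson index).
```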
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z422) and the Natural Science Foundation of Anhui Provincial Education Department (Nos. 2006KJ041B and KJ2007B073).
Abstract: Combining the characteristics of peer-to-peer (P2P) networks and grids, a super-peer selection algorithm, SSABC, is presented for distributed networks that merge P2P and grid. The algorithm computes node capacities using resource properties provided by a grid monitoring and discovery system, such as available bandwidth, free CPU, and idle memory, as well as the number of current connections and online time. When a new node joins the network and the super-peers are all saturated, a new super-peer is selected as the node with the highest capacity among the new node and the already-joined nodes. Theoretical analyses and simulation experiments show that super-peers selected by capacity achieve higher query success rates and shorter average hop counts than randomly selected super-peers, and that they balance the network load when all super-peers are saturated. The conclusion remains valid as the total number of nodes changes, which shows that SSABC is feasible and stable.
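A minimal sketch of capacity-based super-peer promotion follows; the weights and the linear scoring formula are illustrative assumptions, not the SSABC formula from the paper.

```python
# Hedged sketch: score nodes by monitored resource properties,
# promote the highest-scoring node when all super-peers are saturated.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    bandwidth: float    # available bandwidth, normalized to 0..1
    free_cpu: float
    idle_memory: float
    connections: float  # current connection count, normalized
    online_time: float  # normalized

def capacity(n: Node) -> float:
    """Illustrative weighted sum of the resource properties."""
    return (0.30 * n.bandwidth + 0.20 * n.free_cpu + 0.20 * n.idle_memory
            + 0.15 * n.connections + 0.15 * n.online_time)

def pick_super_peer(candidates):
    """Select the new super-peer as the highest-capacity candidate."""
    return max(candidates, key=capacity)

# Usage:
nodes = [Node("a", 0.9, 0.4, 0.5, 0.3, 0.8), Node("b", 0.6, 0.9, 0.7, 0.5, 0.4)]
print(pick_super_peer(nodes).name)
```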
Funding: Supported by the Scientific Research Project of Selçuk University.
Abstract: The gravitational search algorithm (GSA) is a population-based heuristic optimization technique proposed for solving continuous optimization problems. The GSA seeks an optimum or near-optimum solution through the interactions among all agents (masses) in the population. This paper proposes and analyzes fitness-proportional (roulette-wheel), tournament, rank-based, and random selection mechanisms for choosing the agents that act as masses in the GSA. The proposed methods are applied to 23 numerical benchmark functions, and the results are compared with those of the basic GSA. Experimental results show that the proposed methods are better than the basic GSA in terms of solution quality.
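As a reference point, here are minimal implementations of two of the selection mechanisms named above, in their standard textbook forms and independent of the GSA specifics:

```python
import random

def roulette_wheel_select(fitness, rng=random):
    """Fitness-proportional selection over nonnegative fitness values.

    Returns the index of the chosen agent; agent i is picked with
    probability fitness[i] / sum(fitness).
    """
    total = sum(fitness)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if acc >= pick:
            return i
    return len(fitness) - 1  # guard against floating-point round-off

def tournament_select(fitness, k=2, rng=random):
    """k-way tournament: sample k agents uniformly, keep the fittest."""
    contenders = rng.sample(range(len(fitness)), k)
    return max(contenders, key=lambda i: fitness[i])

# Usage:
fit = [0.2, 1.5, 0.7, 3.1]
print(roulette_wheel_select(fit), tournament_select(fit, k=2))
```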
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61575040 and 61106045, the PCSIRT under Grant IRT1218, the 111 Project under Grant B14039, and the open research fund of the Jiangsu Key Laboratory for Advanced Optical Manufacturing Technologies under Grant KJS1402.
Abstract: In this paper, we propose a way to realize an Er-doped random fiber laser (RFL) with a disordered fiber Bragg grating (FBG) array, as well as to control the lasing mode of the RFL by heating specific locations of the disordered FBG array. The disordered FBG array serves as both the gain medium and the randomly distributed reflectors, which together with a tunable point reflector form the RFL. Coherent multi-mode random lasing is obtained with a threshold between 7.5 and 10 mW and a power efficiency between 23% and 27% as the reflectivity of the point reflector changes from 4% to 50%. To control the lasing mode of the random emission, a specific point of the disordered FBG array is heated so as to shift the wavelength of the FBG(s) at that point away from the other FBGs. Thus, different resonance cavities are formed, and the lasing mode can be controlled by changing the location of the heating point.
Abstract: An innovative and uniform framework based on a combination of Gabor wavelets with principal component analysis (PCA) and multiple discriminant analysis (MDA) is presented in this paper. In this framework, features are extracted from the optimal random image components using a greedy approach. These feature vectors are then projected onto subspaces for dimensionality reduction, which is used for solving linear problems. The design of the Gabor filters, PCA, and MDA are crucial processes used for facial feature extraction. The FERET, ORL, and YALE face databases are used to generate the results. Experiments show that optimal random image component selection (ORICS) plus MDA outperforms ORICS alone and subspace projection approaches such as ORICS plus PCA. Our method achieves 96.25%, 99.44%, and 100% recognition accuracy on the FERET, ORL, and YALE databases, respectively, with 30% of the data used for training. This is a considerably improved performance compared with other standard methodologies described in the literature.
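The sketch below shows a generic Gabor-feature-plus-PCA pipeline of the kind this framework builds on; it is not the paper's ORICS component-selection procedure, and the filter-bank parameters are illustrative.

```python
# Hedged sketch: Gabor filter-bank features followed by PCA reduction.
import numpy as np
import cv2
from sklearn.decomposition import PCA

def gabor_features(img_gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter the image with a small Gabor bank and concatenate responses."""
    feats = []
    for theta in thetas:
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0.0)
        resp = cv2.filter2D(img_gray, cv2.CV_32F, kern)
        feats.append(resp.ravel())
    return np.concatenate(feats)

# X holds one Gabor feature vector per face image (synthetic stand-ins here);
# PCA reduces dimensionality before a discriminant step such as MDA/LDA.
X = np.stack([gabor_features(np.random.rand(64, 64).astype(np.float32))
              for _ in range(20)])
X_reduced = PCA(n_components=10).fit_transform(X)
print(X_reduced.shape)  # (20, 10)
```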
Funding: Mainly supported by the National Natural Science Foundation of China (Nos. 61125201, 61303070, and U1435219).
Abstract: Instance-specific algorithm selection technologies have been successfully used in many research fields, such as constraint satisfaction and planning. Researchers have increasingly tried to model the potential relations between different candidate algorithms for algorithm selection. In this study, we propose an instance-specific algorithm selection method based on multi-output learning, which can manage these relations more directly. Three kinds of multi-output learning methods are used to predict the performance of the candidate algorithms: (1) multi-output regressor stacking; (2) multi-output extremely randomized trees; and (3) hybrid single-output and multi-output trees. The experimental results obtained using 11 SAT datasets and 5 MaxSAT datasets indicate that the proposed methods outperform state-of-the-art algorithm selection methods.
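A minimal sketch of the second flavor follows, under stated assumptions: synthetic data stands in for real instance features and solver runtimes, and scikit-learn's ExtraTreesRegressor (which natively handles multi-output targets) stands in for the paper's exact model.

```python
# Hedged sketch: one multi-output model predicts every candidate solver's
# performance from instance features; the best-predicted solver is chosen.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 8))   # instance features (e.g., SAT statistics)
Y = rng.random((200, 4))   # observed runtimes of 4 candidate solvers

# Extremely randomized trees fit all 4 outputs jointly, so correlations
# between the candidate algorithms' performances are modeled directly.
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, Y)

x_new = rng.random((1, 8))
pred = model.predict(x_new)      # predicted runtime per solver
best = int(np.argmin(pred))      # pick the solver with lowest predicted runtime
print(f"selected solver index: {best}")
```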