Abstract: This paper presents a game theory-based method for predicting the outcomes of negotiation and group decision-making problems. We propose an extension to the BDM model to address problems where actors' positions are distributed over a position spectrum. We generalize the concept of position in the model to incorporate continuous positions, giving actors more flexibility in defining their targets. We explore several candidate position functions to study their role, and discuss appropriate distance measures for computing the distance between actors' positions. To validate the proposed extension, we demonstrate the trustworthiness of our model's performance and interpretation by replicating results based on data used in earlier studies.
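As a concrete illustration of continuous positions and a distance measure, here is a minimal Python sketch; the normalized distance, the linear utility form, the [0, 100] spectrum, and all function names are our own assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical illustration of continuous actor positions on a spectrum.
# The normalized distance and linear utility below are assumptions, not
# necessarily the exact position and distance functions of the extension.

def normalized_distance(x_i, x_j, lo=0.0, hi=100.0):
    """Absolute distance between two positions, scaled to [0, 1] over the spectrum."""
    return abs(x_i - x_j) / (hi - lo)

def position_utility(x_i, x_j, lo=0.0, hi=100.0):
    """Utility actor i assigns to position x_j: 1 at its own position, 0 at the far end."""
    return 1.0 - normalized_distance(x_i, x_j, lo, hi)

# Example: three actors holding continuous positions on a 0-100 spectrum.
positions = np.array([20.0, 55.0, 90.0])
pairwise = np.array([[position_utility(a, b) for b in positions] for a in positions])
print(np.round(pairwise, 2))
```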
Abstract: Pi-calculus is a formal method for describing and analyzing the behavior of large distributed and concurrent systems. It offers a conceptual framework for describing and analyzing concurrent systems whose configuration may change during computation. For all its advantages, pi-calculus provides no method for evaluating the performance of the systems it describes; nevertheless, performance is a crucial factor in the design of multi-process systems. The tools currently available for pi-calculus are high-level language tools for describing and analyzing systems, but no practical tool exists for pi-calculus-based performance evaluation. In this paper, performance evaluation is incorporated into pi-calculus by adding performance primitives and associating performance parameters with each action that takes place internally in a system. Using such parameters, designers can benchmark multi-process systems and compare the performance of different architectures against one another.
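To make the idea of associating performance parameters with actions concrete, the following is a minimal, hypothetical encoding in Python, not the paper's primitives or syntax: each action carries a cost, sequential composition sums costs, and parallel composition is dominated by its slowest branch.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical encoding: each action is annotated with a mean execution
# time. Sequential composition sums costs; parallel composition takes the
# slowest branch. This illustrates the idea, not the paper's primitives.

@dataclass
class Action:
    name: str
    cost_ms: float  # performance parameter associated with the action

def sequential_cost(actions: List[Action]) -> float:
    """Cost of a.b.c...0 : actions execute one after another."""
    return sum(a.cost_ms for a in actions)

def parallel_cost(branches: List[List[Action]]) -> float:
    """Cost of P | Q | ... : branches run concurrently, so the slowest dominates."""
    return max(sequential_cost(b) for b in branches)

# Example: compare a sequential pipeline against a two-way parallel split.
send, recv, work = Action("send", 2.0), Action("recv", 2.0), Action("work", 10.0)
print(sequential_cost([send, work, recv]))          # 14.0 ms
print(parallel_cost([[send, work], [work, recv]]))  # 12.0 ms
```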
Abstract: In this article, we have developed a game theory-based prediction tool, named Preana, built on a promising model developed by Professor Bruce Bueno de Mesquita. The first part of this work is dedicated to exploring the specifics of Mesquita's algorithm and reproducing the factors and features that have not been revealed in the literature. In addition, we have developed a learning mechanism to model players' reasoning ability when it comes to taking risks. Preana can predict the outcome of any issue with multiple stakeholders who have conflicting interests in economics, business, and political science. We have utilized game theory, expected utility theory, median voter theory, probability distributions, and reinforcement learning. We were able to reproduce Mesquita's reported results: we include two case studies from his publications and compare his results with Preana's. We have also applied Preana to Iran's 2013 presidential election to verify the accuracy of its prediction.
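As one illustration of the median-voter ingredient of such models, the sketch below computes a weighted median of actor positions with weights taken as capability × salience, a common BDM-style assumption; the code and names are ours, not Preana's internals.

```python
import numpy as np

# Hypothetical illustration of the median-voter component: the predicted
# outcome is the position backed by the greatest cumulative weight, here
# approximated as capability * salience. Not Preana's exact internals.

def weighted_median(positions, weights):
    """Position at which half the total weight lies on either side."""
    order = np.argsort(positions)
    pos, w = np.asarray(positions)[order], np.asarray(weights)[order]
    cum = np.cumsum(w)
    return pos[np.searchsorted(cum, 0.5 * cum[-1])]

positions  = [10.0, 40.0, 70.0, 95.0]   # stances on a policy spectrum
capability = [0.3, 0.5, 0.9, 0.2]       # each actor's resources
salience   = [0.8, 0.6, 1.0, 0.4]       # how much each actor cares
weights    = [c * s for c, s in zip(capability, salience)]
print(weighted_median(positions, weights))  # -> 70.0
```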
Abstract: Item response theory (IRT) is a modern test theory that has been used in various aspects of educational and psychological measurement. The fully Bayesian approach shows promise for estimating IRT models, but because it is computationally expensive, the procedure is limited in practical applications. It is hence important to seek ways to reduce the execution time, and a suitable solution is high performance computing. This study focuses on implementing the fully Bayesian algorithm for a conventional IRT model on a high performance parallel machine. Empirical results suggest that the parallel version of the algorithm achieves a considerable speedup, substantially reducing execution time.
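The parallelizable kernel in such a sampler is typically the per-person computation, which is independent across examinees. The sketch below is our own simplified illustration, splitting a two-parameter logistic (2PL) likelihood evaluation across worker processes; it mirrors the shape of the decomposition, not the paper's actual code.

```python
import numpy as np
from multiprocessing import Pool

# Simplified illustration: the per-person log-likelihood of a 2PL IRT model
# is independent across examinees, so it can be evaluated in parallel chunks.

def chunk_loglik(args):
    theta, a, b, y = args  # abilities for a chunk of persons, item params, responses
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_persons, n_items, n_workers = 10_000, 40, 4
    theta = rng.normal(size=n_persons)                  # person abilities
    a = rng.uniform(0.5, 2.0, n_items)                  # item discriminations
    b = rng.normal(size=n_items)                        # item difficulties
    y = rng.integers(0, 2, size=(n_persons, n_items))   # dummy 0/1 responses
    chunks = [(t, a, b, y_c) for t, y_c in zip(np.array_split(theta, n_workers),
                                               np.array_split(y, n_workers))]
    with Pool(n_workers) as pool:
        print(sum(pool.map(chunk_loglik, chunks)))      # total log-likelihood
```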
Abstract: The larger the data, structured or unstructured, the harder it is to understand and make use of. Feature selection is fundamental to machine learning: by removing irrelevant and redundant features, it dramatically reduces a learning algorithm's run time and leads to a more general concept. In this paper, we investigate feature selection through a neural network-based algorithm aided by a topology-optimizing genetic algorithm. We utilize NeuroEvolution of Augmenting Topologies (NEAT) to select the subset of features most relevant to the target concept. Discovery and improvement of solutions are two main goals of machine learning, but the accuracy of both depends on the dimensionality of the problem space. Although feature selection methods can improve this accuracy, the complexity of the problem can also affect their performance. Artificial neural networks are proven effective for feature elimination, but because most have a fixed topology, they lose accuracy when the problem has many local minima. To minimize this drawback, the topology of the neural network should be flexible and able to avoid local minima, especially when a feature is removed. In this work, the power of feature selection through NEAT is demonstrated. Compared with the evolution of fixed-structure networks, NEAT discovers significantly more sophisticated strategies. The results show that NEAT provides better accuracy than a conventional multi-layer perceptron and leads to improved feature selection.
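One way to read a selected feature subset off an evolved genome is to keep only the inputs that retain an enabled path to the output. The following self-contained sketch reduces a genome to hypothetical (source, target, enabled) connection tuples and performs that reachability check; it is not the API of any particular NEAT implementation.

```python
from collections import defaultdict

# Hypothetical illustration: after NEAT evolves a topology, treat an input
# feature as "selected" if its node still has an enabled path to the output.

def selected_features(connections, input_nodes, output_node):
    graph = defaultdict(list)
    for src, dst, enabled in connections:
        if enabled:
            graph[src].append(dst)

    def reaches_output(node, seen=None):
        seen = seen or set()
        if node == output_node:
            return True
        seen.add(node)
        return any(reaches_output(n, seen) for n in graph[node] if n not in seen)

    return [i for i in input_nodes if reaches_output(i)]

# Inputs 0-3, hidden node 4, output 5; feature 2's only link was disabled
# by evolution and feature 3 was never connected, so only 0 and 1 survive.
conns = [(0, 4, True), (1, 5, True), (4, 5, True), (2, 4, False)]
print(selected_features(conns, input_nodes=[0, 1, 2, 3], output_node=5))  # [0, 1]
```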