Determining the characteristics of astronomical objects has long been a major and vibrant activity in both astronomy and data science. Instead of manual inspection, various automated systems have been developed to meet this need, including the classification of light-curve profiles. A dedicated Kaggle competition, the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC), was launched to gather new ideas for tackling this task using data collected from the Large Synoptic Survey Telescope (LSST) project. Almost all proposed methods belong to the supervised family, with the common aim of categorizing each object into one of several predefined types. As the challenge focuses on developing a predictive model that remains robust on unseen data, these previous attempts all suffer from a lack of discriminative features, since the distributions of the training and actual test datasets differ substantially. As a result, well-known classification algorithms prove sub-optimal, and more sophisticated feature extraction techniques offer only a slight boost in predictive performance. Given this burden, the present research explores an unsupervised alternative to this difficult problem, on which common classifiers fail to reach the 50% accuracy mark. A clustering technique is exploited to transform the space of the training data, from which a more accurate classifier can be built. Beyond a single-clustering framework, which yields accuracy comparable to the front runners of supervised learning, a multiple-clustering alternative is introduced with improved performance: it raises the accuracy from 51.36%, obtained with simple clustering, to 58.32%. For this difficult problem, the result compares favorably with well-known models such as the support vector machine (SVM) at 51.80% and Naive Bayes (NB) at only 2.92%.
Funding: This work was funded by the Security BigData Fusion Project (Office of the Ministry of Higher Education, Science, Research and Innovation). The corresponding author is the project PI.
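The abstract does not spell out the pipeline, but the core idea, transforming the training space with one or several clusterings before fitting a classifier, can be sketched as follows. This is a minimal illustration only, assuming k-means distance features and an SVM back end; the function name cluster_transform, the choice of k values, and the scikit-learn stack are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a cluster-based space transformation; the paper's
# exact pipeline is not specified in the abstract. All parameter choices
# below are assumptions made for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cluster_transform(X_train, X_test, ks=(8, 16, 32), seed=0):
    """Map samples into a concatenated cluster-distance space.

    A single clustering corresponds to one k; the multiple-clustering
    variant concatenates distances from several clusterings (one per k).
    """
    train_parts, test_parts = [], []
    for k in ks:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed)
        # fit_transform / transform return distances to the k centroids,
        # so each clustering contributes k new features.
        train_parts.append(km.fit_transform(X_train))
        test_parts.append(km.transform(X_test))
    return np.hstack(train_parts), np.hstack(test_parts)

# Usage with placeholder data standing in for the PLAsTiCC features:
rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(200, 10)), rng.normal(size=(50, 10))
y_train = rng.integers(0, 3, size=200)

Z_train, Z_test = cluster_transform(X_train, X_test)
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(Z_train, y_train)
predictions = clf.predict(Z_test)
```

Restricting ks to a single value reproduces the simple-clustering setting, so the gap the abstract reports (51.36% versus 58.32%) corresponds, under these assumptions, to widening the transformed space with additional clusterings.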