We propose to use discriminative subgraphs to discover family photos from group photos in an efficient and effective way. Group photos are represented as face graphs by identifying social contexts such as age, gender, and face position. Previous work utilized bag-of-words models and considered frequent subgraphs from all group photos as features for classification. This approach, however, produces numerous subgraphs, resulting in high-dimensional feature vectors; furthermore, some of these subgraphs are not discriminative. To address these issues, we adopt a state-of-the-art frequent subgraph mining method that removes non-discriminative subgraphs. We also apply TF-IDF normalization, which is better suited to the bag-of-words model. To validate our method, we run experiments on two datasets. Our method consistently outperforms the previous method, achieving higher accuracy with lower feature dimensions. We also integrate our method with the recent Microsoft face recognition API and release it on a public website.
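The TF-IDF weighting of bag-of-subgraph features can be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: the function name, the input encoding (each photo as a list of mined subgraph IDs), and the smoothed idf variant are all hypothetical choices for the sketch.

```python
import math
from collections import Counter

def tfidf_features(photo_subgraphs, vocabulary):
    """Compute TF-IDF weighted bag-of-subgraphs vectors (illustrative sketch).

    photo_subgraphs: list of lists; each inner list holds the IDs of the
        discriminative subgraphs occurring in one group photo.
    vocabulary: ordered list of subgraph IDs (the mined feature set).
    Returns one feature vector per photo, aligned with `vocabulary`.
    """
    n_docs = len(photo_subgraphs)
    # Document frequency: in how many photos each subgraph appears at all.
    df = Counter()
    for subs in photo_subgraphs:
        df.update(set(subs))
    vectors = []
    for subs in photo_subgraphs:
        tf = Counter(subs)
        total = len(subs) or 1  # guard against photos with no subgraphs
        # Smoothed idf (as in common TF-IDF implementations) avoids
        # division by zero for vocabulary items absent from the corpus.
        vec = [(tf[s] / total) * (math.log((1 + n_docs) / (1 + df[s])) + 1)
               for s in vocabulary]
        vectors.append(vec)
    return vectors
```

For example, a subgraph that occurs in every photo receives a low idf weight, so frequent-but-common patterns contribute less to classification than rarer, more discriminative ones.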
Funding: supported in part by MSIP/IITP (Nos. R0126-16-1108, R0101-16-0176) and MSIP/NRF (No. 2013-067321).