Semantic transfer of kinship terms in address forms to nonkin has long been an intriguing topic for researchers from different fields, anthropologists and sociolinguists alike, whose investigations draw on data from various cultures. This study focuses on eight Chinese kinship terms used in twenty-one address forms for addressing nonkin in occupations and practices newly created during the recent economic reform in China, in an attempt to identify the driving force behind such transfer. Examining the use of these kinship terms in addressing nonkin during the economic reform, this study compares its findings with those of earlier studies and finds, surprisingly, that except for gender, the other distinctive features, such as consanguinity, affinity, seniority, and generation, have all become neutralized in the semantic transfer, which is driven chiefly by the nonkin's occupational status. The results demonstrate that the "categorical falsity" evidenced in the semantic transfer of kinship terms in address forms to nonkin is in essence the general public performing an attitudinal "speech act" about how the nonkin's occupational status is evaluated in the economic reform.
Traditional image-sentence cross-modal retrieval methods usually aim to learn consistent representations of heterogeneous modalities, so that instances in one modality can be retrieved according to a query from another modality. The basic assumption behind these methods is that parallel multi-modal data (i.e., different modalities of the same example are aligned) is available in advance. In other words, the image-sentence cross-modal retrieval task is a supervised task with the alignments as ground truths. However, in many real-world applications, it is difficult to align a large amount of parallel data for new scenarios due to the substantial labor cost, so only non-parallel multi-modal data is available and existing methods cannot be used directly. On the other hand, there often exists auxiliary parallel multi-modal data with similar semantics, which can assist the non-parallel data in learning consistent representations. Therefore, this paper addresses "Alignment Efficient Image-Sentence Retrieval" (AEIR), which resorts to auxiliary parallel image-sentence data as the source domain and takes the non-parallel data as the target domain. Unlike single-modal transfer learning, AEIR learns consistent image-sentence cross-modal representations for the target domain by transferring the alignments of the existing parallel data. Specifically, AEIR learns consistent image-sentence representations in the source domain with parallel data, while transferring alignment knowledge across domains by jointly optimizing a newly designed cross-domain cross-modal metric-learning constraint with an intra-modal domain adversarial loss. Consequently, consistent representations for the target domain can be learned effectively, considering both structure and semantic transfer. Furthermore, extensive experiments on different transfer scenarios validate that AEIR achieves better retrieval results than the baselines.
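The abstract describes a joint objective: a cross-modal metric-learning constraint on parallel source-domain pairs plus an intra-modal domain adversarial loss that aligns source and target distributions within each modality. The following is a minimal sketch in PyTorch of how such a joint objective could be wired up, assuming a shared embedding space with bidirectional hardest-negative ranking and a gradient-reversal-based domain discriminator per modality; all names here (GradReverse, cross_modal_triplet_loss, aeir_step, the alpha weight) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a joint metric + intra-modal domain adversarial objective.
# Not the paper's code; encoder architectures and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for the intra-modal domain adversarial loss."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lambd, None


def cross_modal_triplet_loss(img_emb, txt_emb, margin=0.2):
    """Bidirectional hinge ranking loss on aligned (parallel) source pairs."""
    scores = img_emb @ txt_emb.t()                  # cosine similarities (inputs L2-normalized)
    pos = scores.diag().view(-1, 1)
    cost_im = (margin + scores - pos).clamp(min=0)  # image -> hardest negative sentence
    cost_tx = (margin + scores - pos.t()).clamp(min=0)  # sentence -> hardest negative image
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_im.masked_fill(mask, 0).max(1)[0].mean() + \
           cost_tx.masked_fill(mask, 0).max(0)[0].mean()


class DomainDiscriminator(nn.Module):
    """Predicts source vs. target domain from a single-modality embedding."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 2))

    def forward(self, emb, lambd=1.0):
        return self.net(GradReverse.apply(emb, lambd))


def aeir_step(img_enc, txt_enc, disc_img, disc_txt,
              src_images, src_sentences, tgt_images, tgt_sentences,
              lambd=1.0, alpha=0.1):
    """One training step: metric loss on parallel source pairs plus adversarial
    alignment of source/target distributions within each modality."""
    s_img = F.normalize(img_enc(src_images), dim=-1)
    s_txt = F.normalize(txt_enc(src_sentences), dim=-1)
    t_img = F.normalize(img_enc(tgt_images), dim=-1)
    t_txt = F.normalize(txt_enc(tgt_sentences), dim=-1)

    # (1) cross-modal metric learning on the parallel source-domain pairs
    metric = cross_modal_triplet_loss(s_img, s_txt)

    # (2) intra-modal domain adversarial losses (source label 0, target label 1)
    def adv(disc, src, tgt):
        logits = disc(torch.cat([src, tgt]), lambd)
        labels = torch.cat([torch.zeros(len(src)), torch.ones(len(tgt))]).long().to(logits.device)
        return F.cross_entropy(logits, labels)

    return metric + alpha * (adv(disc_img, s_img, t_img) + adv(disc_txt, s_txt, t_txt))
```

In this reading, the metric term transfers the alignment knowledge learned from the parallel source pairs, while the two discriminators (one per modality) push the target-domain embeddings toward the source-domain embedding distributions so that the learned cross-modal structure also holds for the non-parallel target data.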
Funding: supported by the National Key R&D Program of China (2022YFF0712100), the National Natural Science Foundation of China (Grant Nos. 62006118, 62276131, 62006119), the Natural Science Foundation of Jiangsu Province of China (BK20200460), the Jiangsu Shuangchuang (Mass Innovation and Entrepreneurship) Talent Program, the Young Elite Scientists Sponsorship Program by CAST, and the Fundamental Research Funds for the Central Universities (Nos. NJ2022028, 30922010317).