Abstract: By lifelogging, we understand a specific, very recent phenomenon of digital technology, which falls within the range of practices of the quantified self. It is a complex form of self-management through self-monitoring and self-tracking practices, combining wearable computers that measure psycho-physical performance with specific apps for processing, selecting, and describing the collected data, possibly in combination with video recordings. Given that lifelogging is becoming increasingly widespread in technologically advanced societies and that practices related to it are becoming part of most people's everyday lives, it is more important than ever to gain an understanding of the phenomenon. In this paper, I am particularly interested in exploring the transformations in the perception, comprehension, and construction of the self, and hence in the subjectification practices, that derive from the new digital technologies, and especially from lifelogging.
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center Support Program (IITP-2020-2015-0-00742), the Artificial Intelligence Graduate School Program (Sungkyunkwan University, 2019-0-00421), and the ICT Creative Consilience Program (IITP-2020-2051-001) supervised by the IITP; also supported by the NRF of Korea (2019R1C1C1008956, 2018R1A5A1059921) to J.J.Whan.
Abstract: Lifelog is a digital record of an individual's daily life. It collects, records, and archives a large amount of unstructured data; therefore, techniques are required to organize and summarize those data for easy retrieval. Lifelogging has been utilized for diverse applications including healthcare, self-tracking, and entertainment, among others. With regard to image-based lifelogging, even though most users prefer to present photos with facial expressions that allow us to infer their emotions, there have been few studies on lifelogging techniques that focus upon users' emotions. In this paper, we develop a system that extracts users' own photos from their smartphones and configures their lifelogs with a focus on their emotions. We design an emotion classifier based on convolutional neural networks (CNN) to predict the users' emotions. To train the model, we create a new dataset by collecting facial images from the CelebFaces Attributes (CelebA) dataset and labeling their facial emotion expressions, and by integrating parts of the Radboud Faces Database (RaFD). Our dataset consists of 4,715 high-resolution images. We propose the Representative Emotional Data Extraction Scheme (REDES) to select representative photos by inferring users' emotions from their facial expressions. In addition, we develop a system that allows users to easily configure diaries for a special day and summarize their lifelogs. Our experimental results show that our method effectively incorporates emotions into the lifelog, allowing an enriched experience.
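The representative-photo selection step described above can be sketched in outline. The following is a minimal, hypothetical illustration, not the authors' actual REDES implementation (whose details are not given here): it assumes an upstream emotion classifier has already produced per-photo probabilities over a fixed set of emotion classes, and it picks the most confident photo for each emotion as the representative set.

```python
# Hypothetical sketch of representative-photo selection by emotion.
# The label set and probability values below are illustrative assumptions,
# not taken from the paper's dataset or classifier.

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # illustrative label set

def select_representatives(photos):
    """photos: list of (photo_id, {emotion: probability}) pairs.
    Returns {emotion: photo_id}, mapping each emotion to the photo
    whose predicted probability for that emotion is highest."""
    best = {}  # emotion -> (photo_id, probability)
    for photo_id, probs in photos:
        for emotion in EMOTIONS:
            p = probs.get(emotion, 0.0)
            if emotion not in best or p > best[emotion][1]:
                best[emotion] = (photo_id, p)
    return {e: pid for e, (pid, _) in best.items()}

# Example: four photos with (made-up) per-emotion probabilities.
photos = [
    ("img_001.jpg", {"happy": 0.91, "sad": 0.02, "angry": 0.01, "neutral": 0.06}),
    ("img_002.jpg", {"happy": 0.10, "sad": 0.75, "angry": 0.05, "neutral": 0.10}),
    ("img_003.jpg", {"happy": 0.30, "sad": 0.05, "angry": 0.60, "neutral": 0.05}),
    ("img_004.jpg", {"happy": 0.05, "sad": 0.05, "angry": 0.05, "neutral": 0.85}),
]
reps = select_representatives(photos)
```

A real system would of course rank by more than raw confidence (e.g., image quality or diversity), but the argmax-per-class structure conveys the core idea of extracting emotionally representative data.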
Abstract: In social science, health care, digital therapeutics, etc., smartphone data have played important roles in inferring users' daily lives. However, smartphone data collection systems could not be used effectively and widely because they did not exploit any Internet of Things (IoT) standards (e.g., oneM2M) or class labeling methods for machine learning (ML) services. Therefore, in this paper, we propose a novel Android IoT lifelog system complying with the oneM2M standards to collect various lifelog data from smartphones and provide both manual and automated class labeling methods for inference of users' daily lives. The proposed system consists of an Android IoT client application, a oneM2M-compliant IoT server, and an ML server, whose high-level functional architecture was carefully designed to be open, accessible, and internationally recognized in accordance with the oneM2M standards. In particular, we explain implementation details of the activity diagrams for the Android IoT client application, the primary component of the proposed system. Experimental results verified that this application could work with the oneM2M-compliant IoT server normally and provide the corresponding class labels properly. As an application of the proposed system, we also propose motion inference based on three multi-class ML classifiers (i.e., k-nearest neighbors, Naive Bayes, and support vector machine), created using only motion and location data (i.e., acceleration force, gyroscope rate of rotation, and speed) and motion class labels (i.e., driving, cycling, running, walking, and stilling). When comparing the confusion matrices of the ML classifiers, the k-nearest neighbors classifier outperformed the other two overall. Furthermore, we evaluated its output quality by analyzing the receiver operating characteristic (ROC) curves with area under the curve (AUC) values. The AUC values of the ROC curves for all motion classes were more than 0.9, and the macro-average and micro-average ROC curves achieved very high AUC values of 0.96 and 0.99, respectively.
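The k-nearest-neighbors motion inference described above can be illustrated with a minimal sketch: three sensor features (acceleration force, gyroscope rate of rotation, speed) vote on a motion class by majority among the nearest training samples. The training values below are synthetic placeholders chosen for illustration, not the authors' data, and the distance metric and k=3 are assumptions; this is a toy, not the trained model from the paper.

```python
import math
from collections import Counter

# Toy training set: (acceleration force, gyroscope rate of rotation, speed)
# feature vectors with motion class labels. All values are synthetic and
# purely illustrative; real samples would come from smartphone sensors.
TRAIN = [
    ((0.1, 0.05, 0.0), "stilling"),
    ((0.2, 0.10, 0.0), "stilling"),
    ((1.0, 0.50, 1.4), "walking"),
    ((1.2, 0.60, 1.5), "walking"),
    ((2.5, 1.20, 3.0), "running"),
    ((2.8, 1.40, 3.3), "running"),
    ((1.5, 0.80, 5.5), "cycling"),
    ((1.6, 0.90, 6.0), "cycling"),
    ((0.3, 0.10, 15.0), "driving"),
    ((0.4, 0.15, 20.0), "driving"),
]

def knn_predict(x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples under Euclidean distance."""
    nearest = sorted(TRAIN, key=lambda sample: math.dist(x, sample[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

For instance, `knn_predict((1.1, 0.55, 1.45))` falls between the two walking samples and is classified as "walking". In practice the features would be normalized before computing distances, since speed spans a much larger range than the gyroscope rate.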