Abstract: BACKGROUND Preoperative evaluation of the future remnant liver reserve is important for safe hepatectomy. If the remnant is small, preoperative portal vein embolization (PVE) is useful. Liver volume analysis has been the primary method of preoperative evaluation, although functional examination may be more accurate. We have used functional evaluation of the liver, based on the indocyanine green plasma clearance rate (KICG) and 99mTc-galactosyl human serum albumin single-photon emission computed tomography (99mTc-GSA SPECT), for safe hepatectomy. AIM To analyze the safety of our institution's system for evaluating the remnant liver reserve. METHODS We retrospectively reviewed the records of 23 patients who underwent preoperative PVE. Two types of remnant liver KICG were defined as follows: anatomical volume remnant KICG (a-rem-KICG), calculated as the remnant liver anatomical volume rate × KICG; and functional volume remnant KICG (f-rem-KICG), calculated as the remnant liver functional volume rate based on 99mTc-GSA SPECT × KICG. If either of the remnant liver KICG values was >0.05, hepatectomy was performed. Perioperative factors were analyzed. We defined the marginal group as patients with an a-rem-KICG of <0.05 and an f-rem-KICG of >0.05, and compared postoperative outcomes between the marginal and not-marginal (both a-rem-KICG and f-rem-KICG >0.05) groups. RESULTS All 23 patients underwent the planned hepatectomy. Right hepatectomy, right trisectionectomy, and left trisectionectomy were performed in 16, 6, and 1 cases, respectively. The mean blood loss and operative time were 576 mL and 474 min, respectively. The increase in f-rem-KICG after PVE was significantly larger than that in a-rem-KICG (0.034 vs 0.012, P = 0.0273). The not-marginal and marginal groups comprised 17 (73.9%) and 6 (26.1%) patients, respectively. Complications of Clavien-Dindo grade II or higher and post-hepatectomy liver failure were observed in six (26.1%) and one (grade A, 4.3%) patients, respectively. The 90-d mortality was zero. The marginal group showed no significant differences in postoperative outcomes (prothrombin time/international normalised ratio, total bilirubin, complications, post-hepatectomy liver failure, hospital stay, and 90-d mortality) compared with the not-marginal group. CONCLUSION Functional evaluation of the remnant liver enabled safe hepatectomy and may extend the indication for hepatectomy after PVE treatment.
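To make the decision rule in METHODS concrete, the following minimal Python sketch computes the two remnant-KICG indices and applies the 0.05 cut-off described above; the function names and the example input values are illustrative, not taken from the study.

```python
# Illustrative sketch of the remnant-liver KICG criteria described in the abstract.
# The index definitions and the 0.05 cut-off follow the METHODS section; the
# example values below are hypothetical.

def remnant_kicg_indices(kicg, anatomical_volume_rate, functional_volume_rate):
    """Return (a-rem-KICG, f-rem-KICG) for a given whole-liver KICG.

    anatomical_volume_rate: remnant / total liver volume from CT volumetry.
    functional_volume_rate: remnant / total functional volume from 99mTc-GSA SPECT.
    """
    a_rem_kicg = anatomical_volume_rate * kicg
    f_rem_kicg = functional_volume_rate * kicg
    return a_rem_kicg, f_rem_kicg


def hepatectomy_indicated(a_rem_kicg, f_rem_kicg, cutoff=0.05):
    # Resection was planned if either remnant index exceeded the cut-off.
    return a_rem_kicg > cutoff or f_rem_kicg > cutoff


# Example: a "marginal" case, where only the functional index clears the cut-off.
a_rem, f_rem = remnant_kicg_indices(kicg=0.15, anatomical_volume_rate=0.30,
                                    functional_volume_rate=0.40)
print(a_rem, f_rem, hepatectomy_indicated(a_rem, f_rem))  # 0.045 0.06 True
```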
Abstract: With the continuing development and improvement of genome-wide techniques, a great number of candidate genes are being discovered. How to identify the most likely disease genes among a large number of candidates has become a fundamental challenge in human health. A common view is that genes related to a specific or similar disease tend to reside in the same neighbourhood of biomolecular networks. Recently, many methods have been developed based on such observations to tackle this challenge. In this review, we first introduce the concept of disease genes, their properties, and the data available for identifying them. We then review recent computational approaches for prioritizing candidate disease genes based on Protein-Protein Interaction (PPI) networks and examine their advantages and disadvantages. Furthermore, existing software tools and network resources are summarized. Finally, we discuss key issues in prioritizing candidate disease genes and point out some future research directions.
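As an illustration of the guilt-by-association idea behind these PPI-based approaches, the sketch below scores candidates with a random walk with restart from known disease genes on a toy network. The gene names, edges, and restart parameter are invented for the example, and this is only one of the many prioritization schemes such a review covers.

```python
# Toy illustration of network-based candidate gene prioritization:
# a random walk with restart (RWR) from known disease genes on a small,
# made-up PPI network. Gene names and edges are hypothetical.
import numpy as np

genes = ["G1", "G2", "G3", "G4", "G5"]
edges = [("G1", "G2"), ("G2", "G3"), ("G3", "G4"), ("G2", "G5")]
seeds = {"G1"}  # known disease genes

idx = {g: i for i, g in enumerate(genes)}
A = np.zeros((len(genes), len(genes)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
W = A / A.sum(axis=0, keepdims=True)          # column-normalised transition matrix

p0 = np.array([1.0 if g in seeds else 0.0 for g in genes])
p0 /= p0.sum()
p, restart = p0.copy(), 0.7
for _ in range(100):                           # iterate to approximate convergence
    p = (1 - restart) * W @ p + restart * p0

# Candidates closer to the seed genes in the network receive higher scores.
ranking = sorted((g for g in genes if g not in seeds),
                 key=lambda g: p[idx[g]], reverse=True)
print(ranking)
```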
Funding: Supported by the National Natural Science Foundation of China (No. 61532004).
Abstract: The Internet-based cyber-physical world has profoundly changed the information environment for the development of artificial intelligence (AI), bringing a new wave of AI research and promoting it into the new era of AI 2.0. As one of the most prominent characteristics of research in the AI 2.0 era, crowd intelligence has attracted much attention from both industry and the research community. Specifically, crowd intelligence provides a novel problem-solving paradigm that gathers the intelligence of crowds to address challenges. In particular, owing to the rapid development of the sharing economy, crowd intelligence has not only become a new approach to solving scientific challenges, but has also been integrated into all kinds of application scenarios in daily life, e.g., online-to-offline (O2O) applications, real-time traffic monitoring, and logistics management. In this paper, we survey existing studies of crowd intelligence. First, we describe the concept of crowd intelligence and explain its relationship to existing related concepts, e.g., crowdsourcing and human computation. Then, we introduce four categories of representative crowd intelligence platforms. We summarize three core research problems and the state-of-the-art techniques of crowd intelligence. Finally, we discuss promising future research directions for crowd intelligence.
Funding: Supported by the European Community's Seventh Framework Programme (No. 338164, ERC Starting Grant iHEARu).
Abstract: In this contribution, we present iHEARu-PLAY, an online, multi-player platform for crowdsourced database collection and labelling, including the voice analysis application (VoiLA), a free web-based speech classification tool designed to educate iHEARu-PLAY users about state-of-the-art speech analysis paradigms. Through this associated speech analysis web interface, VoiLA also encourages users to take an active role in improving the service by providing labelled speech data. The platform allows users to record and upload voice samples directly from their browser, which are then analysed in a state-of-the-art classification pipeline. A set of pre-trained models targeting a range of speaker states and traits, such as gender, valence, arousal, dominance, and 24 different discrete emotions, is employed. The analysis results are visualised so that they are easily interpretable by laypersons, giving users unique insights into how their voice sounds. We assess the effectiveness of iHEARu-PLAY and its integrated VoiLA feature via a series of user evaluations, which indicate that it is fun and easy to use, and that it provides accurate and informative results.
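As a rough illustration of the kind of pipeline described above (not the actual iHEARu-PLAY/VoiLA backend), the sketch below extracts simple acoustic features from an uploaded clip and applies pre-trained per-target classifiers; the file names, model paths, and feature set are hypothetical.

```python
# Illustrative sketch of a speech analysis pipeline: acoustic features are
# extracted from an uploaded clip and fed to pre-trained classifiers for
# speaker states/traits. Paths and targets below are hypothetical.
import librosa
import numpy as np
import joblib

def extract_features(wav_path):
    # Simple functionals (mean/std) over MFCCs as a stand-in for a full
    # state-of-the-art acoustic feature set.
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def analyse(wav_path, model_paths):
    # model_paths maps a target (e.g. "gender", "arousal") to a pickled,
    # pre-trained scikit-learn classifier.
    feats = extract_features(wav_path).reshape(1, -1)
    return {target: joblib.load(path).predict(feats)[0]
            for target, path in model_paths.items()}

results = analyse("upload.wav", {"gender": "gender_model.pkl",
                                 "arousal": "arousal_model.pkl"})
print(results)  # e.g. {"gender": "female", "arousal": "high"}
```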
Funding: This work was partially supported by the STARS4ALL project (H2020-688135), co-funded by the European Commission.
Abstract: With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions to increase the coverage and correctness of data sets, to add missing knowledge, and to identify and remove errors. Several approaches, mostly relying on machine learning and natural language processing techniques, have been proposed to address this refinement goal; they usually need a partial gold standard, i.e., some "ground truth" to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions. In this paper, we present an open source software framework to build Games with a Purpose (GWAP) for linked data refinement, i.e., Web applications to crowdsource partial ground truth by motivating user participation through a fun incentive. We detail the impact of this new resource by explaining the specific data linking "purposes" supported by the framework (creation, ranking, and validation of links) and by defining the respective crowdsourcing tasks to achieve those goals. We also introduce our approach for incremental truth inference over the contributions provided by players of Games with a Purpose: we motivate the need for such a method with the specificity of GWAP vs. traditional crowdsourcing; we explain and formalize the proposed process, explain its positive consequences, and illustrate the results of an experimental comparison with state-of-the-art approaches. To show this resource's versatility, we describe a set of diverse applications that we built on top of it; to demonstrate its reusability and extensibility potential, we provide references to detailed documentation, including a full tutorial which in a few hours guides new adopters to customize and adapt the framework to a new use case.
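To give a flavour of incremental truth inference (in a much simplified form compared with the formalized process referenced above), the sketch below aggregates player votes on a candidate link as they arrive and resolves the link once the running majority is confident enough; the threshold, minimum vote count, and vote values are hypothetical.

```python
# Minimal illustration of the *idea* of incremental truth inference for link
# validation: player votes on a candidate link are aggregated as they arrive,
# and the link is resolved as soon as the running estimate is confident
# enough. A simplified stand-in, not the framework's actual algorithm.

def incremental_truth_inference(votes, threshold=0.8, min_votes=3):
    """votes: iterable of booleans (True = player confirms the link).

    Returns ("accepted"/"rejected", votes consumed) once the majority share
    reaches the threshold, or ("undecided", n) if the stream ends first.
    """
    yes = total = 0
    for vote in votes:
        total += 1
        yes += int(vote)
        share = yes / total
        if total >= min_votes and (share >= threshold or 1 - share >= threshold):
            label = "accepted" if share >= threshold else "rejected"
            return label, total
    return "undecided", total

print(incremental_truth_inference([True, True, True, False, True]))
# -> ("accepted", 3): the link is resolved after three concordant votes,
#    so no further players need to be asked about it.
```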
Abstract: Can WiFi signals be used for sensing purposes? The growing PHY-layer capabilities of WiFi have made it possible to reuse WiFi signals for both communication and sensing. Sensing via WiFi would enable remote sensing without wearable sensors, simultaneous perception and data transmission without extra communication infrastructure, and contactless sensing in a privacy-preserving mode. Owing to the popularity of WiFi devices and the ubiquitous deployment of WiFi networks, WiFi-based sensing networks, if fully connected, would potentially rank as one of the world's largest wireless sensor networks. Yet the concept of wireless and sensorless sensing is not a simple combination of WiFi and radar. It seeks breakthroughs beyond dedicated radar systems and aims to balance low cost with high accuracy, to meet the rising demand for pervasive environment perception in everyday life. Despite increasing research interest, wireless sensing is still in its infancy. Through introductions to basic principles and working prototypes, we review the feasibility and limitations of wireless, sensorless, and contactless sensing via WiFi. We envision this article as a brief primer on wireless sensing for interested readers to explore this open and largely unexplored field and to create next-generation wireless and mobile computing applications.
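As a toy illustration of one common sensing principle (not a production system), the sketch below flags motion when the short-term variability of channel amplitude measurements at a receiver exceeds a threshold; the synthetic samples, window size, and threshold are invented for the example.

```python
# Toy sketch of a basic WiFi-sensing approach: detecting motion from the
# temporal variability of channel measurements (e.g. per-packet amplitudes)
# at a receiver. Samples are synthetic; real systems read channel state from
# the WiFi NIC and use far more sophisticated processing.
import numpy as np

def motion_detected(amplitudes, window=50, threshold=0.2):
    """amplitudes: 1-D array of per-packet channel amplitudes for one subcarrier.

    A static environment yields a nearly constant channel; movement of people
    or objects perturbs the multipath and raises the short-term variance.
    """
    recent = np.asarray(amplitudes)[-window:]
    return np.std(recent) > threshold

rng = np.random.default_rng(0)
static = 1.0 + 0.02 * rng.standard_normal(200)            # quiet room
moving = (1.0 + 0.5 * np.sin(np.linspace(0, 20, 200))
          + 0.02 * rng.standard_normal(200))               # person walking
print(motion_detected(static), motion_detected(moving))    # False True
```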