Objectives: To investigate lipid and diabetic profiles of school teachers in Kabul, Afghanistan, who face food insecurity, and examine the association of those with the teachers’ knowledge of non-communicable disease...Objectives: To investigate lipid and diabetic profiles of school teachers in Kabul, Afghanistan, who face food insecurity, and examine the association of those with the teachers’ knowledge of non-communicable diseases (NCDs). Methods: A survey to examine biochemical indicators of NCDs (triglycerides (TG), total cholesterol (TC), high-density lipoprotein (HDL), hemoglobin A1c (HbA1c), blood pressure, height, weight, waist circumference), food insecurity, lifestyle and knowledge of NCDs was conducted among 600 school teachers. Analyses were made of biochemical indicators of NCDs, blood pressure, metabolic syndrome, obesity, and subject’s lifestyle in relation to food security and the subject’s knowledge of NCDs. Results: Thirty-nine percent of school teachers experienced food insecurity. The percentage of TC ≥ 200 mg/dL;HbA1c ≥ 5.5%;hypertension and metabolic syndrome were 20.2%, 29.7%, 32.2% and 33.7%, respectively. Food insecurity was associated with lower fruit and vegetable consumption and higher potato consumption. Food insecurity was associated with increased TC (AOR 2.03;95%CI: 1.23 - 3.34), decreased HDL (AOR 1.70;95%CI: 1.12 - 2.58), increased HbA1c (AOR 1.73;95%CI: 1.14 - 2.64), hypertension (AOR 1.68;95%CI: 1.01 - 2.80) and diagnosis of metabolic syndrome (AOR 1.78;95%CI: 1.18 - 2.68), after adjustment by demographic, socioeconomic and lifestyle variables. Among people living under condition of food insecurity, greater NCD knowledge was associated with smaller prevalence of TG ≥ 150 mg/dL, HDL Conclusions: Under conditions of food insecurity, diets have less variety and individuals are more likely to exhibit biomedical risk factors of NCDs. Even under conditions of food insecurity, people with knowledge of NCDs may have better coping strategies for their choice of lifestyles and exhibited a lower percentage of risk factors of NCDs.展开更多
Background: Increase of elderly people living alone has been a concern even in the Philippines where filial piety is widely practiced with the support of large number of young people. Objectives of this study were to ...Background: Increase of elderly people living alone has been a concern even in the Philippines where filial piety is widely practiced with the support of large number of young people. Objectives of this study were to examine the relationships between living alone with self-reported illness among community elderly and living alone with health facility utilization among sick community elderly in the Philippines. Methods: Data of 5577 elderly (aged ≥ 60 years) from the 2013 Philippines National Demographic and Health Survey were retrieved. Variables on living arrangements, self-reported illness, frequency of health facility visits, and admission to a health facility were used for analysis. Results: Among the elderly included in the analysis, 5.0% of them were living alone. Percentage of living alone was larger among rural elderly (6.0%) compared with urban elderly (3.6%);and among poor elderly (9.0%) compared with rich elderly (2.8%). Results of adjusted multivariate logistic regression analysis showed that the elderly living alone were more likely to report suffering from common colds (AOR 2.12;95% CI 1.57 - 2.86) or non-communicable diseases (AOR 2.18;95% CI 1.55 - 3.06), regardless of their socioeconomic status or insurance coverage. Among those who reported illness, the elderly living alone were more likely to visit a health facility with non-communicable disease (AOR 1.95;95% CI 1.22 - 3.14), after adjustment of other variables. Although elderly living alone who reported illness were likely to be admitted in a health facility, statistically significant association was not observed. Conclusion: Elderly living alone are more likely to report self-reported illness and use health facilities when they recognize their illness.展开更多
Optical spectroscopy devices are being developed and tested for the screening and diagnosis of oral precancer and cancer lesions. This study reports a device that uses white light for detection of suspicious lesions a...Optical spectroscopy devices are being developed and tested for the screening and diagnosis of oral precancer and cancer lesions. This study reports a device that uses white light for detection of suspicious lesions and green–amber light at 545 nm that detect tissue vascularity on patients with several suspicious oral lesions. The clinical grading of vascularity was compared to the histological grading of the biopsied lesions using specific biomarkers. Such a device, in the hands of dentists and other health professionals, could greatly increase the number of oral cancerous lesions detected in early phase. The purpose of this study is to correlate the clinical grading of tissue vascularity in several oral suspicious lesions using the IdentafiH system with the histological grading of the biopsied lesions using specific vascular markers. Twenty-one patients with various oral lesions were enrolled in the study. The lesions were visualized using IdentafiH device with white light illumination, followed by visualization of tissue autofluorescence and tissue reflectance. Tissue biopsied was obtained from the all lesions and both histopathological and immunohistochemical studies using a vascular endothelial biomarker(CD34) were performed on these tissue samples. The clinical vascular grading using the green–amber light at 545 nm and the expression pattern and intensity of staining for CD34 in the different biopsies varied depending on lesions, grading ranged from 1 to3. The increase in vascularity was observed in abnormal tissues when compared to normal mucosa, but this increase was not limited to carcinoma only as hyperkeratosis and other oral diseases, such as lichen planus, also showed increase in vascularity. Optical spectroscopy is a promising technology for the detection of oral mucosal abnormalities; however, further investigations with a larger population group is required to evaluate the usefulness of these devices in differentiating benign lesions from potentially malignant lesions.展开更多
BACKGROUND Liver transplantation is the accepted standard of care for end-stage liver disease due to a variety of etiologies including decompensated cirrhosis, fulminant hepatic failure, and primary hepatic malignancy...BACKGROUND Liver transplantation is the accepted standard of care for end-stage liver disease due to a variety of etiologies including decompensated cirrhosis, fulminant hepatic failure, and primary hepatic malignancy. There are currently over 13000 candidates on the liver transplant waiting list emphasizing the importance of rigorous patient selection. There are few studies regarding the impact of additional psychosocial barriers to liver transplant including financial hardship, lack of caregiver support, polysubstance abuse, and issues with medical noncompliance. We hypothesized that patients with certain psychosocial comorbidities experienced worse outcomes after liver transplantation. AIM To assess the impact of certain pre-transplant psychosocial comorbidities on outcomes after liver transplantation. METHODS A retrospective analysis was performed on all adult patients from 2012-2016. Psychosocial comorbidities including documented medical non-compliance, polysubstance abuse, financial issues, and lack of caregiver support were collected. The primary outcome assessed post-transplantation was survival. Secondary outcomes measured included graft failure, episodes of acute rejection, psychiatric decompensation, number of readmissions, presence of infection, recidivism for alcohol and other substances, and documented caregiver support failure.RESULTS For the primary outcome, there were no differences in survival. Patients with a history of psychiatric disease had a higher incidence of psychiatric decompensation after liver transplantation (19% vs 10%, P = 0.013). Treatment of psychiatric disorders resulted in a reduction of the incidence of psychiatric decompensation (21% vs 11%, P = 0.022). Patients with a history of polysubstance abuse in the transplant evaluation had a higher incidence of substance abuse after transplantation (5.8% vs 1.2%, P = 0.05). In this cohort, 15 patients (3.8%) were found to have medical compliance issues in the transplant evaluation. Of these specific patients, 13.3% were found to have substance abuse after transplantation as opposed to 1.3% in patients without documented compliance issues (P = 0.03). CONCLUSION Patients with certain psychosocial comorbidities had worse outcomes following liver transplantation. Further prospective and multi-center studies are warranted to properly determine guidelines for liver transplantation regarding this highrisk population.展开更多
Worldwide epidemiological reports assert that drinking water is a source for infections and Legionella control represents a critical issue in healthcare settings. Chemical disinfections of water networks are control m...Worldwide epidemiological reports assert that drinking water is a source for infections and Legionella control represents a critical issue in healthcare settings. Chemical disinfections of water networks are control measures that need to be fine-tuned to obtain satisfactory results in large buildings over prolonged time periods. Aim of study is the evaluation of the effect of anolyte and chlorine dioxide, applied in two different hot water networks of a nursing home to manage Legionella risk. Nursing home has two buildings (A and B), with the same point of aqueduct water entrance. From June 2016, following a shock chlorination, the continuous disinfections with chlorine dioxide and anolyte were applied in hot networks of building A and B, respectively. Hot water was sampled at the central heating system and at two points of use for Legionella research, while chemical tests of manganese (Mn), iron (Fe), zinc (Zn) and trihalomethanes compounds (THM) were implemented to evaluate the disinfection by-products presence. Before chlorination Legionella pneumophila sg1 was recovered with a mean count of 2.4 × 104 CFU/L, while chemical compounds concentrations were within the law limits (Directive 98/83/EC). Then the disinfections Legionella was not recovered in both hot water plants. After the disinfection with chlorine dioxide (from June 2016 to May 2018), a statistically significant increase of iron, zinc and THM concentrations was detected in building A (p = 0.012;p = 0.004;p = 0.008). Both disinfectants appear effective against Legionella spp. growth in water network, but anolyte ensures a lower disinfection by-products release.展开更多
Objective: To identify the patterns of tuberculosis (TB) notification rates in Phnom Penh and examine their relationships with the population density, socioeconomic, residential and occupational characteristics. Metho...Objective: To identify the patterns of tuberculosis (TB) notification rates in Phnom Penh and examine their relationships with the population density, socioeconomic, residential and occupational characteristics. Methods: The numbers of total TB and smear-positive pulmonary TB cases reported between January 1, 2010 and December 31, 2012 in Phnom Penh were counted for 76 communes in Cambodia according to TB registration records filed under the national TB programme. Population, socioeconomic, residential and occupational characteristics for the communes were obtained from the 2008 General Population Census of Cambodia. The following indicators were developed for individual communes: smear-positive pulmonary TB notification rate (SPTB-NR) (per 100,000 population, in 36 months), population density (per km2), socioeconomic indicators, residential characteristics and occupational characteristics. Geographic patterns of these indicators and characteristics were analysed using ArcGIS. Associations between SPTB-NR and characteristics were analysed. Results: A total of 4102 TB cases were reported in 36 months, including 2046 SPTB cases. SPTB-NR for Phnom Penh was 135 cases per 100,000;median SPTB-NR by commune was 100. SPTB-NR was higher in outlying areas than in city centre communes;population density was high in the centre and low in the outlying areas. SPTB-NR was associated with larger percentage of household members per room (PR: 2.81, 95%CI: 2.68 - 2.93), percentage of population resident in the same commune Conclusions: The SPTB-NR in Phnom Penh did not follow the pattern of population density. Socioeconomic, residential and occupational characteristics by commune were associated with SPTB-NR. Development of prevention and control programmes by considering commune level characteristics is encouraged.展开更多
In 2006, Methodist Le Bonheur Healthcare (MLH) created the Congregational Health Network (CHN, TM pending) which works closely with clergy in the most under-served zip codes of the city to improve access to care and o...In 2006, Methodist Le Bonheur Healthcare (MLH) created the Congregational Health Network (CHN, TM pending) which works closely with clergy in the most under-served zip codes of the city to improve access to care and overall health status of the population. To best coordinate CHN resources around high-utilization and address the largest health needs in the community, MLH applied hot spotting and geographic information system (GIS) spatial analysis techniques. These techniques were coupled with the community health needs assessment process at MLH and qualitative, participatory research findings captured in collaboration with church and other community partners. The methodology, which we call “participatory hot spotting,” is based upon the Camden Model, which leverages hot spotting to assess and prioritize community need in the provision of charity care, but adds a participatory, qualitative layer. In this study, spatial analysis was employed to evaluate hospital-based inpatient and outpatient utilization and define costs of charity care for the health system by area of residence. Ten zip codes accounted for 56% of total system charity care costs. Among these, the largest zip code, as defined by a percentage of total charity costs, contributed 18% of the inpatient utilization and 17% of the cost. Further, this zip code (38109) contributed 69% of the inpatient and 76% of the outpatient charity care volume and accounted for 75% of inpatient and 76% of outpatient charity care costs for the system. These findings were combined with grassroots intelligence that enabled a partnership with clergy and community members and Cigna Healthcare to better coordinate care in a place-based population health management strategy. Presentations of the analytics have subsequently been made to HHS and the CDC, referred to by some as the “Memphis Model”.展开更多
Neighborhood socioeconomic deprivation has been associated with health behaviors and outcomes. However, neighborhood socioeconomic status has been measured inconsistently across studies. It remains unclear whether app...Neighborhood socioeconomic deprivation has been associated with health behaviors and outcomes. However, neighborhood socioeconomic status has been measured inconsistently across studies. It remains unclear whether appropriate socioeconomic indicators vary over geographic areas and geographic levels. The aim of this study is to compare the composite socioeconomic index to six socioeconomic indicators reflecting different aspects of socioeconomic environment by both geographic areas and levels. Using 2000 U.S. Census data, we performed a multivariate common factor analysis to identify significant socioeconomic resources and constructed 12 composite indexes at the county, the census tract, and the block group levels across the nation and for three states, respectively. We assessed the agreement between composite indexes and single socioeconomic variables. The component of the composite index varied across geographic areas. At a specific geographic region, the component of the composite index was similar at the levels of census tracts and block groups but different from that at the county level. The percentage of population below federal poverty line was a significant contributor to the composite index, regardless of geographic areas and levels. Compared with non-component socioeconomic indicators, component variables were more agreeable to the composite index. Based on these findings, we conclude that a composite index is better as a measure of neighborhood socioeconomic deprivation than a single indicator, and it should be constructed on an area- and unit-specific basis to accurately identify and quantify small-area socioeconomic inequalities over a specific study region.展开更多
Introduction: COVID-19 has become a global public health concern. In Nepal, the government has imposed lockdown, school closures, non-pharmacological interventions, isolation, and quarantine. People were asked to acce...Introduction: COVID-19 has become a global public health concern. In Nepal, the government has imposed lockdown, school closures, non-pharmacological interventions, isolation, and quarantine. People were asked to accept self-care interventions. However, the effectiveness of these preventive measures depends on the knowledge and practice of an individual. Therefore, this study aimed to investigate the association between knowledge and practice among Bagmati province residents during the COVID-19 pandemic. Methods: A cross-sectional study was conducted using an online Google Form questionnaire. A total of 296 participants completed the surveys on social media, particularly Facebook. To assess the factors associated with knowledge and practices toward COVID-19, logistic regression analysis was applied. Results: The total scores of knowledge and practice were 7.62 ± 2.06 and 11 ± 1.91, respectively. Results showed that education, people having a medical background, and occupation were significantly associated with knowledge. While urban residence, older age, and living in a rental with a shared room were significantly associated with practice. Conclusions: People with higher education, medical backgrounds, and household workers had high knowledge about COVID-19;however, knowledge was not associated with practice. There was a gap between knowledge and practice.展开更多
Background:The School Wellness Integration Targeting Child Health(SWITCH)intervention has demonstrated feasibility as an implementation approach to help schools facilitate changes in students’physical activity(PA),se...Background:The School Wellness Integration Targeting Child Health(SWITCH)intervention has demonstrated feasibility as an implementation approach to help schools facilitate changes in students’physical activity(PA),sedentary screen time(SST),and dietary intake(DI).This study evaluated the comparative effectiveness of enhanced(individualized)implementation and standard(group-based)implementation.Methods:Twenty-two Iowa elementary schools participated,with each receiving standardized training(wellness conference and webinars).Schools were matched within region and randomized to receive either individualized or group implementation support.The PA,SST,and DI outcomes of 1097 students were assessed at pre-and post-intervention periods using the Youth Activity Profile.Linear mixed models evaluated differential change in outcomes by condition,for comparative effectiveness,and by gender.Results:Both implementation conditions led to significant improvements in PA and SST over time(p<0.01),but DI did not improve commensurately(p value range:0.02‒0.05).There were no differential changes between the group and individualized conditions for PA(p=0.51),SST(p=0.19),or DI(p=0.73).There were no differential effects by gender(i.e.,non-significant condition-by-gender interactions)for PA(p_(for interaction)=0.86),SST(p_(for interaction)=0.46),or DI(p_(for interaction)=0.15).Effect sizes for both conditions equated to approximately 6 min more PA per day and approximately 3 min less sedentary time.Conclusion:The observed lack of difference in outcomes suggests that group implementation of SWITCH is equally effective as individualized implementation for building capacity in school wellness programming.Similarly,the lack of interaction by gender suggests that SWITCH can be beneficial for both boys and girls.Additional research is needed to understand the school-level factors that influence implementation(and outcomes)of SWITCH.展开更多
AIM To establish minimum clinically important difference(MCID) for measurements in an orthopaedic patient population with joint disorders.METHODS Adult patients aged 18 years and older seeking care for joint condition...AIM To establish minimum clinically important difference(MCID) for measurements in an orthopaedic patient population with joint disorders.METHODS Adult patients aged 18 years and older seeking care for joint conditions at an orthopaedic clinic took the Patient-Reported Outcomes Measurement Information System Physical Function(PROMIS~? PF) computerized adaptive test(CAT), hip disability and osteoarthritis outcome score for joint reconstruction(HOOS JR), and the knee injury and osteoarthritis outcome score for joint reconstruction(KOOS JR) from February 2014 to April 2017. MCIDs were calculated using anchorbased and distribution-based methods. Patient reports of meaningful change in function since their first clinic encounter were used as an anchor.RESULTS There were 2226 patients who participated with a mean age of 61.16(SD = 12.84) years, 41.6% male, and 89.7% Caucasian. Mean change ranged from 7.29 to 8.41 for the PROMIS~? PF CAT, from 14.81 to 19.68 for the HOOS JR, and from 14.51 to 18.85 for the KOOS JR. ROC cut-offs ranged from 1.97-8.18 for the PF CAT, 6.33-43.36 for the HOOS JR, and 2.21-8.16 for the KOOS JR. Distribution-based methods estimated MCID values ranging from 2.45 to 21.55 for the PROMIS~? PF CAT; from 3.90 to 43.61 for the HOOS JR, and from 3.98 to 40.67 for the KOOS JR. The median MCID value in the range was similar to the mean change score for each measure and was 7.9 for the PF CAT, 18.0 for the HOOS JR, and 15.1 for the KOOS JR.CONCLUSION This is the first comprehensive study providing a wide range of MCIDs for the PROMIS? PF, HOOS JR, and KOOS JR in orthopaedic patients with joint ailments.展开更多
Objective:There is no consensus on the role of biomarkers in determining the utility of prostate biopsy in men with elevated prostate-specific antigen(PSA).There are numerous biomarkers such as prostate health index,4...Objective:There is no consensus on the role of biomarkers in determining the utility of prostate biopsy in men with elevated prostate-specific antigen(PSA).There are numerous biomarkers such as prostate health index,4Kscore,prostate cancer antigen 3,ExoDX,SelectMDx,and Mi-Prostate Score that may be useful in this decision-making process.However,it is unclear whether any of these tests are accurate and cost-effective enough to warrant being a widespread reflex test following an elevated PSA.Our goal was to report on the clinical utility of these blood and urine biomarkers in prostate cancer screening.Methods:We performed a systematic review of studies published between January 2000 and October 2020 to report the available parameters and cost-effectiveness of the aforementioned diagnostic tests.We focus on the negative predictive value,the area under the curve,and the decision curve analysis in comparing reflexive tests due to their relevance in evaluating diagnostic screening tests.Results:Overall,the biomarkers are roughly equivalent in predictive accuracy.Each test has additional clinical utility to the current diagnostic standard of care,but the added benefit is not substantial to justify using the test reflexively after an elevated PSA.Conclusions:Our findings suggest these biomarkers should not be used in binary fashion and should be understood in the context of pre-existing risk predictors,patient’s ethnicity,cost of the test,patient life-expectancy,and patient goals.There are more recent diagnostic tools such as multi-parametric magnetic resonance imaging,polygenic single-nucleotide panels,IsoPSA,and miR Sentinel tests that are promising in the realm of prostate cancer screening and need to be investigated further to be considered a consensus reflexive test in the setting of prostate cancer screening.展开更多
Context: The hypothesis that a low- fat dietary pattern can reduce breast cancer risk has existed for decades but has never been tested in a controlled intervention trial. Objective: To assess the effects of undertaki...Context: The hypothesis that a low- fat dietary pattern can reduce breast cancer risk has existed for decades but has never been tested in a controlled intervention trial. Objective: To assess the effects of undertaking a low- fat dietary pattern on breast cancer incidence. Design and Setting: A randomized, controlled, primary prevention trial conducted at 40 US clinical centers from 1993 to 2005. Participants: A total of 48 835 postmenopausal women, aged 50 to 79 years, without prior breast cancer, including 18.6% of minority race/ethnicity, were enrolled. Interventions: Women were randomly assigned to the dietary modification intervention group (40% [n = 19 541]) or the comparison group (60% [n = 29 294]). The intervention was designed to promote dietary change with the goals of reducing intake of total fat to 20% of energy and increasing consumption of vegetables and fruit to at least 5 servings daily and grains to at least 6 servings daily. Comparison group participants were not asked to make dietary changes. Main Outcome Measure: Invasive breast cancer incidence. Results: Dietary fat intake was significantly lower in the dietary modification intervention group compared with the comparison group. The difference between groups in change from baseline for percentage of energy from fat varied from 10.7% at year 1 to 8.1% at year 6. Vegetable and fruit consumption was higher in the intervention group by at least 1 serving per day and a smaller, more transient difference was found for grain consumption. The number of women who developed invasive breast cancer (annualized incidence rate) over the 8.1- year average follow- up period was 655 (0.42% ) in the intervention group and 1072 (0.45% ) in the comparison group (hazard ratio, 0.91; 95% confidence interval, 0.83- 1.01 for the comparison between the 2 groups). Secondary analyses suggest a lower hazard ratio among adherent women, provide greater evidence of risk reduction among women having a high- fat diet at baseline, and suggest a dietary effect that varies by hormone receptor characteristics of the tumor. Conclusions: Among postmenopausal women, a low- fat dietary pattern did not result in a statistically significant reduction in invasive breast cancer risk over an 8.1- year average follow- up period. However, the nonsignificant trends observed suggesting reduced risk associated with a low- fat dietary pattern indicate that longer, planned, nonintervention follow- up may yield a more definitive comparison.展开更多
In hospitals, infection control for measles and rubella is important. Medical and nursing students as well as healthcare workers must have immunity against these diseases. Many countries have adopted requirements for ...In hospitals, infection control for measles and rubella is important. Medical and nursing students as well as healthcare workers must have immunity against these diseases. Many countries have adopted requirements for healthcare workers’ documented vaccination history or laboratory tests as evidence of their immunity. Evaluating a written vaccination history is difficult in many cases. Therefore, we compared measles and rubella antibody titers with self-reported vaccination history and we evaluated the association between the history and measles and rubella antibody titers, using the medical and nursing students’ data. We analyzed 564 data for measles and 558 data for rubella. Vaccination history was requested to be completed as accurately as possible. Students with one or more measles or rubella vaccinations had high positive ratios of titer, and the ratio was significantly higher than that of the students without vaccinations. The positive ratio between the two-dose and one-dose vaccination groups was not significantly different for measles or rubella (measles: p = 0.534, rubella: p = 0.452). Although it should be requested that the history is complete by using other resources, such as referring to maternity passbooks or proof of vaccination, self-reported history may be useful to confirm immunity, even if there is a possibility that the history is not accurate.展开更多
BACKGROUND Oral cancer is the sixth most prevalent cancer worldwide.Public knowledge in oral cancer risk factors and survival is limited.AIM To come up with machine learning(ML)algorithms to predict the length of surv...BACKGROUND Oral cancer is the sixth most prevalent cancer worldwide.Public knowledge in oral cancer risk factors and survival is limited.AIM To come up with machine learning(ML)algorithms to predict the length of survival for individuals diagnosed with oral cancer,and to explore the most important factors that were responsible for shortening or lengthening oral cancer survival.METHODS We used the Surveillance,Epidemiology,and End Results database from the years 1975 to 2016 that consisted of a total of 257880 cases and 94 variables.Four ML techniques in the area of artificial intelligence were applied for model training and validation.Model accuracy was evaluated using mean absolute error(MAE),mean squared error(MSE),root mean squared error(RMSE),R2 and adjusted R2.RESULTS The most important factors predictive of oral cancer survival time were age at diagnosis,primary cancer site,tumor size and year of diagnosis.Year of diagnosis referred to the year when the tumor was first diagnosed,implying that individuals with tumors that were diagnosed in the modern era tend to have longer survival than those diagnosed in the past.The extreme gradient boosting ML algorithms showed the best performance,with the MAE equaled to 13.55,MSE 486.55 and RMSE 22.06.CONCLUSION Using artificial intelligence,we developed a tool that can be used for oral cancer survival prediction and for medical-decision making.The finding relating to the year of diagnosis represented an important new discovery in the literature.The results of this study have implications for cancer prevention and education for the public.展开更多
Background and aims: Study of health related quality of life (HRQOL) and the factors responsible for its impairment in primary biliary cirrhosis (PBC) has, to date, been limited. There is increasing need for a HRQOL q...Background and aims: Study of health related quality of life (HRQOL) and the factors responsible for its impairment in primary biliary cirrhosis (PBC) has, to date, been limited. There is increasing need for a HRQOL questionnaire which is specific to PBC. The aim of this study was to develop, validate, and evaluate a patient based PBC specific HRQOL measure. Subjects and methods: A pool of potential questions was derived from thematic analysis of indepth interviews carried out with 30 PBC patients selected to represent demographically the PBC patient population as a whole. This pool was systematically reduced, pretested, and cross validated with other HRQOL measures in national surveys involving a total of 90 0 PBC patients, to produce a quality of life profile measure, the PBC-40, consi sting of 40 questions distributed across six domains. The PBC-40 was then evalu ated in a blinded comparison with other HRQOL measures in a further cohort of 40 PBC patients. Results: The six domains of PBC-40 relate to fatigue, emotional, social, and cognitive function, general symptoms, and itch. The highest mean do main score was seen for fatigue and the lowest for itch. The measure has been fu lly validated for use in PBC and shown to be scientifically sound. PBC patient s atisfaction, measured in terms of the extent to which a questionnaire addresses the problems that they experience, was significantly higher for the PBC-40 than for other HRQOL measures. Conclusion: The PBC-40 is a short easy to complete measure which is acceptable to PBC patients and has significantly greater relevance to their prob lems than other frequently used HRQOL measures. Its scientific soundness, shown in extensive testing, makes it a valuable instrument for future use in clinical and research settings.展开更多
Objective: To evaluate the cost-effectiveness of combining Chinese medicine (CM) with Western medicine (WM) for ischemic stroke patients. Methods: Hospitalization summary reports between 2006 and 2010 from eight...Objective: To evaluate the cost-effectiveness of combining Chinese medicine (CM) with Western medicine (WM) for ischemic stroke patients. Methods: Hospitalization summary reports between 2006 and 2010 from eight hospitals in Beijing were used to analyze the length of stay (LOS), cost per stay (CPS), and outcomes at discharge. Results: Among 12,009 patients (female, 36.44%; mean age, 69.98 + 13.06 years old), a substantial number of patients were treated by the WM_Chinese patent medicine (CPM)_Chinese herbal medicine (CHM) (38.90%); followed by the WM_CPM (32.55%), the WM (24.26%), and the WM_CHM (4.15%). With adjustment for confounding variables, LOS of the WM_CPM_CHM group was about 10 days longer than that of the WM group, and about 6 days longer than that of the WM_CPM group or the WM_CHM group (P〈0.01); CPS of the WM_CPM_CHM group was United States dollar (USD) 1,288 more than that of the WM group, and about USD 600 more than that of the WM_CPM group or the WM_CHM group (P〈0.01). Compared with the WM group, odd ratio (OR) of recovered and improved outcome of the WM_CPM CHM group was the highest [OR: 12.76, 95% confidence intervals (CI): 9.23, 17.64, P〈0.01], OR of death outcome of the WM_CPM_CHM group was the lowest (OR: 0.08, 95% CI: 0.05, 0.12, P〈0.01). There was no significant difference between LOS, CPS and OR of the WM_CPM group and those of the WM_CHM group (P〉0.05). Cost/effectiveness and incremental cost- effectiveness ratio of the WM_CPM_CHM group were robustly higher than those of the WM group. Conclusion: Compared with WM alone, supplementing CPM and CHM to WM provides significant health benefits of improving the chance of recovered and improved outcome, and reducing the death rate, at an expense of longer LOS and higher CPS.展开更多
Background:The novel coronavirus,severe acute respiratory syndrome coronavirus 2(SARS-CoV-2,also called 2019-nCoV)causes different morbidity risks to individuals in different age groups.This study attempts to quantify...Background:The novel coronavirus,severe acute respiratory syndrome coronavirus 2(SARS-CoV-2,also called 2019-nCoV)causes different morbidity risks to individuals in different age groups.This study attempts to quantify the age-specific transmissibility using a mathematical model.Methods:An epidemiological model with five compartments(susceptible-exposed-symptomatic-asymptomatic-recovered/removed[SEIAR])was developed based on observed transmission features.Coronavirus disease 2019(COVID-19)cases were divided into four age groups:group 1,those≤14years old;group 2,those 15 to 44years old;group 3,those 45 to 64years old;and group 4,those≥65 years old.The model was initially based on cases(including imported cases and secondary cases)collected in Hunan Province from January 5 to February 19,2020.Another dataset,from Jilin Province,was used to test the model.Results:The age-specific SEIAR model fitted the data well in each age group(P<0.001).In Hunan Province,the highest transmissibility was from age group 4 to 3(median:β43=7.71×10-9;SAR43=3.86×10-8),followed by group 3 to 4(median:β34=3.07×10-9;SAR34=1.53×10-8),group 2 to 2(median:β22=1.24×10-9;SAR22=6.21×10-9),and group 3 to 1(median:β31=4.10×10-10;SAR31=2.08×10-9).The lowest transmissibility was from age group 3 to 3(median:β33=1.64×10-19;SAR33=8.19×10-19),followed by group 4 to 4(median:β44=3.66×10-17;SAR44=1.83×10-16),group 3 to 2(median:β32=1.21×10-16;SAR32=6.06×10-16),and group 1 to 4(median:β14=7.20×10-14;SAR14=3.60×10-13).In Jilin Province,the highest transmissibility occurred from age group 4 to 4(median:β43=4.27×10-8;SAR43=2.13×10-7),followed by group 3 to 4(median:β34=1.81×10-8;SAR34=9.03×10-8).Conclusions:SARS-CoV-2 exhibits high transmissibility between middle-aged(45 to 64 years old)and elderly(≥65 years old)people.Children(≤14 years old)have very low susceptibility to COVID-19.This study will improve our understanding of the transmission feature of SARS-CoV-2 in different age groups and suggest the most prevention measures should be applied to middle-aged and elderly people.展开更多
文摘Objectives: To investigate lipid and diabetic profiles of school teachers in Kabul, Afghanistan, who face food insecurity, and examine the association of those with the teachers’ knowledge of non-communicable diseases (NCDs). Methods: A survey to examine biochemical indicators of NCDs (triglycerides (TG), total cholesterol (TC), high-density lipoprotein (HDL), hemoglobin A1c (HbA1c), blood pressure, height, weight, waist circumference), food insecurity, lifestyle and knowledge of NCDs was conducted among 600 school teachers. Analyses were made of biochemical indicators of NCDs, blood pressure, metabolic syndrome, obesity, and subject’s lifestyle in relation to food security and the subject’s knowledge of NCDs. Results: Thirty-nine percent of school teachers experienced food insecurity. The percentage of TC ≥ 200 mg/dL;HbA1c ≥ 5.5%;hypertension and metabolic syndrome were 20.2%, 29.7%, 32.2% and 33.7%, respectively. Food insecurity was associated with lower fruit and vegetable consumption and higher potato consumption. Food insecurity was associated with increased TC (AOR 2.03;95%CI: 1.23 - 3.34), decreased HDL (AOR 1.70;95%CI: 1.12 - 2.58), increased HbA1c (AOR 1.73;95%CI: 1.14 - 2.64), hypertension (AOR 1.68;95%CI: 1.01 - 2.80) and diagnosis of metabolic syndrome (AOR 1.78;95%CI: 1.18 - 2.68), after adjustment by demographic, socioeconomic and lifestyle variables. Among people living under condition of food insecurity, greater NCD knowledge was associated with smaller prevalence of TG ≥ 150 mg/dL, HDL Conclusions: Under conditions of food insecurity, diets have less variety and individuals are more likely to exhibit biomedical risk factors of NCDs. Even under conditions of food insecurity, people with knowledge of NCDs may have better coping strategies for their choice of lifestyles and exhibited a lower percentage of risk factors of NCDs.
文摘Background: Increase of elderly people living alone has been a concern even in the Philippines where filial piety is widely practiced with the support of large number of young people. Objectives of this study were to examine the relationships between living alone with self-reported illness among community elderly and living alone with health facility utilization among sick community elderly in the Philippines. Methods: Data of 5577 elderly (aged ≥ 60 years) from the 2013 Philippines National Demographic and Health Survey were retrieved. Variables on living arrangements, self-reported illness, frequency of health facility visits, and admission to a health facility were used for analysis. Results: Among the elderly included in the analysis, 5.0% of them were living alone. Percentage of living alone was larger among rural elderly (6.0%) compared with urban elderly (3.6%);and among poor elderly (9.0%) compared with rich elderly (2.8%). Results of adjusted multivariate logistic regression analysis showed that the elderly living alone were more likely to report suffering from common colds (AOR 2.12;95% CI 1.57 - 2.86) or non-communicable diseases (AOR 2.18;95% CI 1.55 - 3.06), regardless of their socioeconomic status or insurance coverage. Among those who reported illness, the elderly living alone were more likely to visit a health facility with non-communicable disease (AOR 1.95;95% CI 1.22 - 3.14), after adjustment of other variables. Although elderly living alone who reported illness were likely to be admitted in a health facility, statistically significant association was not observed. Conclusion: Elderly living alone are more likely to report self-reported illness and use health facilities when they recognize their illness.
文摘Optical spectroscopy devices are being developed and tested for the screening and diagnosis of oral precancer and cancer lesions. This study reports a device that uses white light for detection of suspicious lesions and green–amber light at 545 nm that detect tissue vascularity on patients with several suspicious oral lesions. The clinical grading of vascularity was compared to the histological grading of the biopsied lesions using specific biomarkers. Such a device, in the hands of dentists and other health professionals, could greatly increase the number of oral cancerous lesions detected in early phase. The purpose of this study is to correlate the clinical grading of tissue vascularity in several oral suspicious lesions using the IdentafiH system with the histological grading of the biopsied lesions using specific vascular markers. Twenty-one patients with various oral lesions were enrolled in the study. The lesions were visualized using IdentafiH device with white light illumination, followed by visualization of tissue autofluorescence and tissue reflectance. Tissue biopsied was obtained from the all lesions and both histopathological and immunohistochemical studies using a vascular endothelial biomarker(CD34) were performed on these tissue samples. The clinical vascular grading using the green–amber light at 545 nm and the expression pattern and intensity of staining for CD34 in the different biopsies varied depending on lesions, grading ranged from 1 to3. The increase in vascularity was observed in abnormal tissues when compared to normal mucosa, but this increase was not limited to carcinoma only as hyperkeratosis and other oral diseases, such as lichen planus, also showed increase in vascularity. Optical spectroscopy is a promising technology for the detection of oral mucosal abnormalities; however, further investigations with a larger population group is required to evaluate the usefulness of these devices in differentiating benign lesions from potentially malignant lesions.
文摘BACKGROUND Liver transplantation is the accepted standard of care for end-stage liver disease due to a variety of etiologies including decompensated cirrhosis, fulminant hepatic failure, and primary hepatic malignancy. There are currently over 13000 candidates on the liver transplant waiting list emphasizing the importance of rigorous patient selection. There are few studies regarding the impact of additional psychosocial barriers to liver transplant including financial hardship, lack of caregiver support, polysubstance abuse, and issues with medical noncompliance. We hypothesized that patients with certain psychosocial comorbidities experienced worse outcomes after liver transplantation. AIM To assess the impact of certain pre-transplant psychosocial comorbidities on outcomes after liver transplantation. METHODS A retrospective analysis was performed on all adult patients from 2012-2016. Psychosocial comorbidities including documented medical non-compliance, polysubstance abuse, financial issues, and lack of caregiver support were collected. The primary outcome assessed post-transplantation was survival. Secondary outcomes measured included graft failure, episodes of acute rejection, psychiatric decompensation, number of readmissions, presence of infection, recidivism for alcohol and other substances, and documented caregiver support failure.RESULTS For the primary outcome, there were no differences in survival. Patients with a history of psychiatric disease had a higher incidence of psychiatric decompensation after liver transplantation (19% vs 10%, P = 0.013). Treatment of psychiatric disorders resulted in a reduction of the incidence of psychiatric decompensation (21% vs 11%, P = 0.022). Patients with a history of polysubstance abuse in the transplant evaluation had a higher incidence of substance abuse after transplantation (5.8% vs 1.2%, P = 0.05). In this cohort, 15 patients (3.8%) were found to have medical compliance issues in the transplant evaluation. Of these specific patients, 13.3% were found to have substance abuse after transplantation as opposed to 1.3% in patients without documented compliance issues (P = 0.03). CONCLUSION Patients with certain psychosocial comorbidities had worse outcomes following liver transplantation. Further prospective and multi-center studies are warranted to properly determine guidelines for liver transplantation regarding this highrisk population.
文摘Worldwide epidemiological reports assert that drinking water is a source for infections and Legionella control represents a critical issue in healthcare settings. Chemical disinfections of water networks are control measures that need to be fine-tuned to obtain satisfactory results in large buildings over prolonged time periods. Aim of study is the evaluation of the effect of anolyte and chlorine dioxide, applied in two different hot water networks of a nursing home to manage Legionella risk. Nursing home has two buildings (A and B), with the same point of aqueduct water entrance. From June 2016, following a shock chlorination, the continuous disinfections with chlorine dioxide and anolyte were applied in hot networks of building A and B, respectively. Hot water was sampled at the central heating system and at two points of use for Legionella research, while chemical tests of manganese (Mn), iron (Fe), zinc (Zn) and trihalomethanes compounds (THM) were implemented to evaluate the disinfection by-products presence. Before chlorination Legionella pneumophila sg1 was recovered with a mean count of 2.4 × 104 CFU/L, while chemical compounds concentrations were within the law limits (Directive 98/83/EC). Then the disinfections Legionella was not recovered in both hot water plants. After the disinfection with chlorine dioxide (from June 2016 to May 2018), a statistically significant increase of iron, zinc and THM concentrations was detected in building A (p = 0.012;p = 0.004;p = 0.008). Both disinfectants appear effective against Legionella spp. growth in water network, but anolyte ensures a lower disinfection by-products release.
文摘Objective: To identify the patterns of tuberculosis (TB) notification rates in Phnom Penh and examine their relationships with the population density, socioeconomic, residential and occupational characteristics. Methods: The numbers of total TB and smear-positive pulmonary TB cases reported between January 1, 2010 and December 31, 2012 in Phnom Penh were counted for 76 communes in Cambodia according to TB registration records filed under the national TB programme. Population, socioeconomic, residential and occupational characteristics for the communes were obtained from the 2008 General Population Census of Cambodia. The following indicators were developed for individual communes: smear-positive pulmonary TB notification rate (SPTB-NR) (per 100,000 population, in 36 months), population density (per km2), socioeconomic indicators, residential characteristics and occupational characteristics. Geographic patterns of these indicators and characteristics were analysed using ArcGIS. Associations between SPTB-NR and characteristics were analysed. Results: A total of 4102 TB cases were reported in 36 months, including 2046 SPTB cases. SPTB-NR for Phnom Penh was 135 cases per 100,000;median SPTB-NR by commune was 100. SPTB-NR was higher in outlying areas than in city centre communes;population density was high in the centre and low in the outlying areas. SPTB-NR was associated with larger percentage of household members per room (PR: 2.81, 95%CI: 2.68 - 2.93), percentage of population resident in the same commune Conclusions: The SPTB-NR in Phnom Penh did not follow the pattern of population density. Socioeconomic, residential and occupational characteristics by commune were associated with SPTB-NR. Development of prevention and control programmes by considering commune level characteristics is encouraged.
文摘In 2006, Methodist Le Bonheur Healthcare (MLH) created the Congregational Health Network (CHN, TM pending) which works closely with clergy in the most under-served zip codes of the city to improve access to care and overall health status of the population. To best coordinate CHN resources around high-utilization and address the largest health needs in the community, MLH applied hot spotting and geographic information system (GIS) spatial analysis techniques. These techniques were coupled with the community health needs assessment process at MLH and qualitative, participatory research findings captured in collaboration with church and other community partners. The methodology, which we call “participatory hot spotting,” is based upon the Camden Model, which leverages hot spotting to assess and prioritize community need in the provision of charity care, but adds a participatory, qualitative layer. In this study, spatial analysis was employed to evaluate hospital-based inpatient and outpatient utilization and define costs of charity care for the health system by area of residence. Ten zip codes accounted for 56% of total system charity care costs. Among these, the largest zip code, as defined by a percentage of total charity costs, contributed 18% of the inpatient utilization and 17% of the cost. Further, this zip code (38109) contributed 69% of the inpatient and 76% of the outpatient charity care volume and accounted for 75% of inpatient and 76% of outpatient charity care costs for the system. These findings were combined with grassroots intelligence that enabled a partnership with clergy and community members and Cigna Healthcare to better coordinate care in a place-based population health management strategy. Presentations of the analytics have subsequently been made to HHS and the CDC, referred to by some as the “Memphis Model”.
文摘Neighborhood socioeconomic deprivation has been associated with health behaviors and outcomes. However, neighborhood socioeconomic status has been measured inconsistently across studies. It remains unclear whether appropriate socioeconomic indicators vary over geographic areas and geographic levels. The aim of this study is to compare the composite socioeconomic index to six socioeconomic indicators reflecting different aspects of socioeconomic environment by both geographic areas and levels. Using 2000 U.S. Census data, we performed a multivariate common factor analysis to identify significant socioeconomic resources and constructed 12 composite indexes at the county, the census tract, and the block group levels across the nation and for three states, respectively. We assessed the agreement between composite indexes and single socioeconomic variables. The component of the composite index varied across geographic areas. At a specific geographic region, the component of the composite index was similar at the levels of census tracts and block groups but different from that at the county level. The percentage of population below federal poverty line was a significant contributor to the composite index, regardless of geographic areas and levels. Compared with non-component socioeconomic indicators, component variables were more agreeable to the composite index. Based on these findings, we conclude that a composite index is better as a measure of neighborhood socioeconomic deprivation than a single indicator, and it should be constructed on an area- and unit-specific basis to accurately identify and quantify small-area socioeconomic inequalities over a specific study region.
文摘Introduction: COVID-19 has become a global public health concern. In Nepal, the government has imposed lockdown, school closures, non-pharmacological interventions, isolation, and quarantine. People were asked to accept self-care interventions. However, the effectiveness of these preventive measures depends on the knowledge and practice of an individual. Therefore, this study aimed to investigate the association between knowledge and practice among Bagmati province residents during the COVID-19 pandemic. Methods: A cross-sectional study was conducted using an online Google Form questionnaire. A total of 296 participants completed the surveys on social media, particularly Facebook. To assess the factors associated with knowledge and practices toward COVID-19, logistic regression analysis was applied. Results: The total scores of knowledge and practice were 7.62 ± 2.06 and 11 ± 1.91, respectively. Results showed that education, people having a medical background, and occupation were significantly associated with knowledge. While urban residence, older age, and living in a rental with a shared room were significantly associated with practice. Conclusions: People with higher education, medical backgrounds, and household workers had high knowledge about COVID-19;however, knowledge was not associated with practice. There was a gap between knowledge and practice.
基金The U.S.Department of Agriculture National Institute of Food and Agriculture(USDA NIFA)grant:2015-68001-23242.The USDA was not involved in the design of the study and collection,analysis,and interpretation of data or writing of the manuscript.The authors wish to thank the School Wellness Teams(SWT)who participated in the intervention and led programming.The authors acknowledge the students and staff who helped facilitate data collection and analysis procedures:Andra Luth,Marisa Rosen,Laura C.Liechty,Ann Torbert,and Quinn M.Zuercher(Iowa State University Extension and Outreach),made contributions to the distribution,implementation,and evaluation of SWITCH.
文摘Background:The School Wellness Integration Targeting Child Health(SWITCH)intervention has demonstrated feasibility as an implementation approach to help schools facilitate changes in students’physical activity(PA),sedentary screen time(SST),and dietary intake(DI).This study evaluated the comparative effectiveness of enhanced(individualized)implementation and standard(group-based)implementation.Methods:Twenty-two Iowa elementary schools participated,with each receiving standardized training(wellness conference and webinars).Schools were matched within region and randomized to receive either individualized or group implementation support.The PA,SST,and DI outcomes of 1097 students were assessed at pre-and post-intervention periods using the Youth Activity Profile.Linear mixed models evaluated differential change in outcomes by condition,for comparative effectiveness,and by gender.Results:Both implementation conditions led to significant improvements in PA and SST over time(p<0.01),but DI did not improve commensurately(p value range:0.02‒0.05).There were no differential changes between the group and individualized conditions for PA(p=0.51),SST(p=0.19),or DI(p=0.73).There were no differential effects by gender(i.e.,non-significant condition-by-gender interactions)for PA(p_(for interaction)=0.86),SST(p_(for interaction)=0.46),or DI(p_(for interaction)=0.15).Effect sizes for both conditions equated to approximately 6 min more PA per day and approximately 3 min less sedentary time.Conclusion:The observed lack of difference in outcomes suggests that group implementation of SWITCH is equally effective as individualized implementation for building capacity in school wellness programming.Similarly,the lack of interaction by gender suggests that SWITCH can be beneficial for both boys and girls.Additional research is needed to understand the school-level factors that influence implementation(and outcomes)of SWITCH.
基金National Institute of Arthritis and Musculoskeletal and Skin Diseases of the National Institutes of Health,No.U01AR067138.
文摘AIM To establish minimum clinically important difference(MCID) for measurements in an orthopaedic patient population with joint disorders.METHODS Adult patients aged 18 years and older seeking care for joint conditions at an orthopaedic clinic took the Patient-Reported Outcomes Measurement Information System Physical Function(PROMIS~? PF) computerized adaptive test(CAT), hip disability and osteoarthritis outcome score for joint reconstruction(HOOS JR), and the knee injury and osteoarthritis outcome score for joint reconstruction(KOOS JR) from February 2014 to April 2017. MCIDs were calculated using anchorbased and distribution-based methods. Patient reports of meaningful change in function since their first clinic encounter were used as an anchor.RESULTS There were 2226 patients who participated with a mean age of 61.16(SD = 12.84) years, 41.6% male, and 89.7% Caucasian. Mean change ranged from 7.29 to 8.41 for the PROMIS~? PF CAT, from 14.81 to 19.68 for the HOOS JR, and from 14.51 to 18.85 for the KOOS JR. ROC cut-offs ranged from 1.97-8.18 for the PF CAT, 6.33-43.36 for the HOOS JR, and 2.21-8.16 for the KOOS JR. Distribution-based methods estimated MCID values ranging from 2.45 to 21.55 for the PROMIS~? PF CAT; from 3.90 to 43.61 for the HOOS JR, and from 3.98 to 40.67 for the KOOS JR. The median MCID value in the range was similar to the mean change score for each measure and was 7.9 for the PF CAT, 18.0 for the HOOS JR, and 15.1 for the KOOS JR.CONCLUSION This is the first comprehensive study providing a wide range of MCIDs for the PROMIS? PF, HOOS JR, and KOOS JR in orthopaedic patients with joint ailments.
文摘Objective:There is no consensus on the role of biomarkers in determining the utility of prostate biopsy in men with elevated prostate-specific antigen(PSA).There are numerous biomarkers such as prostate health index,4Kscore,prostate cancer antigen 3,ExoDX,SelectMDx,and Mi-Prostate Score that may be useful in this decision-making process.However,it is unclear whether any of these tests are accurate and cost-effective enough to warrant being a widespread reflex test following an elevated PSA.Our goal was to report on the clinical utility of these blood and urine biomarkers in prostate cancer screening.Methods:We performed a systematic review of studies published between January 2000 and October 2020 to report the available parameters and cost-effectiveness of the aforementioned diagnostic tests.We focus on the negative predictive value,the area under the curve,and the decision curve analysis in comparing reflexive tests due to their relevance in evaluating diagnostic screening tests.Results:Overall,the biomarkers are roughly equivalent in predictive accuracy.Each test has additional clinical utility to the current diagnostic standard of care,but the added benefit is not substantial to justify using the test reflexively after an elevated PSA.Conclusions:Our findings suggest these biomarkers should not be used in binary fashion and should be understood in the context of pre-existing risk predictors,patient’s ethnicity,cost of the test,patient life-expectancy,and patient goals.There are more recent diagnostic tools such as multi-parametric magnetic resonance imaging,polygenic single-nucleotide panels,IsoPSA,and miR Sentinel tests that are promising in the realm of prostate cancer screening and need to be investigated further to be considered a consensus reflexive test in the setting of prostate cancer screening.
文摘Context: The hypothesis that a low- fat dietary pattern can reduce breast cancer risk has existed for decades but has never been tested in a controlled intervention trial. Objective: To assess the effects of undertaking a low- fat dietary pattern on breast cancer incidence. Design and Setting: A randomized, controlled, primary prevention trial conducted at 40 US clinical centers from 1993 to 2005. Participants: A total of 48 835 postmenopausal women, aged 50 to 79 years, without prior breast cancer, including 18.6% of minority race/ethnicity, were enrolled. Interventions: Women were randomly assigned to the dietary modification intervention group (40% [n = 19 541]) or the comparison group (60% [n = 29 294]). The intervention was designed to promote dietary change with the goals of reducing intake of total fat to 20% of energy and increasing consumption of vegetables and fruit to at least 5 servings daily and grains to at least 6 servings daily. Comparison group participants were not asked to make dietary changes. Main Outcome Measure: Invasive breast cancer incidence. Results: Dietary fat intake was significantly lower in the dietary modification intervention group compared with the comparison group. The difference between groups in change from baseline for percentage of energy from fat varied from 10.7% at year 1 to 8.1% at year 6. Vegetable and fruit consumption was higher in the intervention group by at least 1 serving per day and a smaller, more transient difference was found for grain consumption. The number of women who developed invasive breast cancer (annualized incidence rate) over the 8.1- year average follow- up period was 655 (0.42% ) in the intervention group and 1072 (0.45% ) in the comparison group (hazard ratio, 0.91; 95% confidence interval, 0.83- 1.01 for the comparison between the 2 groups). Secondary analyses suggest a lower hazard ratio among adherent women, provide greater evidence of risk reduction among women having a high- fat diet at baseline, and suggest a dietary effect that varies by hormone receptor characteristics of the tumor. Conclusions: Among postmenopausal women, a low- fat dietary pattern did not result in a statistically significant reduction in invasive breast cancer risk over an 8.1- year average follow- up period. However, the nonsignificant trends observed suggesting reduced risk associated with a low- fat dietary pattern indicate that longer, planned, nonintervention follow- up may yield a more definitive comparison.
文摘In hospitals, infection control for measles and rubella is important. Medical and nursing students as well as healthcare workers must have immunity against these diseases. Many countries have adopted requirements for healthcare workers’ documented vaccination history or laboratory tests as evidence of their immunity. Evaluating a written vaccination history is difficult in many cases. Therefore, we compared measles and rubella antibody titers with self-reported vaccination history and we evaluated the association between the history and measles and rubella antibody titers, using the medical and nursing students’ data. We analyzed 564 data for measles and 558 data for rubella. Vaccination history was requested to be completed as accurately as possible. Students with one or more measles or rubella vaccinations had high positive ratios of titer, and the ratio was significantly higher than that of the students without vaccinations. The positive ratio between the two-dose and one-dose vaccination groups was not significantly different for measles or rubella (measles: p = 0.534, rubella: p = 0.452). Although it should be requested that the history is complete by using other resources, such as referring to maternity passbooks or proof of vaccination, self-reported history may be useful to confirm immunity, even if there is a possibility that the history is not accurate.
基金The authors sincerely thank the Clinical Outcomes Research and Education at Collegeof Dental Medicine, Roseman University of Health Sciences for supporting this study.
文摘BACKGROUND Oral cancer is the sixth most prevalent cancer worldwide.Public knowledge in oral cancer risk factors and survival is limited.AIM To come up with machine learning(ML)algorithms to predict the length of survival for individuals diagnosed with oral cancer,and to explore the most important factors that were responsible for shortening or lengthening oral cancer survival.METHODS We used the Surveillance,Epidemiology,and End Results database from the years 1975 to 2016 that consisted of a total of 257880 cases and 94 variables.Four ML techniques in the area of artificial intelligence were applied for model training and validation.Model accuracy was evaluated using mean absolute error(MAE),mean squared error(MSE),root mean squared error(RMSE),R2 and adjusted R2.RESULTS The most important factors predictive of oral cancer survival time were age at diagnosis,primary cancer site,tumor size and year of diagnosis.Year of diagnosis referred to the year when the tumor was first diagnosed,implying that individuals with tumors that were diagnosed in the modern era tend to have longer survival than those diagnosed in the past.The extreme gradient boosting ML algorithms showed the best performance,with the MAE equaled to 13.55,MSE 486.55 and RMSE 22.06.CONCLUSION Using artificial intelligence,we developed a tool that can be used for oral cancer survival prediction and for medical-decision making.The finding relating to the year of diagnosis represented an important new discovery in the literature.The results of this study have implications for cancer prevention and education for the public.
文摘Background and aims: Study of health related quality of life (HRQOL) and the factors responsible for its impairment in primary biliary cirrhosis (PBC) has, to date, been limited. There is increasing need for a HRQOL questionnaire which is specific to PBC. The aim of this study was to develop, validate, and evaluate a patient based PBC specific HRQOL measure. Subjects and methods: A pool of potential questions was derived from thematic analysis of indepth interviews carried out with 30 PBC patients selected to represent demographically the PBC patient population as a whole. This pool was systematically reduced, pretested, and cross validated with other HRQOL measures in national surveys involving a total of 90 0 PBC patients, to produce a quality of life profile measure, the PBC-40, consi sting of 40 questions distributed across six domains. The PBC-40 was then evalu ated in a blinded comparison with other HRQOL measures in a further cohort of 40 PBC patients. Results: The six domains of PBC-40 relate to fatigue, emotional, social, and cognitive function, general symptoms, and itch. The highest mean do main score was seen for fatigue and the lowest for itch. The measure has been fu lly validated for use in PBC and shown to be scientifically sound. PBC patient s atisfaction, measured in terms of the extent to which a questionnaire addresses the problems that they experience, was significantly higher for the PBC-40 than for other HRQOL measures. Conclusion: The PBC-40 is a short easy to complete measure which is acceptable to PBC patients and has significantly greater relevance to their prob lems than other frequently used HRQOL measures. Its scientific soundness, shown in extensive testing, makes it a valuable instrument for future use in clinical and research settings.
基金Partly supported by the"985"Project from Ministry of Education of China(No.BMU20100107)
文摘Objective: To evaluate the cost-effectiveness of combining Chinese medicine (CM) with Western medicine (WM) for ischemic stroke patients. Methods: Hospitalization summary reports between 2006 and 2010 from eight hospitals in Beijing were used to analyze the length of stay (LOS), cost per stay (CPS), and outcomes at discharge. Results: Among 12,009 patients (female, 36.44%; mean age, 69.98 + 13.06 years old), a substantial number of patients were treated by the WM_Chinese patent medicine (CPM)_Chinese herbal medicine (CHM) (38.90%); followed by the WM_CPM (32.55%), the WM (24.26%), and the WM_CHM (4.15%). With adjustment for confounding variables, LOS of the WM_CPM_CHM group was about 10 days longer than that of the WM group, and about 6 days longer than that of the WM_CPM group or the WM_CHM group (P〈0.01); CPS of the WM_CPM_CHM group was United States dollar (USD) 1,288 more than that of the WM group, and about USD 600 more than that of the WM_CPM group or the WM_CHM group (P〈0.01). Compared with the WM group, odd ratio (OR) of recovered and improved outcome of the WM_CPM CHM group was the highest [OR: 12.76, 95% confidence intervals (CI): 9.23, 17.64, P〈0.01], OR of death outcome of the WM_CPM_CHM group was the lowest (OR: 0.08, 95% CI: 0.05, 0.12, P〈0.01). There was no significant difference between LOS, CPS and OR of the WM_CPM group and those of the WM_CHM group (P〉0.05). Cost/effectiveness and incremental cost- effectiveness ratio of the WM_CPM_CHM group were robustly higher than those of the WM group. Conclusion: Compared with WM alone, supplementing CPM and CHM to WM provides significant health benefits of improving the chance of recovered and improved outcome, and reducing the death rate, at an expense of longer LOS and higher CPS.
基金This work was partly supported by the Open Research Fund of State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics(SKLVD2019KF005)the Bill&Melinda Gates Foundation(INV-005834)+3 种基金the Science and Technology Program of Fujian Province(No:2020Y0002)the Xiamen New Coronavirus Prevention and Control Emergency Tackling Special Topic Program(No:3502Z2020YJ03)the Hunan Provincial Construction of Innovative Provinces Special Social Development Areas Key Research and Development Project(2020SK3012)the Chinese Academy of Medical Sciences Coronavirus Disease 2019 Science and Technology Research Project in 2020(2020HY320003).
文摘Background:The novel coronavirus,severe acute respiratory syndrome coronavirus 2(SARS-CoV-2,also called 2019-nCoV)causes different morbidity risks to individuals in different age groups.This study attempts to quantify the age-specific transmissibility using a mathematical model.Methods:An epidemiological model with five compartments(susceptible-exposed-symptomatic-asymptomatic-recovered/removed[SEIAR])was developed based on observed transmission features.Coronavirus disease 2019(COVID-19)cases were divided into four age groups:group 1,those≤14years old;group 2,those 15 to 44years old;group 3,those 45 to 64years old;and group 4,those≥65 years old.The model was initially based on cases(including imported cases and secondary cases)collected in Hunan Province from January 5 to February 19,2020.Another dataset,from Jilin Province,was used to test the model.Results:The age-specific SEIAR model fitted the data well in each age group(P<0.001).In Hunan Province,the highest transmissibility was from age group 4 to 3(median:β43=7.71×10-9;SAR43=3.86×10-8),followed by group 3 to 4(median:β34=3.07×10-9;SAR34=1.53×10-8),group 2 to 2(median:β22=1.24×10-9;SAR22=6.21×10-9),and group 3 to 1(median:β31=4.10×10-10;SAR31=2.08×10-9).The lowest transmissibility was from age group 3 to 3(median:β33=1.64×10-19;SAR33=8.19×10-19),followed by group 4 to 4(median:β44=3.66×10-17;SAR44=1.83×10-16),group 3 to 2(median:β32=1.21×10-16;SAR32=6.06×10-16),and group 1 to 4(median:β14=7.20×10-14;SAR14=3.60×10-13).In Jilin Province,the highest transmissibility occurred from age group 4 to 4(median:β43=4.27×10-8;SAR43=2.13×10-7),followed by group 3 to 4(median:β34=1.81×10-8;SAR34=9.03×10-8).Conclusions:SARS-CoV-2 exhibits high transmissibility between middle-aged(45 to 64 years old)and elderly(≥65 years old)people.Children(≤14 years old)have very low susceptibility to COVID-19.This study will improve our understanding of the transmission feature of SARS-CoV-2 in different age groups and suggest the most prevention measures should be applied to middle-aged and elderly people.