Funding: Double First-Class Innovation Research Project for People's Public Security University of China (2023SYL08).
Abstract: Crowd counting is a promising topic in computer vision involving crowd intelligence analysis, and it has achieved tremendous success recently with the development of deep learning. However, many challenges remain, including crowd multi-scale variation and high network complexity. To tackle these issues, a lightweight Res-connection multi-branch network (LRMBNet) for highly accurate crowd counting and localization is proposed. Specifically, using an improved ShuffleNet V2 as the backbone, a lightweight shallow extractor is designed that employs a channel compression mechanism to greatly reduce the number of network parameters. A light multi-branch structure with convolutions of different expansion rates extracts multi-scale features and enlarges receptive fields, where the transmission and fusion of features at diverse scales are enhanced via residual concatenation. In addition, a compound loss function is introduced for training the method to improve global context information correlation. The proposed method is evaluated on the SHHA, SHHB, UCF-QNRF and UCF_CC_50 public datasets. Its accuracy is better than that of many advanced approaches, while the number of parameters is smaller. The experimental results show that the proposed method achieves a good trade-off between the complexity and accuracy of crowd counting, indicating a lightweight and high-precision method for crowd counting.
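For readers unfamiliar with multi-branch dilated designs, the sketch below illustrates the general idea described in the abstract: parallel 3x3 convolutions with different expansion (dilation) rates, fused by concatenation together with a residual shortcut. It is a minimal illustration only, not the authors' exact LRMBNet; the module and parameter names are hypothetical.

```python
# Illustrative multi-branch block with different dilation ("expansion") rates
# and residual concatenation; not the paper's exact architecture.
import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    def __init__(self, channels: int, rates=(1, 2, 3)):
        super().__init__()
        # One 3x3 convolution per branch; padding=rate keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates]
        )
        # 1x1 convolution fuses the concatenated branch outputs plus the residual input.
        self.fuse = nn.Conv2d(channels * (len(rates) + 1), channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [b(x) for b in self.branches] + [x]          # residual concatenation
        return self.act(self.fuse(torch.cat(feats, dim=1)))  # multi-scale fusion

block = MultiBranchBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```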
Funding: Funded by Naif Arab University for Security Sciences under grant No. NAUSS-23-R10.
Abstract: Estimation of crowd count is becoming crucial nowadays, as it can help in security surveillance, crowd monitoring, and management for different events. It is challenging to determine the approximate crowd size from an image of the crowd's density. Therefore, in this research study, we propose a multi-headed convolutional neural network architecture for crowd counting, divided into two main components: (i) the convolutional neural network, which extracts features across the whole input image, and (ii) the multi-headed layers, which make it easier to evaluate density maps to estimate the number of people in the input image and determine their number in the crowd. We employed the publicly available benchmark crowd-counting datasets UCF CC 50 and ShanghaiTech parts A and B for model training and testing to validate the model's performance. To analyze the results, we used two metrics, Mean Absolute Error (MAE) and Mean Square Error (MSE), and compared the results of the proposed system with state-of-the-art crowd-counting models. The results show the superiority of the proposed system.
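For reference, the two evaluation metrics cited above are conventionally defined, for N test images with ground-truth counts C_i and predicted counts \hat{C}_i, as

\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{C}_i - C_i\right|, \qquad \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{C}_i - C_i\right)^{2}.

Note that many crowd-counting papers report the square root of the second quantity (i.e., the RMSE) under the name MSE; which convention this paper follows is not stated in the abstract.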
Abstract: When employing penetration ammunition to strike multi-story buildings, detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, posing challenges in accurately determining the number of layers. To address this issue, this research proposes a layer counting method for penetration fuzes that incorporates multi-source information fusion, utilizing both a temporal convolutional network (TCN) and a long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during the penetration process, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to simulate the overload and magnetic field of a projectile penetrating multiple layers of target plates, capturing the multi-source physical field signals and their patterns during the penetration process. The analysis reveals that the proposed multi-source fusion layer counting method reduces errors by 60% and 50% compared to single overload layer counting and single magnetic anomaly signal layer counting, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of added noise at random sample positions, penetration speeds, and spacings between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the model's predictions indicate that the fitting degree for large interlayer spacings is superior to that for small interlayer spacings due to the influence of stress waves.
Funding: Supported by the Hunan Provincial Innovation Foundation for Postgraduates (No. QL20210228), the National Natural Science Foundation of China (No. 12075112), and the National Natural Science Foundation of China (No. 12175102).
Abstract: Imaging plates are widely used to detect alpha particles and record track information, and the number of alpha particle tracks is affected by the overlapping and fading effects of the track information. In this study, an experiment and a simulation were used to calibrate the efficiency parameter of an imaging plate, which was used to calculate the grayscale. Images were created using the grayscale and used to train a convolutional neural network to count the alpha tracks. The results demonstrated that the trained convolutional neural network can evaluate the alpha track counts based on the source and background images over a wider linear range, unaffected by the overlapping effect. The alpha track counts were unaffected by the fading effect within 60 min, and the calibrated formula for the fading effect was analyzed for 132.7 min. The detection efficiency of the trained convolutional neural network for inhomogeneous ²⁴¹Am sources (2π emission) was 0.6050 ± 0.0399, whereas the efficiency curve of the photo-stimulated luminescence method was lower than that of the trained convolutional neural network.
Funding: Supported by the National Natural Science Foundation of China (61701260 and 62271266), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX21_0255), and the Postdoctoral Research Program of Jiangsu Province (2019K287).
Abstract: Rice is a major food crop and is planted worldwide. Climatic deterioration, population growth, farmland shrinkage, and other factors have necessitated the application of cutting-edge technology to achieve accurate and efficient rice production. In this study, we focus on the precise counting of rice plants in paddy fields and design a novel deep learning network, RPNet, consisting of four modules: feature encoder, attention block, initial density map generator, and attention map generator. Additionally, we propose a novel loss function called RPloss. This loss function considers the magnitude relationship between different sub-loss functions and ensures the validity of the designed network. To verify the proposed method, we conducted experiments on our recently presented URC dataset, an unmanned aerial vehicle dataset that is quite challenging for counting rice plants. For experimental comparison, we chose several popular or recently proposed counting methods, namely MCNN, CSRNet, SANet, TasselNetV2, and FIDTM. In the experiments, the mean absolute error (MAE), root mean squared error (RMSE), relative MAE (rMAE) and relative RMSE (rRMSE) of the proposed RPNet were 8.3, 11.2, 1.2% and 1.6%, respectively, on the URC dataset. RPNet surpasses state-of-the-art methods in plant counting. To verify the universality of the proposed method, we conducted experiments on the well-known MTC and WED datasets. The final results on these datasets showed that our network achieved the best results compared with excellent previous approaches. The experiments showed that the proposed RPNet can be utilized to count rice plants in paddy fields and replace traditional methods.
Funding: Supported by the National Key Research and Development Program of China (No. 2019YFD0901000).
Abstract: In the process of aquaculture, monitoring the number of fish bait particles is of great significance for improving the growth and welfare of fish. Although counting methods based on convolutional neural networks (CNNs) achieve good accuracy and applicability, their large numbers of parameters and high computation cost limit deployment on resource-constrained hardware devices. To solve these problems, this paper proposes a lightweight bait particle counting method based on shift quantization and model pruning strategies. First, we apply corresponding lightweight strategies to different layers to flexibly balance the counting accuracy and performance of the model. To lighten the counting model further, redundant and less informative weights are removed through the combination of model quantization and pruning. The experimental results show that the compression rate is nearly 9 times. Finally, the quantization candidate values are refined by introducing a power-of-two addition term, which improves the match with the weight distribution. Analysis of the experimental results shows that the counting loss at 3 bit is reduced by 35.31%. In summary, the lightweight bait particle counting model proposed in this paper achieves lossless counting accuracy and reduces the storage and computational overhead required to run convolutional neural networks.
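The "shift quantization" mentioned above generally refers to constraining weights to signed powers of two so that multiplications reduce to bit shifts. The snippet below is a minimal sketch of that idea under common assumptions (nearest power-of-two rounding with a clipped exponent range); it is not the paper's exact quantization/pruning pipeline, and the function name and parameters are hypothetical.

```python
# Minimal power-of-two ("shift") weight quantization sketch.
import numpy as np

def shift_quantize(w: np.ndarray, exp_min: int = -8, exp_max: int = 0) -> np.ndarray:
    sign = np.sign(w)
    mag = np.abs(w)
    # Round the log2 magnitude to the nearest integer exponent within the allowed range.
    exp = np.clip(np.round(np.log2(np.where(mag > 0, mag, 2.0 ** exp_min))), exp_min, exp_max)
    q = sign * (2.0 ** exp)
    return np.where(mag > 0, q, 0.0)  # exact zeros stay zero

w = np.array([0.30, -0.07, 0.0012, -0.9, 0.0])
print(shift_quantize(w))  # [ 0.25  -0.0625  0.00390625 -1.  0. ]
```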
Abstract: AIM: To assess the performance of bespoke software for automated counting of intraocular lens (IOL) glistenings in slit-lamp images. METHODS: IOL glistenings from slit-lamp-derived digital images were counted manually and automatically by the bespoke software. The images of one randomly selected eye from each of 34 participants were used as a training set to determine the threshold setting that gave the best agreement between manual and automatic grading. A second set of 63 images, selected using randomised stratified sampling from 290 images, was used for software validation. The images were obtained using a previously described protocol. Software-derived automated glistening counts were compared to manual counts produced by three ophthalmologists. RESULTS: A threshold value of 140 was determined that minimised the total deviation in the number of glistenings for the 34 images in the training set. Using this threshold value, only slight agreement was found between automated software counts and manual expert counts for the validation set of 63 images (κ = 0.104, 95% CI, 0.040-0.168). Ten images (15.9%) had glistening counts that agreed between the software and manual counting. There were 49 images (77.8%) where the software overestimated the number of glistenings. CONCLUSION: The low level of agreement shown between an initial release of software used to automatically count glistenings in in vivo slit-lamp images and manual counting indicates that this is a non-trivial application. Iterative improvement involving a dialogue between software developers and experienced ophthalmologists is required to optimise agreement. The results suggest that validation of software is necessary for studies involving semi-automatic evaluation of glistenings.
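For context, the reported κ is Cohen's kappa,

\kappa = \frac{p_o - p_e}{1 - p_e},

where p_o is the observed agreement between the two counting methods and p_e is the agreement expected by chance. Values of 0.00-0.20 are conventionally interpreted as "slight" agreement on the widely used Landis-Koch scale, consistent with the κ = 0.104 reported above.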
Funding: This work was jointly supported by the Excellent Research Graduate Scholarship - EreG Scholarship Program under the Memorandum of Understanding between Thammasat University and the National Science and Technology Development Agency (NSTDA), Thailand [No. MOU-CO-2562-8675], the Center of Excellence in Logistics and Supply Chain System Engineering and Technology (COE LogEn), and the Sirindhorn International Institute of Technology (SIIT), Thammasat University, Thailand.
Abstract: Inventory counting is crucial to manufacturing industries in terms of inventory management, production, and procurement planning. Many companies currently require workers to manually count and track the status of materials, which are repetitive and non-value-added activities but incur significant costs to the companies as well as mental fatigue to the employees. This research aims to develop a computer vision system that can automate the material counting activity without applying any marker to the material. The type of material of interest is metal sheet, whose shape is simple, a large rectangle, yet difficult to detect. The use of computer vision technology can reduce the costs incurred from the loss of high-value materials, eliminate repetitive work requirements for skilled labor, and reduce human error. A computer vision system is proposed and tested on a metal sheet picking process for multiple metal sheet stacks in the storage area using one video camera. Our results show that the proposed computer vision system can count the metal sheet picks under real conditions with a precision of 97.83% and a recall of 100%.
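For reference, the reported metrics follow the standard definitions

\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN},

so a recall of 100% means no picks were missed (no false negatives), while a precision of 97.83% means roughly 2% of the detected picks were false alarms.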
Abstract: In this paper, a deep learning-based method is proposed for crowd-counting problems. Specifically, by utilizing the convolution kernel density map, the ground truth is generated dynamically to enhance the feature-extracting ability of the generator model. Meanwhile, the "cross stage partial" module is integrated into the congested scene recognition network (CSRNet) to obtain a lightweight network model. In addition, to compensate for the accuracy drop owing to the lightweight model, we take advantage of "structured knowledge transfer" to train the model in an end-to-end manner. This aims to accelerate the fitting speed and enhance the learning ability of the student model. A crowd-counting system solution for edge computing is also proposed and implemented on an embedded device equipped with a neural processing unit. Simulations demonstrate the performance improvement of the proposed solution in terms of model size, processing speed and accuracy. The performance on the Venice dataset shows that the mean absolute error (MAE) and the root mean squared error (RMSE) of our model drop by 32.63% and 39.18% compared with CSRNet. Meanwhile, the performance on the ShanghaiTech Part B dataset reveals that the MAE and the RMSE of our model are close to those of CSRNet. Therefore, we provide a novel embedded platform system scheme for public safety pre-warning applications.
Abstract: By analyzing the internal features of the counting sort algorithm, two improved counting sort algorithms are proposed that have a wider range of applications and better efficiency than the original counting sort while maintaining its stability. Compared with the original counting sort, they offer a wider scope of application and better time and space efficiency. In addition, these conclusions are supported by a large amount of experimental data.
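For context, a standard stable counting sort (the baseline that the abstract's improvements build on) can be written as follows; the paper's modified variants are not reproduced here.

```python
# Standard stable counting sort for bounded integer keys.
from typing import List

def counting_sort(a: List[int]) -> List[int]:
    if not a:
        return []
    lo, hi = min(a), max(a)
    counts = [0] * (hi - lo + 1)
    for x in a:                      # count occurrences of each key
        counts[x - lo] += 1
    for i in range(1, len(counts)):  # prefix sums give final positions
        counts[i] += counts[i - 1]
    out = [0] * len(a)
    for x in reversed(a):            # traverse backwards to keep the sort stable
        counts[x - lo] -= 1
        out[counts[x - lo]] = x
    return out

print(counting_sort([5, 3, 8, 3, 1]))  # [1, 3, 3, 5, 8]
```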
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2021R1I1A1A01055652).
Abstract: The analysis of overcrowded areas is essential for flow monitoring, assembly control, and security. Crowd counting's primary goal is to calculate the population in a given region, which requires real-time analysis of congested scenes for prompt reactionary actions. Crowd scenes are always unpredictable, and the available benchmark datasets have a lot of variation, which limits trained models' performance on unseen test data. In this paper, we propose an end-to-end deep neural network that takes an input image and generates a density map of a crowd scene. The proposed model consists of encoder and decoder networks comprising batch-free normalization layers known as evolving normalization (EvoNorm). This allows our network to generalize to unseen data because EvoNorm does not use statistics from the training samples. The decoder network uses dilated 2D convolutional layers to provide large receptive fields with fewer parameters, which enables real-time processing and solves the density drift problem owing to its large receptive field. Five benchmark datasets are used in this study to assess the proposed model, leading to the conclusion that it outperforms conventional models.
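As is standard for density-map-based crowd counting, the predicted count is obtained by summing the values of the estimated density map D over all pixel locations:

\hat{C} = \sum_{i,j} D(i, j).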
Abstract: BACKGROUND: Spontaneous bacterial peritonitis (SBP) is one of the most important complications of patients with liver cirrhosis, entailing high morbidity and mortality. Making an accurate early diagnosis of this infection is key to the outcome of these patients. The current definition of SBP is based on studies performed more than 40 years ago using a manual technique to count the number of polymorphs in ascitic fluid (AF). There is a lack of data comparing the traditional cell count method with a current automated cell counter. Moreover, current international guidelines do not mention the type of cell count method to be employed, and around half of centers still rely on the traditional manual method. AIM: To compare the accuracy of the polymorph count in AF to diagnose SBP between the traditional manual cell count method and a modern automated cell counter against SBP cases fulfilling gold standard criteria: positive AF culture and signs/symptoms of peritonitis. METHODS: Retrospective analysis including two cohorts, cross-sectional (cohort 1) and case-control (cohort 2), of patients with decompensated cirrhosis and ascites. Both cell count methods were conducted simultaneously. Positive SBP cases had a pathogenic bacterium isolated from AF and signs/symptoms of peritonitis. RESULTS: A total of 137 cases with 5 SBP-positive, and 85 cases with 33 SBP-positive were included in cohorts 1 and 2, respectively. SBP-positive cases had worse liver function in both cohorts. The automated method showed higher sensitivity than the manual cell count: 80% vs 52%, P = 0.02, in cohort 2. Both methods showed very good specificity (>95%). The best cutoff using the automated cell counter was a polymorph count ≥ 0.2 cells × 10⁹/L (equivalent to 200 cells/mm³) in AF, as it has the higher sensitivity while keeping good specificity. CONCLUSION: The automated cell count method should be preferred over the manual method to diagnose SBP because of its higher sensitivity. An SBP definition, using the automated method, of a polymorph cell count ≥ 0.2 cells × 10⁹/L in AF would need to be considered in patients admitted with decompensated cirrhosis.
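The equivalence stated above follows directly from the unit conversion 1 L = 10^6 mm³:

0.2 \times 10^{9}\ \text{cells/L} = \frac{0.2 \times 10^{9}\ \text{cells}}{10^{6}\ \text{mm}^{3}} = 200\ \text{cells/mm}^{3}.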
Funding: Supported by the Natural Science Foundation of Heilongjiang Province of China (LH2023C016), the Key Research and Development Program of Heilongjiang Province of China (2022ZX01A24), and the National Modern Agricultural Industry Technology System (CARS36).
Abstract: Somatic cell count detection is part of the daily work of dairy farms to monitor the health of cows. The feasibility of applying near-infrared spectroscopy to somatic cell count detection was researched in this paper. Milk samples with different somatic cell counts were collected and preprocessing methods were studied. A variable selection algorithm based on a hybrid strategy and a modelling method based on ensemble learning were explored for somatic cell count detection. The detection model was used to diagnose subclinical mastitis, and the results showed that near-infrared spectroscopy could be a tool for rapid detection of the somatic cell count in milk.
Abstract: BACKGROUND: Liver cirrhosis is the end stage of progressive liver fibrosis resulting from chronic liver inflammation, wherein the standard hepatic architecture is replaced by regenerative hepatic nodules, which eventually lead to liver failure. Cirrhosis without any symptoms is referred to as compensated cirrhosis. Complications such as ascites, variceal bleeding, and hepatic encephalopathy indicate the onset of decompensated cirrhosis. Gastroesophageal varices are the hallmark of clinically significant portal hypertension. AIM: To determine the accuracy of the platelet count-to-spleen diameter (PC/SD) ratio for evaluating esophageal varices (EV) in patients with cirrhosis. METHODS: This retrospective observational study was conducted at Tikur Anbessa Specialized Hospital and Adera Medical Center from January 1, 2019, to December 30, 2023. Data were collected via chart review and direct patient interviews using structured questionnaires. The data were exported to SPSS version 26 for analysis and clearance. A receiver operating characteristic curve was plotted for splenic diameter, platelet count, and PC/SD ratio to obtain sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. RESULTS: Of the 140 participants, 67% were men. Hepatitis B (38%) was the most common cause of cirrhosis, followed by cryptogenic cirrhosis (28%) and hepatitis C (16%). Approximately 83.6% of the participants had endoscopic evidence of EV, whereas 51.1% had gastric varices. Decompensated cirrhosis and PC were associated with the presence of EV, with adjusted odds ratios of 12.63 (95% CI: 3.16-67.58, P = 0.001) and 0.14 (95% CI: 0.037-0.52, P = 0.004), respectively. A PC/SD ratio < 1119 had a sensitivity of 86.32% and a specificity of 70%, with an area under the curve of 0.835 (95% CI: 0.736-0.934, P < 0.001). CONCLUSION: A PC/SD ratio < 1119 predicts EV in patients with cirrhosis. It is a valuable, noninvasive tool for EV risk assessment in resource-limited settings.
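As an illustrative worked example, assuming the ratio is computed in the usual way (platelet count per mm³ divided by the spleen bipolar diameter in mm; the patient values below are hypothetical), a patient with a platelet count of 145,000/mm³ and a spleen diameter of 140 mm has

\mathrm{PC/SD} = \frac{145000}{140} \approx 1036 < 1119,

so the cutoff above would flag this patient as likely to have EV.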
Abstract: The fatigue of concrete structures gradually appears after they are subjected to alternating loads for a long time, and accidents caused by fatigue failure of bridge structures occur from time to time. Aiming at the degradation of long-span continuous rigid frame bridges due to fatigue and environmental effects, this paper suggests a method to analyze the fatigue degradation mechanism of this type of bridge, which combines long-term in-situ monitoring data collected by the health monitoring system (HMS) with fatigue theory. The authors mainly carry out research work in the following aspects. First, a long-span continuous rigid frame bridge instrumented with an HMS is used as an example, and a large amount of health monitoring data has been acquired, which provides efficient information for fatigue analysis in terms of equivalent stress ranges and the cumulative number of stress cycles. Next, for calculating the cumulative fatigue damage of the bridge structure, the fatigue stress spectrum obtained by the rain-flow counting method, S-N curves and damage criteria are used for fatigue damage analysis. Moreover, linear damage accumulation via the Palmgren-Miner rule is considered for the counting of stress cycles. The health monitoring data are adopted to obtain fatigue stress data, and the rain-flow counting method is used to count the variable-amplitude fatigue stress. The proposed fatigue reliability approach can estimate the fatigue damage degree of bridge structures and its evolution law well, and can also help bridge engineers assess the future service duration.
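The linear accumulation referred to above is the Palmgren-Miner rule: if the rain-flow count yields n_i cycles in stress-range bin i and the S-N curve gives N_i cycles to failure at that stress range, the cumulative damage index is

D = \sum_{i} \frac{n_i}{N_i},

with fatigue failure conventionally assumed when D reaches 1.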
Abstract: BACKGROUND: Neonatal sepsis is defined as an infection-related condition characterized by signs and symptoms of bacteremia within the first month of life. It is the leading cause of mortality and morbidity among newborns. While several studies have been conducted in other parts of the world to assess the usefulness of complete blood count parameters and hemogram-derived markers as early screening tools for neonatal sepsis, the associations between sepsis and its complications and these blood parameters are still being investigated in our setting and are not yet part of routine practice. AIM: To evaluate the diagnostic significance of complete blood cell count hemogram-derived novel markers for neonatal sepsis among neonates attending public hospitals in the southwest region of Oromia, Ethiopia, through a case-control study. METHODS: A case-control study was conducted from October 2021 to October 2023. Sociodemographic, clinical history, and laboratory test result data were collected using structured questionnaires. The collected data were entered into Epi-data version 3.1 and exported to SPSS-25 for analysis. Chi-square tests, independent sample t-tests, and receiver operating characteristic curves were used for analysis. A P-value of less than 0.05 was considered statistically significant. RESULTS: In this study, significant increases were observed in the following values in the case group compared to the control group: white blood cell (WBC) count, neutrophils, monocytes, mean platelet volume (MPV), neutrophil-to-lymphocyte ratio, monocyte-to-lymphocyte ratio (MLR), red blood cell width-to-platelet count ratio (RPR), red blood cell width coefficient of variation, MPV to RPR, and platelet-to-lymphocyte ratio. Regarding MLR, a cut-off value of ≥ 0.26 was found, with a sensitivity of 68%, a specificity of 95%, a positive predictive value (PPV) of 93.2%, and a negative predictive value (NPV) of 74.8%. The area under the curve (AUC) was 0.828 (P < 0.001). For WBC, a cutoff value of ≥ 11.42 was identified, with a sensitivity of 55%, a specificity of 89%, a PPV of 83.3%, and an NPV of 66.4%. The AUC was 0.81 (P < 0.001). Neutrophils had a sensitivity of 67%, a specificity of 81%, a PPV of 77.9%, and an NPV of 71.1%. The AUC was 0.801, with a cut-off value of ≥ 6.76 (P = 0.001). These results indicate that they were excellent predictors of neonatal sepsis diagnosis. CONCLUSION: The findings of our study suggest that certain hematological parameters and hemogram-derived markers may have a potential role in the diagnosis of neonatal sepsis.
基金the Shanghai Natural Science Foundation of China,No.23ZR1447800and the Fengxian District Science and Technology Commission Project,China,No.20211838.
文摘BACKGROUND Previous research has highlighted correlations between blood cell counts and chronic liver disease.Nonetheless,the causal relationships remain unknown.AIM To evaluate the causal effect of blood cell traits on liver enzymes and nonalcoholic fatty liver disease(NAFLD)risk.METHODS Independent genetic variants strongly associated with blood cell traits were extracted from a genome-wide association study(GWAS)conducted by the Blood Cell Consortium.Summary-level data for liver enzymes were obtained from the United Kingdom Biobank.NAFLD data were obtained from a GWAS meta-analysis(8434 cases and 770180 controls,discovery dataset)and the Fingen GWAS(2275 cases and 372727 controls,replication dataset).This analysis was conducted using the inverse-variance weighted method,followed by various sensitivity analyses.RESULTS One SD increase in the genetically predicted haemoglobin concentration(HGB)was associated with aβof 0.0078(95%CI:0.0059-0.0096),0.0108(95%CI:0.0080-0.0136),0.0361(95%CI:0.0156-0.0567),and 0.0083(95%CI:00046-0.0121)for alkaline phosphatase(ALP),alanine aminotransferase(ALT),aspartate aminotransferase,and gammaglutamyl transferase,respectively.Genetically predicted haematocrit was associated with ALP(β=0.0078,95%CI:0.0052-0.0104)and ALT(β=0.0057,95%CI:0.0039-0.0075).Genetically determined HGB and the reticulocyte fraction of red blood cells increased the risk of NAFLD[odds ratio(OR)=1.199,95%CI:1.087-1.322]and(OR=1.157,95%CI:1.071-1.250).The results of the sensitivity analyses remained significant.CONCLUSION Novel causal blood cell traits related to liver enzymes and NAFLD development were revealed through Mendelian randomization analysis,which may facilitate the diagnosis and prevention of NAFLD.
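For reference, the inverse-variance weighted (IVW) estimate mentioned in the methods is commonly written, for variants j with exposure effects \hat{\beta}_{Xj}, outcome effects \hat{\beta}_{Yj} and outcome standard errors \sigma_{Yj}, as

\hat{\beta}_{\mathrm{IVW}} = \frac{\sum_{j}\hat{\beta}_{Xj}\,\hat{\beta}_{Yj}\,\sigma_{Yj}^{-2}}{\sum_{j}\hat{\beta}_{Xj}^{2}\,\sigma_{Yj}^{-2}},

i.e., a weighted average of the per-variant Wald ratios \hat{\beta}_{Yj}/\hat{\beta}_{Xj}. The exact modelling choices (e.g., fixed- versus random-effects IVW) used in the paper are not stated in the abstract.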