Funding: Supported by the Chinese Academy of Sciences Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, Shanghai Branch, University of Science and Technology of China, and by the National Natural Science Foundation of China under Grant No. 11405172.
Abstract: Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8%, and the potential generation rate with a 32×32 APD array is up to tens of Gbit/s.
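The pairing of consecutive pulses can be sketched with a von Neumann-style comparison: unequal responses to a pulse pair yield one bit, equal responses are discarded, which removes the bias of the raw click probability. This toy simulation of a single APD pixel is an illustration of the comparison idea only, not the paper's actual detection circuit.

```python
import random

def bits_from_detections(clicks):
    # pair responses to consecutive optical pulses:
    # (click, no-click) -> 0, (no-click, click) -> 1; equal pairs are
    # discarded, so a biased click probability still gives unbiased bits
    bits = []
    for a, b in zip(clicks[::2], clicks[1::2]):
        if a != b:
            bits.append(0 if a else 1)
    return bits

# toy model: one pixel clicking with (biased) probability 0.3 per pulse
rng = random.Random(7)
clicks = [rng.random() < 0.3 for _ in range(10000)]
bits = bits_from_detections(clicks)
bias = sum(bits) / len(bits)  # fraction of ones, close to 0.5
```

With independent pulses of click probability p, an unequal pair is (click, no-click) or (no-click, click) with equal probability p(1-p), which is why the output is unbiased without post-processing.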
Funding: Supported by the National Natural Science Foundation of China (Projects 50221402, 50490271 and 50025413), the National Basic Research Program of China (2009CB219603, 2009CB724601, 2006CB202209 and 2005CB221500), the Key Project of the Ministry of Education (306002), and the Program for Changjiang Scholars and Innovative Research Teams in Universities of MOE (IRT0408).
Abstract: In order to carry out numerical simulation using geologic structural data obtained from Landmark (a seismic interpretation system), underground geological structures are abstracted into mechanical models which reflect actual situations and facilitate computation and analysis. Given the importance of model building, further processing methods for traditional seismic interpretation results from Landmark should be studied so that the processed results can be used directly in numerical simulation. Through this data conversion procedure, Landmark and FLAC (a widely used stress-analysis code) are seamlessly connected, realizing the format conversion between the two systems as well as the pre- and post-processing in simulation computation. A practical application indicates that this method has many advantages, such as simple operation, high accuracy of element subdivision and high speed, which satisfy the actual needs of floor grid cutting.
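The core of such a conversion is resampling scattered interpretation picks onto a regular mesh a stress solver can ingest. The sketch below is a generic nearest-cell gridding stand-in; the actual Landmark export format and FLAC input commands are not reproduced here and would differ in practice.

```python
def grid_horizon(picks, nx, ny):
    # picks: scattered (x, y, z) horizon points with x, y normalized to [0, 1)
    # returns an ny-by-nx grid of depths (None where no pick landed),
    # i.e. the structured mesh a solver-side mesher would consume
    grid = [[None] * nx for _ in range(ny)]
    for x, y, z in picks:
        grid[int(y * ny)][int(x * nx)] = z
    return grid
```

A real converter would additionally fill empty cells by interpolation and emit the solver's own mesh-generation syntax.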
Abstract: Travel time data collection is used to assist congestion management. The use of traditional sensors (e.g. inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads is not sufficient because of their limited coverage and the expensive costs of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection has proven efficient in terms of accuracy, level of detail and required man-power. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially in big projects with large amounts of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into a Geographic Information System (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used post-processing method, reduces the post-processing effort, and eliminates the need for a second person during data collection.
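The basic quantities such a tool derives from a GPS track are the travel time and the delay against free-flow conditions. This is a minimal sketch of that computation (the real GPS Calculator is a Visual Basic application; function names here are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in metres between two GPS fixes
    r = 6371000.0
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))

def travel_time_and_delay(track, free_flow_mps):
    # track: chronologically ordered (t_seconds, lat, lon) fixes
    dist = sum(haversine_m(track[i][1], track[i][2], track[i + 1][1], track[i + 1][2])
               for i in range(len(track) - 1))
    travel = track[-1][0] - track[0][0]
    # delay = observed travel time minus free-flow time over the same distance
    return travel, travel - dist / free_flow_mps
```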
Funding: Supported by the New Century Excellent Talents in University program (NCET-09-0396), the National Science & Technology Key Projects of Numerical Control (2012ZX04014-031), the Natural Science Foundation of Hubei Province (2011CDB279), and the Foundation for Innovative Research Groups of the Natural Science Foundation of Hubei Province, China (2010CDA067).
Abstract: As castings become more complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulation become more massive. On a general personal computer, these data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of casting, as well as thermal stress, and it provides fast interaction with simulation results. Theoretical analysis and several practical examples prove that the memory usage and loading time of the post-processor depend not on the size of the relevant files but on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values at the mouse position is appreciable, and the demands of real-time interaction are satisfied.
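The out-of-core principle is that only the data actually touched (here, surface cells) is paged into memory, so memory usage tracks the working set rather than the file size. A minimal sketch with a memory-mapped result file (the file layout is invented for illustration, not the post-processor's actual format):

```python
import os
import tempfile
import numpy as np

# write a toy "result file" of per-cell values, standing in for a file
# too large to load wholesale
path = os.path.join(tempfile.mkdtemp(), "cells.npy")
np.save(path, np.arange(1_000_000, dtype=np.float32))

# out-of-core access: mmap_mode keeps the file on disk and pages in
# only the slices actually read
data = np.load(path, mmap_mode="r")
surface_idx = np.arange(0, 1_000_000, 100)    # pretend every 100th cell is on the surface
surface_vals = np.asarray(data[surface_idx])  # only these cells are materialized in RAM
```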
Funding: Supported by the National Natural Science Foundation of China (51875535) and the Natural Science Foundation for Young Scientists of Shanxi Province (201701D221017, 201901D211242).
Abstract: To improve the ability to detect underwater targets in a strong wideband interference environment, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a high-intensity, stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depend on the line spectra ratio (LSR). The variation of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time and the filtering bandwidth of different algorithms yields a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and the narrowband line spectrum algorithm are theoretically analyzed, and the simulated detection gain matches the theoretical model well. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. Combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, exploits the strong correlation between adjacent frames, further removing background fluctuation and improving the display effect.
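The background equalization step can be sketched as a running per-azimuth background estimate subtracted from each frame, so that slowly varying clutter cancels while targets that stand out against adjacent frames survive. This is a simple exponential-average stand-in for the paper's Kalman-plus-equalization pipeline, with made-up parameters:

```python
import numpy as np

def equalize_background(frames, alpha=0.9):
    # frames: (T, A) power map over time x azimuth
    # an exponential running average per azimuth cell estimates the
    # slowly varying background; subtracting it flattens the display
    bg = frames[0].astype(float).copy()
    out = np.empty_like(frames, dtype=float)
    for t, f in enumerate(frames):
        out[t] = f - bg
        bg = alpha * bg + (1 - alpha) * f  # update background after use
    return out
```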
Abstract: This paper proposes improvements to the low-bit-rate parametric audio coder with a sinusoid model as its kernel. Firstly, we propose a new method to effectively order and select the perceptually most important sinusoids: the sinusoid which contributes most to the reduction of the overall noise-to-mask ratio (NMR) is chosen. Combined with our improved parametric psychoacoustic model and advanced peak riddling techniques, the number of sinusoids required can be greatly reduced and the coding efficiency greatly enhanced. A lightweight version is also given to reduce the amount of computation with only a small sacrifice of performance. Secondly, we propose two enhancement techniques for sinusoid synthesis: bandwidth enhancement and line enhancement. With little overhead, the effective bandwidth can be extended by one more octave, and the timbre tends to sound brighter, thicker and more pleasant.
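The greedy ordering idea can be illustrated with a matched-pursuit sketch: at each step, keep the candidate sinusoid whose least-squares fit removes the most residual energy. Note this uses plain residual energy as a stand-in for the paper's perceptual NMR criterion, which would additionally weight by the masking threshold:

```python
import numpy as np

def select_sinusoids(signal, sr, candidate_freqs, k):
    # greedy selection: per step, pick the frequency whose cos/sin fit
    # removes the most energy from the residual, then subtract it
    t = np.arange(len(signal)) / sr
    residual = signal.astype(float).copy()
    chosen = []
    for _ in range(k):
        best = None
        for f in candidate_freqs:
            basis = np.column_stack([np.cos(2 * np.pi * f * t),
                                     np.sin(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(basis, residual, rcond=None)
            approx = basis @ coef
            gain = float(np.sum(approx ** 2))  # energy this sinusoid explains
            if best is None or gain > best[0]:
                best = (gain, f, approx)
        chosen.append(best[1])
        residual = residual - best[2]
    return chosen, residual
```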
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/114), Taif University, Taif, Saudi Arabia.
Abstract: The low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter the inherent features of MR images, and drastic changes in brightness induced by post-processing are not appreciated in medical imaging, as the grey-level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed, from which enhanced grey-level values are obtained. The performance of the PLMHE algorithm is compared with traditional histogram equalization based algorithms, and the results show that PLMHE can boost image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
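The bi-histogram structure can be sketched as follows: apply a power-law transform, split the intensity range at the mean, and equalize each sub-range separately so that the mean brightness stays bounded. This is a loose sketch of the idea only; the paper's log compression and dispersion-modification steps are omitted, and the gamma value is an arbitrary choice:

```python
import numpy as np

def plmhe_sketch(img, gamma=0.8):
    # 8-bit image in, 8-bit image out
    x = 255.0 * (img / 255.0) ** gamma          # power-law transform
    thr = x.mean()                              # split point between sub-histograms
    out = np.empty_like(x)
    for mask, lo, hi in [(x <= thr, 0.0, thr), (x > thr, thr, 255.0)]:
        vals = x[mask]
        if vals.size == 0:
            continue
        # rank-based equalization confined to [lo, hi]: each half fills
        # its own sub-range, which bounds the mean-brightness shift
        ranks = np.argsort(np.argsort(vals))
        out[mask] = lo + (hi - lo) * ranks / max(vals.size - 1, 1)
    return out.astype(np.uint8)
```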
Abstract: In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, the displacement-based elements cannot always deliver sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOF. Numerical results show that the proposed method improves the accuracy of the stress solutions of these two displacement-based plane elements.
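The flavor of stress post-processing can be shown with a much simpler stand-in: fit an assumed low-order stress field to stresses sampled inside the element, then evaluate the smoothed field at the nodes. This least-squares toy is not the Hellinger-Reissner hybrid procedure itself, only an illustration of recovering a smoother, more accurate stress field from interior samples:

```python
import numpy as np

# sampled stresses at four interior (Gauss-like) points of a unit element
pts = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
sigma = np.array([1.0, 2.0, 3.0, 4.0])

# assume a linear stress field sigma ≈ a + b*x + c*y and fit it
A = np.column_stack([np.ones(4), pts[:, 0], pts[:, 1]])
coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)

# evaluate the smoothed field at the corner node (0, 0), where the raw
# element stresses would normally be least accurate
corner = float(coef @ np.array([1.0, 0.0, 0.0]))
```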
Abstract: File labeling techniques have a long history in analyzing anthological trends in computational linguistics. The situation becomes worse in the case of files downloaded onto systems from the Internet. Currently, most users either have to change file names manually or leave files with meaningless names, which increases the time to search for required files and results in redundancy and duplication of user files. No significant work has been done on automated file labeling during the organization of heterogeneous user files. A few attempts have been made in topic modeling; however, one major drawback of current topic modeling approaches is that they rely on specific language types and on domain similarity of the data. In this research, machine learning approaches are employed to analyze and extract information from a heterogeneous corpus, and a different file labeling technique is used to obtain meaningful and cohesive topics for the files. The results show that the proposed methodology can generate relevant and context-sensitive names for heterogeneous data files and provide additional insight into automated file labeling in operating systems.
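A crude baseline for such labeling is to name a file after its most frequent content terms. The sketch below is only a term-frequency stand-in for the paper's machine learning pipeline, with a toy stopword list:

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def label_file(text, n_words=3):
    # keep alphabetic tokens, drop stopwords and very short words,
    # then join the most frequent terms into a file name
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOP and len(w) > 2]
    top = [w for w, _ in Counter(words).most_common(n_words)]
    return "_".join(top) or "untitled"
```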
Funding: This research was supported by the Universiti Sains Malaysia (USM) and the Ministry of Higher Education Malaysia through the Fundamental Research Grant Scheme (FRGS Grant No. FRGS/1/2020/TK0/USM/02/1).
Abstract: In the Big Data era, numerous sources and environments generate massive amounts of data, which necessitates specialized, advanced tools and procedures that effectively evaluate the information and anticipate decisions for future changes. Hadoop is used to process this kind of data, but it is known to handle vast volumes of data more efficiently than tiny amounts, which results in inefficiency in the framework. This study proposes a novel solution to this problem by applying the Enhanced Best Fit Merging algorithm (EBFM), which merges files depending on predefined parameters (type and size). Implementing this algorithm ensures that the generated file sizes fall in the same range as the maximum block size. Its primary goal is to dynamically merge files meeting the stated criteria, based on file type, to guarantee the efficacy and efficiency of the established system. This procedure takes place before the files are made available to the Hadoop framework. Additionally, the files generated by the system are named with specific keywords to ensure there is no data loss (file overwrite). The proposed approach guarantees the generation of the fewest possible large files, which reduces the input/output memory burden and suits the Hadoop framework's strengths. The findings show that the proposed technique enhances the framework's performance by approximately 64% while controlling for all other potential performance-impairing variables. The proposed approach is implementable in any environment that uses the Hadoop framework, including but not limited to smart cities and real-time data analysis.
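The merging step can be sketched as best-fit bin packing per file type, with the block size as the bin capacity, so each merged file approaches one block. This is an illustrative reconstruction from the abstract's description (group by type, bound by size), not the published EBFM code:

```python
def merge_plan(files, block_size):
    # files: list of (name, type, size); returns [(type, [names]), ...]
    # where each name group is one merged file no larger than block_size
    by_type = {}
    for name, ftype, size in files:
        by_type.setdefault(ftype, []).append((name, size))
    plan = []
    for ftype, items in by_type.items():
        bins = []  # each bin: [remaining_capacity, [names]]
        for name, size in sorted(items, key=lambda x: -x[1]):  # largest first
            # best fit: the open bin with the least remaining room that still fits
            target = min((b for b in bins if b[0] >= size),
                         key=lambda b: b[0], default=None)
            if target is None:
                bins.append([block_size - size, [name]])
            else:
                target[0] -= size
                target[1].append(name)
        plan += [(ftype, names) for _, names in bins]
    return plan
```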
Abstract: Working with files and the safety of information have always been relevant, especially in financial institutions, where the requirements for information safety and security are particularly important. In today's conditions, when an earthquake can destroy half a city in an instant, or a missile can hit an office and turn all its servers into scrap metal, the issue of data safety becomes especially important. Nor can the cost of the software and the convenience of working with files be put in last place, especially when an office worker needs to find the necessary information on a client, a financial contract or a company's financial product within a few seconds. Furthermore, during the operation of computer equipment, failures are possible, and some of them can lead to partial or complete loss of information. In this paper, it is proposed to create another level of abstraction for working with the file system, based on a relational database serving as a store of objects and of access rights to those objects. Possible protocols for transferring data to other programs that work with files are also considered; these can be small web sites as well as the operating system itself. This article will be especially interesting for financial institutions and companies operating in the banking sector. Its purpose is to introduce another level of abstraction for working with files: a level that is completely abstracted from the storage medium.
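The core idea, files and access rights stored as rows in a relational database, can be sketched in a few lines with SQLite. The schema and function names are illustrative, not the paper's actual design:

```python
import sqlite3

# file bodies and access rights live in tables; the "file system" is
# whatever the database engine persists to, abstracted from the medium
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE files(id INTEGER PRIMARY KEY, name TEXT UNIQUE, body BLOB);
    CREATE TABLE rights(file_id INTEGER, user TEXT, can_read INTEGER);
""")

def put_file(name, body, owner):
    cur = db.execute("INSERT INTO files(name, body) VALUES (?, ?)", (name, body))
    db.execute("INSERT INTO rights VALUES (?, ?, 1)", (cur.lastrowid, owner))

def get_file(name, user):
    # a read succeeds only if a matching access-right row exists
    row = db.execute(
        """SELECT body FROM files f JOIN rights r ON r.file_id = f.id
           WHERE f.name = ? AND r.user = ? AND r.can_read = 1""",
        (name, user)).fetchone()
    if row is None:
        raise PermissionError(name)
    return row[0]
```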
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20181128026.
Abstract: Byte-addressable non-volatile memory (NVM), as a new participant in the storage hierarchy, delivers extremely high storage performance, which forces changes in current filesystem designs. The page cache, once a significant mechanism filling the performance gap between Dynamic Random Access Memory (DRAM) and block devices, is now a liability that heavily hinders the write performance of NVM filesystems. Therefore, state-of-the-art NVM filesystems leverage the direct access (DAX) technology to bypass the page cache entirely. However, DRAM still provides higher bandwidth than NVM, which prevents skewed read workloads from benefiting from the higher bandwidth of DRAM and leads to sub-optimal system performance. In this paper, we propose RCache, a read-intensive workload-aware page cache for NVM filesystems. Different from traditional caching mechanisms where all reads go through DRAM, RCache uses a tiered page cache design, assigning DRAM and NVM to hot and cold data separately and reading data from both sides. To avoid copying data to DRAM on the critical path, RCache migrates data from NVM to DRAM in a background thread. Additionally, RCache manages data in DRAM in a lock-free manner for better latency and scalability. Evaluations on Intel Optane Data Center (DC) Persistent Memory Modules show that, compared with NOVA, RCache achieves 3 times higher bandwidth for read-intensive workloads and introduces little performance loss for write operations.
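The tiering policy can be sketched with two dictionaries standing in for the DRAM and NVM tiers: writes land on the slow tier, a first read is served directly from it, and a page is promoted only after repeated reads, mimicking migration off the critical path. The thresholds and class shape are made up for illustration; RCache's real promotion runs in a background thread with lock-free DRAM structures:

```python
from collections import OrderedDict

class TieredCache:
    def __init__(self, hot_capacity, promote_after=2):
        self.hot = OrderedDict()  # page -> data (fast tier, "DRAM")
        self.cold = {}            # page -> data (slow tier, "NVM")
        self.hits = {}
        self.promote_after = promote_after
        self.hot_capacity = hot_capacity

    def write(self, page, data):
        self.cold[page] = data    # writes bypass the fast tier entirely

    def read(self, page):
        if page in self.hot:
            return self.hot[page]
        data = self.cold[page]    # cold reads are served without copying first
        self.hits[page] = self.hits.get(page, 0) + 1
        if self.hits[page] >= self.promote_after:
            self.hot[page] = data                 # promote a hot page
            if len(self.hot) > self.hot_capacity:
                self.hot.popitem(last=False)      # evict the oldest hot page
        return data
```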
Abstract: To address the safety management of hoisting operations in large and hazardous engineering projects, a You Only Look Once version 5 (YOLOv5) detection network model is built based on deep learning. It performs multi-target fused detection of the protective equipment worn by personnel entering the hoisting area and detects the state of the hook during construction. Four kinds of attention mechanisms are introduced into the original detection network, and by comparing the results of five training models, the Convolutional Block Attention Module (CBAM) is selected as the optimal model. The optimized detection model achieves an average recognition precision of 86.5% for safety helmets, 83.0% for reflective vests and 92.0% for hook states. The trained personnel detection model and hook detection model are packaged into an exe executable file and deployed on the central control platform used by construction safety managers, helping them better judge the working conditions of hoisting operations and carry out timely risk control.
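The model-selection step, comparing per-class average precision across trained variants, can be sketched as follows. The CBAM row reuses the precision figures reported above; the other rows and variant names are invented placeholders, not the paper's results:

```python
# per-class average precision (AP) of several trained variants;
# only the "cbam" numbers come from the reported results
results = {
    "baseline": {"helmet": 0.81, "vest": 0.78, "hook": 0.88},   # placeholder
    "se":       {"helmet": 0.83, "vest": 0.80, "hook": 0.89},   # placeholder
    "cbam":     {"helmet": 0.865, "vest": 0.83, "hook": 0.92},  # reported
}

# pick the variant with the highest mean AP over the three classes
mean_ap = {name: sum(ap.values()) / len(ap) for name, ap in results.items()}
best = max(mean_ap, key=mean_ap.get)
```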