Funding: Supported in part by the National Natural Science Foundation of China (61671078) and the Director Funds of Beijing Key Laboratory of Network System Architecture and Convergence (2017BKL-NSACZJ-06).
Abstract: The rapid development of mobile networks brings opportunities for researchers to analyze user behaviors based on large-scale network traffic data. Such analysis is important for Internet Service Providers (ISPs) seeking to optimize resource allocation and provide customized services to users. The first step in analyzing user behaviors is to extract information on user actions from HTTP traffic data by multi-pattern URL matching. However, efficiency becomes a major problem when this matching is performed on massive network traffic data. To solve this problem, we propose a novel and accurate algorithm named Multi-Pattern Parallel Matching (MPPM) that takes advantage of HashMap-based searching to extract user behaviors from big network data more effectively. Extensive experiments based on real-world traffic data demonstrate the ability of the MPPM algorithm to deal with massive HTTP traffic with better accuracy, concurrency, and efficiency. We expect the proposed algorithm and its parallelized implementation to be a solid base on which to build a high-performance engine for user behavior analysis over massive HTTP traffic data.
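The abstract gives no pseudocode, so the following is a minimal sketch of the hash-based idea as we read it: patterns are bucketed in a dictionary keyed by URL host, so each lookup touches one bucket instead of every pattern. The (host, path prefix, action label) pattern format and all names are our own illustrative assumptions, not the authors' implementation.

# Hedged sketch of HashMap-keyed multi-pattern URL matching; the
# (host, path_prefix, action_label) pattern format is an assumption.
from collections import defaultdict
from urllib.parse import urlparse

class UrlMatcher:
    def __init__(self, patterns):
        self.by_host = defaultdict(list)
        for host, prefix, label in patterns:
            self.by_host[host].append((prefix, label))

    def match(self, url):
        parts = urlparse(url)
        # One average-O(1) dictionary lookup narrows the search to the
        # patterns registered for this host.
        for prefix, label in self.by_host.get(parts.netloc, ()):
            if parts.path.startswith(prefix):
                return label
        return None

matcher = UrlMatcher([("api.example.com", "/v1/video/play", "watch_video"),
                      ("api.example.com", "/v1/search", "search")])
print(matcher.match("http://api.example.com/v1/search?q=phone"))  # search

Parallelism, in the spirit of the paper's parallelized implementation, could then be added by sharding the traffic log across worker processes, each holding a copy of the dictionary.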
Abstract: Histogram of collinear gradient-enhanced coding (HCGEC), a robust keypoint descriptor for multi-spectral image matching, is proposed. The HCGEC mainly encodes rough structures within an image and suppresses detailed textural information, which is desirable in multi-spectral image matching. Experiments on two multi-spectral data sets demonstrate that the proposed descriptor yields significantly better results than some state-of-the-art descriptors.
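The abstract does not define the coding itself, so the sketch below only illustrates the property it emphasizes: a gradient-orientation histogram computed after heavy smoothing responds to rough structure while fine texture is suppressed. The function name and parameters are our own; this is an analogy, not HCGEC.

# Hedged illustration: orientation histogram of a smoothed patch, so
# coarse structure dominates and fine multi-spectral texture is damped.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def rough_structure_descriptor(patch, bins=8, sigma=3.0):
    smooth = gaussian_filter(patch.astype(float), sigma)  # drop fine texture
    gx, gy = sobel(smooth, axis=1), sobel(smooth, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # contrast-sign invariant
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)  # L2-normalized descriptor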
Abstract: E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way we live. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce logistics. A matching problem concerning the improvement of picking efficiency in the new system is also studied, namely how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on the statement and model of this problem. The main idea of the algorithm is, using heuristic strategies based on similarity coefficients, to minimize the transport of items that cannot reach their destination picking stations through direct conveyors alone. Experimental results on computer-generated cases show that the multi-stage heuristic algorithm reduces indirect transport times by 14.36% on average. For cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposes a modified system and a multi-stage heuristic algorithm that effectively reduce the travelling distance of totes and improve the overall performance of an e-commerce distribution center.
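The abstract does not detail the heuristic stages, so the following is a hedged, single-stage sketch of the similarity idea: items that co-occur in orders are grouped at the same picking station (up to a capacity cap) so their totes can use direct conveyors. The co-occurrence count standing in for the similarity coefficient, and all names, are our assumptions.

# Hedged one-stage sketch; the paper's multi-stage algorithm is richer.
import math
from collections import defaultdict

def assign_items(orders, n_stations):
    # orders: list of item-id sets; returns one item set per station
    cooc = defaultdict(int)  # co-occurrence as a similarity coefficient
    for order in orders:
        for a in order:
            for b in order:
                if a != b:
                    cooc[(a, b)] += 1
    items = sorted({i for o in orders for i in o})
    cap = math.ceil(len(items) / n_stations)
    load = [set() for _ in range(n_stations)]
    for item in items:
        open_stations = [s for s in range(n_stations) if len(load[s]) < cap]
        # place each item at the station whose items it co-occurs with
        # most; break ties toward the emptier station
        best = max(open_stations,
                   key=lambda s: (sum(cooc[(item, j)] for j in load[s]),
                                  -len(load[s])))
        load[best].add(item)
    return load

print(assign_items([{"A", "B"}, {"A", "B", "C"}, {"C", "D"}, {"D", "E"}], 2))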
Funding: This project is supported by the National Defense Science Foundation of China (No. 00J16.2.5.DZ0502), the Foundation for Qualified Personnel of Jiangsu University, China (No. 04JDG027), and the Provincial Natural Science Foundation of Guangxi, China (No. 0339037, No. 0141042).
Abstract: To meet the dual demands of resisting violent impact and attenuating vibration in the vibration-impact safety protection of precision equipment such as MEMS packaging systems, a theoretical mathematical model of a multi-medium coupling shock absorber is presented. The coupling of quadratic damping, linear damping, Coulomb damping, and a nonlinear spring is considered in the model. Approximate theoretical formulae are deduced by introducing transformation tactics, and the analytical results are contrasted with numerical integration results. The impact-resisting characteristics of the model are also analyzed. In addition, an optimization model for parameter matching and selection in the design of the shock absorber is built. A design example is presented to confirm the validity of the modeling method and the theoretical solution.
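The governing equation is not printed in the abstract; under the assumption of a single-degree-of-freedom absorber, one plausible form coupling the four named elements is:

% Hedged sketch: illustrative equation of motion, not the paper's exact
% model; c_1, c_2 are linear/quadratic damping coefficients, F_f the
% Coulomb friction force, and k_1, k_3 the nonlinear spring coefficients.
m\ddot{x} + c_1\dot{x} + c_2\dot{x}\,\lvert\dot{x}\rvert
  + F_f\,\operatorname{sgn}(\dot{x}) + k_1 x + k_3 x^3 = F(t)

The quadratic term dominates at high impact velocities while the Coulomb term gives a velocity-independent floor, which is consistent with the dual impact-resistance and vibration-attenuation demand described above.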
Abstract: This paper presents a template matching method using a statistical model and parametric templates for multi-template matching. The algorithm consists of two phases: training and matching. In the training phase, a statistical model created by principal component analysis (PCA) is used to synthesize the multi-template; the advantage of PCA is that it reduces the variance among the templates. In the matching phase, normalized cross correlation (NCC) is employed to find candidates in inspection images, and the relationship between an image block and the multi-template is built using the parametric template method. Results show that the proposed method is more efficient than conventional template matching and the parametric template method, and more robust than the conventional template method.
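As a hedged sketch of the two phases, assuming grayscale numpy arrays: PCA over aligned training templates yields a mean template plus principal components, and a brute-force NCC scan locates the best candidate in an inspection image. Function names and the top-k choice are illustrative, not the authors' code.

# Training phase: PCA model of the aligned templates.
import numpy as np

def train_pca(templates, k=3):
    X = np.stack([t.ravel().astype(float) for t in templates])
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]  # mean template and top-k principal components

# Matching phase: exhaustive NCC of a template over an image.
def ncc_match(image, template):
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, pos = -1.0, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y+th, x:x+tw].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = (w * t).mean()  # normalized cross correlation
            if score > best:
                best, pos = score, (y, x)
    return pos, best  # best location and its NCC score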
Abstract: A new method for quantitative phase analysis is proposed using the X-ray diffraction multi-peak match intensity ratio. The method obtains the multi-peak match intensity ratio among the phases in a mixture sample by using all diffraction peak data in the sample's X-ray diffraction spectrum, combined with the relative intensity distribution of each phase's standard peaks in the JCPDS card, in a least-squares regression analysis. Replacing the usually adopted single-line ratio with the multi-peak match intensity ratio improves the precision of X-ray diffraction quantitative phase analysis of mixture samples. By analyzing four groups of mixture samples, using the multi-peak match intensity ratio together with the quantitative phase analysis principle combining the adiabatic and matrix flushing methods, the experimental results are shown to agree with theory.
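The abstract does not spell out the regression, so the following is a hedged formulation under the standard linear intensity model: with I_{ij} the measured intensity of peak i of phase j and r_i^{(j)} the relative intensity of that peak from the JCPDS card, a per-phase scale k_j is fitted to all of the phase's peaks at once.

% Hedged least-squares sketch; the model I_{ij} = k_j r_i^{(j)} and the
% closed form below are textbook assumptions, not quoted from the paper.
\min_{k_j}\; \sum_i \bigl( I_{ij} - k_j\, r_i^{(j)} \bigr)^2
\quad\Longrightarrow\quad
k_j = \frac{\sum_i r_i^{(j)}\, I_{ij}}{\sum_i \bigl(r_i^{(j)}\bigr)^2}

The ratios k_j / k_{j'} then serve as multi-peak match intensity ratios; because every peak contributes, a single noisy or overlapped line perturbs the ratio far less than in the single-line method.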
Funding: Support from the US National Science Foundation under Award 1645237.
Abstract:
Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases, and variations in the name spelling of individual scholars further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial of their first name. We illustrate our approach using three case studies.

Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net, i.e., a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for 'John Doe' would assume the form 'Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point: if 'Doe, J' and 'Doe, John' share the same author identifier, this is sufficient for us to conclude they are one and the same individual. We find email addresses similarly adequate: if two author names sharing the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if 'Doe, John' and 'Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an affiliation may employ two or more faculty members sharing the same surname and first initial. Similarly, it is conceivable that two individuals with the same surname and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes? It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we find most efficient for this study. The script we developed is designed to implement our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that the manual application of our procedure takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial that share commonalities in the WOS field on which the user was prompted to consolidate author names. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to).

Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is not as significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is (sometimes, but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary.

Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist.

Originality/value: Once again, the procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting with more recent approaches, harnessing the benefits of both.

Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each case study, and we find that match field effectiveness is in large part a function of field coverage. The original dataset size and timeframe analyzed are not the same for each case study, nor are the subject areas in which the authors publish. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in more specific match fields, as well as a more modest and manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable; the procedure advanced herein is practical, replicable, and relatively user friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which is not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
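The core consolidation step lends itself to a short sketch. The following is our hedged reconstruction, not the VantagePoint script itself: starting from a known true-positive primary record, absorb any name variant that shares a value in the chosen match field, and iterate so newly absorbed records contribute their values too. The record format is an assumption.

# Hedged sketch of field-based name consolidation.
def consolidate(records, primary_name, field):
    # records: dicts like {"name": "Doe, J", "email": {...}}, where each
    # match field maps to a set of values for that name variant.
    keep_names = {primary_name}
    keep_values = set()
    for r in records:
        if r["name"] in keep_names:
            keep_values |= r[field]
    changed = True
    while changed:  # repeat until no new variant is absorbed
        changed = False
        for r in records:
            if r["name"] not in keep_names and r[field] & keep_values:
                keep_names.add(r["name"])
                keep_values |= r[field]
                changed = True
    return keep_names  # name variants judged to be the same person

recs = [{"name": "Doe, John", "email": {"jdoe@uni.edu"}},
        {"name": "Doe, J", "email": {"jdoe@uni.edu", "jd@lab.org"}},
        {"name": "Doe, Jane", "email": {"jane.doe@corp.com"}}]
print(consolidate(recs, "Doe, John", "email"))  # {'Doe, John', 'Doe, J'}

Running the same step once per match field (emails, co-authors, source titles, and so on) produces the multiple rounds of reduction described above.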
Funding: National Natural Science Foundation of China (50608024 and 50538050).
Abstract: The theory of the perfectly matched layer (PML) artificial boundary condition (ABC), which is characterized by the absorption of wave motions of arbitrary frequency and arbitrary incident angle, is introduced. The construction of the PML boundary based on the elastodynamic partial differential equation (PDE) system is developed. Combined with a velocity-stress hybrid finite element formulation, the applicability of the PML boundary is investigated and its numerical reflection is estimated. The reflectivity of the PML and multi-transmitting formula (MTF) boundaries is then compared based on body wave and surface wave simulations. The results show that although the PML boundary yields some reflection, its absorption performance is superior to the MTF boundary in numerical simulations of near-fault wave propagation, especially in corner and large-angle grazing incidence situations. The PML boundary does not give rise to any instability, and its stability is better than that of the MTF boundary in the hybrid finite element method. For a specified problem and analysis tolerance, the computational efficiency of the PML boundary is only slightly lower than that of the MTF boundary.
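The abstract omits the construction itself; the standard complex coordinate stretch on which frequency-domain PML formulations are commonly built is, for the x direction (a textbook form, not necessarily the paper's exact elastodynamic formulation):

% Hedged sketch: complex coordinate stretching behind PML. Derivatives
% are rescaled so outgoing waves decay exponentially inside the layer
% with no reflection at the interface, for any frequency and angle.
\frac{\partial}{\partial x} \;\longrightarrow\;
\frac{1}{s_x(x)}\,\frac{\partial}{\partial x},
\qquad
s_x(x) = 1 + \frac{\sigma_x(x)}{\mathrm{i}\,\omega}

Here \sigma_x \ge 0 is an attenuation profile that vanishes in the interior domain and grows inside the layer; the frequency- and angle-independence of the continuous construction is what the abstract refers to as absorbing wave motions of arbitrary frequency and incident angle.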
Abstract: PatchMatch-based multi-view stereo (MVS) methods estimate scene depth from multiple input images and have been applied to large-scale 3D scene reconstruction. However, because feature matching is unstable and relying on photometric consistency alone is unreliable, existing methods achieve low accuracy and completeness in depth estimation for weakly textured regions. To address this problem, a quadtree-prior-assisted MVS method is proposed. First, local texture is obtained from image pixel values. Second, a coarse depth map is obtained with the adaptive checkerboard sampling-based PatchMatch multi-view stereo method (ACMH), and quadtree segmentation, combined with structural information in weakly textured regions, is used to generate prior plane hypotheses. Third, fusing the above information, a new multi-view matching cost function is designed to guide weakly textured regions toward optimal depth hypotheses, thereby improving the accuracy of stereo matching. Finally, comparison experiments with several existing traditional MVS methods were conducted on the ETH3D and Tanks and Temples benchmarks and on a Chinese Academy of Sciences ancient-architecture dataset. The results show that the proposed method performs better; in particular, on the ETH3D test set with an error threshold of 2 cm, its F1 score and completeness improve by 1.29 and 2.38 percentage points, respectively, over the state-of-the-art multi-scale plane-prior-assisted method (ACMMP).
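The quadtree step can be sketched compactly. The following is a hedged illustration, not the paper's implementation: a region is split recursively until its intensity variation falls below a threshold, so large leaves mark weakly textured areas in which a single plane hypothesis can act as a depth prior. Threshold values and names are our own.

# Hedged sketch: variance-driven quadtree over a grayscale image.
import numpy as np

def quadtree_leaves(gray, y, x, h, w, tex_thresh=5.0, min_size=8):
    block = gray[y:y+h, x:x+w]
    if h <= min_size or w <= min_size or block.std() < tex_thresh:
        return [(y, x, h, w)]  # leaf; low texture -> plane-prior region
    h2, w2 = h // 2, w // 2
    leaves = []
    for dy, dx, hh, ww in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
        leaves += quadtree_leaves(gray, y + dy, x + dx, hh, ww,
                                  tex_thresh, min_size)
    return leaves

gray = (np.random.rand(64, 64) * 255).astype(np.float32)
print(len(quadtree_leaves(gray, 0, 0, 64, 64)))  # leaf count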