Abstract: The stability analysis of an abandoned underground gypsum mine requires the determination of the mine pillars' strength. This is especially important for flooded abandoned mines, where the gypsum pillars become saturated and are subjected to dissolution after flooding. Further, mine pillars are subjected to blast vibrations that generate some level of macro- and micro-fracturing. Test samples of gypsum must, therefore, simulate these conditions as closely as possible. In this research, the strength of gypsum is investigated in an as-received saturated condition using uniaxial compressive strength (UCS), Brazilian tensile strength (BTS) and point load index (PLI) tests. The scale effect was investigated, and new correlations were derived to describe the effect of sample size on both UCS and BTS under dry and saturated conditions. The effects of blasting on these parameters were observed, and the importance of choosing proper samples is discussed. Finally, correlations were derived for both compressive and tensile strengths under dry and saturated conditions from the PLI test results, which are commonly used as a simple substitute for the indirect determination of UCS and BTS.
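PLI-to-strength correlations of the kind this abstract describes are typically linear in the size-corrected point load index Is(50). The sketch below is illustrative only: the conversion factor `k=22.0` is a commonly quoted generic value, not the gypsum-specific (dry vs. saturated) correlation the paper derives.

```python
# Illustrative generic conversion UCS = k * Is(50). The factor k is a
# placeholder assumption; rock-specific correlations (such as those the
# paper derives for dry and saturated gypsum) replace it in practice.

def ucs_from_pli(is50_mpa: float, k: float = 22.0) -> float:
    """Estimate UCS (MPa) from the size-corrected point load index Is(50) in MPa."""
    return k * is50_mpa

print(ucs_from_pli(1.5))  # 1.5 MPa point load index -> 33.0 MPa estimated UCS
```

The same linear form, with a much smaller factor, is often used for the BTS correlation; only the fitted coefficient changes.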
Abstract: Unusually high levels of dieback have recently been reported in sugar maple, Acer saccharum Marsh., in Upper Michigan, and a network of plots was established to determine the extent of, and factors associated with, the dieback. A possible contributor to this dieback is sapstreak disease caused by Ceratocystis virescens (Davidson) Moreau. Unhealthy trees with considerable crown dieback were evaluated across the western Upper Peninsula, MI to determine the prevalence of the sapstreak fungus using a minimally destructive sampling technique. Approximately 8% of 90 trees sampled were sapstreak positive, and approximately 10% of trees were positive at one site that had recently been harvested. While the high levels of maple dieback present in these forests appear not to be directly caused by widespread sapstreak disease, the occurrence of sapstreak may be significantly impacting trees at some locations. However, even when present on a low number of trees, the interaction of sapstreak with decay rates from other fungi could be important for future tree mortality and value to the forest industry. Therefore, the effect of two sapstreak fungal isolates on the amount of decay caused by two common maple white rot fungi, Trametes versicolor (L.:Fr.) Pilát and Irpex lacteus (Fr.:Fr.) Fr., was tested in the laboratory. Sugar maple wood blocks were precolonized by two native isolates of C. virescens, followed by inoculation and incubation with the decay fungi. Mean percent weight loss of blocks by white rot decay fungi ranged from 39% to 55%, but decay rates were not significantly affected by the presence of the sapstreak fungus.
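The decay assay's key metric, percent weight loss of a wood block over incubation, is a simple ratio; a minimal sketch (the 20.0 g / 10.4 g figures below are made-up illustrative masses, not data from the study):

```python
def percent_weight_loss(initial_g: float, final_g: float) -> float:
    """Percent oven-dry weight loss of a wood block after decay incubation."""
    return 100.0 * (initial_g - final_g) / initial_g

# A hypothetical block going from 20.0 g to 10.4 g loses 48% of its mass,
# which falls inside the 39-55% range reported for the white rot fungi.
print(percent_weight_loss(20.0, 10.4))
```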
Abstract: Understanding how non-industrial private forest (NIPF) owners gain and share information regarding the management of their property is very important to policy makers, yet our knowledge of how, and to what degree, this information flows across privately owned landscapes is limited. The work described here seeks to address this shortfall. Widely administered surveys with close-ended questions may not adequately capture this information flow within NIPF owner communities. This study used open-ended questions in interviews of clusters of NIPF owners to determine whether and to what extent owners influence each other directly (through conversations or referrals to sources of advice) or indirectly (through observation of management). We obtained data from thirty-four telephone interviews with owners of NIPF properties in the Western Upper Peninsula of Michigan, and analyzed the data using open coding. Roughly half of the forest owners we interviewed were influenced either directly or indirectly by other members of their NIPF communities. Reasons for owning forests (such as privacy, hunting and nature recreation, and economics) also influenced owners' management behaviors and goals. This peer-to-peer flow of information (whether direct or indirect) has significant implications for how to distribute management and programmatic information throughout NIPF owner communities, and for how amenable these communities may be to cooperative or cross-boundary programs to achieve ecosystem- and landscape-scale goals.
Funding: National Natural Science Foundation of China, Grant/Award Number: 61976254; Natural Science Foundation of Fujian Province, Grant/Award Numbers: 2020J01707, 2020J01710.
Abstract: Rough set theory places great importance on approximation accuracy, which is used to gauge how well a rough set model describes a target concept. However, traditional approximation accuracy has limitations, since it varies with changes in the target concept and cannot evaluate the overall descriptive ability of a rough set model. To overcome this, two types of average approximation accuracy that objectively assess a rough set model's ability to approximate all information granules are proposed. The first is the relative average approximation accuracy, which is based on all sets in the universe and has several basic properties. The second is the absolute average approximation accuracy, which is based on undefinable sets and has yielded significant conclusions. We also explore the relationship between these two types of average approximation accuracy. Finally, the average approximation accuracy has practical applications in addressing missing attribute values in incomplete information tables.
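The classical accuracy being generalized here is Pawlak's ratio |lower approximation| / |upper approximation| of a target concept. The sketch below computes it for a toy universe and then averages it over every nonempty subset; this averaging is only one plausible illustration of a "relative average approximation accuracy", since the paper's exact definitions are not reproduced in the abstract.

```python
from itertools import chain, combinations

U = {1, 2, 3, 4}
# Equivalence classes (information granules) induced by some attribute set.
blocks = [{1, 2}, {3}, {4}]

def accuracy(target: set) -> float:
    """Pawlak approximation accuracy |lower| / |upper| of a target concept."""
    lower = {x for b in blocks if b <= target for x in b}   # blocks inside target
    upper = {x for b in blocks if b & target for x in b}    # blocks meeting target
    return len(lower) / len(upper) if upper else 1.0        # empty set is definable

# Averaging over every nonempty subset of U gives one illustrative reading of
# an average accuracy for the whole model, independent of any single concept.
subsets = chain.from_iterable(combinations(U, r) for r in range(1, len(U) + 1))
vals = [accuracy(set(s)) for s in subsets]
avg = sum(vals) / len(vals)
print(round(avg, 3))
```

Note how `accuracy` alone changes with the target (it is 1.0 for {1, 2} but 0.0 for {1}), while `avg` is a single number characterizing the partition itself, which is exactly the limitation the paper's average accuracies are designed to address.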
Funding: Supported by the NSF under Grant DMS-1818467 and the Simons Foundation under Grant 961585.
Abstract: In this paper, we develop bound-preserving discontinuous Galerkin (DG) methods for chemical reactive flows. There are several difficulties in constructing suitable numerical schemes. First of all, the density and internal energy are positive, and the mass fraction of each species is between 0 and 1. Second, due to the rapid reaction rate, the system may contain stiff sources, and the strong-stability-preserving explicit Runge-Kutta method may result in limited time-step sizes. To obtain physically relevant numerical approximations, we apply the bound-preserving technique to the DG methods. Though traditional positivity-preserving techniques can successfully yield positive density, internal energy, and mass fractions, they may not enforce the upper bound 1 of the mass fractions. To solve this problem, we need to (i) make sure the numerical fluxes in the equations of the mass fractions are consistent with that in the equation of the density; and (ii) choose conservative time integrations, such that the summation of the mass fractions is preserved. With the above two conditions, the positive mass fractions sum to 1, and then they are all between 0 and 1. For the time discretization, we apply the modified Runge-Kutta/multi-step Patankar methods, which are explicit for the flux while implicit for the source. Such methods can handle stiff sources with relatively large time steps, preserve the positivity of the target variables, and keep the summation of the mass fractions equal to 1. Finally, it is not straightforward to combine the bound-preserving DG methods and the Patankar time integrations. The positivity-preserving technique for DG methods requires positive numerical approximations at the cell interfaces, while Patankar methods can keep the positivity of the pre-selected point values of the target variables. To match the degrees of freedom, we use polynomials on rectangular meshes for problems in two space dimensions. To evolve in time, we first read the polynomials at the Gaussian points. Then, suitable slope limiters can be applied to enforce the positivity of the solutions at those points, which can be preserved by the Patankar methods, leading to positive updated numerical cell averages. In addition, we use another slope limiter to obtain positive solutions used for the bound-preserving technique for the flux. Numerical examples are given to demonstrate the good performance of the proposed schemes.
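The Patankar idea the abstract leans on can be seen in its simplest form on a single stiff production-destruction pair. The sketch below is a standard modified Patankar-Euler step, not the paper's full modified Runge-Kutta/multi-step scheme or its DG coupling: weighting the destruction term by y_new / y_old makes the step implicit in the source, unconditionally positive, and exactly conservative.

```python
# Modified Patankar-Euler for the fast reaction y1 -> y2:
#   y1' = -k*y1,   y2' = +k*y1   (sum y1 + y2 is conserved).
# Destruction of y1 is weighted by y1_new / y1_old, which here reduces to a
# scalar implicit solve; production into y2 reuses the same weighted term,
# so positivity and the conserved sum hold for ANY time step dt.

def mp_euler_step(y1: float, y2: float, k: float, dt: float):
    y1_new = y1 / (1.0 + dt * k)   # solves y1_new = y1 - dt*k*y1_new
    y2_new = y2 + dt * k * y1_new  # production weighted consistently
    return y1_new, y2_new

y1, y2 = 1.0, 0.0
for _ in range(10):
    y1, y2 = mp_euler_step(y1, y2, k=1.0e6, dt=0.1)  # very stiff, large dt
assert y1 > 0.0 and y2 > 0.0 and abs((y1 + y2) - 1.0) < 1e-12
print(y1, y2)
```

An explicit Euler step with this k and dt would overshoot to a large negative y1 immediately; the Patankar weighting is what buys the large stable time step the abstract refers to.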
Funding: National College Students' Training Programs of Innovation and Entrepreneurship, Grant/Award Number: S202210022060; the CACMS Innovation Fund, Grant/Award Number: CI2021A00512; the National Natural Science Foundation of China, Grant/Award Number: 62206021.
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features, because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on the entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
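The multi-headed self-attention step used for fusing modality features follows the standard scaled dot-product pattern. The sketch below is a generic NumPy illustration of that pattern over a small set of per-modality feature vectors, not MCRJI's actual architecture; the random projection weights stand in for learned parameters.

```python
import numpy as np

def multi_head_self_attention(X, num_heads, rng):
    """Minimal multi-head self-attention over rows of X (n_features, d_model).

    Weights are random here purely for illustration; in a trained model such
    as MCRJI they would be learned parameters.
    """
    n, d = X.shape
    assert d % num_heads == 0
    dh = d // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(dh)                      # scaled dot-product
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)             # row-wise softmax
        heads.append(attn @ V)                              # weighted values
    return np.concatenate(heads, axis=1)                    # back to (n, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8))  # e.g. three modality feature rows of an entity
out = multi_head_self_attention(X, num_heads=2, rng=rng)
print(out.shape)  # (3, 8)
```

Each head lets every modality row attend to all others with its own learned projection, and the concatenated output is the converged multi-media representation that downstream inference would consume.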
Funding: The first author appreciates the financial support from Hunan Provincial Expressway Group Co., Ltd. and the Hunan Department of Transportation (No. 202152) in China. The first author also appreciates the funding support from the National Natural Science Foundation of China (No. 51778038) and the Beijing high-level overseas talents in China. Any opinions, findings, and conclusions expressed in this paper are those of the authors and do not necessarily represent the views of any organization.
Abstract: Due to the rapid advancement of the transportation industry and the continual increase in pavement infrastructure, it is difficult to keep up with the huge road maintenance task by relying only on traditional manual detection methods. With the gradual development of computer vision technology, intelligent pavement detection based on deep learning techniques has become available to both research and industry. Due to the varied characteristics of pavement distress and the uncertainty of the external environment, this kind of object detection technology for distress classification and localization still faces great challenges. This paper discusses the development of object detection technology and analyzes classical convolutional neural network (CNN) architectures. In addition to the one-stage and two-stage object detection frameworks, anchor-free object detection is introduced; the frameworks are divided according to whether an anchor box is used or not. This paper also introduces attention mechanisms based on convolutional neural networks and emphasizes how these mechanisms further enhance the accuracy of object recognition. Lightweight network architectures are introduced for mobile and industrial deployment. Since stereo cameras and sensors are developing rapidly, a detailed summary of three-dimensional object detection algorithms is also provided. While reviewing the history of the development of object detection, this review is not limited to the area of pavement crack detection; guidance for researchers in related fields is also shared.
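Whether a detector is anchor-based or anchor-free, its predictions are matched and scored against ground-truth boxes via intersection-over-union (IoU). A minimal sketch of that shared building block, using the common (x1, y1, x2, y2) corner convention:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width (0 if disjoint)
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 corner: intersection 1, union 4+4-1=7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ~0.143
```

Anchor-based detectors threshold this value to assign anchors to ground truth, while anchor-free detectors typically use it (or variants such as GIoU) directly in the regression loss and in non-maximum suppression.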
Funding: The Science and Technology Commission of Shanghai Municipality (No. 08390513800) and the Leading Academic Discipline Project of the Shanghai Municipal Education Commission (No. J50401).